Tom Bradicich, PhD
GM/VP Servers and IoT Systems
Hewlett Packard Enterprise
Dr. Tom Bradicich is VP/GM Servers/IoT Systems at Hewlett Packard Enterprise (HPE), where he leads the global business unit for dense scalable servers and IoT systems, with P&L ownership, product development, and customer experience worldwide. He and his team also direct the HPE Discovery Labs for partner and customer collaborations on systems and solutions. Tom’s systems received an InfoWorld 2015 Technology of the Year Award, ARM TechCon Best of Show, and a CRN 2015 Product of the Year Award.
He was previously VP Server Engineering at HP, responsible for global R&D and delivery of the workload-optimized, converged server product line. Tom and his staff directed several worldwide engineering teams, releasing over 20 products and integrated solutions stacks. They pioneered the first Intel Xeon™ server with on-chip integrated graphics and the first 64-bit enterprise ARM server.
He has also been an R&D Fellow and Corporate Officer at National Instruments (NI) where he led teams developing end-to-end solutions based on data acquisition and analysis systems and IT infrastructures for the test, measurement, and control industries. Before joining NI, Tom was an IBM Fellow, Vice President of Systems Technology, Distinguished Engineer, Engineering Director, and CTO for IBM’s x86 server and converged systems lines. He managed the architectural design of scale-up x86 SMP servers, and received the IBM Chairman’s Award for his role in pioneering enterprise blade servers. Tom was elected to the IBM Academy of Technology and co-founded several technical industry standards groups, including PCI SIG, DMTF SMASH, The Green Grid, and Blade.org. He holds several patents in PC and server design. He earned a BSEE from Florida Atlantic University, an MS from North Carolina State University, and a PhD from the University of Florida. He has served on several university adjunct faculties.
Abstract: Managing Enormous Data from the Internet of Things
How the Internet of Things (IoT) will play out is still uncertain, but we do know the "things" will produce huge amounts of data. Since the amounts will be more than anyone can manage easily, we must look at how to process and store it in new ways. Workload-optimized, scalable, industry-standard servers will play a key role in end-to-end IoT solutions, in which analytics on the data can be executed in several compute domains. The systems must be scaled-in or scaled-out to meet the demands of many diverse applications, and render new business, engineering, and scientific insight from the data. To achieve desired business results, compute solutions at the IoT edge must reach new levels in performance, cost, durability, and energy consumption, all coupled with open augmentable software from both vendors and the open source community.
General Manager/Cloud Hardware Engineering
Kushagra Vaid is the General Manager for Server Engineering in Microsoft’s Cloud and Enterprise division. He is responsible for driving hardware R&D, engineering designs, deployments, and support for Microsoft’s cloud scale services, such as Bing, Azure, and Office 365, across global datacenters. The engineering effort also includes development for the Open CloudServer (OCS) specification that Microsoft contributed to the Open Compute Project (OCP).
Kushagra joined Microsoft in 2007 as Principal Architect working on improving performance/power ratios for large cloud services, and contributed to the development of standardized energy efficiency benchmarks at the SPEC and TPC industry forums. He was also responsible for driving Microsoft’s datacenter hardware strategy and cloud optimized server designs, working closely with the broader hardware industry. Before joining Microsoft, Kushagra was a Principal Engineer at Intel where he drove the technology direction for Xeon microprocessors and platforms. He started his career as a CPU design engineer working on enterprise class CPUs and platforms.
Kushagra has presented several papers at international research conferences, and holds over 25 patents in computer architecture and datacenter design. He has been a featured speaker at industry conferences on cloud services, hardware engineering, and datacenter architecture. He has an MS in Computer Science from SUNY Binghamton and a BE in Computer Engineering from the University of Mumbai (India).
Abstract: Innovating in the Cloud at Hyperscale
Large cloud operators must build out the most advanced infrastructures to deliver great services to customers and to compete in the marketplace. So they must constantly look for ways to innovate, while at the same time optimizing for cost and performance. Rapid changes in cloud workloads have led to new computing models involving diverse system architectures. For example, cloud servers may need to use FPGAs to accelerate encryption and flow control of network traffic. This talk will cover the latest server innovations aimed at large scale cloud computing and will provide insight into the future of cloud infrastructures.
IT Brand Pulse
A 30-year veteran of the IT industry including senior executive positions with QLogic and Quantum, Frank founded IT Brand Pulse and leads the Business Development practice. The “Biz Dev” team helps clients create demand for their products and services through PR, advertising, trade shows, on-line events, channel programs and IT education. Frank also contributes industry analysis to leading IT publications and as a keynote speaker.
Abstract: 2016 Open Server Innovation Leader Awards
The IT Brand Pulse awards are symbols of brand leadership, covering hundreds of IT categories each year – from servers, storage, and networking to cloud, software, and other broad IT market segments.
Winners are chosen by IT professionals from large enterprise, medium enterprise, and HPC environments, who vote in surveys conducted as independent, non-sponsored research.
The surveys are designed to capture the pulse of brand leadership in different product categories. In each survey, IT pros are asked to vote for Market, Innovation, Performance, Reliability, Service & Support, and Price leaders from a randomized field of vendors, with the opportunity to write in an answer.
Senior Principal Engineer, Huawei
Chairperson, IEEE P802.3bs 400GbE Task Force and Ethernet Alliance
John D’Ambrosia is the single most widely recognized figure in the development of new Ethernet standards. In his role as a Senior Principal Engineer at Huawei, he leads the drive toward higher rates and other advances. Currently, he chairs the IEEE P802.3bs 400GbE Task Force, is a member of the IEEE 802 Executive Committee, and chairs the IEEE 802.3 Industry Connections Next Generation Enterprise / Data Center / Campus (ECDC) Ad Hoc, a forum for exploring new ideas for Ethernet standards. Previously, he chaired the IEEE 802.3ba Task Force that developed 40GbE and 100GbE. He is also the Chairman of the Ethernet Alliance, an organization dedicated to the promotion of all Ethernet technologies, and a popular blogger on Ethernet matters. In 2013 D’Ambrosia was awarded the IEEE-SA 2013 Standards Medallion and was inducted into the Light Reading Hall of Fame. He has previous experience with Dell, Force10 Networks, and Tyco Electronics.
Abstract: Building on Ethernet's New Diversity
Today’s Ethernet consists of a series of specifications that address diverse application spaces. Solutions range from operation over copper traces for a few millimeters to optical fibers that range up to 40 km. At the top end, higher signaling rates require new modulation methods, such as PAM4. New standards always require system cost and performance tradeoffs, as well as ensuring the availability of connectors, board materials, components, cabling, and other parts. System designers will face new challenges in employing the latest series of specifications, but they will also find an arsenal of tools for the development of the next generation of Ethernet equipment.
Having so many interfaces to support is a mixed blessing. There are now many interoperability issues, as well as a need for varied test methods, test equipment, and associated hardware and software. However, designers will also find they have the right tools for both traditional and emerging applications, as well as ones not yet even imagined.
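As a quick illustration of why higher signaling rates push designers toward modulation methods like PAM4 (a sketch for context, not material from the talk): PAM4 carries 2 bits per symbol across 4 amplitude levels, doubling the bit rate of NRZ signaling at the same symbol rate. The Gray-coded level mapping below is one common convention, shown here only as an assumption for illustration.

```python
# PAM4 encodes 2 bits per symbol using 4 amplitude levels, versus
# NRZ's 1 bit per symbol over 2 levels. Gray coding (adjacent levels
# differ by one bit) limits errors from level misreads to a single bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit sequence to a list of PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# 8 bits fit into 4 symbols -- twice the bits NRZ would carry
# in the same number of symbol periods.
symbols = pam4_encode([0, 0, 1, 1, 1, 0, 0, 1])
print(symbols)  # [-3, 1, 3, -1]
```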
Executive Director/Distinguished Engineer
Greg Pruett is Executive Director/Distinguished Engineer/Chief Architect Enterprise Systems for the Lenovo Enterprise Business Group (EBG). He also leads the enterprise Strategic Technologies Innovation Center, where he focuses on creating innovation that truly matters to customers. As Chief Architect, he has overall responsibility for the design of Lenovo enterprise systems and software, including both traditional datacenter and new software-defined strategies. He is a Master Inventor with 48 issued or pending patents in such areas as systems management, blade server technology, and automated provisioning.
Before joining Lenovo, he was Chief Architect at IBM, where he led the architecture and development of software and firmware for PureFlex, IBM’s first converged system. He worked at IBM for over 18 years. He is the author of many technical publications and conference presentations and a member of the DMTF Board of Directors. Greg holds a Master’s degree in computer science from the University of North Carolina at Chapel Hill and a Bachelor’s degree with majors in mathematics and computer science from Furman University.
Abstract: Up in the Air: Making an Effective Transition to Cloud Computing
Enterprise data centers are rapidly discarding their traditional role as providers of on-premises computing. They are now moving toward being largely users of private and public clouds rather than of their own facilities. The transition requires a change from today’s largely fragmented management processes to well-orchestrated, software-defined approaches that optimize distributed resources to meet the needs of workloads and applications. Hyperconverged infrastructure, software-defined networking and storage, and unified service management, plus a resurgence in the use of bare-metal facilities, are the leading server-based solutions datacenters are adopting to address the imminent challenge.
As editorial director of TechTarget's Storage media group, Rich oversees content for Storage magazine, SearchStorage.com, SearchDataBackup.com, SearchDisasterRecovery.com, SearchVirtualStorage.com, SearchCloudStorage.com, SearchSMBStorage.com and the Storage Decisions conferences. Rich has been involved with high-tech journalism for nearly 20 years; previously, he was executive editor of ZDNet Tech Update and CNET Enterprise; editor in chief of Windows Systems magazine; senior editor for Windows magazine; and senior editor and technical editor for PC Sources. In those roles, and as a freelancer, Rich has written more than 500 computer technology articles.
Abstract: TechTarget Server/Storage/Networking Websites
TechTarget storage websites are the best online information resource for news, tips and expert advice for the storage, backup and disaster recovery markets. Topics include SSD, backup, storage for virtual servers and cloud storage. TechTarget networking websites cover routing and switching, network security and management, application performance and delivery, VoIP, unified communications and collaboration, wireless LANs, Software Defined Networking, Wide Area Networks and mobility.
Vice President of Marketing
Kevin Deierling is Mellanox's Vice President of Marketing. He has previous experience as Chief Architect at Silver Spring Networks (a startup involved in networking electric meters), VP Marketing and Business Development at SpansLogic (a startup involved in network acceleration), and VP Product Marketing at Mellanox. Deierling has managed teams developing core strategies in networking, cloud, SDN, big data, virtualization, and storage. He has contributed to multiple technology standards through organizations such as the InfiniBand Trade Association and the PCI Industrial Computer Manufacturers Group (PICMG). He holds over 25 patents in areas such as networking, wireless, security, and video compression, and was a contributing author of a text on BiCMOS design. Deierling holds a BA in Solid State Physics from UC Berkeley.
Abstract: Increasing System Performance by Speeding up Server-to-Server Storage Transactions
Distributed systems involving many servers often spend much of their time doing transfers from one server’s storage to another. The transfers may involve data to be worked on, virtual server images, operating system services, or chains of tasks to be executed. Such transfers can be quite slow, particularly if they require processor interaction. Remote DMA (RDMA) over a fabric (such as InfiniBand or Ethernet) allows for direct transfers from one server’s persistent memory to another’s, providing both high speed and low latency. This approach also protects critical data from power failures without compromising overall performance or tying up memory buses.
VP/GM Server Processor Group
Gopal Hegde is the VP/GM of the Data Center Processor Group at Cavium, responsible for the ThunderX line of processors. He has over 22 years of experience driving business, technology, and product innovations in silicon, software, and systems. Gopal joined Cavium from Calxeda, where he was COO, revamped the solutions team, and moved the company to an ODM-driven, workload-specific model for delivering ARM-based servers. Before Calxeda, Gopal was Senior Director of Engineering at Cisco, responsible for Cisco's UCS platforms, and earlier worked for Adaptec and Intel. At Intel, he served as GM of the Ethernet switching division in the Intel Communications Group. Later, as Chief Architect of I/O for Intel Server Platforms, he led the development of technologies enabling integration of PCIe into Sandy Bridge CPUs, I/O virtualization for servers, Data Center Bridging (DCB), Backplane Ethernet, and Fibre Channel over Ethernet (FCoE). Gopal holds an MSECE from the University of Massachusetts Amherst and an ME from the Indian Institute of Science, Bangalore, India.
Abstract: How Workload Optimized SoC Processors Can Revolutionize Cloud Computing
Cloud applications differ greatly from traditional enterprise IT and require a different approach to computing. They demand a high level of scalability, compute power, and memory bandwidth/latency, as well as the ability to support specific workload needs such as high-performance networking, the latest security methods, application acceleration, high-speed I/O, and massive amounts of storage. New workload-optimized SoC processors designed specifically to meet all these needs deliver improved performance while reducing system cost and power consumption. Such SoCs designed for the cloud thus make cloud computing the logical approach for future enterprise needs.
GM, Datacenter/Cloud USA
Dolly Wu is the GM of Inspur Systems’ Datacenter/Cloud Division, where she is in charge of opening the US and Canadian markets for the company. She has over 20 years’ experience in the high technology industry working with data centers, CSPs, direct end user customers (in financial services, Internet/cloud, life sciences, and media/entertainment), VARs, system integrators, OEMs, and government and education customers. Her focus is currently on the big data, cloud computing, HPC, and storage market segments. She has previous experience with Synnex/Hyve Solutions, Newisys Data Storage (a Sanmina company), Supermicro, and Everex. She holds a BS in Business Administration from the University of California at Berkeley.
Abstract: Today’s Open Platforms for Hyperscale Datacenters
Many open hardware platforms are currently available, including OCP (Facebook implementation), Scorpio Project Designs (Open Datacenter Designs for China), and Intel (Rack Scale Architecture). Each has advocates among hyperscale datacenters that have deployed them in volume. Cloud designers must be aware of the differences among them and must understand which is most suitable for specific types of private cloud, public cloud, and hybrid cloud implementations.
Vice President, Advanced Storage
Rob Peglar is Vice President, Advanced Storage at Micron Technology, and a member of the Storage Business Unit’s five-person global leadership team. The unit, with over $3 billion in annual revenue, provides solid state drive (SSD) and other non-volatile memory products such as 3D NAND and 3D XPoint. Rob leads a team directing partner and customer-facing collaborations for future designs of advanced storage systems and data-intensive-computing solutions. His team is focused on using non-volatile memory technology to solve difficult problems in machine learning, data analytics, scale-out computing design, and several other key areas. Rob is a 39-year veteran of the storage industry, published author, and frequent industry speaker at leading storage and cloud-related seminars and conferences worldwide.
He was previously CTO Americas for EMC Isilon, responsible for customer-facing scale-out NAS technology requirements, designs, and implementations. Rob’s team directed hundreds of strategic customer engagements, spanning multiple product releases and integrated solutions stacks. He focused on solutions for big data/analytics, media and entertainment, life sciences, EDA, oil and gas, financial services, and other vertical market workloads. His team pioneered the first customer deployments of the Hadoop Filesystem (HDFS) embedded in a scale-out NAS platform.
He has also been a Senior Fellow and VP Technology at Xiotech, principal storage architect for StorageTek, and Manager of UNIX Development for ETA Systems. Rob is a member of the Board of Directors of the Storage Networking Industry Association (SNIA) and a member of the Program Executive Committee for the Flash Memory Summit. He is a three-time member of the EMC Elect (2014-2016) and was selected for the CRN Storage Superstars Award in 2010.
Rob holds a BS in computer science from Washington University (St. Louis) and did graduate work there.
Abstract: Speed Up Your Applications with 3D XPoint Technology
An obvious problem in application design for big data is that storage is too far away from the processor. Hard drives take so long to access (up to 100,000 times longer than DRAM) that the whole system slows down. Designers must expend a lot of effort either avoiding storage accesses or trying to mask them with other activity. The new 3D XPoint technology (jointly developed by Intel and Micron) brings storage much closer. It can reduce the difference to as little as 80 times today and 20 times in the near future. The result is to create new approaches for system architects and to enable entirely new applications involving enormous data sets and real-time analysis. Areas of interest include the Internet-of-Things (IoT), genome mapping, and virtual reality.
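The latency ratios quoted in the abstract lend themselves to a quick back-of-the-envelope calculation (a hypothetical sketch using only the abstract's figures; the DRAM-relative ratios are the only inputs):

```python
# Access-time ratios relative to DRAM, taken from the abstract:
DISK_RATIO = 100_000     # hard drive: up to ~100,000x DRAM latency
XPOINT_TODAY = 80        # 3D XPoint today: ~80x
XPOINT_FUTURE = 20       # 3D XPoint, near future: ~20x

def speedup_vs_disk(ratio_vs_dram):
    """How many times faster than a hard drive a storage medium is,
    given its access-time ratio relative to DRAM."""
    return DISK_RATIO / ratio_vs_dram

print(speedup_vs_disk(XPOINT_TODAY))   # 1250.0 -> ~1,250x faster than disk
print(speedup_vs_disk(XPOINT_FUTURE))  # 5000.0 -> ~5,000x faster than disk
```

With storage accesses thousands of times cheaper than a disk seek, much of the effort now spent avoiding or masking storage I/O could instead go into the new data-intensive applications the abstract describes.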
Principal Architect, Azure Networking
Brad Booth is a long-time leader in Ethernet technology development and standardization. Currently heading up the 25/50G Ethernet Consortium and the Consortium for On-Board Optics, he is a Principal Engineer at Microsoft, where he leads the development of hyper-scale interconnect strategy for Microsoft’s cloud datacenters. He is also the founder and past Chairman of the Ethernet Alliance. Brad was previously a Distinguished Engineer in the Office of the CTO at Dell Networking, where he developed Dell's next generation server-storage-networking fabric strategy. He has also held senior strategist and engineering positions at Applied Micro, Intel, and PMC-Sierra. The holder of 16 patents related to networking technologies, he has received awards from the IEEE Standards Association for work on Ethernet standards and awards for his contributions to Gigabit Ethernet, 10 Gigabit Ethernet and Ethernet in the First Mile. He was listed as one of the 50 most powerful people in networking by Network World magazine.
Abstract: Is It Time for Optics to the Server?
At 10 Gb/s and above, electrical signals cannot travel beyond the box unless designers use expensive, low-loss materials. Optical signaling behaves better but requires costly cables and connectors. For 25 Gb/s technology, datacenters have been able to stick with electrical signaling and copper cabling by keeping the servers and the first switch together in the rack. However, for the newly proposed 50 Gb/s technology, this solution is likely to be insufficient. The tradeoffs are complex here since most datacenters don’t want to use fiber optics to the servers. The winning strategy will depend on bandwidth demands, datacenter size, equipment availability, and user experience.