Deep learning and GPU computing are now being deployed across many industries, helping to solve big data problems ranging from computer vision and natural language processing to self-driving cars. At the heart of these solutions is the NVIDIA GPU, providing the computing power both to train massive deep neural networks and to run inference on them efficiently. But how did the GPU get to this point?
In this talk I will present a personal perspective, and some lessons learned, from the GPU's journey and evolution: from being the heart of the PC gaming platform to today also powering the world's largest datacenters and supercomputers.
Slides:
Download the slides for this presentation in PDF format.
About the speaker:
Stuart Oberman is Vice President of GPU ASIC Engineering at NVIDIA. Since
2002, he has contributed to the design and verification of seven GPU
architectures. He currently directs multiple GPU design and verification
teams. He previously worked at AMD, where he was an architect of the
3DNow! multimedia instruction set and the Athlon floating-point unit. Stuart earned the BS degree in electrical engineering from the University of Iowa, and the MS and PhD degrees in electrical engineering from Stanford University, where he performed research in the Stanford Architecture and Arithmetic Group. He has coauthored one book and more than 20 technical papers. He holds more than 55 granted US patents.
Contact information:
Stuart Oberman