Stanford EE Computer Systems Colloquium

4:30 PM, Wednesday, April 27, 2016
NEC Auditorium, Gates Computer Science Building Room B3
http://ee380.stanford.edu

Can the brain do back-propagation?

Geoffrey Hinton
Google & University of Toronto
About the talk:

Deep learning has been very successful for a variety of difficult perceptual tasks. This suggests that the sensory pathways in the brain might also be using back-propagation to ensure that lower cortical areas compute features that are useful to higher cortical areas. Neuroscientists have not taken this possibility seriously because there are so many obvious objections: Neurons do not communicate real numbers; the output of a neuron cannot represent both a feature of the world and the derivative of a cost function with respect to the neuron's output; the feedback connections to lower cortical areas that are needed to communicate error derivatives do not have the same weights as the feedforward connections; the feedback connections do not even go to the neurons from which the feedforward connections originate; there is no obvious source of labelled data. I will describe joint work with Timothy Lillicrap on ways of overcoming these objections.
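The announcement does not spell out the methods, but one published idea from Lillicrap and colleagues directly addresses the weight-symmetry objection: error signals can be fed back through fixed random weights rather than the transpose of the forward weights, and learning still succeeds because the forward weights gradually come to align with the feedback weights. The sketch below is a minimal NumPy illustration of this "feedback alignment" idea on a toy regression problem; whether this particular technique is covered in the talk is an assumption, and all variable names are illustrative.

```python
import numpy as np

# Minimal sketch of "feedback alignment" (Lillicrap et al.): instead of
# back-propagating errors through W2.T (exact back-propagation), the hidden
# layer receives its error signal through a fixed random matrix B.
# Illustrative only; not necessarily the exact method presented in the talk.

rng = np.random.default_rng(0)

# Toy regression task: learn y = A x from random data.
n_in, n_hid, n_out, n_samples = 10, 20, 5, 512
A = rng.normal(size=(n_out, n_in))
X = rng.normal(size=(n_samples, n_in))
Y = X @ A.T

# Forward weights (learned) and fixed random feedback weights.
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))  # stands in for W2.T

lr = 0.01
for step in range(2000):
    # Forward pass with a tanh hidden layer.
    H = np.tanh(X @ W1.T)            # (n_samples, n_hid)
    Y_hat = H @ W2.T                 # (n_samples, n_out)
    E = Y_hat - Y                    # output error

    # Feedback alignment: the hidden error signal uses B, not W2.T.
    dH = (E @ B.T) * (1.0 - H**2)    # tanh derivative

    W2 -= lr * E.T @ H / n_samples
    W1 -= lr * dH.T @ X / n_samples

    if step % 500 == 0:
        print(f"step {step:4d}  mse {np.mean(E**2):.4f}")
```

Note that B never changes: the loss falls because the forward weights align with B over training, so the random feedback direction becomes a useful descent direction. This sidesteps the requirement that feedback connections carry the same weights as the feedforward ones.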

Slides:

No slides from this talk are available for download at this time.

About the speaker:

Geoffrey Hinton received his BA in Experimental Psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California, San Diego, and spent five years as a faculty member in the Computer Science department at Carnegie Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years, from 1998 until 2001, setting up the Gatsby Computational Neuroscience Unit at University College London, and then returned to the University of Toronto, where he is now an emeritus distinguished professor. From 2004 until 2013 he was the director of the program on "Neural Computation and Adaptive Perception", funded by the Canadian Institute for Advanced Research. Since 2013 he has been working half-time for Google in Mountain View and Toronto.

Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences and the National Academy of Engineering, and a former president of the Cognitive Science Society. He was awarded the first David E. Rumelhart Prize (2001), the IJCAI Award for Research Excellence (2005), the Killam Prize in Engineering (2012), the IEEE James Clerk Maxwell Gold Medal (2016), and the NSERC Herzberg Gold Medal (2010), which is Canada's top award in science and engineering.

Geoffrey Hinton designs machine learning algorithms. He was one of the researchers who introduced the back-propagation algorithm and the first to use back-propagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, and deep belief nets. His research group in Toronto made major breakthroughs in deep learning that have revolutionized speech recognition and object classification.

Contact information:

Geoffrey Hinton
Google and University of Toronto