Deep Neural Networks Are Helping Decipher How Brains Work

In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position, and other properties—something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

“I remember very distinctly the time when we found a neural network that actually solved the task,” he said. It was 2 am, a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. “I was really pumped,” he said.

It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years. But that wasn’t the main goal for Yamins and his colleagues. To them and other neuroscientists, this was a pivotal moment in the development of computational models for brain functions.

DiCarlo and Yamins, who now runs his own lab at Stanford University, are part of a coterie of neuroscientists using deep neural networks to make sense of the brain’s architecture. In particular, scientists have struggled to understand the reasons behind the specializations within the brain for various tasks. They have wondered not just why different parts of the brain do different things, but also why the differences can be so specific: Why, for example, does the brain have an area for recognizing objects in general but also for faces in particular? Deep neural networks are showing that such specializations may be the most efficient way to solve problems.

The computational neuroscientist Daniel Yamins, now at Stanford University, showed that a neural network processing the features of a scene hierarchically, much as the brain does, could match the performance of humans at recognizing objects. Photograph: Fontejon Photography/Wu Tsai Neurosciences Institute

Similarly, researchers have demonstrated that the deep networks most proficient at classifying speech, music, and simulated scents have architectures that seem to parallel the brain’s auditory and olfactory systems. Such parallels also show up in deep nets that can look at a 2D scene and infer the underlying properties of the 3D objects within it, which helps to explain how biological perception can be both fast and incredibly rich. All these results hint that the structures of living neural systems embody certain optimal solutions to the tasks they have taken on.

These successes are all the more unexpected given that neuroscientists have long been skeptical of comparisons between brains and deep neural networks, whose workings can be inscrutable. “Honestly, nobody in my lab was doing anything with deep nets [until recently],” said the MIT neuroscientist Nancy Kanwisher. “Now, most of them are training them routinely.”

Deep Nets and Vision

Artificial neural networks are built with interconnecting components called perceptrons, which are simplified digital models of biological neurons. The networks have at least two layers of perceptrons: one for input and one for output. Sandwich one or more “hidden” layers between the input and the output and you get a “deep” neural network; the greater the number of hidden layers, the deeper the network.
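To make that structure concrete, here is a minimal sketch in Python of a deep net as a stack of weight matrices, with an input layer, two hidden layers, and an output layer. The layer sizes and implementation details are illustrative assumptions, not taken from any network discussed in this story.

```python
import numpy as np

def relu(x):
    """The nonlinearity applied at each hidden layer."""
    return np.maximum(0.0, x)

# Layer sizes: an input layer, two "hidden" layers, and an output layer.
# Any network with at least one hidden layer counts as "deep"; adding more
# hidden layers makes it deeper. These particular sizes are arbitrary.
layer_sizes = [784, 128, 64, 10]   # e.g. image pixels in, class scores out

# Each connection strength ("weight") starts as a small random number.
rng = np.random.default_rng(0)
weights = [rng.normal(0.0, 0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    """Pass an input through every layer in turn and return the output."""
    activation = x
    for w in weights[:-1]:            # hidden layers
        activation = relu(activation @ w)
    return activation @ weights[-1]   # output layer: one score per label

# A made-up "image" just to show that the shapes line up.
image = rng.random(784)
print(forward(image, weights).shape)   # (10,) -- one score per possible label
```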

Deep nets can be trained to pick out patterns in data, such as patterns representing the images of cats or dogs. Training involves using an algorithm to iteratively adjust the strength of the connections between the perceptrons, so that the network learns to associate a given input (the pixels of an image) with the correct label (cat or dog). Once trained, the deep net should ideally be able to classify an input it hasn’t seen before.
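The article doesn’t name the training algorithm, but gradient descent with back-propagation (discussed below) is the standard choice. The toy sketch that follows uses it to nudge the weights of a tiny network until it separates two clusters of points standing in for “cat” and “dog” images; the data, layer sizes, and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "cat vs. dog" dataset: two clusters of 2-D points
# labeled 0 and 1. Real image pixels would work the same way, just with
# far more inputs per example.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100, dtype=float)

# One hidden layer of 8 perceptrons between the 2 inputs and the 1 output.
W1 = rng.normal(0, 0.5, (2, 8))
W2 = rng.normal(0, 0.5, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.1
for step in range(2000):
    # Forward pass: the network's current guess for every example.
    h = np.maximum(0.0, X @ W1)          # hidden activations (ReLU)
    p = sigmoid(h @ W2).ravel()          # predicted probability of label 1

    # Backward pass: how each weight should change to shrink the error
    # (the gradient of the cross-entropy loss, i.e. back-propagation).
    err = (p - y)[:, None] / len(y)
    grad_W2 = h.T @ err
    grad_W1 = X.T @ ((err @ W2.T) * (h > 0))   # ReLU blocks gradient where inactive

    # Nudge every connection strength a small step in the right direction.
    W1 -= learning_rate * grad_W1
    W2 -= learning_rate * grad_W2

# After training, the network should label the points it was trained on correctly.
p = sigmoid(np.maximum(0.0, X @ W1) @ W2).ravel()
print("training accuracy:", np.mean((p > 0.5) == (y == 1)))
```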

In their general structure and function, deep nets aspire loosely to emulate brains, in which the adjusted strengths of connections between neurons reflect learned associations. Neuroscientists have often pointed out important limitations in that comparison: Individual neurons may process information more extensively than “dumb” perceptrons do, for example, and deep nets frequently depend on a kind of communication between perceptrons called back-propagation that does not seem to occur in nervous systems. Nevertheless, for computational neuroscientists, deep nets have sometimes seemed like the best available option for modeling parts of the brain.
