Can AI Match the Human Brain? | Surya Ganguli | TED

76,477 views ・ 2025-02-21

TED


So what the heck happened in the field of AI in the last decade? It's like a strange new type of intelligence appeared on our planet. But it's not like human intelligence. It has remarkable capabilities, but it also makes egregious errors that we never make. And it doesn't yet do the deep logical reasoning that we can do. It has a very mysterious surface of both capabilities and fragilities. And we understand almost nothing about how it works. I would like a deeper scientific understanding of intelligence.
But to understand AI, it's useful to place it in the historical context of biological intelligence. The story of human intelligence might as well have started with this little critter. It's the last common ancestor of all vertebrates. We are all descended from it. It lived about 500 million years ago. Then evolution went on to build the brain, which in turn, in the space of 500 years from Newton to Einstein, developed the deep math and physics required to understand the universe, from quarks to cosmology. And it did this all without consulting ChatGPT.
And then, of course, there are the advances of the last decade. To really understand what just happened in AI, we need to combine physics, math, neuroscience, psychology, computer science and more, to develop a new science of intelligence. The science of intelligence can simultaneously help us understand biological intelligence and create better artificial intelligence. And we need this science now, because the engineering of intelligence has vastly outstripped our ability to understand it.
I want to take you on a tour of our work in the science of intelligence that addresses five critical areas in which AI can improve: data efficiency, energy efficiency, going beyond evolution, explainability, and melding minds and machines. Let's address these critical gaps one by one.
First, data efficiency. AI is vastly more data-hungry than humans. For example, we train our language models on the order of one trillion words now. Well, how many words do we get? Just 100 million. It's that tiny little red dot at the center. You might not be able to see it. It would take us 24,000 years to read the rest of the one trillion words.
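As a rough sanity check on that figure (the reading-rate assumptions here are editorial, not from the talk): at a brisk 240 words per minute, eight hours a day, every day,

$$240 \tfrac{\text{words}}{\text{min}} \times 480 \tfrac{\text{min}}{\text{day}} \times 365 \tfrac{\text{days}}{\text{yr}} \approx 4.2 \times 10^{7} \tfrac{\text{words}}{\text{yr}}, \qquad \frac{10^{12}\ \text{words}}{4.2 \times 10^{7}\ \text{words/yr}} \approx 24{,}000\ \text{yr}.$$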
OK, now, you might say that's unfair. Sure, AI read for 24,000 human-equivalent years, but humans got 500 million years of vertebrate brain evolution. But there's a catch. Your entire legacy of evolution is given to you through your DNA, and your DNA is only about 700 megabytes, or equivalently, 600 million [words]. So the combined information we get from learning and evolution is minuscule compared to what AI gets. You are all incredibly efficient learning machines.
So how do we bridge the gap between AI and humans? We started to tackle this problem by revisiting the famous scaling laws. Here's an example of a scaling law, where error falls off as a power law with the amount of training data. These scaling laws have captured the imagination of industry and motivated significant societal investments in energy, compute and data collection. But there's a problem. The exponents of these scaling laws are small. So to reduce the error by a little bit, you might need to ten-x your amount of training data. This is unsustainable in the long run. And even if it leads to improvements in the short run, there must be a better way.
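To see why small exponents hurt, suppose error falls as a power law $E(N) = c\,N^{-\alpha}$ in dataset size $N$, with $\alpha = 0.1$ as a hypothetical value of the order often reported for language models (the talk does not give a number). Then

$$\frac{E(10N)}{E(N)} = 10^{-0.1} \approx 0.79,$$

so ten-xing the data trims only about 21 percent off the error, and halving the error would take $2^{1/\alpha} = 2^{10} \approx 1000\times$ more data.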
We developed a theory that explains why these scaling laws are so bad. The basic idea is that large random datasets are incredibly redundant. If you already have billions of data points, the next data point doesn't tell you much that's new. But what if you could create a nonredundant dataset, where each data point is chosen carefully to tell you something new, compared to all the other data points? We developed theory and algorithms to do just this. We theoretically predicted and experimentally verified that we could bend these bad power laws down to much better exponentials, where adding a few more data points could reduce your error, rather than ten-xing the amount of data.
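One concrete way to picture a nonredundant dataset is greedy farthest-point selection, sketched below in Python. To be clear, this is an illustrative heuristic, not the theory or algorithm from the talk: each point is kept only if it sits far, in some embedding space, from everything already kept, so every kept point says something new.

```python
import numpy as np

def select_nonredundant(X, k):
    """Greedily pick k rows of X so that each new pick is the point
    farthest from everything chosen so far, i.e. the most novel one."""
    chosen = [0]                              # seed with an arbitrary point
    d = np.linalg.norm(X - X[0], axis=1)      # distance to the chosen set
    for _ in range(k - 1):
        i = int(np.argmax(d))                 # most novel remaining point
        chosen.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return chosen

# Toy usage: from 10,000 redundant 32-D embeddings, keep the 100 most diverse.
X = np.random.randn(10_000, 32)
subset = select_nonredundant(X, 100)
```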
So what theory did we use to get this result? We used ideas from statistical physics, and these are the equations. Now, for the rest of this entire talk, I'm going to go through these equations one by one. (Laughter) You think I'm joking? And explain them to you. OK, you're right, I'm joking. I'm not that mean. But you should have seen the faces of the TED organizers when I said I was going to do that. Alright, let's move on.
04:35
Let's zoom out a little bit,
94
275539
1602
04:37
and think more generally
95
277174
1201
04:38
about what it takes to make AI less data-hungry.
96
278375
2569
04:40
Imagine if we trained our kids
97
280978
2202
04:43
the same way we pretrain our large language models,
98
283180
3036
04:46
by next-word prediction.
99
286250
1535
04:47
So I'd give my kid a random chunk of the internet and say,
100
287818
2736
04:50
"By the way, this is the next word."
101
290587
1902
04:52
I'd give them another random chunk of the internet and say,
102
292523
2836
04:55
"This is the next word."
103
295392
1468
04:56
If that's all we did,
104
296860
1168
04:58
it would take our kids 24,000 years to learn anything useful.
105
298062
3036
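For concreteness, here is the pretraining objective he is describing, as a schematic PyTorch loss; `model` is assumed to map a prefix of token ids to next-token logits, and the names and shapes are illustrative.

```python
import torch.nn.functional as F

def pretraining_loss(model, tokens):
    """tokens: (batch, seq_len) integer ids. At every position the model
    is graded on one thing only: 'by the way, this is the next word.'"""
    logits = model(tokens[:, :-1])            # predict from each prefix
    targets = tokens[:, 1:]                   # the actual next words
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```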
But we do so much more than that. For example, when I teach my son math, I teach him the algorithm required to solve the problem; then he can immediately solve new problems and generalize using far less training data than any AI system would. I don't just throw millions of math problems at him. So to really make AI more data-efficient, we have to go far beyond our current training algorithms and turn machine learning into a new science of machine teaching. And neuroscience, psychology and math can really help here.
Let's go on to the next big gap, energy efficiency. Our brains are incredibly efficient. We only consume 20 watts of power. For reference, our old light bulbs were 100 watts. So we are all literally dimmer than light bulbs. (Laughter) But what about AI? Training a large model can consume as much as 10 million watts, and there's talk of going nuclear to power one-billion-watt data centers. So why is AI so much more energy-hungry than brains?
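Spelled out with the talk's own numbers: $10^{7}\ \text{W} \div 20\ \text{W} = 5\times10^{5}$, so a single training run draws the power of half a million brains, and a $10^{9}$-watt data center would draw that of fifty million.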
Well, the fault lies in the choice of digital computation itself, where we rely on fast and reliable bit flips at every intermediate step of the computation. Now, the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy.
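The textbook reference point here, added for context, is the Landauer bound: at temperature $T$, each irreversible bit operation must dissipate at least

$$k_B T \ln 2 \approx (1.38\times10^{-23}\ \text{J/K})(300\ \text{K})(0.693) \approx 3\times10^{-21}\ \text{J},$$

and engineering flips to be both fast and reliable pushes real transistors many orders of magnitude above that floor.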
Biology took a very different route. Biology computes the right answer just in time, using intermediate steps that are as slow and as unreliable as possible. In essence, biology does not rev its engine any more than it needs to. In addition, biology matches computation to physics much better. Consider, for example, addition. Our computers add using really complex, energy-consuming transistor circuits, but neurons just directly add their voltage inputs, because Maxwell's laws of electromagnetism already know how to add voltages.
In essence, biology matches its computation to the native physics of the universe. So to really build more energy-efficient AI, we need to rethink our entire technology stack, from electrons to algorithms, and better match computational dynamics to physical dynamics. For example, what are the fundamental limits on the speed and accuracy of any given computation, given an energy budget? And what kinds of electrochemical computers can achieve these fundamental limits? We recently solved this problem for the computation of sensing, which is something that every neuron has to do. We were able to find fundamental lower bounds, or lower limits, on the error as a function of the energy budget. That's that red curve. And we were able to find the chemical computers that achieve these limits. And remarkably, they looked a lot like G-protein coupled receptors, which every neuron uses to sense external signals. So this suggests that biology can achieve levels of efficiency that are close to fundamental limits set by the laws of physics itself.
Popping up a level, neuroscience now gives us the ability to measure not only neural activity, but also energy consumption across, for example, the entire brain of the fly. The energy consumption is measured through the usage of ATP, the chemical fuel that powers all neurons. So now let me ask you a question. Let's say in a certain brain region, neural activity goes up. Does the ATP go up or down? A natural guess would be that the ATP goes down, because neural activity costs energy, so it's got to consume the fuel. We found the exact opposite. When neural activity goes up, ATP goes up, and it stays elevated just long enough to power expected future neural activity. This suggests that the brain follows a predictive energy allocation principle, where it can predict how much energy is needed, where and when, and it delivers just the right amount of energy at just the right location, for just the right amount of time. So clearly, we have a lot to learn from physics, neuroscience and evolution about building more energy-efficient AI.
But we don't need to be limited by evolution. We can go beyond evolution, to co-opt the neural algorithms discovered by evolution, but implement them in quantum hardware that evolution could never figure out. For example, we can replace neurons with atoms. The different firing states of neurons correspond to the different electronic states of atoms. And we can replace synapses with photons. Just as synapses allow two neurons to communicate, photons allow two atoms to communicate through photon emission and absorption. So what can we build with this? We can build a quantum associative memory out of atoms and photons. This is the same memory system that won John Hopfield his recent Nobel Prize in Physics, but this time, it's a quantum-mechanical system built of atoms and photons, and we can analyze its performance and show that the quantum dynamics yields enhanced memory capacity, robustness and recall.
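For reference, here is the classical Hopfield associative memory in miniature; the quantum version described above swaps these binary neurons for atomic states and the learned couplings for photon-mediated interactions. This sketch is the standard classical model, not the quantum system itself.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product learning. patterns: (P, N) array of +/-1."""
    _, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)                  # no self-connections
    return W

def recall(W, probe, steps=20):
    """Run the network dynamics until a corrupted cue settles on a memory."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Store two random 100-neuron patterns, then recover one from a cue
# with about a quarter of its bits flipped.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 100))
W = store(patterns)
noisy = patterns[0] * rng.choice([1.0, 1.0, 1.0, -1.0], size=100)
print((recall(W, noisy) == patterns[0]).mean())   # typically 1.0
```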
We can also build new types of quantum optimizers built directly out of photons, and we can analyze their energy landscape and explain how they solve optimization problems in fundamentally new ways. This marriage between neural algorithms and quantum hardware opens up an entirely new field, which I like to call quantum neuromorphic computing.
OK, but let's return to the brain, where explainable AI can help us understand how it works. So now, AI allows us to build incredibly accurate but complicated models of the brain. So where is this all going? Are we simply replacing something we don't understand, the brain, with something else we don't understand, our complex model of it? As scientists, we'd like to have a conceptual understanding of how the brain works, not just have a model handed to us. So basically, I'd like to give you an example of our work on explainable AI, applied to the retina. The retina is a multilayered circuit of photoreceptors going to hidden neurons, going to output neurons. So how does it work? Well, we recently built the world's most accurate model of the retina. It could reproduce two decades of experiments on the retina. So this is fantastic. We have a digital twin of the retina. But how does the twin work? Why is it designed the way it is?
To make these questions concrete, I'd like to discuss just one of the two decades of experiments that I mentioned. And we're going to do this experiment on you right now. I'd like you to focus on my hand, and I'd like you to track it. OK, great. Let's do that just one more time. OK. You might have been slightly surprised when my hand reversed direction. And you should be surprised, because my hand just violated Newton's first law of motion, which states that objects that are in motion tend to remain in motion. So where in your brain is a violation of Newton's first law first detected? The answer is remarkable: it's in your retina. There are neurons in your retina that will fire if and only if Newton's first law is violated. So does our model do that? Yes, it does. It reproduces it.
But now, there's a puzzle. How does the model do it? Well, we developed methods, explainable AI methods, that take any given stimulus that causes a neuron to fire, carve out the essential subcircuit responsible for that firing, and explain how it works.
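A generic way to start such carving, shown below, is to rank a model's hidden units by activation-times-gradient for the stimulus in question; this is a common attribution heuristic and a stand-in for, not a description of, the group's actual method. The module name `hidden_layer` is assumed.

```python
import torch

def top_hidden_units(model, stimulus, output_idx, top_k=10):
    """Score each hidden unit by |activation x gradient| of one output
    neuron's response, and return the strongest contributors."""
    saved = {}
    def hook(module, inputs, output):
        output.retain_grad()                  # keep gradients on activations
        saved["h"] = output
    handle = model.hidden_layer.register_forward_hook(hook)
    response = model(stimulus)[output_idx]    # the firing we want to explain
    response.backward()
    handle.remove()
    scores = (saved["h"] * saved["h"].grad).abs().flatten()
    return scores.topk(top_k).indices         # candidate essential subcircuit
```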
We were able to do this not only for Newton's first law violations, but for the two decades of experiments that our model reproduced. And so this one model reproduces two decades' worth of neuroscience and also makes some new predictions. This opens up a new pathway to accelerating neuroscience discovery using AI: basically, build digital twins of the brain, and then use explainable AI to understand how they work. We're actually engaged in a big effort at Stanford to build a digital twin of the entire primate visual system and explain how it works.
But we can go beyond that and use our digital twins to meld minds and machines, by allowing bidirectional communication between them. So imagine a scenario where you have a brain, you record from it, and you build a digital twin. Then you use control theory to learn neural activity patterns that you can write directly into the digital twin to control it. Then you take those same neural activity patterns and write them into the brain to control the brain. In essence, we can learn the language of the brain, and then speak directly back to it.
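As a sketch of that control step (hypothetical code: `twin` is assumed to be a differentiable model from a stimulation pattern to predicted neural activity):

```python
import torch

def design_stimulation(twin, target, n_neurons, steps=500, lr=0.05):
    """Gradient-descend on a stimulation pattern until the digital twin's
    predicted activity matches the target pattern; the result is then a
    candidate to write back into the real brain."""
    stim = torch.zeros(n_neurons, requires_grad=True)
    opt = torch.optim.Adam([stim], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((twin(stim) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return stim.detach()
```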
So we recently carried out this program in mice, where we could use AI to read the mind of a mouse. On the top row, you're seeing images that we actually showed to the mouse, and on the bottom row, you're seeing images that we decoded from the brain of the mouse. Our decoded images are lower-resolution than the actual images, but not because our decoders are bad. It's because mouse visual resolution is bad. So actually, the decoded images show you what the world would actually look like if you were a mouse.
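A minimal stand-in for such a decoder (the data, shapes, and linear model here are invented for illustration; the real decoder is certainly more sophisticated) is a regularized linear map from recorded activity back to pixels:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
R = rng.standard_normal((5000, 800))      # neural responses: trials x neurons
I = rng.standard_normal((5000, 256))      # shown images, flattened 16 x 16
decoder = Ridge(alpha=10.0).fit(R, I)     # learn activity -> pixels
decoded = decoder.predict(R[:1]).reshape(16, 16)   # decode one trial
```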
Now, we can go beyond that. We can now write neural activity patterns into the mouse's brain, so we can make it hallucinate any particular percept we would like it to hallucinate. And we got so good at this that we could make it reliably hallucinate a percept by controlling only 20 neurons in the mouse's brain, by figuring out the right 20 neurons to control. So essentially, we can control what the mouse sees directly, by writing to its brain. The possibilities of bidirectional communication between brains and machines are limitless: to understand, to cure and to augment the brain.
So I hope you'll see that the pursuit of a unified science of intelligence that spans brains and machines can both help us better understand biological intelligence and help us create more efficient, explainable and powerful artificial intelligence. But it's important that this pursuit be done out in the open, so the science can be shared with the world, and it must be done with a very long time horizon. This makes academia the perfect place to pursue a science of intelligence. In academia, we're free from the tyranny of quarterly earnings reports. We're free from the censorship of corporate legal departments. We can be far more interdisciplinary than any one company. And our very mission is to share what we learn with the world. For all these reasons, we're actually building a new center for the science of intelligence at Stanford. While there have been incredible advances in industry on the engineering of intelligence, now increasingly happening behind closed doors, I'm very excited about what the science of intelligence can achieve out in the open.
You know, in the last century, one of the greatest intellectual adventures lay in humanity peering outwards into the universe to understand it, from quarks to cosmology. I think one of the greatest intellectual adventures of this century will lie in humanity peering inwards, both into ourselves and into the AIs that we create, in order to develop a deeper, new scientific understanding of intelligence.

Thank you.

(Applause)