Can AI Match the Human Brain? | Surya Ganguli | TED

75,748 views ・ 2025-02-21

TED



Translator: Yip Yan Yeung  Reviewer: Lening Xu
00:04
So what the heck happened in the field of AI in the last decade? It's like a strange new type of intelligence appeared on our planet. But it's not like human intelligence. It has remarkable capabilities, but it also makes egregious errors that we never make. And it doesn't yet do the deep logical reasoning that we can do. It has a very mysterious surface of both capabilities and fragilities. And we understand almost nothing about how it works.

00:32
I would like a deeper scientific understanding of intelligence. But to understand AI, it's useful to place it in the historical context of biological intelligence.

00:43
The story of human intelligence might as well have started with this little critter. It's the last common ancestor of all vertebrates. We are all descended from it. It lived about 500 million years ago. Then evolution went on to build the brain, which in turn, in the space of 500 years from Newton to Einstein, developed the deep math and physics required to understand the universe, from quarks to cosmology. And it did this all without consulting ChatGPT.
01:12
And then, of course, there's the advances of the last decade. To really understand what just happened in AI, we need to combine physics, math, neuroscience, psychology, computer science and more, to develop a new science of intelligence. The science of intelligence can simultaneously help us understand biological intelligence and create better artificial intelligence. And we need this science now, because the engineering of intelligence has vastly outstripped our ability to understand it.

01:41
I want to take you on a tour of our work in the science of intelligence that addresses five critical areas in which AI can improve -- data efficiency, energy efficiency, going beyond evolution, explainability and melding minds and machines. Let's address these critical gaps one by one.
02:00
First, data efficiency. AI is vastly more data-hungry than humans. For example, we train our language models on the order of one trillion words now. Well, how many words do we get? Just 100 million. It's that tiny little red dot at the center. You might not be able to see it. It would take us 24,000 years to read the rest of the one trillion words.

02:23
OK, now, you might say that's unfair. Sure, AI read for 24,000 human-equivalent years, but humans got 500 million years of vertebrate brain evolution. But there's a catch. Your entire legacy of evolution is given to you through your DNA, and your DNA is only about 700 megabytes, or equivalently, 600 million [words]. So the combined information we get from learning and evolution is minuscule compared to what AI gets. You are all incredibly efficient learning machines. So how do we bridge the gap between AI and humans?
02:57
We started to tackle this problem by revisiting the famous scaling laws. Here's an example of a scaling law, where error falls off as a power law with the amount of training data. These scaling laws have captured the imagination of industry and motivated significant societal investments in energy, compute and data collection.

03:16
But there's a problem. The exponents of these scaling laws are small. So to reduce the error by a little bit, you might need to ten-x your amount of training data. This is unsustainable in the long run. And even if it leads to improvements in the short run, there must be a better way.
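The arithmetic of a small exponent can be made concrete. The sketch below uses hypothetical numbers (the talk gives no specific exponent or scale constant); the point is the contrast between the multiplicative data growth a power law demands and the additive growth an exponential law would allow:

```python
import math

# Hypothetical numbers for illustration; the talk gives no specific values.
alpha = 0.1   # a "small exponent": test error ~ N ** (-alpha)

# Under a power law, halving the error requires *multiplying* the dataset size:
growth_power_law = 2 ** (1 / alpha)   # N_new / N_old such that the error halves
print(f"power law: {growth_power_law:.0f}x more data to halve the error")

# Under exponential scaling, error ~ exp(-N / N0): halving the error needs
# only a fixed *additive* number of extra (carefully chosen) examples.
N0 = 1000                              # hypothetical scale constant
extra_needed = N0 * math.log(2)        # additive cost, independent of current N
print(f"exponential: ~{extra_needed:.0f} extra examples to halve the error")
```

With an exponent of 0.1, halving the error costs roughly a thousand-fold more data under the power law, versus a fixed increment under the exponential.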
03:33
We developed a theory that explains why these scaling laws are so bad. The basic idea is that large random datasets are incredibly redundant. If you already have billions of data points, the next data point doesn't tell you much that's new. But what if you could create a nonredundant dataset, where each data point is chosen carefully to tell you something new, compared to all the other data points? We developed theory and algorithms to do just this.

03:57
We theoretically predicted and experimentally verified that we could bend these bad power laws down to much better exponentials, where adding a few more data points could reduce your error, rather than ten-xing the amount of data.
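One toy instance of "pick each data point to tell you something new" is greedy farthest-point sampling, which always selects the point most distant from everything already chosen. This is an illustrative stand-in, not the actual data-pruning algorithm from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def farthest_point_sample(data, k):
    """Greedily select k indices, each maximizing distance to the chosen set."""
    chosen = [0]                                   # start from an arbitrary point
    # distance from every point to its nearest already-chosen point
    d = np.linalg.norm(data - data[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(d))                    # the most "novel" remaining point
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(data - data[nxt], axis=1))
    return chosen

data = rng.normal(size=(1000, 2))                  # a redundant random dataset
subset = farthest_point_sample(data, 20)           # 20 well-spread points
print(len(set(subset)))                            # 20 distinct indices
```

Each selected point carries information no previously selected point carries, which is exactly the property that random sampling lacks.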
04:10
So what theory did we use to get this result? We used ideas from statistical physics, and these are the equations. Now, for the rest of this entire talk, I'm going to go through these equations one by one.

(Laughter)

04:23
You think I'm joking? And explain them to you. OK, you're right, I'm joking. I'm not that mean. But you should have seen the faces of the TED organizers when I said I was going to do that. Alright, let's move on.
04:35
Let's zoom out a little bit, and think more generally about what it takes to make AI less data-hungry. Imagine if we trained our kids the same way we pretrain our large language models, by next-word prediction. So I'd give my kid a random chunk of the internet and say, "By the way, this is the next word." I'd give them another random chunk of the internet and say, "This is the next word." If that's all we did, it would take our kids 24,000 years to learn anything useful.

05:01
But we do so much more than that. For example, when I teach my son math, I teach him the algorithm required to solve the problem, then he can immediately solve new problems and generalize using far less training data than any AI system would do. I don't just throw millions of math problems at him. So to really make AI more data-efficient, we have to go far beyond our current training algorithms and turn machine learning into a new science of machine teaching. And neuroscience, psychology and math can really help here.
05:35
Let's go on to the next big gap, energy efficiency. Our brains are incredibly efficient. We only consume 20 watts of power. For reference, our old light bulbs were 100 watts. So we are all literally dimmer than light bulbs.

(Laughter)

05:52
But what about AI? Training a large model can consume as much as 10 million watts, and there's talk of going nuclear to power one-billion-watt data centers. So why is AI so much more energy-hungry than brains? Well, the fault lies in the choice of digital computation itself, where we rely on fast and reliable bit flips at every intermediate step of the computation. Now, the laws of thermodynamics demand that every fast and reliable bit flip must consume a lot of energy.
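The thermodynamic floor alluded to here can be checked with Landauer's principle, which sets a minimum energy of k_B·T·ln 2 per erased bit; the constants below are standard physics, not figures from the talk, and real hardware pays orders of magnitude more than this floor for fast, reliable flips:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_joules = k_B * T * math.log(2)
print(f"Landauer floor per bit: {landauer_joules:.2e} J")   # ~2.87e-21 J

# The brain's whole 20 W budget, spent at this ideal floor, would cover an
# enormous number of bit operations per second -- digital hardware is far away.
ops_per_second = 20.0 / landauer_joules
print(f"ideal bit-ops/s on 20 W: {ops_per_second:.2e}")
```

The gap between this floor and what transistor-based computation actually spends is the headroom the talk argues biology exploits.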
06:24
Biology took a very different route. Biology computes the right answer just in time, using intermediate steps that are as slow and as unreliable as possible. In essence, biology does not rev its engine any more than it needs to. In addition, biology matches computation to physics much better. Consider, for example, addition. Our computers add using really complex energy-consuming transistor circuits, but neurons just directly add their voltage inputs, because Maxwell's laws of electromagnetism already know how to add voltage. In essence, biology matches its computation to the native physics of the universe.
07:11
So to really build more energy-efficient AI, we need to rethink our entire technology stack, from electrons to algorithms, and better match computational dynamics to physical dynamics. For example, what are the fundamental limits on the speed and accuracy of any given computation, given an energy budget? And what kinds of electrochemical computers can achieve these fundamental limits?

07:38
We recently solved this problem for the computation of sensing, which is something that every neuron has to do. We were able to find fundamental lower bounds or lower limits on the error as a function of the energy budget. That's that red curve. And we were able to find the chemical computers that achieve these limits. And remarkably, they looked a lot like G-protein coupled receptors, which every neuron uses to sense external signals. So this suggests that biology can achieve amounts of efficiency that are close to fundamental limits set by the laws of physics itself.
08:13
Popping up a level, neuroscience now gives us the ability to measure not only neural activity, but also energy consumption across, for example, the entire brain of the fly. The energy consumption is measured through ATP usage, which is the chemical fuel that powers all neurons.

08:31
So now let me ask you a question. Let's say in a certain brain region, neural activity goes up. Does the ATP go up or down? A natural guess would be that the ATP goes down, because neural activity costs energy, so it's got to consume the fuel. We found the exact opposite. When neural activity goes up, ATP goes up and it stays elevated just long enough to power expected future neural activity. This suggests that the brain follows a predictive energy allocation principle, where it can predict how much energy is needed, where and when, and it delivers just the right amount of energy at just the right location, for just the right amount of time.
09:14
So clearly, we have a lot to learn from physics, neuroscience and evolution about building more energy-efficient AI. But we don't need to be limited by evolution. We can go beyond evolution, to co-opt the neural algorithms discovered by evolution, but implement them in quantum hardware that evolution could never figure out.

09:36
For example, we can replace neurons with atoms. The different firing states of neurons correspond to the different electronic states of atoms. And we can replace synapses with photons. Just as synapses allow two neurons to communicate, photons allow two atoms to communicate through photon emission and absorption. So what can we build with this?
10:01
We can build a quantum associative memory out of atoms and photons. This is the same memory system that won John Hopfield his recent Nobel Prize in physics, but this time, it's a quantum-mechanical system built of atoms and photons, and we can analyze its performance and show that the quantum dynamics yields enhanced memory capacity, robustness and recall.
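For readers unfamiliar with Hopfield's memory system, here is a minimal *classical* Hopfield associative memory, the classical counterpart of the quantum version described in the talk: patterns are stored in symmetric Hebbian weights, and a corrupted cue is cleaned up by iterated sign updates until it settles on a stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian weight matrix from +/-1 patterns; no self-connections."""
    n = patterns.shape[1]
    w = (patterns.T @ patterns) / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, steps=10):
    """Synchronous updates until the state stops changing."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

n = 100
patterns = rng.choice([-1.0, 1.0], size=(3, n))   # three random memories
w = store(patterns)

cue = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)       # corrupt 10% of the bits
cue[flip] *= -1

recovered = recall(w, cue)
print("recovered stored pattern:", np.array_equal(recovered, patterns[0]))
```

At this low memory load (3 patterns in 100 neurons), the noisy cue falls back into the stored pattern's basin of attraction; the quantum analysis in the talk concerns how quantum dynamics enlarges capacity and robustness beyond this classical behavior.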
10:23
We can also build new types of quantum optimizers built directly out of photons, and we can analyze their energy landscape and explain how they solve optimization problems in fundamentally new ways. This marriage between neural algorithms and quantum hardware opens up an entirely new field, which I like to call quantum neuromorphic computing.
10:44
OK, but let's return to the brain, where explainable AI can help us understand how it works. So now, AI allows us to build incredibly accurate but complicated models of the brain. So where is this all going? Are we simply replacing something we don't understand, the brain, with something else we don't understand, our complex model of it? As scientists, we'd like to have a conceptual understanding of how the brain works, not just have a model handed to us.

11:13
So basically, I'd like to give you an example of our work on explainable AI, applied to the retina. So the retina is a multilayered circuit of photoreceptors going to hidden neurons, going to output neurons. So how does it work? Well, we recently built the world's most accurate model of the retina. It could reproduce two decades of experiments on the retina. So this is fantastic. We have a digital twin of the retina. But how does the twin work? Why is it designed the way it is?
11:43
To make these questions concrete, I'd like to discuss just one of the two decades of experiments that I mentioned. And we're going to do this experiment on you right now. I'd like you to focus on my hand, and I'd like you to track it. OK, great. Let's do that just one more time. OK.

12:09
You might have been slightly surprised when my hand reversed direction. And you should be surprised, because my hand just violated Newton's first law of motion, which states that objects that are in motion tend to remain in motion. So where in your brain is a violation of Newton's first law first detected? The answer is remarkable. It's in your retina. There are neurons in your retina that will fire if and only if Newton's first law is violated. So does our model do that? Yes, it does. It reproduces it.
12:42
But now, there's a puzzle. How does the model do it? Well, we developed methods, explainable AI methods, to take any given stimulus that causes a neuron to fire, and we carve out the essential subcircuit responsible for that firing, and we explain how it works. We were able to do this not only for Newton's first law violations, but for the two decades of experiments that our model reproduced. And so this one model reproduces two decades' worth of neuroscience and also makes some new predictions.

13:15
This opens up a new pathway to accelerating neuroscience discovery using AI. Basically, build digital twins of the brain, and then use explainable AI to understand how they work.
13:26
We're actually engaged in a big effort at Stanford to build a digital twin of the entire primate visual system and explain how it works. But we can go beyond that and use our digital twins to meld minds and machines, by allowing bidirectional communication between them.

13:45
So imagine a scenario where you have a brain, you record from it, you build a digital twin. Then you use control theory to learn neural activity patterns that you can write directly into the digital twin to control it. Then, you take those same neural activity patterns and you write them into the brain to control the brain. In essence, we can learn the language of the brain, and then speak directly back to it.
14:12
So we recently carried out this program in mice, where we could use AI to read the mind of a mouse. So on the top row, you're seeing images that we actually showed to the mouse, and in the bottom row, you're seeing images that we decoded from the brain of the mouse. Our decoded images are lower-resolution than the actual images, but not because our decoders are bad. It's because mouse visual resolution is bad. So actually, the decoded images show you what the world would actually look like if you were a mouse.

14:46
Now, we can go beyond that. We can now write neural activity patterns into the mouse's brain, so we can make it hallucinate any particular percept we would like it to hallucinate. And we got so good at this that we could make it reliably hallucinate a percept by controlling only 20 neurons in the mouse's brain, by figuring out the right 20 neurons to control. So essentially, we can control what the mouse sees directly, by writing to its brain. The possibilities of bidirectional communication between brains and machines are limitless. To understand, to cure and to augment the brain.
15:28
So I hope you'll see that the pursuit of a unified science of intelligence that spans brains and machines can both help us better understand biological intelligence and help us create more efficient, explainable and powerful artificial intelligence. But it's important that this pursuit be done out in the open so the science can be shared with the world, and it must be done with a very long time horizon. This makes academia the perfect place to pursue a science of intelligence.

16:00
In academia, we're free from the tyranny of quarterly earnings reports. We're free from the censorship of corporate legal departments. We can be far more interdisciplinary than any one company. And our very mission is to share what we learn with the world. For all these reasons, we're actually building a new center for the science of intelligence at Stanford. While there have been incredible advances in industry on the engineering of intelligence, now increasingly happening behind closed doors, I'm very excited about what the science of intelligence can achieve out in the open.
16:38
You know, in the last century, one of the greatest intellectual adventures lay in humanity peering outwards into the universe to understand it, from quarks to cosmology. I think one of the greatest intellectual adventures of this century will lie in humanity peering inwards, both into ourselves and into the AIs that we create, in order to develop a deeper, new scientific understanding of intelligence.

17:06
Thank you.

(Applause)