Translator: Junyi Sha
Reviewer: Cindy Ma
00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves.

00:49
And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
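To make the arithmetic behind that claim concrete, here is a back-of-the-envelope sketch (my own addition, not part of the talk) of what the million-fold speed assumption implies:

```python
# Back-of-the-envelope check of the speed argument above (a sketch,
# not from the talk itself; the ~1,000,000x figure is the talk's own
# estimate for electronic vs. biochemical circuits).
SPEEDUP = 1_000_000          # machine thinks ~a million times faster
WEEKS_PER_YEAR = 52

subjective_weeks = 1 * SPEEDUP               # one wall-clock week of running
subjective_years = subjective_weeks / WEEKS_PER_YEAR

print(f"{subjective_years:,.0f} years of work per week")
# -> 19,231 years of work per week, i.e. roughly the "20,000 years"
#    of human-level intellectual work cited above
```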
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
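The "500,000 years" figure follows from the same multiplier; a minimal sketch, again assuming the talk's million-fold speed-up:

```python
# Six months of lead time, scaled by the same ~1,000,000x speed-up
# assumed earlier (a sketch of the talk's own arithmetic).
SPEEDUP = 1_000_000
lead_in_years = 0.5                             # six months of lead time

print(f"{lead_in_years * SPEEDUP:,.0f} years")  # -> 500,000 years
```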
10:06
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it." (Laughter)

10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
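The numbers behind that slide are easy to reconstruct; a small sketch, where the reference dates (2016 for the talk, 2007 for the iPhone, 1989 for "The Simpsons") are my own assumptions for illustration:

```python
# Reconstructing the "50 years in months" slide. The reference dates
# are assumptions added for illustration: the talk is from 2016, the
# iPhone shipped in 2007, and "The Simpsons" premiered in 1989.
TALK_YEAR = 2016

print(50 * 12)                   # 600 months in 50 years
print((TALK_YEAR - 2007) * 12)   # ~108 months since the iPhone launched
print((TALK_YEAR - 1989) * 12)   # ~324 months of "The Simpsons" on TV
```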
11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much. (Applause)