Translator: Yanyan Hong
Reviewer: Yolanda Zhang
00:12
After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.
00:59
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we in nerdy, geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.

01:33
So let's take a closer look at our relationship with technology, OK?
01:38
As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

02:08
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination. Let's start with the power.
02:33
I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

02:52
It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.
04:02
So all this amazing recent progress in AI really begs the question: How far will it go?

04:09
I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront -- (Laughter) which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI, which has been the holy grail of AI research since its inception.

04:59
By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.
05:51
Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?
06:35
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?" (Laughter) But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.
07:05
This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.

07:51
But this is going to require a change of strategy because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher. (Laughter) We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think? (Laughter) It's much better to be proactive rather than reactive; plan ahead and get things right the first time because that might be the only time we'll get.
08:25
But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI. Think through what can go wrong to make sure it goes right.
09:08
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

09:31
One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

10:03
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us. (Applause)
10:23
Alright, now raise your hand if your computer has ever crashed. (Laughter) Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

10:51
And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

11:37
And whose goals should these be, anyway? Which goals should they be?
11:42
This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

12:04
Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines. (Laughter) And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?

12:55
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future.
13:02
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

13:43
But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?
14:10
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.

14:30
So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.
15:06
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

15:47
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
16:25
So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.

16:57
We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology.

17:07
Thank you.

17:09
(Applause)