Translator: Yichen Zheng
Reviewer: Yanyan Hong
00:12 This is Lee Sedol.
00:14 Lee Sedol is one of the world's greatest Go players,
00:18 and he's having what my friends in Silicon Valley call
00:21 a "Holy Cow" moment --
00:22 (Laughter)
00:23 a moment where we realize
00:25 that AI is actually progressing a lot faster than we expected.
00:29 So humans have lost on the Go board. What about the real world?
00:33 Well, the real world is much bigger,
00:35 much more complicated than the Go board.
00:37 It's a lot less visible,
00:39 but it's still a decision problem.
00:42 And if we think about some of the technologies
00:45 that are coming down the pike ...
00:47 Noriko [Arai] mentioned that reading is not yet happening in machines,
00:51 at least with understanding.
00:53 But that will happen,
00:55 and when that happens,
00:56 very soon afterwards,
00:58 machines will have read everything that the human race has ever written.
01:03 And that will enable machines,
01:05 along with the ability to look further ahead than humans can,
01:08 as we've already seen in Go,
01:10 if they also have access to more information,
01:12 they'll be able to make better decisions in the real world than we can.
01:18 So is that a good thing?
01:21 Well, I hope so.
01:26 Our entire civilization, everything that we value,
01:29 is based on our intelligence.
01:31 And if we had access to a lot more intelligence,
01:35 then there's really no limit to what the human race can do.
01:40 And I think this could be, as some people have described it,
01:43 the biggest event in human history.
01:48 So why are people saying things like this,
01:51 that AI might spell the end of the human race?
01:55 Is this a new thing?
01:56 Is it just Elon Musk and Bill Gates and Stephen Hawking?
02:01 Actually, no. This idea has been around for a while.
02:05 Here's a quotation:
02:07 "Even if we could keep the machines in a subservient position,
02:11 for instance, by turning off the power at strategic moments" --
02:14 and I'll come back to that "turning off the power" idea later on --
02:17 "we should, as a species, feel greatly humbled."
02:21 So who said this? This is Alan Turing in 1951.
02:26 Alan Turing, as you know, is the father of computer science
02:28 and in many ways, the father of AI as well.
02:33 So if we think about this problem,
02:34 the problem of creating something more intelligent than your own species,
02:38 we might call this "the gorilla problem,"
02:42 because gorillas' ancestors did this a few million years ago,
02:45 and now we can ask the gorillas:
02:48 Was this a good idea?
02:49 So here they are having a meeting to discuss whether it was a good idea,
02:53 and after a little while, they conclude, no,
02:56 this was a terrible idea.
02:58 Our species is in dire straits.
03:00 In fact, you can see the existential sadness in their eyes.
03:04 (Laughter)
03:06 So this queasy feeling that making something smarter than your own species
03:11 is maybe not a good idea --
03:14 what can we do about that?
03:15 Well, really nothing, except stop doing AI,
03:20 and because of all the benefits that I mentioned
03:23 and because I'm an AI researcher,
03:24 I'm not having that.
03:27 I actually want to be able to keep doing AI.
03:30 So we actually need to nail down the problem a bit more.
03:33 What exactly is the problem?
03:34 Why is better AI possibly a catastrophe?
03:39 So here's another quotation:
03:41 "We had better be quite sure that the purpose put into the machine
03:45 is the purpose which we really desire."
03:48 This was said by Norbert Wiener in 1960,
03:51 shortly after he watched one of the very early learning systems
03:55 learn to play checkers better than its creator.
04:00 But this could equally have been said
04:03 by King Midas.
04:04 King Midas said, "I want everything I touch to turn to gold,"
04:08 and he got exactly what he asked for.
04:10 That was the purpose that he put into the machine,
04:13 so to speak,
04:14 and then his food and his drink and his relatives turned to gold
04:18 and he died in misery and starvation.
04:22 So we'll call this "the King Midas problem"
04:24 of stating an objective which is not, in fact,
04:27 truly aligned with what we want.
04:30 In modern terms, we call this "the value alignment problem."
04:36 Putting in the wrong objective is not the only part of the problem.
04:40 There's another part.
04:41 If you put an objective into a machine,
04:43 even something as simple as, "Fetch the coffee,"
04:47 the machine says to itself,
04:50 "Well, how might I fail to fetch the coffee?
04:53 Someone might switch me off.
04:55 OK, I have to take steps to prevent that.
04:57 I will disable my 'off' switch.
05:00 I will do anything to defend myself against interference
05:03 with this objective that I have been given."
05:05 So this single-minded pursuit
05:09 in a very defensive mode of an objective that is, in fact,
05:12 not aligned with the true objectives of the human race --
05:15 that's the problem that we face.
05:18 And in fact, that's the high-value takeaway from this talk.
05:23 If you want to remember one thing,
05:25 it's that you can't fetch the coffee if you're dead.
05:28 (Laughter)
05:29 It's very simple. Just remember that. Repeat it to yourself three times a day.
05:33 (Laughter)
05:35 And in fact, this is exactly the plot
05:37 of "2001: [A Space Odyssey]"
05:41 HAL has an objective, a mission,
05:43 which is not aligned with the objectives of the humans,
05:46 and that leads to this conflict.
05:49 Now fortunately, HAL is not superintelligent.
05:52 He's pretty smart, but eventually Dave outwits him
05:55 and manages to switch him off.
06:01 But we might not be so lucky.
06:08 So what are we going to do?
06:12 I'm trying to redefine AI
06:14 to get away from this classical notion
06:16 of machines that intelligently pursue objectives.
06:22 There are three principles involved.
06:24 The first one is a principle of altruism, if you like,
06:27 that the robot's only objective
06:30 is to maximize the realization of human objectives,
06:35 of human values.
06:36 And by values here I don't mean touchy-feely, goody-goody values.
06:39 I just mean whatever it is that the human would prefer
06:43 their life to be like.
06:47 And so this actually violates Asimov's law
06:49 that the robot has to protect its own existence.
06:51 It has no interest in preserving its existence whatsoever.
06:57 The second law is a law of humility, if you like.
07:01 And this turns out to be really important to make robots safe.
07:05 It says that the robot does not know
07:08 what those human values are,
07:10 so it has to maximize them, but it doesn't know what they are.
07:15 And that avoids this problem of single-minded pursuit
07:17 of an objective.
07:18 This uncertainty turns out to be crucial.
07:21 Now, in order to be useful to us,
07:23 it has to have some idea of what we want.
07:27 It obtains that information primarily by observation of human choices,
07:32 so our own choices reveal information
07:35 about what it is that we prefer our lives to be like.
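A minimal sketch of the kind of inference this describes, assuming a toy setup with invented candidate objectives and numbers (none of it is from the talk): the robot keeps a probability over a few hypotheses about what the human values and updates it each time it observes the human pick one option over another.

```python
# Toy sketch (not the speaker's actual method): the robot maintains a belief
# over a few hypothetical human objectives and updates it whenever it observes
# the human choose option `chosen` over option `rejected`.
import math

# Invented candidate objectives: each maps an option to a utility.
CANDIDATES = {
    "values_tidiness": {"clean_kitchen": 2.0, "make_coffee": 0.5, "watch_tv": 0.0},
    "values_coffee":   {"clean_kitchen": 0.2, "make_coffee": 2.0, "watch_tv": 0.3},
    "values_leisure":  {"clean_kitchen": 0.0, "make_coffee": 0.4, "watch_tv": 2.0},
}

def update_belief(belief, chosen, rejected, rationality=1.0):
    """Bayesian update: a human holding objective h is assumed to pick
    `chosen` over `rejected` with Boltzmann (softmax) probability."""
    posterior = {}
    for h, prior in belief.items():
        u = CANDIDATES[h]
        p_choice = math.exp(rationality * u[chosen]) / (
            math.exp(rationality * u[chosen]) + math.exp(rationality * u[rejected]))
        posterior[h] = prior * p_choice
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

belief = {h: 1 / len(CANDIDATES) for h in CANDIDATES}   # start uniform
for chosen, rejected in [("make_coffee", "watch_tv"), ("make_coffee", "clean_kitchen")]:
    belief = update_belief(belief, chosen, rejected)

print({h: round(p, 3) for h, p in belief.items()})
# After two observed choices, most of the probability mass sits on "values_coffee".
```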
07:40 So those are the three principles.
07:42 Let's see how that applies to this question of:
07:44 "Can you switch the machine off?" as Turing suggested.
07:48 So here's a PR2 robot.
07:51 This is one that we have in our lab,
07:52 and it has a big red "off" switch right on the back.
07:56 The question is: Is it going to let you switch it off?
07:59 If we do it the classical way,
08:00 we give it the objective of, "Fetch the coffee, I must fetch the coffee,
08:03 I can't fetch the coffee if I'm dead,"
08:06 so obviously the PR2 has been listening to my talk,
08:09 and so it says, therefore, "I must disable my 'off' switch,
08:14 and probably taser all the other people in Starbucks
08:17 who might interfere with me."
08:19 (Laughter)
08:21 So this seems to be inevitable, right?
08:23 This kind of failure mode seems to be inevitable,
08:25 and it follows from having a concrete, definite objective.
08:30 So what happens if the machine is uncertain about the objective?
08:33 Well, it reasons in a different way.
08:35 It says, "OK, the human might switch me off,
08:38 but only if I'm doing something wrong.
08:41 Well, I don't really know what wrong is,
08:44 but I know that I don't want to do it."
08:46 So that's the first and second principles right there.
08:49 "So I should let the human switch me off."
08:53 And in fact you can calculate the incentive that the robot has
08:57 to allow the human to switch it off,
09:00 and it's directly tied to the degree
09:01 of uncertainty about the underlying objective.
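A back-of-the-envelope illustration of that calculation, with invented payoffs rather than the actual theorem: the robot compares acting immediately against deferring to a human who can switch it off, and the gap between the two grows as the robot becomes less sure that its action is what the human wants.

```python
# Toy sketch of the off-switch argument (invented numbers, not the real proof):
# the robot is unsure whether its planned action is worth +1 or -1 to the human.
# If it defers, an informed human switches it off exactly when the action would
# be bad, so deferring is never worse in expectation -- and the gap (the robot's
# incentive to allow the switch-off) grows as its confidence drops.

def incentive_to_defer(p_action_is_good, value_good=1.0, value_bad=-1.0):
    # Expected value of acting immediately, ignoring the human.
    act_now = p_action_is_good * value_good + (1 - p_action_is_good) * value_bad
    # Expected value of deferring: the action goes ahead only in the good case,
    # because the human switches the robot off otherwise.
    defer = p_action_is_good * value_good
    return defer - act_now  # >= 0 for any probability in [0.5, 1]

for p in (0.99, 0.9, 0.7, 0.5):
    print(f"P(action is good) = {p:.2f}  "
          f"incentive to allow switch-off = {incentive_to_defer(p):.2f}")
# The less sure the robot is that its action is what the human wants,
# the more it gains by leaving the switch in the human's hands.
```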
09:05 And then when the machine is switched off,
09:08 that third principle comes into play.
09:10 It learns something about the objectives it should be pursuing,
09:13 because it learns that what it did wasn't right.
09:16 In fact, we can, with suitable use of Greek symbols,
09:19 as mathematicians usually do,
09:21 we can actually prove a theorem
09:23 that says that such a robot is provably beneficial to the human.
09:27 You are provably better off with a machine that's designed in this way
09:31 than without it.
09:33 So this is a very simple example, but this is the first step
09:35 in what we're trying to do with human-compatible AI.
09:42 Now, this third principle,
09:45 I think is the one that you're probably scratching your head over.
09:48 You're probably thinking, "Well, you know, I behave badly.
09:52 I don't want my robot to behave like me.
09:55 I sneak down in the middle of the night and take stuff from the fridge.
09:58 I do this and that."
09:59 There's all kinds of things you don't want the robot doing.
10:02 But in fact, it doesn't quite work that way.
10:04 Just because you behave badly
10:06 doesn't mean the robot is going to copy your behavior.
10:09 It's going to understand your motivations and maybe help you resist them,
10:13 if appropriate.
10:16 But it's still difficult.
10:18 What we're trying to do, in fact,
10:20 is to allow machines to predict for any person and for any possible life
10:26 that they could live,
10:27 and the lives of everybody else:
10:29 Which would they prefer?
10:33 And there are many, many difficulties involved in doing this;
10:36 I don't expect that this is going to get solved very quickly.
10:39 The real difficulties, in fact, are us.
10:43 As I have already mentioned, we behave badly.
10:47 In fact, some of us are downright nasty.
10:50 Now the robot, as I said, doesn't have to copy the behavior.
10:53 The robot does not have any objective of its own.
10:56 It's purely altruistic.
10:59 And it's not designed just to satisfy the desires of one person, the user,
11:04 but in fact it has to respect the preferences of everybody.
11:09 So it can deal with a certain amount of nastiness,
11:11 and it can even understand that your nastiness, for example,
11:15 you may take bribes as a passport official
11:18 because you need to feed your family and send your kids to school.
11:21 It can understand that; it doesn't mean it's going to steal.
11:24 In fact, it'll just help you send your kids to school.
11:28 We are also computationally limited.
11:31 Lee Sedol is a brilliant Go player,
11:34 but he still lost.
11:35 So if we look at his actions, he took an action that lost the game.
11:39 That doesn't mean he wanted to lose.
11:43 So to understand his behavior,
11:45 we actually have to invert through a model of human cognition
11:48 that includes our computational limitations -- a very complicated model.
11:53 But it's still something that we can work on understanding.
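One simple way to picture such an inversion, with invented numbers: model the limited player as choosing moves with softmax ("noisy-rational") probability in their true values, so a single losing move is weak evidence about what they actually wanted.

```python
# Toy sketch (invented numbers): a computationally limited human is modeled as
# picking moves with softmax probability in their true values, so observing one
# losing move does not imply the player wanted to lose.
import math

def softmax_choice_probs(move_values, rationality):
    """Probability of each move for a noisy-rational player."""
    exps = {m: math.exp(rationality * v) for m, v in move_values.items()}
    z = sum(exps.values())
    return {m: e / z for m, e in exps.items()}

# Hypothetical position: one winning move, one losing blunder.
values_if_trying_to_win = {"strong_move": +1.0, "blunder": -1.0}

for beta in (0.5, 2.0, 10.0):  # higher beta = closer to a perfect player
    probs = softmax_choice_probs(values_if_trying_to_win, beta)
    print(f"rationality {beta:>4}: P(blunder while trying to win) = {probs['blunder']:.3f}")
# Even a fairly strong player blunders sometimes, so "he lost, therefore he
# wanted to lose" is the wrong inference.
```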
11:57 Probably the most difficult part, from my point of view as an AI researcher,
12:02 is the fact that there are lots of us,
12:06 and so the machine has to somehow trade off, weigh up the preferences
12:09 of many different people,
12:11 and there are different ways to do that.
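Two of those "different ways", sketched with invented preferences: a plain sum of utilities and a protect-the-worst-off (maximin) rule can rank the same options differently, which is exactly why the choice of aggregation matters.

```python
# Toy sketch (invented numbers): two standard ways a machine might weigh up the
# preferences of several people -- total utility vs. protecting the worst-off --
# can disagree about which plan to pick.
PREFERENCES = {           # utility of each option for each (hypothetical) person
    "alice": {"plan_a": 5.0, "plan_b": 2.0},
    "bob":   {"plan_a": 5.0, "plan_b": 2.0},
    "carol": {"plan_a": -2.0, "plan_b": 2.0},   # plan_a is actively bad for Carol
}
OPTIONS = ["plan_a", "plan_b"]

def utilitarian(option):
    """Sum of everyone's utility for this option."""
    return sum(person[option] for person in PREFERENCES.values())

def maximin(option):
    """Utility of the worst-off person for this option."""
    return min(person[option] for person in PREFERENCES.values())

print("sum of utilities picks:", max(OPTIONS, key=utilitarian))  # plan_a (8 > 6)
print("protect worst-off picks:", max(OPTIONS, key=maximin))     # plan_b (2 > -2)
```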
12:13 Economists, sociologists, moral philosophers have understood that,
12:17 and we are actively looking for collaboration.
12:20 Let's have a look and see what happens when you get that wrong.
12:23 So you can have a conversation, for example,
12:25 with your intelligent personal assistant
12:27 that might be available in a few years' time.
12:29 Think of a Siri on steroids.
12:33 So Siri says, "Your wife called to remind you about dinner tonight."
12:38 And of course, you've forgotten. "What? What dinner?
12:40 What are you talking about?"
12:42 "Uh, your 20th anniversary at 7pm."
12:48 "I can't do that. I'm meeting with the secretary-general at 7:30.
12:52 How could this have happened?"
12:54 "Well, I did warn you, but you overrode my recommendation."
12:59 "Well, what am I going to do? I can't just tell him I'm too busy."
13:04 "Don't worry. I arranged for his plane to be delayed."
13:07 (Laughter)
13:10 "Some kind of computer malfunction."
13:12 (Laughter)
13:13 "Really? You can do that?"
13:16 "He sends his profound apologies
13:18 and looks forward to meeting you for lunch tomorrow."
13:21 (Laughter)
13:22 So the values here -- there's a slight mistake going on.
13:26 This is clearly following my wife's values
13:29 which is "Happy wife, happy life."
13:31 (Laughter)
13:33 It could go the other way.
13:35 You could come home after a hard day's work,
13:37 and the computer says, "Long day?"
13:40 "Yes, I didn't even have time for lunch."
13:42 "You must be very hungry."
13:43 "Starving, yeah. Could you make some dinner?"
13:47 "There's something I need to tell you."
13:50 (Laughter)
13:52 "There are humans in South Sudan who are in more urgent need than you."
13:56 (Laughter)
13:58 "So I'm leaving. Make your own dinner."
14:00 (Laughter)
14:02 So we have to solve these problems,
14:04 and I'm looking forward to working on them.
14:06 There are reasons for optimism.
14:08 One reason is,
14:09 there is a massive amount of data.
14:11 Because remember -- I said they're going to read everything
14:14 the human race has ever written.
14:16 Most of what we write about is human beings doing things
14:19 and other people getting upset about it.
14:20 So there's a massive amount of data to learn from.
14:23 There's also a very strong economic incentive
14:27 to get this right.
14:28 So imagine your domestic robot's at home.
14:30 You're late from work again and the robot has to feed the kids,
14:33 and the kids are hungry and there's nothing in the fridge.
14:36 And the robot sees the cat.
14:38 (Laughter)
14:40 And the robot hasn't quite learned the human value function properly,
14:44 so it doesn't understand
14:46 the sentimental value of the cat outweighs the nutritional value of the cat.
14:51 (Laughter)
14:52 So then what happens?
14:53 Well, it happens like this:
14:57 "Deranged robot cooks kitty for family dinner."
15:00 That one incident would be the end of the domestic robot industry.
15:04 So there's a huge incentive to get this right
15:08 long before we reach superintelligent machines.
15:11 So to summarize:
15:13 I'm actually trying to change the definition of AI
15:16 so that we have provably beneficial machines.
15:19 And the principles are:
15:20 machines that are altruistic,
15:22 that want to achieve only our objectives,
15:24 but that are uncertain about what those objectives are,
15:28 and will watch all of us
15:30 to learn more about what it is that we really want.
15:34 And hopefully in the process, we will learn to be better people.
15:37 Thank you very much.
15:38 (Applause)
15:42 Chris Anderson: So interesting, Stuart.
15:44 We're going to stand here a bit because I think they're setting up
15:47 for our next speaker.
15:48 A couple of questions.
15:50 So the idea of programming in ignorance seems intuitively really powerful.
15:56 As you get to superintelligence,
15:57 what's going to stop a robot
15:59 reading literature and discovering this idea that knowledge
16:02 is actually better than ignorance
16:04 and still just shifting its own goals and rewriting that programming?
16:09 Stuart Russell: Yes, so we want it to learn more, as I said,
16:15 about our objectives.
16:17 It'll only become more certain as it becomes more correct,
16:22 so the evidence is there
16:24 and it's going to be designed to interpret it correctly.
16:27 It will understand, for example, that books are very biased
16:31 in the evidence they contain.
16:32 They only talk about kings and princes
16:35 and elite white male people doing stuff.
16:38 So it's a complicated problem,
16:40 but as it learns more about our objectives
16:44 it will become more and more useful to us.
16:46 CA: And you couldn't just boil it down to one law,
16:48 you know, hardwired in:
16:50 "if any human ever tries to switch me off,
16:53 I comply. I comply."
16:55 SR: Absolutely not.
16:57 That would be a terrible idea.
16:58 So imagine that you have a self-driving car
17:01 and you want to send your five-year-old
17:03 off to preschool.
17:04 Do you want your five-year-old to be able to switch off the car
17:08 while it's driving along?
17:09 Probably not.
17:10 So it needs to understand how rational and sensible the person is.
17:15 The more rational the person,
17:16 the more willing you are to be switched off.
17:18 If the person is completely random or even malicious,
17:21 then you're less willing to be switched off.
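Reading that remark as arithmetic, with invented numbers: extend the earlier switch-off sketch so the human only intervenes for the right reasons with some probability q; as q falls toward random or malicious behavior, deferring stops paying off.

```python
# Toy extension of the earlier switch-off sketch (invented numbers): if the
# human only switches the robot off for the right reasons with probability q,
# deference is worth less -- and for an essentially random or malicious
# operator it becomes a losing proposition.
def value_of_deferring(p_action_is_good, q_human_correct,
                       value_good=1.0, value_bad=-1.0):
    # Acting immediately, ignoring the human.
    act_now = p_action_is_good * value_good + (1 - p_action_is_good) * value_bad
    # Deferring: a correct human lets good actions through and blocks bad ones;
    # an incorrect human does the opposite.
    defer = (p_action_is_good * q_human_correct * value_good +
             (1 - p_action_is_good) * (1 - q_human_correct) * value_bad)
    return defer - act_now

for q in (1.0, 0.8, 0.5, 0.2):   # from fully sensible to essentially random
    print(f"human correct with prob {q:.1f}: "
          f"gain from allowing switch-off = {value_of_deferring(0.7, q):+.2f}")
# The more rational and sensible the person, the more the robot gains by
# letting itself be switched off; for q well below 1 the gain goes negative.
```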
17:24 CA: All right. Stuart, can I just say,
17:25 I really, really hope you figure this out for us.
17:28 Thank you so much for that talk. That was amazing.
17:30 SR: Thank you.
17:31 (Applause)