Can we build AI without losing control over it? | Sam Harris

3,798,812 views ・ 2016-10-19

TED



Translator: 潘 可儿 Reviewer: Alan Watson
00:13
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:37
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you.

00:59
And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:21
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:20
(Laughter)

02:24
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

03:10
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23
Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

05:11
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25
The second assumption is that we will keep going. We will continue to improve our intelligent machines.

05:33
And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.

06:47
So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.

06:57
(Laughter)

06:59
Sorry, a chicken.

07:00
(Laughter)

07:01
There's no reason for me to make this talk more depressing than it needs to be.

07:05
(Laughter)

07:08
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
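A rough back-of-the-envelope check of those figures, assuming a flat million-fold speed advantage and continuous operation (an estimate, not a calculation given in the talk):

\[
1\ \text{week} \times 10^{6} \;=\; 10^{6}\ \text{weeks} \;\approx\; \frac{10^{6}\times 7}{365}\ \text{years} \;\approx\; 1.9\times 10^{4}\ \text{years} \;\approx\; 20{,}000\ \text{years}.
\]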
08:08
The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.

09:02
(Laughter)

09:06
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
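The same million-fold multiplier gives that figure, again assuming the speed advantage is the only difference between the competing systems:

\[
6\ \text{months} \times 10^{6} \;=\; 6\times 10^{6}\ \text{months} \;=\; \frac{6\times 10^{6}}{12}\ \text{years} \;=\; 500{,}000\ \text{years}.
\]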
09:59
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:06
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."

10:38
(Laughter)

10:39
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely.

11:04
Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face.
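For scale, the conversion behind that chart (with approximate month counts as of this 2016 talk, not figures quoted by the speaker):

\[
50\ \text{years} \times 12\ \tfrac{\text{months}}{\text{year}} \;=\; 600\ \text{months},
\]

of which the iPhone (launched 2007) accounts for roughly 110 months and "The Simpsons" (on air since 1989) for roughly 320.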
11:31
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming. The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.

12:36
(Laughter)

12:38
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god.

14:15
Now would be a good time to make sure it's a god we can live with.

14:20
Thank you very much.

14:21
(Applause)