Translator: Lilian Chiu
Reviewer: Shelley Tsang 曾雯海
00:03
Five years ago,
00:06
I stood on the TED stage
00:08
and warned about the dangers
of superintelligence.
00:13
I was wrong.
00:16
It went even worse than I thought.
00:18
(Laughter)
00:20
I never thought governments
would let AI companies get this far
00:24
without any meaningful regulation.
00:27
And the progress of AI
went even faster than I predicted.
00:32
Look, I showed this abstract
landscape of tasks
00:36
where the elevation represented
how hard it was for AI
00:39
to do each task at human level.
00:41
And the sea level represented
what AI could do back then.
00:45
And boy oh boy, has the sea
been rising fast ever since.
00:48
But a lot of these tasks have already
gone blub blub blub blub blub blub.
00:52
And the water is on track
to submerge all land,
00:56
matching human intelligence
at all cognitive tasks.
01:00
This is a definition of artificial
general intelligence, AGI,
01:06
which is the stated goal
of companies like OpenAI,
01:10
Google DeepMind and Anthropic.
01:12
And these companies are also trying
to build superintelligence,
01:16
leaving human intelligence far behind.
01:19
And many think it'll only be a few years,
maybe, from AGI to superintelligence.
01:24
So when are we going to get AGI?
01:27
Well, until recently, most AI researchers
thought it was at least decades away.
01:33
And now Microsoft is saying,
"Oh, it's almost here."
01:36
We're seeing sparks of AGI in ChatGPT-4,
01:40
and the Metaculus betting site
is showing the time left to AGI
01:44
plummeting from 20 years away
to three years away
01:48
in the last 18 months.
01:50
And leading industry people
are now predicting
01:55
that we have maybe two or three years left
until we get outsmarted.
02:00
So you better stop talking
about AGI as a long-term risk,
02:04
or someone might call you a dinosaur
stuck in the past.
02:08
It's really remarkable
how AI has progressed recently.
02:12
Not long ago, robots moved like this.
02:15
(Music)
02:18
Now they can dance.
02:20
(Music)
02:29
Just last year, Midjourney
produced this image.
02:34
This year, the exact
same prompt produces this.
02:39
Deepfakes are getting really convincing.
02:43
(Video) Deepfake Tom Cruise:
I’m going to show you some magic.
02:46
It's the real thing.
02:48
(Laughs)
02:50
I mean ...
02:53
It's all ...
02:55
the real ...
02:57
thing.
02:58
Max Tegmark: Or is it?
03:02
And Yoshua Bengio now argues
03:05
that large language models
have mastered language
03:08
and knowledge to the point
that they pass the Turing test.
03:12
I know some skeptics are saying,
03:13
"Nah, they're just overhyped
stochastic parrots
03:16
that lack a model of the world,"
03:18
but they clearly have
a representation of the world.
03:21
In fact, we recently found that Llama-2
even has a literal map of the world in it.
03:28
And AI also builds
03:31
geometric representations
of more abstract concepts
03:35
like what it thinks is true and false.
03:40
So what's going to happen
if we get AGI and superintelligence?
03:46
If you only remember one thing
from my talk, let it be this.
03:51
AI godfather Alan Turing predicted
03:54
that the default outcome
is the machines take control.
04:00
The machines take control.
04:04
I know this sounds like science fiction,
04:06
but, you know, having AI as smart as GPT-4
04:10
also sounded like science
fiction not long ago.
04:13
And if you think of AI,
04:15
if you think of superintelligence
in particular, as just another technology,
04:21
like electricity,
04:24
you're probably not very worried.
04:26
But you see,
04:27
Turing thinks of superintelligence
more like a new species.
04:31
Think of it,
04:32
we are building creepy, super capable,
04:36
amoral psychopaths
04:37
that don't sleep and think
much faster than us,
04:40
can make copies of themselves
04:42
and have nothing human about them at all.
04:44
So what could possibly go wrong?
04:45
(Laughter)
04:47
And it's not just Turing.
04:49
OpenAI CEO Sam Altman,
who gave us ChatGPT,
04:52
recently warned that it could
be "lights out for all of us."
04:57
Anthropic CEO, Dario Amodei,
even put a number on this risk:
05:02
10-25 percent.
05:04
And it's not just them.
05:05
Human extinction from AI
went mainstream in May
05:08
when all the AGI CEOs
and who's who of AI researchers
05:13
came on and warned about it.
05:15
And last month, even the number one
of the European Union
05:18
warned about human extinction by AI.
05:21
So let me summarize
everything I've said so far
05:23
in just one slide of cat memes.
05:27
Three years ago,
05:29
people were saying it's inevitable,
superintelligence,
05:33
it'll be fine,
05:34
it's decades away.
05:35
Last year it was more like,
05:37
It's inevitable, it'll be fine.
05:40
Now it's more like,
05:42
It's inevitable.
05:44
(Laughter)
05:47
But let's take a deep breath
and try to raise our spirits
05:51
and cheer ourselves up,
05:52
because the rest of my talk
is going to be about the good news,
05:55
that it's not inevitable,
and we can absolutely do better,
05:58
alright?
06:00
(Applause)
06:02
So ...
06:04
The real problem is that we lack
a convincing plan for AI safety.
06:10
People are working hard on evals
06:14
looking for risky AI behavior,
and that's good,
06:18
but clearly not good enough.
06:20
They're basically training AI
to not say bad things
06:25
rather than not do bad things.
06:28
Moreover, evals and debugging
are really just necessary,
06:32
not sufficient, conditions for safety.
06:34
In other words,
06:36
they can prove the presence of risk,
06:39
not the absence of risk.
06:42
So let's up our game, alright?
06:44
Try to see how we can make
provably safe AI that we can control.
06:50
Guardrails try to physically limit harm.
06:55
But if your adversary is superintelligence
06:58
or a human using superintelligence
against you, right,
07:00
trying is just not enough.
07:02
You need to succeed.
07:04
Harm needs to be impossible.
07:06
So we need provably safe systems.
07:09
Provable, not in the weak sense
of convincing some judge,
07:13
but in the strong sense of there being
something that's impossible
07:16
according to the laws of physics.
07:17
Because no matter how smart an AI is,
07:19
it can't violate the laws of physics
and do what's provably impossible.
07:24
Steve Omohundro and I
wrote a paper about this,
07:27
and we're optimistic
that this vision can really work.
07:32
So let me tell you a little bit about how.
07:34
There's a venerable field
called formal verification,
07:39
which proves stuff about code.
07:41
And I'm optimistic that AI will
revolutionize the automatic proving business
07:48
and also revolutionize program synthesis,
07:51
the ability to automatically
write really good code.
07:54
So here is how our vision works.
07:56
You, the human, write a specification
08:00
that your AI tool must obey,
08:03
that it's impossible to log in
to your laptop
08:05
without the correct password,
08:07
or that a DNA printer
cannot synthesize dangerous viruses.
08:13
Then a very powerful AI
creates both your AI tool
08:18
and a proof that your tool
meets your spec.
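To make this workflow concrete, here is a minimal Python sketch of the spec-tool-proof pattern (all names are illustrative, and the "proof" here is exhaustive checking over a tiny finite domain, a stand-in for the machine-checked proof objects a real system would produce):

```python
def spec(a: int, b: int, result: int) -> bool:
    """The human-written specification the tool must obey."""
    return result == a + b

def candidate_tool(a: int, b: int) -> int:
    """The AI-generated tool, whose inner workings we may not understand."""
    return (a ^ b) + ((a & b) << 1)  # a bitwise adder

def check_proof(tool, specification) -> bool:
    """Gatekeeper: on a 4-bit domain the spec can be checked exhaustively.
    Unbounded domains need a real proof object and a proof checker."""
    return all(specification(a, b, tool(a, b))
               for a in range(16) for b in range(16))

if check_proof(candidate_tool, spec):
    print("Tool verified against spec; safe to deploy.")
```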
08:22
Machine learning is uniquely good
at learning algorithms,
08:26
but once the algorithm has been learned,
08:29
you can re-implement it in a different
computational architecture
08:32
that's easier to verify.
08:35
Now you might worry,
08:36
how on earth am I going
to understand this powerful AI
08:40
and the powerful AI tool it built
08:42
and the proof,
08:43
if they're all too complicated
for any human to grasp?
08:46
Here is the really great news.
08:48
You don't have to understand
any of that stuff,
08:50
because it's much easier to verify
a proof than to discover it.
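A toy illustration of that asymmetry (an added example, not from the talk): a factor of a number is a short certificate that the number is composite, and checking the certificate takes one modulo operation even when discovering it would take an enormous search.

```python
n = 1_000_003 * 1_000_033  # imagine being handed only n plus a claimed factor
certificate = 1_000_003    # the hard-to-discover "proof"

def check(n: int, p: int) -> bool:
    return 1 < p < n and n % p == 0  # verifying the proof is one cheap step

print(check(n, certificate))  # True, though finding p meant factoring n
```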
08:56
So you only have to understand
or trust your proof-checking code,
09:01
which could be just
a few hundred lines long.
09:03
And Steve and I envision
09:05
that such proof checkers get built
into all our compute hardware,
09:10
so it just becomes impossible
to run very unsafe code.
09:14
What if the AI, though,
isn't able to write that AI tool for you?
09:20
Then there's another possibility.
09:23
You train an AI to first just learn
to do what you want
09:27
and then you use a different AI
09:30
to extract out the learned algorithm
and knowledge for you,
09:34
like an AI neuroscientist.
09:37
This is in the spirit of the field
of mechanistic interpretability,
09:41
which is making really
impressive rapid progress.
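As a cartoon version of that extraction step (names hypothetical, and far simpler than real interpretability work): probe a black-box function standing in for a trained network, tabulate its behavior, and emit readable code that can then be verified.

```python
def black_box(a: int, b: int) -> int:
    """Stand-in for a trained network defined by opaque weights."""
    return a ^ b

# The "AI neuroscientist": tabulate the black box, then distill it into code.
table = {(a, b): black_box(a, b) for a in (0, 1) for b in (0, 1)}
body = "\n".join(f"    if (a, b) == {k}: return {v}" for k, v in table.items())
print("def distilled(a, b):\n" + body)  # human-readable, hence checkable
```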
09:44
Provably safe systems
are clearly not impossible.
09:47
Let's look at a simple example
09:49
of where we first machine-learn
an algorithm from data
09:53
and then distill it out
in the form of code
09:58
that provably meets spec, OK?
10:00
Let’s do it with an algorithm
that you probably learned in first grade,
10:05
addition,
10:07
where you loop over the digits
from right to left,
10:09
and sometimes you do a carry.
10:11
We'll do it in binary,
10:13
as if you were counting
on two fingers instead of ten.
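Distilled into code, that schoolbook algorithm looks roughly like this sketch (the function name is illustrative):

```python
def add_binary(a: str, b: str) -> str:
    """Binary addition: loop over the digits right to left, carrying as needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for i in range(width - 1, -1, -1):  # right to left
        total = int(a[i]) + int(b[i]) + carry
        digits.append(str(total % 2))   # this position's digit
        carry = total // 2              # and sometimes you do a carry
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(add_binary("1011", "110"))  # 11 + 6 = 17 -> '10001'
```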
10:16
And we first train a recurrent
neural network,
10:19
never mind the details,
10:21
to nail the task.
用來完成任務。
10:23
So now you have this algorithm
that you don't understand how it works
10:27
in a black box
10:29
defined by a bunch of tables
of numbers that we, in nerd speak,
10:34
call parameters.
10:35
Then we use an AI tool we built
to automatically distill out from this
10:41
the learned algorithm
in the form of a Python program.
10:44
And then we use the formal
verification tool known as Dafny
10:49
to prove that this program
correctly adds up any numbers,
10:54
not just the numbers
that were in your training data.
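The property being proved is the one sketched below in Python against the add_binary function above; ordinary tests can only sample it, which is exactly the gap a machine-checked Dafny proof closes by covering every input.

```python
# Spec: for every pair of numbers, the distilled program agrees with true addition.
for a in range(256):
    for b in range(256):
        assert int(add_binary(bin(a)[2:], bin(b)[2:]), 2) == a + b
print("Spec holds on all 8-bit pairs; the Dafny proof covers every width.")
```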
10:57
So in summary,
10:59
provably safe AI,
I'm convinced is possible,
11:03
but it's going to take time and work.
11:06
And in the meantime,
11:07
let's remember that all the AI benefits
11:11
that most people are excited about
11:15
actually don't require superintelligence.
11:18
We can have a long
and amazing future with AI.
11:25
So let's not pause AI.
11:28
Let's just pause the reckless
race to superintelligence.
11:32
Let's stop obsessively training
ever-larger models
11:37
that we don't understand.
11:39
Let's heed the warning from ancient Greece
11:42
and not get hubris,
like in the story of Icarus.
11:46
Because artificial intelligence
11:49
is giving us incredible intellectual wings
11:53
with which we can do things
beyond our wildest dreams
11:58
if we stop obsessively
trying to fly to the sun.
12:02
Thank you.
12:03
(Applause)