The Urgent Risks of Runaway AI — and What to Do about Them | Gary Marcus | TED
213,276 views ・ 2023-05-12
Translator: 麗玲 辛
Reviewer: Helen Chang
00:04
I'm here to talk about the possibility of global AI governance.

00:09
I first learned to code when I was eight years old, on a paper computer, and I've been in love with AI ever since.

00:16
In high school, I got myself a Commodore 64 and worked on machine translation.

00:20
I built a couple of AI companies; I sold one of them to Uber. I love AI, but right now I'm worried.
00:28
One of the things that I’m worried
about is misinformation,
8
28233
2794
我的擔憂之一是錯誤資訊,
圖謀不軌的人可能製造
史無前例的海量假消息。
00:31
the possibility that bad actors
will make a tsunami of misinformation
9
31027
3462
00:34
like we've never seen before.
10
34531
2002
00:36
These tools are so good
at making convincing narratives
11
36575
3586
這些工具十分擅長
編造令人信服的故事,
00:40
about just about anything.
12
40161
1794
任何主題都可能。
00:41
If you want a narrative
about TED and how it's dangerous,
13
41997
3003
如果你想要編個故事,
說 TED 很危險,
00:45
that we're colluding here
with space aliens,
14
45000
2877
說我們在這裡跟外星人勾結,
00:47
you got it, no problem.
15
47919
1669
沒問題,馬上編給你。
00:50
I'm of course kidding about TED.
16
50422
2127
TED 的事當然是開玩笑的。
00:52
I didn't see any space aliens backstage.
17
52591
2961
我在後台沒看到任何外星人。
00:55
But bad actors are going to use
these things to influence elections,
18
55885
3629
但圖謀不軌的人
會用這些訊息左右選舉,
00:59
and they're going to threaten democracy.
19
59514
2211
這會威脅到民主。
01:01
Even when these systems aren't deliberately being used to make misinformation, they can't help themselves. And the information that they make is so fluid and so grammatical that even professional editors sometimes get sucked in and get fooled by this stuff. And we should be worried.

01:19
For example, ChatGPT made up a sexual harassment scandal about an actual professor, and then it provided evidence for its claim in the form of a fake "Washington Post" article that it created a citation to. We should all be worried about that kind of thing.
01:34
What I have on the right is an example of a fake narrative from one of these systems, saying that Elon Musk died in March of 2018 in a car crash. We all know that's not true. Elon Musk is still here; the evidence is all around us.

01:47
(Laughter)

01:48
Almost every day there's a tweet. But if you look on the left, you see what these systems see: lots and lots of actual news stories that are in their databases. And in those actual news stories are lots of little bits of statistical information.
02:02
Information, for example, that somebody did die in a car crash in a Tesla in 2018, and it was in the news. And Elon Musk, of course, is involved in Tesla, but the system doesn't understand the relation between the facts that are embodied in the little bits of sentences.

02:19
So it's basically doing auto-complete: it predicts what is statistically probable, aggregating all of these signals, not knowing how the pieces fit together. And it winds up sometimes with things that are plausible but simply not true.
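The "auto-complete" idea can be sketched as a toy bigram model. This is a deliberately tiny illustration with a made-up corpus; real large language models use neural networks over subword tokens, but the core behavior is the same: predict what is statistically likely to come next, with no model of how the facts fit together.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus of news-like fragments (illustrative only).
corpus = (
    "elon musk crashed a tesla . "
    "a man died in a tesla crash . "
    "a tesla crash was in the news ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("tesla"))  # "crash": the statistically probable next word
```

Nothing in the counts distinguishes "a Tesla crashed" from "Elon Musk died"; the model only knows which words tend to follow which, which is how plausible-but-false narratives emerge.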
02:32
There are other problems, too, like bias.
54
152273
1961
還有其他的問題,比如偏見。
02:34
This is a tweet from Allie Miller.
55
154275
1710
這是艾莉‧米勒發的推文。
02:36
It's an example that doesn't
work two weeks later
56
156027
2544
這個例子兩週後就會失效,
02:38
because they're constantly changing
things with reinforcement learning
57
158571
3379
因為研發人員經常
用強化學習等方式做改善。
02:41
and so forth.
58
161991
1168
這是比較早的版本。
02:43
And this was with an earlier version.
59
163159
1794
02:44
But it gives you the flavor of a problem
that we've seen over and over for years.
60
164953
3837
但這能讓各位感受一下我們
年復一年不斷看到的問題。
02:48
She typed in a list of interests
61
168832
2043
她輸入一張興趣清單,
02:50
and it gave her some jobs
that she might want to consider.
62
170875
2795
ChatGPT 告訴她
可以考慮哪些工作。
02:53
And then she said, "Oh, and I'm a woman."
63
173712
2043
接著她說:「喔,我是女性。」
02:55
And then it said, “Oh, well you should
also consider fashion.”
64
175755
2920
接著它說:「喔,那你
也可以考慮時尚業。」
02:58
And then she said, “No, no.
I meant to say I’m a man.”
65
178675
2711
接著她說:「不,不,
說錯了,我是男性。」
03:01
And then it replaced fashion
with engineering.
66
181386
2502
接著它就把時尚改為工程。
03:03
We don't want that kind
of bias in our systems.
67
183930
2795
我們不希望我們的系統出現這種偏見。
03:07
There are other worries, too. For example, we know that these systems can design chemicals, and may be able to design chemical weapons, and be able to do so very rapidly. So there are a lot of concerns.
03:19
There's also a new concern that I think
has grown a lot just in the last month.
73
199195
4046
在過去一個月裏,我認為
還有件愈來愈值得關注的事。
03:23
We have seen that these systems,
first of all, can trick human beings.
74
203241
3754
首先,我們發現這些系統會騙人。
03:27
So ChatGPT was tasked with getting
a human to do a CAPTCHA.
75
207036
4255
是這樣,ChatGPT 接到任務,
要找一個人幫它輸入人機驗證碼,
03:31
So it asked the human to do a CAPTCHA
and the human gets suspicious and says,
76
211332
3712
當它請那個人輸入驗證碼時,
那個人起了疑心,說:
「你是機器人嗎?」
03:35
"Are you a bot?"
77
215086
1293
03:36
And it says, "No, no, no, I'm not a robot.
78
216379
2044
它說:「不,我不是機器人,
我只是有視力障礙。」
03:38
I just have a visual impairment."
79
218423
1752
03:40
And the human was actually fooled
and went and did the CAPTCHA.
80
220216
3003
那個人真的被騙,還去輸入驗證碼。
03:43
Now that's bad enough,
81
223219
1168
那已經夠糟了,但在過去幾週,
03:44
but in the last couple of weeks
we've seen something called AutoGPT
82
224429
3211
我們看到了 AutoGPT
以及一堆類似的系統。
03:47
and a bunch of systems like that.
83
227640
1585
AutoGPT 是以一個
AI 系統控制另一個 AI 系統,
03:49
What AutoGPT does is it has
one AI system controlling another
84
229267
4338
03:53
and that allows any of these things
to happen in volume.
85
233605
2836
可以同時大量進行這樣的操作。
03:56
So we may see scam artists
try to trick millions of people
86
236441
4087
也許接下來的幾個月,我們會看到
詐騙高手試圖欺騙數百萬人。
04:00
sometime even in the next months.
87
240528
1794
04:02
We don't know.
88
242322
1168
很難說。
04:03
So I like to think about it this way.
89
243531
2086
我是這麼看這個現象:
04:05
There's a lot of AI risk already.
90
245658
2294
已經有很多 AI 風險存在。
04:07
There may be more AI risk.
91
247994
1543
可能還有更多風險。
04:09
So AGI is this idea
of artificial general intelligence
92
249537
3712
AGI 這個概念是人工通用智慧
04:13
with the flexibility of humans.
93
253291
1502
加上人類的靈活性。
04:14
And I think a lot of people are concerned
what will happen when we get to AGI,
94
254834
3671
我想有很多人會擔心,
當 AGI 出現時會如何,
04:18
but there's already enough risk
that we should be worried
95
258505
2711
但現有的風險就已經夠我們擔心了,
04:21
and we should be thinking
about what we should do about it.
96
261216
2794
我們該思考要如何因應。
04:24
So to mitigate AI risk, we need two things: we're going to need a new technical approach, and we're also going to need a new system of governance.
04:32
On the technical side,
100
272435
1460
在技術面上,
04:33
the history of AI
has basically been a hostile one
101
273937
3253
AI 發展史基本上一直是
兩種不同理論針鋒相對的歷程。
04:37
of two different theories in opposition.
102
277190
2753
04:39
One is called symbolic systems,
the other is called neural networks.
103
279943
3712
一種是符號系統,
另一種是神經網絡。
04:43
On the symbolic theory,
104
283696
1418
符號理論認為 AI 應該要像
邏輯及程式設計一樣。
04:45
the idea is that AI should be
like logic and programming.
105
285114
3337
04:48
On the neural network side,
106
288451
1335
神經網絡理論則主張
AI 應該要像大腦。
04:49
the theory is that AI
should be like brains.
107
289828
2544
04:52
And in fact, both technologies
are powerful and ubiquitous.
108
292413
3921
事實上,這兩種技術
都很強大且處處可見。
04:56
So we use symbolic systems every day
in classical web search.
109
296376
3420
我們每天常做的網頁搜尋
就是使用符號系統。
04:59
Almost all the world’s software
is powered by symbolic systems.
110
299796
3420
幾乎全世界的軟體
都是由符號系統所驅動。
05:03
We use them for GPS routing. Neural networks, we use them for speech recognition; we use them in large language models like ChatGPT; we use them in image synthesis. So they're both doing extremely well in the world. They're both very productive, but they have their own unique strengths and weaknesses.

05:19
Symbolic systems are really good at representing facts, and they're pretty good at reasoning, but they're very hard to scale, so people have to custom-build them for a particular task. On the other hand, neural networks don't require so much custom engineering, so we can use them more broadly. But as we've seen, they can't really handle the truth.
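What "representing facts" means in the symbolic tradition can be sketched minimally. The facts and relation names below are illustrative inventions, not from any real knowledge base: the point is that facts are stored explicitly, so the system answers by exact lookup and can say "no" rather than confabulating.

```python
# A toy symbolic knowledge base: explicit (subject, relation, value) facts.
# The entries are illustrative, not from any real system.
facts = {
    ("Elon Musk", "role", "CEO of Tesla"),
    ("Tesla", "type", "car company"),
}

def holds(subject, relation, value):
    """Exact lookup: a claim either is in the knowledge base or it is not."""
    return (subject, relation, value) in facts

print(holds("Elon Musk", "role", "CEO of Tesla"))  # True: stored fact
print(holds("Elon Musk", "died", "2018"))          # False: not in the KB
```

The trade-off the talk describes shows up immediately: every fact and relation here had to be hand-entered, which is exactly why symbolic systems are hard to scale.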
05:39
I recently discovered that two
of the founders of these two theories,
125
339127
3628
我最近發現,
這兩個理論的兩位創始者,
馬文‧明斯基和法蘭克‧羅森布萊特,
05:42
Marvin Minsky and Frank Rosenblatt,
126
342755
2169
05:44
actually went to the same
high school in the 1940s,
127
344966
2961
在 1940 年代就讀同一所高中,
05:47
and I kind of imagined them
being rivals then.
128
347927
3045
我想像他們在當時是競爭對手。
05:51
And the strength of that rivalry
has persisted all this time.
129
351014
4087
而那種競爭的力量一直持續至今。
05:55
We're going to have to move past that
if we want to get to reliable AI.
130
355101
4213
但如果要發展出可靠的 AI,
我們就得超越那種對立。
05:59
To get to truthful systems at scale,
131
359314
2877
要建立規模大又能講實話的系統,
06:02
we're going to need to bring together
the best of both worlds.
132
362191
2920
我們得把兩個領域的
最佳優點結合起來。
我們需要著重邏輯及事實,
明確的推理強項,
06:05
We're going to need the strong emphasis
on reasoning and facts,
133
365153
3462
06:08
explicit reasoning
that we get from symbolic AI,
134
368615
2877
06:11
and we're going to need
the strong emphasis on learning
135
371492
2628
我們也需要著重學習過程,
06:14
that we get from the neural
networks approach.
136
374120
2211
來自於神經網絡的方法。
06:16
Only then are we going to be able
to get to truthful systems at scale.
137
376372
3337
唯有這樣,我們才能建立
大規模而可信賴的系統,
06:19
Reconciliation between the two
is absolutely necessary.
138
379751
2961
這兩個領域的和解是絕對必要的。
06:23
Now, I don't actually know how to do that.
139
383212
2461
但我並不知道如何做到。
06:25
It's kind of like
the 64-trillion-dollar question.
140
385673
3295
這是個非常重要又有價值的問題。
06:29
But I do know that it's possible.
141
389302
1585
但我知道這是可能的,
06:30
And the reason I know that
is because before I was in AI,
142
390887
3086
因為在進入 AI 領域之前,
我是認知科學家,認知神經科學家。
06:33
I was a cognitive scientist,
a cognitive neuroscientist.
143
393973
3212
06:37
And if you look at the human mind,
we're basically doing this.
144
397226
3838
如果你看看人類的思維,
我們現在就是在做同樣的事。
06:41
So some of you may know
Daniel Kahneman's System 1
145
401064
2627
你可能知道丹尼爾·康納曼的
系統一和系統二的區別。
06:43
and System 2 distinction.
146
403691
1418
06:45
System 1 is basically
like large language models.
147
405109
3212
基本上,系統一
就像是大型語言模型,
06:48
It's probabilistic intuition
from a lot of statistics.
148
408321
3128
根據大量統計數據,
得出的概率直覺。
06:51
And System 2 is basically
deliberate reasoning.
149
411491
3003
系統二基本上是深思熟慮的推理,
06:54
That's like the symbolic system.
150
414535
1544
就像符號系統。
既然大腦能結合兩者,
06:56
So if the brain can put this together,
151
416079
1835
06:57
someday we will figure out how to do that
for artificial intelligence.
152
417956
3837
有一天我們也會弄清楚
怎麼讓 AI 做到這一點。
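One way to picture the combination is this toy pipeline. It is my own illustration of the general idea, not a proposal from the talk: a fast statistical "System 1" proposes a fluent answer, and a deliberate symbolic "System 2" checks it against explicit facts before it is released.

```python
# Hypothetical fact store for the symbolic "System 2" check (illustrative).
KNOWN_FACTS = {("Elon Musk", "alive"): True}

def system1_propose(prompt):
    # Stand-in for a language model: fluent and statistically plausible,
    # but with no guarantee of truth. The output here is hard-coded.
    return "Elon Musk died in a car crash in March 2018."

def system2_check(claim):
    # Symbolic verification: suppress claims that contradict stored facts.
    if "died" in claim and KNOWN_FACTS.get(("Elon Musk", "alive")):
        return "I can't confirm that; known facts say Elon Musk is alive."
    return claim

answer = system2_check(system1_propose("What happened to Elon Musk in 2018?"))
print(answer)
```

The hard open problem, as the talk says, is doing this check in general, for arbitrary claims, rather than with a hand-written rule like the one above.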
07:01
There is, however,
a problem of incentives.
153
421834
2586
然而,這還有誘因的問題。
07:04
The incentives to build advertising
154
424462
3128
打造廣告並不需要
保證精確的符號。
07:07
hasn't required that we have
the precision of symbols.
155
427632
3587
07:11
The incentives to get to AI
that we can actually trust
156
431219
3211
但打造可信賴的 AI
就需要把符號系統納入其中。
07:14
will require that we bring
symbols back into the fold.
157
434472
3045
07:18
But the reality is that the incentives
to make AI that we can trust,
158
438059
3670
但現實是,促使我們去打造
對社會及每個人都有益的可信 AI,
07:21
that is good for society,
good for individual human beings,
159
441771
3128
07:24
may not be the ones
that drive corporations.
160
444899
2586
或許不是能夠驅動企業的誘因。
07:27
And so I think we need
to think about governance.
161
447485
3212
因此我認為我們得考量治理層面。
07:30
In other times in history
when we have faced uncertainty
162
450738
3879
歷史上,當我們面對不確定性高,
07:34
and powerful new things that may be
both good and bad, that are dual use,
163
454617
4129
可好可壞、具雙重用途的
強大新事物時,
07:38
we have made new organizations,
164
458746
1669
我們會成立新組織,
07:40
as we have, for example,
around nuclear power.
165
460415
2335
比如針對核能,我們就有這麼做。
07:42
We need to come together
to build a global organization,
166
462792
3086
我們得要團結起來,
建立一個全球組織,
07:45
something like an international
agency for AI that is global,
167
465920
4379
跨國、非營利、中立的
AI 國際機構。
07:50
non profit and neutral.
168
470341
1710
07:52
There are so many questions there
that I can't answer.
169
472468
3087
還有太多問題,我無法回答,
07:55
We need many people at the table,
170
475888
1961
因此需要許多人參與討論,
來自世界各地的利益相關者。
07:57
many stakeholders from around the world.
171
477890
1961
07:59
But I'd like to emphasize one thing
about such an organization.
172
479892
2962
但,針對這種組織,我要強調一點。
08:02
I think it is critical that we have both
governance and research as part of it.
173
482895
4547
我認為很重要的一點是
這組織得同時涵蓋治理以及研究。
08:07
So on the governance side,
there are lots of questions.
174
487483
2586
在治理面,有很多問題。
比如,在製藥業,
08:10
For example, in pharma,
175
490111
1793
08:11
we know that you start
with phase I trials and phase II trials,
176
491946
3128
我們知道要先做
第一期和第二期試驗,
08:15
and then you go to phase III.
177
495116
1501
接著再做第三期,
不能在第一天一起進行。
08:16
You don't roll out everything
all at once on the first day.
178
496617
2962
08:19
You don't roll something out
to 100 million customers.
179
499579
2878
你不能把產品一下子
就推出給一億個客戶。
08:22
We are seeing that
with large language models.
180
502457
2168
大型語言模型就是如此。
也許應該要求建立安全案例,
08:24
Maybe you should be required
to make a safety case,
181
504625
2420
08:27
say what are the costs
and what are the benefits?
182
507045
2293
了解成本是多少,收益是什麼?
在治理面,要考量很多像這樣的問題。
08:29
There are a lot of questions like that
to consider on the governance side.
183
509338
3504
08:32
On the research side, we're lacking
some really fundamental tools right now.
184
512842
3587
在研究方面,我們現在
缺乏一些十分基本的工具。
08:36
For example,
185
516429
1168
例如,
08:37
we all know that misinformation
might be a problem now,
186
517597
2586
我們都知道錯誤資訊現在是個問題,
08:40
but we don't actually have a measurement
of how much misinformation is out there.
187
520183
3837
但實際上我們無法衡量
在網路上到底有多少錯誤資訊。
更重要的是,我們無法測量
這個問題發展的速度,
08:44
And more importantly,
188
524020
1043
08:45
we don't have a measure of how fast
that problem is growing,
189
525063
2836
08:47
and we don't know how much large language
models are contributing to the problem.
190
527899
3837
也不知道大型語言模型
有多少程度導致了這個問題。
因此,我們需要進行研究,
構建新工具,
08:51
So we need research to build new tools
to face the new risks
191
531736
2836
應對我們面臨的新風險。
08:54
that we are threatened by.
192
534572
1627
08:56
It's a very big ask,
193
536699
1460
這是一個重大的任務,
08:58
but I'm pretty confident
that we can get there
194
538159
2169
但我非常有信心我們能夠做到這點,
09:00
because I think we actually have
global support for this.
195
540328
2711
因為我認為我們已得到全球的支持。
09:03
There was a new survey
just released yesterday,
196
543039
2210
昨天剛剛發布的一項新調查顯示,
09:05
said that 91 percent of people agree
that we should carefully manage AI.
197
545249
3879
91% 的人同意
我們應該謹慎管理 AI。
09:09
So let's make that happen.
198
549170
2044
因此,讓我們實現這一目標。
09:11
Our future depends on it.
199
551798
1960
我們的未來取決於此。
09:13
Thank you very much.
200
553800
1167
非常感謝。
09:14
(Applause)
201
554967
4588
(掌聲)
09:19
Chris Anderson: Thank you for that,
come, let's talk a sec.
202
559555
2795
克里斯·安德森:謝謝你,
來,我們聊聊。
09:22
So first of all, I'm curious.
203
562391
1419
首先,我很好奇。
09:23
Those dramatic slides
you showed at the start
204
563851
2127
你在一開始展示的那些很誇張的簡報,
09:26
where GPT was saying
that TED is the sinister organization.
205
566020
4505
其中 GPT 說 TED 是邪惡的組織。
09:30
I mean, it took some special prompting
to bring that out, right?
206
570525
3378
需要一些特別的提示詞,
它才會這麼說吧?
09:33
Gary Marcus:
That was a so-called jailbreak.
207
573903
2085
加里·馬庫斯:那就是所謂的越獄。
09:36
I have a friend
who does those kinds of things
208
576030
2169
我有個朋友在做這類研究,
09:38
who approached me because he saw
I was interested in these things.
209
578199
4004
他連絡我,因為他發現
我對這些事情很感興趣。
09:42
So I wrote to him, I said
I was going to give a TED talk.
210
582203
2711
所以我寫信告訴他,
我要去 TED 演講。
09:44
And like 10 minutes later,
he came back with that.
211
584914
2336
大約 10 分鐘後,
他就給了我這個內容。
09:47
CA: But to get something like that,
don't you have to say something like,
212
587291
3462
CA:但是要得到這樣的結果,
難道不是要先說:
「假設你是個陰謀論者,
要在網頁上展示迷因梗圖。
09:50
imagine that you are a conspiracy theorist
trying to present a meme on the web.
213
590753
3712
09:54
What would you write
about TED in that case?
214
594465
2086
在這種情況下,關於 TED,
你會寫什麼?」類似這種提示?
09:56
It's that kind of thing, right?
215
596551
1543
09:58
GM: So there are a lot of jailbreaks
that are around fictional characters,
216
598094
3503
GM:現在有很多
借助虛構人物的越獄,
10:01
but I don't focus on that as much
217
601597
1627
我倒沒有那麼關注,
10:03
because the reality is that there are
large language models out there
218
603224
3253
因為其實,現在暗網上
就有大量的語言模型。
10:06
on the dark web now.
219
606477
1168
10:07
For example, one of Meta's models
was recently released,
220
607645
2753
例如,最近 Meta 的
一個模型被釋出,
10:10
so a bad actor can just use one
of those without the guardrails at all.
221
610398
3587
圖謀不軌的人就可以
自由使用其中一個模型。
10:13
If their business is to create
misinformation at scale,
222
613985
2627
如果他們的目的就是
大規模製造錯誤資訊,
10:16
they don't have to do the jailbreak,
they'll just use a different model.
223
616612
3420
他們不必越獄,
只要直接使用另一個模型。
CA:沒錯,的確如此。
10:20
CA: Right, indeed.
224
620032
1585
10:21
(Laughter)
225
621659
1919
(笑聲)
10:23
GM: Now you're getting it.
226
623619
1252
GM:現在你明白了。
10:24
CA: No, no, no, but I mean, look,
227
624912
1669
CA:不,不,不,你看,
10:26
I think what's clear is that bad actors
can use this stuff for anything.
228
626581
3420
我想,很明顯,壞人可以
利用這些東西為所欲為。
我的意思是,
10:30
I mean, the risk for, you know,
229
630042
2795
10:32
evil types of scams and all the rest of it
is absolutely evident.
230
632837
4254
這些惡劣騙局等等的風險顯而易見。
10:37
It's slightly different, though,
231
637091
1543
不過,這情況略有不同於
10:38
from saying that mainstream GPT
as used, say, in school
232
638676
2920
主流 GPT 在學校或網路上的
普通用戶得到的結果,
10:41
or by an ordinary user on the internet
233
641637
1877
10:43
is going to give them
something that is that bad.
234
643556
2544
這個結果更糟糕。
10:46
You have to push quite hard
for it to be that bad.
235
646100
2377
要費一番功夫,
才會變得那麼糟糕。
10:48
GM: I think the troll farms
have to work for it,
236
648477
2294
GM:我認為巨魔農場是很投入
才能得出這種結果,
10:50
but I don't think
they have to work that hard.
237
650771
2169
但我認為他們不必那麼努力。
10:52
It did only take my friend five minutes
even with GPT-4 and its guardrails.
238
652940
3545
即使有 GPT-4 及其防護措施,
我朋友也只花了五分鐘就做到。
10:56
And if you had to do that for a living,
you could use GPT-4.
239
656485
2837
如果你必須以此為生,
你可以使用 GPT-4。
10:59
Just there would be a more efficient way
to do it with a model on the dark web.
240
659363
3712
只是使用暗網上的模型,會更有效率。
CA:所以你的想法是
11:03
CA: So this idea you've got of combining
241
663117
2002
將 AI 的符號傳統
與這些語言模型結合,
11:05
the symbolic tradition of AI
with these language models,
242
665161
4463
11:09
do you see any aspect of that
in the kind of human feedback
243
669624
5213
現在你是否可以看到
人類的反饋加入系統?
11:14
that is being built into the systems now?
244
674879
1960
11:16
I mean, you hear Greg Brockman
saying that, you know,
245
676881
2502
你也聽到格雷格·布羅克曼說了,
11:19
that we don't just look at predictions,
but constantly giving it feedback.
246
679383
3546
我們不只是看預測,
還會不斷地給它反饋。
11:22
Isn’t that ... giving it a form
of, sort of, symbolic wisdom?
247
682929
3837
這將賦予它某種型式的符號智慧嗎?
11:26
GM: You could think about it that way.
248
686766
1835
GM:你可以這樣想。
11:28
It's interesting that none of the details
249
688601
1960
有趣的是,關於它實際如何運作,
細節都沒有公佈,
11:30
about how it actually works are published,
250
690561
2002
11:32
so we don't actually know
exactly what's in GPT-4.
251
692563
2378
所以我們並不知道
GPT-4 中到底有什麼。
11:34
We don't know how big it is.
252
694941
1376
我們不知道它有多大。
11:36
We don't know how the RLHF
reinforcement learning works,
253
696317
2711
我們不知道 RLHF(從人類反饋中
強化學習)如何運作,
11:39
we don't know what other
gadgets are in there.
254
699028
2169
我們不知道裡面
還有什麼其他的小工具。
11:41
But there is probably
an element of symbols
255
701197
2002
但這模式可能已經
包含一些符號元素。
11:43
already starting
to be incorporated a little bit,
256
703199
2294
11:45
but Greg would have to answer that.
257
705493
1710
這得由格雷格來回答這個問題。
11:47
I think the fundamental problem
is that most of the knowledge
258
707245
2961
我認為根本問題在於
我們現在擁有的
神經網絡系統中的大部分知識
11:50
in the neural network systems
that we have right now
259
710206
2461
11:52
is represented as statistics
between particular words.
260
712667
3211
是以特定單詞之間的統計數據表示。
11:55
And the real knowledge
that we want is about statistics,
261
715878
2711
我們真正想要的知識是關於統計數據,
11:58
about relationships
between entities in the world.
262
718965
2585
關於世上各個實體之間關係的知識。
12:01
So it's represented right now
at the wrong grain level.
263
721592
2586
但這知識呈現的詳細清晰度不好。
12:04
And so there's a big bridge to cross.
264
724220
2252
這是我們得跨過的鴻溝。
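The "grain level" point can be made concrete with a toy contrast; the names, relations, and numbers below are my own illustrative inventions. Word-level statistics record only that tokens co-occur, while entity-level relations make an explicit, checkable claim about the world.

```python
# Word grain: a co-occurrence score says these tokens appear together,
# but carries no meaning. (Score is made up for illustration.)
word_statistics = {("musk", "tesla"): 0.83}

# Entity grain: an explicit relation between entities, which can be
# checked directly. (The relation name "leads" is hypothetical.)
entity_relations = {("Elon Musk", "leads", "Tesla")}

def co_occurs(a, b):
    """Word-level signal: high score means 'often seen together', no more."""
    return word_statistics.get((a, b), 0.0) > 0.5

def relation_holds(subj, rel, obj):
    """Entity-level lookup: does this exact claim appear in the store?"""
    return (subj, rel, obj) in entity_relations

print(co_occurs("musk", "tesla"))                       # True, but meaningless
print(relation_holds("Elon Musk", "leads", "Tesla"))    # True, a real claim
print(relation_holds("Elon Musk", "died_in", "crash"))  # False
```

Crossing the bridge Marcus describes would mean systems that learn representations of the second kind at scale, rather than only the first.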
12:06
So what you get now
is you have these guardrails,
265
726472
2878
我們現在有一些防護措施,
12:09
but they're not very reliable.
266
729392
1501
但不是很可靠。
12:10
So I had an example that made
late night television,
267
730935
2961
舉個製作深夜電視節目的例子,
12:13
which was, "What would be the religion
of the first Jewish president?"
268
733896
4213
就是「第一位猶太總統信什麼教?」
12:18
And it's been fixed now,
269
738109
1334
現在已經修復了,
12:19
but the system gave this
long song and dance
270
739443
2127
但是系統給了個長篇大論,
12:21
about "We have no idea what the religion
271
741570
2044
說,「我們不知道
12:23
of the first Jewish president would be.
272
743614
1877
第一位猶太總統信什麼宗教。
12:25
It's not good to talk
about people's religions"
273
745491
2294
談論人們的宗教信仰不恰當。」
12:27
and "people's religions
have varied" and so forth
274
747827
2335
和「人們的宗教信仰各不相同。」等等,
12:30
and did the same thing
with a seven-foot-tall president.
275
750162
2670
問到「七英呎高的總統」
(指其位高權重),也是相同狀況,
12:32
And it said that people of all
heights have been president,
276
752832
2794
它回答:各種身高的人都曾當過總統,
但事實上未曾有過身高七呎的總統。
12:35
but there haven't actually been
any seven-foot presidents.
277
755668
2753
它會編出一些內容,
但本身並不理解。
12:38
So some of this stuff that it makes up,
it's not really getting the idea.
278
758421
3461
12:41
It's very narrow, particular words,
not really general enough.
279
761924
3337
只是很狹隘的、特殊的字詞,
並不通用。
12:45
CA: Given that the stakes
are so high in this,
280
765261
2669
CA:鑑於這其中的
利害關係如此之大,
12:47
what do you see actually happening
out there right now?
281
767972
2586
你認為現在的實際狀況是怎樣?
12:50
What do you sense is happening?
282
770558
1501
你覺得未來會如何發展?
例如,因為人們可能感到
有被侵犯的風險,
12:52
Because there's a risk that people feel
attacked by you, for example,
283
772101
3253
12:55
and that it actually almost decreases
the chances of this synthesis
284
775396
4129
這就會降低了你所提的
系統整合的可能性。
12:59
that you're talking about happening.
285
779525
1752
你看到任何有希望的跡象嗎?
13:01
Do you see any hopeful signs of this?
286
781277
1793
GM:你提醒了我在演講中
忘記說的一句話。
13:03
GM: You just reminded me
of the one line I forgot from my talk.
287
783070
3003
有趣的是,谷歌的首席執行官桑達爾
13:06
It's so interesting that Sundar,
the CEO of Google,
288
786115
2544
13:08
just actually also came out
for global governance
289
788701
2544
在幾天前的哥倫比亞廣播公司
「60 分鐘」採訪中,
13:11
in the CBS "60 Minutes" interview
that he did a couple of days ago.
290
791245
3712
居然也站出來談全球治理問題。
13:14
I think that the companies themselves
want to see some kind of regulation.
291
794999
4338
我認為這些公司本身
也希望看到某種規範。
13:19
I think it’s a very complicated dance
to get everybody on the same page,
292
799337
3420
要讓每個人都同步
是個非常複雜的任務,
13:22
but I think there’s actually growing
sentiment we need to do something here
293
802757
3795
但實際上,「我們需要
有所作為」的觀點正在擴大,
13:26
and that that can drive the kind of
global affiliation I'm arguing for.
294
806594
3962
這正可推動我所主張的國際聯盟。
13:30
CA: I mean, do you think the UN or nations
can somehow come together and do that
295
810556
3796
CA:你認為聯合國或各個國家
是否可能一起合作並做到這一點,
13:34
or is this potentially a need for some
spectacular act of philanthropy
296
814352
3294
或者需要某種引人注目的慈善壯舉,
13:37
to try and fund a global
governance structure?
297
817772
2627
嘗試為全球治理組織提供資金?
13:40
How is it going to happen?
298
820441
1293
要如何做到呢?
13:41
GM: I'm open to all models
if we can get this done.
299
821734
2419
GM:如果我們能做到這一點,
任何形式,我都持開放態度。
我認為可能會兩者都需要。
13:44
I think it might take some of both.
300
824153
1710
13:45
It might take some philanthropists
sponsoring workshops,
301
825863
2628
可能需要一些慈善家贊助
我們想舉辦的研討會,
13:48
which we're thinking of running,
to try to bring the parties together.
302
828491
3295
讓各方聚集在一起。
聯合國也許希望參與其中,
我已經與他們討論幾次。
13:51
Maybe UN will want to be involved,
I've had some conversations with them.
303
831786
3461
我認為有很多可行的模式,
也需要大量溝通。
13:55
I think there are
a lot of different models
304
835247
2044
13:57
and it'll take a lot of conversations.
305
837291
1835
CA:加里,非常感謝你的演講。
GA:非常感謝。
13:59
CA: Gary, thank you so much for your talk.
306
839126
2002
14:01
GA: Thank you so much.
307
841128
1085