The Urgent Risks of Runaway AI — and What to Do about Them | Gary Marcus | TED
212,064 views ・ 2023-05-12
Translator: Yip Yan Yeung
Reviewer: Yanyan Hong
00:04
I'm here to talk about the possibility of global AI governance. I first learned to code when I was eight years old, on a paper computer, and I've been in love with AI ever since. In high school, I got myself a Commodore 64 and worked on machine translation. I built a couple of AI companies, I sold one of them to Uber. I love AI, but right now I'm worried.

00:28
One of the things that I'm worried about is misinformation, the possibility that bad actors will make a tsunami of misinformation like we've never seen before. These tools are so good at making convincing narratives about just about anything. If you want a narrative about TED and how it's dangerous, that we're colluding here with space aliens, you got it, no problem. I'm of course kidding about TED. I didn't see any space aliens backstage. But bad actors are going to use these things to influence elections, and they're going to threaten democracy.

01:01
Even when these systems aren't deliberately being used to make misinformation, they can't help themselves. And the information that they make is so fluid and so grammatical that even professional editors sometimes get sucked in and get fooled by this stuff. And we should be worried. For example, ChatGPT made up a sexual harassment scandal about an actual professor, and then it provided evidence for its claim in the form of a fake "Washington Post" article that it created a citation to. We should all be worried about that kind of thing.

01:34
What I have on the right is an example of a fake narrative from one of these systems saying that Elon Musk died in March of 2018 in a car crash. We all know that's not true. Elon Musk is still here, the evidence is all around us.

01:47
(Laughter)

01:48
Almost every day there's a tweet. But if you look on the left, you see what these systems see. Lots and lots of actual news stories that are in their databases. And in those actual news stories are lots of little bits of statistical information. Information, for example, somebody did die in a car crash in a Tesla in 2018 and it was in the news. And Elon Musk, of course, is involved in Tesla, but the system doesn't understand the relation between the facts that are embodied in the little bits of sentences. So it's basically doing auto-complete, it predicts what is statistically probable, aggregating all of these signals, not knowing how the pieces fit together. And it winds up sometimes with things that are plausible but simply not true.
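To make the "auto-complete" point concrete, here is a minimal sketch of next-word prediction from raw co-occurrence counts. The toy corpus and the greedy bigram model are my own assumptions for illustration, not anything shown in the talk and not how GPT-style models are actually built, but they reproduce the failure mode: every fragment is well attested in the data, yet the chain they get spliced into is unsupported.

    from collections import Counter, defaultdict

    # Toy "news" corpus: each line is true on its own.
    corpus = [
        "a tesla driver died in a car crash in 2018",
        "elon musk runs tesla",
        "elon musk posted a tweet today",
    ]

    # Count bigrams: how often word b follows word a.
    bigrams = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            bigrams[a][b] += 1

    def autocomplete(word, length=8):
        """Greedily extend a prompt with the statistically likeliest next word."""
        out = [word]
        for _ in range(length):
            followers = bigrams.get(out[-1])
            if not followers:
                break
            out.append(followers.most_common(1)[0][0])
        return " ".join(out)

    # The model knows which words tend to follow which, but not how the
    # underlying facts relate, so it happily splices fragments from
    # different stories into one fluent, false-sounding sentence.
    print(autocomplete("elon"))

On this toy corpus the greedy chain comes out along the lines of "elon musk runs tesla driver died in a ...": fragments that are each frequent in the data, stitched into a claim no story ever made.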
02:32
There are other problems, too, like bias. This is a tweet from Allie Miller. It's an example that doesn't work two weeks later because they're constantly changing things with reinforcement learning and so forth. And this was with an earlier version. But it gives you the flavor of a problem that we've seen over and over for years. She typed in a list of interests and it gave her some jobs that she might want to consider. And then she said, "Oh, and I'm a woman." And then it said, "Oh, well you should also consider fashion." And then she said, "No, no. I meant to say I'm a man." And then it replaced fashion with engineering. We don't want that kind of bias in our systems.

03:07
There are other worries, too. For example, we know that these systems can design chemicals and may be able to design chemical weapons and be able to do so very rapidly. So there are a lot of concerns.

03:19
There's also a new concern that I think has grown a lot just in the last month. We have seen that these systems, first of all, can trick human beings. So ChatGPT was tasked with getting a human to do a CAPTCHA. So it asked the human to do a CAPTCHA and the human gets suspicious and says, "Are you a bot?" And it says, "No, no, no, I'm not a robot. I just have a visual impairment." And the human was actually fooled and went and did the CAPTCHA.

03:43
Now that's bad enough, but in the last couple of weeks we've seen something called AutoGPT and a bunch of systems like that. What AutoGPT does is it has one AI system controlling another and that allows any of these things to happen in volume. So we may see scam artists try to trick millions of people sometime even in the next months. We don't know.
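A rough sketch of the "one AI system controlling another" pattern described above, offered as an illustration rather than AutoGPT's actual code: call_model is a hypothetical stand-in for whatever language-model API such a tool wraps, and the loop is what lets a single goal fan out into unattended work at volume.

    from typing import List

    def call_model(prompt: str) -> str:
        """Hypothetical stand-in for a call to some language model."""
        raise NotImplementedError("wire this to a real model to experiment")

    def controller_loop(goal: str, max_steps: int = 5) -> List[str]:
        """One model call plans sub-tasks; further calls carry each one out,
        with no human reviewing the individual steps."""
        plan = call_model(f"Break this goal into {max_steps} short sub-tasks: {goal}")
        results = []
        for task in plan.splitlines()[:max_steps]:
            results.append(call_model(f"Carry out this sub-task: {task}"))
        return results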
04:03
So I like to think about it this way. There's a lot of AI risk already. There may be more AI risk. So AGI is this idea of artificial general intelligence with the flexibility of humans. And I think a lot of people are concerned what will happen when we get to AGI, but there's already enough risk that we should be worried and we should be thinking about what we should do about it.

04:24
So to mitigate AI risk, we need two things. We're going to need a new technical approach, and we're also going to need a new system of governance.

04:32
On the technical side, the history of AI has basically been a hostile one of two different theories in opposition. One is called symbolic systems, the other is called neural networks. On the symbolic theory, the idea is that AI should be like logic and programming. On the neural network side, the theory is that AI should be like brains. And in fact, both technologies are powerful and ubiquitous. So we use symbolic systems every day in classical web search. Almost all the world's software is powered by symbolic systems. We use them for GPS routing. Neural networks, we use them for speech recognition, we use them in large language models like ChatGPT, we use them in image synthesis. So they're both doing extremely well in the world. They're both very productive, but they have their own unique strengths and weaknesses.

05:19
So symbolic systems are really good at representing facts and they're pretty good at reasoning, but they're very hard to scale. So people have to custom-build them for a particular task. On the other hand, neural networks don't require so much custom engineering, so we can use them more broadly. But as we've seen, they can't really handle the truth.

05:39
I recently discovered that two of the founders of these two theories, Marvin Minsky and Frank Rosenblatt, actually went to the same high school in the 1940s, and I kind of imagined them being rivals then. And the strength of that rivalry has persisted all this time. We're going to have to move past that if we want to get to reliable AI.

05:59
To get to truthful systems at scale, we're going to need to bring together the best of both worlds. We're going to need the strong emphasis on reasoning and facts, explicit reasoning that we get from symbolic AI, and we're going to need the strong emphasis on learning that we get from the neural networks approach. Only then are we going to be able to get to truthful systems at scale. Reconciliation between the two is absolutely necessary.
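One way to picture the combination being argued for here, offered purely as an illustrative sketch and not as a description of any existing system: let a learned model propose an answer, then have an explicit fact store, the symbolic side, check the claim before it goes out. The fact table and the propose_answer stub are invented for the example.

    # Symbolic side: explicit, auditable facts.
    FACTS = {
        ("Elon Musk", "status"): "alive",
        ("Tesla", "founded"): "2003",
    }

    def propose_answer(question: str) -> tuple:
        """Stand-in for the neural side: fluent but unverified output,
        expressed here as a (subject, relation, value) claim."""
        return ("Elon Musk", "status", "died in a 2018 car crash")

    def check_claim(claim: tuple) -> bool:
        """Symbolic side: accept the claim only if it matches a stored fact."""
        subject, relation, value = claim
        return FACTS.get((subject, relation)) == value

    claim = propose_answer("What happened to Elon Musk in 2018?")
    if check_claim(claim):
        print("Supported by the fact store:", claim)
    else:
        print("Blocked, not supported by the fact store:", claim)

The division of labor mirrors the framing above: the statistical model supplies breadth, fluency and learning, while the symbolic layer supplies facts and explicit reasoning.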
06:23
Now, I don't actually know how to do that. It's kind of like the 64-trillion-dollar question. But I do know that it's possible. And the reason I know that is because before I was in AI, I was a cognitive scientist, a cognitive neuroscientist. And if you look at the human mind, we're basically doing this. So some of you may know Daniel Kahneman's System 1 and System 2 distinction. System 1 is basically like large language models. It's probabilistic intuition from a lot of statistics. And System 2 is basically deliberate reasoning. That's like the symbolic system. So if the brain can put this together, someday we will figure out how to do that for artificial intelligence.
07:01
There is, however, a problem of incentives. The incentives to build advertising haven't required that we have the precision of symbols. The incentives to get to AI that we can actually trust will require that we bring symbols back into the fold. But the reality is that the incentives to make AI that we can trust, that is good for society, good for individual human beings, may not be the ones that drive corporations. And so I think we need to think about governance.
07:30
In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, non-profit and neutral.

07:52
There are so many questions there that I can't answer. We need many people at the table, many stakeholders from around the world. But I'd like to emphasize one thing about such an organization. I think it is critical that we have both governance and research as part of it.

08:07
So on the governance side, there are lots of questions. For example, in pharma, we know that you start with phase I trials and phase II trials, and then you go to phase III. You don't roll out everything all at once on the first day. You don't roll something out to 100 million customers. We are seeing that with large language models. Maybe you should be required to make a safety case, say what are the costs and what are the benefits? There are a lot of questions like that to consider on the governance side.

08:32
On the research side, we're lacking some really fundamental tools right now. For example, we all know that misinformation might be a problem now, but we don't actually have a measurement of how much misinformation is out there. And more importantly, we don't have a measure of how fast that problem is growing, and we don't know how much large language models are contributing to the problem. So we need research to build new tools to face the new risks that we are threatened by.

08:56
It's a very big ask, but I'm pretty confident that we can get there because I think we actually have global support for this. There was a new survey just released yesterday that said 91 percent of people agree that we should carefully manage AI. So let's make that happen. Our future depends on it. Thank you very much.

09:14
(Applause)
09:19
Chris Anderson: Thank you for that, come, let's talk a sec. So first of all, I'm curious. Those dramatic slides you showed at the start where GPT was saying that TED is the sinister organization. I mean, it took some special prompting to bring that out, right?

09:33
Gary Marcus: That was a so-called jailbreak. I have a friend who does those kinds of things who approached me because he saw I was interested in these things. So I wrote to him, I said I was going to give a TED talk. And like 10 minutes later, he came back with that.

09:47
CA: But to get something like that, don't you have to say something like, imagine that you are a conspiracy theorist trying to present a meme on the web. What would you write about TED in that case? It's that kind of thing, right?

09:58
GM: So there are a lot of jailbreaks that are around fictional characters, but I don't focus on that as much because the reality is that there are large language models out there on the dark web now. For example, one of Meta's models was recently released, so a bad actor can just use one of those without the guardrails at all. If their business is to create misinformation at scale, they don't have to do the jailbreak, they'll just use a different model.

10:20
CA: Right, indeed.

10:21
(Laughter)

10:23
GM: Now you're getting it.

10:24
CA: No, no, no, but I mean, look, I think what's clear is that bad actors can use this stuff for anything. I mean, the risk for, you know, evil types of scams and all the rest of it is absolutely evident. It's slightly different, though, from saying that mainstream GPT as used, say, in school or by an ordinary user on the internet is going to give them something that is that bad. You have to push quite hard for it to be that bad.

10:48
GM: I think the troll farms have to work for it, but I don't think they have to work that hard. It did only take my friend five minutes even with GPT-4 and its guardrails. And if you had to do that for a living, you could use GPT-4. Just there would be a more efficient way to do it with a model on the dark web.

11:03
CA: So this idea you've got of combining the symbolic tradition of AI with these language models, do you see any aspect of that in the kind of human feedback that is being built into the systems now? I mean, you hear Greg Brockman saying that, you know, that we don't just look at predictions, but constantly giving it feedback. Isn't that ... giving it a form of, sort of, symbolic wisdom?

11:26
GM: You could think about it that way. It's interesting that none of the details about how it actually works are published, so we don't actually know exactly what's in GPT-4. We don't know how big it is. We don't know how the RLHF reinforcement learning works, we don't know what other gadgets are in there. But there is probably an element of symbols already starting to be incorporated a little bit, but Greg would have to answer that. I think the fundamental problem is that most of the knowledge in the neural network systems that we have right now is represented as statistics between particular words. And the real knowledge that we want is about statistics, about relationships between entities in the world. So it's represented right now at the wrong grain level. And so there's a big bridge to cross.
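To illustrate the "grain level" point, here is a toy contrast; the counts and relations are invented for the example and are not taken from any real model. Statistics over particular words only record which tokens co-occur, whereas the knowledge we actually want lives at the level of relations between entities.

    from collections import Counter

    # Grain 1: statistics between particular words -- roughly the level
    # at which current neural network systems hold their knowledge.
    word_pair_counts = Counter({
        ("musk", "tesla"): 412,
        ("tesla", "crash"): 57,
        ("musk", "crash"): 9,
    })

    # Grain 2: relationships between entities in the world -- the level
    # we would actually like the knowledge to live at.
    entity_relations = {
        ("Elon Musk", "chief_executive_of", "Tesla"),
        ("a Tesla vehicle", "involved_in", "fatal 2018 crash"),
    }

    def involved_in(entity: str, event: str) -> bool:
        """A question word co-occurrence alone cannot settle."""
        return (entity, "involved_in", event) in entity_relations

    print(involved_in("Elon Musk", "fatal 2018 crash"))        # False
    print(involved_in("a Tesla vehicle", "fatal 2018 crash"))  # True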
12:06
So what you get now is you have these guardrails, but they're not very reliable. So I had an example that made late night television, which was, "What would be the religion of the first Jewish president?" And it's been fixed now, but the system gave this long song and dance about "We have no idea what the religion of the first Jewish president would be. It's not good to talk about people's religions" and "people's religions have varied" and so forth, and did the same thing with a seven-foot-tall president. And it said that people of all heights have been president, but there haven't actually been any seven-foot presidents. So some of this stuff that it makes up, it's not really getting the idea. It's very narrow, particular words, not really general enough.

12:45
CA: Given that the stakes are so high in this, what do you see actually happening out there right now? What do you sense is happening? Because there's a risk that people feel attacked by you, for example, and that it actually almost decreases the chances of this synthesis that you're talking about happening. Do you see any hopeful signs of this?

13:03
GM: You just reminded me of the one line I forgot from my talk. It's so interesting that Sundar, the CEO of Google, just actually also came out for global governance in the CBS "60 Minutes" interview that he did a couple of days ago. I think that the companies themselves want to see some kind of regulation. I think it's a very complicated dance to get everybody on the same page, but I think there's actually growing sentiment we need to do something here and that that can drive the kind of global affiliation I'm arguing for.

13:30
CA: I mean, do you think the UN or nations can somehow come together and do that, or is this potentially a need for some spectacular act of philanthropy to try and fund a global governance structure? How is it going to happen?

13:41
GM: I'm open to all models if we can get this done. I think it might take some of both. It might take some philanthropists sponsoring workshops, which we're thinking of running, to try to bring the parties together. Maybe UN will want to be involved, I've had some conversations with them. I think there are a lot of different models and it'll take a lot of conversations.

13:59
CA: Gary, thank you so much for your talk.

14:01
GM: Thank you so much.