Is AI Progress Stuck? | Jennifer Golbeck | TED

122,814 views ・ 2024-11-19

TED



Translator: Yip Yan Yeung
Reviewer: Jinnie Sun

00:04
We've built artificial intelligence already that, on specific tasks, performs better than humans. There's AI that can play chess and beat human grandmasters. But since the introduction of generative AI to the general public a couple years ago, there's been more talk about artificial general intelligence, or AGI, and that describes the idea that there's AI that can perform at or above human levels on a wide variety of tasks, just like we humans are able to do.

00:35
And people who think about AGI are worried about what it means if we reach that level of performance in the technology. Right now, there's people from the tech industry coming out and saying, "The AI that we're building is so powerful and dangerous that it poses a threat to civilization." And they're going to government and saying, "Maybe you need to regulate us."

00:56
Now normally when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity and that it needs to be limited, so why are we hearing that language?

01:05
And I think there's two main reasons. One is if your technology is so powerful that it can destroy civilization, between now and then, there's an awful lot of money to be made with that. And what better way to convince your investors to put some money with you than to warn that your tool is that dangerous?

01:26
The other is that the idea of AI overtaking humanity is truly a cinematic concept. We've all seen those movies. And it's kind of entertaining to think about what that would mean now with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI.

01:50
The more we think about these improbable futures, the less time we spend thinking about how do we correct deepfakes, or the fact that there's AI right now being used to decide whether or not people are let out of prison, and we know it's racially biased.

02:06
But are we anywhere close to actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year; I think he posted this a few weeks ago. But at the same time, Google put out their AI search tool that's supposed to give you the answer so you don't have to click on a link, and it's not going super well.

02:25
["How many rocks should I eat?"]
["... at least a single serving of pebbles, geodes or gravel ..."]
Please don't eat rocks.
(Laughter)

02:32
Now of course these tools are going to get better. But if we're going to achieve AGI, or if they're even going to fundamentally change the way we work, we need to be in a place where they are continuing on a sharp upward trajectory in terms of their abilities. And that may be one path. But there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing, and the future is incremental improvements in a plateau.

03:01
So to understand the AI future, we need to look at all the hype around it and get under there and see what's technically possible. And we also need to think about where are the areas that we need to worry and where are the areas that we don't.

03:13
So if we want to realize the hype around AI, the one main challenge that we have to solve is reliability. These algorithms are wrong all the time, like we saw with Google. And Google actually came out and said, after these bad search results were popularized, that they don't know how to fix this problem.

03:32
I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, so I download that data and ChatGPT helps me write a summary. And it makes me much more efficient than if I had to do it by hand. But I have to correct it every day, because it misunderstands something or takes things out of context. And so because of that, I can't just rely on it to do the job for me. And this reliability is really important.

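(A minimal sketch of that kind of summarization step is below, assuming the message-board posts have already been downloaded to a local file and an OpenAI API key is configured; the file name, model name, and prompt are illustrative placeholders, not details from the talk.)

```python
# Minimal sketch: ask an LLM to draft a summary of downloaded forum posts.
# Assumes posts were saved to "posts.txt" and OPENAI_API_KEY is set in the
# environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("posts.txt", encoding="utf-8") as f:
    posts = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {"role": "system",
         "content": "Summarize the main discussion themes in these forum posts."},
        {"role": "user", "content": posts},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a person still has to review and correct this draft every time
```
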
03:58
Now a subpart of reliability in this space is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time. I did this in my newsletter. I said, "ChatGPT, are there any people threatening violence? If so, give me the quotes." And it produced these three really clear threats of violence that didn't sound anything like people talk on these message boards. And I went back to the data, and nobody ever said it. It just made it up out of thin air.

04:26
And you may have seen this if you've used an AI image generator. I asked it to give me a close-up of people holding hands. That's a hallucination, and a disturbing one at that.
(Laughter)

04:37
We have to solve this hallucination problem if this AI is going to live up to the hype. And I don't think it's a solvable problem with the way this technology works. There are people who say we're going to have it taken care of in a few months, but there's no technical reason to think that's the case.

04:54
Because generative AI always makes stuff up. When you ask it a question, it's creating that answer or creating that image from scratch when you ask. It's not like a search engine that goes and finds the right answer on a page. And so because its job is to make things up every time, I don't know that we're going to be able to get it to make up correct stuff and then not make up other stuff. That's not what it's trained to do, and we're very far from achieving that.

05:21
And in fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where they hope it will help write legal briefs or do research. Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court, because it just makes up cases that sound right. And that's a really fast way to get a judge mad at you and to get your case thrown out.

05:49
Now there are legal research companies right now that advertise hallucination-free generative AI. And I was really dubious about this. And researchers at Stanford actually went in and checked it, and they found the best-performing of these hallucination-free tools still hallucinates 17 percent of the time.

06:10
So on one hand, it's a great scientific achievement that we have built a tool that we can pose basically any query to, and 60 or 70 or maybe even 80 percent of the time it gives us a reasonable answer. But if we're going to rely on using those tools and they're wrong 20 or 30 percent of the time, there's no model where that's really useful.

06:32
And that kind of leads us into how do we make these tools that useful? Because even if you don't believe me and think we're going to solve this hallucination problem, we're going to solve the reliability problem, the tools still need to get better than they are now. And there's two things they need to do that: one is lots more data, and two is the technology itself has to improve.

06:51
So where are we going to get that data? Because they've kind of taken all the reliable stuff online already. And if we were to find twice as much data as they've already had, that doesn't mean they're going to be twice as smart. I don't know if there's enough data out there, and it's compounded by the fact that one way generative AI has been very successful is at producing low-quality content online. That's bots on social media, misinformation, and these SEO pages that don't really say anything but have a lot of ads and come up high in the search results. And if the AI starts training on pages that it generated, we know from decades of AI research that they just get progressively worse. It's like the digital version of mad cow disease.
(Laughter)

07:36
Let's say we solve the data problem. You still have to get the technology better. And we've seen 50 billion dollars in the last couple years invested in improving generative AI. And that's resulted in three billion dollars in revenue. So that's not sustainable. But of course it's early, right? Companies may find ways to start using this technology. But is it going to be valuable enough to justify the tens and maybe hundreds of billions of dollars of hardware that needs to be bought to make these models get better? I don't think so.

08:09
And we can kind of start looking at practical examples to figure that out. And it leads us to think about where are the spaces we need to worry and not. Because one place that everybody's worried about with this is that AI is going to take all of our jobs. Lots of people are telling us that's going to happen, and people are worried about it. And I think there's a fundamental misunderstanding at the heart of that.

08:28
So imagine this scenario. We have a company, and they can afford to employ two software engineers. And if we were to give those engineers some generative AI to help write code, which is something it's pretty good at, let's say they're twice as efficient. That's a big overestimate, but it makes the math easy. So in that case, the company has two choices. They could fire one of those software engineers, because the other one can do the work of two people now. Or they already could afford two of them, and now they're twice as efficient, so they're bringing in more money. So why not keep both of them and take that extra profit?

09:03
The only way this math fails is if the AI is so expensive that it's not worth it. But that would be like the AI is 100,000 dollars a year to do one person's worth of work. So that sounds really expensive. And practically, there are already open-source versions of these tools that are low-cost, that companies can install and run themselves. Now they don't perform as well as the flagship models, but if they're half as good and really cheap, wouldn't you take those over the one that costs 100,000 dollars a year to do one person's work? Of course you would.

09:36
And so even if we solve reliability, we solve the data problem, we make the models better, the fact that there are cheap versions of this available suggests that companies aren't going to be spending hundreds of millions of dollars to replace their workforce with AI.

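(A small sketch of that back-of-the-envelope arithmetic is below; the salary and tool-cost figures are illustrative placeholders, and only the "twice as efficient" assumption and the roughly 100,000-dollar-a-year break-even point come from the talk.)

```python
# Back-of-the-envelope cost per "one person's worth of work", with and
# without an AI coding assistant. All figures are illustrative placeholders.
engineer_salary = 100_000   # yearly cost of one engineer (placeholder)
ai_cost = 5_000             # yearly cost of the AI tool (placeholder)
efficiency_gain = 2.0       # "twice as efficient" (the talk's deliberate overestimate)

cost_per_unit_without_ai = engineer_salary
cost_per_unit_with_ai = (engineer_salary + ai_cost) / efficiency_gain

print(cost_per_unit_without_ai)  # 100000
print(cost_per_unit_with_ai)     # 52500.0

# Keeping both engineers and adding the tool only stops making sense when
# ai_cost approaches engineer_salary, i.e. when the AI itself costs on the
# order of 100,000 dollars a year to add one person's worth of work.
```
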
09:50
There are areas that we need to worry, though. Because if we look at AI now, there are lots of problems that we haven't been able to solve. I've been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases, and we have not been able to fix that. We've seen those biases start showing up in generative AI, and the gut reaction is always, well, let's just put in some guardrails to stop the AI from doing the biased thing. But one, that never fixes the bias, because the AI finds a way around it. And two, the guardrails themselves can cause problems.

10:28
So Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results. And it turned out it made it wrong. This is a request for a picture of the signing of the Declaration of Independence. And it's a great picture, but it is not factually correct. And so in trying to stop the bias, we end up creating more reliability problems.

10:51
We haven't been able to solve this problem of bias. And if we're thinking about deferring decision making, replacing human decision makers and relying on this technology, and we can't solve this problem, that's a thing that we should worry about and demand solutions to before it's just widely adopted and employed because it's sexy.

11:11
And I think there's one final thing that's missing here, which is our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people, our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. And that's something that artificial intelligence is not now, nor will it ever be, capable of doing. It may be able to imitate it and give us a cheap facsimile of genuine connection and empathy and creativity. But it can't do those core things to our humanity.

11:48
And that's why I'm not really worried about AGI taking over civilization. But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is, despite what the movies have told you, if it gets really bad, we still can always just turn it off.
(Laughter)

12:11
Thank you.
(Applause)