Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED

So I'm excited to share a few spicy thoughts on artificial intelligence. But first, let's get philosophical by starting with this quote by Voltaire, an 18th-century Enlightenment philosopher, who said, "Common sense is not so common." Turns out this quote couldn't be more relevant to artificial intelligence today.

Despite that, AI is an undeniably powerful tool, beating the world-class "Go" champion, acing college admission tests and even passing the bar exam. I'm a computer scientist of 20 years, and I work on artificial intelligence. I am here to demystify AI.

So AI today is like a Goliath. It is literally very, very large. It is speculated that the recent ones are trained on tens of thousands of GPUs and a trillion words.

Such extreme-scale AI models, often referred to as "large language models," appear to demonstrate sparks of AGI, artificial general intelligence. Except when they make small, silly mistakes, which they often do. Many believe that whatever mistakes AI makes today can be easily fixed with brute force, bigger scale and more resources. What possibly could go wrong?

So there are three immediate challenges we face already at the societal level. First, extreme-scale AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power. But what's worse for AI safety, we are now at the mercy of those few tech companies, because researchers in the larger community do not have the means to truly inspect and dissect these models. And let's not forget their massive carbon footprint and the environmental impact.

And then there are these additional intellectual questions. Can AI, without robust common sense, be truly safe for humanity? And is brute-force scale really the only way and even the correct way to teach AI?

So I'm often asked these days whether it's even feasible to do any meaningful research without extreme-scale compute. And I work at a university and nonprofit research institute, so I cannot afford a massive GPU farm to create enormous language models. Nevertheless, I believe that there's so much we need to do and can do to make AI sustainable and humanistic. We need to make AI smaller, to democratize it. And we need to make AI safer by teaching human norms and values.

Perhaps we can draw an analogy from "David and Goliath," here, Goliath being the extreme-scale language models, and seek inspiration from an old-time classic, "The Art of War," which tells us, in my interpretation: know your enemy, choose your battles, and innovate your weapons.

Let's start with the first, know your enemy, which means we need to evaluate AI with scrutiny. AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know.

So suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good.

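The arithmetic here is worth making explicit: drying is a parallel process, so the answer does not scale with the number of clothes. A minimal sketch of the two readings, assuming (as the question implies) there is room to dry everything at once:

```python
# Contrast of the two readings of the drying question.
# Assumption (not stated in the talk): all clothes fit in the sun at once,
# so they dry in parallel rather than one after another.

def naive_linear_answer(n_clothes, base_n=5, base_hours=5):
    """GPT-4-style proportional reasoning: time scales with count."""
    return base_hours * n_clothes / base_n

def commonsense_answer(n_clothes, base_hours=5):
    """Drying is parallel: every item dries during the same five hours."""
    return base_hours

print(naive_linear_answer(30))  # 30.0 -- the "30 hours" mistake
print(commonsense_answer(30))   # 5    -- the commonsense answer
```
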
A different one. I have a 12-liter jug and a six-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense. (Laughter)

Step one, fill the six-liter jug. Step two, pour the water from the six-liter to the 12-liter jug. Step three, fill the six-liter jug again. Step four, very carefully, pour the water from the six-liter to the 12-liter jug. And finally you have six liters of water in the six-liter jug that should be empty by now. (Laughter)

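For contrast, the puzzle is mechanically easy with classical search; nothing here depends on scale. A minimal breadth-first solver over jug states (my illustration, not anything from the talk) finds the one-step answer immediately:

```python
from collections import deque

def measure(cap_a, cap_b, target):
    """Breadth-first search over jug states (a, b); returns the
    shortest action sequence that puts `target` liters in a jug."""
    start = (0, 0)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (a, b), path = queue.popleft()
        if target in (a, b):
            return path
        pour_ab = min(a, cap_b - b)  # how much A can pour into B
        pour_ba = min(b, cap_a - a)  # how much B can pour into A
        moves = [
            ((cap_a, b), "fill A"),
            ((a, cap_b), "fill B"),
            ((0, b), "empty A"),
            ((a, 0), "empty B"),
            ((a - pour_ab, b + pour_ab), "pour A->B"),
            ((a + pour_ba, b - pour_ba), "pour B->A"),
        ]
        for state, action in moves:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [action]))
    return None

print(measure(12, 6, 6))  # ['fill B'] -- just fill the six-liter jug
```
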
OK, one more. Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws and broken glass? Yes, highly likely, GPT-4 says, presumably because it cannot correctly reason that if a bridge is suspended over the broken nails and broken glass, then the surface of the bridge doesn't touch the sharp objects directly. OK, so how would you feel about an AI lawyer that aced the bar exam yet randomly fails at such basic common sense? AI today is unbelievably intelligent and then shockingly stupid. (Laughter)

It is an unavoidable side effect of teaching AI through brute-force scale. Some scale optimists might say, "Don't worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI." But the real question is this. Why should we even do that? You are able to get the correct answers right away without having to train yourself with similar examples. Children do not even read a trillion words to acquire such a basic level of common sense.

So this observation leads us to the next wisdom: choose your battles. So what fundamental questions should we ask right now and tackle today in order to overcome this status quo with extreme-scale AI? I'll say common sense is among the top priorities.

So common sense has been a long-standing challenge in AI. To explain why, let me draw an analogy to dark matter. So only five percent of the universe is normal matter that you can see and interact with, and the remaining 95 percent is dark matter and dark energy. Dark matter is completely invisible, but scientists speculate that it's there because it influences the visible world, even including the trajectory of light. So for language, the normal matter is the visible text, and the dark matter is the unspoken rules about how the world works, including naive physics and folk psychology, which influence the way people use and interpret language. So why is this common sense even important?

Well, in a famous thought experiment proposed by Nick Bostrom, AI was asked to produce and maximize the paper clips. And that AI decided to kill humans to utilize them as additional resources, to turn you into paper clips. Because AI didn't have the basic human understanding about human values. Now, writing a better objective and equation that explicitly states "Do not kill humans" will not work either, because AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn't do while maximizing paper clips, including "Don't spread the fake news," "Don't steal," "Don't lie," which are all part of our common sense understanding about how the world works.

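A toy sketch of that failure mode (my illustration; Bostrom's thought experiment is informal and has no canonical code): a maximizer patched with an explicit deny-list still picks any harmful action the list fails to name:

```python
# Why patching a maximizer with an explicit deny-list does not scale:
# common sense rules out an open-ended set of actions, but a list is finite.

FORBIDDEN = {"kill humans"}  # the explicit patch: "do not kill humans"

def choose_action(actions):
    """Pick whichever action yields the most paper clips,
    skipping only what the deny-list explicitly names."""
    allowed = [a for a in actions if a["name"] not in FORBIDDEN]
    return max(allowed, key=lambda a: a["clips"])

actions = [
    {"name": "run the factory", "clips": 1_000},
    {"name": "kill humans",     "clips": 5_000},  # blocked by the patch
    {"name": "kill all trees",  "clips": 4_000},  # never listed -> chosen
]
print(choose_action(actions)["name"])  # 'kill all trees'
```
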
However, the AI field for decades has considered common sense as a nearly impossible challenge. So much so that when my students and colleagues and I started working on it several years ago, we were very much discouraged. We've been told that it's a research topic of the '70s and '80s; shouldn't work on it because it will never work; in fact, don't even say the word to be taken seriously.

Now fast forward to this year, I'm hearing: "Don't work on it because ChatGPT has almost solved it." And: "Just scale things up and magic will arise, and nothing else matters."

So my position is that giving true, human-like robust common sense to AI is still a moonshot. And you don't reach the Moon by making the tallest building in the world one inch taller at a time. Extreme-scale AI models do acquire an ever-increasing amount of commonsense knowledge, I'll give you that. But remember, they still stumble on such trivial problems that even children can do. So AI today is awfully inefficient.

And what if there is an alternative path, or a path yet to be found? A path that can build on the advancements of the deep neural networks, but without going so extreme with the scale. So this leads us to our final wisdom: innovate your weapons. In the modern-day AI context, that means innovate your data and algorithms.

OK, so there are, roughly speaking, three types of data that modern AI is trained on: raw web data; crafted examples custom-developed for AI training; and then human judgments, also known as human feedback on AI performance.

If the AI is only trained on the first type, raw web data, which is freely available, it's not good, because this data is loaded with racism and sexism and misinformation. So no matter how much of it you use, garbage in and garbage out. So the newest, greatest AI systems are now powered with the second and third types of data that are crafted and judged by human workers. It's analogous to writing specialized textbooks for AI to study from and then hiring human tutors to give constant feedback to AI.

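As a rough picture of how those three data types typically enter training (illustrative shapes only; the names and stages below are my own sketch, not any specific company's pipeline):

```python
# Illustrative data shapes for the three data types named in the talk.
from dataclasses import dataclass

@dataclass
class WebDocument:        # type 1: raw web data, used for pretraining
    text: str

@dataclass
class CraftedExample:     # type 2: the "specialized textbook"
    prompt: str
    ideal_response: str

@dataclass
class HumanJudgment:      # type 3: the "tutor's feedback"
    prompt: str
    response_a: str
    response_b: str
    preferred: str        # which response the human rated higher

stages = [
    ("pretrain",  WebDocument("...scraped page, noise and all...")),
    ("fine-tune", CraftedExample("Summarize:", "A short summary.")),
    ("align",     HumanJudgment("Explain X", "rude", "helpful", "b")),
]
for stage, sample in stages:
    print(stage, type(sample).__name__)
```
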
These are proprietary data, by and large, speculated to cost tens of millions of dollars. We don't know what's in this, but it should be open and publicly available so that we can inspect and ensure [it supports] diverse norms and values. So for this reason, my teams at UW and AI2 have been working on commonsense knowledge graphs as well as moral norm repositories to teach AI basic commonsense norms and morals. Our data is fully open so that anybody can inspect the content and make corrections as needed, because transparency is the key for such an important research topic.

Now let's think about learning algorithms. No matter how amazing large language models are, by design they may not be best suited to serve as reliable knowledge models. And these language models do acquire a vast amount of knowledge, but they do so as a byproduct, as opposed to a direct learning objective, resulting in unwanted side effects such as hallucinations and a lack of common sense. Now, in contrast, human learning is never about predicting which word comes next, but it's really about making sense of the world and learning how the world works. Maybe AI should be taught that way as well.

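To see how thin the next-word objective is, here is a deliberately tiny sketch (mine, not from the talk): a bigram model whose entire training signal is which word follows which, with any "knowledge" arising only as a byproduct:

```python
# A toy bigram "language model": the whole training signal is
# "which word comes next" -- nothing asks it to model the world.
from collections import Counter, defaultdict

corpus = "the sun dries clothes the sun dries paint".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1   # the only thing this model ever learns

def predict_next(word):
    """Most frequent continuation -- the full extent of 'understanding'."""
    return counts[word].most_common(1)[0][0]

print(predict_next("sun"))   # 'dries' -- fluent, yet no model of drying
```
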
So as a quest toward more direct commonsense knowledge acquisition, my team has been investigating potential new algorithms, including symbolic knowledge distillation that can take a very large language model, as shown here, that I couldn't fit into the screen because it's too large, and crunch that down to much smaller commonsense models using deep neural networks. And in doing so, we also generate, algorithmically, a human-inspectable, symbolic commonsense knowledge representation, so that people can inspect and make corrections and even use it to train other neural commonsense models.

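A minimal sketch of that distillation loop as described, with stand-in functions (the real system uses a large language model as the teacher and a trained critic model as the filter; nothing below is the actual implementation):

```python
# Stand-ins for symbolic knowledge distillation: sample candidate
# statements from a big teacher model, filter with a critic, and keep
# the surviving symbolic knowledge as inspectable training data.

def teacher_generate(prompt, n):
    """Placeholder for sampling candidate statements from a huge LM."""
    return [
        "If you drop a glass, it may break.",
        "Bridges above nails touch the nails.",  # bad: critic should drop it
        "People eat when they are hungry.",
    ][:n]

def critic_accepts(statement):
    """Placeholder for a trained critic that scores plausibility."""
    return "Bridges above nails" not in statement

# 1) Sample candidates, 2) keep only what the critic accepts.
knowledge = [s for s in teacher_generate("commonsense:", 3)
             if critic_accepts(s)]

# 3) The surviving symbolic statements are human-inspectable and can
#    train a much smaller neural commonsense model.
for statement in knowledge:
    print(statement)
```
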
More broadly, we have been tackling this seemingly impossible giant puzzle of common sense, ranging from physical, social and visual common sense to theory of mind, norms and morals. Each individual piece may seem quirky and incomplete, but when you step back, it's almost as if these pieces weave together into a tapestry that we call human experience and common sense.

We're now entering a new era in which AI is almost like a new intellectual species, with unique strengths and weaknesses compared to humans. In order to make this powerful AI sustainable and humanistic, we need to teach AI common sense, norms and values. Thank you. (Applause)

Chris Anderson: Look at that. Yejin, please stay one sec. This is so interesting, this idea of common sense. We obviously all really want this from whatever's coming. But help me understand. Like, so we've had this model of a child learning. How does a child gain common sense apart from the accumulation of more input and some, you know, human feedback? What else is there?

Yejin Choi: So fundamentally, there are several things missing, but one of them is, for example, the ability to make hypotheses and conduct experiments, interact with the world and develop those hypotheses. We abstract away the concepts about how the world works, and then that's how we truly learn, as opposed to today's language models. Some of that is really not there quite yet.

CA: You used the analogy that we can't get to the Moon by extending a building a foot at a time. But the experience that most of us have had of these language models is not a foot at a time. It's, like, this sort of breathtaking acceleration. Are you sure, given the pace at which those things are going, that each next level seems to bring with it what feels kind of like wisdom and knowledge?

YC: I totally agree that it's remarkable how much this scaling things up really enhances the performance across the board. So there's real learning happening due to the scale of the compute and data. However, there's a quality of learning that is still not quite there. And the thing is, we don't yet know whether we can fully get there or not just by scaling things up.

And if we cannot, then there's this question of what else? And then even if we could, do we like this idea of having very, very extreme-scale AI models that only a few can create and own?

CA: I mean, if OpenAI said, you know, "We're interested in your work, we would like you to help improve our model," can you see any way of combining what you're doing with what they have built?

YC: Certainly what I envision will need to build on the advancements of deep neural networks. And it might be that there's some scale Goldilocks Zone, such that ... I'm not imagining that smaller is better either, by the way. It's likely that there's a right amount of scale, but beyond that, the winning recipe might be something else. So some synthesis of ideas will be critical here.

CA: Yejin Choi, thank you so much for your talk. (Applause)