Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED

So I'm excited to share a few spicy thoughts on artificial intelligence. But first, let's get philosophical, by starting with this quote by Voltaire, an 18th-century Enlightenment philosopher, who said, "Common sense is not so common." Turns out this quote couldn't be more relevant to artificial intelligence today.

Despite that, AI is an undeniably powerful tool, beating the world-class "Go" champion, acing college admission tests and even passing the bar exam. I'm a computer scientist of 20 years, and I work on artificial intelligence. I am here to demystify AI.

So AI today is like a Goliath. It is literally very, very large. It is speculated that the recent ones are trained on tens of thousands of GPUs and a trillion words. Such extreme-scale AI models, often referred to as "large language models," appear to demonstrate sparks of AGI, artificial general intelligence. Except when it makes small, silly mistakes, which it often does.

Many believe that whatever mistakes AI makes today can be easily fixed with brute force, bigger scale and more resources. What possibly could go wrong?

So there are three immediate challenges we face already at the societal level. First, extreme-scale AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power. But what's worse for AI safety, we are now at the mercy of those few tech companies, because researchers in the larger community do not have the means to truly inspect and dissect these models. And let's not forget their massive carbon footprint and the environmental impact.

And then there are these additional intellectual questions. Can AI, without robust common sense, be truly safe for humanity? And is brute-force scale really the only way, and even the correct way, to teach AI?

So I'm often asked these days whether it's even feasible to do any meaningful research without extreme-scale compute. And I work at a university and nonprofit research institute, so I cannot afford a massive GPU farm to create enormous language models. Nevertheless, I believe that there's so much we need to do, and can do, to make AI sustainable and humanistic. We need to make AI smaller, to democratize it. And we need to make AI safer by teaching human norms and values.

Perhaps we can draw an analogy from "David and Goliath," here, Goliath being the extreme-scale language models, and seek inspiration from an old-time classic, "The Art of War," which tells us, in my interpretation: know your enemy, choose your battles, and innovate your weapons.

Let's start with the first, know your enemy, which means we need to evaluate AI with scrutiny. AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know.

So suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good.

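To see the mistake concretely: drying is a parallel process, so, assuming there is room to hang everything in the sun at once, the count doesn't change the time. A minimal sketch in Python (illustrative only) contrasts the commonsense answer with the linear extrapolation GPT-4 appears to be making:

    # Drying happens in parallel: with room to hang everything at once,
    # 30 clothes take the same five hours that five clothes did.
    def drying_time_hours(num_clothes: int) -> float:
        return 5.0  # independent of num_clothes (assumes enough sun and space)

    # The mistaken linear extrapolation that yields "30 hours":
    def mistaken_drying_time_hours(num_clothes: int) -> float:
        hours_per_cloth = 5.0 / 5  # misreads "5 clothes, 5 hours" as a rate
        return hours_per_cloth * num_clothes

    print(drying_time_hours(30))           # 5.0  -- the commonsense answer
    print(mistaken_drying_time_hours(30))  # 30.0 -- the answer GPT-4 gave
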
A different one. I have a 12-liter jug and a six-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense.

(Laughter)

Step one, fill the six-liter jug. Step two, pour the water from the six-liter to the 12-liter jug. Step three, fill the six-liter jug again. Step four, very carefully, pour the water from the six-liter to the 12-liter jug. And finally you have six liters of water in the six-liter jug that should be empty by now.

(Laughter)

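For contrast, the water-jug puzzle falls to a completely standard state-space search. A minimal breadth-first-search sketch (an illustration, not anything from the talk) finds the one-step solution, fill the six-liter jug, immediately:

    from collections import deque

    def measure(target: int, capacities: tuple[int, int] = (12, 6)) -> list[tuple[int, int]]:
        """Shortest sequence of jug states (a, b) reaching `target` liters in either jug."""
        start = (0, 0)
        parent = {start: None}  # state -> predecessor, for path reconstruction
        queue = deque([start])
        while queue:
            state = queue.popleft()
            if target in state:
                path = []
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return path[::-1]
            a, b = state
            ca, cb = capacities
            pour_ab = min(a, cb - b)  # how much fits when pouring a -> b
            pour_ba = min(b, ca - a)  # how much fits when pouring b -> a
            moves = [
                (ca, b), (a, cb),            # fill either jug
                (0, b), (a, 0),              # empty either jug
                (a - pour_ab, b + pour_ab),  # pour a -> b
                (a + pour_ba, b - pour_ba),  # pour b -> a
            ]
            for nxt in moves:
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return []

    print(measure(6))  # [(0, 0), (0, 6)] -- one step: just fill the six-liter jug
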
OK, one more. Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws and broken glass? Yes, highly likely, GPT-4 says, presumably because it cannot correctly reason that if a bridge is suspended over the nails and broken glass, then the surface of the bridge doesn't touch the sharp objects directly.

OK, so how would you feel about an AI lawyer that aced the bar exam yet randomly fails at such basic common sense? AI today is unbelievably intelligent and then shockingly stupid.

(Laughter)

It is an unavoidable side effect of teaching AI through brute-force scale.

Some scale optimists might say, "Don't worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI." But the real question is this. Why should we even do that? You are able to get the correct answers right away, without having to train yourself with similar examples.

Children do not even read a trillion words to acquire such a basic level of common sense. So this observation leads us to the next wisdom: choose your battles.

So what fundamental questions should we ask right now, and tackle today, in order to overcome this status quo with extreme-scale AI? I'll say common sense is among the top priorities. So common sense has been a long-standing challenge in AI.

To explain why, let me draw an analogy to dark matter. So only five percent of the universe is normal matter that you can see and interact with, and the remaining 95 percent is dark matter and dark energy. Dark matter is completely invisible, but scientists speculate that it's there because it influences the visible world, even including the trajectory of light.

So for language, the normal matter is the visible text, and the dark matter is the unspoken rules about how the world works, including naive physics and folk psychology, which influence the way people use and interpret language. So why is this common sense even important?

Well, in a famous thought experiment proposed by Nick Bostrom, AI was asked to produce and maximize paper clips. And that AI decided to kill humans to utilize them as additional resources, to turn you into paper clips. Because AI didn't have the basic human understanding about human values.

Now, writing a better objective and equation that explicitly states "Do not kill humans" will not work either, because AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn't do while maximizing paper clips, including "Don't spread the fake news," "Don't steal," "Don't lie," which are all part of our common sense understanding about how the world works.

However, the AI field for decades has considered common sense a nearly impossible challenge, so much so that when my students and colleagues and I started working on it several years ago, we were very much discouraged. We've been told that it's a research topic of the '70s and '80s; that we shouldn't work on it because it will never work; in fact, don't even say the word, to be taken seriously.

Now fast forward to this year, I'm hearing: "Don't work on it because ChatGPT has almost solved it." And: "Just scale things up and magic will arise, and nothing else matters."

So my position is that giving true common sense, human-like robust common sense, to AI is still a moonshot. And you don't reach the Moon by making the tallest building in the world one inch taller at a time. Extreme-scale AI models do acquire an ever-increasing amount of commonsense knowledge, I'll give you that. But remember, they still stumble on such trivial problems that even children can do.

So AI today is awfully inefficient. And what if there is an alternative path, or a path yet to be found? A path that can build on the advancements of the deep neural networks, but without going so extreme with the scale. So this leads us to our final wisdom: innovate your weapons. In the modern-day AI context, that means innovate your data and algorithms.

OK, so there are, roughly speaking, three types of data that modern AI is trained on: raw web data; crafted examples custom-developed for AI training; and then human judgments, also known as human feedback on AI performance.

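As a rough, purely illustrative sketch of those three types (the labels and records below are assumptions for illustration, not any specific system's pipeline):

    from dataclasses import dataclass

    @dataclass
    class Example:
        text: str
        source: str  # "raw_web" | "crafted" | "human_judgment"

    corpus = [
        Example("...scraped forum post...", "raw_web"),     # cheap, plentiful, noisy
        Example("Q: ... A: ...", "crafted"),                 # written specifically for AI training
        Example("annotator rated the model's answer 2/5", "human_judgment"),
    ]

    # Only the first type is freely available at web scale; the other two must be built.
    built_by_humans = [ex for ex in corpus if ex.source != "raw_web"]
    print(len(built_by_humans))  # 2
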
If the AI is only trained on the first type, raw web data, which is freely available, it's not good, because this data is loaded with racism and sexism and misinformation. So no matter how much of it you use, garbage in and garbage out. So the newest, greatest AI systems are now powered with the second and third types of data that are crafted and judged by human workers. It's analogous to writing specialized textbooks for AI to study from, and then hiring human tutors to give constant feedback to AI.

These are proprietary data, by and large, speculated to cost tens of millions of dollars. We don't know what's in this, but it should be open and publicly available, so that we can inspect and ensure [it supports] diverse norms and values.

So for this reason, my teams at UW and AI2 have been working on commonsense knowledge graphs as well as moral norm repositories to teach AI basic commonsense norms and morals. Our data is fully open, so that anybody can inspect the content and make corrections as needed, because transparency is the key for such an important research topic.

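As a sketch of what such a repository can look like, commonsense knowledge can be stored as plain, human-inspectable triples. The relation names below follow the style of the group's ATOMIC-like commonsense graphs, but every entry here is invented for illustration:

    # (head event, relation, tail) triples -- anyone can read, audit, or correct them.
    triples = [
        ("PersonX dries clothes in the sun", "xNeed",   "to hang them up outside"),
        ("PersonX dries clothes in the sun", "xEffect", "the clothes dry in a few hours"),
        ("PersonX returns a lost wallet",    "xAttr",   "honest"),
        ("PersonX spreads fake news",        "norm",    "morally wrong"),  # norm-style entry
    ]

    def query(head_contains: str, relation: str) -> list[str]:
        """Look up tails for a head/relation pair."""
        return [tail for head, rel, tail in triples
                if head_contains in head and rel == relation]

    print(query("dries clothes", "xEffect"))  # ['the clothes dry in a few hours']
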
Now let's think about learning algorithms. No matter how amazing large language models are, by design they may not be the best suited to serve as reliable knowledge models. These language models do acquire a vast amount of knowledge, but they do so as a byproduct, as opposed to a direct learning objective, resulting in unwanted side effects such as hallucination effects and a lack of common sense.

Now, in contrast, human learning is never about predicting which word comes next, but it's really about making sense of the world and learning how the world works. Maybe AI should be taught that way as well.

So as a quest toward more direct commonsense knowledge acquisition, my team has been investigating potential new algorithms, including symbolic knowledge distillation that can take a very large language model, as shown here, that I couldn't fit into the screen because it's too large, and crunch that down to much smaller commonsense models using deep neural networks. And in doing so, we also generate, algorithmically, a human-inspectable, symbolic, commonsense knowledge representation, so that people can inspect and make corrections and even use it to train other neural commonsense models.

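In outline, symbolic knowledge distillation has this shape: sample candidate commonsense statements from a large teacher model, filter them with a critic, and train a smaller student on the surviving symbolic knowledge. The sketch below only shows that shape; the generator and critic are hardcoded stand-ins, not the actual models from this work:

    # Stand-in for sampling candidate commonsense statements from a large teacher LM.
    def sample_from_teacher(prompt: str) -> list[str]:
        return [
            "If a bridge hangs above nails, tires on the bridge never touch them.",
            "Thirty clothes in the sun take thirty hours to dry.",  # bad generation
        ]

    # Stand-in critic: keep only statements judged plausible (hardcoded here).
    def critic_keeps(statement: str) -> bool:
        return "thirty hours" not in statement.lower()

    def distill(prompt: str) -> list[str]:
        """Human-inspectable symbolic knowledge, filtered before training a student."""
        return [s for s in sample_from_teacher(prompt) if critic_keeps(s)]

    knowledge = distill("What follows if a bridge is suspended over nails?")
    print(knowledge)
    # train_student(small_model, knowledge)  # hypothetical final step, omitted
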
More broadly, we have been tackling this seemingly impossible giant puzzle of common sense, ranging from physical, social and visual common sense to theory of minds, norms and morals. Each individual piece may seem quirky and incomplete, but when you step back, it's almost as if these pieces weave together into a tapestry that we call human experience and common sense.

We're now entering a new era in which AI is almost like a new intellectual species, with unique strengths and weaknesses compared to humans. In order to make this powerful AI sustainable and humanistic, we need to teach AI common sense, norms and values. Thank you.

(Applause)

Chris Anderson: Look at that. Yejin, please stay one sec. This is so interesting, this idea of common sense. We obviously all really want this from whatever's coming. But help me understand. Like, so we've had this model of a child learning. How does a child gain common sense, apart from the accumulation of more input and some, you know, human feedback? What else is there?

Yejin Choi: So fundamentally, there are several things missing, but one of them is, for example, the ability to make hypotheses and conduct experiments, interact with the world and develop these hypotheses. We abstract away the concepts about how the world works, and then that's how we truly learn, as opposed to today's language models. Some of that is really not quite there yet.

CA: You use the analogy that we can't get to the Moon by extending a building a foot at a time. But the experience that most of us have had of these language models is not a foot at a time; it's like a sort of breathtaking acceleration. Are you sure, given the pace at which those things are going? Each next level seems to be bringing with it what feels kind of like wisdom and knowledge.

YC: I totally agree that it's remarkable how much this scaling things up really enhances the performance across the board. So there's real learning happening due to the scale of the compute and data. However, there's a quality of learning that is still not quite there. And the thing is, we don't yet know whether we can fully get there or not just by scaling things up. And if we cannot, then there's this question of what else? And then even if we could, do we like this idea of having very, very extreme-scale AI models that only a few can create and own?

CA: I mean, if OpenAI said, you know, "We're interested in your work, we would like you to help improve our model," can you see any way of combining what you're doing with what they have built?

YC: Certainly what I envision will need to build on the advancements of deep neural networks. And it might be that there's some scale Goldilocks Zone, such that ... I'm not imagining that smaller is better either, by the way. It's likely that there's a right amount of scale, but beyond that, the winning recipe might be something else. So some synthesis of ideas will be critical here.

CA: Yejin Choi, thank you so much for your talk.

(Applause)