Jeff Dean: AI isn't as smart as you think -- but it could be | TED

253,387 views ・ 2022-01-12

TED


Hi, I'm Jeff. I lead AI Research and Health at Google. I joined Google more than 20 years ago, when we were all wedged into a tiny office space, above what's now a T-Mobile store in downtown Palo Alto. I've seen a lot of computing transformations in that time, and in the last decade, we've seen AI be able to do tremendous things. But we're still doing it all wrong in many ways. That's what I want to talk to you about today. But first, let's talk about what AI can do.
So in the last decade, we've seen tremendous progress in how AI can help computers see, understand language, understand speech better than ever before. Things that we couldn't do before, now we can do. If you think about computer vision alone, just in the last 10 years, computers have effectively developed the ability to see; 10 years ago, they couldn't see, now they can see. You can imagine this has had a transformative effect on what we can do with computers.

So let's look at a couple of the great applications enabled by these capabilities. We can better predict flooding, keep everyone safe, using machine learning. We can translate over 100 languages so we all can communicate better, and better predict and diagnose disease, where everyone gets the treatment that they need.
So let's look at two key components that underlie the progress in AI systems today. The first is neural networks, a breakthrough approach to solving some of these difficult problems that has really shone in the last 15 years. But they're not a new idea. And the second is computational power. It actually takes a lot of computational power to make neural networks able to really sing, and in the last 15 years, we've been able to harness that, and that's partly what's enabled all this progress. But at the same time, I think we're doing several things wrong, and that's what I want to talk to you about at the end of the talk.
First, a bit of a history lesson. So for decades, almost since the very beginning of computing, people have wanted to be able to build computers that could see, understand language, understand speech. The earliest approaches to this, generally, people were trying to hand-code all the algorithms that you need to accomplish those difficult tasks, and it just turned out to not work very well. But in the last 15 years, a single approach unexpectedly advanced all these different problem spaces all at once: neural networks.
So neural networks are not a new idea. They're kind of loosely based on some of the properties that are in real neural systems. And many of the ideas behind neural networks have been around since the 1960s and 70s. A neural network is what it sounds like, a series of interconnected artificial neurons that loosely emulate the properties of your real neurons. An individual neuron in one of these systems has a set of inputs, each with an associated weight, and the output of a neuron is a function of those inputs multiplied by those weights. So pretty simple, and lots and lots of these work together to learn complicated things.
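(As a rough illustration of the neuron just described -- not code from the talk -- here is a minimal sketch in Python. The sigmoid is one arbitrary choice for the "function of those inputs multiplied by those weights," and the names and numbers are made up.)

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """One artificial neuron: weight each input, sum them, then apply a function."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid, one common choice

# Three inputs, each with an associated weight.
print(neuron_output([0.5, -1.0, 2.0], [0.8, 0.2, -0.5]))
```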
So how do we actually learn in a neural network? It turns out the learning process consists of repeatedly making tiny little adjustments to the weight values, strengthening the influence of some things, weakening the influence of others. By driving the overall system towards desired behaviors, these systems can be trained to do really complicated things, like translate from one language to another, detect what kind of objects are in a photo, all kinds of complicated things.
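(A minimal sketch of that "tiny adjustments to the weights" loop, assuming a single linear neuron trained by plain gradient descent; the data, targets and learning rate are invented for illustration.)

```python
# Made-up data generated by y = 1*x1 + 2*x2, so learning should recover [1.0, 2.0].
data = [([1.0, 2.0], 5.0), ([2.0, 0.0], 2.0), ([0.0, 3.0], 6.0)]  # (inputs, target)
weights = [0.0, 0.0]
learning_rate = 0.05

for step in range(200):
    for inputs, target in data:
        prediction = sum(x * w for x, w in zip(inputs, weights))
        error = prediction - target
        # Strengthen or weaken each weight a little, in proportion to how much
        # it contributed to the error (the gradient of the squared error).
        weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]

print(weights)  # ends up close to [1.0, 2.0]
```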
I first got interested in neural networks when I took a class on them as an undergraduate in 1990. At that time, neural networks showed impressive results on tiny problems, but they really couldn't scale to do real-world important tasks. But I was super excited.

(Laughter)

I felt maybe we just needed more compute power. And the University of Minnesota had a 32-processor machine. I thought, "With more compute power, boy, we could really make neural networks really sing." So I decided to do a senior thesis on parallel training of neural networks, the idea of using processors in a computer or in a computer system to all work toward the same task, that of training neural networks. 32 processors, wow, we've got to be able to do great things with this. But I was wrong.
Turns out we needed about a million times as much computational power as we had in 1990 before we could actually get neural networks to do impressive things. But starting around 2005, thanks to the computing progress of Moore's law, we actually started to have that much computing power, and researchers in a few universities around the world started to see success in using neural networks for a wide variety of different kinds of tasks. I and a few others at Google heard about some of these successes, and we decided to start a project to train very large neural networks. One system that we trained, we trained with 10 million randomly selected frames from YouTube videos. The system developed the capability to recognize all kinds of different objects. And it being YouTube, of course, it developed the ability to recognize cats. YouTube is full of cats.

(Laughter)
But what made that so remarkable is that the system was never told what a cat was. So using just patterns in data, the system honed in on the concept of a cat all on its own. All of this occurred at the beginning of a decade-long string of successes, of using neural networks for a huge variety of tasks, at Google and elsewhere. Many of the things you use every day, things like better speech recognition for your phone, improved understanding of queries and documents for better search quality, better understanding of geographic information to improve maps, and so on.
Around that time, we also got excited about how we could build hardware that was better tailored to the kinds of computations neural networks wanted to do. Neural network computations have two special properties. The first is they're very tolerant of reduced precision. Couple of significant digits, you don't need six or seven. And the second is that all the algorithms are generally composed of different sequences of matrix and vector operations. So if you can build a computer that is really good at low-precision matrix and vector operations but can't do much else, that's going to be great for neural-network computation, even though you can't use it for a lot of other things. And if you build such things, people will find amazing uses for them.
This is the first one we built, TPU v1. "TPU" stands for Tensor Processing Unit. These have been used for many years behind every Google search, for translation, in the DeepMind AlphaGo matches, so Lee Sedol and Ke Jie maybe didn't realize, but they were competing against racks of TPU cards. And we've built a bunch of subsequent versions of TPUs that are even better and more exciting.

But despite all these successes, I think we're still doing many things wrong, and I'll tell you about three key things we're doing wrong, and how we'll fix them.
The first is that most neural networks today are trained to do one thing, and one thing only. You train it for a particular task that you might care deeply about, but it's a pretty heavyweight activity. You need to curate a data set, you need to decide what network architecture you'll use for this problem, you need to initialize the weights with random values, apply lots of computation to make adjustments to the weights. And at the end, if you're lucky, you end up with a model that is really good at that task you care about. But if you do this over and over, you end up with thousands of separate models, each perhaps very capable, but separate for all the different tasks you care about.
But think about how people learn. In the last year, many of us have picked up a bunch of new skills. I've been honing my gardening skills, experimenting with vertical hydroponic gardening. To do that, I didn't need to relearn everything I already knew about plants. I was able to know how to put a plant in a hole, how to pour water, that plants need sun, and leverage that in learning this new skill.

Computers can work the same way, but they don't today. If you train a neural network from scratch, it's effectively like forgetting your entire education every time you try to do something new. That's crazy, right? So instead, I think we can and should be training multitask models that can do thousands or millions of different tasks. Each part of that model would specialize in different kinds of things. And then, if we have a model that can do a thousand things, and the thousand and first thing comes along, we can leverage the expertise we already have in the related kinds of things so that we can more quickly be able to do this new task, just like you, if you're confronted with some new problem, you quickly identify the 17 things you already know that are going to be helpful in solving that problem.
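(One rough sketch of that multitask idea, under the assumption of a single shared trunk of weights plus a small per-task head; the layer sizes, task names and random weights are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared "trunk" reused by every task, plus a small, cheap head per task.
# Adding task number 1,001 only means adding a new head; the knowledge in
# the trunk is leveraged rather than relearned from scratch.
shared_trunk = rng.standard_normal((16, 32)) * 0.1
task_heads = {}  # task name -> head weights

def features(x):
    """Shared representation computed by the trunk (one ReLU layer here)."""
    return np.maximum(0.0, x @ shared_trunk)

def add_task(name, num_outputs):
    task_heads[name] = rng.standard_normal((32, num_outputs)) * 0.1

def predict(name, x):
    return features(x) @ task_heads[name]

add_task("recognize_cats", 2)
add_task("flood_risk", 1)          # a brand-new task reuses the same trunk
print(predict("flood_risk", rng.standard_normal(16)).shape)  # (1,)
```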
Second problem is that most of our models today deal with only a single modality of data -- with images, or text or speech, but not all of these all at once. But think about how you go about the world. You're continuously using all your senses to learn from, react to, figure out what actions you want to take in the world. Makes a lot more sense to do that, and we can build models in the same way. We can build models that take in these different modalities of input data, text, images, speech, but then fuse them together, so that regardless of whether the model sees the word "leopard," sees a video of a leopard or hears someone say the word "leopard," the same response is triggered inside the model: the concept of a leopard can deal with different kinds of input data, even nonhuman inputs, like genetic sequences, 3D clouds of points, as well as images, text and video.
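(A toy sketch of that fusion idea: one encoder per modality mapping its own kind of input into a shared embedding space, then a simple fusion step. The encoders here are random linear maps and the feature sizes are invented; real multimodal models are far more elaborate.)

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 32

# One encoder per modality, each mapping its own kind of input into the same
# shared embedding space, so "leopard" as text, as audio or as pixels can end
# up near the same point.
encoders = {
    "text":  rng.standard_normal((100, EMBED_DIM)) * 0.1,  # e.g. a bag of 100 words
    "audio": rng.standard_normal((40, EMBED_DIM)) * 0.1,   # e.g. 40 audio features
    "image": rng.standard_normal((64, EMBED_DIM)) * 0.1,   # e.g. an 8x8 patch
}

def embed(modality, x):
    return x @ encoders[modality]

def fuse(embeddings):
    """One very simple fusion step: average the per-modality embeddings."""
    return np.mean(embeddings, axis=0)

text_vec = embed("text", rng.standard_normal(100))
image_vec = embed("image", rng.standard_normal(64))
print(fuse([text_vec, image_vec]).shape)  # one joint representation, shape (32,)
```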
The third problem is that today's models are dense. There's a single model, the model is fully activated for every task, for every example that we want to accomplish, whether that's a really simple or a really complicated thing. This, too, is unlike how our own brains work. Different parts of our brains are good at different things, and we're continuously calling upon the pieces of them that are relevant for the task at hand. For example, nervously watching a garbage truck back up towards your car, the part of your brain that thinks about Shakespearean sonnets is probably inactive.

(Laughter)

AI models can work the same way. Instead of a dense model, we can have one that is sparsely activated. So for particular different tasks, we call upon different parts of the model. During training, the model can also learn which parts are good at which things, to continuously identify what parts it wants to call upon in order to accomplish a new task. The advantage of this is we can have a very high-capacity model, but it's very efficient, because we're only calling upon the parts that we need for any given task.
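(A minimal sketch of sparse activation in the mixture-of-experts style: a router scores the available "experts" and only the top few run for a given input. The expert count, dimensions and random weights are placeholders, not anything from Pathways itself.)

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, DIM, TOP_K = 8, 16, 2

# A high-capacity layer made of many experts; a router picks which few to
# run for each input, so most of the model stays inactive on any one example.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

def sparse_layer(x):
    scores = x @ router                      # how relevant is each expert?
    chosen = np.argsort(scores)[-TOP_K:]     # only the top-scoring experts run
    gates = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

print(sparse_layer(rng.standard_normal(DIM)).shape)  # (16,), using 2 of 8 experts
```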
So fixing these three things, I think, will lead to a more powerful AI system: instead of thousands of separate models, train a handful of general-purpose models that can do thousands or millions of things. Instead of dealing with single modalities, deal with all modalities, and be able to fuse them together. And instead of dense models, use sparse, high-capacity models, where we call upon the relevant bits as we need them.

We've been building a system that enables these kinds of approaches, and we've been calling the system "Pathways." So the idea is this model will be able to do thousands or millions of different tasks, and then, we can incrementally add new tasks, and it can deal with all modalities at once, and then incrementally learn new tasks as needed and call upon the relevant bits of the model for different examples or tasks. And we're pretty excited about this, we think this is going to be a step forward in how we build AI systems.
But I also wanted to touch on responsible AI. We clearly need to make sure that this vision of powerful AI systems benefits everyone. These kinds of models raise important new questions about how do we build them with fairness, interpretability, privacy and security, for all users in mind. For example, if we're going to train these models on thousands or millions of tasks, we'll need to be able to train them on large amounts of data. And we need to make sure that data is thoughtfully collected and is representative of different communities and situations all around the world. And data concerns are only one aspect of responsible AI. We have a lot of work to do here.
So in 2018, Google published this set of AI principles by which we think about developing these kinds of technology. And these have helped guide us in how we do research in this space, how we use AI in our products. And I think it's a really helpful and important framing for how to think about these deep and complex questions about how we should be using AI in society. We continue to update these as we learn more. Many of these kinds of principles are active areas of research -- super important area.

Moving from single-purpose systems that kind of recognize patterns in data to these kinds of general-purpose intelligent systems that have a deeper understanding of the world will really enable us to tackle some of the greatest problems humanity faces.
For example, we'll be able to diagnose more disease; we'll be able to engineer better medicines by infusing these models with knowledge of chemistry and physics; we'll be able to advance educational systems by providing more individualized tutoring to help people learn in new and better ways; we'll be able to tackle really complicated issues, like climate change, and perhaps engineering of clean energy solutions. So really, all of these kinds of systems are going to be requiring the multidisciplinary expertise of people all over the world. So connecting AI with whatever field you are in, in order to make progress.

So I've seen a lot of advances in computing, and how computing, over the past decades, has really helped millions of people better understand the world around them. And AI today has the potential to help billions of people. We truly live in exciting times. Thank you.

(Applause)
Chris Anderson: Thank you so much. I want to follow up on a couple things. This is what I heard. Most people's traditional picture of AI is that computers recognize a pattern of information, and with a bit of machine learning, they can get really good at that, better than humans. What you're saying is those patterns are no longer the atoms that AI is working with, that it's much richer-layered concepts that can include all manners of types of things that go to make up a leopard, for example. So what could that lead to? Give me an example of when that AI is working, what do you picture happening in the world in the next five or 10 years that excites you?

Jeff Dean: I think the grand challenge in AI is how do you generalize from a set of tasks you already know how to do to new tasks, as easily and effortlessly as possible. And the current approach of training separate models for everything means you need lots of data about that particular problem, because you're effectively trying to learn everything about the world and that problem, from nothing. But if you can build these systems that already are infused with how to do thousands and millions of tasks, then you can effectively teach them to do a new thing with relatively few examples. So I think that's the real hope, that you could then have a system where you just give it five examples of something you care about, and it learns to do that new task.
CA: You can do a form of self-supervised learning that is based on remarkably little seeding.

JD: Yeah, as opposed to needing 10,000 or 100,000 examples to figure everything in the world out.

CA: Aren't there kind of terrifying unintended consequences possible, from that?

JD: I think it depends on how you apply these systems. It's very clear that AI can be a powerful system for good, or if you apply it in ways that are not so great, it can be a negative consequence. So I think that's why it's important to have a set of principles by which you look at potential uses of AI and really are careful and thoughtful about how you consider applications.
CA: One of the things people worry most about is that, if AI is so good at learning from the world as it is, it's going to carry forward into the future aspects of the world as it is that actually aren't right, right now. And there's obviously been a huge controversy about that recently at Google. Some of those principles of AI development, you've been challenged that you're not actually holding to them. Not really interested to hear about comments on a specific case, but ... are you really committed? How do we know that you are committed to these principles? Is that just PR, or is that real, at the heart of your day-to-day?

JD: No, that is absolutely real. Like, we have literally hundreds of people working on many of these related research issues, because many of those things are research topics in their own right. How do you take data from the real world, that is the world as it is, not as we would like it to be, and how do you then use that to train a machine-learning model and adapt the data bit of the scene or augment the data with additional data so that it can better reflect the values we want the system to have, not the values that it sees in the world?
CA: But you work for Google, Google is funding the research. How do we know that the main values that this AI will build are for the world, and not, for example, to maximize the profitability of an ad model? When you know everything there is to know about human attention, you're going to know so much about the little wriggly, weird, dark parts of us. In your group, are there rules about how you hold off, church-state wall between a sort of commercial push, "You must do it for this purpose," so that you can inspire your engineers and so forth, to do this for the world, for all of us.

JD: Yeah, our research group does collaborate with a number of groups across Google, including the Ads group, the Search group, the Maps group, so we do have some collaboration, but also a lot of basic research that we publish openly. We've published more than 1,000 papers last year in different topics, including the ones you discussed, about fairness, interpretability of the machine-learning models, things that are super important, and we need to advance the state of the art in this in order to continue to make progress to make sure these models are developed safely and responsibly.

CA: It feels like we're at a time when people are concerned about the power of the big tech companies, and it's almost, if there was ever a moment to really show the world that this is being done to make a better future, that is actually key to Google's future, as well as all of ours.

JD: Indeed.

CA: It's very good to hear you come and say that, Jeff. Thank you so much for coming here to TED.

JD: Thank you.

(Applause)