How to get empowered, not overpowered, by AI | Max Tegmark

127,885 views ・ 2018-07-05

TED



00:12
After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.

00:59
I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants.

01:33
So let's take a closer look at our relationship with technology, OK? As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

02:08
My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.

02:31
Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

02:52
It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said. Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

04:02
So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront -- (Laughter) which will soon be automated and disrupted. But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception.

04:59
By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent. Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

05:51
Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what? What do we want the role of humans to be if machines can do everything better and cheaper than us?

06:35
The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?" (Laughter) But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.

07:05
This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.

07:51
But this is going to require a change of strategy because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher. (Laughter) We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag, but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think? (Laughter) It's much better to be proactive rather than reactive; plan ahead and get things right the first time because that might be the only time we'll get.

08:25
But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI. Think through what can go wrong to make sure it goes right.

09:08
So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

09:31
One is that we should avoid an arms race and lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

10:03
Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us. (Applause)

10:23
Alright, now raise your hand if your computer has ever crashed. (Laughter) Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

10:51
And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

11:37
And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

12:04
Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines. (Laughter) And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?

12:55
So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future. So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

13:43
But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?

14:10
Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved. So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.

15:06
So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

15:47
Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.
天文級的龐大。
16:25
So here's our choice.
328
985320
1200
我們的選擇如下:
16:27
We can either be complacent about our future,
329
987880
2320
我們可以對未來感到滿足,
16:31
taking as an article of blind faith
330
991440
2656
帶著盲目的信念,
16:34
that any new technology is guaranteed to be beneficial,
331
994120
4016
相信任何新科技都必然有益,
16:38
and just repeat that to ourselves as a mantra over and over and over again
332
998160
4136
當作真言,不斷對自己 一次又一次地重述,
16:42
as we drift like a rudderless ship towards our own obsolescence.
333
1002320
3680
而我們像無舵的船漂向淘汰。
16:46
Or we can be ambitious --
334
1006920
1880
或是我們可以有野心,
16:49
thinking hard about how to steer our technology
335
1009840
2456
努力去想出如何操控我們的科技,
16:52
and where we want to go with it
336
1012320
1936
以及我們想要去的目的地,
16:54
to create the age of amazement.
337
1014280
1760
創造出驚奇的時代。
16:57
We're all here to celebrate the age of amazement,
338
1017000
2856
我們在這裡讚頌驚奇的時代,
16:59
and I feel that its essence should lie in becoming not overpowered
339
1019880
4440
我覺得精髓應該在於不受控於科技,
17:05
but empowered by our technology.
340
1025240
2616
而是讓它賦予我們力量。
17:07
Thank you.
341
1027880
1376
謝謝。
17:09
(Applause)
342
1029280
3080
(掌聲)