3 myths about the future of work (and why they're not true) | Daniel Susskind

2018-04-05

TED


Automation anxiety has been spreading lately, a fear that in the future, many jobs will be performed by machines rather than human beings, given the remarkable advances that are unfolding in artificial intelligence and robotics. What's clear is that there will be significant change. What's less clear is what that change will look like. My research suggests that the future is both troubling and exciting. The threat of technological unemployment is real, and yet it's a good problem to have. And to explain how I came to that conclusion, I want to confront three myths that I think are currently obscuring our vision of this automated future.
A picture that we see on our television screens, in books, in films, in everyday commentary is one where an army of robots descends on the workplace with one goal in mind: to displace human beings from their work. And I call this the Terminator myth.
Yes, machines displace human beings from particular tasks, but they don't just substitute for human beings. They also complement them in other tasks, making that work more valuable and more important. Sometimes they complement human beings directly, making them more productive or more efficient at a particular task. So a taxi driver can use a satnav system to navigate on unfamiliar roads. An architect can use computer-assisted design software to design bigger, more complicated buildings.
But technological progress doesn't just complement human beings directly. It also complements them indirectly, and it does this in two ways. The first is if we think of the economy as a pie, technological progress makes the pie bigger. As productivity increases, incomes rise and demand grows. The British pie, for instance, is more than a hundred times the size it was 300 years ago. And so people displaced from tasks in the old pie could find tasks to do in the new pie instead.
But technological progress doesn't just make the pie bigger. It also changes the ingredients in the pie. As time passes, people spend their income in different ways, changing how they spread it across existing goods, and developing tastes for entirely new goods, too. New industries are created, new tasks have to be done, and that means often new roles have to be filled. So again, the British pie: 300 years ago, most people worked on farms, 150 years ago, in factories, and today, most people work in offices. And once again, people displaced from tasks in the old bit of pie could tumble into tasks in the new bit of pie instead.
Economists call these effects complementarities, but really that's just a fancy word to capture the different ways that technological progress helps human beings. Resolving this Terminator myth shows us that there are two forces at play: one, machine substitution that harms workers, but also these complementarities that do the opposite.
Now the second myth, what I call the intelligence myth. What do the tasks of driving a car, making a medical diagnosis and identifying a bird at a fleeting glimpse have in common? Well, these are all tasks that until very recently, leading economists thought couldn't readily be automated. And yet today, all of these tasks can be automated. You know, all major car manufacturers have driverless car programs. There's countless systems out there that can diagnose medical problems. And there's even an app that can identify a bird at a fleeting glimpse.
Now, this wasn't simply a case of bad luck on the part of economists. They were wrong, and the reason why they were wrong is very important. They'd fallen for the intelligence myth, the belief that machines have to copy the way that human beings think and reason in order to outperform them. When these economists were trying to figure out what tasks machines could not do, they imagined the only way to automate a task was to sit down with a human being, get them to explain to you how it was they performed a task, and then try and capture that explanation in a set of instructions for a machine to follow.
This view was popular in artificial intelligence at one point, too. I know this because Richard Susskind, who is my dad and my coauthor, wrote his doctorate in the 1980s on artificial intelligence and the law at Oxford University, and he was part of the vanguard. And with a professor called Phillip Capper and a legal publisher called Butterworths, they produced the world's first commercially available artificial intelligence system in the law. This was the home screen design. He assures me this was a cool screen design at the time.

(Laughter)

I've never been entirely convinced. He published it in the form of two floppy disks, at a time where floppy disks genuinely were floppy, and his approach was the same as the economists': sit down with a lawyer, get her to explain to you how it was she solved a legal problem, and then try and capture that explanation in a set of rules for a machine to follow.
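To make that approach concrete, here is a minimal sketch, in Python, of the kind of rule-following system being described. The rules and facts are hypothetical, invented purely for illustration; they are not taken from the actual Butterworths system.

# A hypothetical forward-chaining rule system: capture the expert's
# explanation as rules, then let the machine apply them to facts.
RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"offer_made", "offer_accepted", "consideration_given"}, "contract_formed"),
    ({"contract_formed", "term_breached"}, "claim_for_breach"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"offer_made", "offer_accepted",
             "consideration_given", "term_breached"}))
# The output includes "contract_formed" and "claim_for_breach".

A machine like this is only as good as the rules the expert manages to articulate, which is exactly where the approach runs into trouble.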
In economics, if human beings could explain themselves in this way, the tasks are called routine, and they could be automated. But if human beings can't explain themselves, the tasks are called non-routine, and they're thought to be out of reach. Today, that routine-nonroutine distinction is widespread. Think how often you hear people say to you machines can only perform tasks that are predictable or repetitive, rules-based or well-defined. Those are all just different words for routine.
And go back to those three cases that I mentioned at the start. Those are all classic cases of nonroutine tasks. Ask a doctor, for instance, how she makes a medical diagnosis, and she might be able to give you a few rules of thumb, but ultimately she'd struggle. She'd say it requires things like creativity and judgment and intuition. And these things are very difficult to articulate, and so it was thought these tasks would be very hard to automate. If a human being can't explain themselves, where on earth do we begin in writing a set of instructions for a machine to follow?
Thirty years ago, this view was right, but today it's looking shaky, and in the future it's simply going to be wrong. Advances in processing power, in data storage capability and in algorithm design mean that this routine-nonroutine distinction is diminishingly useful. To see this, go back to the case of making a medical diagnosis. Earlier in the year, a team of researchers at Stanford announced they'd developed a system which can tell you whether or not a freckle is cancerous as accurately as leading dermatologists. How does it work? It's not trying to copy the judgment or the intuition of a doctor. It knows or understands nothing about medicine at all. Instead, it's running a pattern recognition algorithm through 129,450 past cases, hunting for similarities between those cases and the particular lesion in question. It's performing these tasks in an unhuman way, based on the analysis of more possible cases than any doctor could hope to review in their lifetime. It didn't matter that that human being, that doctor, couldn't explain how she'd performed the task.
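The talk doesn't say which algorithm the Stanford team used, so here is a deliberately simple stand-in for "hunting for similarities with past cases": a nearest-neighbor classifier over hypothetical, made-up feature vectors. The real system worked on raw images at far larger scale; this is only a sketch of the idea.

import math

# Hypothetical past cases: (feature vector, label). In the real system these
# would come from 129,450 past lesions; these numbers are invented.
PAST_CASES = [
    ((0.9, 0.2, 0.7), "cancerous"),
    ((0.8, 0.3, 0.6), "cancerous"),
    ((0.2, 0.9, 0.1), "benign"),
    ((0.3, 0.8, 0.2), "benign"),
]

def classify(lesion, k=3):
    """Label a lesion by majority vote among its k most similar past cases."""
    nearest = sorted(PAST_CASES, key=lambda case: math.dist(case[0], lesion))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((0.85, 0.25, 0.65)))  # -> "cancerous"

Notice that nothing in this program encodes what a doctor knows; all of the "knowledge" sits in the stored cases.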
Now, there are those who dwell upon the fact that these machines aren't built in our image. As an example, take IBM's Watson, the supercomputer that went on the US quiz show "Jeopardy!" in 2011, and it beat the two human champions at "Jeopardy!" The day after it won, The Wall Street Journal ran a piece by the philosopher John Searle with the title "Watson Doesn't Know It Won on 'Jeopardy!'" Right, and it's brilliant, and it's true. You know, Watson didn't let out a cry of excitement. It didn't call up its parents to say what a good job it had done. It didn't go down to the pub for a drink. This system wasn't trying to copy the way that those human contestants played, but it didn't matter. It still outperformed them.
Resolving the intelligence myth shows us that our limited understanding about human intelligence, about how we think and reason, is far less of a constraint on automation than it was in the past. What's more, as we've seen, when these machines perform tasks differently to human beings, there's no reason to think that what human beings are currently capable of doing represents any sort of summit in what these machines might be capable of doing in the future.
Now the third myth, what I call the superiority myth. It's often said that those who forget about the helpful side of technological progress, those complementarities from before, are committing something known as the lump of labor fallacy. Now, the problem is the lump of labor fallacy is itself a fallacy, and I call this the lump of labor fallacy fallacy, or LOLFF, for short.
Let me explain. The lump of labor fallacy is a very old idea. It was a British economist, David Schloss, who gave it this name in 1892. He was puzzled to come across a dock worker who had begun to use a machine to make washers, the small metal discs that fasten on the end of screws. And this dock worker felt guilty for being more productive. Now, most of the time, we expect the opposite, that people feel guilty for being unproductive, you know, a little too much time on Facebook or Twitter at work. But this worker felt guilty for being more productive, and asked why, he said, "I know I'm doing wrong. I'm taking away the work of another man." In his mind, there was some fixed lump of work to be divided up between him and his pals, so that if he used this machine to do more, there'd be less left for his pals to do. Schloss saw the mistake. The lump of work wasn't fixed. As this worker used the machine and became more productive, the price of washers would fall, demand for washers would rise, more washers would have to be made, and there'd be more work for his pals to do. The lump of work would get bigger. Schloss called this "the lump of labor fallacy."
And today you hear people talk about the lump of labor fallacy to think about the future of all types of work. There's no fixed lump of work out there to be divided up between people and machines. Yes, machines substitute for human beings, making the original lump of work smaller, but they also complement human beings, and the lump of work gets bigger and changes. But LOLFF. Here's the mistake: it's right to think that technological progress makes the lump of work to be done bigger. Some tasks become more valuable. New tasks have to be done. But it's wrong to think that, necessarily, human beings will be best placed to perform those tasks. And this is the superiority myth.
Yes, the lump of work might get bigger and change, but as machines become more capable, it's likely that they'll take on the extra lump of work themselves. Technological progress, rather than complement human beings, complements machines instead. To see this, go back to the task of driving a car. Today, satnav systems directly complement human beings. They make some human beings better drivers. But in the future, software is going to displace human beings from the driving seat, and these satnav systems, rather than complement human beings, will simply make these driverless cars more efficient, helping the machines instead.
Or go to those indirect complementarities that I mentioned as well. The economic pie may get larger, but as machines become more capable, it's possible that any new demand will fall on goods that machines, rather than human beings, are best placed to produce. The economic pie may change, but as machines become more capable, it's possible that they'll be best placed to do the new tasks that have to be done. In short, demand for tasks isn't demand for human labor. Human beings only stand to benefit if they retain the upper hand in all these complemented tasks, but as machines become more capable, that becomes less likely.
So what do these three myths tell us then? Well, resolving the Terminator myth shows us that the future of work depends upon this balance between two forces: one, machine substitution that harms workers, but also those complementarities that do the opposite. And until now, this balance has fallen in favor of human beings.

But resolving the intelligence myth shows us that that first force, machine substitution, is gathering strength. Machines, of course, can't do everything, but they can do far more, encroaching ever deeper into the realm of tasks performed by human beings. What's more, there's no reason to think that what human beings are currently capable of represents any sort of finishing line, that machines are going to draw to a polite stop once they're as capable as us.

Now, none of this matters so long as those helpful winds of complementarity blow firmly enough, but resolving the superiority myth shows us that that process of task encroachment not only strengthens the force of machine substitution, but it wears down those helpful complementarities too.

Bring these three myths together and I think we can capture a glimpse of that troubling future. Machines continue to become more capable, encroaching ever deeper on tasks performed by human beings, strengthening the force of machine substitution, weakening the force of machine complementarity. And at some point, that balance falls in favor of machines rather than human beings. This is the path we're currently on. I say "path" deliberately, because I don't think we're there yet, but it is hard to avoid the conclusion that this is our direction of travel. That's the troubling part.
Let me say now why I think actually this is a good problem to have. For most of human history, one economic problem has dominated: how to make the economic pie large enough for everyone to live on. Go back to the turn of the first century AD, and if you took the global economic pie and divided it up into equal slices for everyone in the world, everyone would get a few hundred dollars. Almost everyone lived on or around the poverty line. And if you roll forward a thousand years, roughly the same is true. But in the last few hundred years, economic growth has taken off. Those economic pies have exploded in size. Global GDP per head, the value of those individual slices of the pie today, they're about 10,150 dollars. If economic growth continues at two percent, our children will be twice as rich as us. If it continues at a more measly one percent, our grandchildren will be twice as rich as us.
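Those doubling claims are just compound-growth arithmetic, the standard rule of 70: an income growing at g percent a year doubles roughly every 70/g years. A quick check:

import math

for g in (0.02, 0.01):
    years = math.log(2) / math.log(1 + g)
    print(f"{g:.0%} growth doubles incomes in about {years:.0f} years")
# 2% -> about 35 years, roughly one generation: our children.
# 1% -> about 70 years, roughly two generations: our grandchildren.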
By and large, we've solved that traditional economic problem.
Now, technological unemployment, if it does happen, in a strange way will be a symptom of that success: we will have solved one problem -- how to make the pie bigger -- but replaced it with another -- how to make sure that everyone gets a slice.
As other economists have noted, solving this problem won't be easy. Today, for most people, their job is their seat at the economic dinner table, and in a world with less work or even without work, it won't be clear how they get their slice. There's a great deal of discussion, for instance, about various forms of universal basic income as one possible approach, and there's trials underway in the United States and in Finland and in Kenya.

And this is the collective challenge that's right in front of us, to figure out how this material prosperity generated by our economic system can be enjoyed by everyone in a world in which our traditional mechanism for slicing up the pie, the work that people do, withers away and perhaps disappears. Solving this problem is going to require us to think in very different ways. There's going to be a lot of disagreement about what ought to be done, but it's important to remember that this is a far better problem to have than the one that haunted our ancestors for centuries: how to make that pie big enough in the first place.

Thank you very much.

(Applause)