How humans and AI can work together to create better businesses | Sylvain Duranton

30,271 views ・ 2020-02-14

TED



00:00
Translator: Ivana Korom  Reviewer: Krystian Aparta
Chinese translation: Harper Chang  Chinese reviewer: Helen Chang

00:12
Let me share a paradox.

00:16
For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever.

00:39
Why? Because AI operates just like bureaucracies. The essence of bureaucracy is to favor rules and procedures over human judgment. And AI decides solely based on rules -- many rules inferred from past data, but only rules.

01:01
And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy -- I call it "algocracy" -- where AI will take more and more critical decisions by the rules, outside of any human control.

01:20
Is there a real risk? Yes. I'm leading a team of 800 AI specialists. We have deployed over 100 customized AI solutions for large companies around the world. And I see too many corporate executives behaving like bureaucrats from the past. They want to take costly, old-fashioned humans out of the loop and rely only upon AI to take decisions. I call this the "human-zero mindset."

01:54
And why is it so tempting? Because the other route, "Human plus AI," is long, costly and difficult. Business teams, tech teams, data-science teams have to iterate for months to craft exactly how humans and AI can best work together. Long, costly and difficult. But the reward is huge.

02:22
A recent survey from BCG and MIT shows that 18 percent of companies in the world are pioneering AI, making money with it. Those companies focus 80 percent of their AI initiatives on effectiveness and growth, taking better decisions -- not replacing humans with AI to save costs.

02:50
Why is it important to keep humans in the loop? Simply because, left alone, AI can do very dumb things. Sometimes with no consequences, like in this tweet: "Dear Amazon, I bought a toilet seat. Necessity, not desire. I do not collect them, I'm not a toilet-seat addict. No matter how temptingly you email me, I am not going to think, 'Oh, go on, then, one more toilet seat, I'll treat myself.'"

(Laughter)

03:19
Sometimes, with more consequence, like in this other tweet: "Had the same situation with my mother's burial urn."

(Laughter)

03:30
"For months after her death, I got messages from Amazon, saying, 'If you liked that ...'"

(Laughter)

03:37
Sometimes with worse consequences. Take an AI engine rejecting a student application for university. Why? Because it has "learned," on past data, characteristics of students that will pass and fail. Some are obvious, like GPAs. But if, in the past, all students from a given postal code have failed, it is very likely that AI will make this a rule and will reject every student with this postal code, not giving anyone the opportunity to prove the rule wrong. And no one can check all the rules, because advanced AI is constantly learning.

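To make the failure mode concrete, here is a minimal, hypothetical sketch of how such a rule can fall out of past data. The records, the field names and the GPA threshold are all invented for illustration; they are not from the talk. If every past applicant from one postal code happened to fail, a purely data-driven policy ends up rejecting every new applicant from that code, whatever their GPA.

```python
# Hypothetical illustration of a rule "learned" purely from past data.
# Records, field names and the 3.0 GPA threshold are invented for this sketch.
past_applicants = [
    {"postal_code": "75001", "gpa": 3.8, "passed": True},
    {"postal_code": "75001", "gpa": 2.9, "passed": False},
    {"postal_code": "93200", "gpa": 3.9, "passed": False},
    {"postal_code": "93200", "gpa": 3.1, "passed": False},
]

# "Learning": tally historical pass counts per postal code.
pass_counts = {}
for applicant in past_applicants:
    passed, total = pass_counts.get(applicant["postal_code"], (0, 0))
    pass_counts[applicant["postal_code"]] = (passed + applicant["passed"], total + 1)

def decide(applicant):
    passed, total = pass_counts.get(applicant["postal_code"], (0, 0))
    if total > 0 and passed == 0:
        return "reject"  # every past applicant from this postal code failed
    return "accept" if applicant["gpa"] >= 3.0 else "reject"

# A bright new applicant with the "wrong" postal code is rejected outright,
# with no opportunity to prove the inferred rule wrong.
print(decide({"postal_code": "93200", "gpa": 4.0}))  # -> reject
```

A more sophisticated model trained on the same history can encode the same rule, only less visibly.
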
04:18
And if humans are kept out of the room, there comes the algocratic nightmare. Who is accountable for rejecting the student? No one, AI did. Is it fair? Yes. The same set of objective rules has been applied to everyone. Could we reconsider for this bright kid with the wrong postal code? No, algos don't change their mind.

04:42
We have a choice here. Carry on with algocracy, or decide to go to "Human plus AI." And to do this, we need to stop thinking tech first, and we need to start applying the secret formula.

05:00
To deploy "Human plus AI," 10 percent of the effort is to code algos; 20 percent to build tech around the algos -- collecting data, building UI, integrating into legacy systems; but 70 percent, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcome.

05:24
AI fails when cutting short on the 70 percent. The price tag for that can be small: wasting many, many millions of dollars on useless technology. Anyone cares? Or real tragedies: 346 casualties in the recent crashes of two B-737 aircraft, when pilots could not interact properly with a computerized command system.

05:55
For a successful 70 percent, the first step is to make sure that algos are coded by data scientists and domain experts together. Take health care, for example. One of our teams worked on a new drug with a slight problem. When taking their first dose, some patients, very few, have heart attacks. So all patients, when taking their first dose, have to spend one day in hospital, for monitoring, just in case.

06:26
Our objective was to identify patients who were at zero risk of heart attacks, who could skip the day in hospital. We used AI to analyze data from clinical trials, to correlate ECG signal, blood composition, biomarkers with the risk of heart attack. In one month, our model could flag 62 percent of patients at zero risk. They could skip the day in hospital.

06:57
Would you be comfortable staying at home for your first dose if the algo said so?

(Laughter)

07:03
Doctors were not. What if we had false negatives, meaning people who are told by AI they can stay at home, and die?

(Laughter)

07:14
There started our 70 percent. We worked with a team of doctors to check the medical logic of each variable in our model. For instance, we were using the concentration of a liver enzyme as a predictor, for which the medical logic was not obvious. The statistical signal was quite strong. But what if it was a bias in our sample? That predictor was taken out of the model. We also took out predictors for which experts told us they cannot be rigorously measured by doctors in real life.

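A minimal sketch of this expert-review step follows. The predictor names and the doctors' answers are invented, and the talk does not describe the team's actual tooling; the point is only that predictors the doctors cannot back with medical logic, or cannot measure rigorously in practice, are dropped before the model is retrained.

```python
# Hypothetical sketch of the expert-review step: predictor names and the doctors'
# answers are invented; this is not the team's actual tooling.
candidate_predictors = ["ecg_qt_interval", "troponin", "hemoglobin", "liver_enzyme_alt"]

# Feedback collected from the team of doctors for each candidate predictor.
expert_review = {
    "ecg_qt_interval":  {"medical_logic_confirmed": True,  "measurable_in_practice": True},
    "troponin":         {"medical_logic_confirmed": True,  "measurable_in_practice": True},
    "hemoglobin":       {"medical_logic_confirmed": True,  "measurable_in_practice": True},
    # Strong statistical signal, but no obvious medical logic -- possibly a sample bias.
    "liver_enzyme_alt": {"medical_logic_confirmed": False, "measurable_in_practice": True},
}

def keep(predictor):
    review = expert_review[predictor]
    return review["medical_logic_confirmed"] and review["measurable_in_practice"]

approved_predictors = [p for p in candidate_predictors if keep(p)]
print(approved_predictors)  # liver_enzyme_alt is dropped before the model is retrained
```

Only patients flagged by the retrained model, under the agreed medical protocol, would then skip the monitoring day.
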
07:50
After four months, we had a model and a medical protocol. They both got approved by medical authorities in the US last spring, resulting in far less stress for half of the patients and better quality of life. And an expected upside on sales over 100 million for that drug.

08:11
Seventy percent, weaving AI with team and processes, also means building powerful interfaces for humans and AI to solve the most difficult problems together.

08:25
Once, we got challenged by a fashion retailer. "We have the best buyers in the world. Could you build an AI engine that would beat them at forecasting sales? At telling how many high-end, light-green, men XL shirts we need to buy for next year? At predicting better what will sell or not than our designers."

08:50
Our team trained a model in a few weeks, on past sales data, and the competition was organized with human buyers. Result? AI wins, reducing forecasting errors by 25 percent.

09:05
Human-zero champions could have tried to implement this initial model and create a fight with all human buyers. Have fun. But we knew that human buyers had insights on fashion trends that could not be found in past data.

09:23
There started our 70 percent. We went for a second test, where human buyers were reviewing quantities suggested by AI and could correct them if needed. Result? Humans using AI ... lose. Seventy-five percent of the corrections made by a human were reducing accuracy.

09:49
Was it time to get rid of human buyers? No. It was time to recreate a model where humans would not try to guess when AI is wrong, but where AI would take real input from human buyers.

10:06
We fully rebuilt the model and went away from our initial interface, which was, more or less, "Hey, human! This is what I forecast, correct whatever you want," and moved to a much richer one, more like, "Hey, humans! I don't know the trends for next year. Could you share with me your top creative bets?" "Hey, humans! Could you help me quantify those few big items? I cannot find any good comparables in the past for them."

10:38
Result? "Human plus AI" wins, reducing forecast errors by 50 percent. It took one year to finalize the tool. Long, costly and difficult. But profits and benefits were in excess of 100 million of savings per year for that retailer.

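As a rough sketch of the interface shift described above: instead of letting buyers overwrite the forecast after the fact, the model asks them for input up front, their top creative bets and quantities for items with no good historical comparables, and folds those answers into its own forecast. The item names, numbers and blending rule below are invented; this is not the retailer's actual tool.

```python
# Hypothetical sketch of "AI takes real input from human buyers" (invented names/logic).
from statistics import mean

past_sales = {  # units sold in past seasons
    "classic_white_shirt": [1200, 1150, 1300],
    "light_green_xl_shirt": [300, 280, 350],
}

# Input the model explicitly asks buyers for, instead of post-hoc corrections:
buyer_creative_bets = {"light_green_xl_shirt": 1.4}  # "top creative bet": expected uplift
buyer_quantity_estimates = {"statement_coat": 400}   # items with no good past comparables

def human_plus_ai_forecast(item):
    history = past_sales.get(item)
    if not history:
        # No comparable history: rely on the buyer's quantified estimate.
        return buyer_quantity_estimates.get(item, 0)
    # History exists: scale the statistical forecast by the buyer's trend bet, if any.
    return mean(history) * buyer_creative_bets.get(item, 1.0)

print(human_plus_ai_forecast("classic_white_shirt"))   # history only
print(human_plus_ai_forecast("light_green_xl_shirt"))  # history scaled by the buyer's bet
print(human_plus_ai_forecast("statement_coat"))        # buyer estimate, no comparables
```

The design point is the direction of the interaction: the humans feed the model what past data cannot contain, rather than second-guessing its output.
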
11:03
Seventy percent on very sensitive topics also means humans have to decide what is right or wrong and define rules for what AI can do or not -- like setting caps on prices to prevent pricing engines from charging outrageously high prices to uneducated customers who would accept them. Only humans can define those boundaries -- there is no way AI can find them in past data.

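A minimal sketch of that kind of guardrail, with invented product names and numbers (the talk does not specify how the caps were implemented): whatever price the engine proposes, a human-defined cap clamps it before it reaches the customer.

```python
# Hypothetical price-cap guardrail around a pricing engine's output (invented values).
HUMAN_DEFINED_PRICE_CAPS = {"travel_insurance": 120.0, "roadside_assistance": 45.0}

def guarded_price(product, engine_price):
    """Clamp the engine's proposed price to the human-defined ceiling, if one exists."""
    cap = HUMAN_DEFINED_PRICE_CAPS.get(product)
    return min(engine_price, cap) if cap is not None else engine_price

# The engine may learn that some customers would accept an outrageous price;
# the cap is a boundary it cannot discover, or cross, on its own.
print(guarded_price("travel_insurance", 480.0))  # -> 120.0
```
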
11:31
Some situations are in the gray zone. We worked with a health insurer who developed an AI engine to identify, among its clients, people who are just about to go to hospital, to sell them premium services. And the problem is, some prospects were called by the commercial team while they did not know yet they would have to go to hospital very soon.

11:57
You are the CEO of this company. Do you stop that program? Not an easy question. And to tackle this question, some companies are building teams, defining ethical rules and standards to help business and tech teams set limits between personalization and manipulation, customization of offers and discrimination, targeting and intrusion.

12:24
I am convinced that in every company, applying AI where it really matters has massive payback. Business leaders need to be bold and select a few topics, and for each of them, mobilize 10, 20, 30 people from their best teams -- tech, AI, data science, ethics -- and go through the full 10-, 20-, 70-percent cycle of "Human plus AI," if they want to land AI effectively in their teams and processes. There is no other way.

12:58
Citizens in developed economies already fear algocracy. Seven thousand were interviewed in a recent survey. More than 75 percent expressed real concerns on the impact of AI on the workforce, on privacy, on the risk of a dehumanized society. Pushing algocracy creates a real risk of severe backlash against AI within companies or in society at large.

13:29
"Human plus AI" is our only option to bring the benefits of AI to the real world. And in the end, winning organizations will invest in human knowledge, not just AI and data. Recruiting, training, rewarding human experts. Data is said to be the new oil, but believe me, human knowledge will make the difference, because it is the only derrick available to pump the oil hidden in the data.

14:04
Thank you.

(Applause)