How humans and AI can work together to create better businesses | Sylvain Duranton

29,531 views ・ 2020-02-14

TED


00:00
Translator: Ivana Korom
Reviewer: Krystian Aparta

00:12
Let me share a paradox. For the last 10 years, many companies have been trying to become less bureaucratic, to have fewer central rules and procedures, more autonomy for their local teams to be more agile. And now they are pushing artificial intelligence, AI, unaware that cool technology might make them more bureaucratic than ever.

00:39
Why? Because AI operates just like bureaucracies. The essence of bureaucracy is to favor rules and procedures over human judgment. And AI decides solely based on rules. Many rules inferred from past data, but only rules. And if human judgment is not kept in the loop, AI will bring a terrifying form of new bureaucracy -- I call it "algocracy" -- where AI will take more and more critical decisions by the rules, outside of any human control.

01:20
Is there a real risk? Yes. I'm leading a team of 800 AI specialists. We have deployed over 100 customized AI solutions for large companies around the world. And I see too many corporate executives behaving like bureaucrats from the past. They want to take costly, old-fashioned humans out of the loop and rely only upon AI to take decisions. I call this the "human-zero mindset."

01:54
And why is it so tempting? Because the other route, "Human plus AI," is long, costly and difficult. Business teams, tech teams and data-science teams have to iterate for months to craft exactly how humans and AI can best work together. Long, costly and difficult. But the reward is huge.

02:22
A recent survey from BCG and MIT shows that 18 percent of companies in the world are pioneering AI, making money with it. Those companies focus 80 percent of their AI initiatives on effectiveness and growth, taking better decisions -- not replacing humans with AI to save costs.

02:50
Why is it important to keep humans in the loop? Simply because, left alone, AI can do very dumb things. Sometimes with no consequences, like in this tweet: "Dear Amazon, I bought a toilet seat. Necessity, not desire. I do not collect them, I'm not a toilet-seat addict. No matter how temptingly you email me, I am not going to think, 'Oh, go on, then, one more toilet seat, I'll treat myself.'"

(Laughter)

03:19
Sometimes, with more consequence, like in this other tweet: "Had the same situation with my mother's burial urn."

(Laughter)

"For months after her death, I got messages from Amazon, saying, 'If you liked that ...'"

(Laughter)

03:37
Sometimes with worse consequences. Take an AI engine rejecting a student application for university. Why? Because it has "learned," on past data, characteristics of students that will pass and fail. Some are obvious, like GPAs. But if, in the past, all students from a given postal code have failed, it is very likely that AI will make this a rule and will reject every student with this postal code, not giving anyone the opportunity to prove the rule wrong. And no one can check all the rules, because advanced AI is constantly learning.
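
A minimal sketch of the failure mode just described, using entirely made-up admissions records: a rule learner that memorizes historical outcomes per postal code turns a 100 percent failure rate in one code into a blanket rejection that no individual applicant can override. The data, thresholds and function names below are hypothetical.

# Hypothetical illustration: a "rule" inferred purely from past data.
# Each record is (gpa, postal_code, passed). The learner keeps any
# postal code whose historical pass rate is zero as a hard reject rule.

from collections import defaultdict

past_records = [
    (3.8, "75001", True),
    (3.1, "75001", True),
    (2.9, "13015", False),
    (3.9, "13015", False),   # even strong students from this code failed
    (3.5, "69003", True),
]

def learn_reject_rules(records):
    """Collect postal codes where no past student succeeded."""
    outcomes = defaultdict(list)
    for _, code, passed in records:
        outcomes[code].append(passed)
    return {code for code, results in outcomes.items() if not any(results)}

reject_codes = learn_reject_rules(past_records)

def decide(gpa, postal_code):
    # The learned rule fires before any human judgment can intervene.
    if postal_code in reject_codes:
        return "reject"
    return "accept" if gpa >= 3.0 else "reject"

# A bright applicant with the "wrong" postal code is rejected outright.
print(decide(4.0, "13015"))   # -> reject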

04:18
And if humans are kept out of the room, there comes the algocratic nightmare. Who is accountable for rejecting the student? No one, AI did. Is it fair? Yes. The same set of objective rules has been applied to everyone. Could we reconsider for this bright kid with the wrong postal code? No, algos don't change their mind.

04:42
We have a choice here. Carry on with algocracy, or decide to go to "Human plus AI." And to do this, we need to stop thinking tech first, and we need to start applying the secret formula.

05:00
To deploy "Human plus AI," 10 percent of the effort is to code algos; 20 percent to build tech around the algos: collecting data, building UI, integrating into legacy systems. But 70 percent, the bulk of the effort, is about weaving together AI with people and processes to maximize real outcome.

05:24
AI fails when cutting short on the 70 percent. The price tag for that can be small: wasting many, many millions of dollars on useless technology. Anyone cares? Or real tragedies: 346 casualties in the recent crashes of two B-737 aircraft, when pilots could not interact properly with a computerized command system.

05:55
For a successful 70 percent, the first step is to make sure that algos are coded by data scientists and domain experts together. Take health care, for example. One of our teams worked on a new drug with a slight problem. When taking their first dose, some patients, very few, have heart attacks. So all patients, when taking their first dose, have to spend one day in hospital, for monitoring, just in case. Our objective was to identify patients who were at zero risk of heart attacks, who could skip the day in hospital.

06:34
We used AI to analyze data from clinical trials, to correlate ECG signal, blood composition and biomarkers with the risk of heart attack. In one month, our model could flag 62 percent of patients at zero risk. They could skip the day in hospital. Would you be comfortable staying at home for your first dose if the algo said so?

(Laughter)

07:03
Doctors were not. What if we had false negatives, meaning people who are told by AI they can stay at home, and die?

(Laughter)

07:14
There started our 70 percent. We worked with a team of doctors to check the medical logic of each variable in our model. For instance, we were using the concentration of a liver enzyme as a predictor, for which the medical logic was not obvious. The statistical signal was quite strong, but what if it was a bias in our sample? That predictor was taken out of the model. We also took out predictors for which experts told us they cannot be rigorously measured by doctors in real life.
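
One way to picture the review loop described here (the feature names and verdicts below are illustrative, not the team's actual pipeline): each candidate predictor carries both its statistical signal and the doctors' verdicts, and only features that are medically plausible and measurable in real practice survive into the model.

# Illustrative sketch of expert-in-the-loop feature vetting.
# A feature is kept only if its medical logic is confirmed AND it can be
# rigorously measured in practice, however strong its statistical signal.

from dataclasses import dataclass

@dataclass
class Predictor:
    name: str
    signal_strength: float      # e.g. correlation with heart-attack risk
    medically_plausible: bool   # verdict from the doctors
    measurable_in_practice: bool

candidates = [
    Predictor("ecg_qt_interval", 0.61, True, True),
    Predictor("biomarker_troponin", 0.58, True, True),
    Predictor("liver_enzyme_concentration", 0.47, False, True),  # strong signal, unclear logic
    Predictor("self_reported_stress", 0.33, True, False),        # not rigorously measurable
]

def vet(predictors):
    """Split candidate features into kept and dropped, per the expert review."""
    kept, dropped = [], []
    for p in predictors:
        if p.medically_plausible and p.measurable_in_practice:
            kept.append(p.name)
        else:
            dropped.append(p.name)
    return kept, dropped

kept, dropped = vet(candidates)
print("kept:", kept)       # only features the doctors signed off on
print("dropped:", dropped) # suspect or impractical predictors removed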

07:50
After four months, we had a model and a medical protocol. They both got approved by medical authorities in the US last spring, resulting in far less stress for half of the patients and better quality of life. And an expected upside on sales of over 100 million for that drug.

08:11
Seventy percent, weaving AI with team and processes, also means building powerful interfaces for humans and AI to solve the most difficult problems together.

08:25
Once, we got challenged by a fashion retailer: "We have the best buyers in the world. Could you build an AI engine that would beat them at forecasting sales? At telling how many high-end, light-green, men XL shirts we need to buy for next year? At predicting better what will sell or not than our designers?"

08:50
Our team trained a model in a few weeks, on past sales data, and the competition was organized with human buyers. Result? AI wins, reducing forecasting errors by 25 percent.

09:05
Human-zero champions could have tried to implement this initial model and create a fight with all human buyers. Have fun. But we knew that human buyers had insights on fashion trends that could not be found in past data.

09:23
There started our 70 percent. We went for a second test, where human buyers were reviewing quantities suggested by AI and could correct them if needed. Result? Humans using AI ... lose. Seventy-five percent of the corrections made by a human were reducing accuracy.

09:49
Was it time to get rid of human buyers? No. It was time to recreate a model where humans would not try to guess when AI is wrong, but where AI would take real input from human buyers. We fully rebuilt the model and went away from our initial interface, which was, more or less, "Hey, human! This is what I forecast, correct whatever you want," and moved to a much richer one, more like, "Hey, humans! I don't know the trends for next year. Could you share with me your top creative bets?" "Hey, humans! Could you help me quantify those few big items? I cannot find any good comparables in the past for them."
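
A rough sketch of the interface shift described above, with invented item names and quantities: instead of letting buyers overwrite finished forecasts, the forecast asks for two structured human inputs up front (creative bets on trends, and quantities for items with no historical comparables) and folds them into its own estimate.

# Hypothetical sketch: AI asks humans for structured input up front,
# rather than humans second-guessing finished forecasts.

historical_average = {"classic_white_shirt": 12000, "navy_blazer": 4500}

def forecast(item, human_creative_bets, human_quantities_for_new_items):
    """Combine past-sales statistics with buyer input the data cannot contain."""
    if item not in historical_average:
        # No comparable in past data: use the buyers' own quantification.
        return human_quantities_for_new_items[item]
    base = historical_average[item]
    # Buyers express trend bets as multipliers (e.g. +10% on white shirts).
    return round(base * human_creative_bets.get(item, 1.0))

creative_bets = {"classic_white_shirt": 1.1}          # "white is back next year"
new_item_quantities = {"light_green_xl_shirt": 800}   # no past comparable exists

print(forecast("classic_white_shirt", creative_bets, new_item_quantities))   # -> 13200
print(forecast("light_green_xl_shirt", creative_bets, new_item_quantities))  # -> 800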

10:38
Result? "Human plus AI" wins, reducing forecast errors by 50 percent. It took one year to finalize the tool. Long, costly and difficult. But profits and benefits were in excess of 100 million of savings per year for that retailer.

11:03
Seventy percent on very sensitive topics also means humans have to decide what is right or wrong and define rules for what AI can do or not, like setting caps on prices to prevent pricing engines from charging outrageously high prices to uneducated customers who would accept them. Only humans can define those boundaries -- there is no way AI can find them in past data.
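
The price-cap example can be pictured as a thin, human-defined guardrail applied after the pricing engine runs; the product name and cap value below are invented for illustration.

# Illustrative guardrail: a human-set ceiling applied to whatever the
# pricing engine proposes. The engine cannot learn this boundary from
# past data, because customers who accepted outrageous prices look like
# "good" outcomes in that data.

HUMAN_PRICE_CAPS = {"home_insurance_monthly": 120.0}  # set by people, not learned

def capped_price(product, engine_price):
    """Return the engine's price, clipped to the human-defined cap if one exists."""
    cap = HUMAN_PRICE_CAPS.get(product)
    if cap is not None and engine_price > cap:
        return cap
    return engine_price

# The engine suggests an exploitative price; the human rule overrides it.
print(capped_price("home_insurance_monthly", 310.0))  # -> 120.0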

11:31
Some situations are in the gray zone. We worked with a health insurer who developed an AI engine to identify, among his clients, people who are just about to go to hospital, to sell them premium services. And the problem is, some prospects were called by the commercial team while they did not know yet they would have to go to hospital very soon.

11:57
You are the CEO of this company. Do you stop that program? Not an easy question. And to tackle this question, some companies are building teams, defining ethical rules and standards to help business and tech teams set limits between personalization and manipulation, customization of offers and discrimination, targeting and intrusion.

12:24
I am convinced that in every company, applying AI where it really matters has massive payback. Business leaders need to be bold and select a few topics, and for each of them, mobilize 10, 20, 30 people from their best teams -- tech, AI, data science, ethics -- and go through the full 10-, 20-, 70-percent cycle of "Human plus AI," if they want to land AI effectively in their teams and processes. There is no other way.

12:58
Citizens in developed economies already fear algocracy. Seven thousand were interviewed in a recent survey. More than 75 percent expressed real concerns on the impact of AI on the workforce, on privacy, on the risk of a dehumanized society. Pushing algocracy creates a real risk of severe backlash against AI within companies or in society at large.

13:29
"Human plus AI" is our only option to bring the benefits of AI to the real world. And in the end, winning organizations will invest in human knowledge, not just AI and data: recruiting, training, rewarding human experts. Data is said to be the new oil, but believe me, human knowledge will make the difference, because it is the only derrick available to pump the oil hidden in the data.

14:04
Thank you.

14:05
(Applause)