Your Right to Repair AI Systems | Rumman Chowdhury | TED

43,621 views ・ 2024-06-05

TED


請雙擊下方英文字幕播放視頻。

譯者: C Leung 審譯者: 麗玲 辛
00:04
I want to tell you a story
0
4543
1835
我想告訴你一個
00:06
about artificial intelligence and farmers.
1
6419
4171
關於人工智慧和農民的故事。
00:10
Now, what a strange combination, right?
2
10966
3044
多麼奇怪的組合,對吧?
00:14
Two topics could not sound more different from each other.
3
14010
4088
兩個主題聽起來風馬牛不相及,
00:18
But did you know that modern farming actually involves a lot of technology?
4
18431
5047
但你知道現代農業 其實涉及很多科技嗎?
00:23
So computer vision is used to predict crop yields.
5
23478
3796
電腦視覺用於預測農產量、
00:27
And artificial intelligence is used to find,
6
27315
2836
人工智慧用於發現、 識別和消滅昆蟲,
00:30
identify and get rid of insects.
7
30151
2878
00:33
Predictive analytics helps figure out extreme weather conditions
8
33071
4129
預測分析有助於找出 乾旱或颶風等
00:37
like drought or hurricanes.
9
37242
1960
極端天氣情況。
00:39
But this technology is also alienating to farmers.
10
39744
4547
但科技也會讓農民感到格格不入。
00:44
And this all came to a head in 2017
11
44291
3003
在 2017 年,拖拉機公司 約翰·迪爾 (John Deere)
00:47
with the tractor company John Deere when they introduced smart tractors.
12
47335
4922
推出智慧拖拉機時, 一切都到達緊要關頭。
00:52
So before then, if a farmer's tractor broke,
13
52299
2961
在以前,如果農民的拖拉機壞了,
00:55
they could just repair it themselves or take it to a mechanic.
14
55302
3920
他們可以自行或帶給技工修理。
00:59
Well, the company actually made it illegal
15
59222
2961
而該公司實際上
01:02
for farmers to fix their own equipment.
16
62225
2586
將農民自行修理設備視為非法。
01:04
You had to use a licensed technician
17
64811
2836
農民必須找有執照的技術人員維修,
01:07
and farmers would have to wait for weeks
18
67689
2628
即使當農作物腐爛、害蟲侵佔,
01:10
while their crops rot and pests took over.
19
70317
3044
他們也得等上數星期。
01:14
So they took matters into their own hands.
20
74279
3170
所以他們親力親為。
01:17
Some of them learned to program,
21
77490
1544
有些人學會了編程,
01:19
and they worked with hackers to create patches to repair their own systems.
22
79075
4922
並且與駭客合作, 建立修補程序來修復自己的系統。
01:23
In 2022,
23
83997
1418
2022 年,
01:25
at one of the largest hacker conferences in the world, DEFCON,
24
85457
3712
在世上最大的駭客會議 DEFCON 上,
01:29
a hacker named Sick Codes and his team
25
89210
2586
名為 Sick Codes 的 駭客和他的團隊
01:31
showed everybody how to break into a John Deere tractor,
26
91838
3212
向大家展示了如何入侵 約翰迪爾拖拉機,
01:35
showing that, first of all, the technology was vulnerable,
27
95091
3629
首先指出,該技術易受攻擊,
01:38
but also that you can and should own your own equipment.
28
98762
4337
同時也表明,你可以、 也應該擁有自己的設備。
01:43
To be clear, this is illegal,
29
103683
2836
明確地說,這是非法的,
01:46
but there are people trying to change that.
30
106561
2795
但有些人想改變這點。
01:49
Now that movement is called the “right to repair.”
31
109689
3796
這個行動被稱為「維修權」。
01:53
The right to repair goes something like this.
32
113526
2169
維修權是這樣:
01:55
If you own a piece of technology,
33
115737
1668
如果你擁有一件科技產品,
01:57
it could be a tractor, a smart toothbrush,
34
117405
2544
可能是拖拉機 、智慧牙刷 、
01:59
a washing machine,
35
119991
1377
洗衣機,
02:01
you should have the right to repair it if it breaks.
36
121368
3086
萬一損壞,你應有權去維修它。
02:05
So why am I telling you this story?
37
125246
2253
那我為何要告訴你這故事?
02:08
The right to repair needs to extend to artificial intelligence.
38
128541
5214
維修權需要擴展到人工智慧。
02:14
Now it seems like every week
39
134214
2127
現在,似乎每週
02:16
there is a new and mind-blowing innovation in AI.
40
136383
3336
都有令人振奮的 AI 創新。
02:19
But did you know that public confidence is actually declining?
41
139719
4880
但你知道公眾信心其實正下降嗎?
02:24
A recent Pew poll showed that more Americans are concerned
42
144599
4963
最近的皮尤民意調查 (Pew) 顯示,
美國人對該技術的擔心多於興奮。
02:29
than they are excited about the technology.
43
149604
2503
02:32
This is echoed throughout the world.
44
152148
2211
世界各地都有同感。
02:34
The World Risk Poll shows
45
154401
1418
世界風險調查顯示,
02:35
that respondents from Central and South America and Africa
46
155860
3462
來自中南美洲和非洲的受訪者,
02:39
all said that they felt AI would lead to more harm than good for their people.
47
159322
6090
大都表示認為 AI 對人們是弊大於利。
02:46
As a social scientist and an AI developer,
48
166287
2503
作為社會科學家和 AI 開發者,
02:48
this frustrates me.
49
168790
1585
這令我沮喪。
02:50
I'm a tech optimist
50
170417
1293
我是技術樂觀主義者,
02:51
because I truly believe this technology can lead to good.
51
171751
4338
因為我真誠相信 這種技術可以帶來益處。
02:56
So what's the disconnect?
52
176464
1919
那當中障礙是什麼?
02:58
Well, I've talked to hundreds of people over the last few years.
53
178758
3754
我在過去幾年裡和數百人談過,
03:02
Architects and scientists, journalists and photographers,
54
182554
3670
建築師和科學家 、 記者和攝影師、
03:06
ride-share drivers and doctors,
55
186224
1710
共乘車司機和醫生,
03:07
and they all say the same thing.
56
187976
2961
他們都有同樣想法。
03:12
People feel like an afterthought.
57
192272
3795
他們覺得自己後知後覺。
03:17
They all know that their data is harvested often without their permission
58
197485
4338
他們都清楚自己的數據 通常在未經許可下被收集,
03:21
to create these sophisticated systems.
59
201823
2461
用來建立這些複雜的系統。
03:24
They know that these systems are determining their life opportunities.
60
204325
4171
他們知道這些系統 正決定他們的命運。
03:28
They also know that nobody ever bothered to ask them
61
208496
3545
他們也知道, 沒有人會費心問他們
03:32
how the system should be built,
62
212083
1502
系統該如何建構,
03:33
and they certainly have no idea where to go if something goes wrong.
63
213585
5964
他們當然不知道如果問題出現時, 應該去找誰負責。
03:40
We may not own AI systems,
64
220383
2336
我們也許不能擁有 AI 系統,
03:42
but they are slowly dominating our lives.
65
222761
2794
但它們正慢慢地主導我們的生活。
03:45
We need a better feedback loop
66
225597
1501
我們需要更好的反饋迴路,
03:47
between the people who are making these systems,
67
227140
2961
在系統製造者
03:50
and the people who are best determined to tell us
68
230101
3337
及普羅大眾之間架設渠道,
03:53
how these AI systems should interact in their world.
69
233480
3628
去使 AI 系統更好地融入人們生活。
03:57
One step towards this is a process called red teaming.
70
237609
3837
邁向這目標的一步, 是靠稱為紅隊演練的過程。
04:01
Now, red teaming is a practice that was started in the military,
71
241821
3003
紅隊演練是從軍事而來的一種做法,
04:04
and it's used in cybersecurity.
72
244866
1919
應用於網絡安全。
04:06
In a traditional red-teaming exercise,
73
246826
2628
在傳統的紅隊演練過程中,
04:09
external experts are brought in to break into a system,
74
249454
3920
外部專家被招募去入侵系統,
04:13
sort of like what Sick Codes did with tractors, but legal.
75
253374
4338
如同 Sick Codes 對拖拉機 所做的事, 但是合法。
04:17
So red teaming acts as a way of testing your defenses
76
257712
3670
因此,紅隊演練可以作為 測試防禦的方法,
04:21
and when you can figure out where something will go wrong,
77
261424
3545
當你能找出問題所在時,
04:25
you can figure out how to fix it.
78
265011
3003
你就能找出解決方案。
04:28
But when AI systems go rogue,
79
268056
2377
但當 AI 系統變得異常,
04:30
it's more than just a hacker breaking in.
80
270433
2920
它不只是駭客入侵般簡單。
04:33
The model could malfunction or misrepresent reality.
81
273394
3754
該模型可能發生故障或歪曲現實。
04:37
So, for example, not too long ago,
82
277190
2044
例如不久之前,我們看到 AI 系統
04:39
we saw an AI system attempting diversity
83
279234
2627
使用不準確的歷史照片 來嘗試展現多元。
04:41
by showing historically inaccurate photos.
84
281903
3503
04:45
Anybody with a basic understanding of Western history
85
285406
2670
任何對西方歷史 有基本了解的人都知道,
04:48
could have told you that neither the Founding Fathers
86
288076
2502
美國開國元勛或納粹時代的 士兵都不會是黑人。
04:50
nor Nazi-era soldiers would have been Black.
87
290620
2169
04:54
In that case, who qualifies as an expert?
88
294123
3629
在這種情況下,誰有資格成為專家?
04:58
You.
89
298670
1168
你。
05:00
I'm working with thousands of people all around the world
90
300380
2836
我正與世界各地成千上萬的人合作,
05:03
on large and small red-teaming exercises,
91
303258
2252
進行各種大小的紅隊演練,
05:05
and through them we found and fixed mistakes in AI models.
92
305552
4629
通過這些演練,我們發現 並修正了 AI 模型的錯誤。
05:10
We also work with some of the biggest tech companies in the world:
93
310181
3587
我們還與世界上 一些最大的科技公司合作:
05:13
OpenAI, Meta, Anthropic, Google.
94
313810
2753
OpenAI、Meta、 Anthropic、谷歌。
05:16
And through this, we've made models work better for more people.
95
316604
4630
通過這些演練,我們使模型 更適合給更多人使用。
05:22
Here's a bit of what we've learned.
96
322277
2168
以下是我們所學到的。
05:24
We partnered with the Royal Society in London to do a scientific,
97
324988
3670
我們與倫敦皇家學會合作,
和疾病科學家舉辦一場關於 科學、錯誤和虛假信息的活動。
05:28
mis- and disinformation event with disease scientists.
98
328658
3545
05:32
What these scientists found
99
332203
1335
這些科學家發現到,
05:33
is that AI models actually had a lot of protections
100
333580
2794
AI 模型實際上具有
很多針對新冠肺炎 錯誤信息的保護措施。
05:36
against COVID misinformation.
101
336416
2294
05:38
But for other diseases like measles, mumps and the flu,
102
338751
3546
但對於麻疹,腮腺炎和 流感等其他疾病,
05:42
the same protections didn't apply.
103
342297
2460
同樣的保護措施並不適用。
05:44
We reported these changes,
104
344757
1293
我們舉報該有這些改變,
它們已被修復, 現在我們都得到更好的保護,
05:46
they’re fixed and now we are all better protected
105
346050
3170
05:49
against scientific mis- and disinformation.
106
349220
2670
免受科學錯誤和虛假信息的侵害。
05:52
We did a really similar exercise with architects at Autodesk University,
107
352724
4838
我們與 Autodesk 大學的建築師 進行了一項非常類似的練習,
05:57
and we asked them a simple question:
108
357562
1877
我們問了個簡單問題:
05:59
Will AI put them out of a job?
109
359439
2961
AI 會令他們失業嗎?
06:02
Or more specifically,
110
362442
2169
更具體說,
06:04
could they imagine a modern AI system
111
364611
2043
他們能否想像一個能夠設計
06:06
that would be able to design the specs of a modern art museum?
112
366696
3670
現代藝術博物館規格的 現代 AI 系統?
06:10
The answer, resoundingly, was no.
113
370408
3962
答案清脆利落,是否定的。
06:14
Here's why, architects do more than just draw buildings.
114
374787
3629
原因是建築師不單單是繪製建築物,
06:18
They have to understand physics and material science.
115
378458
3336
他們必須了解物理和材料科學。
06:21
They have to know building codes,
116
381836
1585
他們必須清楚建築規範,
06:23
and they have to do that
117
383463
1251
並且同時,還要能 喚起情感,引發大眾共鳴。
06:24
while making something that evokes emotion.
118
384756
2794
06:28
What the architects wanted was an AI system
119
388426
2169
建築師想要的是
與他們互動的 AI 系統, 可以給他們反饋,
06:30
that interacted with them, that would give them feedback,
120
390595
2753
06:33
maybe proactively offer design recommendations.
121
393389
2753
或主動提供設計建議,
06:36
And today's AI systems, not quite there yet.
122
396142
3670
而現今的 AI 系統仍未達標。
06:39
But those are technical problems.
123
399854
1752
但這些是技術問題。
06:41
People building AI are incredibly smart,
124
401648
2043
建立 AI 的人非常聰明,
06:43
and maybe they could solve all that in a few years.
125
403733
2753
也許他們可以在幾年內 解決所有問題。
06:46
But that wasn't their biggest concern.
126
406527
1961
但這並非是建築師的最大擔憂。
06:48
Their biggest concern was trust.
127
408529
2795
他們最大的擔憂是信任。
06:51
Now architects are liable if something goes wrong with their buildings.
128
411991
4546
若建築物出現問題, 建築師就要承擔責任。
06:56
They could lose their license,
129
416537
1669
他們可能會失去執照,
06:58
they could be fined, they could even go to prison.
130
418206
3211
可能被罰款, 甚至有牢獄之災。
07:01
And failures can happen in a million different ways.
131
421459
3086
失敗能以百萬種不同方式發生。
07:04
For example, exit doors that open the wrong way,
132
424545
3045
例如,逃生門的打開方向出錯,
07:07
leading to people being crushed in an evacuation crisis,
133
427632
4296
導致人們在緊急疏散中被擠壓,
07:11
or broken glass raining down onto pedestrians in the street
134
431928
4129
或因為強風打破窗戶,
玻璃碎落在街道上的行人身上。
07:16
because the wind blows too hard and shatters windows.
135
436099
3837
07:20
So why would an architect trust an AI system with their job,
136
440311
3921
那為何建築師要信任一個 AI 系統,
押上自己的工作和人身自由,
07:24
with their literal freedom,
137
444273
2670
07:26
if they couldn't go in and fix a mistake if they found it?
138
446943
3628
但當發現系統錯誤時 , 卻不能親自修復它?
07:31
So we need to figure out these problems today, and I'll tell you why.
139
451030
4171
所以我們需要弄清楚這些問題, 我會告訴你原因。
07:35
The next wave of artificial intelligence systems, called agentic AI,
140
455201
5047
下一波人工智慧系統, 稱為代理型 AI ,
07:40
is a true tipping point
141
460289
1335
是個真正轉折點,
07:41
between whether or not we retain human agency,
142
461624
4213
要麼我們保留人類自主性,
07:45
or whether or not AI systems make our decisions for us.
143
465837
3795
要麼 AI 系統替我們作出決策。
07:50
Imagine an AI agent as kind of like a personal assistant.
144
470008
3211
想想一個像個人助理般的 AI 代理人。
07:53
So, for example, a medical agent might determine
145
473261
3712
例如,醫療代理可能會判斷
07:57
whether or not your family needs doctor's appointments,
146
477015
2585
你的家人是否需要醫生預約、
07:59
it might refill prescription medications, or in case of an emergency,
147
479642
3379
補充處方藥物、 或在緊急情況下
08:03
send medical records to the hospital.
148
483062
2586
將醫療記錄發送到醫院。
08:05
But AI agents can't and won't exist
149
485690
2836
但除非我們有真正的維修權,
08:08
unless we have a true right to repair.
150
488568
2210
否則 AI 代理無法,也不會存在。
08:10
What parent would trust their child's health to an AI system
151
490820
4922
除非可以執行一些基本診斷,
否則哪有家長會將孩子的健康 交託給 AI 系統?
08:15
unless you could run some basic diagnostics?
152
495783
2795
08:18
What professional would trust an AI system with job decisions,
153
498619
4213
除非可以按照初級員工的方式 重新培訓 AI,
08:22
unless you could retrain it the way you might a junior employee?
154
502874
4296
否則哪有專業人員 會信任 AI 系統進行工作決策?
08:28
Now, a right to repair might look something like this.
155
508129
2878
維修權可能看起來像這樣。
08:31
You could have a diagnostics board
156
511007
2210
有一個診斷板,
08:33
where you run basic tests that you design,
157
513259
3003
你可以在其中執行 自己設計的基本測試,
08:36
and if something's wrong, you could report it to the company
158
516304
2836
如果出現問題,則可以向該公司報告,
並在問題修復後得到回覆。
08:39
and hear back when it's fixed.
159
519140
1585
08:40
Or you could work with third parties like ethical hackers
160
520725
3128
或者你可以與第三方合作, 例如道德駭客,
08:43
who make patches for systems like we do today.
161
523895
2586
為系統製作修補程序, 跟我們所做一樣。
08:46
You can download them and use them to improve your system
162
526481
2711
你可以下載並使用它們, 按你想要的方式改進系統。
08:49
the way you want it to be improved.
163
529233
1919
08:51
Or you could be like these intrepid farmers and learn to program
164
531194
3628
或者你可以像這些堅毅的農民一樣,
學習編程和微調自己的系統。
08:54
and fine-tune your own systems.
165
534864
3003
08:58
We won't achieve the promised benefits of artificial intelligence
166
538618
4171
除非我們弄清楚 如何將人們帶入開發過程,
09:02
unless we figure out how to bring people into the development process.
167
542789
5046
否則我們將無法實現 人工智慧所承諾的益處。
09:08
I've dedicated my career to responsible AI,
168
548377
3504
我投入「負責任 AI」 的事業,
09:11
and in that field we ask the question,
169
551923
3253
在這個領域我們提出問題,
09:15
what can companies build to ensure that people trust AI?
170
555218
4963
「企業能建立什麼 來確保人們信任 AI?」
09:20
Now, through these red-teaming exercises, and by talking to you,
171
560723
3837
現在,通過這些紅隊演練 及跟你們的交談,
09:24
I've come to realize that we've been asking the wrong question all along.
172
564602
5339
我發現我們一直問錯問題了。
09:30
What we should have been asking is what tools can we build
173
570566
3921
我們應該問的是:
「我們能建立哪些工具, 讓人們可以使用 AI 而獲益?」
09:34
so people can make AI beneficial for them?
174
574529
4045
09:39
Technologists can't do it alone.
175
579117
2460
技術人員無法獨自做到;
09:41
We can only do it with you.
176
581619
2419
我們只能與你一起做到。
09:44
Thank you.
177
584080
1168
謝謝。
09:45
(Applause)
178
585248
2794
(掌聲)
關於本網站

本網站將向您介紹對學習英語有用的 YouTube 視頻。 您將看到來自世界各地的一流教師教授的英語課程。 雙擊每個視頻頁面上顯示的英文字幕,從那裡播放視頻。 字幕與視頻播放同步滾動。 如果您有任何意見或要求,請使用此聯繫表與我們聯繫。

https://forms.gle/WvT1wiN1qDtmnspy7