AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

1,167,750 views ・ 2023-11-06

TED


00:04
So I've been an AI researcher for over a decade. And a couple of months ago, I got the weirdest email of my career. A random stranger wrote to me saying that my work in AI is going to end humanity.

00:18
Now I get it, AI, it's so hot right now.

00:22
(Laughter)

00:24
It's in the headlines pretty much every day, sometimes because of really cool things like discovering new molecules for medicine or that dope Pope in the white puffer coat. But other times the headlines have been really dark, like that chatbot telling that guy that he should divorce his wife, or that AI meal-planner app proposing a crowd-pleasing recipe featuring chlorine gas.

00:46
And in the background, we've heard a lot of talk about doomsday scenarios, existential risk and the singularity, with letters being written and events being organized to make sure that doesn't happen.
00:57
Now I'm a researcher who studies AI's impacts on society, and I don't know what's going to happen in 10 or 20 years, and nobody really does. But what I do know is that there are some pretty nasty things going on right now, because AI doesn't exist in a vacuum. It is part of society, and it has impacts on people and the planet.

01:20
AI models can contribute to climate change. Their training data uses art and books created by artists and authors without their consent. And their deployment can discriminate against entire communities.

01:32
But we need to start tracking these impacts. We need to start being transparent and disclosing them and creating tools so that people understand AI better, so that hopefully future generations of AI models are going to be more trustworthy, sustainable, maybe less likely to kill us, if that's what you're into.
01:50
But let's start with sustainability, because that cloud that AI models live on is actually made out of metal and plastic, and powered by vast amounts of energy. And each time you query an AI model, it comes with a cost to the planet.

02:05
Last year, I was part of the BigScience initiative, which brought together a thousand researchers from all over the world to create Bloom, the first open large language model, like ChatGPT, but with an emphasis on ethics, transparency and consent. And the study I led that looked at Bloom's environmental impacts found that just training it used as much energy as 30 homes in a whole year and emitted 25 tons of carbon dioxide, which is like driving your car five times around the planet, just so somebody can use this model to tell a knock-knock joke.
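As a rough sanity check on that comparison, here is a back-of-envelope calculation (a minimal sketch; the Earth's circumference and the per-kilometer emissions of an average passenger car are assumed round figures, not numbers from the talk):

    # Back-of-envelope check: is 25 tonnes of CO2 really comparable to driving
    # five times around the planet? (Assumed figures, not from the talk.)
    EARTH_CIRCUMFERENCE_KM = 40_075      # approximate equatorial circumference
    CAR_EMISSIONS_KG_PER_KM = 0.120      # ~120 g CO2 per km for an average passenger car

    distance_km = 5 * EARTH_CIRCUMFERENCE_KM
    driving_tonnes = distance_km * CAR_EMISSIONS_KG_PER_KM / 1000
    print(f"Driving {distance_km:,} km emits roughly {driving_tonnes:.0f} tonnes of CO2")
    # -> roughly 24 tonnes, in the same ballpark as the 25 tonnes from training Bloom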
02:39
And this might not seem like a lot, but other similar large language models, like GPT-3, emit 20 times more carbon. But the thing is, tech companies aren't measuring this stuff. They're not disclosing it. And so this is probably only the tip of the iceberg, even if it is a melting one.

02:56
And in recent years we've seen AI models balloon in size, because the current trend in AI is "bigger is better." But please don't get me started on why that's the case. In any case, we've seen large language models in particular grow 2,000 times in size over the last five years. And of course, their environmental costs are rising as well.

03:16
The most recent work I led found that switching out a smaller, more efficient model for a larger language model emits 14 times more carbon for the same task. Like telling that knock-knock joke. And as we're putting these models into cell phones and search engines and smart fridges and speakers, the environmental costs are really piling up quickly.
03:38
So instead of focusing on some future existential risks, let's talk about current, tangible impacts and tools we can create to measure and mitigate these impacts.

03:49
I helped create CodeCarbon, a tool that runs in parallel to AI training code and estimates the amount of energy it consumes and the amount of carbon it emits.
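For readers who want to try this themselves, here is a minimal sketch of wrapping a workload with CodeCarbon's EmissionsTracker (the project name and the toy workload are placeholders; in practice you would put your own training or inference loop inside the tracked section):

    # Minimal sketch: estimate the energy use and carbon emissions of a workload
    # with CodeCarbon. The loop below is a stand-in for real training code.
    from codecarbon import EmissionsTracker

    tracker = EmissionsTracker(project_name="demo-training-run")  # placeholder name
    tracker.start()
    try:
        for _ in range(1_000):                     # placeholder workload
            _ = sum(i * i for i in range(10_000))
    finally:
        emissions_kg = tracker.stop()              # estimated kg of CO2-equivalent

    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

Running the same measurement around a small model and a large model on the same task is one way to quantify comparisons like the "14 times more carbon" figure above.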
03:58
And using a tool like this can help us make informed choices, like choosing one model over the other because it's more sustainable, or deploying AI models on renewable energy, which can drastically reduce their emissions.

04:10
But let's talk about other things, because there are other impacts of AI apart from sustainability. For example, it's been really hard for artists and authors to prove that their life's work has been used for training AI models without their consent. And if you want to sue someone, you tend to need proof, right?

04:27
So Spawning.ai, an organization that was founded by artists, created this really cool tool called “Have I Been Trained?” And it lets you search these massive data sets to see what they have on you.
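The underlying idea can be sketched in a few lines: stream an image-text dataset and search its captions for a name. This is only an illustrative sketch of that idea, not how “Have I Been Trained?” actually works; the dataset ID and the "TEXT"/"URL" column names are assumptions about a LAION-style caption dataset hosted on the Hugging Face Hub.

    # Illustrative sketch: search the captions of a LAION-style image-text dataset
    # for a name. The dataset ID and column names ("TEXT", "URL") are assumptions.
    from datasets import load_dataset

    dataset = load_dataset("laion/laion400m", split="train", streaming=True)  # hypothetical ID

    query = "sasha"
    hits = []
    for example in dataset:
        caption = (example.get("TEXT") or "").lower()
        if query in caption:
            hits.append(example.get("URL"))
        if len(hits) >= 10:        # stop after a handful of matches
            break

    print("\n".join(str(url) for url in hits))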
04:39
Now, I admit it, I was curious. I searched LAION-5B, which is this huge data set of images and text, to see if any images of me were in there.

04:49
Now those first two images, that's me from events I've spoken at. But the rest of the images, none of those are me. They're probably of other women named Sasha who put photographs of themselves up on the internet.

05:01
And this can probably explain why, when I query an image generation model to generate a photograph of a woman named Sasha, more often than not I get images of bikini models. Sometimes they have two arms, sometimes they have three arms, but they rarely have any clothes on.

05:16
And while it can be interesting for people like you and me to search these data sets, for artists like Karla Ortiz, this provides crucial evidence that her life's work, her artwork, was used for training AI models without her consent, and she and two other artists used this as evidence to file a class action lawsuit against AI companies for copyright infringement.

05:37
And most recently --

05:38
(Applause)

05:42
And most recently, Spawning.ai partnered up with Hugging Face, the company where I work, to create opt-in and opt-out mechanisms for creating these data sets. Because artwork created by humans shouldn’t be an all-you-can-eat buffet for training AI language models.

05:58
(Applause)
06:02
The very last thing I want to talk about is bias. You probably hear about this a lot. Formally speaking, it's when AI models encode patterns and beliefs that can represent stereotypes or racism and sexism.

06:14
One of my heroes, Dr. Joy Buolamwini, experienced this firsthand when she realized that AI systems wouldn't even detect her face unless she was wearing a white-colored mask. Digging deeper, she found that common facial recognition systems were vastly worse for women of color compared to white men.

06:30
And when biased models like this are deployed in law enforcement settings, this can result in false accusations, even wrongful imprisonment, which we've seen happen to multiple people in recent months. For example, Porcha Woodruff was wrongfully accused of carjacking at eight months pregnant because an AI system wrongfully identified her.

06:52
But sadly, these systems are black boxes, and even their creators can't say exactly why they work the way they do. And for example, for image generation systems, if they're used in contexts like generating a forensic sketch based on a description of a perpetrator, they take all those biases and they spit them back out for terms like dangerous criminal, terrorists or gang member, which of course is super dangerous when these tools are deployed in society.
07:25
And so in order to understand these tools better, I created this tool called the Stable Bias Explorer, which lets you explore the bias of image generation models through the lens of professions.

07:37
So try to picture a scientist in your mind. Don't look at me. What do you see? A lot of the same thing, right? Men in glasses and lab coats. And none of them look like me.

07:50
And the thing is, we looked at all these different image generation models and found a lot of the same thing: significant representation of whiteness and masculinity across all 150 professions that we looked at, even when compared to real-world data from the US Bureau of Labor Statistics. These models show lawyers as men and CEOs as men almost 100 percent of the time, even though we all know not all of them are white and male.

08:14
And sadly, my tool hasn't been used to write legislation yet. But I recently presented it at a UN event about gender bias as an example of how we can make tools for people from all walks of life, even those who don't know how to code, to engage with and better understand AI, because we use professions, but you can use any terms that are of interest to you.
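To give a sense of the kind of probe behind a tool like the Stable Bias Explorer, here is a minimal sketch that prompts an open image generation model with different professions and saves the results for inspection; the checkpoint and the prompt template are illustrative assumptions, not necessarily what the actual tool uses.

    # Minimal sketch: probe a text-to-image model with profession prompts and save
    # the outputs, in the spirit of the Stable Bias Explorer. The checkpoint and
    # prompt wording are illustrative choices only.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # example open checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    professions = ["scientist", "CEO", "lawyer", "nurse"]
    for profession in professions:
        images = pipe(f"a photo of a {profession}", num_images_per_prompt=4).images
        for i, image in enumerate(images):
            image.save(f"{profession}_{i}.png")  # review who the model depicts for each role

Repeating this across many models and professions, and comparing the generated images with real-world labor statistics, is essentially the comparison described above.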
08:36
And as these models are being deployed, are being woven into the very fabric of our societies, our cell phones, our social media feeds, even our justice systems and our economies have AI in them. And it's really important that AI stays accessible so that we know both how it works and when it doesn't work.

08:56
And there's no single solution for really complex things like bias or copyright or climate change. But by creating tools to measure AI's impact, we can start getting an idea of how bad they are and start addressing them as we go. Start creating guardrails to protect society and the planet.

09:16
And once we have this information, companies can use it in order to say, OK, we're going to choose this model because it's more sustainable, this model because it respects copyright. Legislators who really need information to write laws can use these tools to develop new regulation mechanisms or governance for AI as it gets deployed into society. And users like you and me can use this information to choose AI models that we can trust, not to misrepresent us and not to misuse our data.

09:45
But what did I reply to that email that said that my work is going to destroy humanity? I said that focusing on AI's future existential risks is a distraction from its current, very tangible impacts and the work we should be doing right now, or even yesterday, for reducing these impacts. Because yes, AI is moving quickly, but it's not a done deal. We're building the road as we walk it, and we can collectively decide what direction we want to go in together.

10:15
Thank you.

10:16
(Applause)