Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

254,667 views ・ 2023-07-11

TED


Translator: Jaewook Seol   Reviewer: DK Kim
00:04
Since 2001, I have been working on what we would now call
00:07
the problem of aligning artificial general intelligence:
00:11
how to shape the preferences and behavior
00:13
of a powerful artificial mind such that it does not kill everyone.
00:19
I more or less founded the field two decades ago,
00:22
when nobody else considered it rewarding enough to work on.
00:25
I tried to get this very important project started early
00:27
so we'd be in less of a drastic rush later.
00:31
I consider myself to have failed.
00:33
(Laughter)
00:34
Nobody understands how modern AI systems do what they do.
00:37
They are giant, inscrutable matrices of floating point numbers
00:40
that we nudge in the direction of better performance
00:43
until they inexplicably start working.
00:45
At some point, the companies rushing headlong to scale AI
00:48
will cough out something that's smarter than humanity.
00:51
Nobody knows how to calculate when that will happen.
00:53
My wild guess is that it will happen after zero to two more breakthroughs
00:57
the size of transformers.
00:59
What happens if we build something smarter than us
01:01
that we understand that poorly?
01:03
Some people find it obvious that building something smarter than us
01:07
that we don't understand might go badly.
01:09
Others come in with a very wide range of hopeful thoughts
01:13
about how it might possibly go well.
01:16
Even if I had 20 minutes for this talk and months to prepare it,
01:19
I would not be able to refute all the ways people find to imagine
01:22
that things might go well.
01:24
But I will say that there is no standard scientific consensus
01:29
for how things will go well.
01:30
There is no hope that has been widely persuasive
01:33
and stood up to skeptical examination.
01:35
There is nothing resembling a real engineering plan for us surviving
01:40
that I could critique.
01:41
This is not a good place in which to find ourselves.
01:44
If I had more time,
01:45
I'd try to tell you about the predictable reasons
01:48
why the current paradigm will not work
01:50
to build a superintelligence that likes you
01:52
or is friends with you, or that just follows orders.
01:56
Why, if you press "thumbs up" when humans think that things went right
02:01
or "thumbs down" when another AI system thinks that they went wrong,
02:04
you do not get a mind that wants nice things
02:08
in a way that generalizes well outside the training distribution
02:12
to where the AI is smarter than the trainers.
02:15
You can search for "Yudkowsky list of lethalities" for more.
02:20
(Laughter)
02:22
But to worry, you do not need to believe me
02:24
about exact predictions of exact disasters.
02:27
You just need to expect that things are not going to work great
02:30
on the first really serious, really critical try
02:33
because an AI system smart enough to be truly dangerous
02:37
was meaningfully different from AI systems stupider than that.
02:40
My prediction is that this ends up with us facing down something smarter than us
02:45
that does not want what we want,
02:47
that does not want anything we recognize as valuable or meaningful.
02:52
I cannot predict exactly how a conflict between humanity and a smarter AI would go
02:56
for the same reason I can't predict exactly how you would lose a chess game
03:00
to one of the current top AI chess programs, let's say Stockfish.
03:04
If I could predict exactly where Stockfish could move,
03:08
I could play chess that well myself.
03:11
I can't predict exactly how you'll lose to Stockfish,
03:13
but I can predict who wins the game.
03:16
I do not expect something actually smart to attack us with marching robot armies
03:20
with glowing red eyes
03:22
where there could be a fun movie about us fighting them.
03:25
I expect an actually smarter and uncaring entity
03:28
will figure out strategies and technologies
03:30
that can kill us quickly and reliably and then kill us.
03:34
I am not saying that the problem of aligning superintelligence
03:37
is unsolvable in principle.
03:39
I expect we could figure it out with unlimited time and unlimited retries,
03:44
which the usual process of science assumes that we have.
03:48
The problem here is the part where we don't get to say,
03:51
β€œHa ha, whoops, that sure didn’t work.
03:53
That clever idea that used to work on earlier systems
03:57
sure broke down when the AI got smarter, smarter than us.”
04:01
We do not get to learn from our mistakes and try again
04:04
because everyone is already dead.
04:07
It is a large ask
04:09
to get an unprecedented scientific and engineering challenge
04:12
correct on the first critical try.
04:15
Humanity is not approaching this issue with remotely
04:18
the level of seriousness that would be required.
04:20
Some of the people leading these efforts
04:22
have spent the last decade not denying
04:25
that creating a superintelligence might kill everyone,
04:28
but joking about it.
04:30
We are very far behind.
04:32
This is not a gap we can overcome in six months,
04:34
given a six-month moratorium.
04:36
If we actually try to do this in real life,
04:39
we are all going to die.
04:41
People say to me at this point, what's your ask?
04:44
I do not have any realistic plan,
04:46
which is why I spent the last two decades
04:48
trying and failing to end up anywhere but here.
04:51
My best bad take is that we need an international coalition
04:55
banning large AI training runs,
04:57
including extreme and extraordinary measures
05:01
to have that ban be actually and universally effective,
05:04
like tracking all GPU sales,
05:06
monitoring all the data centers,
05:09
being willing to risk a shooting conflict between nations
05:11
in order to destroy an unmonitored data center
05:14
in a non-signatory country.
05:17
I say this, not expecting that to actually happen.
05:21
I say this expecting that we all just die.
05:24
But it is not my place to just decide on my own
05:28
that humanity will choose to die,
05:30
to the point of not bothering to warn anyone.
05:33
I have heard that people outside the tech industry
05:35
are getting this point faster than people inside it.
05:38
Maybe humanity wakes up one morning and decides to live.
05:43
Thank you for coming to my brief TED talk.
05:45
(Laughter)
05:46
(Applause and cheers)
05:56
Chris Anderson: So, Eliezer, thank you for coming and giving that.
06:00
It seems like what you're raising the alarm about is that like,
06:04
for this to happen, for an AI to basically destroy humanity,
06:08
it has to break out, escape controls of the internet and, you know,
06:13
start commanding actual real-world resources.
06:16
You say you can't predict how that will happen,
06:18
but just paint one or two possibilities.
06:22
Eliezer Yudkowsky: OK, so why is this hard?
06:25
First, because you can't predict exactly where a smarter chess program will move.
06:28
Maybe even more importantly than that,
06:30
imagine sending the design for an air conditioner
06:33
back to the 11th century.
06:35
Even if they -- if it’s enough detail for them to build it,
06:38
they will be surprised when cold air comes out
06:41
because the air conditioner will use the temperature-pressure relation
06:45
and they don't know about that law of nature.
06:47
So if you want me to sketch what a superintelligence might do,
06:52
I can go deeper and deeper into places
06:54
where we think there are predictable technological advancements
06:57
that we haven't figured out yet.
06:59
And as I go deeper, it will get harder and harder to follow.
07:02
It could be super persuasive.
07:04
That's relatively easy to understand.
07:06
We do not understand exactly how the brain works,
07:08
so it's a great place to exploit laws of nature that we do not know about.
07:12
Rules of the environment,
07:13
invent new technologies beyond that.
07:16
Can you build a synthetic virus that gives humans a cold
07:20
and then a bit of neurological change and they're easier to persuade?
07:24
Can you build your own synthetic biology,
07:28
synthetic cyborgs?
07:29
Can you blow straight past that
07:31
to covalently bonded equivalents of biology,
07:36
where instead of proteins that fold up and are held together by static cling,
07:39
you've got things that go down much sharper potential energy gradients
07:43
and are bonded together?
07:44
People have done advanced design work about this sort of thing
07:48
for artificial red blood cells that could hold 100 times as much oxygen
07:52
if they were using tiny sapphire vessels to store the oxygen.
07:55
There's lots and lots of room above biology,
07:58
but it gets harder and harder to understand.
08:01
CA: So what I hear you saying
08:03
is that these terrifying possibilities there
08:05
but your real guess is that AIs will work out something more devious than that.
08:10
Is that really a likely pathway in your mind?
08:14
EY: Which part?
08:15
That they're smarter than I am? Absolutely.
08:17
CA: Not that they're smarter,
08:19
but why would they want to go in that direction?
08:22
Like, AIs don't have our feelings of sort of envy and jealousy and anger
08:27
and so forth.
08:28
So why might they go in that direction?
08:31
EY: Because it's convergently implied by almost any of the strange,
08:35
inscrutable things that they might end up wanting
08:38
as a result of gradient descent
08:40
on these "thumbs up" and "thumbs down" things internally.
08:44
If all you want is to make tiny little molecular squiggles
08:48
or that's like, one component of what you want,
08:51
but it's a component that never saturates, you just want more and more of it,
08:54
the same way that we would want more and more galaxies filled with life
08:58
and people living happily ever after.
08:59
Anything that just keeps going,
09:01
you just want to use more and more material for that,
09:04
that could kill everyone on Earth as a side effect.
09:07
It could kill us because it doesn't want us making other superintelligences
09:10
to compete with it.
09:12
It could kill us because it's using up all the chemical energy on earth
09:16
and we contain some chemical potential energy.
09:19
CA: So some people in the AI world worry that your views are strong enough
09:25
and they would say extreme enough
09:26
that you're willing to advocate extreme responses to it.
09:30
And therefore, they worry that you could be, you know,
09:33
in one sense, a very destructive figure.
09:35
Do you draw the line yourself in terms of the measures
09:38
that we should take to stop this happening?
09:41
Or is actually anything justifiable to stop
09:44
the scenarios you're talking about happening?
09:47
EY: I don't think that "anything" works.
09:51
I think that this takes state actors
09:55
and international agreements
09:58
and all international agreements by their nature,
10:01
tend to ultimately be backed by force
10:03
on the signatory countries and on the non-signatory countries,
10:06
which is a more extreme measure.
10:09
I have not proposed that individuals run out and use violence,
10:12
and I think that the killer argument for that is that it would not work.
10:18
CA: Well, you are definitely not the only person to propose
10:21
that what we need is some kind of international reckoning here
10:25
on how to manage this going forward.
10:27
Thank you so much for coming here to TED, Eliezer.
10:30
(Applause)