3 principles for creating safer AI | Stuart Russell

139,486 views ・ 2017-06-06

TED


μ•„λž˜ μ˜λ¬Έμžλ§‰μ„ λ”λΈ”ν΄λ¦­ν•˜μ‹œλ©΄ μ˜μƒμ΄ μž¬μƒλ©λ‹ˆλ‹€.

Translator: Hyein Jeng    Reviewer: keun_young Lee
00:12
This is Lee Sedol. Lee Sedol is one of the world's greatest Go players, and he's having what my friends in Silicon Valley call a "Holy Cow" moment -- (Laughter) a moment where we realize that AI is actually progressing a lot faster than we expected.

00:29
So humans have lost on the Go board. What about the real world? Well, the real world is much bigger, much more complicated than the Go board. It's a lot less visible, but it's still a decision problem. And if we think about some of the technologies that are coming down the pike ... Noriko [Arai] mentioned that reading is not yet happening in machines, at least with understanding. But that will happen, and when that happens, very soon afterwards, machines will have read everything that the human race has ever written.

01:03
And that will enable machines, along with the ability to look further ahead than humans can, as we've already seen in Go, if they also have access to more information, they'll be able to make better decisions in the real world than we can.

01:18
So is that a good thing? Well, I hope so.
01:26
Our entire civilization, everything that we value, is based on our intelligence. And if we had access to a lot more intelligence, then there's really no limit to what the human race can do. And I think this could be, as some people have described it, the biggest event in human history.

01:48
So why are people saying things like this, that AI might spell the end of the human race? Is this a new thing? Is it just Elon Musk and Bill Gates and Stephen Hawking?

02:01
Actually, no. This idea has been around for a while. Here's a quotation: "Even if we could keep the machines in a subservient position, for instance, by turning off the power at strategic moments" -- and I'll come back to that "turning off the power" idea later on -- "we should, as a species, feel greatly humbled."

02:21
So who said this? This is Alan Turing in 1951. Alan Turing, as you know, is the father of computer science and in many ways, the father of AI as well.

02:33
So if we think about this problem, the problem of creating something more intelligent than your own species, we might call this "the gorilla problem," because gorillas' ancestors did this a few million years ago, and now we can ask the gorillas: Was this a good idea? So here they are having a meeting to discuss whether it was a good idea, and after a little while, they conclude, no, this was a terrible idea. Our species is in dire straits. In fact, you can see the existential sadness in their eyes. (Laughter)
03:06
So this queasy feeling that making something smarter than your own species is maybe not a good idea -- what can we do about that? Well, really nothing, except stop doing AI, and because of all the benefits that I mentioned and because I'm an AI researcher, I'm not having that. I actually want to be able to keep doing AI.

03:30
So we actually need to nail down the problem a bit more. What exactly is the problem? Why is better AI possibly a catastrophe?

03:39
So here's another quotation: "We had better be quite sure that the purpose put into the machine is the purpose which we really desire." This was said by Norbert Wiener in 1960, shortly after he watched one of the very early learning systems learn to play checkers better than its creator.

04:00
But this could equally have been said by King Midas. King Midas said, "I want everything I touch to turn to gold," and he got exactly what he asked for. That was the purpose that he put into the machine, so to speak, and then his food and his drink and his relatives turned to gold and he died in misery and starvation.

04:22
So we'll call this "the King Midas problem" of stating an objective which is not, in fact, truly aligned with what we want. In modern terms, we call this "the value alignment problem."
04:36
Putting in the wrong objective is not the only part of the problem. There's another part. If you put an objective into a machine, even something as simple as, "Fetch the coffee," the machine says to itself, "Well, how might I fail to fetch the coffee? Someone might switch me off. OK, I have to take steps to prevent that. I will disable my 'off' switch. I will do anything to defend myself against interference with this objective that I have been given." So this single-minded pursuit in a very defensive mode of an objective that is, in fact, not aligned with the true objectives of the human race -- that's the problem that we face.

05:18
And in fact, that's the high-value takeaway from this talk. If you want to remember one thing, it's that you can't fetch the coffee if you're dead. (Laughter) It's very simple. Just remember that. Repeat it to yourself three times a day. (Laughter)

05:35
And in fact, this is exactly the plot of "2001: [A Space Odyssey]." HAL has an objective, a mission, which is not aligned with the objectives of the humans, and that leads to this conflict. Now fortunately, HAL is not superintelligent. He's pretty smart, but eventually Dave outwits him and manages to switch him off.
06:01
But we might not be so lucky. So what are we going to do?

06:12
I'm trying to redefine AI to get away from this classical notion of machines that intelligently pursue objectives.

06:22
There are three principles involved. The first one is a principle of altruism, if you like, that the robot's only objective is to maximize the realization of human objectives, of human values. And by values here I don't mean touchy-feely, goody-goody values. I just mean whatever it is that the human would prefer their life to be like. And so this actually violates Asimov's law that the robot has to protect its own existence. It has no interest in preserving its existence whatsoever.

06:57
The second law is a law of humility, if you like. And this turns out to be really important to make robots safe. It says that the robot does not know what those human values are, so it has to maximize them, but it doesn't know what they are. And that avoids this problem of single-minded pursuit of an objective. This uncertainty turns out to be crucial.

07:21
Now, in order to be useful to us, it has to have some idea of what we want. It obtains that information primarily by observation of human choices, so our own choices reveal information about what it is that we prefer our lives to be like.
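
A minimal sketch of that third principle, under toy assumptions of my own (the candidate value functions, numbers, and the soft-choice model below are illustrative, not anything stated in the talk): the robot keeps a probability over hypotheses about what the human values and updates it as it watches the human choose.

import math

# Toy sketch (illustrative assumptions only, not the speaker's formalism):
# the robot is uncertain which candidate "value function" the human has and
# updates its belief by watching the human's choices.

hypotheses = {
    "values_tidiness": {"clean_kitchen": 1.0, "watch_tv": 0.2},
    "values_leisure":  {"clean_kitchen": 0.1, "watch_tv": 1.0},
}

belief = {h: 0.5 for h in hypotheses}  # starts out genuinely uncertain (principle 2)

def update_belief(belief, observed_choice, options, rationality=3.0):
    # Bayesian update with a soft choice model: a human holding hypothesis h
    # is assumed to pick higher-utility options more often, not always.
    new_belief = {}
    for h, utils in hypotheses.items():
        weights = [math.exp(rationality * utils[o]) for o in options]
        likelihood = math.exp(rationality * utils[observed_choice]) / sum(weights)
        new_belief[h] = belief[h] * likelihood
    total = sum(new_belief.values())
    return {h: p / total for h, p in new_belief.items()}

# The human keeps choosing to tidy up rather than watch TV; the belief shifts
# toward "values_tidiness" without that value ever being hard-coded.
for _ in range(3):
    belief = update_belief(belief, "clean_kitchen", ["clean_kitchen", "watch_tv"])
print(belief)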
07:40
So those are the three principles. Let's see how that applies to this question of: "Can you switch the machine off?" as Turing suggested.

07:48
So here's a PR2 robot. This is one that we have in our lab, and it has a big red "off" switch right on the back. The question is: Is it going to let you switch it off? If we do it the classical way, we give it the objective of, "Fetch the coffee, I must fetch the coffee, I can't fetch the coffee if I'm dead," so obviously the PR2 has been listening to my talk, and so it says, therefore, "I must disable my 'off' switch, and probably taser all the other people in Starbucks who might interfere with me." (Laughter) So this seems to be inevitable, right? This kind of failure mode seems to be inevitable, and it follows from having a concrete, definite objective.

08:30
So what happens if the machine is uncertain about the objective? Well, it reasons in a different way. It says, "OK, the human might switch me off, but only if I'm doing something wrong. Well, I don't really know what wrong is, but I know that I don't want to do it." So that's the first and second principles right there. "So I should let the human switch me off."

08:53
And in fact you can calculate the incentive that the robot has to allow the human to switch it off, and it's directly tied to the degree of uncertainty about the underlying objective. And then when the machine is switched off, that third principle comes into play. It learns something about the objectives it should be pursuing, because it learns that what it did wasn't right.

09:16
In fact, we can, with suitable use of Greek symbols, as mathematicians usually do, we can actually prove a theorem that says that such a robot is provably beneficial to the human. You are provably better off with a machine that's designed in this way than without it.
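
A hedged toy calculation of that incentive (my illustration under assumed numbers, not the theorem referred to in the talk): the robot compares acting immediately against deferring to a human who switches it off exactly when the action would be bad, and the gain from deferring grows with the robot's uncertainty about the objective.

import random

# Toy sketch of the incentive to allow shutdown. From the robot's point of
# view, the utility U of its planned action is uncertain; if it defers, the
# human lets the action happen when U > 0 and switches the robot off (value 0)
# when U < 0.

def value_of_acting(samples):
    # Act regardless of the human: the robot gets whatever U turns out to be.
    return sum(samples) / len(samples)

def value_of_deferring(samples):
    # Defer to the human: keep the upside (U > 0), avoid the downside (U < 0).
    return sum(max(u, 0.0) for u in samples) / len(samples)

random.seed(0)
for spread in (0.1, 1.0, 3.0):  # spread = how uncertain the robot is about U
    samples = [random.gauss(0.2, spread) for _ in range(100_000)]
    incentive = value_of_deferring(samples) - value_of_acting(samples)
    print(f"uncertainty {spread}: incentive to allow shutdown = {incentive:.3f}")

# The printed incentive is never negative and grows with the uncertainty, which
# is the intuition behind "directly tied to the degree of uncertainty."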
09:33
So this is a very simple example, but this is the first step in what we're trying to do with human-compatible AI.

09:42
Now, this third principle, I think is the one that you're probably scratching your head over. You're probably thinking, "Well, you know, I behave badly. I don't want my robot to behave like me. I sneak down in the middle of the night and take stuff from the fridge. I do this and that." There's all kinds of things you don't want the robot doing. But in fact, it doesn't quite work that way. Just because you behave badly doesn't mean the robot is going to copy your behavior. It's going to understand your motivations and maybe help you resist them, if appropriate.

10:16
But it's still difficult. What we're trying to do, in fact, is to allow machines to predict for any person and for any possible life that they could live, and the lives of everybody else: Which would they prefer? And there are many, many difficulties involved in doing this; I don't expect that this is going to get solved very quickly.

10:39
The real difficulties, in fact, are us. As I have already mentioned, we behave badly. In fact, some of us are downright nasty. Now the robot, as I said, doesn't have to copy the behavior. The robot does not have any objective of its own. It's purely altruistic. And it's not designed just to satisfy the desires of one person, the user, but in fact it has to respect the preferences of everybody. So it can deal with a certain amount of nastiness, and it can even understand that your nastiness, for example, you may take bribes as a passport official because you need to feed your family and send your kids to school. It can understand that; it doesn't mean it's going to steal. In fact, it'll just help you send your kids to school.

11:28
We are also computationally limited. Lee Sedol is a brilliant Go player, but he still lost. So if we look at his actions, he took an action that lost the game. That doesn't mean he wanted to lose. So to understand his behavior, we actually have to invert through a model of human cognition that includes our computational limitations -- a very complicated model. But it's still something that we can work on understanding.
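
A small sketch of that inversion, under a deliberately crude assumption of "Boltzmann" (noisy) rationality rather than the complicated model the talk alludes to: once the human is modeled as imperfect, observing one losing move barely shifts the belief that the player wanted to lose.

import math

# Toy sketch (crude assumptions of my own): model the human as only boundedly
# rational, so a losing move is weak evidence about what the player wanted.

def choice_prob(utility_of_chosen, utilities, beta):
    # Boltzmann-rational choice: better moves are more likely, not certain.
    weights = [math.exp(beta * u) for u in utilities]
    return math.exp(beta * utility_of_chosen) / sum(weights)

# Two hypotheses about the player's goal, with utilities for each move.
wants_to_win  = {"winning_move": 1.0, "losing_move": 0.0}
wants_to_lose = {"winning_move": 0.0, "losing_move": 1.0}

prior = {"win": 0.99, "lose": 0.01}  # we strongly expect a champion wants to win
beta = 1.5                           # limited rationality: not a perfect optimizer

# We observe a losing move; compute its likelihood under each hypothesis.
like_win  = choice_prob(wants_to_win["losing_move"],  list(wants_to_win.values()),  beta)
like_lose = choice_prob(wants_to_lose["losing_move"], list(wants_to_lose.values()), beta)

post_win, post_lose = prior["win"] * like_win, prior["lose"] * like_lose
total = post_win + post_lose
print(f"P(wanted to win | losing move)  = {post_win / total:.3f}")   # stays high
print(f"P(wanted to lose | losing move) = {post_lose / total:.3f}")  # stays tiny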
11:57
Probably the most difficult part, from my point of view as an AI researcher, is the fact that there are lots of us, and so the machine has to somehow trade off, weigh up the preferences of many different people, and there are different ways to do that. Economists, sociologists, moral philosophers have understood that, and we are actively looking for collaboration.
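
One hedged toy of what "different ways to do that" can mean (the names, numbers, and the two rules are my own illustrative assumptions, not a proposal from the talk): aggregating the same three people's preferences by total utility versus by the worst-off person's utility can pick different plans.

# Toy sketch: three people rate two plans; two classic aggregation rules disagree.

preferences = {
    "alice":   {"plan_a": 0.9, "plan_b": 0.6},
    "bob":     {"plan_a": 0.9, "plan_b": 0.6},
    "charlie": {"plan_a": 0.1, "plan_b": 0.6},
}

def utilitarian(plan):
    # Maximize the total of everyone's utility.
    return sum(person[plan] for person in preferences.values())

def egalitarian(plan):
    # Maximize the utility of the worst-off person.
    return min(person[plan] for person in preferences.values())

for rule in (utilitarian, egalitarian):
    best = max(["plan_a", "plan_b"], key=rule)
    print(f"{rule.__name__} rule picks {best}")

# utilitarian picks plan_a (1.9 vs 1.8); egalitarian picks plan_b (0.6 vs 0.1),
# so the trade-off the machine makes depends on which rule is chosen.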
12:20
Let's have a look and see what happens when you get that wrong. So you can have a conversation, for example, with your intelligent personal assistant that might be available in a few years' time. Think of a Siri on steroids.

12:33
So Siri says, "Your wife called to remind you about dinner tonight." And of course, you've forgotten. "What? What dinner? What are you talking about?" "Uh, your 20th anniversary at 7pm."

12:48
"I can't do that. I'm meeting with the secretary-general at 7:30. How could this have happened?" "Well, I did warn you, but you overrode my recommendation." "Well, what am I going to do? I can't just tell him I'm too busy." "Don't worry. I arranged for his plane to be delayed." (Laughter) "Some kind of computer malfunction." (Laughter) "Really? You can do that?" "He sends his profound apologies and looks forward to meeting you for lunch tomorrow." (Laughter)

13:22
So the values here -- there's a slight mistake going on. This is clearly following my wife's values, which is "Happy wife, happy life." (Laughter)

13:33
It could go the other way. You could come home after a hard day's work, and the computer says, "Long day?" "Yes, I didn't even have time for lunch." "You must be very hungry." "Starving, yeah. Could you make some dinner?" "There's something I need to tell you." (Laughter) "There are humans in South Sudan who are in more urgent need than you." (Laughter) "So I'm leaving. Make your own dinner." (Laughter)
14:02
So we have to solve these problems, and I'm looking forward to working on them. There are reasons for optimism. One reason is, there is a massive amount of data. Because remember -- I said they're going to read everything the human race has ever written. Most of what we write about is human beings doing things and other people getting upset about it. So there's a massive amount of data to learn from.

14:23
There's also a very strong economic incentive to get this right. So imagine your domestic robot's at home. You're late from work again and the robot has to feed the kids, and the kids are hungry and there's nothing in the fridge. And the robot sees the cat. (Laughter) And the robot hasn't quite learned the human value function properly, so it doesn't understand the sentimental value of the cat outweighs the nutritional value of the cat. (Laughter) So then what happens? Well, it happens like this: "Deranged robot cooks kitty for family dinner." That one incident would be the end of the domestic robot industry. So there's a huge incentive to get this right long before we reach superintelligent machines.

15:11
So to summarize: I'm actually trying to change the definition of AI so that we have provably beneficial machines. And the principles are: machines that are altruistic, that want to achieve only our objectives, but that are uncertain about what those objectives are, and will watch all of us to learn more about what it is that we really want. And hopefully in the process, we will learn to be better people.

15:37
Thank you very much.

15:38
(Applause)
15:42
Chris Anderson: So interesting, Stuart. We're going to stand here a bit because I think they're setting up for our next speaker. A couple of questions. So the idea of programming in ignorance seems intuitively really powerful. As you get to superintelligence, what's going to stop a robot reading literature and discovering this idea that knowledge is actually better than ignorance and still just shifting its own goals and rewriting that programming?

16:09
Stuart Russell: Yes, so we want it to learn more, as I said, about our objectives. It'll only become more certain as it becomes more correct, so the evidence is there and it's going to be designed to interpret it correctly. It will understand, for example, that books are very biased in the evidence they contain. They only talk about kings and princes and elite white male people doing stuff. So it's a complicated problem, but as it learns more about our objectives it will become more and more useful to us.

16:46
CA: And you couldn't just boil it down to one law, you know, hardwired in: "if any human ever tries to switch me off, I comply. I comply."

16:55
SR: Absolutely not. That would be a terrible idea. So imagine that you have a self-driving car and you want to send your five-year-old off to preschool. Do you want your five-year-old to be able to switch off the car while it's driving along? Probably not. So it needs to understand how rational and sensible the person is. The more rational the person, the more willing you are to be switched off. If the person is completely random or even malicious, then you're less willing to be switched off.

17:24
CA: All right. Stuart, can I just say, I really, really hope you figure this out for us. Thank you so much for that talk. That was amazing.

17:30
SR: Thank you.

17:31
(Applause)
이 μ›Ήμ‚¬μ΄νŠΈ 정보

이 μ‚¬μ΄νŠΈλŠ” μ˜μ–΄ ν•™μŠ΅μ— μœ μš©ν•œ YouTube λ™μ˜μƒμ„ μ†Œκ°œν•©λ‹ˆλ‹€. μ „ 세계 졜고의 μ„ μƒλ‹˜λ“€μ΄ κ°€λ₯΄μΉ˜λŠ” μ˜μ–΄ μˆ˜μ—…μ„ 보게 될 κ²ƒμž…λ‹ˆλ‹€. 각 λ™μ˜μƒ νŽ˜μ΄μ§€μ— ν‘œμ‹œλ˜λŠ” μ˜μ–΄ μžλ§‰μ„ 더블 ν΄λ¦­ν•˜λ©΄ κ·Έκ³³μ—μ„œ λ™μ˜μƒμ΄ μž¬μƒλ©λ‹ˆλ‹€. λΉ„λ””μ˜€ μž¬μƒμ— 맞좰 μžλ§‰μ΄ μŠ€ν¬λ‘€λ©λ‹ˆλ‹€. μ˜κ²¬μ΄λ‚˜ μš”μ²­μ΄ μžˆλŠ” 경우 이 문의 양식을 μ‚¬μš©ν•˜μ—¬ λ¬Έμ˜ν•˜μ‹­μ‹œμ˜€.

https://forms.gle/WvT1wiN1qDtmnspy7