How to Get Inside the "Brain" of AI | Alona Fyshe | TED

58,808 views ・ 2023-04-03

TED


μ•„λž˜ μ˜λ¬Έμžλ§‰μ„ λ”λΈ”ν΄λ¦­ν•˜μ‹œλ©΄ μ˜μƒμ΄ μž¬μƒλ©λ‹ˆλ‹€.

λ²ˆμ—­: Seonghoon Lim κ²€ν† : DK Kim
00:04
People are funny.
00:05
We're constantly trying to understand and interpret
00:08
the world around us.
00:10
I live in a house with two black cats, and let me tell you,
00:12
every time I see a black, bunched up sweater out of the corner of my eye,
00:16
I think it's a cat.
(Laughter)
00:18
It's not just the things we see.
00:19
Sometimes we attribute more intelligence than might actually be there.
00:23
Maybe you've seen the dogs on TikTok.
00:25
They have these little buttons that say things like "walk" or "treat."
00:29
They can push them to communicate some things with their owners,
00:32
and their owners think they use them
00:33
to communicate some pretty impressive things.
00:36
But do the dogs know what they're saying?
00:39
Or perhaps you've heard the story of Clever Hans the horse,
00:42
and he could do math.
00:44
And not just like, simple math problems, really complicated ones,
00:47
like, if the eighth day of the month falls on a Tuesday,
00:50
what's the date of the following Friday?
00:52
It's like, pretty impressive for a horse.
00:55
Unfortunately, Hans wasn't doing math,
00:58
but what he was doing was equally impressive.
01:01
Hans had learned to watch the people in the room
01:03
to tell when he should tap his hoof.
01:05
So he communicated his answers by tapping his hoof.
01:08
It turns out that if you know the answer
01:10
to "if the eighth day of the month falls on a Tuesday,
01:13
what's the date of the following Friday,"
01:15
you will subconsciously change your posture
01:17
once the horse has given the correct 18 taps.
01:20
So Hans couldn't do math,
01:21
but he had learned to watch the people in the room who could do math,
01:25
which, I mean, still pretty impressive for a horse.
01:28
But this is an old picture,
01:29
and we would not fall for Clever Hans today.
01:32
Or would we?
(Laughter)
01:34
Well, I work in AI,
01:36
and let me tell you, things are wild.
01:38
There have been multiple examples of people being completely convinced
01:42
that AI understands them.
01:44
In 2022,
01:47
a Google engineer thought that Google’s AI was sentient.
01:50
And you may have had a really human-like conversation
01:53
with something like ChatGPT.
01:55
But models we're training today are so much better
01:58
than the models we had even five years ago.
02:00
It really is remarkable.
02:02
So at this super crazy moment in time,
02:05
let’s ask the super crazy question:
02:07
Does AI understand us,
02:09
or are we having our own Clever Hans moment?
02:13
Some philosophers think that computers will never understand language.
02:16
To illustrate this, they developed something they call
02:19
the Chinese room argument.
02:21
In the Chinese room, there is a person, a hypothetical person,
02:25
who does not understand Chinese,
02:26
but he has along with him a set of instructions
02:29
that tell him how to respond in Chinese to any Chinese sentence.
02:33
Here's how the Chinese room works.
02:35
A piece of paper comes in through a slot in the door,
02:38
has something written in Chinese on it.
[On screen: "Our dog can do math"]
02:40
The person uses their instructions to figure out how to respond.
02:43
They write the response down on a piece of paper
02:46
and then send it back out through the door.
[On screen: "That's really impressive for a dog"]
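To make the mechanics concrete: the room boils down to mechanical rule-following, which a few lines of Python can sketch. The two rules below are hypothetical stand-ins for the thought experiment's instruction book, not anything from the talk:

# A toy "Chinese room": the person inside just follows a lookup table.
# No knowledge of Chinese is needed to apply these (hypothetical) rules.
rule_book = {
    "我们的狗会做数学": "狗会做数学，真了不起！",  # "Our dog can do math" -> "A dog doing math, that's really impressive!"
}

def respond(note: str) -> str:
    """Follow the instructions mechanically and return the scripted reply."""
    return rule_book.get(note, "对不起，我看不懂。")  # fallback: "Sorry, I can't read this."

# From outside the door the reply looks fluent, but nothing was understood.
print(respond("我们的狗会做数学"))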
02:48
To somebody who speaks Chinese,
02:49
standing outside this room,
02:51
it might seem like the person inside the room speaks Chinese.
02:54
But we know they do not,
02:57
because no knowledge of Chinese is required to follow the instructions.
03:01
Performance on this task does not show that you know Chinese.
03:05
So what does that tell us about AI?
03:08
Well, when you and I stand outside of the room,
03:11
when we speak to one of these AIs like ChatGPT,
03:15
we are the person standing outside the room.
03:17
We're feeding in English sentences,
03:19
we're getting English sentences back.
03:21
It really looks like the models understand us.
03:23
It really looks like they know English.
03:27
But under the hood,
03:28
these models are just following a set of instructions, albeit complex.
03:32
How do we know if AI understands us?
03:36
To answer that question, let's go back to the Chinese room again.
03:39
Let's say we have two Chinese rooms.
03:41
In one Chinese room is somebody who actually speaks Chinese,
03:46
and in the other room is our imposter.
03:48
When the person who actually speaks Chinese gets a piece of paper
03:51
that says something in Chinese in it, they can read it, no problem.
03:54
But when our imposter gets it again,
03:56
he has to use his set of instructions to figure out how to respond.
03:59
From the outside, it might be impossible to distinguish these two rooms,
04:03
but we know inside something really different is happening.
04:07
To illustrate that,
04:08
let's say inside the minds of our two people,
04:11
inside of our two rooms,
04:13
is a little scratch pad.
04:15
And everything they have to remember in order to do this task
04:18
has to be written on that little scratch pad.
04:20
If we could see what was written on that scratch pad,
04:23
we would be able to tell how different their approach to the task is.
04:27
So though the input and the output of these two rooms
04:30
might be exactly the same,
04:31
the process of getting from input to output -- completely different.
04:35
So again, what does that tell us about AI?
04:38
Again, even if AI generates completely plausible dialogue,
04:42
and answers questions just like we would expect,
04:44
it may still be an imposter of sorts.
04:47
If we want to know if AI understands language like we do,
04:50
we need to know what it's doing.
04:51
We need to get inside to see what it's doing.
04:54
Is it an imposter or not?
04:55
We need to see its scratch pad,
04:57
and we need to be able to compare it
04:59
to the scratch pad of somebody who actually understands language.
05:02
But, like, scratch pads in brains,
05:04
that's not something we can actually see, right?
05:07
Well, it turns out that we can kind of see scratch pads in brains.
05:11
Using something like fMRI or EEG,
05:13
we can take what are like little snapshots of the brain while it’s reading.
05:17
So we have people read words or stories and then take pictures of their brains.
05:21
And those brain images are like fuzzy,
05:23
out-of-focus pictures of the scratch pad of the brain.
05:26
They tell us a little bit about how the brain is processing
05:29
and representing information while you read.
05:33
So here are three brain images taken while a person read the word "apartment,"
05:37
"house" and "celery."
05:39
You can see just with your naked eye
05:41
that the brain image for "apartment" and "house"
05:43
are more similar to each other
05:45
than they are to the brain image for "celery."
05:47
And you know, of course that apartments and houses are more similar
05:50
than they are to celery, just the words.
05:52
So said another way,
05:55
the brain uses its scratch pad when reading the words "apartment" and "house"
05:59
in a way that's more similar than when you read the word "celery."
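For the technically inclined, the similarity being described can be made concrete by treating each brain image as a vector of voxel activations and comparing vectors. A minimal sketch in Python, with made-up numbers standing in for real fMRI data:

import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened brain images."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical voxel activations recorded while reading each word;
# real fMRI images would have thousands of voxels, not four.
apartment = np.array([0.9, 0.8, 0.1, 0.2])
house     = np.array([0.8, 0.9, 0.2, 0.1])
celery    = np.array([0.1, 0.2, 0.9, 0.8])

print(cosine(apartment, house))   # higher: "apartment" and "house" pattern alike
print(cosine(apartment, celery))  # lower: "celery" patterns differently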
06:03
The scratch pad tells us a little bit
06:05
about how the brain represents the language.
06:07
It's not a perfect picture of what the brain's doing,
06:10
but it's good enough.
06:11
OK, so we have scratch pads for the brain.
06:13
Now we need a scratch pad for AI.
06:16
So inside a lot of AIs is a neural network.
06:19
And inside of a neural network is a bunch of these little neurons.
06:22
So here the neurons are like these little gray circles.
06:25
And we would like to know
06:27
what is the scratch pad of a neural network?
06:29
Well, when we feed in a word into a neural network,
06:33
each of the little neurons computes a number.
06:36
Those little numbers I'm representing here with colors.
06:39
So every neuron computes this little number,
06:42
and those numbers tell us something
06:44
about how the neural network is processing language.
06:47
Taken together,
06:49
all of those little circles paint us a picture
06:51
of how the neural network is representing language,
06:54
and they give us the scratch pad of the neural network.
06:57
OK, great.
06:58
Now we have two scratch pads, one from the brain and one from AI.
07:01
And we want to know: Is AI doing something like what the brain is doing?
07:05
How can we test that?
07:07
Here's what researchers have come up with.
07:09
We're going to train a new model.
07:11
That new model is going to look at neural network scratch pad
07:14
for a particular word
07:15
and try to predict the brain scratch pad for the same word.
07:18
We can do it, by the way, in both directions.
07:20
So let's train a new model.
07:22
It’s going to look at the neural network scratch pad for a particular word
07:26
and try to predict the brain scratch pad.
07:28
If the brain and AI are doing nothing alike,
07:30
have nothing in common,
07:32
we won't be able to do this prediction task.
07:34
It won't be possible to predict one from the other.
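Here is a minimal sketch of that prediction task in Python, using ridge regression as the "new model"; the paired scratch pads below are synthetic stand-ins for real network activations and brain images:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, net_dim, brain_dim = 100, 8, 20

# Hypothetical paired scratch pads: row i is the same word in both arrays.
net_pads = rng.normal(size=(n_words, net_dim))
true_map = rng.normal(size=(net_dim, brain_dim))
brain_pads = net_pads @ true_map + 0.5 * rng.normal(size=(n_words, brain_dim))

# Train on most words; hold the last two out for testing.
mapper = Ridge(alpha=1.0).fit(net_pads[:-2], brain_pads[:-2])
predicted = mapper.predict(net_pads[-2:])  # predicted brain scratch pads

# If the two kinds of scratch pads had nothing in common, these
# predictions would be no better than guessing.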
07:36
So we've reached a fork in the road
07:38
and you can probably tell I'm about to tell you one of two things.
07:42
I’m going to tell you AI is amazing,
07:44
or I'm going to tell you AI is an imposter.
07:48
Researchers like me love to remind you
07:50
that AI is nothing like the brain.
07:51
And that is true.
07:53
But could it also be the AI and the brain share something in common?
07:58
So we’ve done this scratch pad prediction task,
08:00
and it turns out, 75 percent of the time
08:03
the predicted neural network scratch pad for a particular word
08:06
is more similar to the true neural network scratch pad for that word
08:10
than it is to the neural network scratch pad
08:12
for some other randomly chosen word --
08:14
75 percent is much better than chance.
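Concretely, the comparison behind that number can be sketched like this; the vectors are hypothetical, and chance performance on this test is 50 percent:

import numpy as np

# Hypothetical predicted and true neural network scratch pads for one word,
# plus the true scratch pad for some other randomly chosen word.
predicted   = np.array([0.9, 0.1, 0.4])
true_same   = np.array([1.0, 0.0, 0.5])
true_random = np.array([0.1, 0.9, 0.2])

# Correct if the prediction lands closer to the right word's scratch pad.
correct = np.linalg.norm(predicted - true_same) < np.linalg.norm(predicted - true_random)
print(correct)  # the talk reports this comes out True about 75% of the time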
08:17
What about for more complicated things,
08:19
not just words, but sentences, even stories?
08:21
Again, this scratch pad prediction task works.
08:23
We’re able to predict the neural network scratch pad from the brain and vice versa.
08:28
Amazing.
08:30
So does that mean
08:31
that neural networks and AI understand language just like we do?
08:35
Well, truthfully, no.
08:37
Though these scratch pad prediction tasks show above-chance accuracy,
08:42
the underlying correlations are still pretty weak.
08:45
And though neural networks are inspired by the brain,
08:47
they don't have the same kind of structure and complexity
08:50
that we see in the brain.
08:52
Neural networks also don't exist in the world.
08:54
A neural network has never opened a door
08:56
or seen a sunset, heard a baby cry.
09:00
Can a neural network that doesn't actually exist in the world,
09:03
hasn't really experienced the world,
09:04
really understand language about the world?
09:08
Still, these scratch pad prediction experiments have held up --
09:11
multiple brain imaging experiments,
09:12
multiple neural networks.
09:14
We've also found that as the neural networks get more accurate,
09:17
they also start to use their scratch pad in a way
09:20
that becomes more brain-like.
09:22
And it's not just language.
09:23
We've seen similar results in navigation and vision.
09:26
So AI is not doing exactly what the brain is doing,
09:30
but it's not completely random either.
09:34
So from where I sit,
09:35
if we want to know if AI really understands language like we do,
09:39
we need to get inside of the Chinese room.
09:41
We need to know what the AI is doing,
09:43
and we need to be able to compare that to what people are doing
09:46
when they understand language.
09:48
AI is moving so fast.
09:50
Today, I'm asking you: Does AI understand language?
09:52
That might seem like a silly question in ten years.
09:55
Or ten months.
09:56
(Laughter)
09:58
But one thing will remain true.
10:00
We are meaning-making humans,
10:01
and we are going to continue to look for meaning
10:04
and interpret the world around us.
10:06
And we will need to remember
10:08
that if we only look at the input and output of AI,
10:11
it's very easy to be fooled.
10:13
We need to get inside of the metaphorical room of AI
10:17
in order to see what's happening.
10:19
It's what's inside that counts.
10:22
Thank you.
10:23
(Applause)