Translator: Lilian Chiu
Reviewer: Helen Chang
People are funny. We're constantly trying to understand and interpret the world around us.
I live in a house with two black cats, and let me tell you, every time I see a black, bunched-up sweater out of the corner of my eye, I think it's a cat.
It's not just the things we see. Sometimes we attribute more intelligence than might actually be there.
Maybe you've seen the dogs on TikTok. They have these little buttons that say things like "walk" or "treat." They can push them to communicate some things with their owners, and their owners think they use them to communicate some pretty impressive things. But do the dogs know what they're saying?
Or perhaps you've heard the story of Clever Hans the horse, and he could do math. And not just, like, simple math problems, really complicated ones, like: if the eighth day of the month falls on a Tuesday, what's the date of the following Friday? It's, like, pretty impressive for a horse.
Unfortunately, Hans wasn't doing math, but what he was doing was equally impressive. Hans had learned to watch the people in the room to tell when he should tap his hoof. So he communicated his answers by tapping his hoof.
It turns out that if you know the answer to "if the eighth day of the month falls on a Tuesday, what's the date of the following Friday," you will subconsciously change your posture once the horse has given the correct 18 taps. So Hans couldn't do math, but he had learned to watch the people in the room who could do math, which, I mean, is still pretty impressive for a horse.
But this is an old picture, and we would not fall for Clever Hans today. Or would we?
Well, I work in AI, and let me tell you, things are wild. There have been multiple examples of people being completely convinced that AI understands them.
In 2022, a Google engineer thought that Google's AI was sentient. And you may have had a really human-like conversation with something like ChatGPT.
But the models we're training today are so much better than the models we had even five years ago. It really is remarkable.
So at this super crazy moment in time, let's ask the super crazy question: Does AI understand us, or are we having our own Clever Hans moment?
Some philosophers think that computers will never understand language. To illustrate this, they developed something they call the Chinese room argument.
In the Chinese room, there is a person, a hypothetical person, who does not understand Chinese, but he has along with him a set of instructions that tell him how to respond in Chinese to any Chinese sentence.
Here's how the Chinese room works. A piece of paper comes in through a slot in the door; it has something written in Chinese on it. The person uses their instructions to figure out how to respond. They write the response down on a piece of paper and then send it back out through the door.
To somebody who speaks Chinese, standing outside this room, it might seem like the person inside the room speaks Chinese. But we know they do not, because no knowledge of Chinese is required to follow the instructions. Performance on this task does not show that you know Chinese.
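To make the thought experiment concrete, here is a minimal sketch in which the room's "instructions" are just a lookup table; the sentences and replies are invented for illustration. Following the table mechanically produces fluent-looking answers without any understanding of Chinese.

```python
# The Chinese room as a lookup table: rule-following without understanding.
# All entries here are invented examples for illustration.
RULE_BOOK = {
    "你好嗎": "我很好,謝謝。",        # "How are you?" -> "I'm fine, thanks."
    "今天天氣如何": "今天天氣很好。",  # "How's the weather?" -> "It's lovely."
}

def person_in_room(note: str) -> str:
    # Follow the instructions mechanically; no Chinese is understood.
    return RULE_BOOK.get(note, "對不起,請再說一次。")  # fallback: "Sorry, say that again."

print(person_in_room("你好嗎"))  # a fluent-looking reply, seen from outside the room
```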
So what does that tell us about AI? Well, when you and I stand outside of the room, when we speak to one of these AIs like ChatGPT, we are the person standing outside the room. We're feeding in English sentences, we're getting English sentences back. It really looks like the models understand us. It really looks like they know English.
But under the hood, these models are just following a set of instructions, albeit complex.
How do we know if AI understands us?
To answer that question, let's go back to the Chinese room again. Let's say we have two Chinese rooms. In one Chinese room is somebody who actually speaks Chinese, and in the other room is our impostor.
When the person who actually speaks Chinese gets a piece of paper that says something in Chinese on it, they can read it, no problem. But when our impostor gets it, again he has to use his set of instructions to figure out how to respond.
From the outside, it might be impossible to distinguish these two rooms, but we know inside something really different is happening.
To illustrate that, let's say inside the minds of our two people, inside of our two rooms, is a little scratch pad. And everything they have to remember in order to do this task has to be written on that little scratch pad.
If we could see what was written on that scratch pad, we would be able to tell how different their approach to the task is. So though the input and the output of these two rooms might be exactly the same, the process of getting from input to output is completely different.
So again, what does that tell us about AI? Even if AI generates completely plausible dialogue and answers questions just like we would expect, it may still be an impostor of sorts. If we want to know if AI understands language like we do, we need to know what it's doing.
We need to get inside to see what it's doing. Is it an impostor or not? We need to see its scratch pad, and we need to be able to compare it to the scratch pad of somebody who actually understands language. But like scratch pads in brains, that's not something we can actually see, right?
Well, it turns out that we can kind of see scratch pads in brains. Using something like fMRI or EEG, we can take what are like little snapshots of the brain while it's reading. So have people read words or stories and then take pictures of their brain.
And those brain images are like fuzzy, out-of-focus pictures of the scratch pad of the brain. They tell us a little bit about how the brain is processing and representing information while you read.
So here are three brain images taken while a person read the words "apartment," "house" and "celery." You can see just with your naked eye that the brain images for "apartment" and "house" are more similar to each other than they are to the brain image for "celery." And you know, of course, that apartments and houses are more similar than they are to celery, just the words.
So, said another way, the brain uses its scratch pad when reading the words "apartment" and "house" in a way that's more similar than when you read the word "celery."
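Under the hood, this kind of comparison is just a similarity measure between activity vectors. Here is a minimal sketch with fabricated vectors standing in for real fMRI images, so the numbers are illustrative only.

```python
# Comparing "scratch pads" as vectors: cosine similarity between
# fabricated activity patterns (stand-ins for real fMRI voxel data).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
base = rng.normal(size=1000)                  # shared "dwelling" signal
apartment = base + 0.5 * rng.normal(size=1000)
house = base + 0.5 * rng.normal(size=1000)
celery = rng.normal(size=1000)                # unrelated concept

print(cosine(apartment, house))   # high: similar scratch pads
print(cosine(apartment, celery))  # near zero: dissimilar scratch pads
```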
The scratch pad tells us a little bit about how the brain represents the language. It's not a perfect picture of what the brain's doing, but it's good enough.
OK, so we have scratch pads for the brain. Now we need a scratch pad for AI.
So inside a lot of AIs is a neural network. And inside of a neural network is a bunch of these little neurons. So here the neurons are like these little gray circles. And we would like to know: what is the scratch pad of a neural network?
Well, when we feed a word into a neural network, each of the little neurons computes a number. Those little numbers I'm representing here with colors. So every neuron computes this little number, and those numbers tell us something about how the neural network is processing language.
Taken together, all of those little circles paint us a picture of how the neural network is representing language, and they give us the scratch pad of the neural network.
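For a modern language model, that scratch pad can be read off as the activation vector the network computes for a word. A minimal sketch, assuming the Hugging Face transformers library and bert-base-uncased as a stand-in model (the talk does not name a specific one):

```python
# Reading a neural network's "scratch pad": the activation vector a
# language model computes for a word. Model choice is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def scratch_pad(word: str) -> torch.Tensor:
    """Every little neuron's number for this word, as one vector."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Average the final-layer token vectors into a single word vector.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

print(scratch_pad("apartment").shape)  # e.g. torch.Size([768])
```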
OK, great. Now we have two scratch pads, one from the brain and one from AI. And we want to know: Is AI doing something like what the brain is doing? How can we test that?
Here's what researchers have come up with. We're going to train a new model. That new model is going to look at the neural network scratch pad for a particular word and try to predict the brain scratch pad for the same word. We can do it the other way around, too, by the way.
So let's train a new model. It's going to look at the neural network scratch pad for a particular word and try to predict the brain scratch pad. If the brain and AI are doing nothing alike, have nothing in common, we won't be able to do this prediction task. It won't be possible to predict one from the other.
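One common way to set up this prediction task is a simple linear (ridge) regression from one scratch pad to the other. This is a sketch with fabricated arrays standing in for real recordings; the model choice and dimensions are assumptions, not the speaker's exact method.

```python
# Scratch pad prediction task: fit a linear map from network activations
# to brain activity. Arrays are fabricated; real studies pair fMRI/EEG
# recordings with model activations for the same words.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_words, nn_dim, brain_dim = 200, 768, 5000
nn_pads = rng.normal(size=(n_words, nn_dim))           # AI scratch pads
true_map = rng.normal(size=(nn_dim, brain_dim))
brain_pads = nn_pads @ true_map + rng.normal(size=(n_words, brain_dim))

train, test = slice(0, 150), slice(150, 200)
predictor = Ridge(alpha=1.0).fit(nn_pads[train], brain_pads[train])
predicted = predictor.predict(nn_pads[test])
# If AI and the brain shared nothing, this prediction would sit at chance.
```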
So we've reached a fork in the road, and you can probably tell I'm about to tell you one of two things: I'm going to tell you AI is amazing, or I'm going to tell you AI is an impostor.
Researchers like me love to remind you that AI is nothing like the brain. And that is true. But could it also be that AI and the brain share something in common?
So we've done this scratch pad prediction task, and it turns out, 75 percent of the time the predicted neural network scratch pad for a particular word is more similar to the true neural network scratch pad for that word than it is to the neural network scratch pad for some other randomly chosen word. 75 percent is much better than chance.
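That "75 percent" corresponds to a pairwise matching test: a prediction counts as correct when the predicted scratch pad sits closer to the true scratch pad for that word than to the one for a randomly chosen other word. A sketch of that scoring, under the same illustrative assumptions as the regression sketch above:

```python
# Pairwise matching accuracy: is each predicted scratch pad closer to its
# own word's true scratch pad than to a random other word's? Chance = 0.5.
import numpy as np

def pairwise_accuracy(predicted, actual, rng=np.random.default_rng(0)):
    n = len(actual)
    hits = 0
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random other word
        d_match = np.linalg.norm(predicted[i] - actual[i])
        d_other = np.linalg.norm(predicted[i] - actual[j])
        hits += d_match < d_other
    return hits / n

# e.g. pairwise_accuracy(predicted, brain_pads[test]) with the arrays
# from the regression sketch above.
```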
What about for more complicated things, not just words, but sentences, even stories? Again, this scratch pad prediction task works. We're able to predict the neural network scratch pad from the brain and vice versa. Amazing.
So does that mean that neural networks and AI understand language just like we do? Well, truthfully, no.
Though these scratch pad prediction tasks show above-chance accuracy, the underlying correlations are still pretty weak. And though neural networks are inspired by the brain, they don't have the same kind of structure and complexity that we see in the brain.
Neural networks also don't exist in the world. A neural network has never opened a door, or seen a sunset, or heard a baby cry. Can a neural network that doesn't actually exist in the world, hasn't really experienced the world, really understand language about the world?
Still, these scratch pad prediction experiments have held up across multiple brain imaging experiments and multiple neural networks. We've also found that as the neural networks get more accurate, they also start to use their scratch pad in a way that becomes more brain-like. And it's not just language. We've seen similar results in navigation and vision.
So AI is not doing exactly what the brain is doing, but it's not completely random either.
So from where I sit, if we want to know if AI really understands language like we do, we need to get inside of the Chinese room. We need to know what the AI is doing, and we need to be able to compare that to what people are doing when they understand language.
AI is moving so fast. Today I'm asking you, "Does AI understand language?" That might seem like a silly question in ten years. Or ten months. (Laughter)
But one thing will remain true. We are meaning-making humans, and we are going to continue to look for meaning and interpret the world around us.
And we will need to remember that if we only look at the input and output of AI, it's very easy to be fooled. We need to get inside of the metaphorical room of AI in order to see what's happening. It's what's inside that counts.
Thank you. (Applause)