How to Get Inside the "Brain" of AI | Alona Fyshe | TED

58,799 views ・ 2023-04-03

TED


Translator: Weihao Sheng
Reviewer: Yip Yan Yeung

00:04
People are funny. We're constantly trying to understand and interpret the world around us. I live in a house with two black cats, and let me tell you, every time I see a black, bunched-up sweater out of the corner of my eye, I think it's a cat. It's not just the things we see. Sometimes we attribute more intelligence than might actually be there.

00:23
Maybe you've seen the dogs on TikTok. They have these little buttons that say things like "walk" or "treat." They can push them to communicate some things with their owners, and their owners think they use them to communicate some pretty impressive things. But do the dogs know what they're saying?

00:39
Or perhaps you've heard the story of Clever Hans, the horse who could do math. And not just simple math problems, really complicated ones, like: if the eighth day of the month falls on a Tuesday, what's the date of the following Friday? That's pretty impressive for a horse.

00:55
Unfortunately, Hans wasn't doing math, but what he was doing was equally impressive. Hans had learned to watch the people in the room to tell when he should tap his hoof; he communicated his answers by tapping. It turns out that if you know the answer to "if the eighth day of the month falls on a Tuesday, what's the date of the following Friday," you will subconsciously change your posture once the horse has given the correct 18 taps. So Hans couldn't do math, but he had learned to watch the people in the room who could do math, which is, I mean, still pretty impressive for a horse.

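A quick sanity check of those 18 taps, reading "the following Friday" as the Friday of the following week (the reading the talk's answer implies); the month below is arbitrary, chosen only because its 8th falls on a Tuesday:

```python
from datetime import date, timedelta

# If the 8th is a Tuesday, the Friday of that week is the 11th,
# and the Friday of the following week is the 18th.
eighth = date(2023, 8, 8)              # August 8, 2023 was a Tuesday
assert eighth.weekday() == 1           # Monday is 0, so Tuesday is 1
this_weeks_friday = eighth + timedelta(days=(4 - eighth.weekday()) % 7)
following_friday = this_weeks_friday + timedelta(days=7)
print(following_friday.day)            # prints 18, the number Hans tapped
```
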
01:28
But this is an old picture, and we would not fall for Clever Hans today. Or would we?

01:34
Well, I work in AI, and let me tell you, things are wild. There have been multiple examples of people being completely convinced that AI understands them. In 2022, a Google engineer thought that Google's AI was sentient. And you may have had a really human-like conversation with something like ChatGPT. But the models we're training today are so much better than the models we had even five years ago. It really is remarkable.

02:02
So at this super crazy moment in time, let's ask the super crazy question: Does AI understand us, or are we having our own Clever Hans moment?

02:13
Some philosophers think that computers will never understand language. To illustrate this, they developed something they call the Chinese room argument.

02:21
In the Chinese room there is a person, a hypothetical person, who does not understand Chinese, but he has with him a set of instructions that tell him how to respond in Chinese to any Chinese sentence.

02:33
Here's how the Chinese room works. A piece of paper comes in through a slot in the door with something written in Chinese on it. The person uses their instructions to figure out how to respond. They write the response down on a piece of paper and then send it back out through the door. To somebody who speaks Chinese standing outside this room, it might seem like the person inside the room speaks Chinese. But we know they do not, because no knowledge of Chinese is required to follow the instructions. Performance on this task does not show that you know Chinese.

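To make the argument concrete, here is a toy sketch of the room as pure rule-following; the rule table is invented for illustration, and the point is only that input maps to output with no representation of meaning in between:

```python
# A toy Chinese room: replies come from a rule table, not from understanding.
RULES = {
    "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天星期几?": "今天星期二。",    # "What day is it?" -> "It's Tuesday."
}

def reply(note: str) -> str:
    """Follow the instructions blindly; fall back to a stock response."""
    return RULES.get(note, "请再说一遍。")  # "Please say that again."

print(reply("你好吗?"))  # fluent-looking output, zero comprehension inside
```
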
03:05
So what does that tell us about AI? Well, when you and I speak to one of these AIs like ChatGPT, we are the person standing outside the room. We're feeding in English sentences, we're getting English sentences back. It really looks like the models understand us. It really looks like they know English. But under the hood, these models are just following a set of instructions, albeit a complex one.

03:32
How do we know if AI understands us?

03:36
To answer that question, let's go back to the Chinese room again. Let's say we have two Chinese rooms. In one Chinese room is somebody who actually speaks Chinese, and in the other room is our impostor. When the person who actually speaks Chinese gets a piece of paper that says something in Chinese on it, they can read it, no problem. But when our impostor gets it, again he has to use his set of instructions to figure out how to respond. From the outside, it might be impossible to distinguish these two rooms, but we know that inside, something really different is happening.

04:07
To illustrate that, let's say that inside the minds of our two people, inside of our two rooms, is a little scratch pad. And everything they have to remember in order to do this task has to be written on that little scratch pad. If we could see what was written on that scratch pad, we would be able to tell how different their approaches to the task are. So though the input and the output of these two rooms might be exactly the same, the process of getting from input to output is completely different.

04:35
So again, what does that tell us about AI? Even if AI generates completely plausible dialogue and answers questions just like we would expect, it may still be an impostor of sorts. If we want to know whether AI understands language like we do, we need to know what it's doing. We need to get inside to see what it's doing. Is it an impostor or not?

04:55
We need to see its scratch pad, and we need to be able to compare it to the scratch pad of somebody who actually understands language. But like scratch pads in brains, that's not something we can actually see, right?

05:07
Well, it turns out that we can kind of see scratch pads in brains. Using something like fMRI or EEG, we can take what are like little snapshots of the brain while it's reading. So we have people read words or stories and then take pictures of their brains. And those brain images are like fuzzy, out-of-focus pictures of the scratch pad of the brain. They tell us a little bit about how the brain is processing and representing information while you read.

05:33
So here are three brain images taken while a person read the words "apartment," "house" and "celery." You can see just with your naked eye that the brain images for "apartment" and "house" are more similar to each other than they are to the brain image for "celery." And you know, of course, that apartments and houses are more similar to each other than they are to celery, just as words. So, said another way, the brain uses its scratch pad when reading the words "apartment" and "house" in a way that's more similar than when you read the word "celery."

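One simple way to put a number on "more similar" is to flatten each brain image into a vector and compare vectors, for example with cosine similarity; the talk doesn't specify a metric, so the sketch below uses synthetic vectors purely to illustrate the comparison:

```python
import numpy as np

# Synthetic stand-ins for flattened brain images; real fMRI data is far larger.
rng = np.random.default_rng(0)
shared = rng.normal(size=500)                  # structure shared by related words
apartment = shared + 0.5 * rng.normal(size=500)
house = shared + 0.5 * rng.normal(size=500)
celery = rng.normal(size=500)                  # an unrelated pattern

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(apartment, house))   # high: the scratch pad is used similarly
print(cosine(apartment, celery))  # near zero: dissimilar use
```
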
06:03
The scratch pad tells us a little bit about how the brain represents language. It's not a perfect picture of what the brain's doing, but it's good enough.

06:11
OK, so we have scratch pads for the brain. Now we need a scratch pad for AI.

06:16
So inside a lot of AIs is a neural network, and inside of a neural network is a bunch of these little neurons. Here the neurons are like these little gray circles, and we would like to know: what is the scratch pad of a neural network? Well, when we feed a word into a neural network, each of the little neurons computes a number. Those little numbers I'm representing here with colors. So every neuron computes this little number, and those numbers tell us something about how the neural network is processing language. Taken together, all of those little circles paint us a picture of how the neural network is representing language, and they give us the scratch pad of the neural network.

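In practice, "reading the scratch pad" of a model means recording its hidden activations. A minimal sketch, assuming the Hugging Face transformers package; GPT-2 is an illustrative choice, not the model from the talk:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Record the activations (the "little numbers") a language model computes
# for a word.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("apartment", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# One activation vector per layer for the last token; concatenated, they are
# the network's scratch pad for the word (13 layers x 768 units for GPT-2).
scratch_pad = torch.cat([layer[0, -1] for layer in out.hidden_states])
print(scratch_pad.shape)  # torch.Size([9984])
```
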
06:57
OK, great. Now we have two scratch pads, one from the brain and one from AI. And we want to know: is AI doing something like what the brain is doing? How can we test that?

07:07
Here's what researchers have come up with. We're going to train a new model. That new model is going to look at the neural network scratch pad for a particular word and try to predict the brain scratch pad for the same word. We can do it the other way around, too, by the way. So let's train a new model. It's going to look at the neural network scratch pad for a particular word and try to predict the brain scratch pad. If the brain and AI are doing nothing alike, have nothing in common, we won't be able to do this prediction task. It won't be possible to predict one from the other.

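A minimal sketch of that prediction task, using ridge regression as the "new model" (a common choice for this kind of mapping, though the talk doesn't name one) and synthetic data in place of real scratch pads:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic scratch pads, one row per word; all sizes are placeholders.
n_words, net_dim, brain_dim = 200, 64, 50
rng = np.random.default_rng(1)
net_pads = rng.normal(size=(n_words, net_dim))           # AI scratch pads
mapping = 0.1 * rng.normal(size=(net_dim, brain_dim))    # shared structure
brain_pads = net_pads @ mapping + 0.5 * rng.normal(size=(n_words, brain_dim))

X_train, X_test, y_train, y_test = train_test_split(
    net_pads, brain_pads, random_state=0)

# The "new model": map network scratch pads to brain scratch pads.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
predicted = decoder.predict(X_test)
print(decoder.score(X_test, y_test))  # above 0 only if the two share structure
```
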
07:36
So we've reached a fork in the road, and you can probably tell I'm about to tell you one of two things: I'm going to tell you AI is amazing, or I'm going to tell you AI is an impostor.

07:48
Researchers like me love to remind you that AI is nothing like the brain. And that is true. But could it also be that AI and the brain share something in common?

07:58
So we've done this scratch pad prediction task, and it turns out that, 75 percent of the time, the predicted neural network scratch pad for a particular word is more similar to the true neural network scratch pad for that word than it is to the neural network scratch pad for some other randomly chosen word. 75 percent is much better than chance.

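That figure comes from a matching test of roughly this shape: a prediction counts as a hit when it lands closer to its own word's true scratch pad than to a randomly chosen other word's, so chance is 50 percent. A sketch with synthetic data (the exact published protocol may differ):

```python
import numpy as np

def matching_accuracy(predicted: np.ndarray, true: np.ndarray, seed: int = 2) -> float:
    """Fraction of words whose predicted scratch pad is closer to its own
    true scratch pad than to a randomly chosen other word's (chance = 0.5)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for i, pred in enumerate(predicted):
        j = rng.choice([k for k in range(len(true)) if k != i])
        if np.linalg.norm(pred - true[i]) < np.linalg.norm(pred - true[j]):
            hits += 1
    return hits / len(predicted)

# Demo on noisy synthetic predictions; real inputs would be the predicted and
# held-out scratch pads from a model like the regression sketch above.
rng = np.random.default_rng(3)
true = rng.normal(size=(100, 20))
predicted = true + rng.normal(size=(100, 20))
print(matching_accuracy(predicted, true))  # well above 0.5
```
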
08:17
What about for more complicated things, not just words, but sentences, even stories? Again, this scratch pad prediction task works. We're able to predict the neural network scratch pad from the brain and vice versa. Amazing.

08:30
So does that mean that neural networks and AI understand language just like we do? Well, truthfully, no. Though these scratch pad prediction tasks show above-chance accuracy, the underlying correlations are still pretty weak.

08:45
And though neural networks are inspired by the brain, they don't have the same kind of structure and complexity that we see in the brain. Neural networks also don't exist in the world. A neural network has never opened a door, or seen a sunset, or heard a baby cry. Can a neural network that doesn't actually exist in the world, that hasn't really experienced the world, really understand language about the world?

09:08
Still, these scratch pad prediction experiments have held up across multiple brain imaging experiments and multiple neural networks. We've also found that as the neural networks get more accurate, they also start to use their scratch pad in a way that becomes more brain-like. And it's not just language; we've seen similar results in navigation and vision. So AI is not doing exactly what the brain is doing, but it's not completely random either.

09:34
So from where I sit, if we want to know if AI really understands language like we do, we need to get inside of the Chinese room. We need to know what the AI is doing, and we need to be able to compare that to what people are doing when they understand language.

09:48
AI is moving so fast. Today I'm asking you, "Does AI understand language?" That might seem like a silly question in ten years. Or ten months.
(Laughter)

09:58
But one thing will remain true. We are meaning-making humans, and we are going to continue to look for meaning and interpret the world around us. And we will need to remember that if we only look at the input and output of AI, it's very easy to be fooled. We need to get inside the metaphorical room of AI in order to see what's happening. It's what's inside that counts. Thank you.
(Applause)