05.srt
1
00:00:02,987 --> 00:00:04,987
Subtitles: 慎独; proofreading: lim
2
00:00:05,750 --> 00:00:06,838
Hello everyone
3
00:00:06,875 --> 00:00:09,400
I'm ZOMI, burning the midnight oil
4
00:00:09,400 --> 00:00:11,831
just back from a late-night snack with colleagues
5
00:00:11,831 --> 00:00:13,744
It's already well past midnight
6
00:00:14,500 --> 00:00:15,975
12:23 a.m. now
7
00:00:15,975 --> 00:00:18,215
Today we're still inside the AI chip series
8
00:00:18,215 --> 00:00:20,975
on the topic of AI computing architecture
9
00:00:20,975 --> 00:00:22,455
And today's session
10
00:00:22,455 --> 00:00:24,375
is about the actual computation
11
00:00:24,375 --> 00:00:26,212
namely matrix multiplication
12
00:00:26,212 --> 00:00:27,644
and bit width
13
00:00:27,644 --> 00:00:30,375
two topics I'll walk you through
14
00:00:30,375 --> 00:00:32,873
The main job now is to look at
15
00:00:32,873 --> 00:00:34,984
the overall computing architecture
16
00:00:34,984 --> 00:00:36,450
and matrix multiplication
17
00:00:36,450 --> 00:00:39,375
So today's focus is matrix multiplication
18
00:00:39,375 --> 00:00:42,775
split into two parts
19
00:00:42,775 --> 00:00:45,271
one on matrix multiplication, one on bit width
20
00:00:46,807 --> 00:00:48,471
On to the first part
21
00:00:48,471 --> 00:00:50,375
matrix multiplication
22
00:00:50,375 --> 00:00:51,611
often just called MM
23
00:00:51,611 --> 00:00:54,405
meaning the actual matrix multiply
24
00:00:54,775 --> 00:00:56,581
Let's first review
25
00:00:56,581 --> 00:00:59,375
how we get from convolution to matrix multiplication
26
00:00:59,375 --> 00:01:02,175
In practice, the convolution operation
27
00:01:02,175 --> 00:01:03,551
inside a computer
28
00:01:03,551 --> 00:01:06,775
is rarely executed as an actual convolution
29
00:01:06,775 --> 00:01:11,175
Instead, the convolution kernel is turned into a matrix
30
00:01:11,175 --> 00:01:14,575
and the input feature map is turned into a matrix too
31
00:01:14,575 --> 00:01:19,575
so a single matrix multiply replaces the whole convolution
32
00:01:19,575 --> 00:01:22,975
As shared in an earlier session
33
00:01:22,975 --> 00:01:25,087
multiplying those two matrices
34
00:01:25,087 --> 00:01:28,575
produces the final output
35
00:01:28,575 --> 00:01:33,575
That is how convolution becomes matrix multiplication
36
00:01:33,575 --> 00:01:36,975
Next, note that in practice
37
00:01:36,975 --> 00:01:40,775
the matrices, the feature tensors, can be very large
38
00:01:40,775 --> 00:01:44,975
Say B is the weights, the convolution kernel
39
00:01:44,975 --> 00:01:46,545
A is the feature map
40
00:01:46,545 --> 00:01:49,175
and C is the result; the colors correspond
41
00:01:49,175 --> 00:01:51,975
In computer architecture
42
00:01:51,975 --> 00:01:55,375
storage splits into HBM and cache
43
00:01:55,375 --> 00:01:58,175
and the cache is generally not that large
44
00:01:58,175 --> 00:02:00,575
so we tile the matrices
45
00:02:00,575 --> 00:02:02,975
slicing out one block
46
00:02:02,975 --> 00:02:04,575
slicing out one block at a time
47
00:02:04,575 --> 00:02:07,775
then each product we get is accumulated
48
00:02:07,775 --> 00:02:09,175
multiply, then accumulate
49
00:02:09,175 --> 00:02:11,975
and that yields C
50
00:02:11,975 --> 00:02:15,375
Multiply then accumulate, does that sound familiar?
51
00:02:15,375 --> 00:02:19,375
In the last session we covered MACs
52
00:02:19,375 --> 00:02:21,375
the multiply-accumulate operation
53
00:02:21,375 --> 00:02:24,975
which is exactly the core computation of convolution
54
00:02:26,825 --> 00:02:30,175
On the left is the original convolution operation
55
00:02:30,175 --> 00:02:33,375
a kernel and an input feature map
56
00:02:33,375 --> 00:02:36,375
The kernel values 1 2 3 4 and this patch 1 2 3 4
57
00:02:36,375 --> 00:02:38,975
go through a multiply-accumulate
58
00:02:38,975 --> 00:02:40,575
giving the first output, 1
59
00:02:40,575 --> 00:02:42,375
then the same operation again
60
00:02:42,375 --> 00:02:44,575
1 2 3 4 against the next window
61
00:02:44,575 --> 00:02:46,975
another MAC computation
62
00:02:46,975 --> 00:02:48,575
giving the 2
63
00:02:48,575 --> 00:02:49,975
and so on
64
00:02:49,975 --> 00:02:52,375
until we get the whole output feature map
65
00:02:52,375 --> 00:02:54,375
That is basic convolution
66
00:02:54,375 --> 00:02:55,775
As for the matrix multiply
67
00:02:55,775 --> 00:02:57,575
I only sketched it just now
68
00:02:57,575 --> 00:02:59,375
so let's open it up in detail
69
00:02:59,375 --> 00:03:01,375
Take the weight filter
70
00:03:01,375 --> 00:03:02,575
and flatten it horizontally
71
00:03:02,575 --> 00:03:03,975
the 1 2 3 4 matrix from before
72
00:03:03,975 --> 00:03:05,775
flattened into a row 1 2 3 4
73
00:03:05,775 --> 00:03:08,575
Then the input feature map
74
00:03:08,575 --> 00:03:10,375
the input features
75
00:03:10,375 --> 00:03:12,575
unfold directly into 1 2 4 5
76
00:03:12,575 --> 00:03:14,175
exactly 1 2 4 5
77
00:03:14,175 --> 00:03:15,775
then 2 3 5 6
78
00:03:15,775 --> 00:03:16,775
4 5 7 8
79
00:03:16,775 --> 00:03:17,775
5 6 8 9
80
00:03:17,775 --> 00:03:20,175
forming a concrete matrix
81
00:03:20,175 --> 00:03:21,175
Then this matrix
82
00:03:21,175 --> 00:03:22,575
is multiplied directly with that one
83
00:03:22,575 --> 00:03:23,975
to get the final result
84
00:03:23,975 --> 00:03:24,975
the output feature map
85
00:03:24,975 --> 00:03:26,975
This then is the conversion from convolution
86
00:03:26,975 --> 00:03:27,975
to matrix multiplication
87
00:03:27,975 --> 00:03:29,775
the concrete MM
88
00:03:29,775 --> 00:03:31,375
conversion process
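
To make the im2col walkthrough above concrete, here is a minimal NumPy sketch (mine, not from the lecture; the 3x3 input, 2x2 kernel, and the patch order 1245 / 2356 / 4578 / 5689 follow the slide's example):

```python
import numpy as np

# 3x3 input feature map and 2x2 kernel, matching the lecture's example.
feature_map = np.arange(1, 10).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
kernel = np.array([[1, 2], [3, 4]])

# im2col: unfold each 2x2 window into a column -> patches 1245, 2356, 4578, 5689.
cols = np.stack([
    feature_map[i:i + 2, j:j + 2].ravel()
    for i in range(2) for j in range(2)
], axis=1)                                      # shape (4, 4)

# The convolution becomes a single matrix multiply (one row of MACs per output).
out = kernel.ravel() @ cols                     # shape (4,)
print(out.reshape(2, 2))

# Check against direct sliding-window computation (as on the slide, no kernel flip).
direct = np.array([[(kernel * feature_map[i:i + 2, j:j + 2]).sum()
                    for j in range(2)] for i in range(2)])
assert np.array_equal(out.reshape(2, 2), direct)
```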
89
00:03:31,375 --> 00:03:32,375
Next
90
00:03:32,375 --> 00:03:34,375
let's look at something interesting
91
00:03:34,375 --> 00:03:34,975
which is
92
00:03:34,975 --> 00:03:35,575
that in fact
93
00:03:35,575 --> 00:03:37,975
with matrix multiply and the cache
94
00:03:37,975 --> 00:03:40,175
most of the time you need to tile
95
00:03:40,175 --> 00:03:41,375
for the cache
96
00:03:41,375 --> 00:03:41,975
because
97
00:03:41,975 --> 00:03:43,175
the matrix multiply we just saw
98
00:03:43,175 --> 00:03:45,575
is only one tiny piece of the computation
99
00:03:45,575 --> 00:03:46,575
but the feature map
100
00:03:46,575 --> 00:03:47,175
the features
101
00:03:47,175 --> 00:03:47,975
can be large
102
00:03:47,975 --> 00:03:48,975
very large
103
00:03:48,975 --> 00:03:49,775
and at that point
104
00:03:49,775 --> 00:03:51,175
the system cache
105
00:03:51,175 --> 00:03:52,175
simply cannot hold it
106
00:03:52,175 --> 00:03:52,575
So
107
00:03:52,575 --> 00:03:55,775
we do the multiply block by block
108
00:03:55,775 --> 00:03:56,775
in tiled form
109
00:03:56,775 --> 00:03:57,575
For example, first
110
00:03:57,575 --> 00:03:59,575
I take this piece of data
111
00:03:59,575 --> 00:04:01,975
and multiply it with that piece
112
00:04:01,975 --> 00:04:04,375
that is F_0,0 with I_0,0
113
00:04:04,375 --> 00:04:04,775
Next
114
00:04:04,775 --> 00:04:06,175
I take the second piece
115
00:04:06,175 --> 00:04:07,575
and with the piece below it
116
00:04:07,575 --> 00:04:08,575
do another multiply
117
00:04:08,575 --> 00:04:09,775
the positions still correspond
118
00:04:09,775 --> 00:04:10,375
except
119
00:04:10,375 --> 00:04:12,175
now we also accumulate
120
00:04:12,175 --> 00:04:14,375
So by tiling this way
121
00:04:14,375 --> 00:04:16,975
we make full use of locality
122
00:04:16,975 --> 00:04:18,575
That is matrix tiling
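
A minimal sketch of that tiling loop, assuming a toy block size blk standing in for whatever fits in cache (the names F and I echo the F_0,0 x I_0,0 notation above; illustrative only, not a real kernel):

```python
import numpy as np

def tiled_matmul(F, I, blk=2):
    """Blocked GEMM: each C tile accumulates F-tile @ I-tile products (the MAC pattern)."""
    M, K = F.shape
    K2, N = I.shape
    assert K == K2
    C = np.zeros((M, N))
    for i in range(0, M, blk):
        for j in range(0, N, blk):
            # Walk the K dimension tile by tile: multiply one block pair,
            # then add the partial product into the same C block.
            for k in range(0, K, blk):
                C[i:i + blk, j:j + blk] += F[i:i + blk, k:k + blk] @ I[k:k + blk, j:j + blk]
    return C

F = np.random.rand(4, 6)
I = np.random.rand(6, 8)
assert np.allclose(tiled_matmul(F, I), F @ I)
```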
123
00:04:20,150 --> 00:04:22,775
All of that was just the basic principle
124
00:04:22,775 --> 00:04:23,375
Now let's look at
125
00:04:23,375 --> 00:04:24,775
the libraries on CPU and GPU
126
00:04:24,775 --> 00:04:26,775
that support matrix multiply
127
00:04:26,775 --> 00:04:27,975
For matrix-multiply libraries
128
00:04:27,975 --> 00:04:28,575
today
129
00:04:28,575 --> 00:04:29,975
on CPU there is OpenBLAS
130
00:04:29,975 --> 00:04:31,575
and Intel's MKL
131
00:04:31,575 --> 00:04:32,575
while on GPU
132
00:04:32,575 --> 00:04:33,375
there is cuBLAS
133
00:04:33,375 --> 00:04:35,375
and cuDNN, all ready-made
134
00:04:35,375 --> 00:04:37,375
libraries that support matrix multiply
135
00:04:37,375 --> 00:04:38,975
And the logic they implement
136
00:04:38,975 --> 00:04:40,575
is actually very simple
137
00:04:40,575 --> 00:04:41,775
just two steps
138
00:04:41,775 --> 00:04:43,175
These libraries
139
00:04:43,175 --> 00:04:43,975
first sense
140
00:04:43,975 --> 00:04:46,575
first they sense the shape of the matrix multiply
141
00:04:46,575 --> 00:04:48,375
they need to know its size
142
00:04:48,375 --> 00:04:49,575
and based on that size
143
00:04:49,575 --> 00:04:52,375
they pick the optimal kernel to run
144
00:04:52,375 --> 00:04:53,175
What is a kernel?
145
00:04:53,175 --> 00:04:55,575
We actually covered that before
146
00:04:55,575 --> 00:04:57,175
The concrete implementation
147
00:04:57,175 --> 00:04:59,775
is like what the figure on the right shows
148
00:04:59,775 --> 00:05:02,375
First the loops are optimized
149
00:05:02,375 --> 00:05:04,975
then the multi-level caches are exploited
150
00:05:04,975 --> 00:05:06,975
there is the L1 cache
151
00:05:06,975 --> 00:05:07,575
the L2 cache
152
00:05:07,575 --> 00:05:08,975
and the L3 cache
153
00:05:08,975 --> 00:05:10,975
all three levels of cache are used
154
00:05:10,975 --> 00:05:12,375
And through loop unrolling
155
00:05:12,375 --> 00:05:14,575
this part here unrolls the loops
156
00:05:14,575 --> 00:05:16,975
which concretely implements the whole
157
00:05:16,975 --> 00:05:19,175
GEMM matrix-multiply algorithm
158
00:05:19,175 --> 00:05:20,775
Looking further down
159
00:05:20,775 --> 00:05:22,975
this is the original algorithm
160
00:05:22,975 --> 00:05:24,775
whose loops need unrolling
161
00:05:24,775 --> 00:05:25,775
because the for loops here
162
00:05:25,775 --> 00:05:27,175
are nested far too deeply
163
00:05:27,175 --> 00:05:27,975
If you don't unroll
164
00:05:27,975 --> 00:05:29,975
you end up buried deep
165
00:05:29,975 --> 00:05:31,175
inside the for loops
166
00:05:31,175 --> 00:05:32,775
and cannot properly exploit
167
00:05:32,775 --> 00:05:36,375
the performance of the chip architecture
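
To illustrate the unrolling idea (a sketch only; real OpenBLAS/MKL/cuBLAS kernels do this with register blocking and intrinsics or assembly), the innermost GEMM loop can be unrolled so each iteration issues several independent MACs:

```python
import random

def gemm_naive(A, B, C, M, N, K):
    for i in range(M):
        for j in range(N):
            acc = 0.0
            for k in range(K):
                acc += A[i][k] * B[k][j]          # one MAC per iteration
            C[i][j] = acc

def gemm_unrolled(A, B, C, M, N, K):
    assert K % 4 == 0                             # toy assumption: K divisible by 4
    for i in range(M):
        for j in range(N):
            acc = 0.0
            for k in range(0, K, 4):              # four MACs per iteration
                acc += (A[i][k]     * B[k][j]
                      + A[i][k + 1] * B[k + 1][j]
                      + A[i][k + 2] * B[k + 2][j]
                      + A[i][k + 3] * B[k + 3][j])
            C[i][j] = acc

M, N, K = 3, 5, 8
A = [[random.random() for _ in range(K)] for _ in range(M)]
B = [[random.random() for _ in range(N)] for _ in range(K)]
C1 = [[0.0] * N for _ in range(M)]
C2 = [[0.0] * N for _ in range(M)]
gemm_naive(A, B, C1, M, N, K)
gemm_unrolled(A, B, C2, M, N, K)
assert all(abs(C1[i][j] - C2[i][j]) < 1e-9 for i in range(M) for j in range(N))
```

In pure Python the gain is negligible, but in a compiled language the unrolled body exposes independent MACs the compiler can schedule onto the chip's SIMD units.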
168
00:05:36,375 --> 00:05:37,375
Next
169
00:05:37,375 --> 00:05:38,575
as for convolution
170
00:05:38,575 --> 00:05:40,175
it can be transformed not only
171
00:05:40,175 --> 00:05:41,775
the image-to-column (im2col) way
172
00:05:41,775 --> 00:05:42,775
but also via
173
00:05:42,775 --> 00:05:43,975
the fast Fourier transform
174
00:05:43,975 --> 00:05:45,375
the Strassen approach
175
00:05:45,375 --> 00:05:46,375
and Winograd
176
00:05:46,375 --> 00:05:47,775
In the earlier inference-engine
177
00:05:47,775 --> 00:05:48,775
kernel-optimization series
178
00:05:48,775 --> 00:05:50,175
I actually walked you through
179
00:05:50,175 --> 00:05:51,175
convolution optimization
180
00:05:51,175 --> 00:05:52,375
the image-to-column algorithm
181
00:05:52,375 --> 00:05:53,975
and the Winograd algorithms
182
00:05:53,975 --> 00:05:55,375
quite a long series of them
183
00:05:55,375 --> 00:05:56,975
Everyone is very welcome to go back
184
00:05:56,975 --> 00:05:58,175
and take a look at
185
00:05:58,175 --> 00:06:00,975
the material I presented earlier
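
As a pointer, here is a tiny sketch of the FFT route just mentioned, using SciPy's standard fftconvolve (the inputs reuse the earlier 3x3/2x2 example; this is my illustration, not the lecture's code):

```python
import numpy as np
from scipy.signal import fftconvolve, convolve2d

feature_map = np.arange(1.0, 10.0).reshape(3, 3)
kernel = np.array([[1.0, 2.0], [3.0, 4.0]])

# Convolution via FFT: pointwise multiply in the frequency domain
# instead of sliding MACs in the spatial domain.
out_fft = fftconvolve(feature_map, kernel, mode='valid')

# Same result as direct spatial convolution.
out_direct = convolve2d(feature_map, kernel, mode='valid')
assert np.allclose(out_fft, out_direct)
```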
186
00:06:00,975 --> 00:06:02,975
Now let's turn back and look at
187
00:06:02,975 --> 00:06:04,775
how, inside an AI chip
188
00:06:04,775 --> 00:06:06,575
or really inside any
189
00:06:06,575 --> 00:06:08,375
domain-specific chip
190
00:06:08,375 --> 00:06:09,775
this should be done
191
00:06:09,775 --> 00:06:10,775
First of all
192
00:06:10,775 --> 00:06:13,175
we want to reduce instruction overhead
193
00:06:13,175 --> 00:06:14,975
and there are two approaches here
194
00:06:14,975 --> 00:06:16,375
The first is that each instruction
195
00:06:16,375 --> 00:06:18,375
should execute more
196
00:06:18,375 --> 00:06:19,575
multiply-accumulates
197
00:06:19,575 --> 00:06:21,175
that is, MAC computations
198
00:06:21,175 --> 00:06:22,575
You can see that the CPU
199
00:06:22,575 --> 00:06:24,775
has the SIMD architecture
200
00:06:24,775 --> 00:06:27,175
where each vector instruction
201
00:06:27,175 --> 00:06:28,775
processes multiple data at once
202
00:06:28,775 --> 00:06:30,975
The GPU has the SIMT architecture
203
00:06:30,975 --> 00:06:33,375
and can process whole tensors
204
00:06:33,375 --> 00:06:34,375
As for the NPU
205
00:06:34,375 --> 00:06:35,975
it may adopt a SIMD architecture
206
00:06:35,975 --> 00:06:37,975
or a dataflow design
207
00:06:37,975 --> 00:06:39,775
and may expose both
208
00:06:39,775 --> 00:06:41,975
tensor and vector instructions
209
00:06:41,975 --> 00:06:43,175
which is quite interesting
210
00:06:43,175 --> 00:06:44,375
The goal is
211
00:06:44,375 --> 00:06:46,575
that in every clock cycle
212
00:06:46,575 --> 00:06:49,175
we run more MACs
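
A rough way to feel this from Python (NumPy's dot dispatches to vectorized SIMD/BLAS code, standing in for the wide vector instructions described above; timings are machine-dependent and this is only a sketch):

```python
import time
import numpy as np

n = 100_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# Scalar path: one multiply-accumulate per interpreted iteration.
t0 = time.perf_counter()
acc = 0.0
for i in range(n):
    acc += float(a[i]) * float(b[i])
t_scalar = time.perf_counter() - t0

# Vector path: one np.dot call; internally it issues many MACs per instruction.
t0 = time.perf_counter()
acc_vec = float(np.dot(a, b))
t_vector = time.perf_counter() - t0

print(f"scalar loop: {t_scalar:.4f}s   vectorized dot: {t_vector:.6f}s")
assert abs(acc - acc_vec) < 1.0   # float32 accumulation differs slightly
```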
213
00:06:49,175 --> 00:06:49,975
And the second
214
00:06:49,975 --> 00:06:51,575
on the hardware side
215
00:06:51,575 --> 00:06:52,775
is that, without increasing
216
00:06:52,775 --> 00:06:54,575
memory bandwidth
217
00:06:54,575 --> 00:06:57,975
we execute more MACs per clock cycle
218
00:06:57,975 --> 00:06:59,375
And here is an interesting point
219
00:06:59,375 --> 00:07:03,175
executing more within a single clock cycle
220
00:07:03,175 --> 00:07:04,775
is also the design behind
221
00:07:04,775 --> 00:07:06,775
NVIDIA's Tensor Core
222
00:07:06,775 --> 00:07:07,975
and correspondingly Huawei's
223
00:07:07,975 --> 00:07:10,375
NPU draws on the same design
224
00:07:10,375 --> 00:07:11,975
in its implementation
225
00:07:11,975 --> 00:07:16,175
Now let's open this figure up in detail
226
00:07:16,175 --> 00:07:18,175
Suppose that to reach this goal
227
00:07:18,175 --> 00:07:19,375
of more MAC operations
228
00:07:19,375 --> 00:07:20,775
per clock cycle
229
00:07:20,775 --> 00:07:21,975
we might take
230
00:07:21,975 --> 00:07:24,375
a 512-bit unit, where each cycle
231
00:07:24,375 --> 00:07:25,575
that is, each clock cycle
232
00:07:25,575 --> 00:07:27,975
can process 512 bits
233
00:07:27,975 --> 00:07:29,175
When running 8-bit data
234
00:07:29,175 --> 00:07:31,375
that allows 64 operations
235
00:07:31,375 --> 00:07:33,175
while at 32 bits
236
00:07:33,175 --> 00:07:35,175
you only get 16
237
00:07:35,175 --> 00:07:35,975
and at that point
238
00:07:35,975 --> 00:07:39,375
the impact on performance is very large
239
00:07:39,375 --> 00:07:42,775
Say I execute 64 8-bit operations at a time
240
00:07:42,775 --> 00:07:44,775
then I can get through a great many computations
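
The lane arithmetic behind those numbers, as a tiny sketch (512 bits is the example width used above):

```python
# How many parallel operations fit in one 512-bit vector register per cycle.
REGISTER_BITS = 512
for element_bits in (8, 16, 32):
    lanes = REGISTER_BITS // element_bits
    print(f"{element_bits:>2}-bit elements -> {lanes} MACs per cycle")
# 8-bit -> 64, 16-bit -> 32, 32-bit -> 16: halving the bit width doubles throughput.
```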
241
00:07:44,775 --> 00:07:48,575
Which brings us to the next topic
242
00:07:48,575 --> 00:07:50,375
reducing bit width
243
00:07:50,375 --> 00:07:51,775
But before reducing bit width
244
00:07:51,775 --> 00:07:53,775
let's come back to matrix multiply
245
00:07:53,775 --> 00:07:55,775
and think things through
246
00:07:55,775 --> 00:07:56,575
First
247
00:07:56,575 --> 00:07:59,175
at the software level
248
00:07:59,175 --> 00:08:01,575
it pays to eliminate
249
00:08:01,575 --> 00:08:04,775
MACs that never needed computing at all
250
00:08:04,775 --> 00:08:07,175
replacing them with other algorithms