notes-yolov5-obb-2022.txt
(repo forked from hukaixuan19970627/yolov5_obb)
==========================================================================================================================
yolov5 v6.0 obb version
20220111
just found yolov5_obb github updated with latest yolov5 code!!!
https://github.com/hukaixuan19970627/yolov5_obb
with test/dota dataset support
https://github.com/hukaixuan19970627/yolov5_obb/issues/146
Q: Does the training dataset still need to be split into 1024x1024 tiles?
A: The code does the conversion internally and automatically.
There is a demo in GetStart.md; you can split or not. Input images of any aspect ratio are supported. For high-resolution datasets, splitting for train/test gives better results.
https://github.com/hukaixuan19970627/yolov5_obb/issues/124
TensorBoard is more convenient for viewing the loss curves.
The parameters worth tuning are the per-loss weights and some data-augmentation parameters; leave everything else at the defaults unless you understand the code logic.
For remote-sensing image tricks, see this blog post: https://zhuanlan.zhihu.com/p/422764914
https://github.com/hukaixuan19970627/yolov5_obb/issues/137
The next version will be rebuilt on the latest yolov5 and will support all its features. Upload was planned for this week, but accuracy dropped quite a bit and the cause is still being tracked down.
Updated: the new code is faultless in both speed and accuracy; yolov5 really is excellent.
Updated: please download the latest version; final accuracy and training speed are both much improved.
In the next version you won't need to convert labels yourself; the code converts them automatically. Just make sure the label format matches what the DOTA dataset provides.
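For reference, each DOTA labelTxt line is "x1 y1 x2 y2 x3 y3 x4 y4 category difficult"; a made-up example line:
    1686.0 1517.0 1695.0 1511.0 1711.0 1535.0 1702.0 1541.0 small-vehicle 0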
(pytorch2021) F:\ws\yolov5_obb>cd utils/nms_rotated
(pytorch2021) F:\ws\yolov5_obb\utils\nms_rotated>python setup.py develop
[1/4] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin\nvcc --use-local-env ...
nvcc fatal : Option '--cl-version' needs to be specified with option '--use-local-env'
cudatoolkit 10.2.89 h74a9793_1
I guess it needs nvcc from CUDA 10.2.89, not CUDA 9.
Installed CUDA 10.2.89 on Windows,
but still cannot build the nms_rotated module.
Just build the CPU version instead of the CUDA one,
and it works!
def make_cuda_ext(name, module, sources, sources_cuda=[]):
    define_macros = []
    extra_compile_args = {'cxx': []}
    # if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1':
    #     define_macros += [('WITH_CUDA', None)]
    #     extension = CUDAExtension
    #     extra_compile_args['nvcc'] = [
    #         '-D__CUDA_NO_HALF_OPERATORS__',
    #         '-D__CUDA_NO_HALF_CONVERSIONS__',
    #         '-D__CUDA_NO_HALF2_OPERATORS__',
    #     ]
    #     sources += sources_cuda
    # else:
    print(f'Compiling {name} without CUDA')  # CUDA branch commented out to force a CPU-only build
    extension = CppExtension
    # raise EnvironmentError('CUDA is required to compile MMDetection!')
    # ... (rest of the function as in the original setup.py)
still got RuntimeError: Not compiled with GPU support
on python detect.py...
restart conda
then call vc2017.bat
and set DISTUTILS_USE_SDK
then python setup.py develop
nvcc complains about an undefined "eps" constant (the host-side const below is referenced from __device__ code):
    const double eps = 1E-8;
    __device__ inline int sig(float d) {
        return (d > 1E-8) - (d < -1E-8);  // eps already replaced with the literal here
    }
Replace eps with the literal 1e-8 in the device code,
and it works!
F:\ws\yolov5_obb>"C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Auxiliary\Build\vcvars64.bat"
set DISTUTILS_USE_SDK=1
(pytorch2021) F:\ws\yolov5_obb\utils\nms_rotated>python setup.py develop
(pytorch2021) F:\ws\yolov5_obb>pip install --upgrade requests
ERROR: pip's dependency resolver does not currently take into account all the packages that
are installed. This behaviour is the source of the following dependency conflicts.
sotabenchapi 0.0.16 requires requests==2.22.0, but you have requests 2.27.1 which is incompatible.
ignore the package dependency warning
cudnn not installed yet
convert dataset or re-crop dataset for yolov5-v6
(pytorch2021) F:\ws\yolov5_obb>python detect.py --device 0 --weight "runs/yolov5m_finetune/weights/best.pt" --source F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 --view-img
update yolov5 v6 obb
save .vehicle_markers.json for yolov5_obb
python detect.py --device 0 --weight "runs/yolov5m_finetune/weights/best.pt" --source F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 --view-img --save-json --img-size 4000 6016
python pascalvoc.py -gtformat obb_json -detformat obb_json -gt ..\efficientdet_pytorch_win64\_datasets\_test_sets\private170 -det F:\ws\yolov5_obb\runs\detect\exp35
pretrained yolov5m_obb with DOTAv1.5_subsize1024_gap200_rate1.0
vehicle width: 15 pixels
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 90.44% | 90.40% | 90.27% | 89.90% | 87.37% | 79.30% | 56.19% | 20.20% | 1.82% | 0.01% | 60.59% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
SUMMARY:60.59
vehicle width: 20 pixels
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 87.02% | 86.98% | 86.95% | 86.57% | 85.51% | 80.40% | 65.01% | 34.59% | 4.90% | 0.04% | 61.80% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
SUMMARY:61.80
vehicle width: 25 pixels
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 86.89% | 86.89% | 86.80% | 86.35% | 85.33% | 80.97% | 71.91% | 44.09% | 11.58% | 0.19% | 64.10% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
SUMMARY:64.10
vehicle width: 30 pixels
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 87.76% | 87.69% | 87.50% | 87.41% | 86.50% | 82.45% | 72.92% | 48.56% | 14.58% | 0.26% | 65.56% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
SUMMARY:65.56
vehicle width: 35 pixels
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
DETAIL:| 87.33% | 87.30% | 87.13% | 86.47% | 85.70% | 81.81% | 71.73% | 48.41% | 16.98% | 0.34% | 65.32% |
DETAIL:+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
SUMMARY:65.32
===================================================================================================================================================
20221015
https://github.com/mljack/yolov5_obb/blob/master/docs/install.md
conda create -n yolov5_obb python=3.9 -y
source activate yolov5_obb
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
cd yolov5_obb
pip install -r requirements.txt
cd utils/nms_rotated
pip install -v -e .
cd yolov5_obb/DOTA_devkit
sudo apt-get install swig
swig -c++ -python polyiou.i
python setup.py build_ext --inplace
python convert_to_yolo_poly_dataset.py
python -u train.py --save-period 1 --weight "runs/yolov5m_finetune/weights/best.pt" --data "data/eagle003.yaml" --hyp "data/hyps/obb/hyp.finetune_eagle003.yaml" --img-size 768 --batch-size 32 --epochs 80 --device 0 2>&1 | tee 20221016_0325.log
Heading angles are sometimes incorrect...
Due to augmentation?
Or an incorrect label converter?
Inverted y-axis...
train: Scanning 'data/eagle/0001_shijidadao_20200907_1202_200m_fixed/train/labelTxt' images and labels...64581 found, 0 missing, 0 empty, 33317 corrupted: 100%|██████████| 64581/64581 [00:02<00:00, 22076.84it/s]
val: Scanning 'data/eagle/0011_private170/labelTxt' images and labels...490 found, 0 missing, 0 empty, 135 corrupted: 100%|██████████| 490/490 [00:00<00:00, 13615.34it/s]
val: WARNING: data/eagle/0011_private170/images/yanggaobeilu_20210327_134654_a_01.jpg: ignoring corrupt image/label: negative label values [ -21.75 -48.03 -3.98], please check your dota format labels
yolov5_obb/utils/datasets.py
def verify_image_label(args):
    ...
    # assert (l >= 0).all(), f'negative label values {l[l < 0]}, please check your dota format labels'
Comment out the negative-coordinate check.
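Alternatively, a gentler workaround would be to clamp polygons into the image during conversion; a minimal sketch (this helper is hypothetical, not part of convert_to_yolo_poly_dataset.py):
    import numpy as np

    def clamp_poly(poly, img_w, img_h):
        # poly: 8 floats, x1 y1 x2 y2 x3 y3 x4 y4 in pixels (DOTA corner order)
        poly = np.asarray(poly, dtype=np.float32)
        poly[0::2] = poly[0::2].clip(0, img_w - 1)  # x coordinates
        poly[1::2] = poly[1::2].clip(0, img_h - 1)  # y coordinates
        return poly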
exp    weights           train          val            hyp            epochs          val_mAP   private170_mAP (val.py)
exp4   yolov5m_finetune  eagle          private170     DroneVehicle   17/23           84.835%   85.1%
exp5   yolov5m_finetune  eagle          private170     DOTA           17/19           85.308%   85.8%
exp6   yolov5m_finetune  eagle+longyao  private170     DOTA           13/23           85.701%   86.1%
exp7   yolov5m_finetune  eagle+longyao  eagle+longyao  DOTA           99/99           92.760%
       eagle+longyao_val:  93.6%@epoch99  93.1%@epoch75  92.2%@epoch50  90.5%@epoch25  88.7%@epoch10
       private170_mAP:     86.2%@epoch99  86.3%@epoch75  86.4%@epoch50  86.0%@epoch25  85.8%@epoch10
exp8   yolov5m_finetune  eagle+longyao  eagle+longyao  DroneVehicle   32/interrupted  88.272%   86.1%@epoch32  85.8%@epoch25  85.1%@epoch10
exp9   yolov5x6          eagle+longyao  eagle+longyao  DOTA
       eagle+longyao_val:  88.9%@epoch60  87.1%@epoch50  87.1%@epoch40  86.0%@epoch30  85.5%@epoch20  85.2%@epoch10
       private170_mAP:     85.3%@epoch60  85.1%@epoch50  85.4%@epoch40  85.6%@epoch30  84.6%@epoch20  84.9%@epoch10
exp10  yolov5m6          eagle+longyao  eagle+longyao  DOTA
       eagle+longyao_val:  86.1%@epoch60  84.9%@epoch50  84.0%@epoch40  83.9%@epoch30  83.5%@epoch20  83.8%@epoch10
       private170_mAP:     85.4%@epoch60  85.3%@epoch50  85.4%@epoch40  85.5%@epoch30  85.0%@epoch20  84.4%@epoch10
exp11  yolov5m           eagle+longyao  eagle+longyao  DOTA
       eagle+longyao_val:  93.4%@epoch99  92.1%@epoch60  91.4%@epoch50  90.5%@epoch40  89.3%@epoch30  87.8%@epoch20  86.4%@epoch10
       private170_mAP:     86.7%@epoch99  86.8%@epoch60  86.7%@epoch50  86.3%@epoch40  86.2%@epoch30  86.0%@epoch20  85.8%@epoch10
exp13  yolov5x           eagle+longyao  eagle+longyao  DOTA
       eagle+longyao_val:  92.5%@epoch60  92.1%@epoch50  91.5%@epoch40  90.7%@epoch30  89.3%@epoch20  87.8%@epoch10
       private170_mAP:     86.5%@epoch60  86.7%@epoch50  86.6%@epoch40  86.6%@epoch30  86.3%@epoch20  86.3%@epoch10
exp14  yolov5x (640)     eagle+longyao  eagle+longyao  DOTA (768)
       eagle+longyao_val:       92.6%@epoch99  %@epoch60  %@epoch50  %@epoch40  %@epoch30  %@epoch20  85.2%@epoch10
       private170_mAP (768):    86.1%@epoch99  86.3%@epoch60  86.3%@epoch50  86.4%@epoch40  86.3%@epoch30  85.8%@epoch20  85.7%@epoch10
       private170_mAP (640):    85.7%@epoch99  86.2%@epoch60  86.2%@epoch50  86.1%@epoch40  86.0%@epoch30  86.0%@epoch20  85.7%@epoch10
yolov5 v6 training hyperparameters differ a lot from earlier versions, which slows training convergence (on the val set) and also costs about 1% mAP on the test set.
Starting from the DOTA v1.5 OBB pretrained weights provided by yolov5_obb versus the official yolov5 COCO pretrained weights,
then finetuning with the schedule in data/hyps/obb/hyp.finetune_dota.yaml,
the final accuracy is similar, with no obvious difference.
python val.py --batch-size 16 --imgsz 768 --device=1 --data "data/eagle003.yaml" --weights runs/train/exp9/weights/epoch40.pt
Increase lr (SGD + OneCycleLR) and train yolov5m6 again:
    lr0: 0.01
    lrf: 0.2
to
    lr0: 0.04
    lrf: 0.05
Training blew up at epoch 15.
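For context, yolov5 builds its one-cycle schedule from lr0/lrf roughly like this (sketch of the one_cycle helper in yolov5, reproduced from memory; exact code may differ between versions):
    import math

    def one_cycle(y1=0.0, y2=1.0, steps=100):
        # cosine ramp from y1 to y2 over `steps` epochs (yolov5-style)
        return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

    lf = one_cycle(1, 0.05, 100)   # lrf=0.05 over 100 epochs
    # the schedule multiplies lr0, ending at lr0 * lrf; note 0.04 * 0.05 = 0.002
    # equals the default floor 0.01 * 0.2 = 0.002, so only the start is 4x hotter
    print(lf(0), lf(100))          # 1.0 at epoch 0, 0.05 at the final epoch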
train yolov5x
python -u train.py --save-period 1 --weight "yolov5x.pt" --data "data/eagle003c.yaml" --hyp "data/hyps/obb/hyp.finetune_eagle003b.yaml" --img-size 768 --batch-size 32 --epochs 100 --name exp13 --exist-ok --device 0 2>&1 | tee -a eagle_exp13_20221101_g_eagle_longyao_dataset_val_dota_finetune_hyp_yolov5x.log
Epoch gpu_mem box obj cls theta labels img_size
0%| | 0/2106 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/home/me/1TSSD/maliang/yolov5_obb/train.py", line 633, in <module>
main(opt)
File "/home/me/1TSSD/maliang/yolov5_obb/train.py", line 530, in main
train(opt.hyp, opt, device, callbacks)
File "/home/me/1TSSD/maliang/yolov5_obb/train.py", line 325, in train
pred = model(imgs) # forward
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/me/1TSSD/maliang/yolov5_obb/models/yolo.py", line 147, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/home/me/1TSSD/maliang/yolov5_obb/models/yolo.py", line 177, in _forward_once
x = m(x) # run
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/me/1TSSD/maliang/yolov5_obb/models/common.py", line 191, in forward
x = self.cv1(x)
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/me/1TSSD/maliang/yolov5_obb/models/common.py", line 46, in forward
return self.act(self.bn(self.conv(x)))
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/me/anaconda3/envs/yolov5_obb/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
Just another failure caused by too large a batch size...
Reduce the batch size to 16 and it trains fine...
model         cropped_HBBmAP  cropped_OBBmAP (nms_iou=0.9)  OBBmAP (nms_iou=0.4)
exp5_epoch18  0.853           84.52%
exp7_epoch50  0.864           85.40%                        81.85% ???
exp8_epoch32  0.861           84.77%
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| cropped_OBBmAP | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| exp5_epoch18 | 99.60% | 99.56% | 99.51% | 99.42% | 99.33% | 98.98% | 97.97% | 91.99% | 56.77% | 2.07% | 84.52% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| exp7_epoch50 | 99.58% | 99.58% | 99.53% | 99.49% | 99.40% | 99.05% | 98.39% | 92.67% | 63.00% | 3.30% | 85.40% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| exp8_epoch32 | 99.50% | 99.47% | 99.44% | 99.36% | 99.24% | 98.92% | 98.12% | 92.14% | 58.95% | 2.62% | 84.77% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
(yolov5_obb) me@node2:~/2TSSD/ws/yolov5_obb$
python detect.py --img-size 4000 6016 --view-img --device=0 --weights runs/train/exp7/weights/epoch50.pt --save-json --source private170
large --img-size
Is the input sliced or resized in yolov5 6.0 inference?
As with YOLO since v1, it is dense prediction + NMS:
the image size doesn't affect inference results much,
but the pixel size of objects affects them a lot.
The --img-size option is the final network input size.
I hacked the code to resize images so the vehicle width is about 28 pixels, then pad them to img_size.
TODO: split private170 into different img-size sets to improve inference speed.
yolov5 best-model selection metric:
fitness = 0.1 * mAP@0.5 + 0.9 * mAP@0.5:0.95
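This matches the fitness function in yolov5's utils/metrics.py (reproduced from memory; treat as a sketch):
    import numpy as np

    def fitness(x):
        # x: array of shape (n, 4) with columns [P, R, mAP@0.5, mAP@0.5:0.95]
        w = [0.0, 0.0, 0.1, 0.9]  # only the two mAP terms are weighted
        return (x[:, :4] * w).sum(1)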
def load_mosaic(self, index):
    # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic

def load_mosaic9(self, index):
    # YOLOv5 9-mosaic loader. Loads 1 image + 8 random images into a 9-image mosaic

def extract_boxes(path='../datasets/coco128'):  # from utils.datasets import *; extract_boxes()
    # Convert detection dataset into classification dataset, with one directory per class
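A minimal sketch of the 4-mosaic idea (simplified: the real load_mosaic places crops and remaps the labels instead of resizing whole images):
    import random
    import cv2
    import numpy as np

    def mosaic4_sketch(imgs, s=640):
        # imgs: list of 4 HxWx3 uint8 images; returns a 2s x 2s mosaic canvas
        canvas = np.full((2 * s, 2 * s, 3), 114, np.uint8)  # gray fill, yolov5-style
        xc = random.randint(s // 2, 3 * s // 2)             # random mosaic center
        yc = random.randint(s // 2, 3 * s // 2)
        quads = [(0, 0, xc, yc), (xc, 0, 2 * s, yc),
                 (0, yc, xc, 2 * s), (xc, yc, 2 * s, 2 * s)]
        for img, (x1, y1, x2, y2) in zip(imgs, quads):
            canvas[y1:y2, x1:x2] = cv2.resize(img, (x2 - x1, y2 - y1))
        return canvas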
python val.py --batch-size 32 --imgsz 768 --device=0 --data "data/eagle003.yaml" --weights runs/train/exp7/weights/best.pt
python val.py --batch-size 32 --imgsz 768 --device=0 --data "data/eagle003.yaml" --weights runs/train/exp7/weights/epoch99.pt
evaluate OBB mAP with yolov5_obb/val.py
python val.py --batch-size 32 --imgsz 768 --device=0 --data "data/eagle003.yaml" --weights runs/train/exp7/weights/epoch50.pt --save-json --exist-ok --name exp29 --iou-thres 0.6 --conf-thres=0.01
runs/val/exp29/epoch50_obb_predictions.json
python tools/TestJson2VocClassTxt.py --json_path runs/val/exp29/epoch50_obb_predictions.json --save_path runs/val/exp29/epoch50_obb_det
ls data/eagle/0011_private170/labelTxt | sed 's/\.txt//g' > data/eagle/0011_private170/list.txt
python DOTA_devkit/dota_evaluation_task1.py --detpath 'runs/val/exp29/epoch50_obb_det/Task1_{:s}.txt' --annopath 'data/eagle/0011_private170/labelTxt/{:s}.txt' --imagesetfile data/eagle/0011_private170/list.txt
No luck yet...
It's because DOTA_devkit skips labels with difficult==1, and somehow all my gt labels have difficult==1.
Set all difficult==0 and it works!
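A quick throwaway script to reset the flag across a labelTxt folder (the difficult flag is the last field of each 10-field DOTA line; header lines like "imagesource:" are left alone):
    import glob

    for path in glob.glob('data/eagle/0011_private170/labelTxt/*.txt'):
        lines = open(path).read().splitlines()
        fixed = []
        for ln in lines:
            parts = ln.split()
            if len(parts) >= 10:      # x1..y4 category difficult
                parts[-1] = '0'       # force difficult = 0
            fixed.append(' '.join(parts))
        open(path, 'w').write('\n'.join(fixed) + '\n')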
(yolov5_obb) me@node2:~/2TSSD/ws/yolov5_obb$
python DOTA_devkit/dota_evaluation_task1.py --detpath 'runs/val/exp29/epoch50_obb_det/Task1_{:s}.txt' --annopath 'data/eagle/0011_private170/labelTxt/{:s}.txt' --imagesetfile data/eagle/0011_private170/list.txt
mAP@0.5:  90.91%
mAP@0.55: 90.91%
mAP@0.6:  90.91%
mAP@0.65: 90.91%
mAP@0.7:  90.91%
mAP@0.75: 90.90%
mAP@0.8:  90.84%
mAP@0.85: 89.24%
mAP@0.9:  62.31%
mAP@0.95: 10.71%
mmAP: 79.85%
Why so low at low IoU thresholds,
yet relatively high at high IoU thresholds?
mAP50 < 99% suggests a recall issue.
Object-Detection-Metrics uses every-point interpolation to calculate mAP,
while DOTA_devkit/dota_evaluation_task1.py uses 11-point interpolation by default.
From the Object-Detection-Metrics README: "Our default implementation is the same as VOC PASCAL: every point interpolation. If you want to use the 11-point interpolation,
change the functions that use the argument method=MethodAveragePrecision.EveryPointInterpolation to
method=MethodAveragePrecision.ElevenPointInterpolation."
def voc_ap(rec, prec, use_07_metric=False):
    """ap = voc_ap(rec, prec, [use_07_metric])
    Compute VOC AP given precision and recall.
    If use_07_metric is true, uses the
    VOC 07 11-point method (default: False).
    """
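The body, as in the standard VOC/DOTA_devkit implementation (reproduced from memory, so treat as a sketch):
    import numpy as np

    def voc_ap(rec, prec, use_07_metric=False):
        if use_07_metric:
            # 11-point metric: average max precision at recall 0.0, 0.1, ..., 1.0
            ap = 0.0
            for t in np.arange(0.0, 1.1, 0.1):
                p = 0.0 if np.sum(rec >= t) == 0 else np.max(prec[rec >= t])
                ap += p / 11.0
        else:
            # every-point metric: exact area under the precision envelope
            mrec = np.concatenate(([0.0], rec, [1.0]))
            mpre = np.concatenate(([0.0], prec, [0.0]))
            for i in range(mpre.size - 1, 0, -1):
                mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])   # monotone envelope
            i = np.where(mrec[1:] != mrec[:-1])[0]               # recall change points
            ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
        return ap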
Re-read the definition of 11-point interpolated mAP:
if the detector cannot reach 100% recall at any confidence threshold, the 11-point interpolated mAP is <= 10/11 (90.909%),
which explains the 90.91% cap above. So researchers switched to the Pascal VOC 2012 metric instead of the 2007 metric once mAP goes above 90%.
https://jonathan-hui.medium.com/map-mean-average-precision-for-object-detection-45c121a31173
According to the original researchers, the intention of using 11 interpolated points in calculating AP was:
    "The intention in interpolating the precision/recall curve in this way is to reduce the impact of
    the 'wiggles' in the precision/recall curve, caused by small variations in the ranking of examples."
However, this interpolated method is an approximation which suffers two issues: it is less precise, and
it loses the ability to measure the difference between methods with low AP. Therefore, a different AP
calculation was adopted after 2008 for PASCAL VOC.
For later Pascal VOC competitions, VOC2010-2012 samples the curve at all unique recall values (r₁, r₂, ...),
wherever the maximum precision value drops. With this change, we measure the exact area under
the precision-recall curve after the zigzags are removed.
No approximation or interpolation is needed: instead of sampling 11 points, we sample p(rᵢ)
whenever it drops and compute AP as the sum of the rectangular blocks.
This definition is called the Area Under Curve (AUC). Because the
interpolated points do not cover where the precision drops, the two methods diverge.
COCO mAP:
Recent research papers tend to report results on the COCO dataset only.
A 101-point interpolated AP definition is used in the calculation.
In the COCO context, there is no difference between AP and mAP:
"AP is averaged over all categories. Traditionally, this is called 'mean average precision' (mAP).
We make no distinction between AP and mAP (and likewise AR and mAR) and assume the difference is clear from context."
https://github.com/rafaelpadilla/Object-Detection-Metrics
https://github.com/rafaelpadilla/review_object_detection_metrics
updated with all COCO metrics, new formats and a GUI
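A sketch of the 101-point interpolation (simplified from pycocotools' accumulate step; variable names are mine):
    import numpy as np

    def coco_ap_sketch(rec, prec):
        # rec: non-decreasing recall curve; prec: matching precision values
        rec_thrs = np.linspace(0.0, 1.0, 101)               # 101 recall thresholds
        mpre = np.maximum.accumulate(prec[::-1])[::-1]      # precision envelope
        inds = np.searchsorted(rec, rec_thrs, side='left')  # first point reaching each threshold
        q = np.zeros(101)
        ok = inds < len(mpre)
        q[ok] = mpre[inds[ok]]                              # stays 0 where recall is never reached
        return q.mean()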
with use_07_metric=False (--iou-thres 0.4 (default))
mAP@0.5:  99.57%
mAP@0.55: 99.57%
mAP@0.6:  99.47%
mAP@0.65: 99.37%
mAP@0.7:  99.05%
mAP@0.75: 98.52%
mAP@0.8:  97.49%
mAP@0.85: 90.55%
mAP@0.9:  60.24%
mAP@0.95: 3.22%
mmAP:84.71% on sliced private170 (model exp7 epoch50 HBBmAP 86.4%)
test on different nms iou threshold:
(yolov5_obb) me@node2:~/2TSSD/ws/yolov5_obb$
python val.py --batch-size 32 --imgsz 768 --device=0 --data "data/eagle003.yaml" --weights runs/train/exp7/weights/epoch50.pt --save-json --exist-ok --name exp29 --conf-thres 0.01 --iou-thres 0.x
cropped_OBBmAP (yolov5 6.0 obb: exp7/weights/epoch50.pt)
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| exp7_epoch50 | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.01 | 99.54% | 99.54% | 99.44% | 99.34% | 99.02% | 98.52% | 97.49% | 90.55% | 60.24% | 3.22% | 84.69% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.1 | 99.54% | 99.54% | 99.44% | 99.34% | 99.02% | 98.52% | 97.49% | 90.55% | 60.24% | 3.22% | 84.69% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.2 | 99.54% | 99.54% | 99.44% | 99.34% | 99.02% | 98.52% | 97.49% | 90.55% | 60.24% | 3.22% | 84.69% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.3 | 99.54% | 99.54% | 99.44% | 99.34% | 99.02% | 98.52% | 97.49% | 90.55% | 60.24% | 3.22% | 84.69% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.4 | 99.57% | 99.57% | 99.47% | 99.37% | 99.05% | 98.52% | 97.49% | 90.55% | 60.24% | 3.22% | 84.71% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.5 | 99.57% | 99.57% | 99.51% | 99.40% | 99.08% | 98.58% | 97.52% | 90.58% | 60.24% | 3.22% | 84.73% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.6 | 99.67% | 99.67% | 99.57% | 99.50% | 99.24% | 98.74% | 97.65% | 90.67% | 60.26% | 3.23% | 84.82% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.7 | 99.70% | 99.70% | 99.63% | 99.59% | 99.40% | 98.92% | 97.86% | 90.73% | 60.30% | 3.22% | 84.91% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.8 | 99.66% | 99.66% | 99.61% | 99.58% | 99.45% | 99.07% | 98.23% | 91.38% | 60.47% | 3.23% | 85.03% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.9 | 99.58% | 99.58% | 99.53% | 99.49% | 99.40% | 99.05% | 98.39% | 92.67% | 63.00% | 3.30% | 85.40% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.91 | 99.55% | 99.55% | 99.49% | 99.46% | 99.35% | 99.03% | 98.33% | 92.85% | 63.36% | 3.37% | 85.43% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.92 | 99.49% | 99.49% | 99.43% | 99.40% | 99.29% | 98.95% | 98.30% | 92.80% | 63.95% | 3.46% | 85.46% | BEST
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.93 | 99.35% | 99.35% | 99.29% | 99.25% | 99.16% | 98.84% | 98.14% | 92.69% | 64.09% | 3.50% | 85.37% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.94 | 99.03% | 99.03% | 98.97% | 98.93% | 98.84% | 98.55% | 97.81% | 92.62% | 64.41% | 3.60% | 85.18% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.95 | 98.45% | 98.45% | 98.40% | 98.36% | 98.26% | 97.97% | 97.29% | 92.24% | 64.43% | 3.78% | 84.76% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.96 | 95.70% | 95.70% | 95.65% | 95.62% | 95.52% | 95.27% | 94.68% | 90.07% | 63.47% | 4.09% | 82.58% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.97 | 83.59% | 83.59% | 83.56% | 83.54% | 83.47% | 83.30% | 82.87% | 79.26% | 57.65% | 4.25% | 72.51% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.98 | 62.96% | 62.96% | 62.95% | 62.93% | 62.89% | 62.79% | 62.52% | 60.14% | 45.22% | 3.80% | 54.92% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| nms_iou_0.99 | 43.30% | 43.30% | 43.29% | 43.28% | 43.26% | 43.21% | 43.08% | 41.68% | 32.16% | 3.25% | 37.98% |
+-----------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
Manual inspection of the yolov5_6_exp7_epoch50 results:
Angle error is within about 5 degrees; with the more accurate orientation the OBBs fit noticeably tighter overall, which helps raise IoU for large vehicles.
False positives are much reduced, but when two cars drive side by side separated by one lane, an extra false detection often appears between them, with a length similar to a big_truck.
Missed big_truck detections are reduced, and the boxes no longer drift far off the vehicle or cover only half the body as with efficientdet; still, they are not as tight as the car results, with some remaining offset and gaps.
Overall OBB tightness is clearly better than efficientdet_073, but there are also quite a few cases where the OBB is too small and misses part of the vehicle edge.
However, the OBB fitting error for big_truck is clearly larger than for cars; the number and variety of large vehicles in the training samples still needs to grow.
yolov5_6_obb gives more balanced detection quality across vehicle types, and the widths show no systematic bias toward too wide,
so uniformly shrinking the width doesn't help.
python pascalvoc.py -gt F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 -det F:\yolov5_6.0_exp7_epoch50_obb -detformat obb_json -gtformat obb_json
Compare with efficientdet_073 on multi-class APs:
                                            multi-class mAP@0.5:0.95   single-class mAP@0.5:0.95
yolov5_6_exp7_epoch50                       80.78%                     81.85%
yolov5_6_exp7_epoch50 (w*=0.98)             80.23%                     81.56%
yolov5_6_exp7_epoch50 (w*=1.02)             80.57%                     81.49%
model-073-036epoch                          76.94%                     81.89%
model-073-036epoch (w*=0.938 if ratio<3)    78.28%
model-073-036epoch (w*=0.938 if ratio>3)    78.76%
model-073-036epoch (w*=0.938 if ratio<4)    79.64%
model-073-036epoch (w*=0.938 if ratio<5)    80.17%
model-073-036epoch (w*=0.938 if ratio<6)    80.19%                     84.47%
model-073-036epoch (w*=0.938 if ratio<7)    80.14%
model-073-036epoch (w*=0.938 if ratio<8)    80.14%
model-073-036epoch (w*=0.938 if ratio<9)    80.14%
model-073-036epoch (w*=0.938)               80.14%                     84.46%
todo: more detail check
The w *= 0.923 trick doesn't work on yolov5_6.0_obb trained on the dataset@2021.
OBBmAP results don't match between Object-Detection-Metrics and DOTA_devkit/dota_evaluation_task1.py;
check more details.
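For reference, the width trick is just post-scaling the predicted OBB short side; a hypothetical sketch (assuming "ratio" above means the box aspect ratio; the actual post-processing script isn't shown in these notes):
    def scale_obb_width(obb, factor=0.938, max_ratio=6.0):
        # obb: [cx, cy, w, h, angle, conf, cls]; "width" = the shorter side,
        # "ratio" = long side / short side; only shrink boxes below max_ratio
        cx, cy, w, h = obb[0], obb[1], obb[2], obb[3]
        long_side, short_side = max(w, h), min(w, h)
        if long_side / max(short_side, 1e-6) < max_ratio:
            if w <= h:
                w *= factor
            else:
                h *= factor
        return [cx, cy, w, h] + list(obb[4:])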
(pytorch2022) F:\ws\Object-Detection-Metrics>
python pascalvoc.py -gt F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 -det F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\_archived_det_2022_a\private170_det(yolov5_6.0_exp7_epoch50) -detformat obb_json -gtformat obb_json
python pascalvoc.py -gt F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 -det F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\_archived_det_2022_a\private170_det(model-073-036epoch) -detformat obb_json -gtformat obb_json
yolov5_6_exp7_epoch50 efficientdet_073-036epoch efficientdet_073-036epoch, w*=0.938
AP TP FP GT AP TP FP GT AP TP FP GT
car: 99.69% 2547 16 2555 car: 99.96% 2554 22 2555 car: 99.96% 2554 22 2555
bus: 100.00% 69 0 69 bus: 101.45% 70 0 69 bus: 101.45% 70 0 69
big_truck: 98.27% 170 0 173 big_truck: 96.08% 167 9 173 big_truck: 96.82% 168 8 173
truck: 100.00% 132 1 132 truck: 100.76% 133 1 132 truck: 100.76% 133 1 132
======================================== 50% ======================================== 50% ======================================== 50%
car: 99.69% 2547 16 2555 car: 99.96% 2554 22 2555 car: 99.96% 2554 22 2555
bus: 100.00% 69 0 69 bus: 101.45% 70 0 69 bus: 101.45% 70 0 69
big_truck: 98.27% 170 0 173 big_truck: 95.46% 166 10 173 big_truck: 95.46% 166 10 173
truck: 100.00% 132 1 132 truck: 99.72% 132 2 132 truck: 100.76% 133 1 132
======================================== 55% ======================================== 55% ======================================== 55%
car: 99.69% 2547 16 2555 car: 99.96% 2554 22 2555 car: 99.96% 2554 22 2555
bus: 100.00% 69 0 69 bus: 101.45% 70 0 69 bus: 101.45% 70 0 69
big_truck: 98.27% 170 0 173 big_truck: 93.57% 164 12 173 big_truck: 95.46% 166 10 173
truck: 100.00% 132 1 132 truck: 99.72% 132 2 132 truck: 99.72% 132 2 132
======================================== 60% ======================================== 60% ======================================== 60%
car: 99.65% 2546 17 2555 car: 99.71% 2549 27 2555 car: 99.92% 2553 23 2555
bus: 100.00% 69 0 69 bus: 101.45% 70 0 69 bus: 101.45% 70 0 69
big_truck: 97.68% 169 1 173 big_truck: 90.97% 161 15 173 big_truck: 91.55% 162 14 173
truck: 100.00% 132 1 132 truck: 99.72% 132 2 132 truck: 99.72% 132 2 132
======================================== 65% ======================================== 65% ======================================== 65%
car: 99.61% 2545 18 2555 car: 99.49% 2544 32 2555 car: 99.61% 2547 29 2555
bus: 100.00% 69 0 69 bus: 99.79% 69 1 69 bus: 99.79% 69 1 69
big_truck: 97.02% 168 2 173 big_truck: 89.32% 159 17 173 big_truck: 90.44% 160 16 173
truck: 100.00% 132 1 132 truck: 98.65% 131 3 132 truck: 97.80% 130 4 132
======================================== 70% ======================================== 70% ======================================== 70%
car: 99.47% 2542 21 2555 car: 99.07% 2534 42 2555 car: 99.41% 2542 34 2555
bus: 100.00% 69 0 69 bus: 91.96% 65 5 69 bus: 97.74% 68 2 69
big_truck: 93.55% 163 7 173 big_truck: 84.14% 154 22 173 big_truck: 89.32% 159 17 173
truck: 100.00% 132 1 132 truck: 96.29% 129 5 132 truck: 96.29% 129 5 132
======================================== 75% ======================================== 75% ======================================== 75%
car: 98.55% 2523 40 2555 car: 97.64% 2506 70 2555 car: 98.68% 2528 48 2555
bus: 100.00% 69 0 69 bus: 85.55% 62 8 69 bus: 91.96% 65 5 69
big_truck: 85.73% 154 16 173 big_truck: 70.03% 138 38 173 big_truck: 82.51% 152 24 173
truck: 98.96% 131 2 132 truck: 93.34% 126 8 132 truck: 94.86% 128 6 132
======================================== 80% ======================================== 80% ======================================== 80%
car: 87.50% 2329 234 2555 car: 86.22% 2336 240 2555 car: 93.13% 2433 143 2555
bus: 78.18% 60 9 69 bus: 61.98% 52 18 69 bus: 76.43% 59 11 69
big_truck: 75.57% 140 30 173 big_truck: 46.81% 109 67 173 big_truck: 58.56% 123 53 173
truck: 86.67% 119 14 132 truck: 85.06% 118 16 132 truck: 84.53% 118 16 132
======================================== 85% ======================================== 85% ======================================== 85%
car: 37.64% 1427 1136 2555 car: 44.72% 1643 933 2555 car: 59.65% 1873 703 2555
bus: 39.76% 41 28 69 bus: 12.28% 24 46 69 bus: 32.04% 34 36 69
big_truck: 24.36% 71 99 173 big_truck: 13.10% 54 122 173 big_truck: 27.84% 84 92 173
truck: 34.07% 68 65 132 truck: 35.15% 72 62 132 truck: 42.94% 79 55 132
======================================== 90% ======================================== 90% ======================================== 90%
car: 0.49% 125 2438 2555 car: 2.60% 355 2221 2555 car: 3.10% 388 2188 2555
bus: 0.57% 4 65 69 bus: 0.27% 3 67 69 bus: 0.53% 4 66 69
big_truck: 1.09% 10 160 173 big_truck: 0.15% 5 171 173 big_truck: 1.47% 18 158 173
truck: 1.19% 10 123 132 truck: 2.67% 14 120 132 truck: 1.04% 11 123 132
======================================== 95% ======================================== 95% ======================================== 95%
80.78% 76.94% 80.14%
multi-class mAP@0.5:0.95
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| yolov5_6_exp7_epoch50 | 99.49% | 99.49% | 99.49% | 99.33% | 99.16% | 98.25% | 95.81% | 81.98% | 33.96% | 0.83% | 80.78% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| efficientdet_073-036epoch | 99.56% | 99.15% | 98.68% | 97.96% | 96.81% | 92.86% | 86.64% | 70.02% | 26.31% | 1.42% | 76.94% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| efficientdet_073-036epoch, w*=0.938 | 99.75% | 99.41% | 99.15% | 98.16% | 96.91% | 95.69% | 92.00% | 78.16% | 40.62% | 1.53% | 80.14% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7+efficientdet_073 | 100.53% | 100.53% | 100.53% | 100.51% | 100.15% | 99.72% | 96.66% | 81.45% | 34.48% | 0.77% | 81.53% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7+efficientdet_073_0.938 | 100.53% | 100.53% | 100.53% | 100.51% | 100.19% | 99.91% | 96.12% | 80.65% | 33.06% | 0.87% | 81.29% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7_angle+efficientdet_073_0.938 | 100.32% | 99.84% | 99.84% | 98.95% | 97.10% | 96.87% | 92.08% | 78.16% | 44.31% | 2.93% | 81.04% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
single-class mAP@0.5:0.95
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| yolov5_6_exp7_epoch50 | 99.62% | 99.62% | 99.62% | 99.56% | 99.48% | 99.19% | 97.96% | 86.46% | 36.49% | 0.48% | 81.85% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| efficientdet_073-036epoch | 99.83% | 99.75% | 99.67% | 99.30% | 98.96% | 98.11% | 95.90% | 83.46% | 41.54% | 2.34% | 81.89% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| efficientdet_073-036epoch, w*=0.938 | 99.86% | 99.79% | 99.75% | 99.54% | 99.07% | 98.76% | 97.63% | 90.68% | 56.66% | 2.83% | 84.46% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7,efficientdet_073 | 99.97% | 99.97% | 99.97% | 99.92% | 99.69% | 99.49% | 98.11% | 86.90% | 42.67% | 1.07% | 82.77% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7+efficientdet_073_0.938 | 99.97% | 99.97% | 99.97% | 99.92% | 99.83% | 99.59% | 97.93% | 85.79% | 41.48% | 1.16% | 82.56% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|yolov5_exp7_angle+efficientdet_073_0.938 | 99.92% | 99.82% | 99.82% | 99.60% | 99.11% | 98.75% | 97.37% | 89.42% | 57.14% | 4.75% | 84.57% |
+-----------------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
OBB mAPs: (best efficientdet result@20221030)
+-------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| model + w_scale | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% | 50-95% |
+-------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|model-073-036epoch, w*=0.938 | 99.86% | 99.79% | 99.75% | 99.54% | 99.07% | 98.76% | 97.60% | 90.65% | 56.74% | 2.83% | 84.46% | * +2.56, datasets@2021
+-------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|model-z0009-054epoch, w*=0.923 | 99.79% | 99.67% | 99.48% | 99.33% | 99.11% | 98.60% | 97.62% | 92.05% | 56.74% | 2.72% | 84.51% | ** +6.2, 5 new datasets including VAID_aabb
+-------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
|model-zb0041-015epoch,w*=0.938 | 99.66% | 99.66% | 99.66% | 99.48% | 99.22% | 98.86% | 97.72% | 91.97% | 56.84% | 2.77% | 84.58% | *** +4.1, datasets@2021, AdamW + OneCycleLR
+-------------------------------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
try to fuse results
efficientdet_073-036epoch, w*=0.938
yolov5_6_exp7_epoch50
F:\ws\CyTrafficEditor2\Source\CyTrafficEditor\CyTraffic\data\videos
python fuse_obb2.py private170_det(073_and_exp7) F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\_archived_det_2022_a\private170_det(model-073-036epoch) F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\_archived_det_2022_a\private170_det(yolov5_6.0_exp7_epoch50)
(pytorch2022) F:\ws\Object-Detection-Metrics>
python pascalvoc.py -gt F:\ws\efficientdet_pytorch_win64\_datasets\_test_sets\private170 -det F:\ws\CyTrafficEditor2\Source\CyTrafficEditor\CyTraffic\data\videos\private170_det(073_and_exp7) -detformat obb_json -gtformat obb_json
see results above
yolov5_exp7+efficientdet_073
yolov5_exp7+efficientdet_073_0.938
yolov5_exp7_angle+efficientdet_073_0.938
Not as much improvement as expected.
todo
implement fuse_obb.py without the help of gt results
fix AP > 100%
check more details
fuse_obb3.py:
union all OBBs, keeping a box from a later detector only when its IoU with already-kept boxes is < 0.3 (see the sketch after the commands below)
python fuse_obb3.py ~/_trained_models/relabel_dataset/ottawa_a_det_fused ~/_trained_models/relabel_dataset/ottawa_a_det_073/ ~/_trained_models/relabel_dataset/ottawa_a_det_092/ ~/_trained_models/relabel_dataset/ottawa_a_det_z0009/
python fuse_obb3.py ~/_trained_models/relabel_dataset/ottawa_b_det_fused_073 ~/_trained_models/relabel_dataset/ottawa_b_073_det/ ~/_trained_models/relabel_dataset/ottawa_b_092_det/ ~/_trained_models/relabel_dataset/ottawa_b_z0009_det/
python fuse_obb3.py ~/_trained_models/relabel_dataset/ottawa_b_det_fused_092 ~/_trained_models/relabel_dataset/ottawa_b_092_det/ ~/_trained_models/relabel_dataset/ottawa_b_z0009_det/ ~/_trained_models/relabel_dataset/ottawa_b_073_det/
python fuse_obb3.py ~/_trained_models/relabel_dataset/ottawa_b_det_fused_z0009 ~/_trained_models/relabel_dataset/ottawa_b_z0009_det/ ~/_trained_models/relabel_dataset/ottawa_b_073_det/ ~/_trained_models/relabel_dataset/ottawa_b_092_det/
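A minimal sketch of that union rule (assumed logic; fuse_obb3.py itself isn't shown in these notes):
    from shapely.geometry import Polygon

    def obb_iou(a, b):
        # a, b: lists of 4 (x, y) corner points of rotated boxes
        pa, pb = Polygon(a), Polygon(b)
        inter = pa.intersection(pb).area
        return inter / (pa.area + pb.area - inter + 1e-9)

    def fuse_union(primary, *secondaries, iou_thr=0.3):
        # keep all primary boxes; append a secondary box only if it overlaps
        # no kept box with IoU >= iou_thr
        kept = list(primary)
        for dets in secondaries:
            for box in dets:
                if all(obb_iou(box, k) < iou_thr for k in kept):
                    kept.append(box)
        return kept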
With this hack in /home/me/1TSSD/maliang/yolov5_obb/utils/datasets.py (resize so the most common vehicle width is ~25 px, then pad onto a fixed canvas):
    # attrs_json_path = path[0:path.rfind(".")] + ".video_attrs.json"
    # with open(attrs_json_path) as f:
    #     attrs = json.load(f)
    # scale_ratio = 25.0 / attrs["MostCommonVehicleWidthInPixels"]
    scale_ratio = 25.0 / 36.0  # hard-coded: the most common vehicle width here is 36 px
    img0 = cv2.resize(img0, (int(img0.shape[1] * scale_ratio), int(img0.shape[0] * scale_ratio)))
    # img00 = np.zeros((4000, 6016, 3), np.uint8)
    img00 = np.zeros((6016, 6016, 3), np.uint8)  # pad to a fixed 6016x6016 canvas
    img00[0:img0.shape[0], 0:img0.shape[1]] = img0
python detect.py --device 0 --weight "runs/yolov5m_finetune/weights/best.pt" --save-json --exist-ok --name exp131 --img-size 6000 6000 --source /home/me/_trained_models/relabel_dataset/ottawa_b
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp134 --img-size 6000 6000 --conf-thres 0.01 --iou-thres 0.5 --source ~/_trained_models/relabel_dataset/ottawa_b
python fuse_obb4.py ~/_trained_models/relabel_dataset/ottawa_b_fused4 ~/_trained_models/relabel_dataset/ottawa_b_yolov5_obb_2022_eagle_longyao_iou0.92/
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp140 --img-size 4000 8000 --conf-thres 0.1 --iou-thres 0.9 --source ~/_trained_models/relabel_dataset/linz_rare_samples_a
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp141 --img-size 4800 7200 --conf-thres 0.1 --iou-thres 0.9 --source ~/_trained_models/relabel_dataset/Christchurch_2021_5cm_dense
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp142 --img-size 4800 7200 --conf-thres 0.1 --iou-thres 0.9 --source ~/_trained_models/relabel_dataset/Napier_2017_2018_5cm_dense
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp143 --img-size 4800 7200 --conf-thres 0.1 --iou-thres 0.9 --source ~/_trained_models/relabel_dataset/Invercargill_2016_5cm
python detect.py --device 0 --weights "runs/train/exp7/weights/epoch50.pt" --save-json --exist-ok --name exp144 --img-size 6016 6016 --conf-thres 0.1 --iou-thres 0.9 --source ~/_trained_models/relabel_dataset/linz_trucks
weighted box fusion (WBF) ensemble (OBB)
Stochastic Weight Averaging (SWA) in training
https://github.com/ultralytics/yolov5
yolov5 github readme/doc/tutorial
bigger model?
exp9 yolov5x6
train from scratch? with COCO
exp10, exp11
https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data
Pretrained weights
--weights yolov5s.pt
randomly initialized --weights '' --cfg yolov5s.yaml
ClearML Logging and Automation
Weights & Biases Logging
Local Logging
Images per class. ≥ 1500 images per class recommended
Instances per class. ≥ 10000 instances (labeled objects) per class recommended
Background images.
Background images are images with no objects that are added to a dataset to reduce False Positives (FP).
We recommend about 0-10% background images to help reduce FPs (COCO has 1000 background images for reference,
1% of the total). No labels are required for background images.
Start from Pretrained weights.
Recommended for small to medium sized datasets (i.e. VOC, VisDrone, GlobalWheat).
Start from Scratch.
Recommended for large datasets (i.e. COCO, Objects365, OIv6).
In general, increasing augmentation hyperparameters will reduce and delay overfitting, allowing for longer trainings and higher final mAP.
https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results
http://karpathy.github.io/2019/04/25/recipe/
todo
Hyperparameter Evolution
https://github.com/ultralytics/yolov5/issues/607
Model Ensembling Tutorial
https://github.com/ultralytics/yolov5/issues/318
python detect.py --weights model1.pt model2.pt --augment
Typically I've seen the best result when merging output grids directly, (i.e. ensembling YOLOv5l and YOLOv5x),
rather than simply appending boxes from multiple models for NMS to sort out. This is not always possible however,
for example Ensembling an EfficientDet model with YOLOv5x, you can not merge grids, you must use NMS or WBF
(or Merge NMS) to get a final result.
Transfer Learning with Frozen Layers
https://github.com/ultralytics/yolov5/issues/1314
\\192.168.2.22\WD_4T_red_2021\02_remapped_videos\_dataset_updates\private170_with_vehicle_types
\\192.168.2.22\WD_4T_red_2021\02_remapped_videos\_dataset_updates\private170_old (no vehicle types)
\\192.168.2.22\WD_4T_red_2021\02_remapped_videos\_dataset_updates\private170_2(by zhaoqiang & wangxinyu)
(pytorch2022) F:\ws\CyTrafficEditor2\Source\CyTrafficEditor\CyTraffic\data\videos>
python compute_mIoU.py private170 private170_zqwxy
matched[private170]: [2936], mIoU: [92.4172%], mostly in the [84.87%, 98.85%] range, tail in [73.40%, 84.87%], peak at 94.58%
A few labels are inconsistent [mIoU==0%] (probably related to my later Photoshop edits of the images...)
python compute_mIoU.py private170_zqwxy private170
matched[private170_zqwxy]: [2929], mIoU: [92.6686%], mostly in the [84.78%, 98.93%] range, tail in [73.40%, 84.78%], peak at 94.58%
Human labeling error: IoU > 84.8%, with the mean around IoU 92.5%.
When repeated or multi-annotator labels disagree with IoU < 84.8%, send them back for re-labeling.
todo
implement resize and large image inference
SAHI
read issue updates of yolov5_obb github
label smoothing
train yolov5m6 obb with efficientdet?
expand the test set (during yolov5_obb training, test-set mAP lags far behind val-set mAP:
    93.4%@epoch99 vs 86.7%@epoch99)
change the test metric, stop using mAP@0.5:0.95?
    100% recall
    high IoU
filter the training set: drop training images irrelevant to the test set
image segmentation model?
    lane lines
    vehicle body
    shadows