Experiment directory: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64
Preparing the dataloaders ...
Loading dataset ActivityNet_val1_trainval into RAM ...
Finished loading dataset ActivityNet_val1_trainval into RAM, taking 895.3150341510773 s.
Loading dataset ActivityNet_val1_test into RAM ...
Finished loading dataset ActivityNet_val1_test into RAM, taking 222.9132363796234 s.
Loading dataset ActivityNet_val1_test into RAM ...
Finished loading dataset ActivityNet_val1_test into RAM, taking 167.39756417274475 s.
Training ...
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch0.pth ...
Done in 1.241s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch0.pth ...
Done in 2.485s
epoch : 0
loss : 0
learning_rate : 5e-05
n_samples : 0
n_steps : 0
ActivityNet_val1_test/t2v_metrics/R1: 0.04067520846044336
ActivityNet_val1_test/t2v_metrics/R5: 0.18303843807199513
ActivityNet_val1_test/t2v_metrics/R10: 0.24405125076266015
ActivityNet_val1_test/t2v_metrics/R50: 0.9558673988204189
ActivityNet_val1_test/t2v_metrics/MedR: 2484.0
ActivityNet_val1_test/t2v_metrics/MeanR: 2488.61043319097
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 0.12202562538133006
ActivityNet_val1_test/v2t_metrics/R1: 0.04067520846044336
ActivityNet_val1_test/v2t_metrics/R5: 0.1016880211511084
ActivityNet_val1_test/v2t_metrics/R10: 0.22371364653243847
ActivityNet_val1_test/v2t_metrics/R50: 0.9151921903599756
ActivityNet_val1_test/v2t_metrics/MedR: 2511.0
ActivityNet_val1_test/v2t_metrics/MeanR: 2492.5206426682935
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 0.09744600075376822
mnt_best : 0.12202562538133006
not_improved_count: 0
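
Note on the geometric_mean_R1-R5-R10 entries in these summaries: the reported value is consistent with the geometric mean of the R@1/R@5/R@10 numbers directly above it (for this epoch, (0.0406752 * 0.1830384 * 0.2440513)^(1/3) is about 0.1220256). A minimal sketch of that relationship, assuming the metric is computed directly from the three recall values (the helper name is illustrative, not taken from the HCQ code):

# Sketch: geometric_mean_R1-R5-R10 as the geometric mean of R@1, R@5, R@10.
# The inputs below are the epoch-0 t2v numbers from this log, which reproduce
# the logged 0.12202562538133006 up to float rounding.
def geometric_mean_recall(r1: float, r5: float, r10: float) -> float:
    return (r1 * r5 * r10) ** (1.0 / 3.0)

print(geometric_mean_recall(0.04067520846044336,
                            0.18303843807199513,
                            0.24405125076266015))  # ~0.1220256
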
Train Epoch: 1 [1/500 64/32000 (0%)] Loss: 8.45386 (QuantReg: 22.41818) QuantErr: 22.41818 batch_time=24.20000
Train Epoch: 1 [9/500 576/32000 (2%)] Loss: 7.71489 (QuantReg: 22.50525) QuantErr: 22.50525 batch_time=1.22628
Train Epoch: 1 [17/500 1088/32000 (3%)] Loss: 6.36542 (QuantReg: 22.57572) QuantErr: 22.57572 batch_time=0.46792
Train Epoch: 1 [25/500 1600/32000 (5%)] Loss: 5.15219 (QuantReg: 22.64177) QuantErr: 22.64177 batch_time=0.46463
Train Epoch: 1 [33/500 2112/32000 (7%)] Loss: 4.36890 (QuantReg: 22.63716) QuantErr: 22.63716 batch_time=0.43537
Train Epoch: 1 [41/500 2624/32000 (8%)] Loss: 3.75129 (QuantReg: 22.65233) QuantErr: 22.65233 batch_time=0.43662
Train Epoch: 1 [49/500 3136/32000 (10%)] Loss: 3.27377 (QuantReg: 22.71187) QuantErr: 22.71187 batch_time=0.47337
Train Epoch: 1 [57/500 3648/32000 (11%)] Loss: 2.73272 (QuantReg: 22.64702) QuantErr: 22.64702 batch_time=0.53851
Train Epoch: 1 [65/500 4160/32000 (13%)] Loss: 2.71565 (QuantReg: 22.67479) QuantErr: 22.67479 batch_time=0.44526
Train Epoch: 1 [73/500 4672/32000 (15%)] Loss: 2.92645 (QuantReg: 22.68732) QuantErr: 22.68732 batch_time=0.92036
Train Epoch: 1 [81/500 5184/32000 (16%)] Loss: 2.53191 (QuantReg: 22.67205) QuantErr: 22.67205 batch_time=0.45233
Train Epoch: 1 [89/500 5696/32000 (18%)] Loss: 2.14129 (QuantReg: 22.66655) QuantErr: 22.66655 batch_time=0.47819
Train Epoch: 1 [97/500 6208/32000 (19%)] Loss: 2.09651 (QuantReg: 22.66313) QuantErr: 22.66313 batch_time=0.45175
Train Epoch: 1 [105/500 6720/32000 (21%)] Loss: 1.80867 (QuantReg: 22.66559) QuantErr: 22.66559 batch_time=0.44984
Train Epoch: 1 [113/500 7232/32000 (23%)] Loss: 1.61312 (QuantReg: 22.69565) QuantErr: 22.69565 batch_time=0.50402
Train Epoch: 1 [121/500 7744/32000 (24%)] Loss: 1.39946 (QuantReg: 22.67565) QuantErr: 22.67565 batch_time=0.44197
Train Epoch: 1 [129/500 8256/32000 (26%)] Loss: 1.42428 (QuantReg: 22.72449) QuantErr: 22.72449 batch_time=0.44423
Train Epoch: 1 [137/500 8768/32000 (27%)] Loss: 1.44395 (QuantReg: 22.66346) QuantErr: 22.66346 batch_time=0.94788
Train Epoch: 1 [145/500 9280/32000 (29%)] Loss: 1.39153 (QuantReg: 22.69287) QuantErr: 22.69287 batch_time=0.46923
Train Epoch: 1 [153/500 9792/32000 (31%)] Loss: 1.29976 (QuantReg: 22.68519) QuantErr: 22.68519 batch_time=0.44727
Train Epoch: 1 [161/500 10304/32000 (32%)] Loss: 1.54687 (QuantReg: 22.71046) QuantErr: 22.71046 batch_time=0.44467
Train Epoch: 1 [169/500 10816/32000 (34%)] Loss: 1.57149 (QuantReg: 22.65053) QuantErr: 22.65053 batch_time=0.44196
Train Epoch: 1 [177/500 11328/32000 (35%)] Loss: 1.28318 (QuantReg: 22.65221) QuantErr: 22.65221 batch_time=0.49163
Train Epoch: 1 [185/500 11840/32000 (37%)] Loss: 1.19069 (QuantReg: 22.67959) QuantErr: 22.67959 batch_time=0.47984
Train Epoch: 1 [193/500 12352/32000 (39%)] Loss: 1.14025 (QuantReg: 22.65106) QuantErr: 22.65106 batch_time=0.47997
Train Epoch: 1 [201/500 12864/32000 (40%)] Loss: 1.38853 (QuantReg: 22.64795) QuantErr: 22.64795 batch_time=0.95619
Train Epoch: 1 [209/500 13376/32000 (42%)] Loss: 1.03770 (QuantReg: 22.66087) QuantErr: 22.66087 batch_time=0.44832
Train Epoch: 1 [217/500 13888/32000 (43%)] Loss: 0.95553 (QuantReg: 22.65033) QuantErr: 22.65033 batch_time=0.44597
Train Epoch: 1 [225/500 14400/32000 (45%)] Loss: 0.91056 (QuantReg: 22.69502) QuantErr: 22.69502 batch_time=0.44294
Train Epoch: 1 [233/500 14912/32000 (47%)] Loss: 1.27543 (QuantReg: 22.67630) QuantErr: 22.67630 batch_time=0.44910
Train Epoch: 1 [241/500 15424/32000 (48%)] Loss: 0.90810 (QuantReg: 22.62929) QuantErr: 22.62929 batch_time=0.51267
Train Epoch: 1 [249/500 15936/32000 (50%)] Loss: 1.01608 (QuantReg: 22.67409) QuantErr: 22.67409 batch_time=0.47714
Train Epoch: 1 [257/500 16448/32000 (51%)] Loss: 1.08295 (QuantReg: 22.68485) QuantErr: 22.68485 batch_time=0.47619
Train Epoch: 1 [265/500 16960/32000 (53%)] Loss: 1.16952 (QuantReg: 22.65524) QuantErr: 22.65524 batch_time=0.94579
Train Epoch: 1 [273/500 17472/32000 (55%)] Loss: 1.09255 (QuantReg: 22.66187) QuantErr: 22.66187 batch_time=0.44680
Train Epoch: 1 [281/500 17984/32000 (56%)] Loss: 0.87580 (QuantReg: 22.66628) QuantErr: 22.66628 batch_time=0.44653
Train Epoch: 1 [289/500 18496/32000 (58%)] Loss: 1.10960 (QuantReg: 22.61569) QuantErr: 22.61569 batch_time=0.44757
Train Epoch: 1 [297/500 19008/32000 (59%)] Loss: 0.82078 (QuantReg: 22.66670) QuantErr: 22.66670 batch_time=0.44624
Train Epoch: 1 [305/500 19520/32000 (61%)] Loss: 1.00503 (QuantReg: 22.67703) QuantErr: 22.67703 batch_time=0.47807
Train Epoch: 1 [313/500 20032/32000 (63%)] Loss: 0.78355 (QuantReg: 22.67794) QuantErr: 22.67794 batch_time=0.44761
Train Epoch: 1 [321/500 20544/32000 (64%)] Loss: 0.90615 (QuantReg: 22.64537) QuantErr: 22.64537 batch_time=0.44447
Train Epoch: 1 [329/500 21056/32000 (66%)] Loss: 0.75516 (QuantReg: 22.65967) QuantErr: 22.65967 batch_time=0.97427
Train Epoch: 1 [337/500 21568/32000 (67%)] Loss: 0.70075 (QuantReg: 22.64360) QuantErr: 22.64360 batch_time=0.44862
Train Epoch: 1 [345/500 22080/32000 (69%)] Loss: 0.71222 (QuantReg: 22.65818) QuantErr: 22.65818 batch_time=0.46556
Train Epoch: 1 [353/500 22592/32000 (71%)] Loss: 0.70609 (QuantReg: 22.63818) QuantErr: 22.63818 batch_time=0.44487
Train Epoch: 1 [361/500 23104/32000 (72%)] Loss: 0.71493 (QuantReg: 22.71354) QuantErr: 22.71354 batch_time=0.44719
Train Epoch: 1 [369/500 23616/32000 (74%)] Loss: 0.61644 (QuantReg: 22.67355) QuantErr: 22.67355 batch_time=0.47293
Train Epoch: 1 [377/500 24128/32000 (75%)] Loss: 0.58879 (QuantReg: 22.69790) QuantErr: 22.69790 batch_time=0.46632
Train Epoch: 1 [385/500 24640/32000 (77%)] Loss: 0.91056 (QuantReg: 22.70962) QuantErr: 22.70962 batch_time=0.45226
Train Epoch: 1 [393/500 25152/32000 (79%)] Loss: 0.77722 (QuantReg: 22.65633) QuantErr: 22.65633 batch_time=0.96340
Train Epoch: 1 [401/500 25664/32000 (80%)] Loss: 0.70446 (QuantReg: 22.67061) QuantErr: 22.67061 batch_time=0.47204
Train Epoch: 1 [409/500 26176/32000 (82%)] Loss: 0.82212 (QuantReg: 22.64172) QuantErr: 22.64172 batch_time=0.47576
Train Epoch: 1 [417/500 26688/32000 (83%)] Loss: 0.59018 (QuantReg: 22.66025) QuantErr: 22.66025 batch_time=0.47621
Train Epoch: 1 [425/500 27200/32000 (85%)] Loss: 0.68781 (QuantReg: 22.69551) QuantErr: 22.69551 batch_time=0.45029
Train Epoch: 1 [433/500 27712/32000 (87%)] Loss: 0.93166 (QuantReg: 22.64676) QuantErr: 22.64676 batch_time=0.48386
Train Epoch: 1 [441/500 28224/32000 (88%)] Loss: 0.44907 (QuantReg: 22.67698) QuantErr: 22.67698 batch_time=0.44816
Train Epoch: 1 [449/500 28736/32000 (90%)] Loss: 0.82014 (QuantReg: 22.67235) QuantErr: 22.67235 batch_time=0.44734
Train Epoch: 1 [457/500 29248/32000 (91%)] Loss: 0.86227 (QuantReg: 22.69470) QuantErr: 22.69470 batch_time=0.92520
Train Epoch: 1 [465/500 29760/32000 (93%)] Loss: 0.49908 (QuantReg: 22.64480) QuantErr: 22.64480 batch_time=0.44643
Train Epoch: 1 [473/500 30272/32000 (95%)] Loss: 0.54844 (QuantReg: 22.69093) QuantErr: 22.69093 batch_time=0.44407
Train Epoch: 1 [481/500 30784/32000 (96%)] Loss: 0.67728 (QuantReg: 22.64714) QuantErr: 22.64714 batch_time=0.44510
Train Epoch: 1 [489/500 31296/32000 (98%)] Loss: 0.59014 (QuantReg: 22.68123) QuantErr: 22.68123 batch_time=0.44901
Train Epoch: 1 [497/500 31808/32000 (99%)] Loss: 0.64536 (QuantReg: 22.67393) QuantErr: 22.67393 batch_time=0.48407
Train Epoch: 1 codebook_update_time=1.93552
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch1.pth ...
Done in 3.704s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch1.pth ...
Done in 7.288s
epoch : 1
loss : 1.6141951302886008
quant_reg : 22.665719146728517
quant_err : 22.665719146728517
learning_rate : 5e-05
n_samples : 32000
n_steps : 500
ActivityNet_val1_test/t2v_metrics/R1: 10.941631075859263
ActivityNet_val1_test/t2v_metrics/R5: 32.255440309131586
ActivityNet_val1_test/t2v_metrics/R10: 48.44417327638804
ActivityNet_val1_test/t2v_metrics/R50: 84.9908480780964
ActivityNet_val1_test/t2v_metrics/MedR: 11.0
ActivityNet_val1_test/t2v_metrics/MeanR: 35.75330486068741
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 25.761760020175295
ActivityNet_val1_test/v2t_metrics/R1: 11.511083994305471
ActivityNet_val1_test/v2t_metrics/R5: 34.28920073215375
ActivityNet_val1_test/v2t_metrics/R10: 49.928818385194226
ActivityNet_val1_test/v2t_metrics/R50: 86.98393329265812
ActivityNet_val1_test/v2t_metrics/MedR: 11.0
ActivityNet_val1_test/v2t_metrics/MeanR: 32.74283099450885
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 27.011059796931953
mnt_best : 25.761760020175295
not_improved_count: 0
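
Note on the Train Epoch progress lines above: with batch size 64 and 500 steps per epoch (32000 samples), the bracketed counters satisfy samples = step * 64, and the percentage is samples / 32000 rounded to the nearest percent (e.g., step 9 gives 576/32000, about 2%). A small sketch of that bookkeeping, assuming this reading of the format (the actual HCQ logger may format the line differently):

# Sketch of the counters in "Train Epoch: N [step/500 samples/32000 (pct%)]".
# Batch size and steps per epoch are read off this log; names are illustrative.
BATCH_SIZE = 64
STEPS_PER_EPOCH = 500
SAMPLES_PER_EPOCH = BATCH_SIZE * STEPS_PER_EPOCH  # 32000

def progress_prefix(epoch: int, step: int) -> str:
    samples = step * BATCH_SIZE
    pct = 100.0 * samples / SAMPLES_PER_EPOCH
    return (f"Train Epoch: {epoch} [{step}/{STEPS_PER_EPOCH} "
            f"{samples}/{SAMPLES_PER_EPOCH} ({pct:.0f}%)]")

print(progress_prefix(2, 9))  # Train Epoch: 2 [9/500 576/32000 (2%)]
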
Train Epoch: 2 [1/500 64/32000 (0%)] Loss: 0.72561 (QuantReg: 11.94269) QuantErr: 11.94269 batch_time=24.48405
Train Epoch: 2 [9/500 576/32000 (2%)] Loss: 0.57737 (QuantReg: 11.84015) QuantErr: 11.84015 batch_time=0.43810
Train Epoch: 2 [17/500 1088/32000 (3%)] Loss: 0.71258 (QuantReg: 11.41277) QuantErr: 11.41277 batch_time=0.43407
Train Epoch: 2 [25/500 1600/32000 (5%)] Loss: 0.85016 (QuantReg: 11.71424) QuantErr: 11.71424 batch_time=0.43614
Train Epoch: 2 [33/500 2112/32000 (7%)] Loss: 0.59503 (QuantReg: 11.42082) QuantErr: 11.42082 batch_time=0.43477
Train Epoch: 2 [41/500 2624/32000 (8%)] Loss: 0.41292 (QuantReg: 11.65594) QuantErr: 11.65594 batch_time=0.43399
Train Epoch: 2 [49/500 3136/32000 (10%)] Loss: 0.41943 (QuantReg: 11.49574) QuantErr: 11.49574 batch_time=0.43708
Train Epoch: 2 [57/500 3648/32000 (11%)] Loss: 0.42754 (QuantReg: 11.30480) QuantErr: 11.30480 batch_time=0.43817
Train Epoch: 2 [65/500 4160/32000 (13%)] Loss: 0.52866 (QuantReg: 11.49654) QuantErr: 11.49654 batch_time=0.94863
Train Epoch: 2 [73/500 4672/32000 (15%)] Loss: 0.42514 (QuantReg: 11.79908) QuantErr: 11.79908 batch_time=0.46029
Train Epoch: 2 [81/500 5184/32000 (16%)] Loss: 0.62165 (QuantReg: 11.44191) QuantErr: 11.44191 batch_time=0.43721
Train Epoch: 2 [89/500 5696/32000 (18%)] Loss: 0.47641 (QuantReg: 11.70312) QuantErr: 11.70312 batch_time=0.43991
Train Epoch: 2 [97/500 6208/32000 (19%)] Loss: 0.54800 (QuantReg: 11.81351) QuantErr: 11.81351 batch_time=0.43473
Train Epoch: 2 [105/500 6720/32000 (21%)] Loss: 0.35568 (QuantReg: 12.53562) QuantErr: 12.53562 batch_time=0.44060
Train Epoch: 2 [113/500 7232/32000 (23%)] Loss: 0.52862 (QuantReg: 12.49627) QuantErr: 12.49627 batch_time=0.44722
Train Epoch: 2 [121/500 7744/32000 (24%)] Loss: 0.49737 (QuantReg: 12.26356) QuantErr: 12.26356 batch_time=0.43631
Train Epoch: 2 [129/500 8256/32000 (26%)] Loss: 0.48285 (QuantReg: 12.19302) QuantErr: 12.19302 batch_time=0.97620
Train Epoch: 2 [137/500 8768/32000 (27%)] Loss: 0.36473 (QuantReg: 11.95347) QuantErr: 11.95347 batch_time=0.44891
Train Epoch: 2 [145/500 9280/32000 (29%)] Loss: 0.70330 (QuantReg: 11.76206) QuantErr: 11.76206 batch_time=0.44793
Train Epoch: 2 [153/500 9792/32000 (31%)] Loss: 0.56199 (QuantReg: 12.14221) QuantErr: 12.14221 batch_time=0.44647
Train Epoch: 2 [161/500 10304/32000 (32%)] Loss: 0.62803 (QuantReg: 11.92181) QuantErr: 11.92181 batch_time=0.44875
Train Epoch: 2 [169/500 10816/32000 (34%)] Loss: 0.49296 (QuantReg: 12.34320) QuantErr: 12.34320 batch_time=0.45128
Train Epoch: 2 [177/500 11328/32000 (35%)] Loss: 0.44256 (QuantReg: 11.74029) QuantErr: 11.74029 batch_time=0.44882
Train Epoch: 2 [185/500 11840/32000 (37%)] Loss: 0.47053 (QuantReg: 12.04703) QuantErr: 12.04703 batch_time=0.47542
Train Epoch: 2 [193/500 12352/32000 (39%)] Loss: 0.48854 (QuantReg: 12.01397) QuantErr: 12.01397 batch_time=1.05818
Train Epoch: 2 [201/500 12864/32000 (40%)] Loss: 0.29749 (QuantReg: 12.51154) QuantErr: 12.51154 batch_time=0.44164
Train Epoch: 2 [209/500 13376/32000 (42%)] Loss: 0.58331 (QuantReg: 12.26067) QuantErr: 12.26067 batch_time=0.44198
Train Epoch: 2 [217/500 13888/32000 (43%)] Loss: 0.55483 (QuantReg: 12.12471) QuantErr: 12.12471 batch_time=0.43961
Train Epoch: 2 [225/500 14400/32000 (45%)] Loss: 0.47792 (QuantReg: 12.30828) QuantErr: 12.30828 batch_time=0.43174
Train Epoch: 2 [233/500 14912/32000 (47%)] Loss: 0.50005 (QuantReg: 12.59545) QuantErr: 12.59545 batch_time=0.43438
Train Epoch: 2 [241/500 15424/32000 (48%)] Loss: 0.47472 (QuantReg: 12.36238) QuantErr: 12.36238 batch_time=0.43965
Train Epoch: 2 [249/500 15936/32000 (50%)] Loss: 0.37577 (QuantReg: 12.11763) QuantErr: 12.11763 batch_time=0.45526
Train Epoch: 2 [257/500 16448/32000 (51%)] Loss: 0.36974 (QuantReg: 12.57280) QuantErr: 12.57280 batch_time=1.03322
Train Epoch: 2 [265/500 16960/32000 (53%)] Loss: 0.45639 (QuantReg: 12.26483) QuantErr: 12.26483 batch_time=0.43940
Train Epoch: 2 [273/500 17472/32000 (55%)] Loss: 0.39022 (QuantReg: 12.03066) QuantErr: 12.03066 batch_time=0.44100
Train Epoch: 2 [281/500 17984/32000 (56%)] Loss: 0.26798 (QuantReg: 12.19280) QuantErr: 12.19280 batch_time=0.44999
Train Epoch: 2 [289/500 18496/32000 (58%)] Loss: 0.34657 (QuantReg: 12.70201) QuantErr: 12.70201 batch_time=0.46315
Train Epoch: 2 [297/500 19008/32000 (59%)] Loss: 0.34445 (QuantReg: 12.37846) QuantErr: 12.37846 batch_time=0.44864
Train Epoch: 2 [305/500 19520/32000 (61%)] Loss: 0.59839 (QuantReg: 11.91625) QuantErr: 11.91625 batch_time=0.44528
Train Epoch: 2 [313/500 20032/32000 (63%)] Loss: 0.44745 (QuantReg: 12.49513) QuantErr: 12.49513 batch_time=0.44478
Train Epoch: 2 [321/500 20544/32000 (64%)] Loss: 0.29735 (QuantReg: 12.42545) QuantErr: 12.42545 batch_time=0.94504
Train Epoch: 2 [329/500 21056/32000 (66%)] Loss: 0.45677 (QuantReg: 12.28759) QuantErr: 12.28759 batch_time=0.44418
Train Epoch: 2 [337/500 21568/32000 (67%)] Loss: 0.35988 (QuantReg: 12.62699) QuantErr: 12.62699 batch_time=0.45762
Train Epoch: 2 [345/500 22080/32000 (69%)] Loss: 0.48417 (QuantReg: 12.39612) QuantErr: 12.39612 batch_time=0.43627
Train Epoch: 2 [353/500 22592/32000 (71%)] Loss: 0.45944 (QuantReg: 12.29129) QuantErr: 12.29129 batch_time=0.43591
Train Epoch: 2 [361/500 23104/32000 (72%)] Loss: 0.31969 (QuantReg: 13.05449) QuantErr: 13.05449 batch_time=0.43822
Train Epoch: 2 [369/500 23616/32000 (74%)] Loss: 0.28519 (QuantReg: 12.39510) QuantErr: 12.39510 batch_time=0.44620
Train Epoch: 2 [377/500 24128/32000 (75%)] Loss: 0.41241 (QuantReg: 12.46684) QuantErr: 12.46684 batch_time=0.45663
Train Epoch: 2 [385/500 24640/32000 (77%)] Loss: 0.39577 (QuantReg: 12.56548) QuantErr: 12.56548 batch_time=0.96423
Train Epoch: 2 [393/500 25152/32000 (79%)] Loss: 0.46063 (QuantReg: 12.61829) QuantErr: 12.61829 batch_time=0.44632
Train Epoch: 2 [401/500 25664/32000 (80%)] Loss: 0.35709 (QuantReg: 12.25833) QuantErr: 12.25833 batch_time=0.44624
Train Epoch: 2 [409/500 26176/32000 (82%)] Loss: 0.37262 (QuantReg: 12.66273) QuantErr: 12.66273 batch_time=0.44548
Train Epoch: 2 [417/500 26688/32000 (83%)] Loss: 0.36536 (QuantReg: 12.84407) QuantErr: 12.84407 batch_time=0.45127
Train Epoch: 2 [425/500 27200/32000 (85%)] Loss: 0.47645 (QuantReg: 12.23712) QuantErr: 12.23712 batch_time=0.44872
Train Epoch: 2 [433/500 27712/32000 (87%)] Loss: 0.29463 (QuantReg: 12.92317) QuantErr: 12.92317 batch_time=0.44978
Train Epoch: 2 [441/500 28224/32000 (88%)] Loss: 0.31129 (QuantReg: 12.53390) QuantErr: 12.53390 batch_time=0.44776
Train Epoch: 2 [449/500 28736/32000 (90%)] Loss: 0.39711 (QuantReg: 12.74175) QuantErr: 12.74175 batch_time=0.99466
Train Epoch: 2 [457/500 29248/32000 (91%)] Loss: 0.34072 (QuantReg: 12.82094) QuantErr: 12.82094 batch_time=0.44497
Train Epoch: 2 [465/500 29760/32000 (93%)] Loss: 0.30355 (QuantReg: 12.82866) QuantErr: 12.82866 batch_time=0.45882
Train Epoch: 2 [473/500 30272/32000 (95%)] Loss: 0.35684 (QuantReg: 13.26902) QuantErr: 13.26902 batch_time=0.47630
Train Epoch: 2 [481/500 30784/32000 (96%)] Loss: 0.28016 (QuantReg: 13.10432) QuantErr: 13.10432 batch_time=0.43879
Train Epoch: 2 [489/500 31296/32000 (98%)] Loss: 0.50992 (QuantReg: 12.88146) QuantErr: 12.88146 batch_time=0.44342
Train Epoch: 2 [497/500 31808/32000 (99%)] Loss: 0.37634 (QuantReg: 12.96099) QuantErr: 12.96099 batch_time=0.46293
Train Epoch: 2 codebook_update_time=1.66010
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch2.pth ...
Done in 3.845s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch2.pth ...
Done in 7.552s
removing stale ckpt [epoch 1] [took 0.00s]
removing stale ckpt [epoch 0] [took 0.01s]
epoch : 2
loss : 0.4646230584383011
quant_reg : 12.237847219467163
quant_err : 12.237847219467163
learning_rate : 5e-05
n_samples : 64000
n_steps : 1000
ActivityNet_val1_test/t2v_metrics/R1: 12.385600976205003
ActivityNet_val1_test/t2v_metrics/R5: 37.21781574130567
ActivityNet_val1_test/t2v_metrics/R10: 53.101484645108805
ActivityNet_val1_test/t2v_metrics/R50: 89.26174496644295
ActivityNet_val1_test/t2v_metrics/MedR: 9.0
ActivityNet_val1_test/t2v_metrics/MeanR: 29.53670937563555
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 29.03520364737379
ActivityNet_val1_test/v2t_metrics/R1: 13.666870042708968
ActivityNet_val1_test/v2t_metrics/R5: 40.085417937766934
ActivityNet_val1_test/v2t_metrics/R10: 56.88427903193004
ActivityNet_val1_test/v2t_metrics/R50: 90.5226764287167
ActivityNet_val1_test/v2t_metrics/MedR: 8.0
ActivityNet_val1_test/v2t_metrics/MeanR: 28.2479153955664
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 31.468973710563155
mnt_best : 29.03520364737379
not_improved_count: 0
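
Note on mnt_best and not_improved_count in these summaries: the monitored quantity is ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10; mnt_best tracks its best value so far (and the 'best' checkpoint is refreshed when it improves), while not_improved_count is reset to 0 on improvement. A minimal sketch of that bookkeeping, under the assumption that a plain greater-than comparison is used (the actual HCQ trainer code is not part of this log):

# Sketch (assumption) of the best-metric tracking reported as
# "mnt_best" / "not_improved_count"; names and logic are illustrative.
mnt_best = float("-inf")
not_improved_count = 0

def update_monitor(t2v_geometric_mean: float) -> bool:
    """Return True when the monitored metric improves (best ckpt refreshed)."""
    global mnt_best, not_improved_count
    if t2v_geometric_mean > mnt_best:
        mnt_best = t2v_geometric_mean
        not_improved_count = 0
        return True
    not_improved_count += 1
    return False

# Rounded t2v geometric-mean values from epochs 0-2 of this log:
for value in (0.1220, 25.7618, 29.0352):
    print(update_monitor(value), mnt_best, not_improved_count)
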
Train Epoch: 3 [1/500 64/32000 (0%)] Loss: 0.40993 (QuantReg: 10.71375) QuantErr: 10.71375 batch_time=22.99474
Train Epoch: 3 [9/500 576/32000 (2%)] Loss: 0.46474 (QuantReg: 10.75748) QuantErr: 10.75748 batch_time=0.48450
Train Epoch: 3 [17/500 1088/32000 (3%)] Loss: 0.38643 (QuantReg: 10.99748) QuantErr: 10.99748 batch_time=0.44436
Train Epoch: 3 [25/500 1600/32000 (5%)] Loss: 0.35345 (QuantReg: 10.25052) QuantErr: 10.25052 batch_time=0.44962
Train Epoch: 3 [33/500 2112/32000 (7%)] Loss: 0.40726 (QuantReg: 10.72150) QuantErr: 10.72150 batch_time=0.51551
Train Epoch: 3 [41/500 2624/32000 (8%)] Loss: 0.40029 (QuantReg: 10.84274) QuantErr: 10.84274 batch_time=0.45129
Train Epoch: 3 [49/500 3136/32000 (10%)] Loss: 0.25687 (QuantReg: 11.01723) QuantErr: 11.01723 batch_time=0.44791
Train Epoch: 3 [57/500 3648/32000 (11%)] Loss: 0.19419 (QuantReg: 10.72959) QuantErr: 10.72959 batch_time=0.43958
Train Epoch: 3 [65/500 4160/32000 (13%)] Loss: 0.31108 (QuantReg: 10.44353) QuantErr: 10.44353 batch_time=0.44033
Train Epoch: 3 [73/500 4672/32000 (15%)] Loss: 0.26679 (QuantReg: 10.48056) QuantErr: 10.48056 batch_time=0.44636
Train Epoch: 3 [81/500 5184/32000 (16%)] Loss: 0.24867 (QuantReg: 10.88305) QuantErr: 10.88305 batch_time=0.44768
Train Epoch: 3 [89/500 5696/32000 (18%)] Loss: 0.36001 (QuantReg: 10.76767) QuantErr: 10.76767 batch_time=0.44208
Train Epoch: 3 [97/500 6208/32000 (19%)] Loss: 0.31835 (QuantReg: 10.75226) QuantErr: 10.75226 batch_time=0.43788
Train Epoch: 3 [105/500 6720/32000 (21%)] Loss: 0.32759 (QuantReg: 10.27977) QuantErr: 10.27977 batch_time=0.44229
Train Epoch: 3 [113/500 7232/32000 (23%)] Loss: 0.35476 (QuantReg: 11.09404) QuantErr: 11.09404 batch_time=0.44306
Train Epoch: 3 [121/500 7744/32000 (24%)] Loss: 0.46567 (QuantReg: 10.30645) QuantErr: 10.30645 batch_time=0.44015
Train Epoch: 3 [129/500 8256/32000 (26%)] Loss: 0.28122 (QuantReg: 10.75619) QuantErr: 10.75619 batch_time=0.44164
Train Epoch: 3 [137/500 8768/32000 (27%)] Loss: 0.27271 (QuantReg: 10.63039) QuantErr: 10.63039 batch_time=0.44508
Train Epoch: 3 [145/500 9280/32000 (29%)] Loss: 0.25132 (QuantReg: 10.98249) QuantErr: 10.98249 batch_time=0.47105
Train Epoch: 3 [153/500 9792/32000 (31%)] Loss: 0.36782 (QuantReg: 10.78363) QuantErr: 10.78363 batch_time=0.44402
Train Epoch: 3 [161/500 10304/32000 (32%)] Loss: 0.43878 (QuantReg: 10.92621) QuantErr: 10.92621 batch_time=0.47842
Train Epoch: 3 [169/500 10816/32000 (34%)] Loss: 0.39755 (QuantReg: 10.78389) QuantErr: 10.78389 batch_time=0.47038
Train Epoch: 3 [177/500 11328/32000 (35%)] Loss: 0.36341 (QuantReg: 10.60740) QuantErr: 10.60740 batch_time=0.44630
Train Epoch: 3 [185/500 11840/32000 (37%)] Loss: 0.21364 (QuantReg: 11.00736) QuantErr: 11.00736 batch_time=0.44562
Train Epoch: 3 [193/500 12352/32000 (39%)] Loss: 0.28754 (QuantReg: 10.95399) QuantErr: 10.95399 batch_time=0.44158
Train Epoch: 3 [201/500 12864/32000 (40%)] Loss: 0.35121 (QuantReg: 10.92761) QuantErr: 10.92761 batch_time=0.44814
Train Epoch: 3 [209/500 13376/32000 (42%)] Loss: 0.29015 (QuantReg: 10.73912) QuantErr: 10.73912 batch_time=0.44942
Train Epoch: 3 [217/500 13888/32000 (43%)] Loss: 0.21805 (QuantReg: 10.95119) QuantErr: 10.95119 batch_time=0.44326
Train Epoch: 3 [225/500 14400/32000 (45%)] Loss: 0.39336 (QuantReg: 10.82223) QuantErr: 10.82223 batch_time=0.44383
Train Epoch: 3 [233/500 14912/32000 (47%)] Loss: 0.36095 (QuantReg: 10.79705) QuantErr: 10.79705 batch_time=0.44080
Train Epoch: 3 [241/500 15424/32000 (48%)] Loss: 0.29764 (QuantReg: 11.44570) QuantErr: 11.44570 batch_time=0.51349
Train Epoch: 3 [249/500 15936/32000 (50%)] Loss: 0.29001 (QuantReg: 10.77888) QuantErr: 10.77888 batch_time=0.44635
Train Epoch: 3 [257/500 16448/32000 (51%)] Loss: 0.27291 (QuantReg: 11.07236) QuantErr: 11.07236 batch_time=0.47518
Train Epoch: 3 [265/500 16960/32000 (53%)] Loss: 0.50357 (QuantReg: 11.26909) QuantErr: 11.26909 batch_time=0.47454
Train Epoch: 3 [273/500 17472/32000 (55%)] Loss: 0.31944 (QuantReg: 10.91059) QuantErr: 10.91059 batch_time=0.47373
Train Epoch: 3 [281/500 17984/32000 (56%)] Loss: 0.29584 (QuantReg: 10.88494) QuantErr: 10.88494 batch_time=0.48053
Train Epoch: 3 [289/500 18496/32000 (58%)] Loss: 0.24773 (QuantReg: 11.28522) QuantErr: 11.28522 batch_time=0.44815
Train Epoch: 3 [297/500 19008/32000 (59%)] Loss: 0.39481 (QuantReg: 11.03147) QuantErr: 11.03147 batch_time=0.43711
Train Epoch: 3 [305/500 19520/32000 (61%)] Loss: 0.21448 (QuantReg: 10.98965) QuantErr: 10.98965 batch_time=0.43810
Train Epoch: 3 [313/500 20032/32000 (63%)] Loss: 0.37343 (QuantReg: 11.14926) QuantErr: 11.14926 batch_time=0.43623
Train Epoch: 3 [321/500 20544/32000 (64%)] Loss: 0.42089 (QuantReg: 10.84798) QuantErr: 10.84798 batch_time=0.44012
Train Epoch: 3 [329/500 21056/32000 (66%)] Loss: 0.31365 (QuantReg: 10.82343) QuantErr: 10.82343 batch_time=0.46678
Train Epoch: 3 [337/500 21568/32000 (67%)] Loss: 0.26799 (QuantReg: 10.60613) QuantErr: 10.60613 batch_time=0.45443
Train Epoch: 3 [345/500 22080/32000 (69%)] Loss: 0.24319 (QuantReg: 10.97656) QuantErr: 10.97656 batch_time=0.45093
Train Epoch: 3 [353/500 22592/32000 (71%)] Loss: 0.32546 (QuantReg: 10.97236) QuantErr: 10.97236 batch_time=0.44696
Train Epoch: 3 [361/500 23104/32000 (72%)] Loss: 0.25567 (QuantReg: 11.36458) QuantErr: 11.36458 batch_time=0.44506
Train Epoch: 3 [369/500 23616/32000 (74%)] Loss: 0.26189 (QuantReg: 11.25579) QuantErr: 11.25579 batch_time=0.44577
Train Epoch: 3 [377/500 24128/32000 (75%)] Loss: 0.29855 (QuantReg: 10.67048) QuantErr: 10.67048 batch_time=0.44247
Train Epoch: 3 [385/500 24640/32000 (77%)] Loss: 0.21579 (QuantReg: 11.24943) QuantErr: 11.24943 batch_time=0.44293
Train Epoch: 3 [393/500 25152/32000 (79%)] Loss: 0.38956 (QuantReg: 10.93985) QuantErr: 10.93985 batch_time=0.43846
Train Epoch: 3 [401/500 25664/32000 (80%)] Loss: 0.36131 (QuantReg: 11.46022) QuantErr: 11.46022 batch_time=0.44596
Train Epoch: 3 [409/500 26176/32000 (82%)] Loss: 0.29492 (QuantReg: 10.94477) QuantErr: 10.94477 batch_time=0.45094
Train Epoch: 3 [417/500 26688/32000 (83%)] Loss: 0.27739 (QuantReg: 11.41177) QuantErr: 11.41177 batch_time=0.47105
Train Epoch: 3 [425/500 27200/32000 (85%)] Loss: 0.29994 (QuantReg: 10.99319) QuantErr: 10.99319 batch_time=0.44569
Train Epoch: 3 [433/500 27712/32000 (87%)] Loss: 0.22637 (QuantReg: 10.94746) QuantErr: 10.94746 batch_time=0.44288
Train Epoch: 3 [441/500 28224/32000 (88%)] Loss: 0.29155 (QuantReg: 11.36957) QuantErr: 11.36957 batch_time=0.44865
Train Epoch: 3 [449/500 28736/32000 (90%)] Loss: 0.30023 (QuantReg: 11.21756) QuantErr: 11.21756 batch_time=0.44343
Train Epoch: 3 [457/500 29248/32000 (91%)] Loss: 0.19223 (QuantReg: 11.06503) QuantErr: 11.06503 batch_time=0.43983
Train Epoch: 3 [465/500 29760/32000 (93%)] Loss: 0.29842 (QuantReg: 11.30961) QuantErr: 11.30961 batch_time=0.44554
Train Epoch: 3 [473/500 30272/32000 (95%)] Loss: 0.34664 (QuantReg: 11.19434) QuantErr: 11.19434 batch_time=0.44184
Train Epoch: 3 [481/500 30784/32000 (96%)] Loss: 0.32124 (QuantReg: 11.66651) QuantErr: 11.66651 batch_time=0.45018
Train Epoch: 3 [489/500 31296/32000 (98%)] Loss: 0.28483 (QuantReg: 10.99618) QuantErr: 10.99618 batch_time=0.44850
Train Epoch: 3 [497/500 31808/32000 (99%)] Loss: 0.32154 (QuantReg: 11.24058) QuantErr: 11.24058 batch_time=0.44788
Train Epoch: 3 codebook_update_time=1.67331
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch3.pth ...
Done in 3.488s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch3.pth ...
Done in 7.073s
removing stale ckpt [epoch 2] [took 0.00s]
epoch : 3
loss : 0.3017108673900366
quant_reg : 10.954798482894898
quant_err : 10.954798482894898
learning_rate : 4.25e-05
n_samples : 96000
n_steps : 1500
ActivityNet_val1_test/t2v_metrics/R1: 14.012609314622738
ActivityNet_val1_test/t2v_metrics/R5: 39.251576164327844
ActivityNet_val1_test/t2v_metrics/R10: 56.457189343095386
ActivityNet_val1_test/t2v_metrics/R50: 90.92942851332113
ActivityNet_val1_test/t2v_metrics/MedR: 8.0
ActivityNet_val1_test/t2v_metrics/MeanR: 28.050030506406344
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 31.43150111000888
ActivityNet_val1_test/v2t_metrics/R1: 14.968476713443156
ActivityNet_val1_test/v2t_metrics/R5: 41.67175106772422
ActivityNet_val1_test/v2t_metrics/R10: 58.287573723815335
ActivityNet_val1_test/v2t_metrics/R50: 91.25483018100468
ActivityNet_val1_test/v2t_metrics/MedR: 8.0
ActivityNet_val1_test/v2t_metrics/MeanR: 26.869025828757373
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 33.12824616795514
mnt_best : 31.43150111000888
not_improved_count: 0
Train Epoch: 4 [1/500 64/32000 (0%)] Loss: 0.31668 (QuantReg: 10.49403) QuantErr: 10.49403 batch_time=24.97642
Train Epoch: 4 [9/500 576/32000 (2%)] Loss: 0.28908 (QuantReg: 10.25020) QuantErr: 10.25020 batch_time=0.44047
Train Epoch: 4 [17/500 1088/32000 (3%)] Loss: 0.25065 (QuantReg: 10.22250) QuantErr: 10.22250 batch_time=0.43714
Train Epoch: 4 [25/500 1600/32000 (5%)] Loss: 0.32267 (QuantReg: 10.36715) QuantErr: 10.36715 batch_time=0.44198
Train Epoch: 4 [33/500 2112/32000 (7%)] Loss: 0.30382 (QuantReg: 10.41416) QuantErr: 10.41416 batch_time=0.44714
Train Epoch: 4 [41/500 2624/32000 (8%)] Loss: 0.20200 (QuantReg: 10.91406) QuantErr: 10.91406 batch_time=0.44428
Train Epoch: 4 [49/500 3136/32000 (10%)] Loss: 0.27584 (QuantReg: 10.41804) QuantErr: 10.41804 batch_time=0.47418
Train Epoch: 4 [57/500 3648/32000 (11%)] Loss: 0.20106 (QuantReg: 10.67783) QuantErr: 10.67783 batch_time=0.44292
Train Epoch: 4 [65/500 4160/32000 (13%)] Loss: 0.25178 (QuantReg: 10.58445) QuantErr: 10.58445 batch_time=0.87217
Train Epoch: 4 [73/500 4672/32000 (15%)] Loss: 0.32869 (QuantReg: 10.36861) QuantErr: 10.36861 batch_time=0.45181
Train Epoch: 4 [81/500 5184/32000 (16%)] Loss: 0.41843 (QuantReg: 10.21831) QuantErr: 10.21831 batch_time=0.44495
Train Epoch: 4 [89/500 5696/32000 (18%)] Loss: 0.29685 (QuantReg: 10.31413) QuantErr: 10.31413 batch_time=0.43781
Train Epoch: 4 [97/500 6208/32000 (19%)] Loss: 0.12007 (QuantReg: 10.71739) QuantErr: 10.71739 batch_time=0.44608
Train Epoch: 4 [105/500 6720/32000 (21%)] Loss: 0.22683 (QuantReg: 10.26605) QuantErr: 10.26605 batch_time=0.44479
Train Epoch: 4 [113/500 7232/32000 (23%)] Loss: 0.16645 (QuantReg: 10.35295) QuantErr: 10.35295 batch_time=0.44591
Train Epoch: 4 [121/500 7744/32000 (24%)] Loss: 0.34859 (QuantReg: 10.55089) QuantErr: 10.55089 batch_time=0.44542
Train Epoch: 4 [129/500 8256/32000 (26%)] Loss: 0.23694 (QuantReg: 10.33128) QuantErr: 10.33128 batch_time=0.83888
Train Epoch: 4 [137/500 8768/32000 (27%)] Loss: 0.24139 (QuantReg: 10.35362) QuantErr: 10.35362 batch_time=0.43796
Train Epoch: 4 [145/500 9280/32000 (29%)] Loss: 0.18898 (QuantReg: 10.69968) QuantErr: 10.69968 batch_time=0.43641
Train Epoch: 4 [153/500 9792/32000 (31%)] Loss: 0.23084 (QuantReg: 10.84095) QuantErr: 10.84095 batch_time=0.44288
Train Epoch: 4 [161/500 10304/32000 (32%)] Loss: 0.29987 (QuantReg: 10.64164) QuantErr: 10.64164 batch_time=0.44455
Train Epoch: 4 [169/500 10816/32000 (34%)] Loss: 0.25026 (QuantReg: 11.11030) QuantErr: 11.11030 batch_time=0.44266
Train Epoch: 4 [177/500 11328/32000 (35%)] Loss: 0.18914 (QuantReg: 10.97939) QuantErr: 10.97939 batch_time=0.44795
Train Epoch: 4 [185/500 11840/32000 (37%)] Loss: 0.31622 (QuantReg: 10.22728) QuantErr: 10.22728 batch_time=0.44537
Train Epoch: 4 [193/500 12352/32000 (39%)] Loss: 0.20161 (QuantReg: 10.67864) QuantErr: 10.67864 batch_time=0.87546
Train Epoch: 4 [201/500 12864/32000 (40%)] Loss: 0.29261 (QuantReg: 10.51759) QuantErr: 10.51759 batch_time=0.44844
Train Epoch: 4 [209/500 13376/32000 (42%)] Loss: 0.29786 (QuantReg: 10.79147) QuantErr: 10.79147 batch_time=0.44668
Train Epoch: 4 [217/500 13888/32000 (43%)] Loss: 0.16765 (QuantReg: 10.82858) QuantErr: 10.82858 batch_time=0.44343
Train Epoch: 4 [225/500 14400/32000 (45%)] Loss: 0.18758 (QuantReg: 11.22581) QuantErr: 11.22581 batch_time=0.44164
Train Epoch: 4 [233/500 14912/32000 (47%)] Loss: 0.22730 (QuantReg: 10.58330) QuantErr: 10.58330 batch_time=0.44875
Train Epoch: 4 [241/500 15424/32000 (48%)] Loss: 0.17710 (QuantReg: 11.03730) QuantErr: 11.03730 batch_time=0.43982
Train Epoch: 4 [249/500 15936/32000 (50%)] Loss: 0.13548 (QuantReg: 10.73270) QuantErr: 10.73270 batch_time=0.44444
Train Epoch: 4 [257/500 16448/32000 (51%)] Loss: 0.13474 (QuantReg: 10.80898) QuantErr: 10.80898 batch_time=0.83512
Train Epoch: 4 [265/500 16960/32000 (53%)] Loss: 0.20839 (QuantReg: 11.06962) QuantErr: 11.06962 batch_time=0.44474
Train Epoch: 4 [273/500 17472/32000 (55%)] Loss: 0.13639 (QuantReg: 11.29702) QuantErr: 11.29702 batch_time=0.44378
Train Epoch: 4 [281/500 17984/32000 (56%)] Loss: 0.31699 (QuantReg: 10.60871) QuantErr: 10.60871 batch_time=0.45181
Train Epoch: 4 [289/500 18496/32000 (58%)] Loss: 0.32189 (QuantReg: 10.53854) QuantErr: 10.53854 batch_time=0.44476
Train Epoch: 4 [297/500 19008/32000 (59%)] Loss: 0.17006 (QuantReg: 11.08009) QuantErr: 11.08009 batch_time=0.44035
Train Epoch: 4 [305/500 19520/32000 (61%)] Loss: 0.17244 (QuantReg: 10.55333) QuantErr: 10.55333 batch_time=0.44844
Train Epoch: 4 [313/500 20032/32000 (63%)] Loss: 0.21301 (QuantReg: 10.81970) QuantErr: 10.81970 batch_time=0.43873
Train Epoch: 4 [321/500 20544/32000 (64%)] Loss: 0.25626 (QuantReg: 10.65682) QuantErr: 10.65682 batch_time=0.87750
Train Epoch: 4 [329/500 21056/32000 (66%)] Loss: 0.17315 (QuantReg: 11.03007) QuantErr: 11.03007 batch_time=0.44030
Train Epoch: 4 [337/500 21568/32000 (67%)] Loss: 0.16300 (QuantReg: 10.74859) QuantErr: 10.74859 batch_time=0.46207
Train Epoch: 4 [345/500 22080/32000 (69%)] Loss: 0.23563 (QuantReg: 10.97024) QuantErr: 10.97024 batch_time=0.44685
Train Epoch: 4 [353/500 22592/32000 (71%)] Loss: 0.27254 (QuantReg: 10.98203) QuantErr: 10.98203 batch_time=0.47412
Train Epoch: 4 [361/500 23104/32000 (72%)] Loss: 0.40224 (QuantReg: 10.48666) QuantErr: 10.48666 batch_time=0.44721
Train Epoch: 4 [369/500 23616/32000 (74%)] Loss: 0.22941 (QuantReg: 10.84777) QuantErr: 10.84777 batch_time=0.45859
Train Epoch: 4 [377/500 24128/32000 (75%)] Loss: 0.23385 (QuantReg: 11.15265) QuantErr: 11.15265 batch_time=0.45302
Train Epoch: 4 [385/500 24640/32000 (77%)] Loss: 0.30820 (QuantReg: 10.94125) QuantErr: 10.94125 batch_time=0.44794
Train Epoch: 4 [393/500 25152/32000 (79%)] Loss: 0.29493 (QuantReg: 10.75366) QuantErr: 10.75366 batch_time=0.45352
Train Epoch: 4 [401/500 25664/32000 (80%)] Loss: 0.18232 (QuantReg: 10.47132) QuantErr: 10.47132 batch_time=0.45770
Train Epoch: 4 [409/500 26176/32000 (82%)] Loss: 0.19428 (QuantReg: 10.82954) QuantErr: 10.82954 batch_time=0.44465
Train Epoch: 4 [417/500 26688/32000 (83%)] Loss: 0.18538 (QuantReg: 10.43752) QuantErr: 10.43752 batch_time=0.44576
Train Epoch: 4 [425/500 27200/32000 (85%)] Loss: 0.18616 (QuantReg: 11.12352) QuantErr: 11.12352 batch_time=0.44392
Train Epoch: 4 [433/500 27712/32000 (87%)] Loss: 0.19220 (QuantReg: 11.25396) QuantErr: 11.25396 batch_time=0.44708
Train Epoch: 4 [441/500 28224/32000 (88%)] Loss: 0.31112 (QuantReg: 10.77728) QuantErr: 10.77728 batch_time=0.44637
Train Epoch: 4 [449/500 28736/32000 (90%)] Loss: 0.15565 (QuantReg: 10.83008) QuantErr: 10.83008 batch_time=0.44358
Train Epoch: 4 [457/500 29248/32000 (91%)] Loss: 0.21716 (QuantReg: 11.09379) QuantErr: 11.09379 batch_time=0.44529
Train Epoch: 4 [465/500 29760/32000 (93%)] Loss: 0.23212 (QuantReg: 11.09332) QuantErr: 11.09332 batch_time=0.44735
Train Epoch: 4 [473/500 30272/32000 (95%)] Loss: 0.21230 (QuantReg: 10.96637) QuantErr: 10.96637 batch_time=0.44227
Train Epoch: 4 [481/500 30784/32000 (96%)] Loss: 0.16492 (QuantReg: 11.23637) QuantErr: 11.23637 batch_time=0.44321
Train Epoch: 4 [489/500 31296/32000 (98%)] Loss: 0.33311 (QuantReg: 11.25983) QuantErr: 11.25983 batch_time=0.44640
Train Epoch: 4 [497/500 31808/32000 (99%)] Loss: 0.18437 (QuantReg: 11.06644) QuantErr: 11.06644 batch_time=0.44771
Train Epoch: 4 codebook_update_time=1.63558
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch4.pth ...
Done in 3.787s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch4.pth ...
Done in 7.242s
removing stale ckpt [epoch 3] [took 0.00s]
epoch : 4
loss : 0.22879829144477845
quant_reg : 10.78496283531189
quant_err : 10.78496283531189
learning_rate : 4.25e-05
n_samples : 128000
n_steps : 2000
ActivityNet_val1_test/t2v_metrics/R1: 14.05328452308318
ActivityNet_val1_test/t2v_metrics/R5: 39.84136668700427
ActivityNet_val1_test/t2v_metrics/R10: 57.63677038844824
ActivityNet_val1_test/t2v_metrics/R50: 91.49888143176734
ActivityNet_val1_test/t2v_metrics/MedR: 8.0
ActivityNet_val1_test/t2v_metrics/MeanR: 25.777404921700224
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 31.837373677270676
ActivityNet_val1_test/v2t_metrics/R1: 15.761643278421802
ActivityNet_val1_test/v2t_metrics/R5: 43.054708155379295
ActivityNet_val1_test/v2t_metrics/R10: 59.7518812283913
ActivityNet_val1_test/v2t_metrics/R50: 91.96664632906244
ActivityNet_val1_test/v2t_metrics/MedR: 7.0
ActivityNet_val1_test/v2t_metrics/MeanR: 24.908074028879398
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 34.35510200588799
mnt_best : 31.837373677270676
not_improved_count: 0
Train Epoch: 5 [1/500 64/32000 (0%)] Loss: 0.17851 (QuantReg: 10.73540) QuantErr: 10.73540 batch_time=23.24251
Train Epoch: 5 [9/500 576/32000 (2%)] Loss: 0.12941 (QuantReg: 10.92670) QuantErr: 10.92670 batch_time=0.43812
Train Epoch: 5 [17/500 1088/32000 (3%)] Loss: 0.12576 (QuantReg: 10.31118) QuantErr: 10.31118 batch_time=0.50535
Train Epoch: 5 [25/500 1600/32000 (5%)] Loss: 0.19024 (QuantReg: 10.58322) QuantErr: 10.58322 batch_time=0.43682
Train Epoch: 5 [33/500 2112/32000 (7%)] Loss: 0.17136 (QuantReg: 10.55180) QuantErr: 10.55180 batch_time=0.44439
Train Epoch: 5 [41/500 2624/32000 (8%)] Loss: 0.18249 (QuantReg: 10.98717) QuantErr: 10.98717 batch_time=0.43842
Train Epoch: 5 [49/500 3136/32000 (10%)] Loss: 0.17490 (QuantReg: 10.97654) QuantErr: 10.97654 batch_time=0.44244
Train Epoch: 5 [57/500 3648/32000 (11%)] Loss: 0.21813 (QuantReg: 10.44407) QuantErr: 10.44407 batch_time=0.43920
Train Epoch: 5 [65/500 4160/32000 (13%)] Loss: 0.16118 (QuantReg: 10.59707) QuantErr: 10.59707 batch_time=0.45851
Train Epoch: 5 [73/500 4672/32000 (15%)] Loss: 0.17793 (QuantReg: 10.66656) QuantErr: 10.66656 batch_time=0.49298
Train Epoch: 5 [81/500 5184/32000 (16%)] Loss: 0.17820 (QuantReg: 10.78656) QuantErr: 10.78656 batch_time=0.44072
Train Epoch: 5 [89/500 5696/32000 (18%)] Loss: 0.17827 (QuantReg: 10.38005) QuantErr: 10.38005 batch_time=0.43655
Train Epoch: 5 [97/500 6208/32000 (19%)] Loss: 0.17859 (QuantReg: 10.61297) QuantErr: 10.61297 batch_time=0.43662
Train Epoch: 5 [105/500 6720/32000 (21%)] Loss: 0.16239 (QuantReg: 10.91727) QuantErr: 10.91727 batch_time=0.47699
Train Epoch: 5 [113/500 7232/32000 (23%)] Loss: 0.10224 (QuantReg: 10.97290) QuantErr: 10.97290 batch_time=0.43816
Train Epoch: 5 [121/500 7744/32000 (24%)] Loss: 0.12901 (QuantReg: 10.89669) QuantErr: 10.89669 batch_time=0.43704
Train Epoch: 5 [129/500 8256/32000 (26%)] Loss: 0.17970 (QuantReg: 10.67652) QuantErr: 10.67652 batch_time=0.47864
Train Epoch: 5 [137/500 8768/32000 (27%)] Loss: 0.22734 (QuantReg: 10.95682) QuantErr: 10.95682 batch_time=0.47040
Train Epoch: 5 [145/500 9280/32000 (29%)] Loss: 0.16587 (QuantReg: 11.14880) QuantErr: 11.14880 batch_time=0.46415
Train Epoch: 5 [153/500 9792/32000 (31%)] Loss: 0.22604 (QuantReg: 10.68453) QuantErr: 10.68453 batch_time=0.45043
Train Epoch: 5 [161/500 10304/32000 (32%)] Loss: 0.27198 (QuantReg: 10.38835) QuantErr: 10.38835 batch_time=0.43884
Train Epoch: 5 [169/500 10816/32000 (34%)] Loss: 0.16967 (QuantReg: 10.89299) QuantErr: 10.89299 batch_time=0.43511
Train Epoch: 5 [177/500 11328/32000 (35%)] Loss: 0.18981 (QuantReg: 10.52777) QuantErr: 10.52777 batch_time=0.44123
Train Epoch: 5 [185/500 11840/32000 (37%)] Loss: 0.15988 (QuantReg: 11.07530) QuantErr: 11.07530 batch_time=0.44027
Train Epoch: 5 [193/500 12352/32000 (39%)] Loss: 0.18778 (QuantReg: 11.06758) QuantErr: 11.06758 batch_time=0.43596
Train Epoch: 5 [201/500 12864/32000 (40%)] Loss: 0.19788 (QuantReg: 10.71915) QuantErr: 10.71915 batch_time=0.43515
Train Epoch: 5 [209/500 13376/32000 (42%)] Loss: 0.16586 (QuantReg: 10.94318) QuantErr: 10.94318 batch_time=0.43475
Train Epoch: 5 [217/500 13888/32000 (43%)] Loss: 0.15113 (QuantReg: 10.99276) QuantErr: 10.99276 batch_time=0.43444
Train Epoch: 5 [225/500 14400/32000 (45%)] Loss: 0.19650 (QuantReg: 11.03507) QuantErr: 11.03507 batch_time=0.43814
Train Epoch: 5 [233/500 14912/32000 (47%)] Loss: 0.13557 (QuantReg: 11.07434) QuantErr: 11.07434 batch_time=0.47939
Train Epoch: 5 [241/500 15424/32000 (48%)] Loss: 0.29574 (QuantReg: 10.89392) QuantErr: 10.89392 batch_time=0.44689
Train Epoch: 5 [249/500 15936/32000 (50%)] Loss: 0.18821 (QuantReg: 10.76620) QuantErr: 10.76620 batch_time=0.44700
Train Epoch: 5 [257/500 16448/32000 (51%)] Loss: 0.22391 (QuantReg: 11.00930) QuantErr: 11.00930 batch_time=0.45152
Train Epoch: 5 [265/500 16960/32000 (53%)] Loss: 0.12234 (QuantReg: 10.75050) QuantErr: 10.75050 batch_time=0.45152
Train Epoch: 5 [273/500 17472/32000 (55%)] Loss: 0.11610 (QuantReg: 10.56121) QuantErr: 10.56121 batch_time=0.44883
Train Epoch: 5 [281/500 17984/32000 (56%)] Loss: 0.22500 (QuantReg: 10.27580) QuantErr: 10.27580 batch_time=0.44363
Train Epoch: 5 [289/500 18496/32000 (58%)] Loss: 0.23407 (QuantReg: 10.15714) QuantErr: 10.15714 batch_time=0.45403
Train Epoch: 5 [297/500 19008/32000 (59%)] Loss: 0.18007 (QuantReg: 11.05294) QuantErr: 11.05294 batch_time=0.48725
Train Epoch: 5 [305/500 19520/32000 (61%)] Loss: 0.16549 (QuantReg: 11.18235) QuantErr: 11.18235 batch_time=0.49296
Train Epoch: 5 [313/500 20032/32000 (63%)] Loss: 0.19557 (QuantReg: 10.69771) QuantErr: 10.69771 batch_time=0.44493
Train Epoch: 5 [321/500 20544/32000 (64%)] Loss: 0.21296 (QuantReg: 10.54412) QuantErr: 10.54412 batch_time=0.44090
Train Epoch: 5 [329/500 21056/32000 (66%)] Loss: 0.19766 (QuantReg: 10.84439) QuantErr: 10.84439 batch_time=0.44442
Train Epoch: 5 [337/500 21568/32000 (67%)] Loss: 0.28217 (QuantReg: 10.65438) QuantErr: 10.65438 batch_time=0.43831
Train Epoch: 5 [345/500 22080/32000 (69%)] Loss: 0.13551 (QuantReg: 10.76471) QuantErr: 10.76471 batch_time=0.43604
Train Epoch: 5 [353/500 22592/32000 (71%)] Loss: 0.14686 (QuantReg: 11.19111) QuantErr: 11.19111 batch_time=0.45991
Train Epoch: 5 [361/500 23104/32000 (72%)] Loss: 0.16182 (QuantReg: 10.93837) QuantErr: 10.93837 batch_time=0.45052
Train Epoch: 5 [369/500 23616/32000 (74%)] Loss: 0.20485 (QuantReg: 10.66376) QuantErr: 10.66376 batch_time=0.44698
Train Epoch: 5 [377/500 24128/32000 (75%)] Loss: 0.22762 (QuantReg: 10.66908) QuantErr: 10.66908 batch_time=0.45072
Train Epoch: 5 [385/500 24640/32000 (77%)] Loss: 0.13504 (QuantReg: 10.73984) QuantErr: 10.73984 batch_time=0.48775
Train Epoch: 5 [393/500 25152/32000 (79%)] Loss: 0.10810 (QuantReg: 10.96009) QuantErr: 10.96009 batch_time=0.46806
Train Epoch: 5 [401/500 25664/32000 (80%)] Loss: 0.08526 (QuantReg: 10.92311) QuantErr: 10.92311 batch_time=0.43955
Train Epoch: 5 [409/500 26176/32000 (82%)] Loss: 0.15679 (QuantReg: 10.60987) QuantErr: 10.60987 batch_time=0.46727
Train Epoch: 5 [417/500 26688/32000 (83%)] Loss: 0.30909 (QuantReg: 10.89381) QuantErr: 10.89381 batch_time=0.43644
Train Epoch: 5 [425/500 27200/32000 (85%)] Loss: 0.17074 (QuantReg: 11.11278) QuantErr: 11.11278 batch_time=0.43733
Train Epoch: 5 [433/500 27712/32000 (87%)] Loss: 0.20773 (QuantReg: 10.68769) QuantErr: 10.68769 batch_time=0.43221
Train Epoch: 5 [441/500 28224/32000 (88%)] Loss: 0.12402 (QuantReg: 11.11136) QuantErr: 11.11136 batch_time=0.44055
Train Epoch: 5 [449/500 28736/32000 (90%)] Loss: 0.17298 (QuantReg: 10.68180) QuantErr: 10.68180 batch_time=0.43415
Train Epoch: 5 [457/500 29248/32000 (91%)] Loss: 0.16275 (QuantReg: 11.02382) QuantErr: 11.02382 batch_time=0.43334
Train Epoch: 5 [465/500 29760/32000 (93%)] Loss: 0.23387 (QuantReg: 10.86831) QuantErr: 10.86831 batch_time=0.43400
Train Epoch: 5 [473/500 30272/32000 (95%)] Loss: 0.18054 (QuantReg: 11.39383) QuantErr: 11.39383 batch_time=0.43523
Train Epoch: 5 [481/500 30784/32000 (96%)] Loss: 0.13112 (QuantReg: 11.02951) QuantErr: 11.02951 batch_time=0.46992
Train Epoch: 5 [489/500 31296/32000 (98%)] Loss: 0.11395 (QuantReg: 11.52053) QuantErr: 11.52053 batch_time=0.42833
Train Epoch: 5 [497/500 31808/32000 (99%)] Loss: 0.12778 (QuantReg: 11.05574) QuantErr: 11.05574 batch_time=0.44763
Train Epoch: 5 codebook_update_time=1.66809
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch5.pth ...
Done in 4.763s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch5.pth ...
Done in 8.427s
removing stale ckpt [epoch 4] [took 0.00s]
epoch : 5
loss : 0.17780708983540536
quant_reg : 10.778248008728028
quant_err : 10.778248008728028
learning_rate : 3.6125000000000004e-05
n_samples : 160000
n_steps : 2500
ActivityNet_val1_test/t2v_metrics/R1: 15.659955257270694
ActivityNet_val1_test/t2v_metrics/R5: 42.42424242424242
ActivityNet_val1_test/t2v_metrics/R10: 59.99593247915396
ActivityNet_val1_test/t2v_metrics/R50: 91.376855806386
ActivityNet_val1_test/t2v_metrics/MedR: 7.0
ActivityNet_val1_test/t2v_metrics/MeanR: 25.778726865975187
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 34.15928775673267
ActivityNet_val1_test/v2t_metrics/R1: 15.965019320724018
ActivityNet_val1_test/v2t_metrics/R5: 43.74618669920683
ActivityNet_val1_test/v2t_metrics/R10: 61.7246288387228
ActivityNet_val1_test/v2t_metrics/R50: 91.82428309945088
ActivityNet_val1_test/v2t_metrics/MedR: 7.0
ActivityNet_val1_test/v2t_metrics/MeanR: 25.051860890787065
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 35.06356306931447
mnt_best : 34.15928775673267
not_improved_count: 0
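
Note on learning_rate in the epoch summaries: it stays at 5e-05 for epochs 0-2, drops to 4.25e-05 for epochs 3-4, and to 3.6125e-05 for epochs 5-6, i.e. it is multiplied by 0.85 every 2 epochs. A minimal sketch of a schedule consistent with those values, assuming a standard PyTorch step decay (the real HCQ optimizer/scheduler configuration is not shown in this log):

import torch

# Sketch only: a StepLR schedule that reproduces the learning_rate pattern in
# this log (5e-05 -> 4.25e-05 -> 3.6125e-05, a factor of 0.85 every 2 epochs).
model = torch.nn.Linear(8, 8)  # placeholder parameters
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.85)

for epoch in range(1, 7):
    lr_used = optimizer.param_groups[0]["lr"]
    # ... one training epoch would run here ...
    scheduler.step()
    print(f"epoch {epoch}: learning_rate = {lr_used}")
# epochs 1-2: 5e-05, epochs 3-4: ~4.25e-05, epochs 5-6: ~3.6125e-05
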
Train Epoch: 6 [1/500 64/32000 (0%)] Loss: 0.19443 (QuantReg: 10.73992) QuantErr: 10.73992 batch_time=23.54455
Train Epoch: 6 [9/500 576/32000 (2%)] Loss: 0.15517 (QuantReg: 10.26380) QuantErr: 10.26380 batch_time=0.43502
Train Epoch: 6 [17/500 1088/32000 (3%)] Loss: 0.15631 (QuantReg: 10.70610) QuantErr: 10.70610 batch_time=0.43506
Train Epoch: 6 [25/500 1600/32000 (5%)] Loss: 0.11941 (QuantReg: 10.68809) QuantErr: 10.68809 batch_time=0.43604
Train Epoch: 6 [33/500 2112/32000 (7%)] Loss: 0.12116 (QuantReg: 10.46775) QuantErr: 10.46775 batch_time=0.61732
Train Epoch: 6 [41/500 2624/32000 (8%)] Loss: 0.18813 (QuantReg: 10.29799) QuantErr: 10.29799 batch_time=0.66069
Train Epoch: 6 [49/500 3136/32000 (10%)] Loss: 0.25467 (QuantReg: 10.45523) QuantErr: 10.45523 batch_time=0.43092
Train Epoch: 6 [57/500 3648/32000 (11%)] Loss: 0.34152 (QuantReg: 10.58192) QuantErr: 10.58192 batch_time=0.43480
Train Epoch: 6 [65/500 4160/32000 (13%)] Loss: 0.16110 (QuantReg: 10.65464) QuantErr: 10.65464 batch_time=0.44189
Train Epoch: 6 [73/500 4672/32000 (15%)] Loss: 0.22827 (QuantReg: 10.41096) QuantErr: 10.41096 batch_time=0.43523
Train Epoch: 6 [81/500 5184/32000 (16%)] Loss: 0.14081 (QuantReg: 10.56411) QuantErr: 10.56411 batch_time=0.44272
Train Epoch: 6 [89/500 5696/32000 (18%)] Loss: 0.22655 (QuantReg: 10.85612) QuantErr: 10.85612 batch_time=0.43739
Train Epoch: 6 [97/500 6208/32000 (19%)] Loss: 0.18192 (QuantReg: 10.39497) QuantErr: 10.39497 batch_time=0.59598
Train Epoch: 6 [105/500 6720/32000 (21%)] Loss: 0.20877 (QuantReg: 10.72946) QuantErr: 10.72946 batch_time=0.59381
Train Epoch: 6 [113/500 7232/32000 (23%)] Loss: 0.14428 (QuantReg: 10.76588) QuantErr: 10.76588 batch_time=0.43873
Train Epoch: 6 [121/500 7744/32000 (24%)] Loss: 0.08727 (QuantReg: 10.69525) QuantErr: 10.69525 batch_time=0.44349
Train Epoch: 6 [129/500 8256/32000 (26%)] Loss: 0.10754 (QuantReg: 10.64569) QuantErr: 10.64569 batch_time=0.44433
Train Epoch: 6 [137/500 8768/32000 (27%)] Loss: 0.12493 (QuantReg: 11.17598) QuantErr: 11.17598 batch_time=0.43958
Train Epoch: 6 [145/500 9280/32000 (29%)] Loss: 0.10857 (QuantReg: 11.05760) QuantErr: 11.05760 batch_time=0.44001
Train Epoch: 6 [153/500 9792/32000 (31%)] Loss: 0.13062 (QuantReg: 10.62638) QuantErr: 10.62638 batch_time=0.43618
Train Epoch: 6 [161/500 10304/32000 (32%)] Loss: 0.17111 (QuantReg: 10.60595) QuantErr: 10.60595 batch_time=0.59718
Train Epoch: 6 [169/500 10816/32000 (34%)] Loss: 0.20783 (QuantReg: 11.12497) QuantErr: 11.12497 batch_time=0.60353
Train Epoch: 6 [177/500 11328/32000 (35%)] Loss: 0.17495 (QuantReg: 10.66512) QuantErr: 10.66512 batch_time=0.43272
Train Epoch: 6 [185/500 11840/32000 (37%)] Loss: 0.23910 (QuantReg: 10.53741) QuantErr: 10.53741 batch_time=0.43558
Train Epoch: 6 [193/500 12352/32000 (39%)] Loss: 0.18836 (QuantReg: 10.70690) QuantErr: 10.70690 batch_time=0.43696
Train Epoch: 6 [201/500 12864/32000 (40%)] Loss: 0.18609 (QuantReg: 10.99919) QuantErr: 10.99919 batch_time=0.45367
Train Epoch: 6 [209/500 13376/32000 (42%)] Loss: 0.19375 (QuantReg: 10.72075) QuantErr: 10.72075 batch_time=0.48123
Train Epoch: 6 [217/500 13888/32000 (43%)] Loss: 0.11751 (QuantReg: 10.92525) QuantErr: 10.92525 batch_time=0.44562
Train Epoch: 6 [225/500 14400/32000 (45%)] Loss: 0.24624 (QuantReg: 11.00934) QuantErr: 11.00934 batch_time=0.61568
Train Epoch: 6 [233/500 14912/32000 (47%)] Loss: 0.15801 (QuantReg: 10.57636) QuantErr: 10.57636 batch_time=0.64081
Train Epoch: 6 [241/500 15424/32000 (48%)] Loss: 0.19955 (QuantReg: 10.80214) QuantErr: 10.80214 batch_time=0.46334
Train Epoch: 6 [249/500 15936/32000 (50%)] Loss: 0.17799 (QuantReg: 10.75019) QuantErr: 10.75019 batch_time=0.44791
Train Epoch: 6 [257/500 16448/32000 (51%)] Loss: 0.17378 (QuantReg: 10.61813) QuantErr: 10.61813 batch_time=0.43980
Train Epoch: 6 [265/500 16960/32000 (53%)] Loss: 0.18941 (QuantReg: 10.64413) QuantErr: 10.64413 batch_time=0.44444
Train Epoch: 6 [273/500 17472/32000 (55%)] Loss: 0.22122 (QuantReg: 10.77298) QuantErr: 10.77298 batch_time=0.43763
Train Epoch: 6 [281/500 17984/32000 (56%)] Loss: 0.17229 (QuantReg: 10.72384) QuantErr: 10.72384 batch_time=0.44168
Train Epoch: 6 [289/500 18496/32000 (58%)] Loss: 0.18959 (QuantReg: 10.73426) QuantErr: 10.73426 batch_time=0.60899
Train Epoch: 6 [297/500 19008/32000 (59%)] Loss: 0.16687 (QuantReg: 10.90883) QuantErr: 10.90883 batch_time=0.65076
Train Epoch: 6 [305/500 19520/32000 (61%)] Loss: 0.08038 (QuantReg: 10.94905) QuantErr: 10.94905 batch_time=0.47756
Train Epoch: 6 [313/500 20032/32000 (63%)] Loss: 0.24151 (QuantReg: 10.72030) QuantErr: 10.72030 batch_time=0.45327
Train Epoch: 6 [321/500 20544/32000 (64%)] Loss: 0.16828 (QuantReg: 10.95769) QuantErr: 10.95769 batch_time=0.44750
Train Epoch: 6 [329/500 21056/32000 (66%)] Loss: 0.13413 (QuantReg: 10.82130) QuantErr: 10.82130 batch_time=0.44325
Train Epoch: 6 [337/500 21568/32000 (67%)] Loss: 0.11289 (QuantReg: 10.78587) QuantErr: 10.78587 batch_time=0.44960
Train Epoch: 6 [345/500 22080/32000 (69%)] Loss: 0.12450 (QuantReg: 11.16088) QuantErr: 11.16088 batch_time=0.45245
Train Epoch: 6 [353/500 22592/32000 (71%)] Loss: 0.10545 (QuantReg: 10.76685) QuantErr: 10.76685 batch_time=0.61673
Train Epoch: 6 [361/500 23104/32000 (72%)] Loss: 0.22361 (QuantReg: 10.59281) QuantErr: 10.59281 batch_time=0.61894
Train Epoch: 6 [369/500 23616/32000 (74%)] Loss: 0.22367 (QuantReg: 11.38853) QuantErr: 11.38853 batch_time=0.44602
Train Epoch: 6 [377/500 24128/32000 (75%)] Loss: 0.14112 (QuantReg: 10.69026) QuantErr: 10.69026 batch_time=0.44957
Train Epoch: 6 [385/500 24640/32000 (77%)] Loss: 0.13163 (QuantReg: 10.85526) QuantErr: 10.85526 batch_time=0.44744
Train Epoch: 6 [393/500 25152/32000 (79%)] Loss: 0.13534 (QuantReg: 10.92420) QuantErr: 10.92420 batch_time=0.44674
Train Epoch: 6 [401/500 25664/32000 (80%)] Loss: 0.23201 (QuantReg: 10.34180) QuantErr: 10.34180 batch_time=0.44852
Train Epoch: 6 [409/500 26176/32000 (82%)] Loss: 0.11316 (QuantReg: 10.70067) QuantErr: 10.70067 batch_time=0.44643
Train Epoch: 6 [417/500 26688/32000 (83%)] Loss: 0.13033 (QuantReg: 10.63125) QuantErr: 10.63125 batch_time=0.59666
Train Epoch: 6 [425/500 27200/32000 (85%)] Loss: 0.16141 (QuantReg: 10.60937) QuantErr: 10.60937 batch_time=0.62940
Train Epoch: 6 [433/500 27712/32000 (87%)] Loss: 0.16288 (QuantReg: 10.90672) QuantErr: 10.90672 batch_time=0.45031
Train Epoch: 6 [441/500 28224/32000 (88%)] Loss: 0.19501 (QuantReg: 10.94549) QuantErr: 10.94549 batch_time=0.45183
Train Epoch: 6 [449/500 28736/32000 (90%)] Loss: 0.17285 (QuantReg: 10.55149) QuantErr: 10.55149 batch_time=0.45527
Train Epoch: 6 [457/500 29248/32000 (91%)] Loss: 0.12986 (QuantReg: 11.25691) QuantErr: 11.25691 batch_time=0.44852
Train Epoch: 6 [465/500 29760/32000 (93%)] Loss: 0.11187 (QuantReg: 11.05970) QuantErr: 11.05970 batch_time=0.44572
Train Epoch: 6 [473/500 30272/32000 (95%)] Loss: 0.11966 (QuantReg: 10.97981) QuantErr: 10.97981 batch_time=0.44821
Train Epoch: 6 [481/500 30784/32000 (96%)] Loss: 0.12350 (QuantReg: 11.30135) QuantErr: 11.30135 batch_time=0.61899
Train Epoch: 6 [489/500 31296/32000 (98%)] Loss: 0.18008 (QuantReg: 10.68379) QuantErr: 10.68379 batch_time=0.61792
Train Epoch: 6 [497/500 31808/32000 (99%)] Loss: 0.21449 (QuantReg: 10.93660) QuantErr: 10.93660 batch_time=0.45324
Train Epoch: 6 codebook_update_time=1.72323
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch6.pth ...
Done in 4.232s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch6.pth ...
Done in 8.849s
removing stale ckpt [epoch 5] [took 0.01s]
epoch : 6
loss : 0.1595266472324729
quant_reg : 10.792181676864624
quant_err : 10.792181676864624
learning_rate : 3.6125000000000004e-05
n_samples : 192000
n_steps : 3000
ActivityNet_val1_test/t2v_metrics/R1: 15.74130567419158
ActivityNet_val1_test/t2v_metrics/R5: 43.380109823062845
ActivityNet_val1_test/t2v_metrics/R10: 61.15517592027659
ActivityNet_val1_test/t2v_metrics/R50: 91.29550538946512
ActivityNet_val1_test/t2v_metrics/MedR: 7.0
ActivityNet_val1_test/t2v_metrics/MeanR: 25.984950172869635
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 34.69403986065813
ActivityNet_val1_test/v2t_metrics/R1: 16.758185885702666
ActivityNet_val1_test/v2t_metrics/R5: 44.90543014032947
ActivityNet_val1_test/v2t_metrics/R10: 62.151718527557456
ActivityNet_val1_test/v2t_metrics/R50: 92.17002237136465
ActivityNet_val1_test/v2t_metrics/MedR: 7.0
ActivityNet_val1_test/v2t_metrics/MeanR: 24.959833231645312
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 36.029618913201624
mnt_best : 34.69403986065813
not_improved_count: 0
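The geometric_mean_R1-R5-R10 figures in each epoch summary can be reproduced from the corresponding R1/R5/R10 values. A minimal sketch, assuming the metric is the plain geometric mean of the three recall percentages (the helper name below is illustrative, not taken from the HCQ code):

def geometric_mean(r1, r5, r10):
    # plain geometric mean of the three recall percentages
    return (r1 * r5 * r10) ** (1.0 / 3.0)

# epoch-6 t2v values from the summary above
print(geometric_mean(15.74130567419158, 43.380109823062845, 61.15517592027659))
# ~34.694, matching ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10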
Train Epoch: 7 [1/500 64/32000 (0%)] Loss: 0.09704 (QuantReg: 10.49316) QuantErr: 10.49316 batch_time=24.43721
Train Epoch: 7 [9/500 576/32000 (2%)] Loss: 0.12900 (QuantReg: 10.85610) QuantErr: 10.85610 batch_time=0.47123
Train Epoch: 7 [17/500 1088/32000 (3%)] Loss: 0.10922 (QuantReg: 11.00590) QuantErr: 11.00590 batch_time=0.49107
Train Epoch: 7 [25/500 1600/32000 (5%)] Loss: 0.17490 (QuantReg: 10.18777) QuantErr: 10.18777 batch_time=0.44900
Train Epoch: 7 [33/500 2112/32000 (7%)] Loss: 0.10345 (QuantReg: 10.37253) QuantErr: 10.37253 batch_time=0.44786
Train Epoch: 7 [41/500 2624/32000 (8%)] Loss: 0.10339 (QuantReg: 10.49200) QuantErr: 10.49200 batch_time=0.44787
Train Epoch: 7 [49/500 3136/32000 (10%)] Loss: 0.15855 (QuantReg: 10.54745) QuantErr: 10.54745 batch_time=0.44993
Train Epoch: 7 [57/500 3648/32000 (11%)] Loss: 0.10423 (QuantReg: 10.99332) QuantErr: 10.99332 batch_time=0.43997
Train Epoch: 7 [65/500 4160/32000 (13%)] Loss: 0.10730 (QuantReg: 10.91005) QuantErr: 10.91005 batch_time=0.64946
Train Epoch: 7 [73/500 4672/32000 (15%)] Loss: 0.12108 (QuantReg: 10.92254) QuantErr: 10.92254 batch_time=0.46931
Train Epoch: 7 [81/500 5184/32000 (16%)] Loss: 0.16159 (QuantReg: 10.59173) QuantErr: 10.59173 batch_time=0.47555
Train Epoch: 7 [89/500 5696/32000 (18%)] Loss: 0.11138 (QuantReg: 10.62034) QuantErr: 10.62034 batch_time=0.44768
Train Epoch: 7 [97/500 6208/32000 (19%)] Loss: 0.15643 (QuantReg: 10.94290) QuantErr: 10.94290 batch_time=0.44085
Train Epoch: 7 [105/500 6720/32000 (21%)] Loss: 0.21477 (QuantReg: 10.38681) QuantErr: 10.38681 batch_time=0.43479
Train Epoch: 7 [113/500 7232/32000 (23%)] Loss: 0.06851 (QuantReg: 10.46452) QuantErr: 10.46452 batch_time=0.44106
Train Epoch: 7 [121/500 7744/32000 (24%)] Loss: 0.17253 (QuantReg: 11.06437) QuantErr: 11.06437 batch_time=0.43105
Train Epoch: 7 [129/500 8256/32000 (26%)] Loss: 0.13254 (QuantReg: 10.96112) QuantErr: 10.96112 batch_time=0.62601
Train Epoch: 7 [137/500 8768/32000 (27%)] Loss: 0.06908 (QuantReg: 10.75589) QuantErr: 10.75589 batch_time=0.52493
Train Epoch: 7 [145/500 9280/32000 (29%)] Loss: 0.19648 (QuantReg: 10.50917) QuantErr: 10.50917 batch_time=0.47633
Train Epoch: 7 [153/500 9792/32000 (31%)] Loss: 0.11552 (QuantReg: 10.51240) QuantErr: 10.51240 batch_time=0.44160
Train Epoch: 7 [161/500 10304/32000 (32%)] Loss: 0.11583 (QuantReg: 10.46801) QuantErr: 10.46801 batch_time=0.44849
Train Epoch: 7 [169/500 10816/32000 (34%)] Loss: 0.16549 (QuantReg: 10.44192) QuantErr: 10.44192 batch_time=0.44262
Train Epoch: 7 [177/500 11328/32000 (35%)] Loss: 0.14546 (QuantReg: 10.68585) QuantErr: 10.68585 batch_time=0.44518
Train Epoch: 7 [185/500 11840/32000 (37%)] Loss: 0.15287 (QuantReg: 10.58775) QuantErr: 10.58775 batch_time=0.43368
Train Epoch: 7 [193/500 12352/32000 (39%)] Loss: 0.12410 (QuantReg: 10.43377) QuantErr: 10.43377 batch_time=0.64405
Train Epoch: 7 [201/500 12864/32000 (40%)] Loss: 0.12400 (QuantReg: 10.82856) QuantErr: 10.82856 batch_time=0.47551
Train Epoch: 7 [209/500 13376/32000 (42%)] Loss: 0.10025 (QuantReg: 10.74147) QuantErr: 10.74147 batch_time=0.48246
Train Epoch: 7 [217/500 13888/32000 (43%)] Loss: 0.15438 (QuantReg: 10.99773) QuantErr: 10.99773 batch_time=0.44630
Train Epoch: 7 [225/500 14400/32000 (45%)] Loss: 0.17559 (QuantReg: 10.81037) QuantErr: 10.81037 batch_time=0.44680
Train Epoch: 7 [233/500 14912/32000 (47%)] Loss: 0.09291 (QuantReg: 10.30277) QuantErr: 10.30277 batch_time=0.44181
Train Epoch: 7 [241/500 15424/32000 (48%)] Loss: 0.14372 (QuantReg: 10.58403) QuantErr: 10.58403 batch_time=0.48488
Train Epoch: 7 [249/500 15936/32000 (50%)] Loss: 0.14310 (QuantReg: 10.83456) QuantErr: 10.83456 batch_time=0.44013
Train Epoch: 7 [257/500 16448/32000 (51%)] Loss: 0.08741 (QuantReg: 10.49801) QuantErr: 10.49801 batch_time=0.64434
Train Epoch: 7 [265/500 16960/32000 (53%)] Loss: 0.11062 (QuantReg: 10.61164) QuantErr: 10.61164 batch_time=0.47633
Train Epoch: 7 [273/500 17472/32000 (55%)] Loss: 0.22483 (QuantReg: 10.70112) QuantErr: 10.70112 batch_time=0.47922
Train Epoch: 7 [281/500 17984/32000 (56%)] Loss: 0.11811 (QuantReg: 10.88297) QuantErr: 10.88297 batch_time=0.44193
Train Epoch: 7 [289/500 18496/32000 (58%)] Loss: 0.13587 (QuantReg: 10.63014) QuantErr: 10.63014 batch_time=0.44572
Train Epoch: 7 [297/500 19008/32000 (59%)] Loss: 0.10072 (QuantReg: 11.32237) QuantErr: 11.32237 batch_time=0.43769
Train Epoch: 7 [305/500 19520/32000 (61%)] Loss: 0.16318 (QuantReg: 10.34722) QuantErr: 10.34722 batch_time=0.48989
Train Epoch: 7 [313/500 20032/32000 (63%)] Loss: 0.13148 (QuantReg: 10.90995) QuantErr: 10.90995 batch_time=0.43819
Train Epoch: 7 [321/500 20544/32000 (64%)] Loss: 0.19340 (QuantReg: 10.81755) QuantErr: 10.81755 batch_time=0.62891
Train Epoch: 7 [329/500 21056/32000 (66%)] Loss: 0.07983 (QuantReg: 10.79370) QuantErr: 10.79370 batch_time=0.47560
Train Epoch: 7 [337/500 21568/32000 (67%)] Loss: 0.06353 (QuantReg: 11.11154) QuantErr: 11.11154 batch_time=0.47347
Train Epoch: 7 [345/500 22080/32000 (69%)] Loss: 0.06114 (QuantReg: 11.28543) QuantErr: 11.28543 batch_time=0.44780
Train Epoch: 7 [353/500 22592/32000 (71%)] Loss: 0.13821 (QuantReg: 10.88274) QuantErr: 10.88274 batch_time=0.44244
Train Epoch: 7 [361/500 23104/32000 (72%)] Loss: 0.13396 (QuantReg: 10.31671) QuantErr: 10.31671 batch_time=0.46069
Train Epoch: 7 [369/500 23616/32000 (74%)] Loss: 0.07312 (QuantReg: 10.65281) QuantErr: 10.65281 batch_time=0.44689
Train Epoch: 7 [377/500 24128/32000 (75%)] Loss: 0.11083 (QuantReg: 10.86351) QuantErr: 10.86351 batch_time=0.44713
Train Epoch: 7 [385/500 24640/32000 (77%)] Loss: 0.18558 (QuantReg: 10.67930) QuantErr: 10.67930 batch_time=0.67042
Train Epoch: 7 [393/500 25152/32000 (79%)] Loss: 0.05921 (QuantReg: 10.80960) QuantErr: 10.80960 batch_time=0.48169
Train Epoch: 7 [401/500 25664/32000 (80%)] Loss: 0.13633 (QuantReg: 10.86471) QuantErr: 10.86471 batch_time=0.47853
Train Epoch: 7 [409/500 26176/32000 (82%)] Loss: 0.05893 (QuantReg: 10.84523) QuantErr: 10.84523 batch_time=0.45585
Train Epoch: 7 [417/500 26688/32000 (83%)] Loss: 0.10689 (QuantReg: 10.84120) QuantErr: 10.84120 batch_time=0.44281
Train Epoch: 7 [425/500 27200/32000 (85%)] Loss: 0.10912 (QuantReg: 10.58088) QuantErr: 10.58088 batch_time=0.45289
Train Epoch: 7 [433/500 27712/32000 (87%)] Loss: 0.11033 (QuantReg: 11.20600) QuantErr: 11.20600 batch_time=0.45147
Train Epoch: 7 [441/500 28224/32000 (88%)] Loss: 0.17868 (QuantReg: 11.03239) QuantErr: 11.03239 batch_time=0.46210
Train Epoch: 7 [449/500 28736/32000 (90%)] Loss: 0.09399 (QuantReg: 11.13415) QuantErr: 11.13415 batch_time=0.68104
Train Epoch: 7 [457/500 29248/32000 (91%)] Loss: 0.11696 (QuantReg: 10.73288) QuantErr: 10.73288 batch_time=0.48094
Train Epoch: 7 [465/500 29760/32000 (93%)] Loss: 0.12474 (QuantReg: 10.71038) QuantErr: 10.71038 batch_time=0.49415
Train Epoch: 7 [473/500 30272/32000 (95%)] Loss: 0.12535 (QuantReg: 10.83793) QuantErr: 10.83793 batch_time=0.45413
Train Epoch: 7 [481/500 30784/32000 (96%)] Loss: 0.14174 (QuantReg: 10.72293) QuantErr: 10.72293 batch_time=0.43990
Train Epoch: 7 [489/500 31296/32000 (98%)] Loss: 0.07962 (QuantReg: 10.76503) QuantErr: 10.76503 batch_time=0.43733
Train Epoch: 7 [497/500 31808/32000 (99%)] Loss: 0.10783 (QuantReg: 10.66794) QuantErr: 10.66794 batch_time=0.45295
Train Epoch: 7 codebook_update_time=1.83324
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch7.pth ...
Done in 3.955s
removing stale ckpt [epoch 6] [took 0.47s]
epoch : 7
loss : 0.12992934752255678
quant_reg : 10.763591983795166
quant_err : 10.763591983795166
learning_rate : 3.0706250000000004e-05
n_samples : 224000
n_steps : 3500
ActivityNet_val1_test/t2v_metrics/R1: 15.110839943054708
ActivityNet_val1_test/t2v_metrics/R5: 43.542810656904614
ActivityNet_val1_test/t2v_metrics/R10: 61.826316859873906
ActivityNet_val1_test/t2v_metrics/R50: 91.39719341061623
ActivityNet_val1_test/t2v_metrics/MedR: 7.0
ActivityNet_val1_test/t2v_metrics/MeanR: 26.011185682326623
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 34.392162168455144
ActivityNet_val1_test/v2t_metrics/R1: 16.94122432377466
ActivityNet_val1_test/v2t_metrics/R5: 45.800284726459225
ActivityNet_val1_test/v2t_metrics/R10: 63.57535082367297
ActivityNet_val1_test/v2t_metrics/R50: 91.88529591214154
ActivityNet_val1_test/v2t_metrics/MedR: 6.0
ActivityNet_val1_test/v2t_metrics/MeanR: 24.925360992475085
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 36.6747570417558
mnt_best : 34.69403986065813
not_improved_count: 1
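At epoch 7 the monitored metric (t2v geometric_mean_R1-R5-R10, 34.392...) did not beat mnt_best (34.694... from epoch 6), so no "Updating 'best' checkpoint" line appears and not_improved_count rises from 0 to 1. A hedged sketch of that bookkeeping, assuming the trainer simply tracks the best value of the monitored metric and counts consecutive non-improving epochs (function and variable names are illustrative, not from the HCQ code):

def update_monitor(current, mnt_best, not_improved_count):
    # returns (new best, new counter, whether the 'best' checkpoint should be refreshed)
    if current > mnt_best:
        return current, 0, True
    return mnt_best, not_improved_count + 1, False

print(update_monitor(34.392162168455144, 34.69403986065813, 0))
# -> (34.69403986065813, 1, False), matching the epoch-7 summary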
Train Epoch: 8 [1/500 64/32000 (0%)] Loss: 0.09685 (QuantReg: 10.45446) QuantErr: 10.45446 batch_time=23.52926
Train Epoch: 8 [9/500 576/32000 (2%)] Loss: 0.17109 (QuantReg: 10.47085) QuantErr: 10.47085 batch_time=0.43912
Train Epoch: 8 [17/500 1088/32000 (3%)] Loss: 0.13079 (QuantReg: 11.10093) QuantErr: 11.10093 batch_time=1.81008
Train Epoch: 8 [25/500 1600/32000 (5%)] Loss: 0.14400 (QuantReg: 10.73205) QuantErr: 10.73205 batch_time=0.43963
Train Epoch: 8 [33/500 2112/32000 (7%)] Loss: 0.12962 (QuantReg: 10.63404) QuantErr: 10.63404 batch_time=0.43894
Train Epoch: 8 [41/500 2624/32000 (8%)] Loss: 0.17311 (QuantReg: 10.72897) QuantErr: 10.72897 batch_time=0.44882
Train Epoch: 8 [49/500 3136/32000 (10%)] Loss: 0.24176 (QuantReg: 10.92377) QuantErr: 10.92377 batch_time=0.44269
Train Epoch: 8 [57/500 3648/32000 (11%)] Loss: 0.11962 (QuantReg: 10.91237) QuantErr: 10.91237 batch_time=0.44291
Train Epoch: 8 [65/500 4160/32000 (13%)] Loss: 0.10775 (QuantReg: 10.93327) QuantErr: 10.93327 batch_time=0.63718
Train Epoch: 8 [73/500 4672/32000 (15%)] Loss: 0.15581 (QuantReg: 11.15445) QuantErr: 11.15445 batch_time=0.43741
Train Epoch: 8 [81/500 5184/32000 (16%)] Loss: 0.17137 (QuantReg: 10.35590) QuantErr: 10.35590 batch_time=1.38468
Train Epoch: 8 [89/500 5696/32000 (18%)] Loss: 0.10975 (QuantReg: 10.60612) QuantErr: 10.60612 batch_time=0.43771
Train Epoch: 8 [97/500 6208/32000 (19%)] Loss: 0.11793 (QuantReg: 10.65908) QuantErr: 10.65908 batch_time=0.45120
Train Epoch: 8 [105/500 6720/32000 (21%)] Loss: 0.10585 (QuantReg: 10.63674) QuantErr: 10.63674 batch_time=0.44202
Train Epoch: 8 [113/500 7232/32000 (23%)] Loss: 0.18324 (QuantReg: 10.85975) QuantErr: 10.85975 batch_time=0.43994
Train Epoch: 8 [121/500 7744/32000 (24%)] Loss: 0.16566 (QuantReg: 10.83301) QuantErr: 10.83301 batch_time=0.44208
Train Epoch: 8 [129/500 8256/32000 (26%)] Loss: 0.17084 (QuantReg: 10.38396) QuantErr: 10.38396 batch_time=0.67615
Train Epoch: 8 [137/500 8768/32000 (27%)] Loss: 0.08042 (QuantReg: 10.67128) QuantErr: 10.67128 batch_time=0.45092
Train Epoch: 8 [145/500 9280/32000 (29%)] Loss: 0.17392 (QuantReg: 10.99837) QuantErr: 10.99837 batch_time=1.63270
Train Epoch: 8 [153/500 9792/32000 (31%)] Loss: 0.14059 (QuantReg: 10.71436) QuantErr: 10.71436 batch_time=0.49904
Train Epoch: 8 [161/500 10304/32000 (32%)] Loss: 0.10058 (QuantReg: 10.56208) QuantErr: 10.56208 batch_time=0.44755
Train Epoch: 8 [169/500 10816/32000 (34%)] Loss: 0.06977 (QuantReg: 11.20388) QuantErr: 11.20388 batch_time=0.46438
Train Epoch: 8 [177/500 11328/32000 (35%)] Loss: 0.17418 (QuantReg: 10.43228) QuantErr: 10.43228 batch_time=0.44718
Train Epoch: 8 [185/500 11840/32000 (37%)] Loss: 0.09744 (QuantReg: 10.58468) QuantErr: 10.58468 batch_time=0.44529
Train Epoch: 8 [193/500 12352/32000 (39%)] Loss: 0.09072 (QuantReg: 10.68986) QuantErr: 10.68986 batch_time=0.65175
Train Epoch: 8 [201/500 12864/32000 (40%)] Loss: 0.08356 (QuantReg: 10.86297) QuantErr: 10.86297 batch_time=0.43817
Train Epoch: 8 [209/500 13376/32000 (42%)] Loss: 0.08190 (QuantReg: 10.87605) QuantErr: 10.87605 batch_time=1.45641
Train Epoch: 8 [217/500 13888/32000 (43%)] Loss: 0.09265 (QuantReg: 10.83875) QuantErr: 10.83875 batch_time=0.44421
Train Epoch: 8 [225/500 14400/32000 (45%)] Loss: 0.12076 (QuantReg: 10.25227) QuantErr: 10.25227 batch_time=0.45150
Train Epoch: 8 [233/500 14912/32000 (47%)] Loss: 0.23078 (QuantReg: 10.56816) QuantErr: 10.56816 batch_time=0.45170
Train Epoch: 8 [241/500 15424/32000 (48%)] Loss: 0.09514 (QuantReg: 10.58318) QuantErr: 10.58318 batch_time=0.44182
Train Epoch: 8 [249/500 15936/32000 (50%)] Loss: 0.07333 (QuantReg: 10.70683) QuantErr: 10.70683 batch_time=0.45027
Train Epoch: 8 [257/500 16448/32000 (51%)] Loss: 0.09858 (QuantReg: 10.54245) QuantErr: 10.54245 batch_time=0.69713
Train Epoch: 8 [265/500 16960/32000 (53%)] Loss: 0.11249 (QuantReg: 10.86164) QuantErr: 10.86164 batch_time=0.45203
Train Epoch: 8 [273/500 17472/32000 (55%)] Loss: 0.07778 (QuantReg: 10.75832) QuantErr: 10.75832 batch_time=1.39641
Train Epoch: 8 [281/500 17984/32000 (56%)] Loss: 0.13890 (QuantReg: 10.84099) QuantErr: 10.84099 batch_time=0.44490
Train Epoch: 8 [289/500 18496/32000 (58%)] Loss: 0.07823 (QuantReg: 10.67339) QuantErr: 10.67339 batch_time=0.44398
Train Epoch: 8 [297/500 19008/32000 (59%)] Loss: 0.11431 (QuantReg: 10.78340) QuantErr: 10.78340 batch_time=0.44630
Train Epoch: 8 [305/500 19520/32000 (61%)] Loss: 0.17722 (QuantReg: 10.98640) QuantErr: 10.98640 batch_time=0.47933
Train Epoch: 8 [313/500 20032/32000 (63%)] Loss: 0.07494 (QuantReg: 11.16432) QuantErr: 11.16432 batch_time=0.46380
Train Epoch: 8 [321/500 20544/32000 (64%)] Loss: 0.14744 (QuantReg: 10.77626) QuantErr: 10.77626 batch_time=0.64787
Train Epoch: 8 [329/500 21056/32000 (66%)] Loss: 0.08522 (QuantReg: 11.05707) QuantErr: 11.05707 batch_time=0.43949
Train Epoch: 8 [337/500 21568/32000 (67%)] Loss: 0.10111 (QuantReg: 10.90172) QuantErr: 10.90172 batch_time=1.41330
Train Epoch: 8 [345/500 22080/32000 (69%)] Loss: 0.08620 (QuantReg: 10.76810) QuantErr: 10.76810 batch_time=0.45203
Train Epoch: 8 [353/500 22592/32000 (71%)] Loss: 0.06339 (QuantReg: 10.84906) QuantErr: 10.84906 batch_time=0.45055
Train Epoch: 8 [361/500 23104/32000 (72%)] Loss: 0.11304 (QuantReg: 10.78549) QuantErr: 10.78549 batch_time=0.48066
Train Epoch: 8 [369/500 23616/32000 (74%)] Loss: 0.22352 (QuantReg: 10.67672) QuantErr: 10.67672 batch_time=0.47850
Train Epoch: 8 [377/500 24128/32000 (75%)] Loss: 0.15618 (QuantReg: 10.79058) QuantErr: 10.79058 batch_time=0.44309
Train Epoch: 8 [385/500 24640/32000 (77%)] Loss: 0.13974 (QuantReg: 10.15665) QuantErr: 10.15665 batch_time=0.66366
Train Epoch: 8 [393/500 25152/32000 (79%)] Loss: 0.13473 (QuantReg: 10.90666) QuantErr: 10.90666 batch_time=0.48045
Train Epoch: 8 [401/500 25664/32000 (80%)] Loss: 0.10541 (QuantReg: 10.58012) QuantErr: 10.58012 batch_time=1.50064
Train Epoch: 8 [409/500 26176/32000 (82%)] Loss: 0.12289 (QuantReg: 10.99899) QuantErr: 10.99899 batch_time=0.45424
Train Epoch: 8 [417/500 26688/32000 (83%)] Loss: 0.08157 (QuantReg: 11.03294) QuantErr: 11.03294 batch_time=0.44373
Train Epoch: 8 [425/500 27200/32000 (85%)] Loss: 0.22861 (QuantReg: 10.83061) QuantErr: 10.83061 batch_time=0.44591
Train Epoch: 8 [433/500 27712/32000 (87%)] Loss: 0.11655 (QuantReg: 11.18982) QuantErr: 11.18982 batch_time=0.44867
Train Epoch: 8 [441/500 28224/32000 (88%)] Loss: 0.10689 (QuantReg: 10.72007) QuantErr: 10.72007 batch_time=0.47740
Train Epoch: 8 [449/500 28736/32000 (90%)] Loss: 0.20061 (QuantReg: 10.38824) QuantErr: 10.38824 batch_time=0.64666
Train Epoch: 8 [457/500 29248/32000 (91%)] Loss: 0.10211 (QuantReg: 11.28168) QuantErr: 11.28168 batch_time=0.44074
Train Epoch: 8 [465/500 29760/32000 (93%)] Loss: 0.07625 (QuantReg: 10.96194) QuantErr: 10.96194 batch_time=1.47083
Train Epoch: 8 [473/500 30272/32000 (95%)] Loss: 0.10806 (QuantReg: 10.73047) QuantErr: 10.73047 batch_time=0.44911
Train Epoch: 8 [481/500 30784/32000 (96%)] Loss: 0.15183 (QuantReg: 10.97155) QuantErr: 10.97155 batch_time=0.45051
Train Epoch: 8 [489/500 31296/32000 (98%)] Loss: 0.12262 (QuantReg: 10.71086) QuantErr: 10.71086 batch_time=0.44408
Train Epoch: 8 [497/500 31808/32000 (99%)] Loss: 0.08480 (QuantReg: 10.63647) QuantErr: 10.63647 batch_time=0.44520
Train Epoch: 8 codebook_update_time=1.71395
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch8.pth ...
Done in 5.079s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch8.pth ...
Done in 10.010s
removing stale ckpt [epoch 7] [took 0.01s]
epoch : 8
loss : 0.12136694414168596
quant_reg : 10.834959915161132
quant_err : 10.834959915161132
learning_rate : 3.0706250000000004e-05
n_samples : 256000
n_steps : 4000
ActivityNet_val1_test/t2v_metrics/R1: 15.61928004881025
ActivityNet_val1_test/t2v_metrics/R5: 45.12914378686191
ActivityNet_val1_test/t2v_metrics/R10: 62.55847061216189
ActivityNet_val1_test/t2v_metrics/R50: 91.41753101484645
ActivityNet_val1_test/t2v_metrics/MedR: 7.0
ActivityNet_val1_test/t2v_metrics/MeanR: 23.870042708968885
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 35.32927642923768
ActivityNet_val1_test/v2t_metrics/R1: 17.9174293268253
ActivityNet_val1_test/v2t_metrics/R5: 45.67825910107789
ActivityNet_val1_test/v2t_metrics/R10: 63.61602603213341
ActivityNet_val1_test/v2t_metrics/R50: 91.90563351637176
ActivityNet_val1_test/v2t_metrics/MedR: 6.0
ActivityNet_val1_test/v2t_metrics/MeanR: 23.475696562944886
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 37.34082652117908
mnt_best : 35.32927642923768
not_improved_count: 0
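The learning_rate values in the epoch summaries (3.6125e-05 at epoch 6, 3.070625e-05 at epochs 7-8, 2.61003125e-05 at epochs 9-10 below) are consistent with a step decay by a factor of 0.85 every two epochs from a base rate of 5e-05. A minimal sketch under that assumption; the actual optimizer/scheduler configuration for this run is not shown in this part of the log:

base_lr, gamma, step_size = 5e-05, 0.85, 2   # assumed values, inferred from the logged rates
for epoch in (6, 7, 8, 9, 10):
    lr = base_lr * gamma ** ((epoch - 1) // step_size)
    print(epoch, lr)
# 6 -> 3.6125e-05, 7/8 -> 3.070625e-05, 9/10 -> 2.61003125e-05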
Train Epoch: 9 [1/500 64/32000 (0%)] Loss: 0.09106 (QuantReg: 10.68149) QuantErr: 10.68149 batch_time=24.39184
Train Epoch: 9 [9/500 576/32000 (2%)] Loss: 0.09507 (QuantReg: 10.90082) QuantErr: 10.90082 batch_time=0.43881
Train Epoch: 9 [17/500 1088/32000 (3%)] Loss: 0.08765 (QuantReg: 10.69934) QuantErr: 10.69934 batch_time=0.42937
Train Epoch: 9 [25/500 1600/32000 (5%)] Loss: 0.15104 (QuantReg: 10.89473) QuantErr: 10.89473 batch_time=0.43995
Train Epoch: 9 [33/500 2112/32000 (7%)] Loss: 0.14866 (QuantReg: 10.82577) QuantErr: 10.82577 batch_time=0.43718
Train Epoch: 9 [41/500 2624/32000 (8%)] Loss: 0.08190 (QuantReg: 10.75660) QuantErr: 10.75660 batch_time=0.52203
Train Epoch: 9 [49/500 3136/32000 (10%)] Loss: 0.11704 (QuantReg: 10.26471) QuantErr: 10.26471 batch_time=0.44582
Train Epoch: 9 [57/500 3648/32000 (11%)] Loss: 0.09522 (QuantReg: 10.58959) QuantErr: 10.58959 batch_time=0.44194
Train Epoch: 9 [65/500 4160/32000 (13%)] Loss: 0.10173 (QuantReg: 10.87994) QuantErr: 10.87994 batch_time=0.57922
Train Epoch: 9 [73/500 4672/32000 (15%)] Loss: 0.12604 (QuantReg: 11.16294) QuantErr: 11.16294 batch_time=0.44991
Train Epoch: 9 [81/500 5184/32000 (16%)] Loss: 0.07974 (QuantReg: 10.70484) QuantErr: 10.70484 batch_time=0.44481
Train Epoch: 9 [89/500 5696/32000 (18%)] Loss: 0.14877 (QuantReg: 10.65412) QuantErr: 10.65412 batch_time=0.44604
Train Epoch: 9 [97/500 6208/32000 (19%)] Loss: 0.17857 (QuantReg: 10.76158) QuantErr: 10.76158 batch_time=0.43861
Train Epoch: 9 [105/500 6720/32000 (21%)] Loss: 0.07012 (QuantReg: 11.04038) QuantErr: 11.04038 batch_time=0.50908
Train Epoch: 9 [113/500 7232/32000 (23%)] Loss: 0.16982 (QuantReg: 10.73767) QuantErr: 10.73767 batch_time=0.44646
Train Epoch: 9 [121/500 7744/32000 (24%)] Loss: 0.07712 (QuantReg: 11.38011) QuantErr: 11.38011 batch_time=0.44530
Train Epoch: 9 [129/500 8256/32000 (26%)] Loss: 0.06107 (QuantReg: 10.72460) QuantErr: 10.72460 batch_time=0.57260
Train Epoch: 9 [137/500 8768/32000 (27%)] Loss: 0.10466 (QuantReg: 10.45342) QuantErr: 10.45342 batch_time=0.44243
Train Epoch: 9 [145/500 9280/32000 (29%)] Loss: 0.08393 (QuantReg: 11.29388) QuantErr: 11.29388 batch_time=0.43672
Train Epoch: 9 [153/500 9792/32000 (31%)] Loss: 0.07062 (QuantReg: 10.98149) QuantErr: 10.98149 batch_time=0.45424
Train Epoch: 9 [161/500 10304/32000 (32%)] Loss: 0.09795 (QuantReg: 10.74881) QuantErr: 10.74881 batch_time=0.44430
Train Epoch: 9 [169/500 10816/32000 (34%)] Loss: 0.09244 (QuantReg: 10.84223) QuantErr: 10.84223 batch_time=0.52759
Train Epoch: 9 [177/500 11328/32000 (35%)] Loss: 0.13637 (QuantReg: 10.85983) QuantErr: 10.85983 batch_time=0.44029
Train Epoch: 9 [185/500 11840/32000 (37%)] Loss: 0.10584 (QuantReg: 11.07413) QuantErr: 11.07413 batch_time=0.44986
Train Epoch: 9 [193/500 12352/32000 (39%)] Loss: 0.16527 (QuantReg: 10.74557) QuantErr: 10.74557 batch_time=0.60231
Train Epoch: 9 [201/500 12864/32000 (40%)] Loss: 0.09735 (QuantReg: 11.10402) QuantErr: 11.10402 batch_time=0.44500
Train Epoch: 9 [209/500 13376/32000 (42%)] Loss: 0.09920 (QuantReg: 10.86981) QuantErr: 10.86981 batch_time=0.49823
Train Epoch: 9 [217/500 13888/32000 (43%)] Loss: 0.11819 (QuantReg: 10.92758) QuantErr: 10.92758 batch_time=0.44484
Train Epoch: 9 [225/500 14400/32000 (45%)] Loss: 0.13248 (QuantReg: 10.42890) QuantErr: 10.42890 batch_time=0.44099
Train Epoch: 9 [233/500 14912/32000 (47%)] Loss: 0.09031 (QuantReg: 10.97729) QuantErr: 10.97729 batch_time=0.50778
Train Epoch: 9 [241/500 15424/32000 (48%)] Loss: 0.09248 (QuantReg: 10.73767) QuantErr: 10.73767 batch_time=0.44354
Train Epoch: 9 [249/500 15936/32000 (50%)] Loss: 0.08170 (QuantReg: 11.07884) QuantErr: 11.07884 batch_time=0.44049
Train Epoch: 9 [257/500 16448/32000 (51%)] Loss: 0.08925 (QuantReg: 10.58916) QuantErr: 10.58916 batch_time=0.57657
Train Epoch: 9 [265/500 16960/32000 (53%)] Loss: 0.10294 (QuantReg: 10.78032) QuantErr: 10.78032 batch_time=0.44060
Train Epoch: 9 [273/500 17472/32000 (55%)] Loss: 0.10066 (QuantReg: 10.93202) QuantErr: 10.93202 batch_time=0.46753
Train Epoch: 9 [281/500 17984/32000 (56%)] Loss: 0.06932 (QuantReg: 11.05924) QuantErr: 11.05924 batch_time=0.45195
Train Epoch: 9 [289/500 18496/32000 (58%)] Loss: 0.07033 (QuantReg: 10.97915) QuantErr: 10.97915 batch_time=0.45253
Train Epoch: 9 [297/500 19008/32000 (59%)] Loss: 0.12030 (QuantReg: 10.68080) QuantErr: 10.68080 batch_time=0.52034
Train Epoch: 9 [305/500 19520/32000 (61%)] Loss: 0.13641 (QuantReg: 10.37909) QuantErr: 10.37909 batch_time=0.45262
Train Epoch: 9 [313/500 20032/32000 (63%)] Loss: 0.15327 (QuantReg: 10.37763) QuantErr: 10.37763 batch_time=0.45334
Train Epoch: 9 [321/500 20544/32000 (64%)] Loss: 0.13051 (QuantReg: 10.77019) QuantErr: 10.77019 batch_time=0.59575
Train Epoch: 9 [329/500 21056/32000 (66%)] Loss: 0.07535 (QuantReg: 11.02718) QuantErr: 11.02718 batch_time=0.45861
Train Epoch: 9 [337/500 21568/32000 (67%)] Loss: 0.11603 (QuantReg: 11.00078) QuantErr: 11.00078 batch_time=0.44826
Train Epoch: 9 [345/500 22080/32000 (69%)] Loss: 0.05171 (QuantReg: 10.85315) QuantErr: 10.85315 batch_time=0.45194
Train Epoch: 9 [353/500 22592/32000 (71%)] Loss: 0.06350 (QuantReg: 10.79515) QuantErr: 10.79515 batch_time=0.48115
Train Epoch: 9 [361/500 23104/32000 (72%)] Loss: 0.08870 (QuantReg: 10.84481) QuantErr: 10.84481 batch_time=0.50583
Train Epoch: 9 [369/500 23616/32000 (74%)] Loss: 0.06684 (QuantReg: 10.95028) QuantErr: 10.95028 batch_time=0.43633
Train Epoch: 9 [377/500 24128/32000 (75%)] Loss: 0.11543 (QuantReg: 10.74112) QuantErr: 10.74112 batch_time=0.43733
Train Epoch: 9 [385/500 24640/32000 (77%)] Loss: 0.10778 (QuantReg: 10.55594) QuantErr: 10.55594 batch_time=0.57837
Train Epoch: 9 [393/500 25152/32000 (79%)] Loss: 0.06935 (QuantReg: 11.12210) QuantErr: 11.12210 batch_time=0.44168
Train Epoch: 9 [401/500 25664/32000 (80%)] Loss: 0.10444 (QuantReg: 10.84229) QuantErr: 10.84229 batch_time=0.43803
Train Epoch: 9 [409/500 26176/32000 (82%)] Loss: 0.07294 (QuantReg: 11.01834) QuantErr: 11.01834 batch_time=0.44216
Train Epoch: 9 [417/500 26688/32000 (83%)] Loss: 0.10828 (QuantReg: 10.70757) QuantErr: 10.70757 batch_time=0.44837
Train Epoch: 9 [425/500 27200/32000 (85%)] Loss: 0.06006 (QuantReg: 10.87035) QuantErr: 10.87035 batch_time=0.50916
Train Epoch: 9 [433/500 27712/32000 (87%)] Loss: 0.07294 (QuantReg: 11.28106) QuantErr: 11.28106 batch_time=0.44097
Train Epoch: 9 [441/500 28224/32000 (88%)] Loss: 0.11700 (QuantReg: 10.79826) QuantErr: 10.79826 batch_time=0.44128
Train Epoch: 9 [449/500 28736/32000 (90%)] Loss: 0.04866 (QuantReg: 10.80731) QuantErr: 10.80731 batch_time=0.58082
Train Epoch: 9 [457/500 29248/32000 (91%)] Loss: 0.06534 (QuantReg: 11.07844) QuantErr: 11.07844 batch_time=0.44077
Train Epoch: 9 [465/500 29760/32000 (93%)] Loss: 0.04289 (QuantReg: 10.78385) QuantErr: 10.78385 batch_time=0.44238
Train Epoch: 9 [473/500 30272/32000 (95%)] Loss: 0.04286 (QuantReg: 10.98940) QuantErr: 10.98940 batch_time=0.44970
Train Epoch: 9 [481/500 30784/32000 (96%)] Loss: 0.07396 (QuantReg: 10.71796) QuantErr: 10.71796 batch_time=0.44988
Train Epoch: 9 [489/500 31296/32000 (98%)] Loss: 0.09072 (QuantReg: 10.70041) QuantErr: 10.70041 batch_time=0.51363
Train Epoch: 9 [497/500 31808/32000 (99%)] Loss: 0.12791 (QuantReg: 10.97866) QuantErr: 10.97866 batch_time=0.43793
Train Epoch: 9 codebook_update_time=1.64927
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch9.pth ...
Done in 4.871s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch9.pth ...
Done in 9.850s
removing stale ckpt [epoch 8] [took 0.01s]
epoch : 9
loss : 0.10399316986650228
quant_reg : 10.813135730743408
quant_err : 10.813135730743408
learning_rate : 2.6100312500000002e-05
n_samples : 288000
n_steps : 4500
ActivityNet_val1_test/t2v_metrics/R1: 17.144600366076876
ActivityNet_val1_test/t2v_metrics/R5: 46.329062436444985
ActivityNet_val1_test/t2v_metrics/R10: 64.22615415904006
ActivityNet_val1_test/t2v_metrics/R50: 91.60056945291845
ActivityNet_val1_test/t2v_metrics/MedR: 6.0
ActivityNet_val1_test/t2v_metrics/MeanR: 25.2019524100061
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 37.0877879942367
ActivityNet_val1_test/v2t_metrics/R1: 17.77506609721375
ActivityNet_val1_test/v2t_metrics/R5: 47.691681919869836
ActivityNet_val1_test/v2t_metrics/R10: 65.34472239170226
ActivityNet_val1_test/v2t_metrics/R50: 92.27171039251576
ActivityNet_val1_test/v2t_metrics/MedR: 6.0
ActivityNet_val1_test/v2t_metrics/MeanR: 24.140736221273134
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 38.12016690271447
mnt_best : 37.0877879942367
not_improved_count: 0
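The bracketed progress fields in the "Train Epoch" lines all derive from a batch size of 64 and 500 iterations (32000 samples) per epoch, which also explains the cumulative n_steps and n_samples in the summaries (epoch 9: 9 * 500 = 4500 steps, 9 * 32000 = 288000 samples). A small sketch reconstructing the progress string; the exact formatting is an assumption read off the log, not code from the repository:

batch_size, steps_per_epoch = 64, 500
samples_per_epoch = batch_size * steps_per_epoch          # 32000
for step in (1, 249, 497):
    seen = step * batch_size
    pct = 100.0 * step / steps_per_epoch
    print(f"[{step}/{steps_per_epoch} {seen}/{samples_per_epoch} ({pct:.0f}%)]")
# [1/500 64/32000 (0%)], [249/500 15936/32000 (50%)], [497/500 31808/32000 (99%)]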
Train Epoch: 10 [1/500 64/32000 (0%)] Loss: 0.05149 (QuantReg: 10.73204) QuantErr: 10.73204 batch_time=23.40699
Train Epoch: 10 [9/500 576/32000 (2%)] Loss: 0.09443 (QuantReg: 10.41991) QuantErr: 10.41991 batch_time=0.48021
Train Epoch: 10 [17/500 1088/32000 (3%)] Loss: 0.06597 (QuantReg: 10.75257) QuantErr: 10.75257 batch_time=0.45897
Train Epoch: 10 [25/500 1600/32000 (5%)] Loss: 0.06519 (QuantReg: 10.87070) QuantErr: 10.87070 batch_time=0.71872
Train Epoch: 10 [33/500 2112/32000 (7%)] Loss: 0.06119 (QuantReg: 11.10855) QuantErr: 11.10855 batch_time=0.69109
Train Epoch: 10 [41/500 2624/32000 (8%)] Loss: 0.12472 (QuantReg: 10.69189) QuantErr: 10.69189 batch_time=0.44351
Train Epoch: 10 [49/500 3136/32000 (10%)] Loss: 0.17063 (QuantReg: 10.70940) QuantErr: 10.70940 batch_time=0.43709
Train Epoch: 10 [57/500 3648/32000 (11%)] Loss: 0.09168 (QuantReg: 10.73877) QuantErr: 10.73877 batch_time=0.44645
Train Epoch: 10 [65/500 4160/32000 (13%)] Loss: 0.08872 (QuantReg: 10.97080) QuantErr: 10.97080 batch_time=0.79621
Train Epoch: 10 [73/500 4672/32000 (15%)] Loss: 0.14161 (QuantReg: 10.62375) QuantErr: 10.62375 batch_time=0.44404
Train Epoch: 10 [81/500 5184/32000 (16%)] Loss: 0.05421 (QuantReg: 10.71059) QuantErr: 10.71059 batch_time=0.44428
Train Epoch: 10 [89/500 5696/32000 (18%)] Loss: 0.06604 (QuantReg: 10.55017) QuantErr: 10.55017 batch_time=0.72223
Train Epoch: 10 [97/500 6208/32000 (19%)] Loss: 0.07207 (QuantReg: 10.65330) QuantErr: 10.65330 batch_time=0.68594
Train Epoch: 10 [105/500 6720/32000 (21%)] Loss: 0.04702 (QuantReg: 11.13562) QuantErr: 11.13562 batch_time=0.44006
Train Epoch: 10 [113/500 7232/32000 (23%)] Loss: 0.12587 (QuantReg: 10.82412) QuantErr: 10.82412 batch_time=0.44919
Train Epoch: 10 [121/500 7744/32000 (24%)] Loss: 0.08161 (QuantReg: 10.64907) QuantErr: 10.64907 batch_time=0.43988
Train Epoch: 10 [129/500 8256/32000 (26%)] Loss: 0.07734 (QuantReg: 10.65249) QuantErr: 10.65249 batch_time=0.78503
Train Epoch: 10 [137/500 8768/32000 (27%)] Loss: 0.07925 (QuantReg: 10.59092) QuantErr: 10.59092 batch_time=0.44032
Train Epoch: 10 [145/500 9280/32000 (29%)] Loss: 0.07130 (QuantReg: 10.57823) QuantErr: 10.57823 batch_time=0.43675
Train Epoch: 10 [153/500 9792/32000 (31%)] Loss: 0.11757 (QuantReg: 11.03039) QuantErr: 11.03039 batch_time=0.76198
Train Epoch: 10 [161/500 10304/32000 (32%)] Loss: 0.09744 (QuantReg: 10.73986) QuantErr: 10.73986 batch_time=0.69039
Train Epoch: 10 [169/500 10816/32000 (34%)] Loss: 0.12236 (QuantReg: 10.59127) QuantErr: 10.59127 batch_time=0.43958
Train Epoch: 10 [177/500 11328/32000 (35%)] Loss: 0.08881 (QuantReg: 10.64599) QuantErr: 10.64599 batch_time=0.43921
Train Epoch: 10 [185/500 11840/32000 (37%)] Loss: 0.15111 (QuantReg: 10.73991) QuantErr: 10.73991 batch_time=0.45068
Train Epoch: 10 [193/500 12352/32000 (39%)] Loss: 0.16472 (QuantReg: 11.05294) QuantErr: 11.05294 batch_time=0.78517
Train Epoch: 10 [201/500 12864/32000 (40%)] Loss: 0.05892 (QuantReg: 10.98078) QuantErr: 10.98078 batch_time=0.44231
Train Epoch: 10 [209/500 13376/32000 (42%)] Loss: 0.09886 (QuantReg: 10.77879) QuantErr: 10.77879 batch_time=0.44022
Train Epoch: 10 [217/500 13888/32000 (43%)] Loss: 0.03829 (QuantReg: 10.87745) QuantErr: 10.87745 batch_time=0.73083
Train Epoch: 10 [225/500 14400/32000 (45%)] Loss: 0.08682 (QuantReg: 10.86128) QuantErr: 10.86128 batch_time=0.70132
Train Epoch: 10 [233/500 14912/32000 (47%)] Loss: 0.11240 (QuantReg: 10.78337) QuantErr: 10.78337 batch_time=0.44120
Train Epoch: 10 [241/500 15424/32000 (48%)] Loss: 0.12175 (QuantReg: 10.95870) QuantErr: 10.95870 batch_time=0.44931
Train Epoch: 10 [249/500 15936/32000 (50%)] Loss: 0.11883 (QuantReg: 10.96865) QuantErr: 10.96865 batch_time=0.44894
Train Epoch: 10 [257/500 16448/32000 (51%)] Loss: 0.14479 (QuantReg: 10.94016) QuantErr: 10.94016 batch_time=0.82439
Train Epoch: 10 [265/500 16960/32000 (53%)] Loss: 0.07361 (QuantReg: 10.75476) QuantErr: 10.75476 batch_time=0.44478
Train Epoch: 10 [273/500 17472/32000 (55%)] Loss: 0.05422 (QuantReg: 10.74598) QuantErr: 10.74598 batch_time=0.45927
Train Epoch: 10 [281/500 17984/32000 (56%)] Loss: 0.09373 (QuantReg: 10.72189) QuantErr: 10.72189 batch_time=0.76823
Train Epoch: 10 [289/500 18496/32000 (58%)] Loss: 0.09910 (QuantReg: 10.65749) QuantErr: 10.65749 batch_time=0.72000
Train Epoch: 10 [297/500 19008/32000 (59%)] Loss: 0.10811 (QuantReg: 10.85019) QuantErr: 10.85019 batch_time=0.45080
Train Epoch: 10 [305/500 19520/32000 (61%)] Loss: 0.08857 (QuantReg: 10.71196) QuantErr: 10.71196 batch_time=0.44272
Train Epoch: 10 [313/500 20032/32000 (63%)] Loss: 0.11055 (QuantReg: 10.95957) QuantErr: 10.95957 batch_time=0.44300
Train Epoch: 10 [321/500 20544/32000 (64%)] Loss: 0.06749 (QuantReg: 10.70523) QuantErr: 10.70523 batch_time=0.78498
Train Epoch: 10 [329/500 21056/32000 (66%)] Loss: 0.05952 (QuantReg: 10.57877) QuantErr: 10.57877 batch_time=0.43975
Train Epoch: 10 [337/500 21568/32000 (67%)] Loss: 0.06293 (QuantReg: 10.66597) QuantErr: 10.66597 batch_time=0.43573
Train Epoch: 10 [345/500 22080/32000 (69%)] Loss: 0.09006 (QuantReg: 10.52944) QuantErr: 10.52944 batch_time=0.72189
Train Epoch: 10 [353/500 22592/32000 (71%)] Loss: 0.09162 (QuantReg: 10.76768) QuantErr: 10.76768 batch_time=0.68209
Train Epoch: 10 [361/500 23104/32000 (72%)] Loss: 0.08092 (QuantReg: 10.43036) QuantErr: 10.43036 batch_time=0.44027
Train Epoch: 10 [369/500 23616/32000 (74%)] Loss: 0.07153 (QuantReg: 11.09867) QuantErr: 11.09867 batch_time=0.50372
Train Epoch: 10 [377/500 24128/32000 (75%)] Loss: 0.12795 (QuantReg: 11.07561) QuantErr: 11.07561 batch_time=0.43969
Train Epoch: 10 [385/500 24640/32000 (77%)] Loss: 0.12610 (QuantReg: 10.82122) QuantErr: 10.82122 batch_time=0.76593
Train Epoch: 10 [393/500 25152/32000 (79%)] Loss: 0.12302 (QuantReg: 10.39358) QuantErr: 10.39358 batch_time=0.43389
Train Epoch: 10 [401/500 25664/32000 (80%)] Loss: 0.05929 (QuantReg: 10.67045) QuantErr: 10.67045 batch_time=0.44232
Train Epoch: 10 [409/500 26176/32000 (82%)] Loss: 0.08939 (QuantReg: 10.53979) QuantErr: 10.53979 batch_time=0.70414
Train Epoch: 10 [417/500 26688/32000 (83%)] Loss: 0.08417 (QuantReg: 10.86459) QuantErr: 10.86459 batch_time=0.68700
Train Epoch: 10 [425/500 27200/32000 (85%)] Loss: 0.11103 (QuantReg: 10.39026) QuantErr: 10.39026 batch_time=0.44929
Train Epoch: 10 [433/500 27712/32000 (87%)] Loss: 0.05365 (QuantReg: 10.75894) QuantErr: 10.75894 batch_time=0.44835
Train Epoch: 10 [441/500 28224/32000 (88%)] Loss: 0.06650 (QuantReg: 10.97997) QuantErr: 10.97997 batch_time=0.44664
Train Epoch: 10 [449/500 28736/32000 (90%)] Loss: 0.07434 (QuantReg: 10.51767) QuantErr: 10.51767 batch_time=0.81698
Train Epoch: 10 [457/500 29248/32000 (91%)] Loss: 0.17696 (QuantReg: 10.55169) QuantErr: 10.55169 batch_time=0.43746
Train Epoch: 10 [465/500 29760/32000 (93%)] Loss: 0.04616 (QuantReg: 10.54434) QuantErr: 10.54434 batch_time=0.45251
Train Epoch: 10 [473/500 30272/32000 (95%)] Loss: 0.11838 (QuantReg: 10.94390) QuantErr: 10.94390 batch_time=0.71323
Train Epoch: 10 [481/500 30784/32000 (96%)] Loss: 0.10361 (QuantReg: 11.03999) QuantErr: 11.03999 batch_time=0.68261
Train Epoch: 10 [489/500 31296/32000 (98%)] Loss: 0.05924 (QuantReg: 10.52190) QuantErr: 10.52190 batch_time=0.44318
Train Epoch: 10 [497/500 31808/32000 (99%)] Loss: 0.07322 (QuantReg: 10.72706) QuantErr: 10.72706 batch_time=0.44131
Train Epoch: 10 codebook_update_time=1.81380
Saving checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch10.pth ...
Done in 4.761s
Updating 'best' checkpoint: /apdcephfs/share_47076/gimwang/HCQ/exps/HCQ_ActivityNet_bs64/checkpoint-epoch10.pth ...
Done in 9.999s
removing stale ckpt [epoch 9] [took 0.01s]
epoch : 10
loss : 0.09655615510046482
quant_reg : 10.782334804534912
quant_err : 10.782334804534912
learning_rate : 2.6100312500000002e-05
n_samples : 320000
n_steps : 5000
ActivityNet_val1_test/t2v_metrics/R1: 17.754728492983528
ActivityNet_val1_test/t2v_metrics/R5: 46.12568639414277
ActivityNet_val1_test/t2v_metrics/R10: 64.51088061826317
ActivityNet_val1_test/t2v_metrics/R50: 91.35651820215578
ActivityNet_val1_test/t2v_metrics/MedR: 6.0
ActivityNet_val1_test/t2v_metrics/MeanR: 26.294285133211307
ActivityNet_val1_test/t2v_metrics/geometric_mean_R1-R5-R10: 37.52291900390766
ActivityNet_val1_test/v2t_metrics/R1: 17.876754118364858
ActivityNet_val1_test/v2t_metrics/R5: 48.32214765100671
ActivityNet_val1_test/v2t_metrics/R10: 66.07687614399023
ActivityNet_val1_test/v2t_metrics/R50: 91.72259507829978
ActivityNet_val1_test/v2t_metrics/MedR: 6.0
ActivityNet_val1_test/v2t_metrics/MeanR: 24.954443766524303
ActivityNet_val1_test/v2t_metrics/geometric_mean_R1-R5-R10: 38.503020388083
mnt_best : 37.52291900390766
not_improved_count: 0
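For reference, the retrieval numbers reported in each summary (R@K, MedR, MeanR) follow the usual definitions over the rank of the ground-truth item for every query. A generic sketch of those definitions, given here only as a reading aid and not taken from the HCQ repository:

import numpy as np

def retrieval_metrics(ranks):                       # 1-based rank of the correct item per query
    ranks = np.asarray(ranks, dtype=np.float64)
    out = {f"R{k}": 100.0 * float(np.mean(ranks <= k)) for k in (1, 5, 10, 50)}
    out["MedR"] = float(np.median(ranks))           # median rank
    out["MeanR"] = float(np.mean(ranks))            # mean rank
    return out

print(retrieval_metrics([1, 2, 3, 7, 120]))         # toy ranks, not data from this log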
Train Epoch: 11 [1/500 64/32000 (0%)] Loss: 0.06201 (QuantReg: 11.07409) QuantErr: 11.07409 batch_time=23.97259
Train Epoch: 11 [9/500 576/32000 (2%)] Loss: 0.08768 (QuantReg: 11.10478) QuantErr: 11.10478 batch_time=0.45020
Train Epoch: 11 [17/500 1088/32000 (3%)] Loss: 0.10756 (QuantReg: 10.54798) QuantErr: 10.54798 batch_time=0.44659
Train Epoch: 11 [25/500 1600/32000 (5%)] Loss: 0.14185 (QuantReg: 10.68512) QuantErr: 10.68512 batch_time=0.44891
Train Epoch: 11 [33/500 2112/32000 (7%)] Loss: 0.04992 (QuantReg: 10.70291) QuantErr: 10.70291 batch_time=0.49086
Train Epoch: 11 [41/500 2624/32000 (8%)] Loss: 0.11141 (QuantReg: 10.48239) QuantErr: 10.48239 batch_time=0.44590
Train Epoch: 11 [49/500 3136/32000 (10%)] Loss: 0.12479 (QuantReg: 10.46276) QuantErr: 10.46276 batch_time=0.44237
Train Epoch: 11 [57/500 3648/32000 (11%)] Loss: 0.10682 (QuantReg: 10.51935) QuantErr: 10.51935 batch_time=0.43908
Train Epoch: 11 [65/500 4160/32000 (13%)] Loss: 0.12314 (QuantReg: 10.80655) QuantErr: 10.80655 batch_time=0.52508
Train Epoch: 11 [73/500 4672/32000 (15%)] Loss: 0.07095 (QuantReg: 10.47108) QuantErr: 10.47108 batch_time=0.44690
Train Epoch: 11 [81/500 5184/32000 (16%)] Loss: 0.05180 (QuantReg: 10.85573) QuantErr: 10.85573 batch_time=0.44484
Train Epoch: 11 [89/500 5696/32000 (18%)] Loss: 0.06939 (QuantReg: 10.72767) QuantErr: 10.72767 batch_time=0.45172
Train Epoch: 11 [97/500 6208/32000 (19%)] Loss: 0.07486 (QuantReg: 10.50472) QuantErr: 10.50472 batch_time=0.50706
Train Epoch: 11 [105/500 6720/32000 (21%)] Loss: 0.04550 (QuantReg: 10.76776) QuantErr: 10.76776 batch_time=0.47172
Train Epoch: 11 [113/500 7232/32000 (23%)] Loss: 0.10694 (QuantReg: 10.65403) QuantErr: 10.65403 batch_time=0.44052
Train Epoch: 11 [121/500 7744/32000 (24%)] Loss: 0.14790 (QuantReg: 10.55584) QuantErr: 10.55584 batch_time=0.45025
Train Epoch: 11 [129/500 8256/32000 (26%)] Loss: 0.15121 (QuantReg: 10.57924) QuantErr: 10.57924 batch_time=0.50819
Train Epoch: 11 [137/500 8768/32000 (27%)] Loss: 0.11860 (QuantReg: 10.60624) QuantErr: 10.60624 batch_time=0.44378
Train Epoch: 11 [145/500 9280/32000 (29%)] Loss: 0.09421 (QuantReg: 10.92990) QuantErr: 10.92990 batch_time=0.48876
Train Epoch: 11 [153/500 9792/32000 (31%)] Loss: 0.12405 (QuantReg: 10.51924) QuantErr: 10.51924 batch_time=0.44270
Train Epoch: 11 [161/500 10304/32000 (32%)] Loss: 0.10365 (QuantReg: 10.84255) QuantErr: 10.84255 batch_time=0.47160
Train Epoch: 11 [169/500 10816/32000 (34%)] Loss: 0.04925 (QuantReg: 10.86308) QuantErr: 10.86308 batch_time=0.46726
Train Epoch: 11 [177/500 11328/32000 (35%)] Loss: 0.11291 (QuantReg: 10.72070) QuantErr: 10.72070 batch_time=0.44105
Train Epoch: 11 [185/500 11840/32000 (37%)] Loss: 0.04614 (QuantReg: 10.58689) QuantErr: 10.58689 batch_time=0.45003
Train Epoch: 11 [193/500 12352/32000 (39%)] Loss: 0.09663 (QuantReg: 10.87005) QuantErr: 10.87005 batch_time=0.51916
Train Epoch: 11 [201/500 12864/32000 (40%)] Loss: 0.07930 (QuantReg: 10.63771) QuantErr: 10.63771 batch_time=0.45142
Train Epoch: 11 [209/500 13376/32000 (42%)] Loss: 0.06329 (QuantReg: 11.07346) QuantErr: 11.07346 batch_time=0.45824
Train Epoch: 11 [217/500 13888/32000 (43%)] Loss: 0.05822 (QuantReg: 10.94934) QuantErr: 10.94934 batch_time=0.44197
Train Epoch: 11 [225/500 14400/32000 (45%)] Loss: 0.04746 (QuantReg: 10.65034) QuantErr: 10.65034 batch_time=0.47571
Train Epoch: 11 [233/500 14912/32000 (47%)] Loss: 0.07678 (QuantReg: 10.95339) QuantErr: 10.95339 batch_time=0.44242
Train Epoch: 11 [241/500 15424/32000 (48%)] Loss: 0.12976 (QuantReg: 10.56915) QuantErr: 10.56915 batch_time=0.49051
Train Epoch: 11 [249/500 15936/32000 (50%)] Loss: 0.07648 (QuantReg: 10.84206) QuantErr: 10.84206 batch_time=0.44055
Train Epoch: 11 [257/500 16448/32000 (51%)] Loss: 0.05494 (QuantReg: 10.81332) QuantErr: 10.81332 batch_time=0.51428
Train Epoch: 11 [265/500 16960/32000 (53%)] Loss: 0.05442 (QuantReg: 10.40579) QuantErr: 10.40579 batch_time=0.44274
Train Epoch: 11 [273/500 17472/32000 (55%)] Loss: 0.09093 (QuantReg: 10.59411) QuantErr: 10.59411 batch_time=0.44095
Train Epoch: 11 [281/500 17984/32000 (56%)] Loss: 0.08954 (QuantReg: 10.75133) QuantErr: 10.75133 batch_time=0.44872
Train Epoch: 11 [289/500 18496/32000 (58%)] Loss: 0.11268 (QuantReg: 10.61660) QuantErr: 10.61660 batch_time=0.47971
Train Epoch: 11 [297/500 19008/32000 (59%)] Loss: 0.15072 (QuantReg: 11.03575) QuantErr: 11.03575 batch_time=0.44464
Train Epoch: 11 [305/500 19520/32000 (61%)] Loss: 0.08495 (QuantReg: 10.83414) QuantErr: 10.83414 batch_time=0.47037
Train Epoch: 11 [313/500 20032/32000 (63%)] Loss: 0.04559 (QuantReg: 10.79020) QuantErr: 10.79020 batch_time=0.45332
Train Epoch: 11 [321/500 20544/32000 (64%)] Loss: 0.15620 (QuantReg: 10.51727) QuantErr: 10.51727 batch_time=0.53032
Train Epoch: 11 [329/500 21056/32000 (66%)] Loss: 0.07820 (QuantReg: 10.75639) QuantErr: 10.75639 batch_time=0.44451
Train Epoch: 11 [337/500 21568/32000 (67%)] Loss: 0.06514 (QuantReg: 11.04034) QuantErr: 11.04034 batch_time=0.44513
Train Epoch: 11 [345/500 22080/32000 (69%)] Loss: 0.08509 (QuantReg: 10.75801) QuantErr: 10.75801 batch_time=0.44376
Train Epoch: 11 [353/500 22592/32000 (71%)] Loss: 0.05984 (QuantReg: 10.93200) QuantErr: 10.93200 batch_time=0.47689
Train Epoch: 11 [361/500 23104/32000 (72%)] Loss: 0.07882 (QuantReg: 11.08491) QuantErr: 11.08491 batch_time=0.45108
Train Epoch: 11 [369/500 23616/32000 (74%)] Loss: 0.07266 (QuantReg: 10.65925) QuantErr: 10.65925 batch_time=0.46258
Train Epoch: 11 [377/500 24128/32000 (75%)] Loss: 0.15229 (QuantReg: 10.63134) QuantErr: 10.63134 batch_time=0.46952