<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" href="css/fonts_import.css" type="text/css" />
<link rel="stylesheet" href="css/cs.css" type="text/css" />
<link rel="stylesheet" href="css/content.css" type="text/css" />
<!-- font family -->
<link href="//netdna.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.css" rel="stylesheet" />
<link rel="stylesheet" href="https://cdn.rawgit.com/jpswalsh/academicons/master/css/academicons.min.css">
<link rel="stylesheet" href="css/full_publication.css" type="text/css" />
<title>Chunhua Shen</title>
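<!-- The \(\cdot\) list markers below are MathJax inline-math delimiters and render
     literally unless MathJax is loaded. If the remainder of this file does not
     already include it, a standard include (assumed here: the public MathJax v3
     CDN bundle, whose default TeX input recognises \( ... \) delimiters) would be: -->
<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js" async="async"></script>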
</head>
<body>
<div id="layout-content">
<div id="menu">
<div id="menucontainer">
<ul id="nav">
<li><a href="index.html" target="_self">Home</a></li>
<li><a href="paper.html" target="_self">Publications</a></li>
<li><a href="teaching.html" target="_self">Teaching</a></li>
</ul>
</div>
</div>
<div id="toptitle">
<h1>Publications (Full List)</h1>
<div id="subtitle">Categorised <a href="fullpaper2.html" target=“blank”>by venue <i class='fa fa-location-arrow' aria-hidden='true'></i></a>, <a href="fullpaper.html" target=“blank”>by year <i class='fa fa-clock-o' aria-hidden='true'></i></a>. <b>446</b> papers.
</div>
</div>
<p><a href="https://scholar.google.com/citations?hl=en&user=Ljk2BvIAAAAJ&view_op=list_works&pagesize=100" target=“blank”>Google scholar (79159 citations) <i class='ai ai-google-scholar' aria-hidden='true'></i></a>,
<a href="https://dblp.org/pid/56/1673.html" target=“blank”>DBLP <i class='ai ai-dblp ai-1x'></i></a>,
<a href="https://arxiv.org/a/shen_c_1.html" target=“blank”>arXiv <i class='ai ai-arxiv ai-1x'></i></a>.
</p>
<p><div id="citation_plot_holder"></div>
</p>
<h2>Journal</h2>
<ol reversed="reversed">
<li><p><b>Scaling up multi-domain semantic segmentation with sentence embeddings</b>
<br />\(\cdot\) <i>W. Yin, Y. Liu, C. Shen, B. Sun, A. van den Hengel</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2024</i>.
<br />\(\cdot\) <a href="data/bibtex/Yin2024SIW.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Scaling+Up+Multi-domain+Semantic+Segmentation+with+Sentence+Embeddings+Yin,+Wei+and+Liu,+Yifan+and+Shen,+Chunhua+and+Sun,+Baichuan+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>Towards robust monocular depth estimation: a new baseline and benchmark</b>
<br />\(\cdot\) <i>K. Xian, Z. Cao, C. Shen, G. Lin</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2024</i>.
<br />\(\cdot\) <a href="data/bibtex/Depth2024Xian.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+robust+monocular+depth+estimation:+a+new+baseline+and+benchmark+Xian,+Ke+and+Cao,+Zhiguo+and+Shen,+Chunhua+and+Lin,+Guosheng" target=“blank”>search</a>
</p>
</li>
<li><p><b>End-to-end video text spotting with Transformer</b>
<br />\(\cdot\) <i>W. Wu, C. Shen, Y. Cai, D. Zhang, Y. Fu, P. Luo, H. Zhou</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2024</i>.
<br />\(\cdot\) <a href="data/bibtex/Wu2022transdetr.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=End-to-End+Video+Text+Spotting+with+{T}ransformer+Wu,+Weijia+and+Shen,+Chunhua+and+Cai,+Yuanqiang+and+Zhang,+Debing+and+Fu,+Ying+and+Luo,+Ping+and+Zhou,+Hong" target=“blank”>search</a>
</p>
</li>
<li><p><b>Masked channel modeling for bootstrapping visual pre-training</b>
<br />\(\cdot\) <i>Y. Liu, X. Wang, M. Zhu, Y. Cao, T. Huang, C. Shen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2024</i>.
<br />\(\cdot\) <a href="data/bibtex/Liuyang2024Masked.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Masked+Channel+Modeling+for+Bootstrapping+Visual+Pre-training+Liu,+Yang+and+Wang,+Xinlong+and+Zhu,+Muzhi+and+Cao,+Yue+and+Huang,+Tiejun+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>Target before shooting: accurate anomaly detection and localization under one millisecond via cascade patch retrieval</b>
<br />\(\cdot\) <i>H. Li, J. Hu, B. Li, H. Chen, Y. Zheng, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2024</i>.
<br />\(\cdot\) <a href="data/bibtex/Li2024TIP.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Target+before+Shooting:+Accurate+Anomaly+Detection+and+Localization+under+One+Millisecond+via+Cascade+Patch+Retrieval+Li,+Hanxi+and+Hu,+Jianfei+and+Li,+Bo+and+Chen,+Hao+and+Zheng,+Yongbin+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>Self-supervised 3D scene flow estimation and motion prediction using local rigidity prior</b>
<br />\(\cdot\) <i>R. Li, C. Zhang, Z. Wang, C. Shen, G. Lin</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2310.11284" target=“blank”>arXiv</a><a href="data/bibtex/Ruibo2024TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Self-Supervised+3D+Scene+Flow+Estimation+and+Motion+Prediction+using+Local+Rigidity+Prior+Li,+Ruibo+and+Zhang,+Chi+and+Wang,+Zhe+and+Shen,+Chunhua+and+Lin,+Guosheng" target=“blank”>search</a>
</p>
</li>
<li><p><b>Metric3D v2: a versatile monocular geometric foundation model for zero-shot metric depth and surface normal estimation</b>
<br />\(\cdot\) <i>M. Hu, W. Yin, C. Zhang, Z. Cai, X. Long, H. Chen, K. Wang, G. Yu, C. Shen, S. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2404.15506" target=“blank”>arXiv</a><a href="data/bibtex/hu2024metric3dv2.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={Metric3D}+v2:+A+Versatile+Monocular+Geometric+Foundation+Model+for+Zero-shot+Metric+Depth+and+Surface+Normal+Estimation+Hu,+Mu+and+Yin,+Wei+and+Zhang,+Chi+and+Cai,+Zhipeng+and+Long,+Xiaoxiao+and+Chen,+Hao+and+Wang,+Kaixuan+and+Yu,+Gang+and+Shen,+Chunhua+and+Shen,+Shaojie" target=“blank”>search</a><a href="https://jugghm.github.io/Metric3Dv2/" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhang2023SegVITv2xxxarXiv.jpg"><b>SegViT v2: exploring efficient and continual semantic segmentation with plain vision transformers</b>
<br />\(\cdot\) <i>B. Zhang, L. Liu, M. Phan, Z. Tian, C. Shen, Y. Liu</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2306.06289" target=“blank”>arXiv</a><a href="data/bibtex/Zhang2023SegVITv2.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SegViT}+v2:+Exploring+Efficient+and+Continual+Semantic+Segmentation+with+Plain+Vision+Transformers+Zhang,+Bowen+and+Liu,+Liyang+and+Phan,+Minh+Hieu+and+Tian,+Zhi+and+Shen,+Chunhua+and+Liu,+Yifan" target=“blank”>search</a><a href="https://github.com/zbwxp/SegVit" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>SPL-Net: spatial-semantic patch learning network for facial attribute recognition with limited labeled data</b>
<br />\(\cdot\) <i>Y. Yan, Y. Shu, S. Chen, J. Xue, C. Shen, H. Wang</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2023</i>.
<br />\(\cdot\) <a href="data/bibtex/YAN2023IJCVSPL.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SPL-Net}:+Spatial-Semantic+Patch+Learning+Network+for+Facial+Attribute+Recognition+with+Limited+Labeled+Data+Yan,+Yan+and+Shu,+Ying+and+Chen,+Si+and+Xue,+Jing-Hao+and+Shen,+Chunhua+and+Wang,+Hanzi" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Lu2023IJCVCountingxxxarXiv.jpg"><b>From open set to closed set: supervised spatial divide-and-conquer for object counting</b>
<br />\(\cdot\) <i>H. Xiong, H. Lu, C. Liu, L. Liu, C. Shen, Z. Cao</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2001.01886" target=“blank”>arXiv</a><a href="data/bibtex/Lu2023IJCVCounting.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=From+Open+Set+to+Closed+Set:+Supervised+Spatial+Divide-and-Conquer+for+Object+Counting+Xiong,+Haipeng+and+Lu,+Hao+and+Liu,+Chengxin+and+Liu,+Liang+and+Shen,+Chunhua+and+Cao,+Zhiguo" target=“blank”>search</a>
</p>
</li>
<li><p><b>A dynamic feature interaction framework for multi-task visual perception</b>
<br />\(\cdot\) <i>Y. Xi, H. Chen, N. Wang, P. Wang, Y. Zhang, C. Shen, Y. Liu</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2023</i>.
<br />\(\cdot\) <a href="data/bibtex/XiY2023IJCV.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Dynamic+Feature+Interaction+Framework+for+Multi-task+Visual+Perception+Xi,+Yuling+and+Chen,+Hao+and+Wang,+Ning+and+Wang,+Peng+and+Zhang,+Yanning+and+Shen,+Chunhua+and+Liu,+Yifan" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Lin2023IJCVSuperxxxarXiv.jpg"><b>Super vision transformer</b>
<br />\(\cdot\) <i>M. Lin, M. Chen, Y. Zhang, C. Shen, R. Ji, L. Cao</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2205.11397" target=“blank”>arXiv</a><a href="data/bibtex/Lin2023IJCVSuper.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Super+Vision+Transformer+Lin,+Mingbao+and+Chen,+Mengzhao+and+Zhang,+Yuxin+and+Shen,+Chunhua+and+Ji,+Rongrong+and+Cao,+Liujuan" target=“blank”>search</a><a href="https://github.com/lmbxmu/SuperViT" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>SAI: an efficient and user-friendly tool for measurement of stomatal pores and density using deep computer vision</b>
<br />\(\cdot\) <i>N. Sai, J. Bockman, H. Chen, N. Watson-Haigh, B. Xu, X. Feng, A. Piechatzek, C. Shen, M. Gilliham</i>.
<br />\(\cdot\) <i>New Phytologist (NPH), 2023</i>.
<br />\(\cdot\) <a href="https://doi.org/10.1101/2022.02.07.479482" target=“blank”>link</a><a href="data/bibtex/Sai2023NPJ.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SAI}:+An+efficient+and+user-friendly+tool+for+measurement+of+stomatal+pores+and+density+using+deep+computer+vision+Sai,+Na+and+Bockman,+James+Paul+and+Chen,+Hao+and+Watson-Haigh,+Nathan+and+Xu,+Bo+and+Feng,+Xueying+and+Piechatzek,+Adriane+and+Shen,+Chunhua+and+Gilliham,+Matthew" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Xie2023DodNetxxxarXiv.jpg"><b>Learning from partially labeled data for multi-organ and tumor segmentation</b>
<br />\(\cdot\) <i>Y. Xie, J. Zhang, Y. Xia, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2211.06894" target=“blank”>arXiv</a><a href="data/bibtex/Xie2023DodNet.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+from+partially+labeled+data+for+multi-organ+and+tumor+segmentation+Xie,+Yutong+and+Zhang,+Jianpeng+and+Xia,+Yong+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://git.io/DoDNet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Sun2023TPAMIxxxarXiv.jpg"><b>SC-DepthV3: robust self-supervised monocular depth estimation for dynamic scenes</b>
<br />\(\cdot\) <i>L. Sun, J. Bian, H. Zhan, W. Yin, I. Reid, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2211.03660" target=“blank”>arXiv</a><a href="data/bibtex/Sun2023TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SC-DepthV3}:+Robust+Self-supervised+Monocular+Depth+Estimation+for+Dynamic+Scenes+Sun,+Libo+and+Bian,+Jia-Wang+and+Zhan,+Huangying+and+Yin,+Wei+and+Reid,+Ian+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/JiawangBian/sc_depth_pl" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/SPTSv2xxxarXiv.jpg"><b>SPTS v2: single-point scene text spotting</b>
<br />\(\cdot\) <i>Y. Liu, J. Zhang, D. Peng, M. Huang, X. Wang, J. Tang, C. Huang, D. Lin, C. Shen, X. Bai, L. Jin</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2301.01635" target=“blank”>arXiv</a><a href="data/bibtex/SPTSv2.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SPTS+v2}:+Single-Point+Scene+Text+Spotting+Liu,+Yuliang+and+Zhang,+Jiaxin+and+Peng,+Dezhi+and+Huang,+Mingxin+and+Wang,+Xinyu+and+Tang,+Jingqun+and+Huang,+Can+and+Lin,+Dahua+and+Shen,+Chunhua+and+Bai,+Xiang+and+Jin,+Lianwen" target=“blank”>search</a><a href="https://github.com/Yuliang-Liu/SPTSv2" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2023TPAMIxxxarXiv.jpg"><b>Single-path bit sharing for automatic loss-aware model compression</b>
<br />\(\cdot\) <i>J. Liu, B. Zhuang, P. Chen, C. Shen, J. Cai, M. Tan</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2101.04935" target=“blank”>arXiv</a><a href="data/bibtex/Liu2023TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Single-path+Bit+Sharing+for+Automatic+Loss-aware+Model+Compression+Liu,+Jing+and+Zhuang,+Bohan+and+Chen,+Peng+and+Shen,+Chunhua+and+Cai,+Jianfei+and+Tan,+Mingkui" target=“blank”>search</a>
</p>
</li>
<li><p><b>Effective eyebrow matting with domain adaptation</b>
<br />\(\cdot\) <i>L. Wang, H. Zhang, Q. Xiao, H. Xu, C. Shen, X. Jin</i>.
<br />\(\cdot\) <i>Computer Graphics Forum (CGF), 2022</i>.
<br />\(\cdot\) <a href="data/bibtex/Wang2022CGF.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Effective+Eyebrow+Matting+with+Domain+Adaptation+Wang,+Luyuan+and+Zhang,+Hanyuan+and+Xiao,+Qinjie+and+Xu,+Hao+and+Shen,+Chunhua+and+Jin,+Xiaogang" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhuang2022IJCVxxxarXiv.jpg"><b>Structured binary neural networks for image recognition</b>
<br />\(\cdot\) <i>B. Zhuang, C. Shen, M. Tan, P. Chen, L. Liu, I. Reid</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2022</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1909.09934" target=“blank”>arXiv</a><a href="data/bibtex/Zhuang2022IJCV.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structured+Binary+Neural+Networks+for+Image+Recognition+Zhuang,+Bohan+and+Shen,+Chunhua+and+Tan,+Mingkui+and+Chen,+Peng+and+Liu,+Lingqiao+and+Reid,+Ian" target=“blank”>search</a>
</p>
</li>
<li><p><b>Arbitrarily shaped scene text detection with dynamic convolution</b>
<br />\(\cdot\) <i>Y. Cai, Y. Liu, C. Shen, L. Jin, Y. Li, D. Ergu</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2022</i>.
<br />\(\cdot\) <a href="data/bibtex/Cai2022PR.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Arbitrarily+shaped+scene+text+detection+with+dynamic+convolution+Cai,+Ying+and+Liu,+Yuliang+and+ChunhuaShen+and+Jin,+Lianwen+and+Li,+Yidong+and+Ergu,+Daji" target=“blank”>search</a>
</p>
</li>
<li><p><b>TSGB: target-selective gradient backprop for probing CNN visual saliency</b>
<br />\(\cdot\) <i>L. Cheng, P. Fang, Y. Liang, L. Zhang, C. Shen, H. Wang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2022</i>.
<br />\(\cdot\) <a href="data/bibtex/TSGB2022TIP.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={TSGB}:+Target-selective+gradient+backprop+for+probing+{CNN}+visual+saliency+Cheng,+Lin+and+Fang,+Pengfei+and+Liang,+Yanjie+and+Zhang,+Liao+and+Shen,+Chunhua+and+Wang,+Hanzi" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Chi2022TPAMIxxxarXiv.jpg"><b>DeepEMD: differentiable earth mover's distance for few-shot learning</b>
<br />\(\cdot\) <i>C. Zhang, Y. Cai, G. Lin, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2003.06777" target=“blank”>arXiv</a><a href="data/bibtex/Chi2022TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={DeepEMD}:+Differentiable+Earth+Mover's+Distance+for+Few-Shot+Learning+Zhang,+Chi+and+Cai,+Yujun+and+Lin,+Guosheng+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Weiyin2022TPAMIxxxarXiv.jpg"><b>Towards accurate reconstruction of 3D scene shape from a single monocular image</b>
<br />\(\cdot\) <i>W. Yin, J. Zhang, O. Wang, S. Niklaus, S. Chen, Y. Liu, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2208.13241" target=“blank”>arXiv</a><a href="data/bibtex/Weiyin2022TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+Accurate+Reconstruction+of+{3D}+Scene+Shape+from+A+Single+Monocular+Image+Yin,+Wei+and+Zhang,+Jianming+and+Wang,+Oliver+and+Niklaus,+Simon+and+Chen,+Simon+and+Liu,+Yifan+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/aim-uofa/depth/" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/CondInst2022TianxxxarXiv.jpg"><b>Instance and panoptic segmentation using conditional convolutions</b>
<br />\(\cdot\) <i>Z. Tian, B. Zhang, H. Chen, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2102.03026" target=“blank”>arXiv</a><a href="data/bibtex/CondInst2022Tian.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Instance+and+Panoptic+Segmentation+Using+Conditional+Convolutions+Tian,+Zhi+and+Zhang,+Bowen+and+Chen,+Hao+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/aim-uofa/AdelaiDet/" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>FCOS: a simple and strong anchor-free object detector</b>
<br />\(\cdot\) <i>Z. Tian, C. Shen, H. Chen, T. He</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022</i>.
<br />\(\cdot\) <a href="https://doi.org/10.1109/TPAMI.2020.3032166" target=“blank”>link</a><a href="data/bibtex/TianSCH22.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={FCOS:}+A+Simple+and+Strong+Anchor-Free+Object+Detector+Tian,+Zhi+and+Shen,+Chunhua+and+Chen,+Hao+and+He,+Tong" target=“blank”>search</a><a href="https://github.com/aim-uofa/AdelaiDet/" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Dynamic convolution for 3D point cloud instance segmentation</b>
<br />\(\cdot\) <i>T. He, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2107.08392" target=“blank”>arXiv</a><a href="data/bibtex/Tong2022TPAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Dynamic+Convolution+for+{3D}+Point+Cloud+Instance+Segmentation+He,+Tong+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>Improving monocular visual odometry using learned depth</b>
<br />\(\cdot\) <i>L. Sun, W. Yin, E. Xie, Z. Li, C. Sun, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Robotics (TRO), 2022</i>.
<br />\(\cdot\) <a href="data/bibtex/Sun2022TRO.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Improving+Monocular+Visual+Odometry+Using+Learned+Depth+Sun,+Libo+and+Yin,+Wei+and+Xie,+Enze+and+Li,+Zhengrong+and+Sun,+Changming+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>DenseCL: a simple framework for self-supervised dense visual pre-training</b>
<br />\(\cdot\) <i>X. Wang, R. Zhang, C. Shen, T. Kong</i>.
<br />\(\cdot\) <i>Visual Informatics (VI), 2022</i>.
<br />\(\cdot\) <a href="data/bibtex/Wang2022VI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={DenseCL}:+A+simple+framework+for+self-supervised+dense+visual+pre-training+Wang,+Xinlong+and+Zhang,+Rufeng+and+Shen,+Chunhua+and+Kong,+Tao" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Haokui2021NASxxxarXiv.jpg"><b>Memory-efficient hierarchical neural architecture search for image restoration</b>
<br />\(\cdot\) <i>H. Zhang, Y. Li, H. Chen, C. Gong, Z. Bai, C. Shen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2012.13212" target=“blank”>arXiv</a><a href="data/bibtex/Haokui2021NAS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Memory-Efficient+Hierarchical+Neural+Architecture+Search+for+Image+Restoration+Zhang,+Haokui+and+Li,+Ying+and+Chen,+Hao+and+Gong,+Chengrong+and+Bai,+Zongwen+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/hkzhang91/HiNAS" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Yu2021BiSegV2xxxarXiv.jpg"><b>BiSeNet v2: bilateral network with guided aggregation for real-time semantic segmentation</b>
<br />\(\cdot\) <i>C. Yu, C. Gao, J. Wang, G. Yu, C. Shen, N. Sang</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2004.02147" target=“blank”>arXiv</a><a href="data/bibtex/Yu2021BiSegV2.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={BiSeNet}+v2:+Bilateral+Network+with+Guided+Aggregation+for+Real-time+Semantic+Segmentation+Yu,+Changqian+and+Gao,+Changxin+and+Wang,+Jingbo+and+Yu,+Gang+and+Shen,+Chunhua+and+Sang,+Nong" target=“blank”>search</a>
</p>
</li>
<li><p><b>A dual-attention-guided network for ghost-free high dynamic range imaging</b>
<br />\(\cdot\) <i>Q. Yan, D. Gong, Q. Shi, A. van den Hengel, C. Shen, I. Reid, Y. Zhang</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Yan2021Ghostfree.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Dual-Attention-guided+network+for+ghost-free+high+dynamic+range+imaging+Yan,+Qingsen+and+Gong,+Dong+and+Shi,+Qinfeng+and+{van+den+Hengel},+Anton+and+Shen,+Chunhua+and+Reid,+Ian+and+Zhang,+Yanning" target=“blank”>search</a><a href="https://github.com/qingsenyangit/AHDRNet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>NAS-FCOS: efficient search for object detection architectures</b>
<br />\(\cdot\) <i>N. Wang, Y. Gao, H. Chen, P. Wang, Z. Tian, C. Shen, Y. Zhang</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Wang2021IJCV_NAS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={NAS-FCOS}:+Efficient+Search+for+Object+Detection+Architectures+Wang,+Ning+and+Gao,+Yang+and+Chen,+Hao+and+Wang,+Peng+and+Tian,+Zhi+and+Shen,+Chunhua+and+Zhang,+Yanning" target=“blank”>search</a><a href="https://github.com/Lausannen/NAS-FCOS" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/IJCV2021LiuylxxxarXiv.jpg"><b>Exploring the capacity of an orderless box discretization network for multi-orientation scene text detection</b>
<br />\(\cdot\) <i>Y. Liu, T. He, H. Chen, X. Wang, C. Luo, S. Zhang, C. Shen, L. Jin</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1912.09629" target=“blank”>arXiv</a><a href="data/bibtex/IJCV2021Liuyl.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Exploring+the+Capacity+of+an+Orderless+Box+Discretization+Network+for+Multi-orientation+Scene+Text+Detection+Liu,+Yuliang+and+He,+Tong+and+Chen,+Hao+and+Wang,+Xinyu+and+Luo,+Canjie+and+Zhang,+Shuaitao+and+Shen,+Chunhua+and+Jin,+Lianwen" target=“blank”>search</a><a href="https://git.io/TextDet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Joint classification and regression for visual tracking with fully convolutional Siamese networks</b>
<br />\(\cdot\) <i>Y. Cui, D. Guo, Y. Shao, Z. Wang, C. Shen, L. Zhang, S. Chen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Cui2021Joint.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Joint+classification+and+regression+for+visual+tracking+with+fully+convolutional+{S}iamese+networks+Cui,+Ying+and+Guo,+Dongyan+and+Shao,+Yanyan+and+Wang,+Zhenhua+and+Shen,+Chunhua+and+Zhang,+Liyan+and+Chen,+Shengyong" target=“blank”>search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/2105.11610.pdf"><img class="imgP right" src="data/thumbnail/Bian2021IJCVxxxarXiv.jpg"></a><b>Unsupervised scale-consistent depth learning from video</b>
<br />\(\cdot\) <i>J. Bian, H. Zhan, N. Wang, Z. Li, L. Zhang, C. Shen, M. Cheng, I. Reid</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2105.11610" target=“blank”>arXiv</a><a href="data/bibtex/Bian2021IJCV.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Unsupervised+Scale-consistent+Depth+Learning+from+Video+Bian,+Jia-Wang+and+Zhan,+Huangying+and+Wang,+Naiyan+and+Li,+Zhichao+and+Zhang,+Le+and+Shen,+Chunhua+and+Cheng,+Ming-Ming+and+Reid,+Ian" target=“blank”>search</a><a href="https://github.com/JiawangBian/SC-SfMLearner-Release" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Learning discriminative region representation for person retrieval</b>
<br />\(\cdot\) <i>Y. Zhao, X. Yu, Y. Gao, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhao2021PRLearning.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+Discriminative+Region+Representation+for+Person+Retrieval+Zhao,+Yang+and+Yu,+Xiaohan+and+Gao,+Yongsheng+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>Learning deep part-aware embedding for person retrieval</b>
<br />\(\cdot\) <i>Y. Zhao, C. Shen, X. Yu, H. Chen, Y. Gao, S. Xiong</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhao2021PR1.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+Deep+Part-Aware+Embedding+for+Person+Retrieval+Zhao,+Yang+and+Shen,+Chunhua+and+Yu,+Xiaohan+and+Chen,+Hao+and+Gao,+Yongsheng+and+Xiong,+Shengwu" target=“blank”>search</a>
</p>
</li>
<li><p><b>An adversarial human pose estimation network injected with graph structure</b>
<br />\(\cdot\) <i>L. Tian, P. Wang, G. Liang, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Tian2021Adversarial.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=An+Adversarial+Human+Pose+Estimation+Network+Injected+with+Graph+Structure+Tian,+Lei+and+Wang,+Peng+and+Liang,+Guoqiang+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>Intra- and inter-pair consistency for semi-supervised gland segmentation</b>
<br />\(\cdot\) <i>Y. Xie, J. Zhang, Z. Liao, J. Verjans, C. Shen, Y. Xia</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Xie2021Intra.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Intra-+and+Inter-pair+Consistency+for+Semi-supervised+Gland+Segmentation+Xie,+Yutong+and+Zhang,+Jianpeng+and+Liao,+Zhibin+and+Verjans,+Johan+and+Shen,+Chunhua+and+Xia,+Yong" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhuang2021QuantizationxxxarXiv.jpg"><b>Effective training of convolutional neural networks with low-bitwidth weights and activations</b>
<br />\(\cdot\) <i>B. Zhuang, J. Liu, M. Tan, L. Liu, I. Reid, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1908.04680" target=“blank”>arXiv</a><a href="data/bibtex/Zhuang2021Quantization.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Effective+Training+of+Convolutional+Neural+Networks+with+Low-bitwidth+Weights+and+Activations+Zhuang,+Bohan+and+Liu,+Jing+and+Tan,+Mingkui+and+Liu,+Lingqiao+and+Reid,+Ian+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Yin2021PAMIvnxxxarXiv.jpg"><b>Virtual normal: enforcing geometric constraints for accurate and robust depth prediction</b>
<br />\(\cdot\) <i>W. Yin, Y. Liu, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2103.04216" target=“blank”>arXiv</a><a href="data/bibtex/Yin2021PAMIvn.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Virtual+Normal:+Enforcing+Geometric+Constraints+for+Accurate+and+Robust+Depth+Prediction+Yin,+Wei+and+Liu,+Yifan+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://git.io/Depth" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/WXL2021SOLOxxxarXiv.jpg"><b>SOLO: a simple framework for instance segmentation</b>
<br />\(\cdot\) <i>X. Wang, R. Zhang, C. Shen, T. Kong, L. Li</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2106.15947" target=“blank”>arXiv</a><a href="data/bibtex/WXL2021SOLO.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SOLO}:+A+Simple+Framework+for+Instance+Segmentation+Wang,+Xinlong+and+Zhang,+Rufeng+and+Shen,+Chunhua+and+Kong,+Tao+and+Li,+Lei" target=“blank”>search</a><a href="https://git.io/AdelaiDet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wang2021PANplusxxxarXiv.jpg"><b>PAN++: towards efficient and accurate end-to-end spotting of arbitrarily-shaped text</b>
<br />\(\cdot\) <i>W. Wang, E. Xie, X. Li, X. Liu, D. Liang, Z. Yang, T. Lu, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2105.00405" target=“blank”>arXiv</a><a href="data/bibtex/Wang2021PANplus.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={PAN++}:+Towards+Efficient+and+Accurate+End-to-End+Spotting+of+Arbitrarily-Shaped+Text+Wang,+Wenhai+and+Xie,+Enze+and+Li,+Xiang+and+Liu,+Xuebo+and+Liang,+Ding+and+Yang,+Zhibo+and+Lu,+Tong+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Li2021TextxxxarXiv.jpg"><b>Towards end-to-end text spotting in natural scenes</b>
<br />\(\cdot\) <i>P. Wang, H. Li, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1906.06013" target=“blank”>arXiv</a><a href="data/bibtex/Li2021Text.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+End-to-End+Text+Spotting+in+Natural+Scenes+Wang,+Peng+and+Li,+Hui+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2021ABCNetv2xxxarXiv.jpg"><b>ABCNet v2: adaptive bezier-curve network for real-time end-to-end text spotting</b>
<br />\(\cdot\) <i>Y. Liu, C. Shen, L. Jin, T. He, P. Chen, C. Liu, H. Chen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2105.03620" target=“blank”>arXiv</a><a href="data/bibtex/Liu2021ABCNetv2.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={ABCNet}+v2:+Adaptive+Bezier-Curve+Network+for+Real-time+End-to-end+Text+Spotting+Liu,+Yuliang+and+Shen,+Chunhua+and+Jin,+Lianwen+and+He,+Tong+and+Chen,+Peng+and+Liu,+Chongyu+and+Chen,+Hao" target=“blank”>search</a><a href="https://git.io/AdelaiDet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Auto-rectify network for unsupervised indoor depth estimation</b>
<br />\(\cdot\) <i>J. Bian, H. Zhan, N. Wang, T. Chin, C. Shen, I. Reid</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021</i>.
<br />\(\cdot\) <a href="data/bibtex/Autorectify2021Bian.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Auto-Rectify+Network+for+Unsupervised+Indoor+Depth+Estimation+Bian,+Jia-Wang+and+Zhan,+Huangying+and+Wang,+Naiyan+and+Chin,+Tat-Jun+and+Shen,+Chunhua+and+Reid,+Ian" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Pan2020ACMSurveyxxxarXiv.jpg"><b>Deep learning for anomaly detection: a review</b>
<br />\(\cdot\) <i>G. Pang, C. Shen, L. Cao, A. van den Hengel</i>.
<br />\(\cdot\) <i>ACM Computing Surveys (ACMSurvey), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2007.02500" target=“blank”>arXiv</a><a href="data/bibtex/Pan2020ACMSurvey.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Deep+Learning+for+Anomaly+Detection:+A+Review+Pang,+Guansong+and+Shen,+Chunhua+and+Cao,+Longbing+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>Towards light-weight portrait matting via parameter sharing</b>
<br />\(\cdot\) <i>Y. Dai, H. Lu, C. Shen</i>.
<br />\(\cdot\) <i>Computer Graphics Forum (CGF), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Daiyt2020.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+Light-Weight+Portrait+Matting+via+Parameter+Sharing+Dai,+Yutong+and+Lu,+Hao+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Luo2020IJCVxxxarXiv.jpg"><b>Separating content from style using adversarial learning for recognizing text in the wild</b>
<br />\(\cdot\) <i>C. Luo, Q. Lin, Y. Liu, L. Jin, C. Shen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2001.04189" target=“blank”>arXiv</a><a href="data/bibtex/Luo2020IJCV.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Separating+Content+from+Style+Using+Adversarial+Learning+for+Recognizing+Text+in+the+Wild+Luo,+Canjie+and+Lin,+Qingxiang+and+Liu,+Yuliang+and+Jin,+Lianwen+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>TasselNetv2: in-field counting of wheat spikes with context-augmented local regression networks</b>
<br />\(\cdot\) <i>H. Xiong, Z. Cao, H. Lu, S. Madec, L. Liu, C. Shen</i>.
<br />\(\cdot\) <i>Plant Methods (PLME), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/TasselNet2020.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={TasselNetv2}:+in-field+counting+of+wheat+spikes+with+context-augmented+local+regression+networks+Xiong,+Haipeng+and+Cao,+Zhiguo+and+Lu,+Hao+and+Madec,+Simon+and+Liu,+Liang+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/MobileFAN2020xxxarXiv.jpg"><b>MobileFAN: transferring deep hidden representation for face alignment</b>
<br />\(\cdot\) <i>Y. Zhao, Y. Liu, C. Shen, Y. Gao, S. Xiong</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1908.03839" target=“blank”>arXiv</a><a href="data/bibtex/MobileFAN2020.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={MobileFAN}:+Transferring+Deep+Hidden+Representation+for+Face+Alignment+Zhao,+Yang+and+Liu,+Yifan+and+Shen,+Chunhua+and+Gao,+Yongsheng+and+Xiong,+Shengwu" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhangx2020T-ITSxxxarXiv.jpg"><b>Part-guided attention learning for vehicle instance retrieval</b>
<br />\(\cdot\) <i>X. Zhang, R. Zhang, J. Cao, D. Gong, M. You, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1909.06023" target=“blank”>arXiv</a><a href="data/bibtex/Zhangx2020T-ITS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Part-Guided+Attention+Learning+for+Vehicle+Instance+Retrieval+Zhang,+Xinyu+and+Zhang,+Rufeng+and+Cao,+Jiewei+and+Gong,+Dong+and+You,+Mingyu+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>A robust attentional framework for license plate recognition in the wild</b>
<br />\(\cdot\) <i>L. Zhang, P. Wang, H. Li, Z. Li, C. Shen, Y. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Li2020Carlicense.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+robust+attentional+framework+for+license+plate+recognition+in+the+wild+Zhang,+Linjiang+and+Wang,+Peng+and+Li,+Hui+and+Li,+Zhen+and+Shen,+Chunhua+and+Zhang,+Yanning" target=“blank”>search</a>
</p>
</li>
<li><p><b>Real-time high-performance semantic image segmentation of urban street scenes</b>
<br />\(\cdot\) <i>G. Dong, Y. Yan, C. Shen, H. Wang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Dong2020segmentation.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Real-time+high-performance+semantic+image+segmentation+of+urban+street+scenes+Dong,+Genshun+and+Yan,+Yan+and+Shen,+Chunhua+and+Wang,+Hanzi" target=“blank”>search</a>
</p>
</li>
<li><p><b>Towards effective deep embedding for zero-shot learning</b>
<br />\(\cdot\) <i>L. Zhang, P. Wang, L. Liu, C. Shen, W. Wei, Y. Zhang, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2020Zeroshot.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+Effective+Deep+Embedding+for+Zero-Shot+Learning+Zhang,+Lei+and+Wang,+Peng+and+Liu,+Lingqiao+and+Shen,+Chunhua+and+Wei,+Wei+and+Zhang,+Yanning+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>NSSNet: scale-aware object counting with non-scale suppression</b>
<br />\(\cdot\) <i>L. Liu, Z. Cao, H. Lu, H. Xiong, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/LiuL2020CSVT.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={NSSNet}:+Scale-aware+object+counting+with+non-scale+suppression+Liu,+Liang+and+Cao,+Zhiguo+and+Lu,+Hao+and+Xiong,+Haipeng+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhang2020CovidxxxarXiv.jpg"><b>Viral pneumonia screening on chest x-ray images using confidence-aware anomaly detection</b>
<br />\(\cdot\) <i>J. Zhang, Y. Xie, Z. Liao, G. Pang, J. Verjans, W. Li, Z. Sun, J. He, Y. Li, C. Shen, Y. Xia</i>.
<br />\(\cdot\) <i>IEEE Transactions on Medical Imaging (TMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2003.12338" target=“blank”>arXiv</a><a href="data/bibtex/Zhang2020Covid.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Viral+Pneumonia+Screening+on+Chest+X-ray+Images+Using+Confidence-Aware+Anomaly+Detection+Zhang,+Jianpeng+and+Xie,+Yutong+and+Liao,+Zhibin+and+Pang,+Guansong+and+Verjans,+Johan+and+Li,+Wenxin+and+Sun,+Zongji+and+He,+Jian+and+Li,+Yi+and+Shen,+Chunhua+and+Xia,+Yong" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Xie2020TMIaxxxarXiv.jpg"><b>A mutual bootstrapping model for automated skin lesion segmentation and classification</b>
<br />\(\cdot\) <i>Y. Xie, J. Zhang, Y. Xia, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Medical Imaging (TMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1903.03313" target=“blank”>arXiv</a><a href="data/bibtex/Xie2020TMIa.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Mutual+Bootstrapping+Model+for+Automated+Skin+Lesion+Segmentation+and+Classification+Xie,+Yutong+and+Zhang,+Jianpeng+and+Xia,+Yong+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>SESV: accurate medical image segmentation by predicting and correcting errors</b>
<br />\(\cdot\) <i>Y. Xie, J. Zhang, H. Lu, C. Shen, Y. Xia</i>.
<br />\(\cdot\) <i>IEEE Transactions on Medical Imaging (TMI), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Xie2020TMIb.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={SESV}:+Accurate+Medical+Image+Segmentation+by+Predicting+and+Correcting+Errors+Xie,+Yutong+and+Zhang,+Jianpeng+and+Lu,+Hao+and+Shen,+Chunhua+and+Xia,+Yong" target=“blank”>search</a>
</p>
</li>
<li><p><b>OPMP: an omni-directional pyramid mask proposal network for arbitrary-shape scene text detection</b>
<br />\(\cdot\) <i>S. Zhang, Y. Liu, L. Jin, Z. Wei, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/ShengZhang2020TMM.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={OPMP}:+An+Omni-directional+Pyramid+Mask+Proposal+Network+for+Arbitrary-shape+Scene+Text+Detection+Zhang,+Sheng+and+Liu,+Yuliang+and+Jin,+Lianwen+and+Wei,+Zhongrong+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><b>Joint deep learning of facial expression synthesis and recognition</b>
<br />\(\cdot\) <i>Y. Yan, Y. Huang, S. Chen, C. Shen, H. Wang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Yan2020TMM.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Joint+deep+learning+of+facial+expression+synthesis+and+recognition+Yan,+Yan+and+Huang,+Ying+and+Chen,+Si+and+Shen,+Chunhua+and+Wang,+Hanzi" target=“blank”>search</a>
</p>
</li>
<li><p><b>Accurate tensor completion via adaptive low-rank representation</b>
<br />\(\cdot\) <i>L. Zhang, W. Wei, Q. Shi, C. Shen, A. van den Hengel, Y. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2020TNNLS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Accurate+Tensor+Completion+via+Adaptive+Low-Rank+Representation+Zhang,+Lei+and+Wei,+Wei+and+Shi,+Qinfeng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Zhang,+Yanning" target=“blank”>search</a>
</p>
</li>
<li><p><b>Deep clustering with sample-assignment invariance prior</b>
<br />\(\cdot\) <i>X. Peng, H. Zhu, J. Feng, C. Shen, H. Zhang, J. Zhou</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2020</i>.
<br />\(\cdot\) <a href="data/bibtex/Peng2020TNNLS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Deep+Clustering+with+Sample-Assignment+Invariance+Prior+Peng,+Xi+and+Zhu,+Hongyuan+and+Feng,+Jiashi+and+Shen,+Chunhua+and+Zhang,+Haixian+and+Zhou,+Joey" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Gong2020TNNLSxxxarXiv.jpg"><b>Learning deep gradient descent optimization for image deconvolution</b>
<br />\(\cdot\) <i>D. Gong, Z. Zhang, Q. Shi, A. van den Hengel, C. Shen, Y. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1804.03368" target=“blank”>arXiv</a><a href="data/bibtex/Gong2020TNNLS.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+Deep+Gradient+Descent+Optimization+for+Image+Deconvolution+Gong,+Dong+and+Zhang,+Zhen+and+Shi,+Qinfeng+and+{van+den+Hengel},+Anton+and+Shen,+Chunhua+and+Zhang,+Yanning" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2020TOGxxxarXiv.jpg"><b>Real-time image smoothing via iterative least squares</b>
<br />\(\cdot\) <i>W. Liu, P. Zhang, X. Huang, J. Yang, C. Shen, I. Reid</i>.
<br />\(\cdot\) <i>ACM Transactions on Graphics (TOG), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2003.07504" target=“blank”>arXiv</a><a href="https://doi.org/10.1145/3388887" target=“blank”>link</a><a href="data/bibtex/Liu2020TOG.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Real-time+image+smoothing+via+iterative+least+squares+Liu,+Wei+and+Zhang,+Pingping+and+Huang,+Xiaolin+and+Yang,+Jie+and+Shen,+Chunhua+and+Reid,+Ian" target=“blank”>search</a><a href="https://github.com/wliusjtu/Real-time-Image-Smoothing-via-Iterative-Least-Squares" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Plenty is plague: fine-grained learning for visual question answering</b>
<br />\(\cdot\) <i>Y. Zhou, R. Ji, J. Su, X. Sun, D. Meng, Y. Gao, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://doi.org/10.1109/TPAMI.2019.2956699" target=“blank”>link</a><a href="data/bibtex/Zhou2020TPAMIZhou.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Plenty+Is+Plague:+Fine-Grained+Learning+for+Visual+Question+Answering+Zhou,+Yiyi+and+Ji,+Rongrong+and+Su,+Jinsong+and+Sun,+Xiaoshuai+and+Meng,+Deyu+and+Gao,+Yue+and+Shen,+Chunhua" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhang2020OrderlessReIDxxxarXiv.jpg"><b>Ordered or orderless: a revisit for video based person re-identification</b>
<br />\(\cdot\) <i>L. Zhang, Z. Shi, J. Zhou, M. Cheng, Y. Liu, J. Bian, Z. Zeng, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1912.11236" target=“blank”>arXiv</a><a href="https://doi.org/10.1109/TPAMI.2020.2976969" target=“blank”>link</a><a href="data/bibtex/Zhang2020OrderlessReID.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Ordered+or+Orderless:+A+Revisit+for+Video+based+Person+Re-Identification+Zhang,+Le+and+Shi,+Zenglin+and+Zhou,+Joey+Tianyi+and+Cheng,+Ming-Ming+and+Liu,+Yun+and+Bian,+Jia-Wang+and+Zeng,+Zeng+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/ZhangLeUestc/VideoReid-TPAMI2020" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Lu2020PAMIIndexNetxxxarXiv.jpg"><b>Index networks</b>
<br />\(\cdot\) <i>H. Lu, Y. Dai, C. Shen, S. Xu</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1908.09895" target=“blank”>arXiv</a><a href="https://doi.org/10.1109/TPAMI.2020.3004474" target=“blank”>link</a><a href="data/bibtex/Lu2020PAMIIndexNet.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Index+Networks+Lu,+Hao+and+Dai,+Yutong+and+Shen,+Chunhua+and+Xu,+Songcen" target=“blank”>search</a><a href="https://git.io/IndexNet" target=“blank”>project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2020PAMIxxxarXiv.jpg"><b>Structured knowledge distillation for dense prediction</b>
<br />\(\cdot\) <i>Y. Liu, C. Shun, J. Wang, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1903.04197" target=“blank”>arXiv</a><a href="https://ieeexplore.ieee.org/document/9115859" target=“blank”>link</a><a href="data/bibtex/Liu2020PAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structured+Knowledge+Distillation+for+Dense+Prediction+Liu,+Yifan+and+Shun,+Changyong+and+Wang,+Jingdong+and+Shen,+Chunhua" target=“blank”>search</a><a href="https://github.com/irfanICMLL/structure_knowledge_distillation" target=“blank”>project webpage</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1711.00253.pdf"><img class="imgP right" src="data/thumbnail/Chen2019PAMIxxxarXiv.jpg"></a><b>Adversarial learning of structure-aware fully convolutional networks for landmark localization</b>
<br />\(\cdot\) <i>Y. Chen, C. Shen, H. Chen, X. Wei, L. Liu, J. Yang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1711.00253" target=“blank”>arXiv</a><a href="https://doi.org/10.1109/TPAMI.2019.2901875" target=“blank”>link</a><a href="data/bibtex/Chen2019PAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Adversarial+Learning+of+Structure-Aware+Fully+Convolutional+Networks+for+Landmark+Localization+Chen,+Yu+and+Shen,+Chunhua+and+Chen,+Hao+and+Wei,+Xiu-Shen+and+Liu,+Lingqiao+and+Yang,+Jian" target=“blank”>search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/2008.00942.pdf"><img class="imgP right" src="data/thumbnail/Cao2020GANxxxarXiv.jpg"></a><b>Improving generative adversarial networks with local coordinate coding</b>
<br />\(\cdot\) <i>J. Cao, Y. Guo, Q. Wu, C. Shen, J. Huang, M. Tan</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/2008.00942" target=“blank”>arXiv</a><a href="data/bibtex/Cao2020GAN.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Improving+Generative+Adversarial+Networks+with+Local+Coordinate+Coding+Cao,+Jiezhang+and+Guo,+Yong+and+Wu,+Qingyao+and+Shen,+Chunhua+and+Huang,+Junzhou+and+Tan,+Mingkui" target=“blank”>search</a><a href="https://github.com/SCUTjinchengli/LCCGAN-v2" target=“blank”>project webpage</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1806.01576.pdf"><img class="imgP right" src="data/thumbnail/Adaptive2019ZhangxxxarXiv.jpg"></a><b>Adaptive importance learning for improving lightweight image super-resolution network</b>
<br />\(\cdot\) <i>L. Zhang, P. Wang, C. Shen, L. Liu, W. Wei, Y. Zhang, A. van den Hengel</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2019</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1806.01576" target=“blank”>arXiv</a><a href="data/bibtex/Adaptive2019Zhang.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Adaptive+Importance+Learning+for+Improving+Lightweight+Image+Super-resolution+Network+Zhang,+Lei+and+Wang,+Peng+and+Shen,+Chunhua+and+Liu,+Lingqiao+and+Wei,+Wei+and+Zhang,+Yanning+and+{van+den+Hengel},+Anton" target=“blank”>search</a><a href="https://tinyurl.com/Super-resolution-Network" target=“blank”>project webpage</a>
</p>
</li>
<li><p><b>Accurate imagery recovery using a multi-observation patch model</b>
<br />\(\cdot\) <i>L. Zhang, W. Wei, Q. Shen, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>Information Sciences (IS), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2019Accurate.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Accurate+Imagery+Recovery+Using+a+Multi-Observation+Patch+Model+Zhang,+Lei+and+Wei,+Wei+and+Shen,+Qiang+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Heritage image annotation via collective knowledge</b>
<br />\(\cdot\) <i>J. Zhang, Q. Wu, J. Zhang, C. Shen, J. Lu, Q. Wu</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2019PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Heritage+Image+Annotation+via+Collective+Knowledge+Zhang,+Junjie+and+Wu,+Qi+and+Zhang,+Jian+and+Shen,+Chunhua+and+Lu,+Jianfeng+and+Wu,+Qiang" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wu2019PRxxxarXiv.jpg"><b>Wider or deeper: revisiting the ResNet model for visual recognition</b>
<br />\(\cdot\) <i>Z. Wu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2019</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1611.10080" target="_blank">arXiv</a><a href="data/bibtex/Wu2019PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Wider+or+Deeper:+Revisiting+the+{ResNet}+Model+for+Visual+Recognition+Wu,+Zifeng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Order-aware convolutional pooling for video based action recognition</b>
<br />\(\cdot\) <i>P. Wang, L. Liu, C. Shen, H. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Wang2019PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Order-aware+Convolutional+Pooling+for+Video+Based+Action+Recognition+Wang,+Peng+and+Liu,+Lingqiao+and+Shen,+Chunhua+and+Shen,+Heng+Tao" target="_blank">search</a>
</p>
</li>
<li><p><b>Structural analysis of attributes for vehicle re-identification and retrieval</b>
<br />\(\cdot\) <i>Y. Zhao, C. Shen, H. Wang, S. Chen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhao2019Structural.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structural+Analysis+of+Attributes+for+Vehicle+Re-identification+and+Retrieval+Zhao,+Yanzhu+and+Shen,+Chunhua+and+Wang,+Huibing+and+Chen,+Shengyong" target="_blank">search</a>
</p>
</li>
<li><p><b>Human detection aided by deeply learned semantic masks</b>
<br />\(\cdot\) <i>X. Wang, C. Shen, H. Li, S. Xu</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Wangxy2019CSVT.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Human+Detection+Aided+by+Deeply+Learned+Semantic+Masks+Wang,+Xinyu+and+Shen,+Chunhua+and+Li,+Hanxi+and+Xu,+Shugong" target="_blank">search</a>
</p>
</li>
<li><p><b>Embedding bilateral filter in least squares for efficient edge-preserving image smoothing</b>
<br />\(\cdot\) <i>W. Liu, P. Zhang, X. Chen, C. Shen, X. Huang, J. Yang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Liu2019CSVT.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Embedding+Bilateral+Filter+in+Least+Squares+for+Efficient+Edge-preserving+Image+Smoothing+Liu,+Wei+and+Zhang,+Pingping+and+Chen,+Xiaogang+and+Shen,+Chunhua+and+Huang,+Xiaolin+and+Yang,+Jie" target="_blank">search</a>
</p>
</li>
<li><p><b>Counting objects by blockwise classification</b>
<br />\(\cdot\) <i>L. Liu, H. Lu, H. Xiong, K. Xian, Z. Cao, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Counting2019CSVT.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Counting+Objects+by+Blockwise+Classification+Liu,+Liang+and+Lu,+Hao+and+Xiong,+Haipeng+and+Xian,+Ke+and+Cao,+Zhiguo+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><b>Hyperspectral classification based on lightweight 3D-CNN with transfer learning</b>
<br />\(\cdot\) <i>H. Zhang, Y. Li, Y. Jiang, P. Wang, Q. Shen, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Geoscience and Remote Sensing (TGRS), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2019Lightweight.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Hyperspectral+Classification+Based+on+Lightweight+{3D-CNN}+With+Transfer+Learning+Zhang,+Haokui+and+Li,+Ying+and+Jiang,+Yenan+and+Wang,+Peng+and+Shen,+Qiang+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><b>Salient object detection with lossless feature reflection and weighted structural loss</b>
<br />\(\cdot\) <i>P. Zhang, W. Liu, H. Lu, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2019Salient.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Salient+Object+Detection+with+Lossless+Feature+Reflection+and+Weighted+Structural+Loss+Zhang,+Pingping+and+Liu,+Wei+and+Lu,+Huchuan+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wei2019TIPxxxarXiv.jpg"><b>Piecewise classifier mappings: learning fine-grained learners for novel categories with few examples</b>
<br />\(\cdot\) <i>X. Wei, P. Wang, L. Liu, C. Shen, J. Wu</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2019</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1805.04288" target="_blank">arXiv</a><a href="data/bibtex/Wei2019TIP.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Piecewise+classifier+mappings:+Learning+fine-grained+learners+for+novel+categories+with+few+examples+Wei,+Xiu-Shen+and+Wang,+Peng+and+Liu,+Lingqiao+and+Shen,+Chunhua+and+Wu,+Jianxin" target="_blank">search</a>
</p>
</li>
<li><p><b>Multiple instance learning with emerging novel class</b>
<br />\(\cdot\) <i>X. Wei, H. Ye, X. Mu, J. Wu, C. Shen, Z. Zhou</i>.
<br />\(\cdot\) <i>IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Wei2019TKDE.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Multiple+Instance+Learning+with+Emerging+Novel+Class+Wei,+Xiu-Shen+and+Ye,+Han-Jia+and+Mu,+Xin+and+Wu,+Jianxin+and+Shen,+Chunhua+and+Zhou,+Zhi-Hua" target="_blank">search</a>
</p>
</li>
<li><p><b>Attention residual learning for skin lesion classification</b>
<br />\(\cdot\) <i>J. Zhang, Y. Xie, Y. Xia, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Medical Imaging (TMI), 2019</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2019Attn.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Attention+residual+learning+for+skin+lesion+classification+Zhang,+Jianpeng+and+Xie,+Yutong+and+Xia,+Yong+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TZhang2019TMMxxxarXiv.jpg"><b>Decoupled spatial neural attention for weakly supervised semantic segmentation</b>
<br />\(\cdot\) <i>T. Zhang, G. Lin, J. Cai, T. Shen, C. Shen, A. Kot</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2019</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1803.02563" target="_blank">arXiv</a><a href="data/bibtex/TZhang2019TMM.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Decoupled+Spatial+Neural+Attention+for+Weakly+Supervised+Semantic+Segmentation+Zhang,+Tianyi+and+Lin,+Guosheng+and+Cai,+Jianfei+and+Shen,+Tong+and+Shen,+Chunhua+and+Kot,+Alex+C." target="_blank">search</a>
</p>
</li>
<li><p><b>RefineNet: multi-path refinement networks for dense prediction</b>
<br />\(\cdot\) <i>G. Lin, F. Liu, A. Milan, C. Shen, I. Reid</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019</i>.
<br />\(\cdot\) <a href="https://doi.org/10.1109/TPAMI.2019.2893630" target="_blank">link</a><a href="data/bibtex/Fayao2019PAMI.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={RefineNet}:+Multi-Path+Refinement+Networks+for+Dense+Prediction+Lin,+Guosheng+and+Liu,+Fayao+and+Milan,+Anton+and+Shen,+Chunhua+and+Reid,+Ian" target="_blank">search</a><a href="https://github.com/guosheng/refinenet" target="_blank">project webpage</a>
</p>
<ol reversed>
<li><p>PyTorch code is <a href="https://github.com/DrSleep/refinenet-pytorch" target="_blank">here</a>.
</p>
</li></ol>
</li>
<li><p><b>Cluster sparsity field: an internal hyperspectral imagery prior for reconstruction</b>
<br />\(\cdot\) <i>L. Zhang, W. Wei, Y. Zhang, C. Shen, A. van den Hengel, Q. Shi</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2018</i>.
<br />\(\cdot\) <a href="https://www.researchgate.net/publication/323914969_Cluster_Sparsity_Field_An_Internal_Hyperspectral_Imagery_Prior_for_Reconstruction" target="_blank">pdf</a><a href="data/bibtex/Zhang2018IJCV.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Cluster+Sparsity+Field:+An+Internal+Hyperspectral+Imagery+Prior+for+Reconstruction+Zhang,+Lei+and+Wei,+Wei+and+Zhang,+Yanning+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Shi,+Qinfeng" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Li2018IVCxxxarXiv.jpg"><b>Reading car license plates using deep neural networks</b>
<br />\(\cdot\) <i>H. Li, P. Wang, M. You, C. Shen</i>.
<br />\(\cdot\) <i>Image and Vision Computing (IVC), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1601.05610" target="_blank">arXiv</a><a href="data/bibtex/Li2018IVC.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Reading+Car+License+Plates+Using+Deep+Neural+Networks+Li,+Hui+and+Wang,+Peng+and+You,+Mingyu+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhuang2018PRxxxarXiv.jpg"><b>Multi-label learning based deep transfer neural network for facial attribute classification</b>
<br />\(\cdot\) <i>N. Zhuang, Y. Yan, S. Chen, H. Wang, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1805.01282" target="_blank">arXiv</a><a href="data/bibtex/Zhuang2018PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Multi-label+Learning+Based+Deep+Transfer+Neural+Network+for+Facial+Attribute+Classification+Zhuang,+Ni+and+Yan,+Yan+and+Chen,+Si+and+Wang,+Hanzi+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wei2018PRxxxarXiv.jpg"><b>Unsupervised object discovery and co-localization by deep descriptor transforming</b>
<br />\(\cdot\) <i>X. Wei, C. Zhang, J. Wu, C. Shen, Z. Zhou</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1707.06397" target="_blank">arXiv</a><a href="data/bibtex/Wei2018PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Unsupervised+Object+Discovery+and+Co-Localization+by+Deep+Descriptor+Transforming+Wei,+Xiu-Shen+and+Zhang,+Chen-Lin+and+Wu,+Jianxin+and+Shen,+Chunhua+and+Zhou,+Zhi-Hua" target="_blank">search</a>
</p>
</li>
<li><p><b>An extended filtered channel framework for pedestrian detection</b>
<br />\(\cdot\) <i>M. You, Y. Zhang, C. Shen, X. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2018</i>.
<br />\(\cdot\) <a href="data/bibtex/You2018T-ITS.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=An+extended+filtered+channel+framework+for+pedestrian+detection+You,+Minyu+and+Zhang,+Yubin+and+Shen,+Chunhua+and+Zhang,+Xinyu" target="_blank">search</a>
</p>
</li>
<li><p><b>Towards end-to-end car license plates detection and recognition with deep neural networks</b>
<br />\(\cdot\) <i>H. Li, P. Wang, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2018</i>.
<br />\(\cdot\) <a href="data/bibtex/Li2018T-ITSa.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Towards+End-to-End+Car+License+Plates+Detection+and+Recognition+with+Deep+Neural+Networks+Li,+Hui+and+Wang,+Peng+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><b>Unsupervised domain adaptation using robust class-wise matching</b>
<br />\(\cdot\) <i>L. Zhang, P. Wang, W. Wei, H. Lu, C. Shen, A. van den Hengel, Y. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2018</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2018TCSVT.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Unsupervised+Domain+Adaptation+Using+Robust+Class-Wise+Matching+Zhang,+Lei+and+Wang,+Peng+and+Wei,+Wei+and+Lu,+Hao+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Zhang,+Yanning" target="_blank">search</a>
</p>
</li>
<li><p><b>Semantics-aware visual object tracking</b>
<br />\(\cdot\) <i>R. Yao, G. Lin, C. Shen, Y. Zhang, Q. Shi</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2018</i>.
<br />\(\cdot\) <a href="data/bibtex/Yao2018TCSVT.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Semantics-Aware+Visual+Object+Tracking+Yao,+Rui+and+Lin,+Guosheng+and+Shen,+Chunhua+and+Zhang,+Yanning+and+Shi,+Qinfeng" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TCSVT2017HuxxxarXiv.jpg"><b>Pushing the limits of deep CNNs for pedestrian detection</b>
<br />\(\cdot\) <i>Q. Hu, P. Wang, C. Shen, A. van den Hengel, F. Porikli</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1603.04525" target="_blank">arXiv</a><a href="data/bibtex/TCSVT2017Hu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Pushing+the+Limits+of+Deep+{CNNs}+for+Pedestrian+Detection+Hu,+Qichang+and+Wang,+Peng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Porikli,+Fatih" target="_blank">search</a>
</p>
</li>
<li><p><b>An embarrassingly simple approach to visual domain adaptation</b>
<br />\(\cdot\) <i>H. Lu, C. Shen, Z. Cao, Y. Xiao, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2018</i>.
<br />\(\cdot\) <a href="data/bibtex/Lu2018TIP.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=An+Embarrassingly+Simple+Approach+to+Visual+Domain+Adaptation+Lu,+Hao+and+Shen,+Chunhua+and+Cao,+Zhiguo+and+Xiao,+Yang+and+{van+den+Hengel},+Anton" target="_blank">search</a><a href="https://github.com/poppinace/ldada" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhang2018TMMxxxarXiv.jpg"><b>Multi-label image classification with regional latent semantic dependencies</b>
<br />\(\cdot\) <i>J. Zhang, Q. Wu, C. Shen, J. Zhang, J. Lu</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1612.01082" target="_blank">arXiv</a><a href="data/bibtex/Zhang2018TMM.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Multi-Label+Image+Classification+with+Regional+Latent+Semantic+Dependencies+Zhang,+Junjie+and+Wu,+Qi+and+Shen,+Chunhua+and+Zhang,+Jian+and+Lu,+Jianfeng" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1712.09048.pdf"><img class="imgP right" src="data/thumbnail/Guo2018TMMxxxarXiv.jpg"></a><b>Automatic image cropping for visual aesthetic enhancement using deep neural networks and cascaded regression</b>
<br />\(\cdot\) <i>G. Guo, H. Wang, C. Shen, Y. Yan, H. Liao</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1712.09048" target="_blank">arXiv</a><a href="data/bibtex/Guo2018TMM.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Automatic+image+cropping+for+visual+aesthetic+enhancement+using+deep+neural+networks+and+cascaded+regression+Guo,+Guanjun+and+Wang,+Hanzi+and+Shen,+Chunhua+and+Yan,+Yan+and+Liao,+Hong-Yuan" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wang2017FVQAxxxarXiv.jpg"><b>FVQA: fact-based visual question answering</b>
<br />\(\cdot\) <i>P. Wang, Q. Wu, C. Shen, A. Dick, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1606.05433" target="_blank">arXiv</a><a href="data/bibtex/Wang2017FVQA.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={FVQA}:+Fact-based+Visual+Question+Answering+Wang,+Peng+and+Wu,+Qi+and+Shen,+Chunhua+and+Dick,+Anthony+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Ordinal constraint binary coding for approximate nearest neighbor search</b>
<br />\(\cdot\) <i>H. Liu, R. Ji, J. Wang, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018</i>.
<br />\(\cdot\) <a href="https://www.researchgate.net/publication/324053386_Ordinal_Constraint_Binary_Coding_for_Approximate_Nearest_Neighbor_Search" target="_blank">pdf</a><a href="data/bibtex/HLiu2018TPAMI.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Ordinal+Constraint+Binary+Coding+for+Approximate+Nearest+Neighbor+Search+Liu,+Hong+and+Ji,+Rongrong+and+Wang,+Jingdong+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1607.05910.pdf"><img class="imgP right" src="data/thumbnail/CVIU2017VQAxxxarXiv.jpg"></a><b>Visual question answering: a survey of methods and datasets</b>
<br />\(\cdot\) <i>Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, A. van den Hengel</i>.
<br />\(\cdot\) <i>Computer Vision and Image Understanding (CVIU), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1607.05910" target="_blank">arXiv</a><a href="data/bibtex/CVIU2017VQA.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Visual+question+answering:+A+survey+of+methods+and+datasets+Wu,+Qi+and+Teney,+Damien+and+Wang,+Peng+and+Shen,+Chunhua+and+Dick,+Anthony+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/IJCV2017LinxxxarXiv.jpg"><b>Structured learning of binary codes with column generation for optimizing ranking measures</b>
<br />\(\cdot\) <i>G. Lin, F. Liu, C. Shen, J. Wu, H. Shen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1602.06654" target="_blank">arXiv</a><a href="data/bibtex/IJCV2017Lin.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structured+Learning+of+Binary+Codes+with+Column+Generation+for+Optimizing+Ranking+Measures+Lin,+Guosheng+and+Liu,+Fayao+and+Shen,+Chunhua+and+Wu,+Jianxin+and+Shen,+Heng+Tao" target="_blank">search</a><a href="https://bitbucket.org/guosheng/structhash" target="_blank">project webpage</a>
</p>
</li>
<li><p><b>Removal of optically thick clouds from high-resolution satellite imagery using dictionary group learning and interdictionary nonlocal joint sparse coding</b>
<br />\(\cdot\) <i>Y. Li, W. Li, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTAEORS), 2017</i>.
<br />\(\cdot\) <a href="data/bibtex/Li2017Removal.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Removal+of+Optically+Thick+Clouds+From+High-resolution+Satellite+Imagery+Using+Dictionary+Group+Learning+and+Interdictionary+Nonlocal+Joint+Sparse+Coding+Li,+Ying+and+Li,+Wenbo+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Lu2017CountingxxxarXiv.jpg"><b>TasselNet: counting maize tassels in the wild via local counts regression network</b>
<br />\(\cdot\) <i>H. Lu, Z. Cao, Y. Xiao, B. Zhuang, C. Shen</i>.
<br />\(\cdot\) <i>Plant Methods (PLME), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1707.02290" target="_blank">arXiv</a><a href="data/bibtex/Lu2017Counting.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={TasselNet}:+Counting+maize+tassels+in+the+wild+via+local+counts+regression+network+Lu,+Hao+and+Cao,+Zhiguo+and+Xiao,+Yang+and+Zhuang,+Bohan+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wu2017PRxxxarXiv.jpg"><b>Deep linear discriminant analysis on Fisher networks: a hybrid architecture for person re-identification</b>
<br />\(\cdot\) <i>L. Wu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1606.01595" target="_blank">arXiv</a><a href="data/bibtex/Wu2017PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Deep+Linear+Discriminant+Analysis+on+{F}isher+Networks:+A+Hybrid+Architecture+for+Person+Re-identification+Wu,+Lin+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Mask-CNN: localizing parts and selecting descriptors for bird species categorization</b>
<br />\(\cdot\) <i>X. Wei, C. Xie, J. Wu, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2017</i>.
<br />\(\cdot\) <a href="data/bibtex/Wei2017PR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Mask-{CNN}:+Localizing+parts+and+selecting+descriptors+for+bird+species+categorization+Wei,+Xiu-Shen+and+Xie,+Chen-Wei+and+Wu,+Jianxin+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/PR2017QiaoxxxarXiv.jpg"><b>Learning discriminative trajectorylet detector sets for accurate skeleton-based action recognition</b>
<br />\(\cdot\) <i>R. Qiao, L. Liu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1504.04923" target="_blank">arXiv</a><a href="data/bibtex/PR2017Qiao.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+discriminative+trajectorylet+detector+sets+for+accurate+skeleton-based+action+recognition+Qiao,+Ruizhi+and+Liu,+Lingqiao+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Deep CNNs with spatially weighted pooling for fine-grained car recognition</b>
<br />\(\cdot\) <i>Q. Hu, H. Wang, T. Li, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2017</i>.
<br />\(\cdot\) <a href="data/bibtex/SWP2017Hu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Deep+{CNNs}+with+Spatially+Weighted+Pooling+for+Fine-grained+Car+Recognition+Hu,+Qichang+and+Wang,+Huibing+and+Li,+Teng+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TCSVT2017ShengxxxarXiv.jpg"><b>Crowd counting via weighted VLAD on dense attribute feature maps</b>
<br />\(\cdot\) <i>B. Sheng, C. Shen, G. Lin, J. Li, W. Yang, C. Sun</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1604.08660" target="_blank">arXiv</a><a href="data/bibtex/TCSVT2017Sheng.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Crowd+Counting+via+Weighted+{VLAD}+on+Dense+Attribute+Feature+Maps+Sheng,+Biyun+and+Shen,+Chunhua+and+Lin,+Guosheng+and+Li,+Jun+and+Yang,+Wankou+and+Sun,+Changyin" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1605.02305.pdf"><img class="imgP right" src="data/thumbnail/Cao2017xxxarXiv.jpg"></a><b>Estimating depth from monocular images as classification using deep fully convolutional residual networks</b>
<br />\(\cdot\) <i>Y. Cao, Z. Wu, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1605.02305" target="_blank">arXiv</a><a href="data/bibtex/Cao2017.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Estimating+Depth+from+Monocular+Images+as+Classification+Using+Deep+Fully+Convolutional+Residual+Networks+Cao,+Yuanzhouhan+and+Wu,+Zifeng+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TIP2017LiuxxxarXiv.jpg"><b>Discriminative training of deep fully-connected continuous CRF with task-specific loss</b>
<br />\(\cdot\) <i>F. Liu, G. Lin, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1601.07649" target="_blank">arXiv</a><a href="data/bibtex/TIP2017Liu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Discriminative+Training+of+Deep+Fully-connected+Continuous+{CRF}+with+Task-specific+Loss+Liu,+Fayao+and+Lin,+Guosheng+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TIP2016CaoxxxarXiv.jpg"><b>Exploiting depth from single monocular images for object detection and semantic segmentation</b>
<br />\(\cdot\) <i>Y. Cao, C. Shen, H. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1610.01706" target="_blank">arXiv</a><a href="data/bibtex/TIP2016Cao.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Exploiting+Depth+from+Single+Monocular+Images+for+Object+Detection+and+Semantic+Segmentation+Cao,+Yuanzhouhan+and+Shen,+Chunhua+and+Shen,+Heng+Tao" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TNNLS2017LiuxxxarXiv.jpg"><b>Structured learning of tree potentials in CRF for image segmentation</b>
<br />\(\cdot\) <i>F. Liu, G. Lin, R. Qiao, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1703.08764" target="_blank">arXiv</a><a href="data/bibtex/TNNLS2017Liu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structured+Learning+of+Tree+Potentials+in+{CRF}+for+Image+Segmentation+Liu,+Fayao+and+Lin,+Guosheng+and+Qiao,+Ruizhi+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Wu2017ExternalxxxarXiv.jpg"><b>Image captioning and visual question answering based on attributes and external knowledge</b>
<br />\(\cdot\) <i>Q. Wu, C. Shen, P. Wang, A. Dick, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1603.02814" target="_blank">arXiv</a><a href="data/bibtex/Wu2017External.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Image+Captioning+and+Visual+Question+Answering+Based+on+Attributes+and+External+Knowledge+Wu,+Qi+and+Shen,+Chunhua+and+Wang,+Peng+and+Dick,+Anthony+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TPAMI2017LiuxxxarXiv.jpg"><b>Compositional model based Fisher vector coding for image classification</b>
<br />\(\cdot\) <i>L. Liu, P. Wang, C. Shen, L. Wang, A. van den Hengel, C. Wang, H. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1601.04143" target="_blank">arXiv</a><a href="data/bibtex/TPAMI2017Liu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Compositional+Model+based+{F}isher+Vector+Coding+for+Image+Classification+Liu,+Lingqiao+and+Wang,+Peng+and+Shen,+Chunhua+and+Wang,+Lei+and+{van+den+Hengel},+Anton+and+Wang,+Chao+and+Shen,+Heng+Tao" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1510.00921.pdf"><img class="imgP right" src="data/thumbnail/Cross2017LiuxxxarXiv.jpg"></a><b>Cross-convolutional-layer pooling for image recognition</b>
<br />\(\cdot\) <i>L. Liu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1510.00921" target="_blank">arXiv</a><a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7779086" target="_blank">link</a><a href="data/bibtex/Cross2017Liu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Cross-convolutional-layer+Pooling+for+Image+Recognition+Liu,+Lingqiao+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Lin2017SemanticxxxarXiv.jpg"><b>Exploring context with deep structured models for semantic segmentation</b>
<br />\(\cdot\) <i>G. Lin, C. Shen, A. van den Hengel, I. Reid</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1603.03183" target="_blank">arXiv</a><a href="data/bibtex/Lin2017Semantic.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Exploring+Context+with+Deep+Structured+models+for+Semantic+Segmentation+Lin,+Guosheng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Reid,+Ian" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1511.08531.pdf"><img class="imgP right" src="data/thumbnail/CVIU2016xxxarXiv.jpg"></a><b>Structured learning of metric ensembles with application to person re-identification</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, L. Wu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>Computer Vision and Image Understanding (CVIU), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1511.08531" target="_blank">arXiv</a><a href="data/bibtex/CVIU2016.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Structured+learning+of+metric+ensembles+with+application+to+person+re-identification+Paisitkriangkrai,+Sakrapee+and+Wu,+Lin+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Zhang2015IJCVxxxarXiv.jpg"><b>Unsupervised feature learning for dense correspondences across scenes</b>
<br />\(\cdot\) <i>C. Zhang, C. Shen, T. Shen</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1501.00642" target="_blank">arXiv</a><a href="data/bibtex/Zhang2015IJCV.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Unsupervised+Feature+Learning+for+Dense+Correspondences+across+Scenes+Zhang,+Chao+and+Shen,+Chunhua+and+Shen,+Tingzhi" target="_blank">search</a><a href="https://bitbucket.org/chhshen/ufl" target="_blank">project webpage</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1404.5009.pdf"><img class="imgP right" src="data/thumbnail/BnB2015WangxxxarXiv.jpg"></a><b>Efficient semidefinite branch-and-cut for MAP-MRF inference</b>
<br />\(\cdot\) <i>P. Wang, C. Shen, A. van den Hengel, P. Torr</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1404.5009" target="_blank">arXiv</a><a href="http://doi.org/10.1007/s11263-015-0865-2" target="_blank">link</a><a href="data/bibtex/BnB2015Wang.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Efficient+Semidefinite+Branch-and-Cut+for+{MAP-MRF}+Inference+Wang,+Peng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Torr,+Philip" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Yao2016IJCVxxxarXiv.jpg"><b>Mining mid-level visual patterns with deep CNN activations</b>
<br />\(\cdot\) <i>Y. Li, L. Liu, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1506.06343" target="_blank">arXiv</a><a href="http://rdcu.be/j1mA" target="_blank">link</a><a href="data/bibtex/Yao2016IJCV.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Mining+Mid-level+Visual+Patterns+with+Deep+{CNN}+Activations+Li,+Yao+and+Liu,+Lingqiao+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a><a href="https://github.com/yaoliUoA/MDPM" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2016TrackingxxxarXiv.jpg"><b>Online unsupervised feature learning for visual tracking</b>
<br />\(\cdot\) <i>F. Liu, C. Shen, I. Reid, A. van den Hengel</i>.
<br />\(\cdot\) <i>Image and Vision Computing (IVC), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1310.1690" target="_blank">arXiv</a><a href="data/bibtex/Liu2016Tracking.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Online+Unsupervised+Feature+Learning+for+Visual+Tracking+Liu,+Fayao+and+Shen,+Chunhua+and+Reid,+Ian+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><b>Canonical principal angles correlation analysis for two-view data</b>
<br />\(\cdot\) <i>S. Wang, J. Lu, X. Gu, C. Shen, R. Xia, J. Yang</i>.
<br />\(\cdot\) <i>Journal of Visual Communication and Image Representation (JVCIR), 2016</i>.
<br />\(\cdot\) <a href="http://dx.doi.org/10.1016/j.jvcir.2015.12.001" target="_blank">link</a><a href="data/bibtex/Canonical2016Wang.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Canonical+principal+angles+correlation+analysis+for+two-view+data+Wang,+Sheng+and+Lu,+Jianfeng+and+Gu,+Xingjian+and+Shen,+Chunhua+and+Xia,+Rui+and+Yang,+Jingyu" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/PRFace2016ShenxxxarXiv.jpg"><b>Face image classification by pooling raw features</b>
<br />\(\cdot\) <i>F. Shen, C. Shen, X. Zhou, Y. Yang, H. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1406.6811" target="_blank">arXiv</a><a href="data/bibtex/PRFace2016Shen.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Face+Image+Classification+by+Pooling+Raw+Features+Shen,+Fumin+and+Shen,+Chunhua+and+Zhou,+Xiang+and+Yang,+Yang+and+Shen,+Heng+Tao" target="_blank">search</a><a href="https://github.com/bd622/FacePooling" target="_blank">project webpage</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1110.0264.pdf"><img class="imgP right" src="data/thumbnail/Face2016LixxxarXiv.jpg"></a><b>Face recognition using linear representation ensembles</b>
<br />\(\cdot\) <i>H. Li, F. Shen, C. Shen, Y. Yang, Y. Gao</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1110.0264" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1016/j.patcog.2015.12.011" target="_blank">link</a><a href="data/bibtex/Face2016Li.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Face+Recognition+Using+Linear+Representation+Ensembles+Li,+Hanxi+and+Shen,+Fumin+and+Shen,+Chunhua+and+Yang,+Yang+and+Gao,+Yongsheng" target="_blank">search</a>
</p>
</li>
<li><p><b>Fast detection of multiple objects in traffic scenes with a common detection framework</b>
<br />\(\cdot\) <i>Q. Hu, S. Paisitkriangkrai, C. Shen, A. van den Hengel, F. Porikli</i>.
<br />\(\cdot\) <i>IEEE Transactions on Intelligent Transportation Systems (T-ITS), 2016</i>.
<br />\(\cdot\) <a href="data/bibtex/Hu2015T-ITS.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Fast+Detection+of+Multiple+Objects+in+Traffic+Scenes+with+a+Common+Detection+Framework+Hu,+Qichang+and+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Porikli,+Fatih" target="_blank">search</a>
</p>
</li>
<li><p><b>Part-based robust tracking using online latent structured learning</b>
<br />\(\cdot\) <i>R. Yao, Q. Shi, C. Shen, Y. Zhang, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2016</i>.
<br />\(\cdot\) <a href="http://dx.doi.org/10.1109/TCSVT.2016.2527358" target="_blank">link</a><a href="data/bibtex/Part2016Yao.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Part-based+robust+tracking+using+online+latent+structured+learning+Yao,+Rui+and+Shi,+Qinfeng+and+Shen,+Chunhua+and+Zhang,+Yanning+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Pooling2016WangxxxarXiv.jpg"><b>Temporal pyramid pooling based convolutional neural network for action recognition</b>
<br />\(\cdot\) <i>P. Wang, Y. Cao, C. Shen, L. Liu, H. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1503.01224" target="_blank">arXiv</a><a href="data/bibtex/Pooling2016Wang.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Temporal+Pyramid+Pooling+Based+Convolutional+Neural+Network+for+Action+Recognition+Wang,+Peng+and+Cao,+Yuanzhouhan+and+Shen,+Chunhua+and+Liu,+Lingqiao+and+Shen,+Heng+Tao" target="_blank">search</a>
</p>
</li>
<li><p><b>Dictionary learning for promoting structured sparsity in hyperspectral compressive sensing</b>
<br />\(\cdot\) <i>L. Zhang, W. Wei, Y. Zhang, C. Shen, A. van den Hengel, Q. Shi</i>.
<br />\(\cdot\) <i>IEEE Transactions on Geoscience and Remote Sensing (TGRS), 2016</i>.
<br />\(\cdot\) <a href="data/bibtex/Zhang2016TGSE.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Dictionary+Learning+for+Promoting+Structured+Sparsity+in+Hyperspectral+Compressive+Sensing+Zhang,+Lei+and+Wei,+Wei+and+Zhang,+Yanning+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Shi,+Qinfeng" target="_blank">search</a>
</p>
</li>
<li><p><b>Scalable linear visual feature learning via online parallel nonnegative matrix factorization</b>
<br />\(\cdot\) <i>X. Zhao, X. Li, Z. Zhang, C. Shen, L. Gao, X. Li</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2016</i>.
<br />\(\cdot\) <a href="http://dx.doi.org/10.1109/TNNLS.2015.2499273" target="_blank">link</a><a href="data/bibtex/Zhao2015TNN.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Scalable+Linear+Visual+Feature+Learning+via+Online+Parallel+Nonnegative+Matrix+Factorization+Zhao,+Xueyi+and+Li,+Xi+and+Zhang,+Zhongfei+and+Shen,+Chunhua+and+Gao,+Lixin+and+Li,+Xuelong" target="_blank">search</a>
</p>
</li>
<li><p><b>Large-scale binary quadratic optimization using semidefinite relaxation and applications</b>
<br />\(\cdot\) <i>P. Wang, C. Shen, A. van den Hengel, P. Torr</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1411.7564" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TPAMI.2016.2541146" target="_blank">link</a><a href="data/bibtex/BQP2015Wang.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Large-scale+Binary+Quadratic+Optimization+Using+Semidefinite+Relaxation+and+Applications+Wang,+Peng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Torr,+Philip+H.+S." target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Paisitkriangkrai2015TPAMIxxxarXiv.jpg"><b>Pedestrian detection with spatially pooled features and structured ensemble learning</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1409.5209" target="_blank">arXiv</a><a href="http://doi.org/10.1109/TPAMI.2015.2474388" target="_blank">link</a><a href="data/bibtex/Paisitkriangkrai2015TPAMI.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Pedestrian+Detection+with+Spatially+Pooled+Features+and+Structured+Ensemble+Learning+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a><a href="https://github.com/chhshen/pedestrian-detection" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2015TPAMIxxxarXiv.jpg"><b>A generalized probabilistic framework for compact codebook creation</b>
<br />\(\cdot\) <i>L. Liu, L. Wang, C. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1401.7713" target="_blank">arXiv</a><a href="http://doi.org/10.1109/TPAMI.2015.2441069" target="_blank">link</a><a href="data/bibtex/Liu2015TPAMI.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Generalized+Probabilistic+Framework+for+Compact+Codebook+Creation+Liu,+Lingqiao+and+Wang,+Lei+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Depth2015LiuxxxarXiv.jpg"><b>Learning depth from single monocular images using deep convolutional neural fields</b>
<br />\(\cdot\) <i>F. Liu, C. Shen, G. Lin, I. Reid</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1502.07411" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TPAMI.2015.2505283" target="_blank">link</a><a href="data/bibtex/Depth2015Liu.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Learning+Depth+from+Single+Monocular+Images+Using+Deep+Convolutional+Neural+Fields+Liu,+Fayao+and+Shen,+Chunhua+and+Lin,+Guosheng+and+Reid,+Ian" target="_blank">search</a><a href="http://goo.gl/rAKWrS" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Xi2015TPAMIxxxarXiv.jpg"><b>Online metric-weighted linear representations for robust visual tracking</b>
<br />\(\cdot\) <i>X. Li, C. Shen, A. Dick, Z. Zhang, Y. Zhuang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1507.05737" target="_blank">arXiv</a><a href="data/bibtex/Xi2015TPAMI.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Online+Metric-Weighted+Linear+Representations+for+Robust+Visual+Tracking+Li,+Xi+and+Shen,+Chunhua+and+Dick,+Anthony+and+Zhang,+Zhongfei+and+Zhuang,+Yueting" target="_blank">search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1401.8126.pdf"><img class="imgP right" src="data/thumbnail/Harandi2015IJCVxxxarXiv.jpg"></a><b>Extrinsic methods for coding and dictionary learning on Grassmann manifolds</b>
<br />\(\cdot\) <i>M. Harandi, R. Hartley, C. Shen, B. Lovell, C. Sanderson</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1401.8126" target="_blank">arXiv</a><a href="data/bibtex/Harandi2015IJCV.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Extrinsic+Methods+for+Coding+and+Dictionary+Learning+on+{G}rassmann+Manifolds+Harandi,+Mehrtash+and+Hartley,+Richard+and+Shen,+Chunhua+and+Lovell,+Brian+and+Sanderson,+Conrad" target="_blank">search</a><a href="https://github.com/chhshen/Grassmann/" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2015CRFPRxxxarXiv.jpg"><b>CRF learning with CNN features for image segmentation</b>
<br />\(\cdot\) <i>F. Liu, G. Lin, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1503.08263" target="_blank">arXiv</a><a href="data/bibtex/Liu2015CRFPR.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={CRF}+Learning+with+{CNN}+Features+for+Image+Segmentation+Liu,+Fayao+and+Lin,+Guosheng+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Hashing2015ShenxxxarXiv.jpg"><b>Hashing on nonlinear manifolds</b>
<br />\(\cdot\) <i>F. Shen, C. Shen, Q. Shi, A. van den Hengel, Z. Tang, H. Shen</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1412.0826" target="_blank">arXiv</a><a href="data/bibtex/Hashing2015Shen.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Hashing+on+Nonlinear+Manifolds+Shen,+Fumin+and+Shen,+Chunhua+and+Shi,+Qinfeng+and+{van+den+Hengel},+Anton+and+Tang,+Zhenmin+and+Shen,+Heng+Tao" target="_blank">search</a><a href="https://github.com/chhshen/Hashing-on-Nonlinear-Manifolds" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/TIP2014ShortcutxxxarXiv.jpg"><b>A computational model of the short-cut rule for 2D shape decomposition</b>
<br />\(\cdot\) <i>L. Luo, C. Shen, X. Liu, C. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1409.2104" target="_blank">arXiv</a><a href="data/bibtex/TIP2014Shortcut.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Computational+Model+of+the+Short-Cut+Rule+for+{2D}+Shape+Decomposition+Luo,+Lei+and+Shen,+Chunhua+and+Liu,+Xinwang+and+Zhang,+Chunyuan" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/SDP2015LixxxarXiv.jpg"><b>Worst-case linear discriminant analysis as scalable semidefinite feasibility problems</b>
<br />\(\cdot\) <i>H. Li, C. Shen, A. van den Hengel, Q. Shi</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1411.7450" target="_blank">arXiv</a><a href="data/bibtex/SDP2015Li.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Worst-Case+Linear+Discriminant+Analysis+as+Scalable+Semidefinite+Feasibility+Problems+Li,+Hui+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Shi,+Qinfeng" target="_blank">search</a><a href="https://github.com/chhshen/SDP-WLDA" target="_blank">project webpage</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1408.5574.pdf"><img class="imgP right" src="data/thumbnail/FastHash2015LinxxxarXiv.jpg"></a><b>Supervised hashing using graph cuts and boosted decision trees</b>
<br />\(\cdot\) <i>G. Lin, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2015</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1408.5574" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TPAMI.2015.2404776" target="_blank">link</a><a href="data/bibtex/FastHash2015Lin.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Supervised+Hashing+Using+Graph+Cuts+and+Boosted+Decision+Trees+Lin,+Guosheng+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a><a href="https://bitbucket.org/chhshen/fasthash/" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Shen2014OutlierxxxarXiv.jpg"><b>Fast approximate \(l_\infty\) minimization: Speeding up robust regression</b>
<br />\(\cdot\) <i>F. Shen, C. Shen, R. Hill, A. van den Hengel, Z. Tang</i>.
<br />\(\cdot\) <i>Computational Statistics and Data Analysis (CSDA), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1304.1250" target="_blank">arXiv</a><a href="data/bibtex/Shen2014Outlier.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Fast+approximate+L_\infty+minimization:+{S}peeding+up+robust+regression+Shen,+Fumin+and+Shen,+Chunhua+and+Hill,+Rhys+and+{van+den+Hengel},+Anton+and+Tang,+Zhenmin" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Liu2014MKLxxxarXiv.jpg"><b>Multiple kernel learning in the primal for multi-modal Alzheimer's disease classification</b>
<br />\(\cdot\) <i>F. Liu, L. Zhou, C. Shen, J. Yin</i>.
<br />\(\cdot\) <i>IEEE Journal of Biomedical and Health Informatics (JBHI), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1310.0890" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/JBHI.2013.2285378" target="_blank">link</a><a href="data/bibtex/Liu2014MKL.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Multiple+Kernel+Learning+in+the+Primal+for+Multi-modal+{A}lzheimer's+Disease+Classification+Liu,+Fayao+and+Zhou,+Luping+and+Shen,+Chunhua+and+Yin,+Jianping" target="_blank">search</a>
</p>
<ol reversed>
<li><p>Published online at IEEE: 10 October 2013.
</p>
</li></ol>
</li>
<li><p><b>Multiple kernel clustering based on centered kernel alignment</b>
<br />\(\cdot\) <i>Y. Lu, L. Wang, J. Lu, J. Yang, C. Shen</i>.
<br />\(\cdot\) <i>Pattern Recognition (PR), 2014</i>.
<br />\(\cdot\) <a href="data/bibtex/MKL2014.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Multiple+kernel+clustering+based+on+centered+kernel+alignment+Lu,+Yanting+and+Wang,+Liantao+and+Lu,+Jianfeng+and+Yang,+Jingyu+and+Shen,+Chunhua" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Yan2014TIPaxxxarXiv.jpg"><b>Efficient semidefinite spectral clustering via Lagrange duality</b>
<br />\(\cdot\) <i>Y. Yan, C. Shen, H. Wang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1402.5497" target="_blank">arXiv</a><a href="data/bibtex/Yan2014TIPa.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Efficient+Semidefinite+Spectral+Clustering+via+{L}agrange+Duality+Yan,+Yan+and+Shen,+Chunhua+and+Wang,+Hanzi" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Paul2014TIPbxxxarXiv.jpg"><b>Large-margin learning of compact binary image encodings</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1402.6383" target="_blank">arXiv</a><a href="data/bibtex/Paul2014TIPb.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Large-margin+Learning+of+Compact+Binary+Image+Encodings+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Li2014TIPxxxarXiv.jpg"><b>Characterness: An indicator of text in the wild</b>
<br />\(\cdot\) <i>Y. Li, W. Jia, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1309.6691" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TIP.2014.2302896" target="_blank">link</a><a href="data/bibtex/Li2014TIP.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Characterness:+{A}n+Indicator+of+Text+in+the+Wild+Li,+Yao+and+Jia,+Wenjing+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a><a href="https://github.com/yaoliUoA/characterness" target="_blank">project webpage</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Li2013HyperxxxarXiv.jpg"><b>Context-aware hypergraph construction for robust spectral clustering</b>
<br />\(\cdot\) <i>X. Li, W. Hu, C. Shen, A. Dick, Z. Zhang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Knowledge and Data Engineering (TKDE), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1401.0764" target="_blank">arXiv</a><a href="http://doi.ieeecomputersociety.org/10.1109/TKDE.2013.126" target="_blank">link</a><a href="data/bibtex/Li2013Hyper.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Context-aware+hypergraph+construction+for+robust+spectral+clustering+Li,+Xi+and+Hu,+Weiming+and+Shen,+Chunhua+and+Dick,+Anthony+and+Zhang,+Zhongfei" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Paul2013TMMxxxarXiv.jpg"><b>Asymmetric pruning for learning cascade detectors</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Multimedia (TMM), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1303.6066" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TMM.2014.2308723" target="_blank">link</a><a href="data/bibtex/Paul2013TMM.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Asymmetric+pruning+for+learning+cascade+detectors+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Shen2014MetricxxxarXiv.jpg"><b>Efficient dual approach to distance metric learning</b>
<br />\(\cdot\) <i>C. Shen, J. Kim, F. Liu, L. Wang, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1302.3219" target="_blank">arXiv</a><a href="data/bibtex/Shen2014Metric.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Efficient+Dual+Approach+to+Distance+Metric+Learning+Shen,+Chunhua+and+Kim,+Junae+and+Liu,+Fayao+and+Wang,+Lei+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Paul2013FastboostingxxxPDF.jpg"><b>A scalable stage-wise approach to large-margin multi-class loss based boosting</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, C. Shen, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1307.5497" target="_blank">arXiv</a><a href="http://dx.doi.org/10.1109/TNNLS.2013.2282369" target="_blank">link</a><a href="https://bytebucket.org/chhshen/data/raw/7e2f958b104603e54e9d8376a8e1672363f742a3/papers/Paisitkriangkrai2014TNNLS.pdf" target="_blank">pdf</a><a href="data/bibtex/Paul2013Fastboosting.bib" target="_blank">bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+scalable+stage-wise+approach+to+large-margin+multi-class+loss+based+boosting+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton" target="_blank">search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Paisitkriangkrai2013RandomBoostxxxarXiv.jpg"><b>RandomBoost: Simplified multi-class boosting through randomization</b>
<br />\(\cdot\) <i>S. Paisitkriangkrai, C. Shen, Q. Shi, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Neural Networks and Learning Systems (TNN), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1302.0963" target=“blank”>arXiv</a><a href="http://dx.doi.org/10.1109/TNNLS.2013.2281214" target=“blank”>link</a><a href="data/bibtex/Paisitkriangkrai2013RandomBoost.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={RandomBoost}:+{S}implified+Multi-class+Boosting+through+Randomization+Paisitkriangkrai,+Sakrapee+and+Shen,+Chunhua+and+Shi,+Qinfeng+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>A hierarchical word-merging algorithm with class separability measure</b>
<br />\(\cdot\) <i>L. Wang, L. Zhou, C. Shen, L. Liu, H. Liu</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014</i>.
<br />\(\cdot\) <a href="https://bitbucket.org/chhshen/chhshen.bitbucket.org/src/be12d4ef8deb6207ec97f0fdac6efbe2df151b59/_download/TPAMI14Wang.pdf" target=“blank”>pdf</a><a href="data/bibtex/Wang2014PAMI.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=A+Hierarchical+Word-merging+Algorithm+with+Class+Separability+Measure+Wang,+Lei+and+Zhou,+Luping+and+Shen,+Chunhua+and+Liu,+Lingqiao+and+Liu,+Huan" target=“blank”>search</a>
</p>
</li>
<li><p><img class="imgP right" src="data/thumbnail/Shen2014SBoostingxxxarXiv.jpg"><b>StructBoost: Boosting methods for predicting structured output variables</b>
<br />\(\cdot\) <i>C. Shen, G. Lin, A. van den Hengel</i>.
<br />\(\cdot\) <i>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1302.3283" target=“blank”>arXiv</a><a href="http://dx.doi.org/10.1109/TPAMI.2014.2315792" target=“blank”>link</a><a href="http://goo.gl/goCVLK" target=“blank”>pdf</a><a href="data/bibtex/Shen2014SBoosting.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q={StructBoost}:+{B}oosting+Methods+for+Predicting+Structured+Output+Variables+Shen,+Chunhua+and+Lin,+Guosheng+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><a class="imglink" target="_blank" href="https://arxiv.org/pdf/1301.2032.pdf"><img class="imgP right" src="data/thumbnail/FisherBoost2013IJCVxxxarXiv.jpg"></a><b>Training effective node classifiers for cascade classification</b>
<br />\(\cdot\) <i>C. Shen, P. Wang, S. Paisitkriangkrai, A. van den Hengel</i>.
<br />\(\cdot\) <i>International Journal of Computer Vision (IJCV), 2013</i>.
<br />\(\cdot\) <a href="https://arxiv.org/abs/1301.2032" target=“blank”>arXiv</a><a href="http://link.springer.com/article/10.1007%2Fs11263-013-0608-1" target=“blank”>link</a><a href="data/bibtex/FisherBoost2013IJCV.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Training+Effective+Node+Classifiers+for+Cascade+Classification+Shen,+Chunhua+and+Wang,+Peng+and+Paisitkriangkrai,+Sakrapee+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>Fully corrective boosting with arbitrary loss and regularization</b>
<br />\(\cdot\) <i>C. Shen, H. Li, A. van den Hengel</i>.
<br />\(\cdot\) <i>Neural Networks (NN), 2013</i>.
<br />\(\cdot\) <a href="http://hdl.handle.net/2440/78929" target=“blank”>pdf</a><a href="data/bibtex/Shen2013NN.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Fully+Corrective+Boosting+with+Arbitrary+Loss+and+Regularization+Shen,+Chunhua+and+Li,+Hanxi+and+{van+den+Hengel},+Anton" target=“blank”>search</a>
</p>
</li>
<li><p><b>Approximate least trimmed sum of squares fitting and applications in image analysis</b>
<br />\(\cdot\) <i>F. Shen, C. Shen, A. van den Hengel, Z. Tang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2013</i>.
<br />\(\cdot\) <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6408142" target=“blank”>link</a><a href="http://hdl.handle.net/2440/79428" target=“blank”>pdf</a><a href="data/bibtex/LMS2013TIP.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Approximate+Least+Trimmed+Sum+of+Squares+Fitting+and+Applications+in+Image+Analysis+Shen,+Fumin+and+Shen,+Chunhua+and+{van+den+Hengel},+Anton+and+Tang,+Zhenmin" target=“blank”>search</a>
</p>
</li>
<li><p><b>Visual tracking with spatio-temporal Dempster-Shafer information fusion</b>
<br />\(\cdot\) <i>X. Li, A. Dick, C. Shen, Z. Zhang, A. van den Hengel, H. Wang</i>.
<br />\(\cdot\) <i>IEEE Transactions on Image Processing (TIP), 2013</i>.
<br />\(\cdot\) <a href="http://hdl.handle.net/2440/77448" target=“blank”>pdf</a><a href="data/bibtex/Xi2013TIP.bib" target=“blank”>bibtex</a><a href="https://scholar.google.com/scholar?lr&ie=UTF-8&oe=UTF-8&q=Visual+Tracking+with+Spatio-Temporal+{Dempster-Shafer}+Information+Fusion+Li,+Xi+and+Dick,+Anthony+and+Shen,+Chunhua+and+Zhang,+Zhongfei+and+{van+den+Hengel},+Anton+and+Wang,+Hanzi" target=“blank”>search</a>
</p>
</li>