I am currently an Assistant Research Professor in the CSE Department at HKUST and an Affiliated Assistant Professor at MBZUAI. Previously, I was a postdoctoral researcher and research lead in the group of Professor Eric Xing and Professor Marios Savvides (CyLab, CMU). My research interests span machine learning, computer vision, efficient deep learning, etc. Prior to CMU, I was fortunate to be a joint-training Ph.D. student (2017-2019) in the IFP group at UIUC, advised by Prof. Thomas S. Huang.



Please send me your CV if you are interested in working with me.


Email: zhiqiangshen0214 AT gmail.com
zhiqiangshen AT ust.hk | Zhiqiang.Shen AT mbzuai.ac.ae
zhiqians AT andrew.cmu.edu | shen54 AT illinois.edu | zhiqiangshen13 AT fudan.edu.cn
[Google Scholar] |  [Github] |  [Zhihu] |  [Twitter]



Research Interest

My research focuses on the broad areas of machine learning, deep learning, and their applications in computer vision and language. Specifically, I am interested in deep learning methods for object detection, fine-grained recognition, image/video captioning, domain adaptation, etc. Recently, I have focused on:

  • Low-bit Networks
  • Knowledge Distillation
  • Designing and Training Highly-efficient Network Structures for CNNs and Transformers
  • Weakly-supervised/Un(Self-)supervised Learning
  • Image Understanding, Including Object Detection, Captioning and Fine-grained Recognition
  • Few-shot and Zero-shot Learning

News

Recent & Selected Publications (Full List)

(*: equal contribution; ✝: corresponding author)

Zhiqiang Shen, Eric Xing.
A Fast Knowledge Distillation Framework for Visual Recognition
European Conference on Computer Vision (ECCV), 2022.
Achieves state-of-the-art accuracy of 80.1% (SGD) and 80.5% (AdamW) on ResNet-50 with plain training, while running 16% faster than regular classification frameworks.
Project Page  |  Code & Models  |  Camera-Ready  |  arXiv Paper

Zhiqiang Shen, Zechun Liu, Eric Xing.
Sliced Recursive Transformer
European Conference on Computer Vision (ECCV), 2022.
Code & Models  |  arXiv Paper  |  Media (In Chinese)

Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing.
Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning
Association for the Advancement of Artificial Intelligence (AAAI), 2022.
Achieves state-of-the-art accuracy on small datasets such as CIFAR-10/100 and Tiny-ImageNet.
Code & Models  |  arXiv Paper  |  Media  &  Zhihu (In Chinese)

Zechun Liu, Zhiqiang Shen, Yun Long, Eric Xing, Kwang-Ting Cheng, Chas Leichner.
Data-Free Neural Architecture Search via Recursive Label Calibration
European Conference on Computer Vision (ECCV), 2022.
arXiv Paper

Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Xianghong Hu, Jeffry Wicaksana, Eric Xing, Kwang-Ting Cheng.
SDQ: Stochastic Differentiable Quantization with Mixed Precision
International Conference on Machine Learning (ICML), 2022.
Project Page  |  arXiv Paper

Arnav Chavan*, Zhiqiang Shen*,✝, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric Xing.
Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Code & Models  |  arXiv Paper

Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P Xing, Zhiqiang Shen.
Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Code & Models  |  arXiv Paper

Zhuang Liu, Hungju Wang, Tinghui Zhou, Zhiqiang Shen, Bingyi Kang, Evan Shelhamer, Trevor Darrell.
Exploring Simple and Transferable Recognition-Aware Image Processing
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022.
Code & Models  |  Paper

Zechun Liu*,✝, Zhiqiang Shen*,✝, Shichao Li, Koen Helwegen, Dong Huang, Kwang-Ting Cheng.
How Do Adam and Training Strategies Help BNNs Optimization
International Conference on Machine Learning (ICML), 2021.
Code & Models  |  Paper

Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides.
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
International Conference on Learning Representations (ICLR), 2021.
OpenReview (Rating: 8 6 6 6)  |  Project Page  |  Paper  |  Zhihu (in Chinese)
A new perspective on the relationship between knowledge distillation and label smoothing. Reviewers acknowledged that this paper makes a breakthrough in understanding the correlation between label smoothing and knowledge distillation.

Zhiqiang Shen*, Zechun Liu*, Jie Qin, Marios Savvides, Kwang-Ting Cheng.
Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning
Association for the Advancement of Artificial Intelligence (AAAI), 2021.
arXiv Paper (Code for searching is adapted from here.)
A search-based fine-tuning method for few-shot learning.

Zhiqiang Shen, Marios Savvides.
MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
Technical report. A short version was accepted to the NeurIPS 2020 workshop on Beyond BackPropagation: Novel Ideas for Training Neural Architectures.
Code & Models  |  arXiv Paper
We achieve 80.67% top-1 accuracy using a single 224×224 crop on vanilla ResNet-50, the first work to boost vanilla ResNet-50 beyond 80% on ImageNet without architecture modification or additional training data. Our result can be regarded as a new strong baseline for knowledge distillation on ResNet-50.
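MEAL V2 distills knowledge from teacher models into a vanilla ResNet-50. As a minimal illustration of the generic soft-label distillation loss this line of work builds on (not the paper's exact objective, which additionally uses an ensemble of teachers and a discriminator), a sketch in PyTorch:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=1.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # 'batchmean' averages over the batch, matching the KL definition;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Toy usage: identical logits give (near-)zero loss; different logits do not.
s = torch.randn(4, 10)
loss_same = distillation_loss(s, s.clone(), T=4.0)
loss_diff = distillation_loss(s, torch.randn(4, 10), T=4.0)
```

In practice the student is trained on this soft-label loss alone (MEAL V2 uses no ground-truth hard labels during distillation).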

Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides.
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Self-supervised BNNs trained with a distillation loss (5.5-15% improvement over the contrastive baseline).
Code & Models  |  arXiv Paper

Zhiqiang Shen, Mingyang Huang, Jianping Shi, Zechun Liu, Harsh Maheshwari, Yutong Zheng, Xiangyang Xue, Marios Savvides, Thomas S. Huang.
CDTD: A Large-Scale Cross-Domain Benchmark for Instance-Level Image-to-Image Translation and Domain Adaptive Object Detection
International Journal of Computer Vision (IJCV), 2020.
Code & Models  |  Paper

Zhiqiang Shen, Zhuang Liu, Jianguo Li, Yu-Gang Jiang, Yurong Chen, Xiangyang Xue.
Object Detection from Scratch with Deep Supervision
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
Code & Models  |  arXiv Paper

Zhiqiang Shen, Mingyang Huang, Jianping Shi, Xiangyang Xue, Thomas S. Huang.
Towards Instance-level Image-to-Image Translation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Project  |  Paper  |  Dataset

Zhiqiang Shen*, Zhankui He*, Xiangyang Xue.
MEAL: Multi-Model Ensemble via Adversarial Learning
Association for the Advancement of Artificial Intelligence (AAAI), 2019. (Oral)
Code & Models  |  Our ResNet-50 (Top-1/5: 21.70%/5.99%)   [PyTorch Model (102.5M)]

Zhiqiang Shen, Honghui Shi, Jiahui Yu, Hai Phan, Rogerio Feris, Liangliang Cao, Ding Liu, Xinchao Wang, Thomas Huang, Marios Savvides.
Improving Object Detection from Scratch via Gated Feature Reuse
30th British Machine Vision Conference (BMVC), 2019.

Zhiqiang Shen*, Zhuang Liu*, Jianguo Li, Yu-Gang Jiang, Yurong Chen, Xiangyang Xue.
DSOD: Learning Deeply Supervised Object Detectors from Scratch
IEEE International Conference on Computer Vision (ICCV), 2017.
Code & Models  |  Paper

Zhiqiang Shen, Jianguo Li, Zhou Su, Minjun Li, Yurong Chen, Yu-Gang Jiang, Xiangyang Xue.
Weakly Supervised Dense Video Captioning
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Project  |  Paper

Academic Activities

  • Meta-Reviewer (SPC): AAAI 2023.
  • Conference reviewer: ICLR 2023, NeurIPS 2022, ICML 2022, ICLR 2022, ECCV 2022, CVPR 2022, NeurIPS 2021, ICML 2021, CVPR 2021, AAAI 2021, WACV 2021, NeurIPS 2020, ECCV 2020, BMVC 2020, IJCAI 2020, CVPR 2020, AAAI 2020, ICCV 2019, CVPR 2019, AAAI 2019, CVPR 2018, ACCV 2018, NIPS 2016.
  • Journal reviewer: TPAMI, IJCV, TMLR, TIP, TMM, JVCI, etc.

Awards and Honors

  • CVPR 2019 Doctoral Consortium travel award. Mentor: Prof. Trevor Darrell.
  • ICLR travel award, 2019
  • AAAI 2019 student scholarship award, 2018
  • ICCV student volunteer, 2017
  • Huawei scholarship, 2017
  • During my internship, our team won the Intel China Award (ICA), the highest award for team achievement in Intel China, 2016
  • Tung OOCL scholarship, 2015
  • Special Grade Scholarship, 2013
  • University-level Outstanding Student, 2013

Competitions

  • iMaterialist Challenge on Product Recognition (Fine-grained image classification of products at FGVC6, CVPR'19 workshop): ranked 4th globally (Team leader).
  • MSR-VTT Challenge (video captioning): ranked 4th in human evaluation and ranked 5th in the automatic evaluation metrics (Team leader), 2016
  • Top 10% in Kaggle Competition of Right Whale Recognition, 2016
  • Second Prize in DataCastle Competition of the Verification Code Recognition, 2016
  • Second Prize (National-level) in China Graduate Student Mathematical Contest in Modeling, 2015
  • MCM/ICM -- Honorable Mention, 2012
  • First Prize (National-level) in Electrical Engineering Mathematical Contest in Modeling, 2012
  • First Prize (National-level) in China Undergraduate Mathematical Contest in Modeling, 2011 (大学生数学建模竞赛全国一等奖)
  • Second Prize of Jiangsu High School Physics Competition (江苏省高中物理竞赛二等奖)

Teaching Assistant

  • 2015.9-2016.1, Fudan University, COMP120008.02, C++ Language Programming