Selected Publications

2024

Relational Diffusion Distillation for Efficient Image Generation.
Weilun Feng, Chuanguang Yang, Zhulin An, Libo Huang, Boyu Diao, Fei Wang, Yongjun Xu.
in ACM International Conference on Multimedia (ACM MM-2024 Oral)
CCF-A, Acceptance rate: 1149/4385=26.20%, Oral rate: 174/4385=3.97%
[Paper] [Code]
We propose Relational Diffusion Distillation, a novel distillation method tailored for diffusion models.
DetKDS: Knowledge Distillation Search for Object Detectors.
Lujun Li, Yufan Bao, Peijie Dong, Chuanguang Yang, Anggeng Li, Wenhan Luo, Qifeng Liu, Wei Xue, Yike Guo.
in International Conference on Machine Learning (ICML-2024)
CCF-A, Acceptance rate: 2610/9473=27.5%
[Paper] [Code]
We leverage search algorithms to discover optimal distillers for object detectors.
Online Policy Distillation with Decision-Attention.
Xinqiang Yu, Chuanguang Yang, Chengqing Yu, Libo Huang, Zhulin An, Yongjun Xu.
in International Joint Conference on Neural Networks (IJCNN-2024)
CCF-C, Acceptance rate: 1701/3272=51.99%
[Paper]
We propose Online Policy Distillation with Decision-Attention, which enables multiple policies operating in the same environment to learn from different perspectives.
CLIP-KD: An Empirical Study of CLIP Model Distillation.
Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Boyu Diao, Yongjun Xu.
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-2024)
CCF-A, Acceptance rate: 2719/11532=23.6%
[Paper] [Code]
We propose several distillation strategies, including relation, feature, gradient, and contrastive paradigms, to examine the effectiveness of CLIP knowledge distillation.
eTag: Class-Incremental Learning with Embedding Distillation and Task-Oriented Generation.
Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu.
in AAAI Conference on Artificial Intelligence (AAAI-2024)
CCF-A, Acceptance rate: 2342/9862=23.75%
[Paper]
We propose a method of embedding distillation and task-oriented generation for class-incremental learning, which requires neither exemplars nor prototypes.

2023

VL-Match: Enhancing Vision-Language Pretraining with Token-Level and Instance-Level Matching.
Junyu Bi, Daixuan Cheng, Ping Yao, Bochen Pang, Yuefeng Zhan, Chuanguang Yang, et al.
in IEEE/CVF International Conference on Computer Vision (ICCV-2023)
CCF-A, Acceptance rate: 2160/8068=26.8%
[Paper]
We propose VL-Match, a vision-language pretraining framework with enhanced token-level and instance-level matching.
Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation.
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu.
in Advancements in Knowledge Distillation: Towards New Horizons of Intelligent Systems (Springer Book Chapter)
Invited Survey Paper
[Paper]
This paper provides a comprehensive KD survey, including knowledge categories, distillation schemes and algorithms, as well as some empirical studies on performance comparison.
Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition.
Chuanguang Yang, Zhulin An, Helong Zhou, Fuzhen Zhuang, Yongjun Xu, Qian Zhang.
in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI-2023)
CCF-A
[Paper] [Code]
We present a Mutual Contrastive Learning (MCL) framework for online KD. The core idea of MCL is to perform mutual interaction and transfer of contrastive distributions among a cohort of networks in an online manner.

2022

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition.
Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang.
in European Conference on Computer Vision (ECCV-2022)
CCF-B, Acceptance rate: 1650/5803=28.4%
[Paper] [Code]
This paper presents MixSKD, a powerful Self-KD method to regularize the network to behave linearly in feature maps and class probabilities between samples using Mixup images.
Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution.
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu.
in IEEE Transactions on Neural Networks and Learning Systems (TNNLS-2022)
CCF-B, IF: 14.255
[Paper] [Code]
We investigate an auxiliary self-supervision augmented distribution for training a single network, offline KD, and online KD.
Localizing Semantic Patches for Accelerating Image Classification.
Chuanguang Yang, Zhulin An, Yongjun Xu.
in IEEE International Conference on Multimedia and Expo (ICME-2022)
CCF-B, Acceptance rate: 381/1285=29.6%
[Paper] [Code]
We propose an interpretable AnchorNet to localize semantic patches for accelerating image classification.
Cross-Image Relational Knowledge Distillation for Semantic Segmentation.
Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang.
in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR-2022)
CCF-A, Acceptance rate: 2067/8161=25.3%
[Paper] [Code]
We propose cross-image relational knowledge distillation for semantic segmentation.
Mutual Contrastive Learning for Visual Representation Learning.
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu.
in AAAI Conference on Artificial Intelligence (AAAI-2022 Oral)
CCF-A; Acceptance Rate: 1349/9020=15.0%; Oral Rate: Top 5%
[Paper] [Code]
We propose a simple yet effective mutual contrastive learning approach to learn better feature representations for both supervised and self-supervised image classification.
Prior Gradient Mask Guided Pruning-aware Fine-tuning.
Linhang Cai, Zhulin An, Chuanguang Yang, Yanchun Yang, Yongjun Xu.
in AAAI Conference on Artificial Intelligence (AAAI-2022)
CCF-A; Acceptance Rate: 1349/9020=15.0%
[Paper] [Code]
We propose a prior gradient mask guided pruning-aware fine-tuning framework to accelerate CNNs for image classification.

2021

Hierarchical Self-supervised Augmented Knowledge Distillation.
Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu.
in International Joint Conference on Artificial Intelligence (IJCAI-2021)
CCF-A; Acceptance Rate: 587/4204=13.9%
[Paper] [Code]
We propose a strong self-supervised augmented knowledge distillation method from hierarchical feature maps for image classification.
Multi-View Contrastive Learning for Online Knowledge Distillation.
Chuanguang Yang, Zhulin An, Yongjun Xu.
in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-2021)
CCF-B; Acceptance Rate: 1734/3610=48.0%
[Paper] [Code]
We propose multi-view contrastive learning to perform online knowledge distillation for image classification.

2020

DRNet: Dissect and Reconstruct the Convolutional Neural Network via Interpretable Manners.
Xiaolong Hu, Zhulin An, Chuanguang Yang, Hui Zhu, Kaiqiang Xu, Yongjun Xu.
in European Conference on Artificial Intelligence (ECAI-2020)
CCF-B; Acceptance Rate: 365/1363=26.8%
[Paper]
We propose an interpretable approach that dynamically selects which channels to run for sub-class image classification.
Gated Convolutional Networks with Hybrid Connectivity for Image Classification.
Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu.
in AAAI Conference on Artificial Intelligence (AAAI-2020)
CCF-A; Acceptance Rate: 1591/7737=20.6%
[Paper] [Code]
We propose a new network architecture called HCGNet, equipped with novel hybrid connectivity and gated mechanisms for image classification.

2019

EENA: Efficient Evolution of Neural Architecture.
Hui Zhu, Zhulin An, Chuanguang Yang, Kaiqiang Xu, Erhu Zhao, Yongjun Xu.
in IEEE/CVF International Conference on Computer Vision Workshops (ICCVW-2019)
CCF-A Workshop
[Paper]
We propose an efficient evolutionary algorithm for neural architecture search on image classification.
Multi-objective Pruning for CNNs using Genetic Algorithm.
Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu.
in International Conference on Artificial Neural Networks (ICANN-2019)
CCF-C
[Paper]
We propose a genetic algorithm for model pruning using a multi-objective fitness function.