CCAP: Cooperative Context Aware Pruning for Neural Network Model Compression
Published In
2021 IEEE International Symposium on Multimedia (ISM)
Document Type
Citation
Publication Date
12-2021
Abstract
In this paper, we propose a new cross-domain model compression technique that yields a compact target model. We use a Cooperative Context-Aware Pruning (CCAP) module to produce sparse attention maps, which are then used to transfer the source models’ parameters to the target model precisely. We also introduce a weight-regular loss that minimizes the difference between the source models’ and the target model’s parameters. Our quantitative empirical evaluation shows that the CCAP module combined with the weight-regular loss achieves lower model complexity without a serious drop in performance.
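The abstract does not give the exact form of the weight-regular loss, but a natural reading is an L2 penalty on the gap between corresponding source and target parameters, gated by the sparse attention maps. The sketch below is an illustrative assumption, not the paper’s implementation; the function name, the per-layer mask list, and the plain L2 form are all hypothetical.

```python
import numpy as np

def weight_regular_loss(source_weights, target_weights, masks=None):
    """Hypothetical sketch of a weight-regular loss.

    source_weights, target_weights: lists of same-shaped weight arrays,
    one per layer pair. masks (optional): per-layer binary arrays standing
    in for the sparse attention maps, selecting which entries to match.
    Returns the summed squared difference over (masked) entries.
    """
    total = 0.0
    for i, (w_s, w_t) in enumerate(zip(source_weights, target_weights)):
        diff = w_s - w_t
        if masks is not None:
            diff = diff * masks[i]  # keep only entries the attention map selects
        total += float(np.sum(diff ** 2))
    return total
```

For example, with one 2x2 layer of all-ones source weights and all-zeros target weights, the unmasked loss is 4.0; masking out half the entries halves it to 2.0.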
Rights
Copyright IEEE 2021
Locate the Document
DOI
10.1109/ISM52913.2021.00051
Persistent Identifier
https://archives.pdx.edu/ds/psu/37013
Publisher
IEEE
Citation Details
Wang, L. Y., & Akhtar, Z. (2021, November). CCAP: Cooperative Context Aware Pruning for Neural Network Model Compression. In 2021 IEEE International Symposium on Multimedia (ISM) (pp. 257-260). IEEE.