Publication Information


Title
Japanese: 
English: Robustness of Compressed Convolutional Neural Networks
Author
Japanese: Wijayanto Arie Wahyu, Choong Jun Jin, Madhawa Kaushalya Pituwala kankanamge, 村田剛志.  
English: Arie Wahyu Wijayanto, Jun Jin Choong, Kaushalya Madhawa, Tsuyoshi MURATA.  
Language English 
Journal/Book name
Japanese: 
English: Proceedings of the 2018 IEEE International Conference on Big Data (Big Data)
Volume, Number, Page pp. 4829-4836
Published date Dec. 14, 2018 
Publisher
Japanese: 
English: 
Conference name
Japanese: 
English: Workshop on Big Data for CyberSecurity (BigCyber-2018)
Conference site
Japanese: 
English: Seattle
Official URL https://ieeexplore.ieee.org/document/8622371
 
DOI https://doi.org/10.1109/BigData.2018.8622371
Abstract Advancements in deep neural networks have revolutionized how we conduct our day-to-day activities, from unlocking our phones to self-driving cars. Convolutional Neural Networks (CNNs) play the principal role in learning high-level feature representations from visual inputs. It is crucial to know how reliable those networks are, as human lives can be at stake. Recent experiments on the robustness of CNNs show that they are highly susceptible to small adversarial perturbations. Due to the increasing popularity of mobile devices, there is significant demand for CNN models small enough to run on a mobile device without sacrificing accuracy. Although recent research has succeeded in producing smaller models with comparable accuracy on standard image datasets, their robustness to adversarial attacks has not been studied. The massive deployment of smaller models on millions of mobile devices, however, underscores the importance of their robustness. In this work, we study how robust such models are with respect to state-of-the-art compression techniques such as quantization. Our contributions are summarized as follows: (1) insights for achieving smaller and more robust models, and (2) an adversarial-aware compression framework. Our findings reveal that compressed models are naturally more robust than compact models, which provides an incentive to perform compression rather than designing compact models. Additionally, compression provides the benefits of increased accuracy and a higher compression rate, up to 90×.
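The abstract names quantization as a representative compression technique. As a rough illustration only (not the paper's actual code; function names and parameters are hypothetical), uniform post-training quantization maps each float weight onto a small integer grid, trading precision for storage:

```python
# Hypothetical sketch of uniform 8-bit weight quantization, the kind of
# compression technique the abstract refers to. Names are illustrative.

def quantize(weights, num_bits=8):
    """Map float weights onto integers in [0, 2**num_bits - 1]."""
    lo, hi = min(weights), max(weights)
    # Step size of the integer grid; guard against all-equal weights.
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [v * scale + lo for v in q]

weights = [-0.51, 0.0, 0.27, 1.3]
q, scale, lo = quantize(weights)
recovered = dequantize(q, scale, lo)
# Rounding bounds the per-weight error by half the quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-9
           for a, b in zip(weights, recovered))
```

Storing 8-bit codes instead of 32-bit floats already gives roughly a 4× reduction; the much higher rates reported in the abstract (up to 90×) would come from combining such quantization with other compression steps.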
