
Paper and Book Information


Title
Japanese: 
English: Robustness of Compressed Convolutional Neural Networks
Authors
Japanese: Wijayanto Arie Wahyu, Choong Jun Jin, Madhawa Kaushalya Pituwala kankanamge, 村田剛志.
English: Arie Wahyu Wijayanto, Jun Jin Choong, Kaushalya Madhawa, Tsuyoshi MURATA.
Language: English
Journal/Book Title
Japanese: 
English: Proceedings of the 2018 IEEE International Conference on Big Data (Big Data)
Volume, Number, Pages: pp. 4829-4836
Publication Date: December 14, 2018
Publisher
Japanese: 
English: 
Conference Name
Japanese: 
English: Workshop on Big Data for CyberSecurity (BigCyber-2018)
Venue
Japanese: 
English: Seattle
Official Link: https://ieeexplore.ieee.org/document/8622371
 
DOI: https://doi.org/10.1109/BigData.2018.8622371
Abstract: Advancements in deep neural networks have revolutionized the way we conduct our day-to-day activities, ranging from how we unlock our phones to self-driving cars. Convolutional Neural Networks (CNNs) play the principal role in learning high-level feature representations from visual inputs. It is crucial to know how reliable these neural networks are, as human lives can be at stake. Recent experiments on the robustness of CNNs show that they are highly susceptible to small adversarial perturbations. Due to the increasing popularity of mobile devices, there is significant demand for CNN models small enough to run on a mobile device without sacrificing accuracy. Although recent research has succeeded in achieving smaller models with comparable accuracy on standard image datasets, their robustness to adversarial attacks has not been studied. However, the massive deployment of smaller models on millions of mobile devices stresses the importance of their robustness. In this work, we study how robust such models are with respect to state-of-the-art compression techniques such as quantization. Our contributions are summarized as follows: (1) insights for achieving smaller, robust models, and (2) an adversarial-aware compression framework. Our findings reveal that compressed models are naturally more robust than compact models. This provides an incentive to perform compression rather than designing compact models. Additionally, the latter provides the benefits of increased accuracy and a higher compression rate of up to 90×.
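The abstract refers to compression by quantization and to small adversarial perturbations. Below is a minimal illustrative sketch, not the paper's framework: it assumes PyTorch, uses a hypothetical toy CNN and random stand-in data, applies post-training dynamic quantization, and probes the quantized model with one-step FGSM perturbations computed on the float model.

# Illustrative sketch only (not the authors' method): check how a dynamically
# quantized model responds to FGSM-perturbed inputs. Model and data are toy
# stand-ins; all names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """A toy CNN standing in for the compressed models discussed in the abstract."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: a one-step perturbation of the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

model = SmallCNN().eval()

# Post-training dynamic quantization of the linear layers (one example of the
# compression techniques mentioned in the abstract; settings are illustrative).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Compare clean vs. adversarial predictions on random stand-in data.
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y)  # gradients taken on the float model
print("clean predictions:      ", quantized(x).argmax(1).tolist())
print("adversarial predictions:", quantized(x_adv).argmax(1).tolist())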
