Analysing Robustness of Tiny Deep Neural Networks
2023 (English). In: New Trends in Database and Information Systems, ADBIS 2023, Springer Science and Business Media Deutschland GmbH, 2023, p. 150-159. Conference paper, Published paper (Refereed)
Abstract [en]
Real-world applications that are safety-critical and resource-constrained necessitate compact Deep Neural Networks (DNNs) that are robust against adversarial data perturbation. MobileNet-tiny has been introduced as a compact DNN for deployment on edge devices. To make DNNs more robust against adversarial data, adversarial training methods have been proposed. However, while recent research has investigated the robustness of large-scale DNNs (such as WideResNet), the robustness of tiny DNNs has not been analysed. In this paper, we analyse how the width of the blocks in MobileNet-tiny affects the robustness of the network against adversarial data perturbation. Specifically, we evaluate natural accuracy, robust accuracy, and perturbation instability on MobileNet-tiny variants whose inverted bottleneck blocks have different configurations. We generate these configurations using different width-multiplier and expand-ratio hyper-parameters. We find that expanding the width of the blocks in MobileNet-tiny can improve natural and robust accuracy but increases perturbation instability. Moreover, beyond a certain threshold, increasing the width of the network yields no significant gains in robust accuracy while further increasing perturbation instability. We also analyse, both theoretically and empirically, the relationship between the width-multiplier and expand-ratio hyper-parameters and the Lipschitz constant. This analysis shows that wider inverted bottleneck blocks tend to have significant perturbation instability. These architectural insights can be useful in developing adversarially robust tiny DNNs for edge devices.
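The abstract describes generating block configurations from width-multiplier and expand-ratio hyper-parameters. As a minimal sketch of how these two hyper-parameters jointly determine the channel widths of an inverted bottleneck block (function name, rounding scheme, and the `divisor=8` convention are assumptions following common MobileNet practice, not the paper's code):

```python
def bottleneck_widths(in_channels, width_multiplier, expand_ratio, divisor=8):
    """Hypothetical helper: compute the base and expanded (hidden) channel
    counts of an inverted bottleneck block.

    The width-multiplier scales the base channel count of the block, and the
    expand-ratio sets how much the pointwise expansion widens it internally.
    Channel counts are rounded to a multiple of `divisor`, a common
    hardware-efficiency convention in MobileNet-style networks.
    """
    scaled = int(in_channels * width_multiplier)
    # Round to the nearest multiple of `divisor`, never dropping below it.
    scaled = max(divisor, (scaled + divisor // 2) // divisor * divisor)
    hidden = scaled * expand_ratio  # width of the expanded hidden layer
    return scaled, hidden

# Example: a block with 32 input channels, width-multiplier 0.5, expand-ratio 6
# has a base width of 16 and an expanded hidden width of 96 channels.
print(bottleneck_widths(32, 0.5, 6))
```

Under this sketch, sweeping `width_multiplier` and `expand_ratio` produces the family of block configurations whose robustness the paper compares.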
Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2023. p. 150-159
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937
Keywords [en]
Adversarial data perturbation, Adversarial training, Lipschitz constant, Robustness analysis, Safety engineering, Stability, Data perturbation, Hyper-parameter, Large-scales, Real-world, Recent researches, Training methods, Deep neural networks
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-64432
DOI: 10.1007/978-3-031-42941-5_14
ISI: 001351054200014
Scopus ID: 2-s2.0-85171970100
ISBN: 9783031429408 (print)
OAI: oai:DiVA.org:mdh-64432
DiVA, id: diva2:1803377
Conference
27th European Conference on Advances in Databases and Information Systems (ADBIS), Barcelona, Spain, 4-7 September, 2023
Available from: 2023-10-09 Created: 2023-10-09 Last updated: 2024-12-18 Bibliographically approved