Learning Activation Functions for Sparse Neural Networks
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
Institute of Artificial Intelligence, Leibniz University Hannover, Germany.
Department of Electrical Engineering, Tarbiat Modares University, Tehran, Iran.
Institute of Artificial Intelligence, Leibniz University Hannover, Germany.
2023 (English). In: Proc. Mach. Learn. Res., ML Research Press, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Sparse Neural Networks (SNNs) can potentially match the performance of their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) the universal use of ReLU as the default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. Thus, we focus on learning a novel way to tune activation functions for sparse networks and combining this with a separate hyperparameter optimization (HPO) regime for sparse networks. Experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on the MNIST, CIFAR-10, and ImageNet-16 datasets show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields up to 15.53%, 8.88%, and 6.33% absolute accuracy improvements for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios.
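The core idea described in the abstract, choosing the activation function and fine-tuning hyperparameters for the pruned network rather than inheriting them from dense training, can be illustrated with a minimal sketch. This is not the authors' SAFS implementation: the tiny network, the candidate activations, the learning-rate grid, and the helper names (make_net, prune_net, fine_tune, search) are all illustrative assumptions, written in PyTorch.

# Hypothetical sketch, not the SAFS code from the paper: magnitude-prune a small
# network, then treat the hidden activation and the fine-tuning learning rate as
# searchable choices, scoring each configuration after pruning.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

CANDIDATE_ACTS = {          # illustrative candidate activations to search over
    "relu": nn.ReLU,
    "silu": nn.SiLU,
    "tanh": nn.Tanh,
}

def make_net(act_cls):
    """Tiny MLP whose hidden activation is a searchable choice."""
    return nn.Sequential(nn.Linear(784, 256), act_cls(), nn.Linear(256, 10))

def prune_net(net, ratio=0.9):
    """Unstructured L1 (magnitude) pruning of every Linear layer."""
    for m in net.modules():
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=ratio)
    return net

def fine_tune(net, loader, lr, epochs=1):
    """Short fine-tuning pass on the pruned network; returns the final batch loss."""
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(net(x.flatten(1)), y)
            loss.backward()
            opt.step()
    return loss.item()

def search(loader, ratio=0.9):
    """Joint search over activation function and fine-tuning learning rate."""
    best = None
    for name, act_cls in CANDIDATE_ACTS.items():
        for lr in (1e-1, 1e-2, 1e-3):
            net = prune_net(make_net(act_cls), ratio)
            score = fine_tune(net, loader, lr)
            if best is None or score < best[0]:
                best = (score, name, lr)
    return best

if __name__ == "__main__":
    # Random stand-in data shaped like MNIST, only to make the sketch executable.
    x = torch.randn(256, 1, 28, 28)
    y = torch.randint(0, 10, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=64)
    print(search(loader, ratio=0.9))

The point of the sketch is only the structure of the search: the pruned network, not the dense one, is what gets evaluated for each activation/hyperparameter choice. The paper itself learns the activation functions rather than picking from a fixed grid.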

Place, publisher, year, edition, pages
ML Research Press, 2023.
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
Keywords [en]
Chemical activation, Image enhancement, Machine learning, Activation functions, Condition, Energy, Fine tuning, Hyper-parameter, Hyper-parameter optimizations, Performance, Pruning techniques, Sparse network, Sparse neural networks, Drops
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-66091
ISI: 001221429100011
Scopus ID: 2-s2.0-85184354102
OAI: oai:DiVA.org:mdh-66091
DiVA id: diva2:1840577
Conference
Proceedings of Machine Learning Research
Available from: 2024-02-26. Created: 2024-02-26. Last updated: 2024-12-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Loni, Mohammad
