NeuroPower: Designing Energy Efficient Convolutional Neural Network Architecture for Embedded Systems
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-9704-7117
Shiraz University of Technology, Shiraz, Iran.
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
2019 (English). In: Lecture Notes in Computer Science, Volume 11727. Munich, Germany: Springer, 2019, p. 208-222. Conference paper, Published paper (Refereed)
Abstract [en]

Convolutional Neural Networks (CNNs) suffer from energy-hungry implementations due to their computation- and memory-intensive processing patterns. This problem is made even more significant by the proliferation of CNNs on embedded platforms. To overcome this problem, we present NeuroPower, an automatic framework that designs a highly optimized and energy-efficient set of CNN architectures for embedded systems. NeuroPower explores and prunes the design space to find an improved set of neural architectures. Toward this aim, a multi-objective optimization strategy is integrated to solve the Neural Architecture Search (NAS) problem by near-optimally tuning the network hyperparameters. The main objectives of the optimization algorithm are the network accuracy and the number of parameters in the network. The evaluation results show the effectiveness of NeuroPower in terms of energy consumption, compression rate and inference time compared to other cutting-edge approaches. Compared with the best results on the CIFAR-10/CIFAR-100 datasets, a network generated by NeuroPower achieves up to 2.1x/1.56x compression, 1.59x/3.46x speedup and 1.52x/1.82x power saving, while losing 2.4%/-0.6% accuracy, respectively.
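
The multi-objective search described above trades network accuracy against parameter count. A minimal sketch of the Pareto-front selection such a strategy typically relies on (the candidate architectures and their scores below are invented for illustration, not taken from the paper):

```python
# Keep only candidate architectures that are Pareto-optimal with respect
# to (validation error, parameter count); both objectives are minimized.
# Candidate names and numbers are hypothetical.

def pareto_front(candidates):
    """Return candidates not dominated in both objectives (lower is better)."""
    front = []
    for name, err, params in candidates:
        dominated = any(
            e <= err and p <= params and (e < err or p < params)
            for _, e, p in candidates
        )
        if not dominated:
            front.append((name, err, params))
    return front

candidates = [
    ("net-A", 0.08, 3_200_000),
    ("net-B", 0.06, 5_100_000),
    ("net-C", 0.09, 4_800_000),  # dominated by net-A in both objectives
    ("net-D", 0.05, 9_000_000),
]

for name, err, params in pareto_front(candidates):
    print(name, err, params)
```

An evolutionary NAS loop would repeat this selection each generation, mutating the surviving architectures to refine the accuracy/size trade-off.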

Place, publisher, year, edition, pages
Munich, Germany: Springer , 2019. p. 208-222
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11727
Keywords [en]
Convolutional neural networks (CNNs), Neural Architecture Search (NAS), Embedded Systems, Multi-Objective Optimization
National Category
Engineering and Technology; Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-45043
DOI: 10.1007/978-3-030-30487-4_17
ISI: 000546494000017
Scopus ID: 2-s2.0-85072863572
ISBN: 9783030304867 (print)
OAI: oai:DiVA.org:mdh-45043
DiVA, id: diva2:1345193
Conference
The 28th International Conference on Artificial Neural Networks ICANN 2019, 17 Sep 2019, Munich, Germany
Projects
DPAC - Dependable Platforms for Autonomous systems and Control
DeepMaker: Deep Learning Accelerator on Commercial Programmable Devices
Available from: 2019-08-23. Created: 2019-08-23. Last updated: 2022-11-25. Bibliographically approved.
In thesis
1. DeepMaker: Customizing the Architecture of Convolutional Neural Networks for Resource-Constrained Platforms
2020 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Convolutional Neural Networks (CNNs) suffer from energy-hungry implementations because they require huge amounts of computation and consume significant memory. This problem is amplified by the proliferation of CNNs on resource-constrained platforms, e.g., in embedded systems. In this thesis, we focus on decreasing the computational cost of CNNs to make them suitable for resource-constrained platforms. The thesis work proposes two distinct methods to tackle the challenges: optimizing the CNN architecture while balancing network accuracy against network complexity, and an optimized ternary neural network that compensates for the accuracy loss of network quantization methods. We evaluated the impact of our solutions on Commercial-Off-The-Shelf (COTS) platforms; the results show considerable improvements in network accuracy and energy efficiency.
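
The ternary-network method mentioned above constrains weights to three levels. A minimal sketch of threshold-based ternary weight quantization in the style of Ternary Weight Networks (the 0.7 threshold factor and the example weights are illustrative assumptions, not necessarily the thesis's exact scheme):

```python
import numpy as np

# Map full-precision weights to {-alpha, 0, +alpha} using a magnitude
# threshold. This shrinks storage to ~2 bits per weight and lets inference
# replace multiplications with sign flips and additions.
# The threshold heuristic below follows Ternary Weight Networks; the
# thesis's actual quantization scheme may differ.

def ternarize(w, delta_factor=0.7):
    delta = delta_factor * np.abs(w).mean()      # magnitude threshold
    mask = np.abs(w) > delta                     # weights kept non-zero
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
print(ternarize(w))  # small-magnitude weights collapse to zero
```

In training-aware schemes, such a quantizer is applied in the forward pass while full-precision weights are kept for gradient updates, which is what makes compensating the accuracy loss possible.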

Abstract [sv]

Convolutional Neural Networks (CNNs) suffer from energy-hungry implementations because they demand enormous computational capacity and have significant memory consumption. This problem will become more pronounced as ever more CNNs are deployed on resource-constrained platforms in embedded computer systems. In this thesis, we focus on reducing the resource consumption of CNNs, in terms of required computation and memory, to make them suitable for resource-constrained platforms. We propose two methods to tackle the challenges: optimizing the CNN architecture while balancing network accuracy against network complexity, and an optimized ternary neural network that compensates for the accuracy losses that can arise from network quantization methods. We evaluated the impact of our solutions on Commercial-Off-The-Shelf (COTS) platforms, where the results show considerable improvements in network accuracy and energy efficiency.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2020
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 299
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-52113 (URN)
978-91-7485-490-9 (ISBN)
Presentation
2020-12-04, U2-024 (+ Online/Zoom), Mälardalens högskola, Västerås, 11:30 (English)
Projects
DeepMaker: Deep Learning Accelerator on Commercial Programmable Devices
DPAC - Dependable Platforms for Autonomous systems and Control
FAST-ARTS: Fast and Sustainable Analysis Techniques for Advanced Real-Time Systems
Available from: 2020-11-10. Created: 2020-10-29. Last updated: 2020-11-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Loni, Mohammad; Sinaei, Sima; Daneshtalab, Masoud; Nolin, Mikael
