Computation reuse-aware accelerator for neural networks
School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, United States.
School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
2020 (English). In: Hardware Architectures for Deep Learning, Institution of Engineering and Technology, 2020, p. 147-158. Chapter in book (Other academic)
Abstract [en]

Power consumption has long been a significant concern in neural networks. In particular, large neural networks that implement novel machine learning techniques require far more computation, and hence power, than ever before. In this chapter, we showed that computation reuse can exploit the inherent redundancy in the arithmetic operations of a neural network to save power. Experimental results showed that computation reuse, when coupled with the approximation property of neural networks, can eliminate up to 90% of the multiplications, effectively reducing power consumption by 61% on average in the presented architecture. The proposed computation reuse-aware design can be extended in several ways. First, it can be integrated into several state-of-the-art customized architectures for LSTM, spiking, and convolutional neural network models to further reduce power consumption. Second, computation reuse can be coupled with existing mapping and scheduling algorithms toward developing reusable scheduling and mapping methods for neural networks. Computation reuse can also boost the performance of methods that eliminate ineffectual computations in deep learning neural networks. Evaluating the impact of CORN on reliability and customizing the CORN architecture for FPGA-based neural network implementations are further directions for future work.
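The idea behind the reported savings can be sketched in software. The Python fragment below is a minimal, illustrative sketch and not the chapter's CORN accelerator: it assumes a dot product in which the product of each (weight, activation) operand pair is cached and reused whenever the same pair recurs, with coarse activation quantization standing in for the approximation property mentioned above. The function name reuse_aware_dot and the levels parameter are hypothetical.

# Illustrative sketch only, not the CORN architecture from the chapter.
import numpy as np

def reuse_aware_dot(weights, activations, levels=16):
    # Quantize activations coarsely (assumed stand-in for the approximation
    # tolerance of neural networks) so operand pairs repeat more often.
    q = np.round(np.asarray(activations) * levels) / levels
    cache = {}          # (weight, quantized activation) -> cached product
    total = 0.0
    mults = 0
    for w, a in zip(weights, q):
        key = (float(w), float(a))
        if key not in cache:        # multiply only on a cache miss
            cache[key] = w * a
            mults += 1
        total += cache[key]         # otherwise reuse the cached product
    return total, mults

rng = np.random.default_rng(0)
w = rng.choice(np.linspace(-1.0, 1.0, 9), size=1024)   # few distinct weight values
x = rng.random(1024)
y, mults = reuse_aware_dot(w, x)
print(f"{mults} multiplications instead of {len(w)}")

With only a handful of distinct weight values and coarsely quantized activations, most operand pairs recur, so most products are served from the cache rather than recomputed; a hardware realization would replace the dictionary with a small lookup structure next to the multiplier.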

Place, publisher, year, edition, pages
Institution of Engineering and Technology, 2020. p. 147-158
Keywords [en]
Arithmetic operations, Computation reuse-aware accelerator, Convolutional neural nets, Convolutional neural network, Learning (artificial intelligence), LSTM, Machine learning, Neural networks, Power aware computing, Power consumption, Spiking neural network
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-62529
DOI: 10.1049/PBCS055E_ch7
Scopus ID: 2-s2.0-85106133376
ISBN: 9781785617683 (print)
OAI: oai:DiVA.org:mdh-62529
DiVA, id: diva2:1761051
Available from: 2023-05-31 Created: 2023-05-31 Last updated: 2023-05-31 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Daneshtalab, Masoud
