Hardware architectures for deep learning
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. Tallinn University of Technology (TalTech), Estonia.
The Department of Electrical and Computer Engineering, University of Tehran, Iran.
2020 (English). Collection (editor) (Other academic)
Abstract [en]

This book presents and discusses innovative ideas in the design, modelling, implementation, and optimization of hardware platforms for neural networks. The rapid growth of server, desktop, and embedded applications based on deep learning has brought about a renaissance of interest in neural networks, with applications including image and speech processing, data analytics, robotics, healthcare monitoring, and IoT solutions. Implementing neural networks efficiently enough to support complex deep learning-based applications is a formidable challenge for embedded and mobile computing platforms with limited computational and storage resources and a tight power budget. Even for cloud-scale systems, it is critical to select the right hardware configuration, based on neural network complexity and system constraints, in order to increase power and performance efficiency. Hardware Architectures for Deep Learning provides an overview of this new field, from principles to applications, for researchers, postgraduate students, and engineers who work on learning-based services and hardware platforms.
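As a minimal illustration of the kind of efficiency technique the keyword list below refers to ("low-precision data representation"), the following sketch shows symmetric int8 weight quantization. It is not taken from the book itself; the function names and example values are hypothetical, chosen only to show why reduced precision saves memory on resource-constrained platforms.

```python
# Illustrative sketch (not from the book): symmetric int8 quantization,
# one family of low-precision data representations used to shrink
# neural-network weights for embedded and mobile hardware.

def quantize_int8(weights):
    """Map float weights to int8 codes sharing one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.813, -1.27, 0.052, 0.309, -0.644]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)

# Each code fits in 1 byte instead of 4 (float32): a 4x memory saving,
# with per-weight error bounded by half a quantization step (scale / 2).
max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(max_err <= scale / 2)  # True
```

The single shared scale keeps the hardware simple (one multiplier per tensor on dequantization); finer-grained schemes such as per-channel scales trade a little extra bookkeeping for lower error.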

Place, publisher, year, edition, pages
Institution of Engineering and Technology, 2020. p. 1-306
Keywords [en]
Analog accelerators, Binary data representations, Convolutional neural networks, Deep learning hardware, Embedded systems, Error-tolerance, Feedforward models, Feedforward neural nets, Hardware accelerators, Hardware architectures, Inverter-based memristive neuromorphic circuit, Learning (artificial intelligence), Low-precision data representation, Model sparsity, Neural chips, Neural net architecture, Neuromorphic engineering, Recurrent neural nets, Recurrent neural network, RNN, Stochastic data representations, Ultra-low-power IoT smart applications
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-62375
DOI: 10.1049/PBCS055E
Scopus ID: 2-s2.0-85153666198
ISBN: 9781785617683 (print)
OAI: oai:DiVA.org:mdh-62375
DiVA, id: diva2:1754344
Available from: 2023-05-03 Created: 2023-05-03 Last updated: 2023-05-24 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Daneshtalab, Masoud
