DeepAxe: A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators
Tallinn University of Technology, Tallinn, Estonia.
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
Tallinn University of Technology, Tallinn, Estonia.
Tallinn University of Technology, Tallinn, Estonia.
2023 (English). In: Proceedings - International Symposium on Quality Electronic Design (ISQED), IEEE Computer Society, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

While Deep Neural Networks (DNNs) are being deployed in an expanding range of safety-critical applications, emerging DNNs are also growing massively in computational demand. This raises the need to improve the reliability of DNN accelerators while reducing the computational burden on the hardware platform, i.e., lowering energy consumption and execution time and increasing the efficiency of DNN accelerators. The trade-off between hardware performance (area, power, and delay) and the reliability of the DNN accelerator implementation therefore becomes critical and requires tools for analysis. In this paper, we propose DeepAxe, a framework for design-space exploration of FPGA-based DNN implementations that considers the trilateral impact of applying functional approximation on accuracy, reliability, and hardware performance. The framework enables selective approximation of reliability-critical DNNs, providing a set of Pareto-optimal design points for the target resource-utilization requirements. The design flow starts with a pre-trained network in Keras, uses the high-level synthesis environment DeepHLS, and produces a set of Pareto-optimal design points as a guide for the designer. The framework is demonstrated in a case study on custom and state-of-the-art DNNs and datasets.
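The Pareto-optimal selection the abstract describes can be illustrated with a small sketch. This is not DeepAxe's actual interface (the record does not show one); the `DesignPoint` fields and function names are assumptions, chosen only to demonstrate non-dominated filtering over accuracy, reliability, and area:

```python
# Illustrative sketch only: DeepAxe's real API is not public in this record,
# so the data structure and function names below are assumptions. The sketch
# shows the kind of Pareto filtering over (accuracy, reliability, area) that
# a design-space exploration produces for the designer.
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignPoint:
    accuracy: float      # classification accuracy (higher is better)
    reliability: float   # e.g. a fault-resilience metric (higher is better)
    area: float          # FPGA resource utilization (lower is better)

def dominates(a: DesignPoint, b: DesignPoint) -> bool:
    """True if a is at least as good as b on every objective
    and strictly better on at least one."""
    no_worse = (a.accuracy >= b.accuracy and
                a.reliability >= b.reliability and
                a.area <= b.area)
    better = (a.accuracy > b.accuracy or
              a.reliability > b.reliability or
              a.area < b.area)
    return no_worse and better

def pareto_front(points: list[DesignPoint]) -> list[DesignPoint]:
    """Keep only the non-dominated design points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

candidates = [
    DesignPoint(0.92, 0.80, 120.0),
    DesignPoint(0.90, 0.85, 100.0),
    DesignPoint(0.88, 0.70, 150.0),  # dominated by the first point
]
front = pareto_front(candidates)  # keeps the first two points only
```

The first two candidates trade accuracy against reliability and area, so neither dominates the other; the third is worse on every objective than the first and is filtered out.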

Place, publisher, year, edition, pages
IEEE Computer Society , 2023.
Keywords [en]
approximate computing, deep neural networks, fault simulation, reliability, resiliency assessment
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:mdh:diva-63499
DOI: 10.1109/ISQED57927.2023.10129353
ISI: 001013619400058
Scopus ID: 2-s2.0-85161606608
ISBN: 9798350334753 (print)
OAI: oai:DiVA.org:mdh-63499
DiVA, id: diva2:1772796
Conference
24th International Symposium on Quality Electronic Design, ISQED 2023, San Francisco, 5 April 2023 through 7 April 2023
Available from: 2023-06-21. Created: 2023-06-21. Last updated: 2023-10-09. Bibliographically approved.
In thesis
1. DeepKit: a multistage exploration framework for hardware implementation of deep learning
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep Neural Networks (DNNs) are widely adopted to solve problems ranging from speech recognition to image classification. DNNs demand a large amount of processing power, and their implementation in hardware, i.e., on an FPGA or ASIC, has received much attention. However, a DNN cannot be implemented in hardware directly from its high-level description, which is usually written in Python using DNN libraries and APIs. It must therefore either be implemented from scratch at the Register Transfer Level (RTL), e.g., in VHDL or Verilog, or be transformed into a lower-level implementation. One idea that has recently been considered is converting a DNN to C and then using High-Level Synthesis (HLS) to synthesize it for an FPGA. Nevertheless, various aspects must be taken into consideration during the transformation. In this thesis, we propose a multistage framework, DeepKit, that generates a synthesizable C implementation from an input DNN architecture given in a DNN description (Keras). Moving through the stages, various explorations and optimizations are then performed with regard to accuracy, latency, resource utilization, and reliability. The framework is also implemented as a toolchain consisting of DeepHLS, AutoDeepHLS, DeepAxe, and DeepFlexiHLS, and results are provided for DNNs of various types and sizes.
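To illustrate the DNN-to-C transformation step the abstract describes, the sketch below mimics, in plain Python, the explicit loop nest that an HLS-oriented C translation of one fully connected layer would emit; the function name and layout are hypothetical illustrations, not actual DeepHLS output:

```python
# Illustrative sketch only: the real DeepHLS code generator is not shown in
# this record. This reproduces, in Python, the loop-nest form that a DNN-to-C
# transformation typically emits for one fully connected (dense) layer,
# since HLS tools synthesize such loop nests directly into hardware.
def dense_layer(x, weights, bias):
    """y[j] = bias[j] + sum_i x[i] * weights[i][j], written as the
    explicit loop nest an HLS-oriented C translation would use."""
    n_in, n_out = len(weights), len(weights[0])
    y = [0.0] * n_out
    for j in range(n_out):        # one output neuron per outer iteration
        acc = bias[j]
        for i in range(n_in):     # multiply-accumulate over all inputs
            acc += x[i] * weights[i][j]
        y[j] = acc
    return y

# Tiny worked example: 2 inputs, 2 outputs.
out = dense_layer([1.0, 2.0], [[0.5, -1.0], [0.25, 0.0]], [0.0, 1.0])
# out[0] = 0.0 + 1*0.5 + 2*0.25 = 1.0
# out[1] = 1.0 + 1*(-1.0) + 2*0.0 = 0.0
```

Writing the layer as a static loop nest (fixed bounds, no dynamic allocation) is what makes the C counterpart synthesizable and amenable to the per-stage optimizations the thesis explores.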

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2023
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 390
National Category
Embedded Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-64488
ISBN: 978-91-7485-613-2
Public defence
2023-12-07, Delta, Mälardalens universitet, Västerås, 13:00 (English)
Available from: 2023-10-09. Created: 2023-10-09. Last updated: 2023-11-16. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Riazati, Mohammad; Daneshtalab, Masoud; Lisper, Björn

Search in DiVA

By author/editor
Riazati, Mohammad; Daneshtalab, Masoud; Lisper, Björn
By organisation
Embedded Systems
Computer Systems

