Hardware acceleration for recurrent neural networks
2020 (English). In: Hardware Architectures for Deep Learning, Institution of Engineering and Technology, 2020, p. 27-52. Chapter in book (Other academic)
Abstract [en]
This chapter focuses on the LSTM model and is concerned with the design of a high-performance, energy-efficient solution for deep learning inference. The chapter is organized as follows: Section 2.1 introduces recurrent neural networks (RNNs) and discusses the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) network models as special kinds of RNNs. Section 2.2 discusses inference acceleration with hardware. Section 2.3 presents a survey of various FPGA designs in the context of previous related work, after which Section 2.4 concludes the chapter.
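As background for the models named above, the LSTM and GRU cell updates covered in Section 2.1 are conventionally written as follows, assuming the standard notation (σ the logistic sigmoid, ⊙ the elementwise product, and W, U, b the input weights, recurrent weights, and biases); the chapter's own notation may differ:

\begin{aligned}
% LSTM cell
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}

\begin{aligned}
% GRU cell
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) && \text{(update gate)}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) && \text{(reset gate)}\\
\tilde{h}_t &= \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) && \text{(candidate state)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t && \text{(hidden state)}
\end{aligned}

Each LSTM timestep thus requires eight dense matrix-vector products (four against x_t and four against h_{t-1}) plus elementwise operations; these matrix-vector products dominate the per-timestep cost and are the usual target of the kind of hardware acceleration the chapter addresses.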
Place, publisher, year, edition, pages
Institution of Engineering and Technology, 2020. p. 27-52
Keywords [en]
Deep learning inference, Energy-efficient solution, Field programmable gate arrays, FPGA designs, Gated recurrent unit network models, GRU network models, Hardware acceleration, High-performance solution, Long short term memory, LSTM model, Recurrent neural nets, Recurrent neural networks, RNN
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:mdh:diva-62517
DOI: 10.1049/PBCS055E_ch2
Scopus ID: 2-s2.0-85153645311
ISBN: 9781785617683 (print)
OAI: oai:DiVA.org:mdh-62517
DiVA, id: diva2:1761078
Available from: 2023-05-31 Created: 2023-05-31 Last updated: 2023-05-31 Bibliographically approved