NeuroPIM: Flexible Neural Accelerator for Processing-in-Memory Architectures
Affiliations: University of Tehran, Iran; Institute for Research in Fundamental Sciences (IPM), School of Computer Science, Iran.
2023 (English). In: Proceedings - 2023 26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2023. Institute of Electrical and Electronics Engineers Inc., 2023, p. 51-56. Conference paper, published paper (refereed).
Abstract [en]

The performance of microprocessors under many modern workloads is mainly limited by the off-chip memory bandwidth. The emerging processing-in-memory paradigm presents a unique opportunity to reduce data-movement overheads by moving computation closer to memory. State-of-the-art processing-in-memory proposals stack a logic layer on top of one or multiple memory layers in a 3D fashion and leverage the logic layer to build near-memory processing units. Such processing units are either application-specific accelerators or general-purpose cores. In this paper, we present NeuroPIM, a new processing-in-memory architecture that uses a neural network as the memory-side general-purpose accelerator. This design is mainly motivated by the observation that in many real-world applications, some program regions, or even the entire program, can be replaced by a neural network trained to approximate the program's output. NeuroPIM benefits from both the flexibility of general-purpose processors and the superior performance of application-specific accelerators. Experimental results show that NeuroPIM provides up to 41% speedup over a processor-side neural network accelerator and up to 8x speedup over a general-purpose processor.
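The core observation in the abstract — that a program region can be replaced by a neural network trained to approximate its output — can be illustrated with a minimal sketch. The kernel, network size, and training setup below are invented for illustration and are not the paper's implementation:

```python
import numpy as np

# Hypothetical "program region": a small numeric kernel that a neural
# approximator could stand in for (illustrative only, not from the paper).
def program_region(x, y):
    return np.sqrt(x * x + y * y)

rng = np.random.default_rng(0)

# Sample training data from the region's input domain.
X = rng.uniform(0.0, 1.0, size=(2000, 2))
t = program_region(X[:, 0], X[:, 1]).reshape(-1, 1)

# One-hidden-layer MLP trained with plain full-batch gradient descent.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    y = h @ W2 + b2                       # network output
    err = y - t
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h * h)       # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def neural_region(x, y):
    """Learned stand-in for program_region."""
    h = np.tanh(np.array([x, y]) @ W1 + b1)
    return float(h @ W2 + b2)

print(program_region(0.3, 0.4), neural_region(0.3, 0.4))
```

In a processing-in-memory setting, the learned network (rather than the original code) would execute on the memory-side accelerator, trading a small approximation error for reduced data movement.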

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023, p. 51-56.
Keywords [en]
Hardware acceleration, Neural networks, Processing-in-memory, Application programs, Computation theory, Computer circuits, General purpose computers, Network architecture, Program processors, Application specific, General purpose processors, Logic layers, Memory bandwidths, Off-chip memory, Performance, Processing units, Memory architecture
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-63668
DOI: 10.1109/DDECS57882.2023.10139567
ISI: 001012062000009
Scopus ID: 2-s2.0-85162274137
ISBN: 9798350332773 (print)
OAI: oai:DiVA.org:mdh-63668
DiVA id: diva2:1776893
Conference
26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2023, 3-5 May 2023, Tallinn, Estonia
Available from: 2023-06-28. Created: 2023-06-28. Last updated: 2023-12-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Daneshtalab, Masoud

