Highly Performant, Deep Neural Networks with sub-microsecond latency on FPGAs for Trigger Applications
Noel Nottbeck, C. Schmitt, V. Büscher
2020 · Open Access
DOI: https://doi.org/10.1051/epjconf/202024501023
Artificial neural networks are becoming a standard tool for data analysis, but their potential is not yet widely exploited for hardware-level trigger applications. High-end FPGAs, as commonly used in low-level hardware triggers, nowadays offer enough theoretical performance to accommodate networks of considerable size. This makes it very promising and rewarding to optimize a neural network implementation for FPGAs in the trigger context. Here an optimized neural network implementation framework is presented, which typically reaches 90 to 100% computational efficiency, requires few extra FPGA resources for data flow and control, and allows latencies on the order of tens to a few hundreds of nanoseconds for entire (deep) networks.
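As a rough illustration of how latency and efficiency figures of this kind come about (not taken from the paper), consider a fully pipelined implementation: each layer contributes a fixed number of pipeline stages, so the end-to-end latency is the sum of the per-layer pipeline depths times the clock period, and the computational efficiency is the fraction of instantiated multipliers doing useful multiply-accumulate work per clock cycle. The sketch below is a hypothetical back-of-the-envelope model; the layer depths, clock frequency, and multiplier counts are assumed example values, not numbers from the paper.

```python
# Hedged back-of-the-envelope model for a fully pipelined FPGA network
# (illustrative only; all numbers below are assumed, not taken from the paper).

def total_latency_ns(layer_pipeline_depths, clock_mhz):
    """End-to-end latency: sum of per-layer pipeline depths times the clock period."""
    clock_period_ns = 1000.0 / clock_mhz
    return sum(layer_pipeline_depths) * clock_period_ns

def computational_efficiency(useful_macs_per_cycle, instantiated_multipliers):
    """Fraction of instantiated multipliers doing useful MAC work each cycle."""
    return useful_macs_per_cycle / instantiated_multipliers

# Example: a 4-layer network with assumed pipeline depths of 12/10/10/8 cycles
# at an assumed 200 MHz clock -> 40 cycles * 5 ns = 200 ns total latency.
print(total_latency_ns([12, 10, 10, 8], clock_mhz=200.0))

# Example: 230 useful MACs per cycle on 256 instantiated multipliers -> ~90 %.
print(computational_efficiency(230, 256))
```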
Concepts
Field-programmable gate array
Computer science
Artificial neural network
Latency (audio)
Context (archaeology)
Deep neural networks
Computer architecture
Embedded system
Computer hardware
Artificial intelligence
Telecommunications
Paleontology
Biology
Metadata
- Type: article
- Language: en
- Landing Page: https://doi.org/10.1051/epjconf/202024501023
- Full Text (PDF): https://www.epj-conferences.org/articles/epjconf/pdf/2020/21/epjconf_chep2020_01023.pdf
- OA Status: diamond
- References: 7
- Related Works: 10
- OpenAlex ID: https://openalex.org/W3103321383