DOI: 10.1145/3289602.3293986

Fast Inference of Deep Neural Networks for Real-time Particle Physics Applications

Published: 20 February 2019

Abstract

Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and in particle physics as a whole. However, the exploration of such techniques in low-latency, low-power FPGA (Field Programmable Gate Array) hardware has only just begun. FPGA-based trigger and data acquisition systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study of neural network inference in FPGAs, focusing on a classifier for jet substructure that would enable many new physics measurements. While we focus on a specific example, the lessons are far-reaching. We develop a compiler package, HLS4ML, based on High-Level Synthesis (HLS), to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to allow for directed resource tuning in the low-latency environment, and we assess the impact on our benchmark physics performance scenario. For our example jet substructure model, we fit well within the available resources of modern FPGAs, with latency on the scale of 100 ns.
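To make the approach concrete, the following is a minimal sketch of the style of code an HLS-based flow like HLS4ML emits for a dense layer: fixed-point arithmetic and fully unrolled loops so that every multiply can map to a DSP slice and the layer completes in a few clock cycles. All names (dense_relu, N_IN, N_OUT), the Q8.8 fixed-point format, and the integer stand-in for an HLS arbitrary-precision type are illustrative assumptions, not taken from the HLS4ML source.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch of HLS-style generated code (not actual HLS4ML output).
constexpr int FRAC_BITS = 8;           // Q8.8 fixed point: value = raw / 256
constexpr std::size_t N_IN = 4;        // illustrative layer sizes
constexpr std::size_t N_OUT = 2;

using fixed_t = std::int16_t;          // stand-in for an HLS ap_fixed-style type
using acc_t   = std::int32_t;          // wider accumulator to avoid overflow

// Dense layer followed by ReLU. In an HLS flow the loops would carry
// pragmas (e.g. UNROLL / PIPELINE) so all multiplies run in parallel.
void dense_relu(const fixed_t in[N_IN],
                const fixed_t w[N_OUT][N_IN],
                const fixed_t b[N_OUT],
                fixed_t out[N_OUT]) {
    for (std::size_t o = 0; o < N_OUT; ++o) {      // would be fully unrolled
        // Bias is Q8.8; shift to the Q16.16 scale of the products below.
        acc_t acc = static_cast<acc_t>(b[o]) << FRAC_BITS;
        for (std::size_t i = 0; i < N_IN; ++i) {   // would be fully unrolled
            acc += static_cast<acc_t>(in[i]) * w[o][i];  // Q8.8 * Q8.8 -> Q16.16
        }
        acc >>= FRAC_BITS;                         // rescale back to Q8.8
        out[o] = acc > 0 ? static_cast<fixed_t>(acc) : 0;  // ReLU
    }
}
```

The resource/latency trade-offs mapped out in the paper correspond, in this picture, to choices like the fixed-point word width and how aggressively the loops are unrolled or reused.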





Published In

FPGA '19: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
February 2019, 360 pages
ISBN: 9781450361378
DOI: 10.1145/3289602
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. fast-inference
  2. high level synthesis
  3. hls4ml
  4. machine learning
  5. multilayer perceptron
  6. neural network
  7. physics

Qualifiers

  • Poster

Conference

FPGA '19

Acceptance Rates

Overall Acceptance Rate 125 of 627 submissions, 20%


Article Metrics

  • Downloads (last 12 months): 0
  • Downloads (last 6 weeks): 0

Reflects downloads up to 10 Dec 2024

Cited By

  • (2024) Special Session: Reliability Assessment Recipes for DNN Accelerators. 2024 IEEE 42nd VLSI Test Symposium (VTS), pp. 1-11. DOI: 10.1109/VTS60656.2024.10538707. Online publication date: 22-Apr-2024.
  • (2024) Impact of High-Level Synthesis on Reliability of Artificial Neural Network Hardware Accelerators. IEEE Transactions on Nuclear Science, 71(4), pp. 845-853. DOI: 10.1109/TNS.2024.3377596. Online publication date: Apr-2024.
  • (2024) Efficient Neural Networks: from SW optimization to specialized HW accelerators. 2024 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), pp. 17-18. DOI: 10.1109/CASES60062.2024.00009. Online publication date: 29-Sep-2024.
  • (2024) Accelerated and Highly Correlated ASIC Synthesis of AI Hardware Subsystems Using CGP. IET Computers & Digital Techniques, 2024, pp. 1-23. DOI: 10.1049/2024/6623637. Online publication date: 29-Jan-2024.
  • (2023) FPGA-based Deep Learning Inference Accelerators: Where Are We Standing? ACM Transactions on Reconfigurable Technology and Systems. DOI: 10.1145/3613963. Online publication date: 4-Sep-2023.
  • (2023) STANN – Synthesis Templates for Artificial Neural Network Inference and Training. Advances in Computational Intelligence, pp. 394-405. DOI: 10.1007/978-3-031-43085-5_31. Online publication date: 30-Sep-2023.
  • (2022) Generic Automated Implementation of Deep Neural Networks on Field Programmable Gate Arrays. Innovations in Smart Cities Applications Volume 5, pp. 989-1000. DOI: 10.1007/978-3-030-94191-8_80. Online publication date: 3-Mar-2022.
  • (2019) Mapping Neural Networks to FPGA-Based IoT Devices for Ultra-Low Latency Processing. Sensors, 19(13), 2981. DOI: 10.3390/s19132981. Online publication date: 5-Jul-2019.
