ML on FPGAs


Machine learning has become an extremely popular solution for a broad range of problems in high energy physics, from jet tagging to signal extraction. While machine learning can offer unrivaled performance, it can also be expensive in both latency and computational cost. Accelerating inference on GPUs offers some improvement, but another specialized architecture, the field-programmable gate array (FPGA), can push inference speeds further still. Performing inference on FPGAs therefore has the potential to greatly reduce the computational resources required at large HEP experiments.

This work consists of both the further development of the hls4ml tool (J. Duarte et al. 2018) and the study of applications for fast inference on FPGAs. Ongoing development of hls4ml itself includes wider support for neural network layer types and machine learning libraries, as well as performance improvements for large networks. Ongoing application studies include the use of FPGA-based machine learning for particle tracking, calorimeter reconstruction, and particle identification.
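A key reason FPGA inference can be so fast is that hls4ml maps network arithmetic onto reduced-precision fixed-point types (HLS `ap_fixed<W, I>` values, where W is the total bit width and I the number of integer bits). The pure-Python sketch below illustrates the idea; the function name is illustrative, and the 16-bit/6-bit split is assumed only as a commonly used hls4ml precision, not a statement about any particular model here.

```python
# Minimal sketch of fixed-point quantization of the kind hls4ml relies on.
# An ap_fixed<W, I> value keeps W total bits, I of them integer (including
# sign), so the grid spacing is 2**-(W - I) and out-of-range values saturate.
def quantize_fixed(x, total_bits=16, int_bits=6):
    """Round x to the nearest representable ap_fixed<total_bits, int_bits> value."""
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits                    # 2**frac_bits grid points per unit
    lo = -(1 << (int_bits - 1))               # most negative representable value
    hi = (1 << (int_bits - 1)) - 1.0 / scale  # most positive representable value
    q = round(x * scale) / scale              # snap to the fixed-point grid
    return min(max(q, lo), hi)                # saturate instead of wrapping

# A weight like 0.1 is stored only approximately (10 fractional bits here),
# and a value far outside the range saturates at the type's maximum:
print(quantize_fixed(0.1))
print(quantize_fixed(1000.0))
```

Choosing W and I per layer is exactly the kind of precision tuning that trades FPGA resource usage against accuracy.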

This work also involves studies of large-scale heterogeneous computing using Microsoft's Brainwave service on Azure, Google Cloud Platform (GCP), and Amazon Web Services (AWS). Brainwave and GCP have both been used to study the capabilities of cloud-based heterogeneous computing (J. Duarte et al. 2019), which appears extremely promising as a low-cost, low-latency option for inference acceleration (see table below). AWS has been used extensively for prototyping applications in heterogeneous environments and will also be used to investigate alternative models for cloud-based heterogeneous computing.
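In this as-a-service model, the experiment's software sends inputs over the network to a remote coprocessor and receives back only the prediction, so the local process pays a communication cost rather than an inference cost. The toy sketch below illustrates that pattern; all class and method names are illustrative, and the "accelerator" is a trivial local stand-in, not the actual SONIC or Brainwave API.

```python
# Toy sketch of the inference-as-a-service pattern: serialize the input,
# hand it to a (nominally remote) accelerator service, and deserialize the
# result. A real deployment would make a network call (e.g. gRPC) instead.
import json

class RemoteAccelerator:
    """Stand-in for an FPGA service reached over the network."""
    def infer(self, request: str) -> str:
        features = json.loads(request)["features"]
        # Pretend the coprocessor evaluates a model; here, just an average.
        score = sum(features) / len(features)
        return json.dumps({"score": score})

class InferenceClient:
    """Client side: serialize the request and block on the reply."""
    def __init__(self, service):
        self.service = service

    def predict(self, features):
        reply = self.service.infer(json.dumps({"features": features}))
        return json.loads(reply)["score"]

client = InferenceClient(RemoteAccelerator())
print(client.predict([0.2, 0.4, 0.6]))
```

Because the accelerator sits behind a narrow request/reply interface, the same client code can target an on-premises FPGA, a cloud FPGA, or a GPU farm, which is what makes the heterogeneous cloud comparison across Azure, GCP, and AWS possible.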

SONIC summary table
