SCL Online Seminar by Vladimir Lončar
You are cordially invited to the SCL online seminar of the Center for the Study of Complex Systems, which will be held on Thursday, 11 February 2021 at 14:00 on Zoom. The talk entitled
hls4ml: Fast inference of deep neural networks in FPGAs
will be given by Dr. Vladimir Lončar (Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade and CERN). Abstract of the talk:
With edge computing, real-time inference of deep neural networks (DNNs) on custom hardware has become increasingly relevant. Smartphone companies are incorporating Artificial Intelligence (AI) chips in their designs for on-device inference to improve user experience and tighten data security, and the autonomous vehicle industry is turning to application-specific integrated circuits (ASICs) to keep latency low. While the typical acceptable latency for real-time inference in applications like those above is O(1) ms, other applications require sub-microsecond inference. For instance, high-frequency trading machine learning (ML) algorithms run on field-programmable gate arrays (FPGAs), highly parallel and reprogrammable devices, to make decisions within nanoseconds. At the extreme end of the inference spectrum, demanding both low latency (as in high-frequency trading) and limited area (as in smartphone applications), is the processing of data from proton-proton collisions at the Large Hadron Collider (LHC) at CERN. Here, latencies of O(1) microsecond are required and resources are strictly limited. To address these challenges, we have developed hls4ml, an open-source library that converts pre-trained ML models into FPGA firmware, targeting extreme low-latency inference in order to stay within the strict constraints imposed by the CERN particle detectors.
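One reason FPGA firmware can stay within such tight resource budgets is that network weights and activations are stored as narrow fixed-point numbers rather than 32-bit floats. As a rough illustration of that idea (not hls4ml's actual code; the bit widths here are purely illustrative), the round-and-saturate step of converting a float to a signed fixed-point value can be sketched in plain Python:

```python
def quantize_fixed(x, total_bits=8, int_bits=1):
    """Round-and-saturate x onto a signed fixed-point grid,
    loosely mimicking an ap_fixed<total_bits, int_bits>-style type.

    total_bits: overall word length (sign bit included)
    int_bits:   bits left of the binary point (sign bit included)
    """
    frac_bits = total_bits - int_bits
    scale = 1 << frac_bits               # step size is 2**-frac_bits
    lo = -(1 << (total_bits - 1))        # most negative representable code
    hi = (1 << (total_bits - 1)) - 1     # most positive representable code
    code = max(lo, min(hi, round(x * scale)))  # round, then saturate
    return code / scale

# Example: 8-bit words with 1 integer bit cover roughly [-1, 1)
weights = [0.7071, -0.3333, 0.9999, -1.5]
quantized = [quantize_fixed(w) for w in weights]
```

Values outside the representable range saturate (here, -1.5 clamps to -1.0), and everything else lands on the nearest multiple of 2^-7; the quantization error per weight is bounded by half that step.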
In this talk, we will describe the essential features of the hls4ml workflow and network optimization techniques, including how to reduce the footprint of a machine learning model using state-of-the-art techniques such as model pruning and quantization via quantization-aware training.
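Model pruning, one of the footprint-reduction techniques mentioned above, removes the weights that contribute least to the output. A minimal sketch of one-shot magnitude-based pruning in plain Python (a simplification: production frameworks prune gradually, interleaved with fine-tuning, and the function name and threshold rule here are illustrative, not from hls4ml):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.

    sparsity: target fraction of weights to set to zero
    (ties at the threshold may push the actual sparsity slightly higher).
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = prune_by_magnitude(w, sparsity=0.5)
```

On hardware, the payoff is that multiplications by zeroed weights can be removed from the generated firmware entirely, saving multipliers and routing resources rather than just memory.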