The upcoming upgrades at the LHC have fueled increasing interest in alternative, highly parallel, GPU-friendly algorithms for tracking and reconstruction. The PV-Finder project is developing a novel prototype algorithm for primary vertex (PV) finding in high-density collisions using a Convolutional Neural Network (CNN).
The PV-Finder algorithm uses a custom kernel to transform the sparse 3D space of hits and tracks into a dense 1D dataset, and then deep learning techniques are used to find PV locations. Training networks with several convolutional layers on these kernels, we have achieved better than 90% efficiency with no more than 0.2 false positives (FPs) per event. Beyond its physics performance, this algorithm also provides a rich collection of possibilities for visualization and study of 1D convolutional networks.
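To illustrate the idea of collapsing sparse track information into a dense 1D input, here is a minimal numpy sketch. It is an assumption-laden simplification, not PV-Finder's actual custom kernel: each toy track contributes a Gaussian centered at its z of closest approach to the beamline, and the contributions are summed into a histogram along z, where PVs show up as peaks.

```python
import numpy as np

def make_kernel_histogram(track_z, track_sigma, z_min=-100.0, z_max=100.0, n_bins=4000):
    """Collapse sparse track parameters into a dense 1D density along z (mm).

    Hypothetical illustration only: each track adds a Gaussian bump at its
    z of closest approach, weighted by its resolution.
    """
    z_centers = np.linspace(z_min, z_max, n_bins)
    density = np.zeros(n_bins)
    for z0, sigma in zip(track_z, track_sigma):
        density += np.exp(-0.5 * ((z_centers - z0) / sigma) ** 2) / sigma
    return z_centers, density

# Two toy vertices at z = -5 mm and z = +20 mm, five tracks each,
# with a 0.05 mm toy resolution (made-up numbers for illustration).
rng = np.random.default_rng(0)
track_z = np.concatenate([rng.normal(-5.0, 0.05, 5), rng.normal(20.0, 0.05, 5)])
track_sigma = np.full(10, 0.05)

z, dens = make_kernel_histogram(track_z, track_sigma)
# Peaks in the dense histogram mark PV candidates; in PV-Finder the CNN,
# rather than a simple peak finder, locates them.
peak_z = z[dens.argmax()]
```

A dense, fixed-size array like `dens` is exactly the kind of input a 1D CNN can consume directly, which is the motivation for the kernel transformation.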
The current version of PV-Finder is based on a toy simulation of the LHCb detector in Run 3 conditions. We are breaking out the kernel generation so that the algorithm can be run on different inputs, such as track output from the official LHCb framework, from ATLAS or ACTS, and from CMS.
The code currently lives at gitlab.cern.ch/LHCb-Reco-Dev/pv-finder.
- Rui Fang
- A hybrid deep learning approach to vertexing (Henry Schreiner, 17 Apr 2019) at 3rd IML Machine Learning Workshop
- A hybrid deep learning approach to vertexing (Henry Schreiner, 03 Apr 2019) at Connecting The Dots and Workshop on Intelligent Trackers 2019
- Machine Learning for the Primary Vertex reconstruction (Henry Schreiner, 20 Mar 2019) at 2019 Joint HSF/OSG/WLCG Workshop
- A hybrid deep learning approach to vertexing (Henry Schreiner, 11 Mar 2019) at 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research