Particle physics processes are usually modelled with complex Monte Carlo simulations of the hard process, parton shower, and detector interactions. These simulators typically do not admit a tractable likelihood function: given a (potentially high-dimensional) set of observables, it is usually not possible to calculate the probability of these observables for some model parameters. Particle physicists usually tackle this problem of “likelihood-free inference” by hand-picking a few “good” observables or summary statistics and filling histograms of them. But this conventional approach discards the information in all other observables and often does not scale well to high-dimensional problems.
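The conventional histogram-based workflow can be sketched in a few lines. Everything here is a toy stand-in: the "simulator" is just a Gaussian, and the hand-picked summary statistic is simply the first observable. The point is only to show how hypotheses are compared via binned densities of one summary, discarding the rest of the event.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n):
    # Toy stand-in for a full Monte Carlo simulator: 2D observables
    # whose first component shifts with the parameter theta
    return rng.normal(loc=[theta, 0.0], scale=1.0, size=(n, 2))

def summary(x):
    # "Hand-picked" summary statistic: keep only the first observable,
    # throwing away the information in the second one
    return x[:, 0]

bins = np.linspace(-4.0, 6.0, 21)

def hist_density(theta, n=100_000):
    # Approximate the density of the summary under hypothesis theta
    # by filling a normalised histogram with simulated events
    counts, _ = np.histogram(summary(simulate(theta, n)), bins=bins, density=True)
    return counts

p0, p1 = hist_density(0.0), hist_density(1.0)

# Evaluate an approximate log-likelihood ratio on "observed" events
x_obs = simulate(1.0, 500)
idx = np.clip(np.digitize(summary(x_obs), bins) - 1, 0, len(bins) - 2)
eps = 1e-12  # guard against empty bins
llr = np.sum(np.log(p1[idx] + eps) - np.log(p0[idx] + eps))
print(llr)  # positive here, since the "data" were drawn at theta = 1
```

In this one-dimensional toy the histogram works well; the scaling problem appears once the summary (or the parameter space) becomes high-dimensional and the bins empty out.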

In the three publications “Constraining Effective Field Theories With Machine Learning”, “A Guide to Constraining Effective Field Theories With Machine Learning”, and “Mining gold from implicit models to improve likelihood-free inference”, a new approach has been developed. In a nutshell, additional information is extracted from the simulators. This “augmented data” can be used to train neural networks to efficiently approximate arbitrary likelihood ratios. We playfully call this process “mining gold” from the simulator, since this information may be hard to get, but turns out to be very valuable for inference.
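The core idea that a trained model can stand in for an intractable likelihood ratio can be illustrated with the classifier-based “likelihood-ratio trick”. This is a generic sketch, not the augmented-data estimators or the API of the papers above: a logistic model trained to separate samples from two hypotheses has a logit that approximates the log likelihood ratio between them. The Gaussian toy data and the quadratic features are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy samples from two hypotheses (1D Gaussians with a known shift);
# in a real analysis these would come from intractable simulators
x0 = rng.normal(0.0, 1.0, size=(5000, 1))  # theta_0
x1 = rng.normal(1.0, 1.0, size=(5000, 1))  # theta_1

# Label the samples by hypothesis and train a classifier: at the
# optimum, logit s(x) = log p(x|theta_1) - log p(x|theta_0)
X = np.vstack([x1, x0])
y = np.concatenate([np.ones(len(x1)), np.zeros(len(x0))])

# Minimal logistic regression on features (x, x^2, 1), fitted by
# plain gradient descent on the cross-entropy loss
phi = np.hstack([X, X**2, np.ones_like(X)])
w = np.zeros(3)
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-phi @ w))
    w -= 0.1 * phi.T @ (s - y) / len(y)

def log_ratio(x):
    # The classifier's logit is the estimated log likelihood ratio
    return np.array([x, x**2, 1.0]) @ w

# For this toy the exact answer is known: log r(x) = x - 0.5
print(log_ratio(0.5))  # close to 0.0
```

The “mining gold” step of the papers goes further: it extracts additional quantities (such as the joint score and joint likelihood ratio) from inside the simulator and uses them as extra training targets, which makes such estimators far more sample-efficient.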

[Badges: GitHub, PyPI version, Documentation Status, Build Status, Docker pulls, Binder, Code style: black, License: MIT, DOI]