Glossary
A significant amount of jargon and a great many acronyms are used in presentations and discussions about the IRIS-HEP software institute, the Large Hadron Collider program at CERN, and HEP in general. This glossary attempts to shed some light on what these terms and acronyms mean. (See also the list of Related Projects for additional acronyms/names and explanations.)
-
ABC - Approximate Bayesian Computation
-
ACAT - A workshop series on Advanced Computing and Analysis Techniques in HEP. An important venue for discussing HEP software and computing topics. It takes place approximately every 18 months, out of phase by approximately 9 months with the CHEP conferences.
-
ALICE - A Large Ion Collider Experiment, an experiment at the LHC at CERN.
-
ALPGEN - An event generator designed for the generation of Standard Model processes in hadronic collisions, with emphasis on final states with large jet multiplicities. It is based on the exact leading-order (LO) evaluation of partonic matrix elements, as well as top quark and gauge boson decays with helicity correlations.
-
ANL - Argonne National Laboratory (ANL) is a multipurpose DOE national laboratory located in Lemont, IL
-
AS - Analysis Systems is one of the R&D focus areas of the IRIS-HEP Software Institute
-
AOD - Analysis Object Data is a summary of the reconstructed event and contains sufficient information for common physics analyses.
-
ATLAS - A Toroidal LHC ApparatuS, an experiment at the LHC at CERN. One of two large general-purpose detectors at the LHC
-
Blueprint - The Blueprint activity in the IRIS-HEP Software Institute is designed to inform the strategic vision of the institute through dedicated workshops on topics of interest to the institute’s mission.
-
BoF - “Birds of a Feather” sessions are informal (typically parallel, ad-hoc) sessions at a conference or workshop where interested parties get together to discuss a specific common topic of interest
-
BaBar - A large HEP experiment which ran at SLAC from 1999 through 2008.
-
BSM - Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model (SM), such as the origin of mass, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy.
-
CDN - Content Delivery Network
-
CERN - European Organization for Nuclear Research (Conseil Européen pour la Recherche Nucléaire). The European Laboratory for Particle Physics, the host laboratory for the LHC (and eventually HL-LHC) accelerators and the ALICE, ATLAS, CMS and LHCb experiments. Located in Geneva, Switzerland
-
CHEP - An international conference series on Computing in High Energy and Nuclear Physics. The main conference in which software and computing topics relevant for HEP are discussed. It takes place approximately every 18 months, out of phase by approximately 9 months with the ACAT workshop series.
-
CI - Cyberinfrastructure, referring to the people, software, and hardware that together provide powerful and advanced computing capabilities
-
CMS - Compact Muon Solenoid, an experiment at the LHC at CERN. One of two large general-purpose detectors at the LHC
-
CMSDAS - The CMS Data Analysis School
-
CMSSW - Application software for the CMS experiment including the processing framework itself and components relevant for event reconstruction, high-level trigger, analysis, hardware trigger emulation, simulation, and visualization workflows.
-
CoDaS-HEP - The COmputational and DAta Science in HEP school.
-
CP - Charge and Parity conjugation symmetry
-
CPV - CP violation
-
CRSG - Computing Resources Scrutiny Group, a WLCG committee in charge of scrutinizing and assessing LHC experiment yearly resource requests to prepare funding agency decisions.
-
CS - Computer Science
-
CTDR - Computing Technical Design Report, a technical planning document required by CERN from each of the experiments (ATLAS and CMS, in ~2022 for the HL-LHC upgrades) describing the experiment’s technical blueprint for building its software and computing system, including the computing plans and model.
-
CVMFS - The CERN Virtual Machine File System is a network file system based on HTTP and optimized to deliver experiment software in a fast, scalable, and reliable way through sophisticated caching strategies.
-
CVS - Concurrent Versions System, a source code version control system
-
CWP - The Community White Paper is the result of an organized effort to describe the community strategy and a roadmap for software and computing R&D in HEP for the 2020s. This activity is organized under the umbrella of the HSF.
-
DASPOS - The Data And Software Preservation for Open Science project
-
Deep Learning - a class of Machine Learning algorithms based on neural networks with a large number of layers.
-
DES - The Dark Energy Survey
-
dHTC or DHTC - Distributed High Throughput Computing
-
DIANA-HEP - The Data Intensive Analysis for High Energy Physics project, funded by NSF as part of the SI2 program
-
DOMA - Data Organization, Management and Access, one of the R&D focus areas of the IRIS-HEP Software Institute. A term for an integrated view of all aspects of how a project interacts with and uses data.
-
DUNE - The Deep Underground Neutrino Experiment is the future flagship U.S. neutrino experiment, hosted by FNAL.
-
EFT - Effective Field Theory, a framework for parameterizing extensions of the Standard Model
-
EYETS - Extended Year End Technical Stop, used to denote a period (typically several months) in the winter when small upgrades and maintenance are performed on the CERN accelerator complex and detectors
-
FNAL or Fermilab - Fermi National Accelerator Laboratory, also known as Fermilab, the primary US High Energy Physics Laboratory, funded by the US Department of Energy
-
FPGA - Field Programmable Gate Array
-
FTE - Full Time Equivalent
-
FTS - File Transfer Service
-
GAN - Generative Adversarial Networks are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework.
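As an illustration of the two-network game, the sketch below (a minimal, hypothetical example in Python with PyTorch, not any specific HEP implementation) trains a generator to mimic samples from a 1-D Gaussian while a discriminator learns to separate real from generated samples:

```python
# Minimal GAN sketch (illustrative; network sizes and the target
# distribution are arbitrary choices, not from any HEP application).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 2.0  # "real" data: N(mean=2.0, sigma=1.5)
    fake = G(torch.randn(64, 8))           # generated data from random noise

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes "real".
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"generated mean ~ {G(torch.randn(1000, 8)).mean().item():.2f} (target 2.0)")
```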
-
GAUDI - An event processing application framework developed by CERN
-
Geant4 - A toolkit for the simulation of the passage of particles through matter.
-
GeantV - An R&D project in detector simulation that aims to fully exploit the parallelism increasingly offered by new generations of CPUs.
-
GNN - Graph Neural Network
-
GPGPU - General-Purpose computing on Graphics Processing Units is the use of a Graphics Processing Unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the Central Processing Unit (CPU). Programming for GPUs is typically more challenging, but can offer significant gains in arithmetic throughput.
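As a rough illustration of the GPGPU pattern, the sketch below uses the CuPy library (an assumption: it requires an NVIDIA GPU and the cupy package) to run a NumPy-style elementwise computation on the GPU, including the host-to-device and device-to-host copies typical of such code:

```python
# Minimal GPGPU sketch with CuPy (assumes an NVIDIA GPU and cupy installed).
import numpy as np
import cupy as cp

x_cpu = np.random.rand(10_000_000).astype(np.float32)  # data on the host (CPU)

x_gpu = cp.asarray(x_cpu)           # copy the data to GPU memory
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0  # elementwise kernels execute on the GPU
y_cpu = cp.asnumpy(y_gpu)           # copy the result back to the host

print(y_cpu[:5])
```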
-
HEP - High Energy Physics
-
HEP-CCE - The HEP Center for Computational Excellence, a DOE-funded cross-cutting initiative to promote excellence in high performance computing (HPC) including data-intensive applications, scientific simulations, and data movement and storage
-
HEPData - The Durham High Energy Physics Database is an open access repository for scattering data from experimental particle physics.
-
HEPiX - A series of twice-yearly workshops which bring together IT staff and HEP personnel involved in HEP computing
-
HL-LHC - The High Luminosity Large Hadron Collider is an upgrade to the Large Hadron Collider, expected to begin operation with Run 4. The upgrade aims at increasing the luminosity of the machine by a factor of 10, up to 10³⁵ cm⁻²s⁻¹, providing a better chance to see rare processes and improving statistically marginal measurements.
-
HLT - High Level Trigger. A software trigger system, generally using a large computing cluster located close to the detector. Events are processed in real time (or within the latency defined by small buffers) and those which must be stored for further offline processing are selected.
-
HPC - High Performance Computing.
-
HS06 - HEP-wide benchmark for measuring CPU performance based on the SPEC CPU2006 benchmark suite
-
HSF - The HEP Software Foundation facilitates coordination and common efforts in high energy physics (HEP) software and computing internationally.
-
IA - Innovative Algorithms, one of the R&D focus areas of the IRIS-HEP Software Institute
-
IgProf - The Ignominous Profiler, a tool for exploring the CPU and memory performance of very large C++ applications like those used in HEP
-
IML - The Inter-experimental LHC Machine Learning Working Group is focused on the development of modern state-of-the art machine learning methods, techniques and practices for high-energy physics problems.
-
INFN - The Istituto Nazionale di Fisica Nucleare, the main funding agency and series of laboratories involved in High Energy Physics research in Italy
-
JavaScript - A high-level, dynamic, weakly typed, prototype-based, multi-paradigm, and interpreted programming language. Alongside HTML and CSS, JavaScript is one of the three core technologies of World Wide Web content production.
-
JLAB - Jefferson Lab is a DOE national laboratory focused on nuclear physics located in Newport News, VA
-
Jupyter Notebook - A server-client application that allows editing and running notebook documents via a web browser. Notebooks are documents produced by the Jupyter Notebook App which contain both computer code (e.g., Python) and rich text elements (paragraphs, equations, figures, links, etc.). Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc.) and executable documents which can be run to perform data analysis.
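To make the “notebook as document” idea concrete, the minimal sketch below (assuming the nbformat package, which Jupyter itself uses) builds a notebook file out of one rich-text cell and one code cell:

```python
# Minimal sketch: a notebook is just a document of code and text cells.
import nbformat
from nbformat.v4 import new_notebook, new_code_cell, new_markdown_cell

nb = new_notebook()
nb.cells = [
    new_markdown_cell("# My analysis\nA human-readable description."),  # rich text
    new_code_cell("import math\nprint(math.sqrt(2))"),                  # executable code
]
nbformat.write(nb, "analysis.ipynb")  # open and run this file in Jupyter
```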
-
LBNL or Berkeley Lab - Lawrence Berkeley National Laboratory (LBNL) is a multipurpose DOE national laboratory located in Berkeley, CA
-
LEP - The Large Electron-Positron Collider, the original accelerator which occupied the 27km circular tunnel at CERN now occupied by the Large Hadron Collider
-
LHC - Large Hadron Collider, the main particle accelerator at CERN; a proton-proton collider.
-
LHCb - Large Hadron Collider beauty, an experiment at the LHC at CERN. A particle physics detector specialized for studies in b-physics, charm physics, etc.
-
LIGO - The Laser Interferometer Gravitational-Wave Observatory
-
LS - Long Shutdown, used to denote a period (typically 1 or more years) in which the LHC is not producing data and the CERN accelerator complex and detectors are being upgraded.
-
LSST - The Large Synoptic Survey Telescope, former name of Vera C. Rubin Observatory
-
ML - Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. It focuses on making predictions with computers and encompasses many classes of algorithms (boosted decision trees, neural networks, etc.).
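As a minimal illustration of one such algorithm class, the sketch below trains a boosted-decision-tree classifier with scikit-learn on synthetic data (the dataset and parameters are illustrative, not from any HEP analysis):

```python
# Minimal boosted-decision-tree sketch with scikit-learn (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic "signal vs background" dataset with 10 input features.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=100)  # an ensemble of shallow trees
bdt.fit(X_train, y_train)                           # learn from labeled examples
print(f"test accuracy: {bdt.score(X_test, y_test):.3f}")
```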
-
MREFC - Major Research Equipment and Facilities Construction, an NSF mechanism for large construction projects
-
NAS - The National Academy of Sciences
-
NCSA - National Center for Supercomputing Applications, at the University of Illinois at Urbana-Champaign
-
NDN - Named Data Networking
-
NSF - The National Science Foundation
-
ONNX - Open Neural Network Exchange, an evolving open-source standard for exchanging AI models
-
Operations Programs or Ops Programs - The US participation in the LHC includes funding for “operations” for each of the two experiments (ATLAS and CMS). This funding comes from both the NSF and the DOE, but is jointly managed for each experiment as a single program. These are typically referred to together as the US-LHC Operations Programs or individually as the US-ATLAS and US-CMS operations programs. The operations programs fund effort both in labs (BNL, FNAL, etc.) and in many universities across the US. That effort supports both maintenance and operations activities on detectors that the US has contributed and the software and computing efforts from the US (including the Tier 1 facilities at BNL and FNAL and Tier 2 facilities at a large number of universities). The operations programs are expected to continue as funded entities as long as the US maintains its efforts at the LHC.
-
Openlab - CERN openlab is a public-private partnership that accelerates the development of cutting-edge solutions for the worldwide LHC community and wider scientific research.
-
OC - Object Condensation
-
OCT - Object Condensation Tracking
-
OSG - The Open Science Grid, which provides the fabric for distributed high-throughput computing
-
OSG-LHC - The services provided by IRIS-HEP to enable the LHC experiments to use the Open Science Grid
-
P5 - The Particle Physics Project Prioritization Panel is a scientific advisory panel tasked with recommending plans for U.S. investment in particle physics research over the next ten years.
-
PEP - Project Execution Plan
-
PI - Principal Investigator
-
Pileup - the number of simultaneous proton-proton collisions visible in the detector as part of an “event”. Typically the vast majority of these are not interesting, but because they may be “overlaid” on an interesting signal collision they complicate the processing and analysis of the data. The HL-LHC will achieve higher “luminosity” (higher data rates) in a way that significantly increases “pileup”.
-
PTC - Programmatic Terms and Conditions, a document included in the IRIS-HEP award which includes agreements between NSF and Princeton University (and by extension the other institutions) as to how the IRIS-HEP project should be executed, how reporting should be done, etc.
-
QA - Quality Assurance
-
QC - Quality Control
-
QCD - Quantum Chromodynamics, the theory describing the strong interaction between quarks and gluons.
-
Run 3 - The LHC running period which started in 2022.
-
Run 4 or HL-LHC - The LHC running period which nominally starts from 2029, with the upgraded high luminosity LHC and upgraded ATLAS and CMS detectors
-
REANA - REusable ANAlyses, a system to preserve and instantiate analysis workflows
-
REU - Research Experience for Undergraduates, an NSF program to fund undergraduate participation in research projects
-
ROOT - A scientific software framework widely used in HEP data processing applications.
-
RRB - Resources Review Board, a CERN committee made up of representatives of the funding agencies participating in the LHC collaborations, the CERN management and the experiments’ management.
-
SciDAC - [Scientific Discovery through Advanced Computing](https://www.scidac.gov), a DOE program to fund advanced R&D on computing topics relevant to the DOE Office of Science
-
SDSC - San Diego Supercomputer Center, at the University of California at San Diego
-
SHERPA - A Monte Carlo event generator for the Simulation of High-Energy Reactions of PArticles in lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions.
-
SI2 - The Software Infrastructure for Sustained Innovation program at NSF
-
SIMD - Single instruction, multiple data (SIMD), describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.
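SIMD lives at the instruction level, but the pattern it rewards (one operation applied across many elements) can be illustrated from Python: NumPy's vectorized expressions dispatch to compiled loops that can use SIMD instructions, in contrast to an element-at-a-time interpreted loop. A rough, illustrative comparison:

```python
# Illustrative contrast between a per-element Python loop and a single
# vectorized NumPy operation (whose compiled loop can use SIMD).
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
c = [x + y for x, y in zip(a, b)]  # one element at a time, in the interpreter
t1 = time.perf_counter()
d = a + b                          # one operation over all elements at once
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.5f}s")
```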
-
SKA - The Square Kilometer Array
-
SLAC - The Stanford Linear Accelerator Center, a laboratory funded by the US Department of Energy
-
SM - The Standard Model is the name given in the 1970s to a theory of fundamental particles and how they interact. It is the currently dominant theory explaining the elementary particles and their dynamics.
-
SOW - Statement of Work, a mechanism used to define the expected activities and deliverables of individuals funded from a subaward with a multi-institutional project. The SOW is typically revised annually, along with the corresponding budgets.
-
SSI - The Software Sustainability Institute, an organization in the UK dedicated to fostering better, and more sustainable, software for research.
-
SWAN - Service for Web based ANalysis is a platform for interactive data mining in the CERN cloud using the Jupyter notebook interface.
-
S2I2-HEP - The original conceptualization project for the software institute. It produced the S2I2-HEP Strategic Plan and (working with the HEP Software Foundation) drove the Community White Paper process.
-
Snowmass - The Snowmass process is a high energy physics community planning process organized by the Division of Particles and Fields of the American Physical Society. The last such process took place in 2012-2013 and a new one is being planned for 2020-2021.
-
SSC - The Sustainable Software Core is a functional area in the IRIS-HEP institute which focuses on general methods for producing and maintaining sustainable software.
-
SSL - The Scalable Systems Laboratory provides an integration path for the output of IRIS-HEP R&D to be integrated into the production infrastructure of the LHC experiments.
-
TEO - Training, Education and Outreach, one of the main focuses of the SSC area in the IRIS-HEP institute
-
TMVA - The Toolkit for Multivariate Data Analysis with ROOT is a standalone project that provides a ROOT-integrated machine learning environment for the processing and parallel evaluation of sophisticated multivariate classification techniques.
-
TPU - Tensor Processing Unit, an application-specific integrated circuit by Google designed for use with Machine Learning applications
-
URSSI - The US Software Sustainability Institute, an S2I2 conceptualization activity recommended for funding by NSF
-
US-ATLAS - The US Collaboration for the ATLAS Experiment at the Large Hadron Collider.
-
US-CMS - The US Collaboration for the CMS Experiment at the Large Hadron Collider.
-
US-LHC Ops or US-LHC Operations Programs - The operations programs providing support to individuals at US institutions and national laboratories (see “Operations Programs” above)
-
WAN - Wide Area Network
-
WLCG - The Worldwide LHC Computing Grid project is a global collaboration of more than 170 computing centres in 42 countries, linking up national and international grid infrastructures. The mission of the WLCG project is to provide global computing resources to store, distribute and analyse data generated by the Large Hadron Collider (LHC) at CERN.
-
x86_64 - The 64-bit version of the x86 instruction set, which originated with the Intel 8086, but has now been implemented on processors from a range of companies, including the Intel and AMD processors that make up the vast majority of computing resources used by HEP today.
-
XRootD - A software framework that provides a fully generic suite for fast, low-latency and scalable data access.