1 Preface

To outline the challenges in computing that high-energy physics will face over the next years and strategies to approach them, the HEP Software Foundation has organised a Community White Paper (CWP) [1]. In addition to the main document, several more detailed documents were worked out by different working groups. The present document focusses on the topic of machine learning. The goals are to define the tasks at the energy and intensity frontier that can be addressed during the next decade by research and development of machine learning applications. Machine learning in particle physics is evolving fast, while the contents of this community white paper were mainly compiled during community meetings in spring 2017 that took place at several workshops on machine learning in high-energy physics: S2I2 and [2–5].
The contents of this document thus reflect the state of the art at these events and do not attempt to take later developments into account.

2 Introduction

One of the main objectives of particle physics in the post-Higgs boson discovery era is to exploit the full physics potential of both the Large Hadron Collider (LHC) and its upgrade, the high luminosity LHC (HL-LHC), in addition to present and future neutrino experiments. The HL-LHC will deliver an integrated luminosity that is 20 times larger than the present LHC dataset, bringing quantitatively and qualitatively new challenges due to event size, data volume, and complexity. The physics reach of the experiments will be limited by the physics performance of algorithms and computational resources. Machine learning (ML) applied to particle physics promises to provide improvements in both of these areas. Incorporating machine learning in particle physics workflows will require significant research and development over the next five years. Areas where significant improvements are needed include:

• Physics performance of reconstruction and analysis algorithms;
• Execution time of computationally expensive parts of event simulation, pattern recognition, and calibration;
• Real-time implementation of machine learning algorithms;
• Reduction of the data footprint with data compression, placement and access.

2.1 Motivation

The experimental high-energy physics (HEP) program revolves around two main objectives that go hand in hand: probing the Standard Model (SM) with increasing precision and searching for new particles associated with physics beyond the SM. Both tasks require the identification of rare signals in immense backgrounds. Substantially increased levels of pile-up collisions from additional protons in the bunch at the HL-LHC will make this a significant challenge.
Machine learning algorithms are already the state of the art in event and particle identification, energy estimation and pile-up suppression applications in HEP. Despite their present advantage, machine-learning algorithms still have significant room for improvement in their exploitation of the full potential of the dataset.

2.2 Brief Overview of Machine Learning Algorithms in HEP

This section provides a brief introduction to the most important machine learning algorithms in HEP, introducing key vocabulary (in italic). Machine learning methods are designed to exploit large datasets in order to reduce complexity and find new features in data. The machine learning algorithms currently most frequently used in HEP are Boosted Decision Trees (BDTs) and Neural Networks (NNs). Typically, variables relevant to the physics problem are selected and a machine learning model is trained for classification or regression using signal and background events (or instances). Training the model is the most human- and CPU-time consuming step, while the application, the so-called inference stage, is relatively inexpensive. BDTs and NNs are typically used to classify particles and events. They are also used for regression,
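The signal/background classification workflow described above can be illustrated with a minimal sketch, not taken from the white paper: a BDT trained on synthetic events using scikit-learn's gradient-boosted trees. The two Gaussian samples stand in for physics variables of signal and background events; all variable names and parameter choices here are illustrative assumptions, not a prescription from the text.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset: each row is one event, each column a discriminating
# variable. "Signal" and "background" are drawn from shifted Gaussians.
n_events = 2000
signal = rng.normal(loc=1.0, scale=1.0, size=(n_events, 4))
background = rng.normal(loc=-1.0, scale=1.0, size=(n_events, 4))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n_events), np.zeros(n_events)])  # 1=signal, 0=background

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training is the expensive step; inference (predict / predict_proba) is
# comparatively cheap, matching the training-vs-inference cost asymmetry
# noted in the text.
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X_train, y_train)

print(f"test accuracy: {bdt.score(X_test, y_test):.3f}")
```

The same `fit`/`predict` interface applies to regression (e.g. `GradientBoostingRegressor` for an energy estimate) by replacing the class labels with a continuous target.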