applications of machine learning. Experiments have trained ML algorithms on the features from combined reconstruction algorithms to perform particle identification for decades. In the past decade BDTs have been one of the most popular techniques in this domain. More recently, experiments have focused on extracting better performance with deep neural networks. An active area of research is the application of DNNs to the output of feature extraction in order to perform particle identification and extract particle properties [13]. This is particularly true for calorimeters or time projection chambers (TPCs), where the data can be represented as a 2D or 3D image and the problems can be cast as computer vision tasks, in which neural networks are used to reconstruct images from pixel intensities. These neural networks are adapted for particle physics applications by optimizing network architectures for complex, 3-dimensional detector geometries and training them on suitable signal and background samples derived from data control regions.
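The computer-vision framing above can be illustrated with a minimal sketch: a single convolution-plus-pooling stage, the basic building block of the convolutional networks applied to calorimeter images. The filter weights and the toy "calorimeter image" below are hypothetical; in a real network the weights are learned from the signal and background samples.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as computed by a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 "calorimeter image": energy deposits of a narrow shower.
image = np.zeros((8, 8))
image[2:6, 3:5] = [[1.0, 0.2], [3.0, 0.8], [2.5, 0.6], [0.9, 0.1]]

# Hypothetical edge-sensitive filter; real filter weights are learned.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3)
```

Stacking several such stages, followed by fully connected layers, yields the image-classification architectures referred to in the text.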
Applications include identification and measurements of electrons and photons from electromagnetic showers, jet properties including substructure and b-tagging, taus and missing energy. Promising deep learning architectures for these tasks include convolutional, recurrent and adversarial neural networks. A particularly important application is to Liquid Argon TPCs (LArTPCs), which are the chosen detection technology for the flagship neutrino program. For tracking detectors, pattern recognition is the most computationally challenging step. In particular, it becomes computationally intractable for the HL-LHC. The hope is that machine learning will provide a solution that scales linearly with LHC collision density. A current effort called HEP.TrkX investigates deep learning algorithms such as long short-term memory (LSTM) networks for track pattern recognition on many-core processors.

3.4 End-To-End Deep Learning

The vast majority of analyses at the LHC use high-level features constructed from particle four-momenta, even when the analyses make use of machine learning. A high-profile example of such variables is the set of seven so-called MELA variables, used in the analysis of the final states H → ZZ → 4ℓ. While a few analyses, first at the Tevatron, and later at the LHC, have used the four-momenta directly, the latter are still high-level relative to the raw data. Approaches based on the four-momenta are closely related to the Matrix Element Method, which is described in the next section. Given recent spectacular advances in image recognition based on the use of raw information, we are led to consider whether there is something to be gained by moving closer to using raw data in LHC analyses. This so-called end-to-end deep learning approach uses low level data from a detector together with deep learning algorithms [14, 15]. One obvious challenge is that low level data, for example, detector hits, tend to be both high-dimensional and sparse.
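The high-dimensional, sparse character of low-level data can be made concrete with a short sketch. The detector geometry and occupancy below are hypothetical, chosen only for illustration: a dense readout array in which very few channels fire, and its equivalent coordinate-list (COO) representation that stores only the hits.

```python
import numpy as np

# Hypothetical low-level event: a 100x100 detector readout in which
# only a handful of channels fire -- high-dimensional but very sparse.
rng = np.random.default_rng(0)
dense = np.zeros((100, 100))
rows = rng.integers(0, 100, size=20)
cols = rng.integers(0, 100, size=20)
dense[rows, cols] = rng.uniform(0.1, 5.0, size=20)

occupancy = np.count_nonzero(dense) / dense.size
print(f"occupancy: {occupancy:.2%}")  # well below 1%

# Coordinate (COO) representation: store only the channels that fired.
hit_rows, hit_cols = np.nonzero(dense)
hits = np.stack([hit_rows, hit_cols, dense[hit_rows, hit_cols]], axis=1)
print(hits.shape)  # (n_hits, 3): row, column, deposited charge
```

Representations of this kind motivate both sparse network architectures and the data-compression strategies discussed next.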
Therefore, there is interest in also exploring automatic ways to compress raw data in a controlled way that does not necessarily rely on domain knowledge.

3.5 Sustainable Matrix Element Method

The Matrix Element (ME) Method [16–19] is a powerful technique which can be utilized for measurements of physical model parameters and direct searches for new phenomena. It has been used extensively by collider experiments at the Tevatron for standard model (SM) measurements and Higgs boson searches [20–25] and at the LHC for measurements in the Higgs and top quark sectors of the SM [26–32]. A few more details on the ME method are given in Appendix A.1. The ME method has several unique and desirable features, most notably it (1) does not require training data, being an ab initio calculation of event probabilities, (2) incorporates all available kinematic information of a hypothesized process, including all correlations, and (3) has a clear physical meaning in terms of the transition probabilities within the framework of quantum field theory. One drawback of the ME Method is that it has traditionally relied on leading order (LO) matrix elements, although nothing limits the ME method to LO calculations. Techniques that accommodate initial-state QCD radiation within the LO ME framework using transverse boosting and dedicated transfer functions to integrate over the transverse momentum of initial-state partons have been developed [33]. Another challenge is the development of the transfer functions, which rely on tediously hand-crafted fits to fully simulated Monte-Carlo events.
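Schematically, the event probability computed by the ME method can be written as follows (a simplified sketch in common notation, not the exact conventions of Appendix A.1; parton distribution functions and flux factors are absorbed into the phase-space measure):

```latex
P(\mathbf{x} \mid \alpha)
  = \frac{1}{\sigma_{\alpha}}
    \int d\Phi(\mathbf{y})\,
    \left| \mathcal{M}_{\alpha}(\mathbf{y}) \right|^{2}\,
    W(\mathbf{x} \mid \mathbf{y})
```

Here $\mathbf{x}$ are the reconstructed observables, $\mathbf{y}$ the parton-level kinematics, $\mathcal{M}_{\alpha}$ the matrix element under hypothesis $\alpha$, $\sigma_{\alpha}$ a normalizing cross section, and $W(\mathbf{x}\mid\mathbf{y})$ the transfer function; it is this last ingredient whose hand-crafted construction the text identifies as a challenge.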