Results 21 to 26 of about 4,928
Deep-learned Top Tagging with a Lorentz Layer
We introduce a new and highly efficient tagger for hadronically decaying top quarks, based on a deep neural network working with Lorentz vectors and the Minkowski metric.
Butter, Anja +3 more
(Machine) Learning to Do More with Less
Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (that relies on knowledge of event-by-event truth-level ...
Cohen, Timothy +2 more
Pulling Out All the Tops with Computer Vision and Deep Learning
We apply computer vision with deep learning -- in the form of a convolutional neural network (CNN) -- to build a highly effective boosted top tagger.
Macaluso, Sebastian; Shih, David
Ensemble smoothers are among the most successful and efficient techniques currently available for history matching. However, because these methods rely on Gaussian assumptions, their performance is severely degraded when the prior geology is described in ...
Canchumuni, Smith W. A. +4 more
Practical bounds on the error of Bayesian posterior approximations: A nonasymptotic approach
Bayesian inference typically requires the computation of an approximation to the posterior distribution. An important requirement for an approximate Bayesian inference algorithm is to output high-accuracy posterior mean and uncertainty estimates ...
Broderick, Tamara +3 more
The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle, sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives.
Asi, Hilal; Duchi, John C.

