The Fisher-Rao Distance between Multivariate Normal Distributions: Special Cases, Bounds and Applications. [PDF]
Pinele J, Strapasson JE, Costa SIR.
Achieving well-informed decision-making in drug discovery: a comprehensive calibration study using neural network-based structure-activity models. [PDF]
Friesacher HR +4 more
Principal Curves for Statistical Divergences and an Application to Finance. [PDF]
Rodrigues AFP, Cavalcante CC.
Information Theory Meets Quantum Chemistry: A Review and Perspective. [PDF]
Zhao Y, Zhao D, Rong C, Liu S, Ayers PW.
A Quantitative Measurement Method for Nuclear-Pleomorphism Scoring in Breast Cancer. [PDF]
Teoh CL +6 more
Information Geometry for Covariance Estimation in Heterogeneous Clutter with Total Bregman Divergence. [PDF]
Hua X, Cheng Y, Wang H, Qin Y.
To characterize the differences between two positive functions or two distributions, a class of distortion functions, termed functional Bregman divergences, has recently been defined. The class generalizes the standard Bregman divergence defined for vectors, and includes total squared difference and relative entropy as special cases.
Bela A. Frigyik +2 more
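The entries above all build on the same object: the Bregman divergence generated by a strictly convex function φ, defined as D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩. The following minimal sketch (the helper name `bregman` is illustrative, not from any of the listed papers) shows how squared Euclidean distance and relative entropy both arise as special cases:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.4, 0.5])

# phi(v) = ||v||^2 recovers squared Euclidean distance ||x - y||^2
sq = bregman(lambda v: v @ v, lambda v: 2 * v, x, y)

# phi(v) = sum v log v recovers relative entropy (KL) for probability vectors
kl = bregman(lambda v: np.sum(v * np.log(v)), lambda v: np.log(v) + 1, x, y)
```

For non-normalized positive vectors the second choice yields the generalized KL divergence, with an extra Σyᵢ − Σxᵢ term that vanishes on the simplex.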
Clustering with Bregman Divergences
Proceedings of the 2004 SIAM International Conference on Data Mining, 2004
A wide variety of distortion functions, such as squared Euclidean distance, Mahalanobis distance, Itakura-Saito distance and relative entropy, have been used for clustering. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences.
Arindam Banerjee +3 more
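A key result underlying the hard-clustering algorithms described in this entry is that, for any Bregman divergence, the divergence-minimizing representative of a cluster is its arithmetic mean, so a Lloyd-style loop works unchanged. A minimal sketch under that assumption (the function names `bregman_kmeans` and `itakura_saito` are illustrative, not from the paper):

```python
import numpy as np

def bregman_kmeans(X, k, div, n_iter=50, seed=0):
    """Lloyd-style hard clustering for an arbitrary Bregman divergence `div`.

    The centroid update is a plain arithmetic mean, which is optimal for
    every Bregman divergence; only the assignment step depends on `div`.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assignment: each point goes to the nearest center under div
        labels = np.array([np.argmin([div(x, c) for c in centers]) for x in X])
        # update: arithmetic mean per cluster (empty clusters keep their center)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def itakura_saito(x, c):
    """Itakura-Saito divergence, the Bregman divergence of phi(v) = -sum log v."""
    return np.sum(x / c - np.log(x / c) - 1)

# toy 1-D example with two well-separated positive clusters
X = np.array([[1.0], [1.2], [0.9], [50.0], [51.0], [49.0]])
labels, centers = bregman_kmeans(X, 2, lambda a, c: np.sum((a - c) ** 2))
```

Swapping the squared-Euclidean lambda for `itakura_saito` gives the Itakura-Saito clustering mentioned in the abstract, with no change to the update step.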
Cost-Sensitive Sequences of Bregman Divergences
IEEE Transactions on Neural Networks and Learning Systems, 2012
The minimization of the empirical risk based on an arbitrary Bregman divergence is known to provide posterior class probability estimates in classification problems, but the accuracy of the estimate for a given value of the true posterior depends on the specific choice of the divergence.
Raúl Santos-Rodriguez +1 more
Bregman Divergences and Surrogates for Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009
Bartlett et al. (2006) recently proved that a ground condition for surrogates, classification calibration, ties their consistent minimization to that of the classification risk, and left as an important open problem the algorithmic questions about their minimization.
Richard Nock, Frank Nielsen

