Results 71 to 80 of about 3,294

Towards explaining the speed of $k$-means [PDF]

open access: yes, 2011
The $k$-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the $k$-means method, a smoothed analysis has been conducted.
Manthey, Bodo
core   +2 more sources
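
The method in question is Lloyd's iteration; below is a minimal NumPy sketch of one run (the helper name lloyd_kmeans is ours, not the paper's):

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's iteration for k-means: alternate nearest-center assignment
    and center recomputation until the assignment stabilizes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every point to its nearest center (squared Euclidean)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break  # assignments, and hence centers, have converged
        centers = new_centers
    return centers, labels
```

Each iteration costs O(nkd); what a smoothed analysis bounds is the number of such iterations on slightly perturbed inputs.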

Bayesian influence diagnostics using normalized functional Bregman divergence

open access: yes, Communications in Statistics - Theory and Methods, 2020
Ideally, any statistical inference should be robust to local influences. Although there are simple ways to check for leverage points in independent and linear problems, more complex models require more sophisticated methods. Kullback-Leibler and Bregman divergences were already applied in Bayesian inference to measure the isolated impact of each ...
Ian M. Danilevicz, Ricardo S. Ehlers
openaire   +2 more sources
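
For reference, the Bregman divergence generated by a strictly convex, differentiable $F$ is
$$ B_F(p, q) = F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle \ge 0, $$
and taking $F(p) = \sum_i p_i \log p_i$ on the probability simplex recovers the Kullback-Leibler divergence $\mathrm{KL}(p \,\|\, q) = \sum_i p_i \log (p_i / q_i)$.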

Alpha‐synuclein co‐pathology in a real‐world early Alzheimer's disease cohort

open access: yes, Alzheimer's & Dementia, Volume 22, Issue 3, March 2026.
Abstract BACKGROUND Most Alzheimer's disease (AD) cases show mixed pathology, with α‐synuclein (αSyn) aggregates present in a substantial proportion. The cerebrospinal fluid (CSF) α‐synuclein seed amplification assay (αS‐SAA) enables in vivo detection of pathogenic αSyn aggregates, but its clinical significance remains unclear. METHODS We prospectively ...
Tamara Shiner   +15 more
wiley   +1 more source

Divergence Network: Graphical calculation method of divergence functions

open access: yes, 2018
In this paper, we introduce directed networks called `divergence network' in order to perform graphical calculation of divergence functions. By using the divergence networks, we can easily understand the geometric meaning of calculation results and grasp ...
Nishiyama, Tomohiro
core   +1 more source

T‐calibration in semi‐parametric models

open access: yes, Canadian Journal of Statistics, Volume 54, Issue 1, March 2026.
Abstract This article relates the calibration of models to the consistent loss functions for the target functional of the model. Correctly specified models are calibrated. Conversely, we demonstrate that if there is a parameter value that is optimal under all consistent loss functions, then the model is calibrated.
Anja Mühlemann, Johanna Ziegel
wiley   +1 more source
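
As a concrete instance of the consistency notion used here: the squared error $S(x, y) = (x - y)^2$ is a consistent scoring function for the mean functional, since
$$ \mathbb{E}_{Y \sim F}\big[(x - Y)^2\big] \quad \text{is minimized at} \quad x = \mathbb{E}_F[Y]; $$
under regularity conditions, the consistent scoring functions for the mean are exactly the Bregman divergences (Savage's characterization).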

Interpretable Machine Learning: A Comprehensive Review of Foundations, Methods, and the Path Forward

open access: yes, WIREs Data Mining and Knowledge Discovery, Volume 16, Issue 1, March 2026.
This systematic review of 352 studies establishes a comprehensive taxonomy for Interpretable Machine Learning, transitioning from foundational intrinsic models to advanced deep learning explanations. It reveals a critical paradigm shift toward “mechanistic interpretability” and actionable recourse, emphasizing that future AI systems must be rigorously ...
Shimon Fridkin, Michael Bendersky
wiley   +1 more source

A new self-organizing neural gas model based on Bregman divergences [PDF]

open access: yes, 2018
In this paper, a new self-organizing neural gas model called the Growing Hierarchical Bregman Neural Gas (GHBNG) is proposed. Our proposal is based on the Growing Hierarchical Neural Gas (GHNG), in which Bregman divergences are incorporated in ...
Luque-Baena, Rafael Marcos   +3 more
core  
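
A minimal sketch of what incorporating a Bregman divergence into winner selection can look like (our illustration, not the GHBNG code; the prototype sits in the divergence's second slot so that the cluster mean remains the optimal prototype):

```python
import numpy as np

def itakura_saito(x, m):
    """Itakura-Saito divergence, the Bregman divergence of F(x) = -sum(log x);
    both arguments must be strictly positive."""
    r = x / m
    return float(np.sum(r - np.log(r) - 1.0))

def best_matching_unit(x, prototypes, divergence=itakura_saito):
    """Winner selection in a neural-gas layer, with a Bregman divergence in
    place of the usual squared Euclidean distance."""
    return min(range(len(prototypes)),
               key=lambda j: divergence(x, prototypes[j]))
```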

Changes by Era in Risk Factors and Outcomes Among Deceased Donor Kidney Transplant Recipients With Delayed Graft Function

open access: yes, Clinical Transplantation, Volume 40, Issue 2, February 2026.
ABSTRACT Introduction There are no effective therapeutic agents for preventing or treating delayed graft function (DGF) among deceased donor kidney transplant recipients (DDKTRs). Donor and recipient factors are important for predicting DGF and associated outcomes, which we hypothesize differed over time.
Camille C. Ylagan   +7 more
wiley   +1 more source

Total Jensen divergences: Definition, Properties and k-Means++ Clustering

open access: yes, 2013
We present a novel class of divergences induced by a smooth convex function, called total Jensen divergences. These total Jensen divergences are invariant by construction to rotations, a feature yielding regularization of ordinary Jensen divergences by a ...
Nielsen, Frank, Nock, Richard
core  
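
The ordinary Jensen (Burbea-Rao) divergence being regularized is, for a strictly convex $F$,
$$ J_F(p, q) = \frac{F(p) + F(q)}{2} - F\!\left(\frac{p + q}{2}\right) \ge 0; $$
as we read the abstract, the total variant rescales $J_F$ by a conformal factor depending on $F$ so that the construction becomes rotation-invariant.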

Bregman-Divergence-Based Arimoto-Blahut Algorithm

open access: yes, IEEE Transactions on Information Theory
We generalize the generalized Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system. In existing methods, when linear constraints are imposed, each iteration needs to solve a convex minimization problem. Exploiting our obtained algorithm, we propose an algorithm whose iterations are free of inner minimizations.
openaire   +2 more sources
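
For orientation, here is the classical Arimoto-Blahut iteration for channel capacity, the special case being generalized above (a NumPy sketch under standard assumptions; not the Bregman-divergence version from the paper):

```python
import numpy as np

def blahut_arimoto(W, n_iter=500, tol=1e-12):
    """Classical Arimoto-Blahut iteration for the capacity of a discrete
    memoryless channel. W[x, y] = P(y | x); assumes every output symbol is
    reachable. Returns (capacity in nats, capacity-achieving input)."""
    r = np.full(W.shape[0], 1.0 / W.shape[0])   # start from the uniform input
    for _ in range(n_iter):
        p_y = r @ W                              # output marginal P(y)
        q = (r[:, None] * W) / p_y[None, :]      # posterior P(x | y)
        # closed-form multiplicative update:
        # r(x) proportional to exp(sum_y W(y|x) log q(x|y))
        logs = np.where(W > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r_new = np.exp((W * logs).sum(axis=1))
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    p_y = r @ W
    # mutual information I(X;Y) at the fixed point
    ratio = np.where(W > 0, W / p_y[None, :], 1.0)
    capacity = float((r[:, None] * W * np.log(ratio)).sum())
    return capacity, r

# binary symmetric channel with crossover 0.1: capacity = log 2 - H(0.1) nats
W = np.array([[0.9, 0.1], [0.1, 0.9]])
C, r = blahut_arimoto(W)
```

Note that each update here is closed-form; the abstract's point is that existing Bregman generalizations with linear constraints lose this property, which the paper's minimization-free iteration recovers.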
