Results 61 to 70 of about 2,756,498
Machine Learning Model Interpretability for Precision Medicine [PDF]
Interpretability of machine learning models is critical for data-driven precision medicine efforts. However, highly predictive models are generally complex and difficult to interpret. Here, using a model-agnostic explanations algorithm, we show that complex models such as random forests can be made interpretable. Using the MIMIC-II dataset, we successfully ...
arxiv
Subpar reporting of pre‐analytical variables in RNA‐focused blood plasma studies
Pre‐analytical variables strongly influence the analysis of extracellular RNA (cell‐free RNA; exRNA) derived from blood plasma. Their reporting is essential to allow interpretation and replication of results. By evaluating 200 exRNA studies, we pinpoint a lack of reporting pre‐analytical variables associated with blood collection, plasma preparation ...
Céleste Van Der Schueren+16 more
wiley +1 more source
Hybrid Predictive Model: When an Interpretable Model Collaborates with a Black-box Model [PDF]
Interpretable machine learning has become a strong competitor for traditional black-box models. However, the possible loss of the predictive performance for gaining interpretability is often inevitable, putting practitioners in a dilemma of choosing between high accuracy (black-box models) and interpretability (interpretable models).
arxiv
Cancer‐associated fibroblasts (CAFs) promote cancer growth, invasion (metastasis), and drug resistance. Here, we identified functional and diverse circulating CAFs (cCAFs) in patients with metastatic prostate cancer (mPCa). cCAFs were found in higher numbers and were functional and diverse in mPCa patients versus healthy individuals, suggesting their ...
Richell Booijink+6 more
wiley +1 more source
Where and What? Examining Interpretable Disentangled Representations [PDF]
Capturing interpretable variations has long been one of the goals in disentanglement learning. However, unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement in the unsupervised setting. In this paper, we examine the interpretability of disentangled representations by investigating two questions ...
arxiv
Circulating tumor DNA (ctDNA) offers a possibility for different applications in early and late stage breast cancer management. In early breast cancer tumor informed approaches are increasingly used for detecting molecular residual disease (MRD) and early recurrence. In advanced stage, ctDNA provides a possibility for monitoring disease progression and
Eva Valentina Klocker+14 more
wiley +1 more source
Sexual dimorphism of the human corpus callosum: Digital morphometric study [PDF]
Background/Aim. Changes in the morphology and the size of the corpus callosum are related to various pathological conditions. An analysis of these changes requires data about the sexual dimorphism of the corpus callosum, which we tried to obtain in our ...
Spasojević Goran+3 more
doaj +1 more source
Tackling Interpretability in Audio Classification Networks with Non-negative Matrix Factorization [PDF]
This paper tackles two major problem settings for interpretability of audio processing networks: post-hoc and by-design interpretation. For post-hoc interpretation, we aim to interpret decisions of a network in terms of high-level audio objects that are also listenable for the end-user. This is extended to present an inherently interpretable model with ...
arxiv
In this explorative biomarker analysis, we assessed serial sampling of circulating tumor cells (CTCs) with CellSearch in two randomized trials testing immune checkpoint inhibitors (ICIs) in metastatic breast cancer. Our data demonstrate a prognostic potential of CTCs, most apparent 4 weeks into ICI therapy.
Nikolai Kragøe Andresen+13 more
wiley +1 more source
FICNN: A Framework for the Interpretation of Deep Convolutional Neural Networks [PDF]
With the continued development of Convolutional Neural Networks (CNNs), there is growing concern regarding the representations they encode internally. Analyzing these internal representations is referred to as model interpretation. While the task of model explanation, justifying the predictions of such models, has been studied extensively, the task ...
arxiv