Results 71 to 80 of about 3,606,574 (279)

Guided quantum compression for high dimensional data classification

open access: yesMachine Learning: Science and Technology
Quantum machine learning provides a fundamentally different approach to analyzing data. However, many interesting datasets are too complex for currently available quantum computers.
Vasilis Belis   +5 more
doaj   +1 more source

SLSB-forest: approximate k nearest neighbors searching on high dimensional data

open access: yesDianxin kexue, 2017
The study of approximate k nearest neighbor queries has attracted broad attention. Locality-sensitive hashing is one of the mainstream ways to solve this problem. Locality-sensitive hashing and its variants, however, exhibit the following problems: the uneven distribution of ...
Tu QIAN   +3 more
doaj   +2 more sources
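
For context on the locality-sensitive hashing approach mentioned in the entry above, here is a minimal sketch of random-hyperplane LSH for approximate nearest-neighbor candidate lookup. It is not the SLSB-forest structure itself; the number of hyperplanes, the single hash table, and the toy dataset are illustrative assumptions.

    # Minimal random-hyperplane LSH sketch; not the SLSB-forest method from the entry above.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)

    def build_lsh_index(data, n_planes=12):
        """Hash each vector by the sign pattern of its projections onto random hyperplanes."""
        planes = rng.standard_normal((n_planes, data.shape[1]))
        codes = (data @ planes.T) > 0                 # boolean sign pattern per point
        buckets = defaultdict(list)
        for i, code in enumerate(codes):
            buckets[code.tobytes()].append(i)         # bucket key = packed sign pattern
        return planes, buckets

    def query(q, planes, buckets, data, k=5):
        """Brute-force search only within the query's bucket, returning up to k candidates."""
        code = ((q @ planes.T) > 0).tobytes()
        candidates = buckets.get(code, [])
        candidates.sort(key=lambda i: np.linalg.norm(data[i] - q))
        return candidates[:k]

    data = rng.standard_normal((1000, 64))            # toy high-dimensional dataset
    planes, buckets = build_lsh_index(data)
    print(query(data[0], planes, buckets, data))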

Dual Query: Practical Private Query Release for High Dimensional Data

open access: yesThe Journal of Privacy and Confidentiality, 2017
We present a practical, differentially private algorithm for answering a large number of queries on high dimensional datasets. Like all algorithms for this task, ours necessarily has worst-case complexity exponential in the dimension of the data. However,
Marco Gaboardi   +4 more
doaj   +1 more source
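
The entry above concerns private query release. The sketch below is not the Dual Query algorithm; it is the standard Laplace mechanism for answering a small batch of counting queries under epsilon-differential privacy, included only to make the query-answering setting concrete. The epsilon value, the even budget split, and the toy queries are assumptions.

    # Laplace-mechanism baseline for counting queries; not the Dual Query algorithm itself.
    import numpy as np

    rng = np.random.default_rng(1)

    def private_counts(data, queries, epsilon):
        """Answer each counting query (a 0/1 predicate averaged over records) with
        Laplace noise; the privacy budget is split evenly across queries."""
        n = len(data)
        per_query_eps = epsilon / len(queries)        # naive sequential composition
        answers = []
        for q in queries:
            true_answer = np.mean([q(x) for x in data])   # fraction of records satisfying q
            noise = rng.laplace(scale=1.0 / (n * per_query_eps))
            answers.append(true_answer + noise)
        return answers

    records = rng.integers(0, 2, size=(500, 3))       # toy binary dataset with 3 attributes
    queries = [lambda x, j=j: x[j] == 1 for j in range(3)]
    print(private_counts(records, queries, epsilon=1.0))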

YAP1::TFE3 mediates endothelial‐to‐mesenchymal plasticity in epithelioid hemangioendothelioma

open access: yesMolecular Oncology, EarlyView.
The YAP1::TFE3 fusion protein drives endothelial‐to‐mesenchymal transition (EndMT) plasticity, resulting in the loss of endothelial characteristics and gain of mesenchymal‐like properties, including resistance to anoikis, increased migratory capacity, and loss of contact growth inhibition in endothelial cells.
Ant Murphy   +9 more
wiley   +1 more source

Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data [PDF]

open access: yes, 2012
The classification of high dimensional data with kernel methods is considered in this article. Exploiting the emptiness property of high dimensional spaces, a kernel based on the Mahalanobis distance is proposed.
Benediktsson, J. A.   +3 more
core  
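
The entry above proposes a kernel built on the Mahalanobis distance. The sketch below computes a generic Mahalanobis-distance RBF kernel with a ridge-regularized covariance estimate; it does not implement the paper's parsimonious covariance model, and the regularization and bandwidth values are arbitrary assumptions.

    # Generic Mahalanobis-distance RBF kernel, K(x, y) = exp(-gamma * (x - y)^T Sigma^{-1} (x - y)).
    import numpy as np

    def mahalanobis_kernel(X, Y, cov_inv, gamma=0.5):
        K = np.empty((X.shape[0], Y.shape[0]))
        for i, x in enumerate(X):
            diff = Y - x                              # broadcast over rows of Y
            K[i] = np.exp(-gamma * np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
        return K

    rng = np.random.default_rng(2)
    X = rng.standard_normal((50, 20))
    cov = np.cov(X, rowvar=False) + 1e-2 * np.eye(20) # ridge-regularized covariance estimate
    K = mahalanobis_kernel(X, X, np.linalg.inv(cov))
    print(K.shape, K[0, 0])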

Emerging role of ARHGAP29 in melanoma cell phenotype switching

open access: yesMolecular Oncology, EarlyView.
This study gives first insights into the role of ARHGAP29 in malignant melanoma. ARHGAP29 was revealed to be connected to tumor cell plasticity, promoting a mesenchymal‐like, invasive phenotype and driving tumor progression. Further, it modulates cell spreading by influencing RhoA/ROCK signaling and affects SMAD2 activity. Rho GTPase‐activating protein
Beatrice Charlotte Tröster   +3 more
wiley   +1 more source

Similarity Learning for High-Dimensional Sparse Data [PDF]

open access: yes, 2019
A good measure of similarity between data points is crucial to many tasks in machine learning. Similarity and metric learning methods learn such measures automatically from data, but they do not scale well with respect to the dimensionality of the data.
Bellet, Aurélien, Liu, Kuan, Sha, Fei
core   +1 more source
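
To make the similarity-learning setting of the entry above concrete, the sketch below learns a bilinear similarity S(x, y) = x^T diag(w) y from triplets with a hinge loss and plain SGD. The diagonal restriction, the loss, and all hyperparameters are simplifying assumptions rather than the paper's method.

    # Toy triplet-based similarity learning with a diagonal bilinear model; not the paper's algorithm.
    import numpy as np

    rng = np.random.default_rng(3)

    def train_diagonal_similarity(triplets, dim, lr=0.1, margin=1.0, epochs=5):
        w = np.ones(dim)                              # start from the plain dot product
        for _ in range(epochs):
            for x, pos, neg in triplets:
                s_pos = np.sum(w * x * pos)           # x^T diag(w) pos
                s_neg = np.sum(w * x * neg)
                if margin - s_pos + s_neg > 0:        # hinge loss is active
                    w += lr * (x * pos - x * neg)     # step toward satisfying the margin
        return w

    dim = 100
    # Sparse-ish toy data: samples near the same random center act as positive pairs.
    centers = rng.standard_normal((5, dim)) * (rng.random((5, dim)) < 0.1)
    def sample(c):
        return centers[c] + 0.1 * rng.standard_normal(dim)
    triplets = [(sample(c), sample(c), sample((c + 1) % 5)) for c in rng.integers(0, 5, 200)]
    w = train_diagonal_similarity(triplets, dim)
    print(w[:5])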

A synthetic benzoxazine dimer derivative targets c‐Myc to inhibit colorectal cancer progression

open access: yesMolecular Oncology, EarlyView.
Benzoxazine dimer derivatives bind to the bHLH‐LZ region of c‐Myc, disrupting c‐Myc/MAX complexes, as evaluated by SAR analysis. This increases ubiquitination and reduces cellular c‐Myc. Impairment of DNA repair mechanisms is shown through proteomic analysis.
Nicharat Sriratanasak   +8 more
wiley   +1 more source

KERNEL LOGISTIC REGRESSION-LINEAR FOR LEUKEMIA CLASSIFICATION USING HIGH DIMENSIONAL DATA

open access: yesJUTI: Jurnal Ilmiah Teknologi Informasi, 2009
Kernel Logistic Regression (KLR) is one of the statistical models that have been proposed for classification in the machine learning and data mining communities, and also one of the effective methodologies among kernel-machine techniques.
S P Rahayu   +3 more
doaj   +1 more source
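
As a reference point for the entry above, the sketch below fits kernel logistic regression in its representer form f(x) = sum_i alpha_i K(x_i, x) by gradient descent on the regularized logistic loss. The RBF kernel, regularization strength, and step size are illustrative assumptions, not the paper's experimental setup.

    # Compact kernel logistic regression sketch (dual/representer form), for illustration only.
    import numpy as np

    rng = np.random.default_rng(4)

    def rbf_kernel(X, Y, gamma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_klr(K, y, lam=1e-2, lr=0.1, iters=500):
        """y in {0, 1}; returns dual coefficients alpha."""
        alpha = np.zeros(len(y))
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-(K @ alpha)))    # predicted probabilities
            grad = K @ (p - y) + lam * (K @ alpha)    # logistic-loss gradient + RKHS ridge penalty
            alpha -= lr * grad / len(y)
        return alpha

    X = rng.standard_normal((80, 200))                # toy high-dimensional data
    y = (X[:, 0] + 0.3 * rng.standard_normal(80) > 0).astype(float)
    K = rbf_kernel(X, X, gamma=1.0 / X.shape[1])
    alpha = fit_klr(K, y)
    pred = (K @ alpha > 0).astype(float)
    print("training accuracy:", (pred == y).mean())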

High Dimensional Data Enrichment: Interpretable, Fast, and Data-Efficient

open access: yes, 2018
The high-dimensional structured data enrichment model describes groups of observations by shared and per-group individual parameters, each with its own structure, such as sparsity or group sparsity. In this paper, we consider the general form of data enrichment
Asiaee, Amir   +3 more
core  
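
The data-enrichment structure described above (shared plus per-group sparse parameters) can be illustrated with a toy simulation and a naive alternating Lasso fit, sketched below. The generative model, the hyperparameters, and the alternating scheme are assumptions, not the estimator proposed in the paper.

    # Toy data-enrichment setup: y_g = X_g (beta_shared + beta_g) + noise, both parts sparse.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(5)
    d, n_per_group, groups = 50, 60, 3

    beta_shared = np.zeros(d); beta_shared[:3] = 2.0            # sparse shared signal
    betas = [np.zeros(d) for _ in range(groups)]
    for g in range(groups):
        betas[g][3 + g] = 1.5                                   # one extra active feature per group

    Xs = [rng.standard_normal((n_per_group, d)) for _ in range(groups)]
    ys = [X @ (beta_shared + b) + 0.1 * rng.standard_normal(n_per_group)
          for X, b in zip(Xs, betas)]

    # Alternate: fit the shared part on pooled residuals, then each group part on its residuals.
    shared_hat = np.zeros(d)
    group_hats = [np.zeros(d) for _ in range(groups)]
    for _ in range(10):
        pooled_X = np.vstack(Xs)
        pooled_r = np.concatenate([y - X @ g_hat for X, y, g_hat in zip(Xs, ys, group_hats)])
        shared_hat = Lasso(alpha=0.05).fit(pooled_X, pooled_r).coef_
        group_hats = [Lasso(alpha=0.05).fit(X, y - X @ shared_hat).coef_
                      for X, y in zip(Xs, ys)]
    print(np.round(shared_hat[:6], 2), np.round(group_hats[0][:6], 2))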
