Results 171 to 180 of about 25,409,385 (300)
Wigner Distribution Sets Universal Lower Bound for Quantum Advantage in Gaussian Boson Sampling. [PDF]
Kocharovsky VV, Kalra K.
europepmc +1 more source
This study integrates random matrix theory (RMT) and principal component analysis (PCA) to improve the identification of correlated regions in HIV protein sequences for vaccine design. PCA validation enhances the reliability of RMT‐derived correlations, particularly in small‐sample, high‐dimensional datasets, enabling more accurate detection of ...
Mariyam Siddiqah +3 more
wiley +1 more source
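The snippet above describes filtering a sample correlation matrix with random matrix theory and reading correlated positions off the surviving principal components. A minimal toy sketch of that generic idea (synthetic data, not the study's HIV pipeline; the Marchenko–Pastur edge is the standard RMT noise bound, and the 0.2 loading threshold is an arbitrary illustration):

```python
# Toy RMT + PCA filter: eigenvalues of a sample correlation matrix above the
# Marchenko-Pastur upper edge are treated as signal; the loadings of those
# eigenvectors flag correlated columns. Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 50                   # toy "sequence" data
X = rng.standard_normal((n_samples, n_features))
X[:, :5] += rng.standard_normal((n_samples, 1))   # inject one correlated block

C = np.corrcoef(X, rowvar=False)                  # feature-feature correlations
evals, evecs = np.linalg.eigh(C)                  # ascending eigenvalues

q = n_features / n_samples                        # aspect ratio p / n
lambda_max = (1 + np.sqrt(q)) ** 2                # Marchenko-Pastur upper edge

signal = evals > lambda_max                       # RMT filter: keep non-noise modes
top = evecs[:, signal]                            # PCA loadings of the signal modes
correlated = np.where(np.abs(top[:, -1]) > 0.2)[0]  # columns 0..4 should dominate
print(correlated)
```

On this toy dataset the injected five-column block produces one eigenvalue well above the noise edge, and its eigenvector concentrates on those columns.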
Optimal Finite Difference Angular Velocity Estimation for Spacecraft. [PDF]
Leo JP, Enright JP.
europepmc +1 more source
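The title above concerns estimating spacecraft angular velocity by finite differences of sampled attitude. As a toy illustration of the underlying numerics only (a plain second-order central difference on a made-up 1-D attitude profile, not the paper's optimal scheme):

```python
# Central-difference angular rate estimate from sampled attitude angles.
# The attitude profile theta(t) below is an invented example.
import numpy as np

h = 0.1                                    # sample period [s]
t = np.arange(0, 2, h)
theta = 0.5 * t + 0.2 * np.sin(t)          # sampled attitude angle [rad]

# central difference: O(h^2) accurate, needs one sample on each side
omega = (theta[2:] - theta[:-2]) / (2 * h)
true_omega = 0.5 + 0.2 * np.cos(t[1:-1])   # analytic derivative for comparison

print(np.max(np.abs(omega - true_omega)))  # error scales as h^2
```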
Deep learning‐based denoising models are applied to DNA data storage systems to enhance error reduction and data fidelity. By integrating DnCNN with DNA sequence encoding methods, the study demonstrates significant improvements in image quality and correction of substitution errors, revealing a promising path toward robust and efficient DNA‐based ...
Seongjun Seo +5 more
wiley +1 more source
Intra-individual structural covariance network in patients with chronic neck and shoulder pain: a longitudinal brain structure analysis. [PDF]
Liu T +6 more
europepmc +1 more source
A low‐cost, self‐driving laboratory is developed to democratize autonomous materials discovery. Using this "frugal twin" hardware architecture with Bayesian optimization, the platform rapidly converges to target lower critical solution temperature (LCST) values while self‐correcting from off‐target experiments, demonstrating an accessible route to data‐
Guoyue Xu, Renzheng Zhang, Tengfei Luo
wiley +1 more source
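The snippet above describes a Bayesian-optimization loop that steers experiments toward a target property value. A self-contained toy version of such a loop, assuming a zero-mean GP surrogate with an RBF kernel and a lower-confidence-bound acquisition; the objective `f`, the target, and all constants are invented stand-ins, not the platform's API:

```python
# Toy target-seeking Bayesian optimization: minimize |f(x) - target|,
# where f stands in for a measured property (e.g. an LCST).
import numpy as np

def f(x):                       # hidden response of the "experiment" (assumed)
    return 20 + 30 * x + 5 * np.sin(6 * x)

target = 35.0                   # desired property value
grid = np.linspace(0, 1, 201)   # candidate experimental conditions

def rbf(a, b, ls=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = [0.1, 0.9]                  # two seed experiments
y = [abs(f(x) - target) for x in X]

for _ in range(8):              # BO loop: GP surrogate + lower confidence bound
    Xa = np.array(X)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))        # jitter for stability
    Ks = rbf(grid, Xa)
    alpha = np.linalg.solve(K, np.array(y))
    mu = Ks @ alpha                                  # posterior mean on the grid
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    lcb = mu - np.sqrt(np.maximum(var, 0))           # acquisition: exploit + explore
    x_next = grid[np.argmin(lcb)]
    X.append(x_next)
    y.append(abs(f(x_next) - target))

best = X[int(np.argmin(y))]
print(best, min(y))   # condition whose measured value is closest to the target
```

The loop self-corrects in the sense the snippet describes: off-target measurements raise the surrogate mean in their neighborhood, pushing subsequent queries elsewhere.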
Long-term intensive golf training induces reconfiguration of brain structural covariance networks. [PDF]
Lei Z, Hou Y, Song X.
europepmc +1 more source
A machine learning method, opt‐GPRNN, is presented that combines the advantages of neural networks and kernel regressions. It is based on additive GPR in optimized redundant coordinates and allows building a representation of the target with a small number of terms while avoiding overfitting when the number of terms is larger than optimal.
Sergei Manzhos, Manabu Ihara
wiley +1 more source
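The additive-GPR idea the snippet refers to (a target built from a sum of low-dimensional kernel terms) can be illustrated generically. This toy fit uses a sum of per-coordinate 1-D RBF kernels on synthetic additive data; it is not the opt-GPRNN implementation and uses no redundant-coordinate optimization:

```python
# Additive Gaussian process regression: the kernel is a sum of one
# 1-D RBF term per input coordinate, matching an additive target.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, (40, 2))
y_train = np.sin(2 * X_train[:, 0]) + X_train[:, 1] ** 2   # an additive target

def k_add(A, B, ls=0.5):
    # additive kernel: one 1-D RBF term per input coordinate
    K = np.zeros((A.shape[0], B.shape[0]))
    for d in range(A.shape[1]):
        K += np.exp(-0.5 * (A[:, d, None] - B[None, :, d]) ** 2 / ls ** 2)
    return K

K = k_add(X_train, X_train) + 1e-6 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)                        # GPR weights

X_test = rng.uniform(-1, 1, (10, 2))
y_pred = k_add(X_test, X_train) @ alpha
y_true = np.sin(2 * X_test[:, 0]) + X_test[:, 1] ** 2
print(np.max(np.abs(y_pred - y_true)))                     # small on this toy set
```

Because the kernel is a sum over coordinates, each added term only has to model a 1-D slice of the target, which is what keeps the representation compact for additive functions.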