Results 251 to 260 of about 302,460
Abstract Graph neural networks (GNNs) process information by passing messages between graph nodes. Because they operate directly on graph-structured data, GNNs are well suited to tasks such as link prediction, node classification, and graph classification.
Amit Sharma +4 more
wiley +1 more source
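Note: the snippet above describes GNN message passing only in broad terms. As a minimal illustration (not the implementation from the listed paper), the Python sketch below performs one round of mean-aggregation message passing on a small toy graph using NumPy; the graph, feature dimensions, and weight matrix are invented for the example.

    import numpy as np

    # Toy undirected graph on 4 nodes, given as an adjacency matrix.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    # Random 2-dimensional node features and a (fixed) weight matrix.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 2))
    W = rng.normal(size=(2, 2))

    # One message-passing step: each node averages its neighbours' features
    # (including itself via self-loops), applies a linear map, then ReLU.
    A_hat = A + np.eye(4)                       # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # inverse degree matrix
    H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)  # updated node embeddings

    print(H)  # node embeddings after one GNN layer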
Maximum principles for the fractional p-Laplacian and symmetry of solutions [PDF]
Wenxiong Chen, Congming Li
semanticscholar +1 more source
Enhanced MRI-PET fusion using Laplacian pyramid and empirical mode decomposition for improved oncology imaging. [PDF]
Suryanarayana G +7 more
europepmc +1 more source
Enhancing generalized spectral clustering with embedding Laplacian graph regularization
Abstract An enhanced generalised spectral clustering framework is presented that addresses the limitations of existing methods by incorporating the Laplacian graph and the group effect into a regularisation term. This significantly improves discrimination power and makes the framework highly effective at handling noisy data.
Hengmin Zhang +5 more
wiley +1 more source
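Note: the abstract above mentions Laplacian graph regularisation only in passing. As a generic point of reference (standard, unregularised spectral clustering, not the paper's enhanced framework), the Python sketch below builds a similarity graph on synthetic data, forms the symmetric normalised graph Laplacian, and clusters its leading eigenvectors with k-means; scikit-learn and NumPy are assumed available.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.cluster import KMeans
    from sklearn.metrics.pairwise import rbf_kernel

    # Synthetic two-cluster data.
    X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

    # Similarity graph via an RBF kernel, then the symmetric normalised
    # Laplacian L_sym = I - D^{-1/2} W D^{-1/2}.
    W = rbf_kernel(X, gamma=10.0)
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

    # Spectral embedding: eigenvectors for the two smallest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    U = eigvecs[:, :2]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalise

    # Cluster the embedded points.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(U)
    print(labels[:20])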
Uniqueness for positive solutions of $p$-Laplacian problem in an annulus [PDF]
Eric Nabana
openalex +1 more source
Further Exploration of an Upper Bound for Kemeny's Constant. [PDF]
Kooij RE, Dubbeldam JLA.
europepmc +1 more source
Boosted unsupervised feature selection for tumor gene expression profiles
Abstract In an unsupervised scenario, eliminating noise and redundant features from tumour gene expression profiles is challenging but essential. However, current unsupervised feature selection methods treat all samples equally and therefore tend to learn discriminative features from simple samples.
Yifan Shi +5 more
wiley +1 more source
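Note: as a baseline point of reference for the entry above (not the boosted method it proposes), the Python sketch below computes the classical Laplacian score, a common unsupervised feature selection criterion that ranks features by how well they preserve the sample-similarity graph; the data and kernel parameter are synthetic.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def laplacian_score(X, gamma=1.0):
        """Classical Laplacian score (He et al., 2005): lower scores indicate
        features that better preserve local sample structure."""
        S = rbf_kernel(X, gamma=gamma)       # sample-similarity graph
        np.fill_diagonal(S, 0.0)
        d = S.sum(axis=1)                    # degrees
        L = np.diag(d) - S                   # unnormalised graph Laplacian
        scores = np.empty(X.shape[1])
        for r in range(X.shape[1]):
            f = X[:, r]
            f = f - (f @ d) / d.sum()        # degree-weighted mean-centring
            denom = f @ (d * f)
            scores[r] = (f @ (L @ f)) / denom if denom > 0 else np.inf
        return scores

    # Toy "expression profile": 100 samples, 50 features; rank the features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))
    ranked = np.argsort(laplacian_score(X))  # lowest-score features first
    print(ranked[:10])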
Comparison theorems on H-type sub-Riemannian manifolds. [PDF]
Baudoin F +3 more
europepmc +1 more source
Laplacian attention: A plug‐and‐play algorithm without increasing model complexity for vision tasks
Abstract Most prevailing attention modules in contemporary research are convolution-based; while they improve the accuracy of deep learning networks on visual tasks, they also increase overall model complexity.
Xiaolei Chen, Yubing Lu, Runyu Wen
wiley +1 more source
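Note: the abstract above motivates attention without added complexity but does not spell out the mechanism. Purely as an illustrative guess at what a parameter-free, Laplacian-based spatial attention could look like (not the authors' algorithm), the Python sketch below reweights a feature map by a sigmoid of its per-channel Laplacian response using a fixed 3x3 kernel, introducing no learnable parameters.

    import numpy as np
    from scipy.ndimage import convolve

    # Fixed (non-learnable) discrete Laplacian kernel.
    LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0,  1.0, 0.0]])

    def laplacian_spatial_attention(feature_map):
        """Reweight an (H, W, C) feature map by a sigmoid of its per-channel
        Laplacian magnitude; no learnable parameters are introduced."""
        out = np.empty_like(feature_map)
        for c in range(feature_map.shape[-1]):
            resp = convolve(feature_map[..., c], LAPLACIAN, mode="nearest")
            attn = 1.0 / (1.0 + np.exp(-np.abs(resp)))  # gate in (0.5, 1)
            out[..., c] = feature_map[..., c] * attn
        return out

    # Example: a random 32x32 feature map with 8 channels.
    fm = np.random.default_rng(0).normal(size=(32, 32, 8))
    print(laplacian_spatial_attention(fm).shape)  # (32, 32, 8)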