2013
In traditional supervised learning, one uses "labeled" data to build a model. However, labeling the training data for real-world applications is difficult, expensive, or time-consuming, as it requires the effort of human annotators, sometimes with specific domain experience and training.
Mohamed Farouk Abdel Hady +1 more
2005
For many classification problems, unlabeled training data are inexpensive and readily available, whereas labeling training data imposes costs. Semi-supervised classification algorithms aim at utilizing information contained in unlabeled data in addition to the (few) labeled data.
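The idea described in this abstract — exploiting cheap unlabeled data alongside a few labeled examples — is often illustrated with self-training. The sketch below is a minimal, hypothetical version: the 1-D data, the nearest-centroid base learner, and the distance-based confidence heuristic are all invented for illustration, not taken from the paper.

```python
# Minimal self-training sketch: fit on labeled data, pseudo-label the most
# confident unlabeled points, and refit. Toy 1-D data, nearest-centroid learner.

def centroids(points, labels):
    """Mean of the points in each class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Class of the nearest centroid, plus distance (smaller = more confident)."""
    y = min(cents, key=lambda c: abs(x - cents[c]))
    return y, abs(x - cents[y])

def self_train(labeled, unlabeled, rounds=5, per_round=2):
    """labeled: list of (x, y); unlabeled: list of x. Returns final centroids."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        cents = centroids([x for x, _ in labeled], [y for _, y in labeled])
        # Rank unlabeled points by confidence (smallest distance first)
        # and pseudo-label only the most confident few per round.
        scored = sorted(((predict(cents, x), x) for x in unlabeled),
                        key=lambda t: t[0][1])
        for (y, _), x in scored[:per_round]:
            labeled.append((x, y))
            unlabeled.remove(x)
    return centroids([x for x, _ in labeled], [y for _, y in labeled])

# Two well-separated 1-D clusters; only two points carry labels.
labeled = [(0.0, "a"), (10.0, "b")]
unlabeled = [0.5, 1.0, 9.0, 9.5, 0.2, 10.2]
cents = self_train(labeled, unlabeled)
print(predict(cents, 0.8)[0])  # classified with the help of pseudo-labels
```

The pseudo-labels gradually pull each centroid toward the full cluster, so the final decision boundary reflects the unlabeled data, not just the two seed labels.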
2020
This chapter is concerned with advanced supervised learning, including semi-supervised learning. We start with the three versions of the Kohonen network: the unsupervised version, the supervised version, and the semi-supervised version.
2018 24th International Conference on Pattern Recognition (ICPR), 2018
Convolutional neural networks (CNNs) attain state-of-the-art performance on various classification tasks, assuming a sufficiently large number of labeled training examples. Unfortunately, curating a sufficiently large labeled training dataset requires human involvement, which is expensive and time-consuming.
Xue-wen Chen +2 more
Semi-supervised learning by disagreement
Knowledge and Information Systems, 2009
In many real-world tasks, there are abundant unlabeled examples but the number of labeled training examples is limited, because labeling the examples requires human effort and expertise. So semi-supervised learning, which tries to exploit unlabeled examples to improve learning performance, has become a hot topic.
Zhi-Hua Zhou, Ming Li
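Disagreement-based methods like the one surveyed here typically train two learners on different "views" of the data and let each pseudo-label confident examples for the other. The sketch below is a hypothetical illustration of that loop: the two-feature data, the 1-D stump learner, and the distance-to-threshold confidence are all made up for illustration, not taken from the survey.

```python
# Co-training-style sketch: two learners, one per feature view, take turns
# pseudo-labeling their most confident unlabeled example. Toy 2-D data.

def fit_stump(values, labels):
    """1-D stump: threshold midway between the two class means (labels 0/1)."""
    m0 = sum(v for v, y in zip(values, labels) if y == 0) / labels.count(0)
    m1 = sum(v for v, y in zip(values, labels) if y == 1) / labels.count(1)
    thr, flip = (m0 + m1) / 2.0, m1 < m0
    def predict(v):
        y = int(v > thr)
        return (1 - y) if flip else y
    def confidence(v):
        return abs(v - thr)  # distance to the decision boundary
    return predict, confidence

def co_train(labeled, unlabeled, rounds=3):
    """labeled: list of ((v1, v2), y); unlabeled: list of (v1, v2)."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        for view in (0, 1):
            vals = [x[view] for x, _ in labeled]
            ys = [y for _, y in labeled]
            predict, conf = fit_stump(vals, ys)
            # This view pseudo-labels its single most confident point,
            # enlarging the training set seen by the other view.
            best = max(unlabeled, key=lambda x: conf(x[view]))
            labeled.append((best, predict(best[view])))
            unlabeled.remove(best)
            if not unlabeled:
                break
    # Final classifiers, one per view, trained on real + pseudo-labels.
    return [fit_stump([x[v] for x, _ in labeled], [y for _, y in labeled])[0]
            for v in (0, 1)]

labeled = [((0.0, 0.1), 0), ((5.0, 4.9), 1)]            # two labeled points
unlabeled = [(0.4, 0.2), (4.6, 5.1), (0.3, 0.5), (4.8, 4.7)]
clf_view1, clf_view2 = co_train(labeled, unlabeled)
```

The "disagreement" driving such methods comes from the two views erring on different examples, so each learner can supply labels the other could not infer on its own.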
2006
In recent years, there has been considerable interest in non-standard learning problems, namely the so-called semi-supervised learning scenarios. Most formulations of semi-supervised learning see the problem from one of two (dual) perspectives: supervised learning (namely, classification) with missing labels, or unsupervised learning (namely, clustering).
On semi-supervised learning and sparsity
2009 IEEE International Conference on Systems, Man and Cybernetics, 2009
In this article we establish a connection between semi-supervised learning and compressive sampling. We show that sparsity and compressibility of the learning function can be obtained from heavy-tailed distributions of filter responses or coefficients in spectral decompositions.
Alexander Balinsky, Helen Balinsky
Semi-Supervised Bilinear Subspace Learning
IEEE Transactions on Image Processing, 2009
Recent research has demonstrated the success of tensor-based subspace learning in both unsupervised and supervised configurations (e.g., 2-D PCA, 2-D LDA, and DATER). In this correspondence, we present a new semi-supervised subspace learning algorithm by integrating the tensor representation and the complementary information conveyed by unlabeled data.
Dong Xu, Shuicheng Yan
Reliable Semi-supervised Learning
2016 IEEE 16th International Conference on Data Mining (ICDM), 2016
In this paper, we propose a Reliable Semi-Supervised Learning framework, called ReSSL, for both static and streaming data. Instead of relaxing different assumptions, we model the reliability of the cluster assumption, quantify the distinct importance of clusters (or evolving micro-clusters on data streams), and integrate the cluster-level information ...
Qinli Yang +3 more
Privileged Semi-Supervised Learning
2018 25th IEEE International Conference on Image Processing (ICIP), 2018
Semi-Supervised Learning (SSL) aims to leverage unlabeled data to improve performance. Due to the lack of supervised information, previous works mainly focus on how to utilize the available unlabeled data to improve the training quality. However, the estimation of the data distribution revealed by the unlabeled examples might not be accurate as their ...
Chao Ma +4 more