Results 281 to 290 of about 338,614 (301)
Some of the following articles may not be open access.
2021
We come to a watermelon field during the harvest season, and the ground is covered with many watermelons. The melon farmer picks up a handful of melons and says that they are all ripe, then points at a few melons on the ground and says that these are not ripe and will need a few more days to ripen.
openaire +2 more sources
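The watermelon scenario is the standard semi-supervised setup in miniature: a handful of labeled examples and many unlabeled ones. A minimal sketch of that setup, assuming scikit-learn's SelfTrainingClassifier and made-up melon features (density and sugar content, purely illustrative):

```python
# Toy version of the watermelon setting: a few labeled melons (ripe / unripe)
# and many unlabeled melons still on the ground. Features and numbers are
# invented for illustration; -1 marks "unlabeled" per scikit-learn convention.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# The handful of melons the farmer labels: 1 = ripe, 0 = unripe.
X_labeled = np.array([[0.70, 0.46], [0.74, 0.38], [0.66, 0.09], [0.24, 0.10]])
y_labeled = np.array([1, 1, 0, 0])

# The rest of the field: unlabeled melons.
X_unlabeled = rng.uniform(low=[0.2, 0.05], high=[0.8, 0.5], size=(50, 2))
y_unlabeled = np.full(50, -1)

X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, y_unlabeled])

# Self-training: iteratively pseudo-label the unlabeled melons the base
# classifier is most confident about, then retrain on the enlarged set.
model = SelfTrainingClassifier(SVC(probability=True, gamma="auto"), threshold=0.8)
model.fit(X, y)
print(model.predict([[0.70, 0.40]]))  # ripeness guess for a new melon
```

The point is only the data layout: the same feature matrix holds labeled and unlabeled rows, and the unlabeled rows participate in training.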
2014
Abstract In the world of modern technology, digital data are generated at lightning speed. These data are typically unlabeled, as obtaining labels often requires time-consuming and costly human input. Semi-supervised learning was introduced to study the problem of using labeled and unlabeled data together to improve learning. Two basic questions ...
Xueyuan Zhou, Mikhail Belkin
openaire +2 more sources
2015
In this chapter, we give an overview of the different approaches developed in semi-supervised learning, as well as the different assumptions leading to these methods. We particularly consider the margin as an indicator of confidence, which constitutes the working hypothesis of algorithms that search for the decision boundary in low-density regions.
Massih-Reza Amini, Nicolas Usunier
openaire +1 more source
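The margin and low-density assumptions mentioned in the snippet can be sketched as a simple self-training loop that pseudo-labels only the unlabeled points lying far from the current decision boundary. This is a hedged illustration of the assumption, not the algorithm from the cited chapter; the LinearSVC base learner and the margin threshold are arbitrary choices:

```python
# Margin-based self-training sketch: an unlabeled point is pseudo-labeled only
# when |decision_function| is large, i.e. it sits far from the (assumed
# low-density) decision boundary. Labels are assumed to be 0/1; the threshold
# is invented for illustration.
import numpy as np
from sklearn.svm import LinearSVC

def margin_self_training(X_lab, y_lab, X_unlab, margin_threshold=1.0, max_iter=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LinearSVC().fit(X_lab, y_lab)
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        margins = clf.decision_function(X_unlab)    # signed distance to boundary
        confident = np.abs(margins) >= margin_threshold
        if not confident.any():                     # nothing is confident enough
            break
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, (margins[confident] > 0).astype(int)])
        X_unlab = X_unlab[~confident]
        clf = LinearSVC().fit(X_lab, y_lab)         # retrain on the enlarged set
    return clf
```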
2013
In traditional supervised learning, one uses "labeled" data to build a model. However, labeling the training data for real-world applications is difficult, expensive, or time consuming, as it requires the effort of human annotators, sometimes with specific domain experience and training.
Mohamed Farouk Abdel Hady +1 more
openaire +2 more sources
Semi-Supervised Bilinear Subspace Learning
IEEE Transactions on Image Processing, 2009
Recent research has demonstrated the success of tensor based subspace learning in both unsupervised and supervised configurations (e.g., 2-D PCA, 2-D LDA, and DATER). In this correspondence, we present a new semi-supervised subspace learning algorithm by integrating the tensor representation and the complementary information conveyed by unlabeled data.
Dong Xu, Shuicheng Yan
openaire +2 more sources
2018 24th International Conference on Pattern Recognition (ICPR), 2018
Convolutional neural networks (CNNs) attain state-of-the-art performance on various classification tasks, assuming a sufficiently large number of labeled training examples. Unfortunately, curating a sufficiently large labeled training dataset requires human involvement, which is expensive and time consuming.
Zeyad Hailat +2 more
openaire +1 more source
2020
This chapter is concerned with advanced supervised learning, including semi-supervised learning. We start with the three versions of the Kohonen network: the unsupervised version, the supervised version, and the semi-supervised version.
openaire +1 more source
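For orientation, the unsupervised version the chapter starts from is the classical self-organizing map. A minimal training-loop sketch, with grid size, learning rate, and neighbourhood width chosen arbitrarily for illustration (none of it is taken from the chapter):

```python
# Minimal sketch of the unsupervised Kohonen network (self-organizing map):
# each input pulls its best-matching unit, and that unit's grid neighbours,
# toward itself. All hyperparameters below are illustrative assumptions.
import numpy as np

def train_som(X, grid=(5, 5), epochs=20, lr=0.5, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    W = rng.random((n_units, X.shape[1]))                   # codebook vectors
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for _ in range(epochs):
        for x in rng.permutation(X):
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))      # neighbourhood function
            W += lr * h[:, None] * (x - W)                  # pull units toward input
    return W, coords
```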
Privileged Semi-Supervised Learning
2018 25th IEEE International Conference on Image Processing (ICIP), 2018
Semi-Supervised Learning (SSL) aims to leverage unlabeled data to improve performance. Due to the lack of supervised information, previous works mainly focus on how to utilize the available unlabeled data to improve the training quality. However, the estimation of the data distribution revealed by the unlabeled examples might not be accurate as their ...
Xingyu Chen +4 more
openaire +1 more source
When Semi-supervised Learning Meets Ensemble Learning
Frontiers of Electrical and Electronic Engineering in China, 2009
Semi-supervised learning and ensemble learning are two important machine learning paradigms. The former attempts to achieve strong generalization by exploiting unlabeled data; the latter attempts to achieve strong generalization by using multiple learners.
openaire +1 more source
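A hedged sketch of one simple way the two paradigms can be combined: train an ensemble of heterogeneous base learners on the labeled data and pseudo-label an unlabeled point only when every member agrees. This illustrates the general idea, not the specific methods the article surveys; the choice of base learners is arbitrary:

```python
# Toy combination of ensemble learning and semi-supervised learning:
# several base learners are trained on the labeled data, and an unlabeled
# point is pseudo-labeled only when all of them vote for the same class.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def ensemble_pseudo_label(X_lab, y_lab, X_unlab):
    learners = [DecisionTreeClassifier(max_depth=3),
                LogisticRegression(max_iter=1000),
                GaussianNB()]
    fitted = [clone(est).fit(X_lab, y_lab) for est in learners]
    votes = np.stack([est.predict(X_unlab) for est in fitted])  # (n_learners, n_unlab)
    unanimous = (votes == votes[0]).all(axis=0)                 # all learners agree
    # Grow the labeled set with the unanimously pseudo-labeled points.
    X_new = np.vstack([X_lab, X_unlab[unanimous]])
    y_new = np.concatenate([y_lab, votes[0, unanimous]])
    return X_new, y_new
```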

