Some of the following articles may not be open access.
Recipe Representation Learning with Networks
Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
Learning effective representations for recipes is essential in food studies for recommendation, classification, and other applications. In contrast to the textual and cross-modal recipe embeddings developed so far, the structural relationships among recipes and food items are less explored.
Yijun Tian et al.
Unsupervised Representation Learning on Attributed Multiplex Network
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022
Embedding learning in multiplex networks has drawn increasing attention in recent years and achieved outstanding performance in many downstream tasks. However, most existing network embedding methods either focus only on the structural information of graphs, rely on human-annotated data, or mainly rely on multi-layer GCNs to encode graphs at the ...
Rui Zhang et al.
Text-enhanced network representation learning
Frontiers of Computer Science, 2020
Network representation learning (NRL) aims at embedding various networks into low-dimensional continuous distributed vector spaces. Most existing representation learning methods focus on learning representations purely from the network topology, i.e., the linkage relationships between network nodes, but the nodes in many networks ...
Yu Zhu et al.
Representation learning in distributed networks
2022
The effectiveness of machine learning (ML) in today's applications largely depends on the quality of the data representation used within ML algorithms. While the high dimensionality of modern data often demands lower-dimensional representations for efficient use of available computational resources, the use ...
Learning Representations in Directed Networks
2015
We propose a probabilistic model for learning continuous vector representations of nodes in directed networks. These representations could be used as high-quality features describing nodes in a graph, implicitly encoding global network structure. The usefulness of the representations is demonstrated on link prediction and graph visualization tasks ...
Oleg U. Ivanov, Sergey O. Bartunov
Transferable Representation Learning with Deep Adaptation Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019
Domain adaptation studies learning algorithms that generalize across source and target domains that exhibit different distributions. Recent studies reveal that deep neural networks can learn transferable features that generalize well to similar novel tasks.
Mingsheng Long et al.
Learning Neural Representations for Network Anomaly Detection
IEEE Transactions on Cybernetics, 2019
This paper proposes latent representation models for improving network anomaly detection. Well-known anomaly detection algorithms often suffer from challenges posed by network data, such as high dimensionality and sparsity, and a lack of anomaly data for training, model selection, and hyperparameter tuning. Our approach is to introduce new regularizers to a ...
Van Loi Cao et al.
Learning Network Representation via Ego-Network-Level Relationship
2019
Network representation, which aims to map each node of a network into a low-dimensional space, is a fundamental problem in network analysis. Most existing works focus on self-level or pairwise-level relationships among nodes to capture network structure; however, these are too simple to characterize the complex dependencies in a network. In this paper, ...
Bencheng Yan, Shenglei Huang
Representation and learning in feedforward neural networks
CWI Quarterly, 1993
Summary: The paper gives an introduction to feedforward neural networks. The aim is to present some of the basics of artificial neural networks, with particular emphasis on the following two central issues. The first central issue of this paper is: in what sense do artificial neural networks represent mathematical functions, and what mathematical ...

