Results 331 to 340 of about 5,239,056 (367)
Some of the following articles may not be open access.
Continuously Learning from User Feedback
2022
Machine Learning has been evolving rapidly over the past years, with new algorithms and approaches being devised to solve the challenges that the new properties of data pose. Specifically, algorithms must now learn continuously and in real time, from very large and possibly distributed sets of data.
Carneiro, Davide Rua +5 more
openaire +2 more sources
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2016
A strong User Experience (UX) discipline has become a business imperative across commercial industry. Accordingly, Human Factors professionals may be part of UX teams in large organizations designing enterprise systems (business-to-business technologies that serve as corporate back-ends or enabling technologies for other products).
Danielle Smith +4 more
openaire +1 more source
SURF: improving classifiers in production by learning from busy and noisy end users
International Conference on AI in Finance, 2020
Supervised learning classifiers inevitably make mistakes in production, perhaps mis-labeling an email, or flagging an otherwise routine transaction as fraudulent.
J. Lockhart +5 more
semanticscholar +1 more source
FedRec: Federated Recommendation With Explicit Feedback
IEEE Intelligent Systems, 2021
Recommendation models have been widely embedded in various online services, most of which are designed with the assumption that users’ original behaviors are available in a central server. This may raise privacy issues.
Guanyu Lin +3 more
semanticscholar +1 more source
Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback
ACM Multimedia, 2020
Reorganizing implicit feedback of users as a user-item interaction graph facilitates the applications of graph convolutional networks (GCNs) in recommendation tasks. In the interaction graph, edges between user and item nodes function as the main element
Yin-wei Wei +4 more
semanticscholar +1 more source
2012
One of the main reasons the Web has revolutionized working life and communications is its immediacy. Unlike printed media, websites can be continually updated at relatively minimal cost and also be available worldwide on a 24/7 basis. However, communication isn’t one-way, and the Web makes it very easy to enable site users to offer feedback.
Craig Grannell +2 more
openaire +1 more source
Adapting User Preference to Online Feedback in Multi-round Conversational Recommendation
Web Search and Data Mining, 2021
This paper concerns user preference estimation in multi-round conversational recommender systems (CRS), which interact with users by asking questions about attributes and recommending items multiple times in one conversation. Multi-round CRS such as EAR
Kerui Xu +5 more
semanticscholar +1 more source
International Conference on Human Factors in Computing Systems, 2020
Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences: promoting over-reliance or undermining trust.
Alison Smith-Renner +6 more
semanticscholar +1 more source
Personalized Language Modeling from Personalized Human Feedback
arXiv.org
Personalized large language models (LLMs) are designed to tailor responses to individual user preferences. While Reinforcement Learning from Human Feedback (RLHF) is a commonly used framework for aligning LLMs with human preferences, vanilla RLHF assumes
Xinyu Li, Z. Lipton, Liu Leqi
semanticscholar +1 more source
On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback
International Conference on Learning Representations
As LLMs become more widely deployed, there is increasing interest in directly optimizing for feedback from end users (e.g. thumbs up) in addition to feedback from paid annotators.
Marcus Williams +5 more
semanticscholar +1 more source