Results 111 to 120 of about 33,931 (272)

Introduction to Multi-Armed Bandits [PDF]

open access: yesFoundations and Trends® in Machine Learning, 2019
Multi-armed bandits is a simple but very powerful framework for algorithms that make decisions over time under uncertainty. An enormous body of work has accumulated over the years, covered in several books and surveys. This book provides a more introductory, textbook-like treatment of the subject.
openaire   +2 more sources
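As a companion to the entry above, here is a minimal sketch of the decide-over-time loop that a multi-armed bandit formalizes, using an epsilon-greedy rule on Bernoulli arms. The arm probabilities, horizon, and epsilon value are illustrative assumptions, not taken from the book.

import random

def epsilon_greedy(reward_probs, horizon=1000, epsilon=0.1):
    """Play a Bernoulli bandit for `horizon` rounds with an epsilon-greedy rule."""
    n_arms = len(reward_probs)
    counts = [0] * n_arms      # pulls per arm
    means = [0.0] * n_arms     # empirical mean reward per arm
    total = 0
    for _ in range(horizon):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])      # exploit
        reward = 1 if random.random() < reward_probs[arm] else 0  # hidden from the learner
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]         # running average
        total += reward
    return total, means

# Three arms with unknown success probabilities (illustrative values).
print(epsilon_greedy([0.2, 0.5, 0.7]))

Over many rounds the running averages concentrate on the best arm, while the epsilon fraction of random pulls keeps the other estimates from going stale.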

Enhancing Analytic Hierarchy Process Modelling Under Uncertainty With Fine‐Tuning LLM

open access: yesExpert Systems, Volume 42, Issue 6, June 2025.
ABSTRACT Given that decision‐making typically encompasses stages such as problem recognition, the generation of alternatives, and the selection of the optimal choice, Large Language Models (LLMs) are progressively being integrated into tasks requiring the enumeration and comparative evaluation of alternatives, thereby promoting more rational decision ...
Haeun Park   +3 more
wiley   +1 more source

Combining Cmab with Matrix Factorization and Clustering For Enhanced Movie Recommendations [PDF]

open access: yesITM Web of Conferences
The contextual multi-armed bandit (CMAB) algorithm faces problems of a large-scale, sparse data matrix and many arms. This paper proposes a method that combines matrix factorization with a clustering algorithm to enhance the performance of the CMAB ...
Xiong Shuyue
doaj   +1 more source
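The combination described in the entry above (matrix factorization plus clustering to shrink the CMAB's arm set) can be outlined roughly as follows. This is not the paper's algorithm, only a sketch under stated assumptions: the toy rating matrix, factor rank, cluster count, and the UCB rule over clusters are all chosen for illustration.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

# Toy user-item rating matrix (sparse in real systems); values are illustrative.
R = np.random.default_rng(0).integers(0, 6, size=(50, 200)).astype(float)

# 1) Matrix factorization: low-rank factors compress the large, sparse matrix.
nmf = NMF(n_components=8, init="nndsvda", random_state=0, max_iter=400)
user_f = nmf.fit_transform(R)      # 50 x 8 user factors
item_f = nmf.components_.T         # 200 x 8 item factors

# 2) Clustering: group similar movies so the bandit faces k arms, not 200.
k = 10
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(item_f)

# 3) Bandit over clusters: pick a cluster (arm) by a UCB index, then rank
#    items inside it with the factor scores. Rewards would come from user
#    feedback; here only one step of the loop is shown.
counts, means = np.zeros(k), np.zeros(k)
def choose_cluster(t):
    ucb = means + np.sqrt(2 * np.log(t + 1) / (counts + 1e-9))
    return int(np.argmax(ucb))

arm = choose_cluster(t=1)
candidates = np.flatnonzero(clusters == arm)                            # movies in the chosen cluster
recommended = candidates[np.argmax(user_f[0] @ item_f[candidates].T)]   # best-scoring movie for user 0

The design point is that exploration happens over a handful of clusters instead of hundreds of individual movies, while the factorization still ranks items within the chosen cluster.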

Irregularized Transits to the South: A Social Force in the Cross‐Border Spatial Dispute in South America

open access: yesThe Journal of Latin American and Caribbean Anthropology, Volume 30, Issue 2, June 2025.
ABSTRACT This article examines Venezuelan irregularized transits in South America, focusing on the dynamics of mobility and control that shape the southern corridor—a transnational space linking the Andean Region (Venezuela, Colombia, Ecuador, Peru, and Bolivia) to the Southern Cone, particularly Chile.
Soledad Álvarez Velasco   +1 more
wiley   +1 more source

Scaling Multi-Armed Bandit Algorithms

open access: yesProceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019
The Multi-Armed Bandit (MAB) is a fundamental model capturing the dilemma between exploration and exploitation in sequential decision making. At every time step, the decision maker selects a set of arms and observes a reward from each of the chosen arms. In this paper, we present a variant of the problem, which we call the Scaling MAB (S-MAB): The goal
Fouché, E., Komiyama, J., Böhm, K.
openaire   +2 more sources
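The set-selection step mentioned in the abstract above (choose several arms per round and observe a reward from each) can be illustrated with a plain multiple-play UCB rule. This is not the S-MAB algorithm from the paper; the arm probabilities, horizon, and fixed set size are illustrative assumptions.

import math, random

def ucb_select_set(means, counts, t, set_size):
    """Pick `set_size` arms with the highest UCB indices (untried arms first)."""
    def index(a):
        if counts[a] == 0:
            return float("inf")
        return means[a] + math.sqrt(2 * math.log(t) / counts[a])
    return sorted(range(len(means)), key=index, reverse=True)[:set_size]

probs = [0.1, 0.3, 0.5, 0.7, 0.9]            # illustrative Bernoulli arms
means, counts = [0.0] * 5, [0] * 5
for t in range(1, 501):
    chosen = ucb_select_set(means, counts, t, set_size=2)
    for a in chosen:                          # observe a reward from each chosen arm
        r = 1 if random.random() < probs[a] else 0
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]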

Multi armed bandit based resource allocation in Near Memory Processing architectures

open access: yesMemories - Materials, Devices, Circuits and Systems
Recent advances in 3D fabrication have allowed handling the memory bottlenecks for modern data-intensive applications by bringing the computation closer to the memory, enabling Near Memory Processing (NMP).
Shubhang Pandey, T.G. Venkatesh
doaj   +1 more source

Multi-Armed Bandit Based Traffic Signal Control for Congestion Management [PDF]

open access: yesITM Web of Conferences
Against the background of rapid urbanization and continuous growth of vehicle numbers, traffic congestion has become increasingly prominent, causing serious negative impacts on urban life and economic development.
Bai Ke
doaj   +1 more source

Moving beyond diagnostic labels in psychiatry: outcome‐linked treatment modelling

open access: yesGeneral Psychiatry, Volume 38, Issue 6, December 2025.
Stanley Lyndon
wiley   +1 more source

Learning the distribution with largest mean: two bandit frameworks*

open access: yesESAIM: Proceedings and Surveys, 2017
Over the past few years, the multi-armed bandit model has become increasingly popular in the machine learning community, partly because of applications including online content optimization. This paper reviews two different sequential learning tasks that
Kaufmann Emilie, Garivier Aurélien
doaj   +1 more source
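For context, the two sequential learning tasks usually contrasted in this line of work are regret minimization and best-arm identification. The formalizations below use standard notation (mu_a for the arm means, A_t for the arm played at time t, delta for the error probability) rather than anything quoted from the paper.

% Regret minimization over a horizon T, where \mu^* = \max_a \mu_a:
R_T \;=\; T\,\mu^* \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} X_{A_t,\,t}\right]

% Fixed-confidence best-arm identification: stop as early as possible and
% return an arm \hat{a} satisfying
\mathbb{P}\big(\mu_{\hat{a}} < \max_a \mu_a\big) \le \delta

The first task trades exploration against exploitation while playing; the second spends samples purely on exploration and is judged by how quickly it can certify the arm with the largest mean.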

Bandit Algorithms for Advertising Optimization: A Comparative Study [PDF]

open access: yesITM Web of Conferences
In recent years, the rapid development of digital advertising has challenged advertisers to make optimal choices among multiple options quickly. This is crucial for increasing user engagement and return on investment.
Tian Ziyue
doaj   +1 more source
