
Collaborative Multi-Agent Multi-Armed Bandit Learning for Small-Cell Caching [PDF]

open access: yes; IEEE Transactions on Wireless Communications, 2020
This paper investigates learning-based caching in small-cell networks (SCNs) when user preferences are unknown. The goal is to optimize the cache placement at each small base station (SBS) to minimize the system's long-term transmission delay.
Xianzhe Xu, M. Tao, Cong Shen
semanticscholar   +1 more source

Finding structure in multi-armed bandits [PDF]

open access: yes; Cognitive Psychology, 2018
Abstract How do humans search for rewards? This question is commonly studied using multi-armed bandit tasks, which require participants to trade off exploration and exploitation. Standard multi-armed bandits assume that each option has an independent reward distribution.
Schulz, Eric   +2 more
openaire   +4 more sources
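The exploration-exploitation trade-off that standard bandit tasks pose can be sketched with a minimal epsilon-greedy agent on independent Bernoulli arms. All parameters here are illustrative, not taken from the paper:

```python
import random

def epsilon_greedy(probs, steps=5000, eps=0.1, seed=0):
    """Play a Bernoulli bandit with independent arms using epsilon-greedy.

    probs: true success probability of each arm (unknown to the agent).
    Returns the empirical value estimates and pull counts per arm.
    """
    rng = random.Random(seed)
    k = len(probs)
    counts = [0] * k          # pulls per arm
    values = [0.0] * k        # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:                 # explore: random arm
            arm = rng.randrange(k)
        else:                                  # exploit current best estimate
            arm = max(range(k), key=lambda a: values[a])
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

With enough steps the agent concentrates its pulls on the highest-mean arm while the forced exploration keeps the other estimates from going stale.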

Multi-armed bandits for performance marketing

open access: yes; International Journal of Data Science and Analytics, 2023
Abstract This paper deals with the problem of optimising bids and budgets of a set of digital advertising campaigns. We improve on the current state of the art by introducing support for multi-ad group marketing campaigns and developing a highly data efficient parametric contextual bandit.
Gigli, M, Stella, F
openaire   +1 more source
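The paper's parametric contextual bandit is not specified in this snippet; as a generic illustration of the contextual-bandit idea it builds on, here is a minimal tabular sketch. The ad-group contexts, arm meanings, and probabilities are invented for illustration:

```python
import random

def contextual_bandit(steps=6000, eps=0.1, seed=4):
    """Tabular contextual bandit: separate value estimates per (context, arm).

    Contexts stand in for ad groups; the two arms stand in for two bid
    levels whose payoff depends on the ad group.
    """
    rng = random.Random(seed)
    # true click probability for each (context, arm); unknown to the learner
    truth = {("group_a", 0): 0.7, ("group_a", 1): 0.3,
             ("group_b", 0): 0.2, ("group_b", 1): 0.6}
    counts = {key: 0 for key in truth}
    values = {key: 0.0 for key in truth}
    reward = 0.0
    for _ in range(steps):
        ctx = rng.choice(["group_a", "group_b"])   # context arrives first
        if rng.random() < eps:                     # explore
            arm = rng.randrange(2)
        else:                                      # exploit within this context
            arm = max((0, 1), key=lambda a: values[(ctx, a)])
        r = 1.0 if rng.random() < truth[(ctx, arm)] else 0.0
        counts[(ctx, arm)] += 1
        values[(ctx, arm)] += (r - values[(ctx, arm)]) / counts[(ctx, arm)]
        reward += r
    return values, reward / steps

est, avg_reward = contextual_bandit()
```

The point of conditioning on context is visible in the learned table: the best arm differs per ad group, which a context-free bandit could not express.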

Performance of Multi-Armed Bandit Algorithms in Dynamic vs. Static Environments: A Comparative Analysis [PDF]

open access: yes; ITM Web of Conferences
This paper conducts a comparative analysis of Multi-Armed Bandit (MAB) algorithms, particularly the Upper Confidence Bound (UCB) and Thompson Sampling (TS) algorithms, and focuses on the performance of these algorithms in both static and dynamic ...
Zhao Boxi
doaj   +1 more source
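A compact, stdlib-only sketch of the two algorithms compared in this paper, run here on a static Bernoulli bandit. Arm probabilities and horizons are illustrative, not the paper's experimental setup:

```python
import math
import random

def ucb1(probs, steps=3000, seed=1):
    """UCB1: play the arm maximising empirical mean plus a confidence bonus."""
    rng = random.Random(seed)
    k = len(probs)
    counts, values = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, steps + 1):
        if t <= k:
            arm = t - 1                       # play every arm once first
        else:
            arm = max(range(k), key=lambda a:
                      values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        total += r
    return total

def thompson(probs, steps=3000, seed=1):
    """Thompson Sampling: sample from Beta posteriors, play the best sample."""
    rng = random.Random(seed)
    k = len(probs)
    wins, losses = [1.0] * k, [1.0] * k       # Beta(1, 1) priors
    total = 0.0
    for _ in range(steps):
        arm = max(range(k), key=lambda a: rng.betavariate(wins[a], losses[a]))
        r = 1.0 if rng.random() < probs[arm] else 0.0
        wins[arm] += r
        losses[arm] += 1.0 - r
        total += r
    return total

r_ucb = ucb1([0.3, 0.6])
r_ts = thompson([0.3, 0.6])
```

Both should collect close to the optimal 0.6 reward per step on this static instance; the paper's question is how such algorithms degrade when the arm means drift.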

Multi-armed bandits with dependent arms

open access: yes; Machine Learning, 2023
We study a variant of the classical multi-armed bandit problem (MABP) which we call Multi-Armed Bandits with dependent arms. More specifically, multiple arms are grouped together to form a cluster, and the reward distributions of arms belonging to the same cluster are known functions of an unknown parameter that is a characteristic of the cluster ...
Rahul Singh   +3 more
openaire   +3 more sources
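The cluster structure described in this abstract can be illustrated with a toy example: two Bernoulli arms whose means are known link functions (theta and 1 - theta) of a shared unknown parameter, so samples from either arm inform both. The link functions and numbers below are invented for illustration, not the paper's model:

```python
import random

def estimate_cluster_parameter(theta=0.7, pulls=4000, seed=2):
    """Two Bernoulli arms in one cluster with known link functions of an
    unknown cluster parameter theta: arm 0 has mean theta, arm 1 has mean
    1 - theta. Observations from EITHER arm can be inverted back to theta,
    so pulling one arm also reveals the other arm's mean.
    """
    rng = random.Random(seed)
    samples = []
    for i in range(pulls):
        arm = i % 2                                # alternate the two arms
        mean = theta if arm == 0 else 1 - theta
        r = 1.0 if rng.random() < mean else 0.0
        # invert the known link function to get a sample estimate of theta
        samples.append(r if arm == 0 else 1 - r)
    return sum(samples) / len(samples)

theta_hat = estimate_cluster_parameter()
```

Because every pull contributes to the same parameter estimate, a learner exploiting this dependence needs far fewer samples than one treating the arms as independent.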

Multi-Armed Bandits and Quantum Channel Oracles [PDF]

open access: yes; Quantum
Multi-armed bandits are one of the theoretical pillars of reinforcement learning. Recently, the investigation of quantum algorithms for multi-armed bandit problems has begun, and it has been found that a quadratic speed-up (in query complexity) is possible ...
Simon Buchholz   +2 more
doaj   +1 more source

Differential Privacy in Social Networks Using Multi-Armed Bandit

open access: yes; IEEE Access, 2022
There has been an exponential growth over the years in the number of users connected to social networks. This has spurred research interest in social networks to ensure the privacy of users. From a theoretical standpoint, a social network is modeled as a ...
Olusola T. Odeyomi
doaj   +1 more source

Characterizing Truthful Multi-armed Bandit Mechanisms [PDF]

open access: yes; SIAM Journal on Computing, 2014
This is the full version of a conference paper published in ACM EC 2009. This revision is re-focused to emphasize the results that do not rely on the "IIA assumption" (see the paper for the definition)
Babaioff, Moshe   +2 more
openaire   +2 more sources

A Change-Detection based Framework for Piecewise-stationary Multi-Armed Bandit Problem [PDF]

open access: yes; AAAI Conference on Artificial Intelligence, 2017
The multi-armed bandit problem has been extensively studied under the stationary assumption. However, in reality, this assumption often does not hold because the reward distributions themselves may change over time.
Fang Liu, Joohyung Lee, N. Shroff
semanticscholar   +1 more source
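The paper's specific change-detection framework is not given in this snippet; as a generic illustration of pairing a bandit learner with a change detector and restarting on detection, here is a crude sketch. The window size, threshold, and reward means are all invented for illustration:

```python
import random
from collections import deque

def restart_on_change(steps=6000, seed=3):
    """Epsilon-greedy on two Bernoulli arms whose means swap halfway through.

    A simple detector compares a recent window of rewards per arm with that
    arm's long-run mean estimate, and resets all estimates when they diverge.
    Returns the number of detected changes.
    """
    rng = random.Random(seed)
    means = [0.8, 0.2]
    counts, values = [0, 0], [0.0, 0.0]
    windows = [deque(maxlen=100), deque(maxlen=100)]
    detections = 0
    for t in range(steps):
        if t == steps // 2:
            means = [0.2, 0.8]                   # change point: means swap
        if rng.random() < 0.1:                   # explore
            arm = rng.randrange(2)
        else:                                    # exploit current best estimate
            arm = max(range(2), key=lambda a: values[a])
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        windows[arm].append(r)
        # declare a change if this arm's recent window mean has drifted far
        # from its long-run estimate
        if counts[arm] >= 300 and len(windows[arm]) == 100:
            recent = sum(windows[arm]) / 100
            if abs(recent - values[arm]) > 0.3:
                detections += 1
                counts, values = [0, 0], [0.0, 0.0]   # restart from scratch
                windows = [deque(maxlen=100), deque(maxlen=100)]
    return detections

n_changes = restart_on_change()
```

Restarting discards the stale estimates that would otherwise keep the learner on the previously-best arm long after the change point.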

Addictive Games: Case Study on Multi-Armed Bandit Game

open access: yesInformation, 2021
The attraction of games comes from the fun players are able to have in them. Gambling games based on the variable-ratio schedule from Skinner's experiments are the most typical addictive games.
Xiaohan Kang   +3 more
doaj   +1 more source
