Results 51 to 60 of about 33,931

Multi-armed bandits in metric spaces [PDF]

open access: yes, Proceedings of the fortieth annual ACM symposium on Theory of computing, 2008
In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active ...
Kleinberg, Robert   +2 more
openaire   +2 more sources
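As a concrete reference point for the small-finite-strategy-set case this snippet contrasts with, here is a minimal UCB1 sketch in Python. The Bernoulli arm means are made up for illustration, and this is not the paper's metric-space (zooming) algorithm.

```python
import math
import random

def ucb1(arm_means, horizon=10_000):
    """Play K Bernoulli arms with UCB1 and return the total payoff."""
    k = len(arm_means)
    counts = [0] * k          # times each arm was played
    sums = [0.0] * k          # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # play each arm once to initialize
        else:
            # pick the arm maximizing empirical mean + confidence radius
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if random.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

print(ucb1([0.2, 0.5, 0.7]))  # hypothetical arm means
```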

The multi-armed bandit, with constraints [PDF]

open access: yes, Annals of Operations Research, 2012
The colorfully named and much-studied multi-armed bandit is the following Markov decision problem: at epochs 1, 2, ..., a decision maker observes the current state of each of several Markov chains with rewards (bandits) and plays one of them. The Markov chains that are not played remain in their current states.
Denardo, Eric V.   +2 more
openaire   +2 more sources
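A minimal sketch of the dynamics described here, assuming hypothetical two-state chains with invented transition matrices and state rewards: the played chain transitions while the others stay frozen. The greedy rule below is only a placeholder, not the paper's constrained formulation (the classical optimum for the unconstrained model is the Gittins index, omitted here).

```python
import random

# Hypothetical two-state chains: transition matrix P and per-state reward r.
chains = [
    {"P": [[0.9, 0.1], [0.4, 0.6]], "r": [1.0, 0.0], "state": 0},
    {"P": [[0.5, 0.5], [0.2, 0.8]], "r": [0.3, 0.9], "state": 1},
]

def play(chain):
    """Collect the current state's reward, then transition the chain."""
    s = chain["state"]
    reward = chain["r"][s]
    chain["state"] = 0 if random.random() < chain["P"][s][0] else 1
    return reward

total = 0.0
for epoch in range(100):
    # Naive greedy choice on the currently observable state rewards.
    i = max(range(len(chains)),
            key=lambda j: chains[j]["r"][chains[j]["state"]])
    total += play(chains[i])   # chains not played keep their current state
print(total)
```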

Context Attentive Bandits: Contextual Bandit with Restricted Context

open access: yes, 2017
We consider a novel formulation of the multi-armed bandit model, which we call the contextual bandit with restricted context, where only a limited number of features can be accessed by the learner at every iteration.
Bouneffouf, Djallel   +3 more
core   +1 more source
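A rough sketch of the restricted-context setting under simplifying assumptions: the learner sees only M of D context features each round and runs epsilon-greedy with per-arm linear estimates fitted on the observed coordinates. The uniform feature subset is my assumption; the paper's method concerns choosing which features to observe, which this sketch does not attempt.

```python
import random

D, M, K = 10, 3, 4           # total features, features visible per round, arms
weights = [[0.0] * D for _ in range(K)]                 # learned, per arm
true_w = [[random.gauss(0, 1) for _ in range(D)]        # synthetic ground truth
          for _ in range(K)]

def step(eps=0.1, lr=0.05):
    x = [random.gauss(0, 1) for _ in range(D)]   # full context (never fully shown)
    obs = random.sample(range(D), M)             # learner inspects only M features

    def score(a):                                # estimate from observed coords
        return sum(weights[a][j] * x[j] for j in obs)

    a = random.randrange(K) if random.random() < eps else max(range(K), key=score)
    reward = sum(true_w[a][j] * x[j] for j in range(D)) + random.gauss(0, 0.1)
    err = reward - score(a)
    for j in obs:                                # SGD update on observed coords only
        weights[a][j] += lr * err * x[j]
    return reward

print(sum(step() for _ in range(5000)))
```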

A cost-sensitive decision tree learning algorithm based on a multi-armed bandit framework [PDF]

open access: yes, 2017
This paper develops a new algorithm for inducing cost-sensitive decision trees that is inspired by the multi-armed bandit problem, in which a player in a casino has to decide which slot machine (bandit) from a selection of slot machines is likely to pay ...
Auer   +8 more
core   +2 more sources
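One way to read the idea, sketched under assumptions: treat each candidate split attribute as an arm, and let UCB allocate subsample "rollouts" whose reward is information gain minus the attribute's test cost. The toy data, costs, and reward definition below are illustrative, not the paper's algorithm.

```python
import math
import random

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def gain_minus_cost(rows, labels, attr, cost):
    """Reward signal: information gain of splitting on attr, minus its test cost."""
    total = entropy(labels)
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        total -= len(sub) / len(labels) * entropy(sub)
    return total - cost[attr]

def pick_split(rows, labels, cost, pulls=200):
    k = len(rows[0])
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, pulls + 1):
        a = (t - 1) if t <= k else max(
            range(k), key=lambda i: sums[i] / counts[i]
            + math.sqrt(2 * math.log(t) / counts[i]))
        idx = random.sample(range(len(rows)), min(30, len(rows)))  # rollout subsample
        r = gain_minus_cost([rows[i] for i in idx], [labels[i] for i in idx], a, cost)
        counts[a] += 1
        sums[a] += r
    return max(range(k), key=lambda i: sums[i] / counts[i])

# Toy data: attribute 1 predicts the label but carries a test cost.
rows = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(200)]
labels = [r[1] for r in rows]
print(pick_split(rows, labels, cost=[0.0, 0.05]))
```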

StreamingBandit: Experimenting with Bandit Policies

open access: yes, Journal of Statistical Software, 2020
A large number of statistical decision problems in the social sciences and beyond can be framed as a (contextual) multi-armed bandit problem. However, it is notoriously hard to develop and evaluate policies that tackle these types of problems, and to use ...
Jules Kruijswijk   +3 more
doaj   +1 more source
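The develop-and-evaluate loop such a tool supports can be sketched generically; this is not StreamingBandit's actual REST interface, just a simulated stream replayed against interchangeable policies so their average rewards can be compared.

```python
import random

def stream(policy, horizon=10_000, means=(0.3, 0.5, 0.6)):
    """Replay a simulated interaction stream and return the average reward."""
    state = {"counts": [0] * len(means), "sums": [0.0] * len(means)}
    total = 0.0
    for _ in range(horizon):
        a = policy(state, len(means))
        r = 1.0 if random.random() < means[a] else 0.0
        state["counts"][a] += 1
        state["sums"][a] += r
        total += r
    return total / horizon

def eps_greedy(state, k, eps=0.05):
    if random.random() < eps or 0 in state["counts"]:
        return random.randrange(k)
    return max(range(k), key=lambda i: state["sums"][i] / state["counts"][i])

def uniform(state, k):
    return random.randrange(k)

print("eps-greedy:", stream(eps_greedy))
print("uniform  :", stream(uniform))
```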

Shrewd Selection Speeds Surfing: Use Smart EXP3!

open access: yes, 2018
In this paper, we explore the use of multi-armed bandit online learning techniques to solve distributed resource selection problems. As an example, we focus on the problem of network selection. Mobile devices often have several wireless networks at their ...
Appavoo, Anuja Meetoo   +2 more
core   +1 more source
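For reference, here is standard EXP3 in a network-selection framing; the snippet's Smart EXP3 adds refinements this sketch omits, and the per-network quality levels are hypothetical.

```python
import math
import random

def exp3(network_means, gamma=0.1, horizon=5000):
    """Standard EXP3 over K 'networks'; rewards must lie in [0, 1]."""
    k = len(network_means)
    w = [1.0] * k
    total = 0.0
    for _ in range(horizon):
        s = sum(w)
        p = [(1 - gamma) * wi / s + gamma / k for wi in w]
        # sample a network according to p
        u, acc, a = random.random(), 0.0, k - 1
        for i, pi in enumerate(p):
            acc += pi
            if u < acc:
                a = i
                break
        # clipped Gaussian as a stand-in for observed throughput
        x = max(0.0, min(1.0, random.gauss(network_means[a], 0.1)))
        w[a] *= math.exp(gamma * (x / p[a]) / k)   # importance-weighted update
        total += x
    return total / horizon

print(exp3([0.4, 0.7, 0.55]))  # hypothetical per-network quality levels
```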

Optimizing Coupon Recommendation Using Multi-Armed Bandit Algorithms [PDF]

open access: yes, ITM Web of Conferences
In recent years, coupon recommendations have become an essential strategy for e-commerce platforms to attract users and increase transaction volume.
Guo Jun
doaj   +1 more source
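One plausible instantiation, not necessarily the paper's: Beta-Bernoulli Thompson sampling over coupon variants, with reward 1 when a coupon is redeemed. The redemption probabilities below are invented.

```python
import random

def thompson_coupons(redeem_probs, horizon=10_000):
    """Each 'arm' is a coupon variant; reward 1 means the user redeemed it."""
    k = len(redeem_probs)
    alpha, beta = [1] * k, [1] * k          # Beta(1, 1) priors
    redemptions = 0
    for _ in range(horizon):
        # sample a plausible redemption rate per coupon, offer the best draw
        a = max(range(k), key=lambda i: random.betavariate(alpha[i], beta[i]))
        if random.random() < redeem_probs[a]:
            alpha[a] += 1
            redemptions += 1
        else:
            beta[a] += 1
    return redemptions / horizon

print(thompson_coupons([0.02, 0.05, 0.08]))  # hypothetical redemption rates
```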

LP-MAB: Improving the Energy Efficiency of LoRaWAN Using a Reinforcement-Learning-Based Adaptive Configuration Algorithm

open access: yes, Sensors, 2023
In the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) are designed to provide low energy consumption while maintaining a long communication range for End Devices (EDs).
Benyamin Teymuri   +3 more
doaj   +1 more source
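A heavily simplified sketch of bandit-driven parameter configuration; LP-MAB's actual state, reward, and update rule are not given in the snippet. Here the arms are hypothetical (spreading factor, TX power) pairs, and the reward trades delivery success against a synthetic energy cost.

```python
import math
import random

# Hypothetical (spreading factor, TX power in dBm) configurations as arms.
configs = [(7, 2), (7, 14), (9, 2), (9, 14), (12, 14)]

def success_prob(sf, tx):   # synthetic channel: higher SF/power, more reliable
    return min(0.99, 0.3 + 0.05 * (sf - 7) + 0.03 * tx)

def energy(sf, tx):         # synthetic cost: airtime grows with SF, power with TX
    return (2 ** (sf - 7)) * (1 + tx / 14)

def ucb_config(horizon=5000):
    k = len(configs)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        a = (t - 1) if t <= k else max(
            range(k), key=lambda i: sums[i] / counts[i]
            + math.sqrt(2 * math.log(t) / counts[i]))
        sf, tx = configs[a]
        acked = random.random() < success_prob(sf, tx)
        # reward: did the uplink get through, per unit of energy spent
        r = (1.0 if acked else 0.0) / energy(sf, tx)
        counts[a] += 1
        sums[a] += r
    return configs[max(range(k), key=lambda i: sums[i] / counts[i])]

print(ucb_config())  # configuration with the best learned reward rate
```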

Nonparametric Stochastic Contextual Bandits

open access: yes, 2018
We analyze the $K$-armed bandit problem where the reward for each arm is a noisy realization based on an observed context under mild nonparametric assumptions.
Guan, Melody Y., Jiang, Heinrich
core   +1 more source
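A simplified stand-in for the nonparametric setting: epsilon-greedy with k-nearest-neighbor reward estimates per arm, making no parametric assumption about how reward depends on context. The 1-D context, the epsilon-greedy rule, and the synthetic reward functions are my simplifications, not the paper's estimator or confidence construction.

```python
import random

def knn_estimate(history, x, k=10):
    """Average reward of the k nearest observed contexts (1-D for simplicity)."""
    if not history:
        return 0.0
    near = sorted(history, key=lambda h: abs(h[0] - x))[:k]
    return sum(r for _, r in near) / len(near)

def knn_bandit(horizon=3000, arms=2, eps=0.1):
    hist = [[] for _ in range(arms)]   # per-arm (context, reward) pairs
    total = 0.0
    for _ in range(horizon):
        x = random.random()            # observed context
        if random.random() < eps:
            a = random.randrange(arms)
        else:
            a = max(range(arms), key=lambda i: knn_estimate(hist[i], x))
        # synthetic truth: arm 0 better for small contexts, arm 1 for large
        mean = 1 - x if a == 0 else x
        r = mean + random.gauss(0, 0.1)
        hist[a].append((x, r))
        total += r
    return total / horizon

print(knn_bandit())
```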

Why Should we Worry about Nigeria's Fragile Security?

open access: yes, The Political Quarterly, EarlyView
Abstract This paper explores the multifaceted implications of Nigeria's persistent security crisis, highlighting its domestic, regional and global consequences. It examines the humanitarian toll, economic disruption, poverty, food insecurity and the erosion of social cohesion within Nigeria. Regionally, it analyses how Nigeria's instability exacerbates ...
Onyedikachi Madueke
wiley   +1 more source
