Assaf Zeevi

“Nonparametric bandits with covariates”

Coauthor(s): Philippe Rigollet.

Editors: A. T. Kalai and M. Mohri


Abstract:
We consider a bandit problem that involves sequential sampling from two populations (arms). Each arm produces a noisy reward realization that depends on an observable random covariate. The goal is to maximize cumulative expected reward. We derive general lower bounds on the performance of any admissible policy, and develop an algorithm whose performance matches the order of this lower bound up to logarithmic terms. This is done by decomposing the global problem into suitably "localized" bandit problems. Proofs blend ideas from nonparametric statistics and traditional methods used in the bandit literature.
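As a rough illustration of the localization idea described above (not the paper's exact policy), the sketch below partitions a one-dimensional covariate space [0, 1] into bins and runs an independent UCB index policy inside each bin. The reward functions, noise level, bin count, and parameter names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb_index(mean, count, t, c=2.0):
    """UCB index: empirical mean plus a confidence radius."""
    if count == 0:
        return np.inf  # force each arm to be tried once per bin
    return mean + np.sqrt(c * np.log(t) / count)

def binned_ucb(T=10_000, n_bins=10, c=2.0):
    """Run a binned UCB policy on a two-armed bandit with a scalar covariate.

    The covariate space [0, 1] is split into n_bins intervals, and a
    separate UCB policy is run inside each bin -- the "localized"
    subproblems. The mean-reward functions below are assumed for
    illustration only.
    """
    f = [lambda x: 0.5 + 0.3 * np.sin(2 * np.pi * x),
         lambda x: 0.5 + 0.3 * np.cos(2 * np.pi * x)]

    counts = np.zeros((n_bins, 2))   # pulls per (bin, arm)
    means = np.zeros((n_bins, 2))    # empirical mean reward per (bin, arm)
    regret = 0.0

    for t in range(1, T + 1):
        x = rng.uniform()                        # observable covariate
        b = min(int(x * n_bins), n_bins - 1)     # which localized problem
        # Choose the arm with the largest UCB index within this bin.
        idx = [ucb_index(means[b, a], counts[b, a], t, c) for a in (0, 1)]
        a = int(np.argmax(idx))
        r = f[a](x) + rng.normal(scale=0.1)      # noisy reward realization
        # Incremental update of the bin-local empirical mean.
        counts[b, a] += 1
        means[b, a] += (r - means[b, a]) / counts[b, a]
        # Regret relative to the pointwise-best arm at this covariate.
        regret += max(f[0](x), f[1](x)) - f[a](x)

    return regret

print(f"cumulative regret: {binned_ucb():.1f}")
```

The bin count trades off approximation against estimation: finer bins track the covariate-dependent optimal arm more closely, but each localized problem then sees fewer samples.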

Source: Proceedings of the 23rd Conference on Learning Theory (COLT)
Exact Citation:
Zeevi, Assaf, and Philippe Rigollet. "Nonparametric bandits with covariates." In Proceedings of the 23rd Conference on Learning Theory (COLT), 54–66. Ed. A. T. Kalai and M. Mohri. New York: Association for Computing Machinery, July 2010.
Pages: 54–66
Place: New York
Date: July 2010