The Implicitly Normalized Forecaster (INF) algorithm is considered an optimal solution for adversarial multi-armed bandit (MAB) problems. However, most existing complexity results for INF rely on restrictive assumptions, such as bounded rewards. A recently proposed related algorithm works in both adversarial and stochastic heavy-tailed MAB settings, but it fails to fully exploit the available data. In this paper, we propose a new version of INF called the Implicitly Normalized Forecaster with clipping (INF-clip) for MAB problems with heavy-tailed reward distributions. We establish convergence results under mild assumptions on the reward distribution and demonstrate that INF-clip is optimal for linear heavy-tailed stochastic MAB problems and performs well for non-linear ones. Furthermore, we show that INF-clip outperforms the best-of-both-worlds algorithm in cases where it is difficult to distinguish between different arms.
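To illustrate the clipping idea, the following sketch runs a bandit learner on rewards with Pareto (infinite-variance) noise, clipping each observed reward before forming an importance-weighted estimate. This is not the paper's algorithm: INF chooses arm probabilities by solving an implicit normalization equation, whereas this sketch substitutes a simpler EXP3-style exponential-weights update; the clipping level, arm means, and noise model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 3, 5000
lam = 10.0                            # clipping level (hypothetical choice)
eta = np.sqrt(np.log(K) / (K * T))    # standard EXP3-style learning rate
gamma = 0.01                          # small uniform exploration mix

true_means = np.array([0.5, 0.3, 0.1])   # hypothetical arm means
w = np.zeros(K)                          # cumulative clipped-reward estimates
counts = np.zeros(K, dtype=int)          # how often each arm is played

for t in range(T):
    p = np.exp(eta * (w - w.max()))      # numerically stabilized softmax
    p /= p.sum()
    p = (1 - gamma) * p + gamma / K      # keep every arm's probability bounded away from 0
    arm = rng.choice(K, p=p)
    counts[arm] += 1
    # Heavy-tailed reward: Pareto noise with tail index 1.5 (infinite variance),
    # centered since E[pareto(1.5)] = 1/(1.5 - 1) = 2
    noise = (rng.pareto(1.5) - 2.0) * 0.1
    reward = true_means[arm] + noise
    clipped = np.clip(reward, -lam, lam)   # the clipping step
    w[arm] += clipped / p[arm]             # importance-weighted estimate
```

Clipping bounds the otherwise infinite-variance reward estimates, which is what lets the learner cope with heavy tails; with this seed and horizon, the learner concentrates its plays on the best arm despite the noise.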
Journal: Computational Management Science (Bandits and online learning)
Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits
arXiv:2305.06743
Cite this paper
@article{dorn2024forecaster,
title = {Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits},
author = {Yuriy Dorn and Nikita Kornilov and Nikolay Kutuzov and Alexander Nazin and Eduard Gorbunov and Alexander Gasnikov},
journal = {Computational Management Science},
year = {2024},
url = {https://arxiv.org/abs/2305.06743}
}