Recently, gradient-free optimization methods have become a major tool in reinforcement learning and memory-efficient LLM fine-tuning. Under the standard setting of uniformly bounded noise variance, an optimal accelerated algorithm has been derived. However, the bounded-variance assumption is restrictive and often not satisfied in practice. We therefore relax it, allowing the noise distribution to be heavy-tailed and thus broadening the class of problems that can be solved. We propose gradient-free algorithms with a zeroth-order oracle under adversarial noise with unbounded variance, for non-smooth convex and convex-concave optimization problems. We apply a clipping operator to handle heavy-tailedness and batching to enable efficient computation via parallelization. Our analysis provides asymptotic bounds on key quantities: iteration complexity, oracle complexity, and the maximal adversarial noise level.
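The building blocks mentioned in the abstract can be illustrated with a minimal sketch (not the paper's exact algorithm or constants): a randomized two-point zeroth-order gradient estimator, batch averaging of estimates, and a Euclidean clipping operator applied before the step. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def clip(v, lam):
    # Euclidean clipping operator: rescale v so that ||v|| <= lam.
    norm = np.linalg.norm(v)
    return v if norm <= lam else (lam / norm) * v

def two_point_grad_estimate(f, x, gamma, rng):
    # Randomized two-point zeroth-order estimator:
    # draw e uniformly on the unit sphere and return
    # (d / (2*gamma)) * (f(x + gamma*e) - f(x - gamma*e)) * e,
    # an (approximately) unbiased estimate of the gradient of a
    # smoothed version of f.
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return d / (2 * gamma) * (f(x + gamma * e) - f(x - gamma * e)) * e

def clipped_batched_zo_step(f, x, gamma, lr, lam, batch, rng):
    # Average a batch of independent estimates (the batch loop is
    # trivially parallelizable), clip the average, then step.
    g = np.mean(
        [two_point_grad_estimate(f, x, gamma, rng) for _ in range(batch)],
        axis=0,
    )
    return x - lr * clip(g, lam)
```

For example, iterating `clipped_batched_zo_step` on the non-smooth convex function `f(x) = ||x||_1` drives the iterate toward the minimizer at the origin; clipping bounds the influence of any single heavy-tailed estimate on the update.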
Journal: Optimization Methods and Software
Zeroth-order methods for non-smooth stochastic problems under heavy-tailed noise
Taylor & Francis
Cite this paper
@article{bashirov2026zeroth,
  title = {Zeroth-order methods for non-smooth stochastic problems under heavy-tailed noise},
  author = {Nail Bashirov and Alexander Gasnikov and Aleksandr Lobanov},
  journal = {Optimization Methods and Software},
  publisher = {Taylor \& Francis},
  year = {2026}
}