Welcome to Lilian Besson’s “AlgoBandits” project documentation!

A research framework for Single-Player and Multi-Player Multi-Armed Bandit (MAB) algorithms: UCB, KL-UCB, Thompson sampling... and MusicalChair, ALOHA, MEGA, rhoRand, etc.


See more on the GitHub page for this project: https://naereen.github.io/AlgoBandits/. The project is also hosted on Inria GForge, and the documentation can be read online at http://banditslilian.gforge.inria.fr/.

Bandit algorithms, Lilian Besson’s “AlgoBandits” project

This repository contains the code of my numerical environment, written in Python, used to perform numerical simulations of single-player and multi-player Multi-Armed Bandit (MAB) algorithms.
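To give a flavor of the kind of single-player policy this environment simulates, here is a minimal, self-contained sketch of the classical UCB1 index policy on Bernoulli arms. This is an illustrative example only, not the project's actual implementation; the function name `ucb1` and its signature are invented for this sketch.

```python
import math
import random


def ucb1(arms, horizon, seed=0):
    """Run the UCB1 policy on Bernoulli arms for a fixed horizon.

    arms    : list of success probabilities, one per arm.
    horizon : total number of pulls.
    Returns (pull counts per arm, total accumulated reward).
    """
    rng = random.Random(seed)
    counts = [0] * len(arms)   # number of times each arm was pulled
    sums = [0.0] * len(arms)   # cumulative reward of each arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= len(arms):
            # Initialization: pull each arm once.
            arm = t - 1
        else:
            # Pull the arm maximizing the UCB index:
            # empirical mean + sqrt(2 log(t) / n_pulls).
            arm = max(
                range(len(arms)),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arms[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return counts, total_reward
```

On a two-armed problem with means 0.1 and 0.9, the policy should concentrate most of its pulls on the better arm after a short exploration phase.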


I (Lilian Besson) started my PhD in October 2016, and this project has been part of my ongoing research since December 2016.



This documentation is publicly available, but the code is not (yet) open-source. I will publish it soon, once it is stable and clean enough to be used by others.


Should you use bandits?

In 2015, Chris Stucchio advised against the use of bandits for improving A/B testing, in contrast to his 2013 blog post in favor of bandits for the same purpose. Both articles are worth reading, but this research is not about A/B testing: bandit algorithms have already proved efficient for real-world and simulated cognitive radio networks. See for instance [“Multi-armed bandit based policies for cognitive radio’s decision making issues”, W. Jouini, D. Ernst, C. Moy, J. Palicot, 2009].
