Thompson Sampling Tutorial (Jupyter Notebook, updated Jan 25, 2019)
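To give a flavor of what such a tutorial covers, here is a minimal Thompson Sampling sketch for a Bernoulli bandit with Beta posteriors. The arm probabilities, round count, and function name are illustrative choices, not taken from the repository.

```python
# Minimal Thompson Sampling sketch for a Bernoulli bandit.
# (Arm success probabilities below are made up for illustration.)
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [1] * k  # Beta(1, 1) uniform prior on each arm
    failures = [1] * k
    total_reward = 0
    for _ in range(n_rounds):
        # Draw a plausible mean for each arm from its Beta posterior
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])  # play the best draw
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward

succ, fail, reward = thompson_sampling([0.2, 0.5, 0.8])
```

Because sampling from the posterior naturally balances exploration and exploitation, play concentrates on the best arm as its posterior sharpens.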
Contextual bandit algorithm LinUCB (Linear Upper Confidence Bound) as proposed by Li, Chu, Langford, and Schapire
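The core of disjoint LinUCB (a per-arm ridge regression plus a confidence bonus) can be sketched as follows; the class name, feature dimension, and `alpha` are illustrative and not this repository's API.

```python
# Minimal sketch of disjoint LinUCB: each arm keeps its own
# ridge-regression statistics and adds a confidence-width bonus.
import numpy as np

class LinUCBArm:
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)    # Gram matrix (ridge regularized)
        self.b = np.zeros(d)  # accumulated reward-weighted features

    def ucb(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                       # point estimate
        bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # confidence width
        return theta @ x + bonus

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def choose(arms, x):
    # Pick the arm with the highest upper confidence bound for context x
    return max(range(len(arms)), key=lambda i: arms[i].ucb(x))
```

The bonus term shrinks as an arm accumulates observations along a direction, so under-explored arms keep an optimistic estimate until they are tried.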
Another A/B test library
Privacy-Preserving Bandits (MLSys'20)
Solutions and figures for problems from Reinforcement Learning: An Introduction by Sutton & Barto
Client that handles the administration of StreamingBandit online, or straight from your desktop. Set up and run streaming (contextual) bandit experiments in your browser.
Network-Oriented Repurposing of Drugs Python Package
Movie recommendation using cascading bandits, namely CascadeLinTS and CascadeLinUCB
Reinforcement learning
Adversarial multi-armed bandit algorithms
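The classic algorithm for the adversarial setting is Exp3; a minimal sketch follows (the learning-rate `gamma`, reward function, and round count are illustrative assumptions, not drawn from any listed repository).

```python
# Minimal Exp3 sketch for adversarial bandits: exponential weights
# over arms, updated with importance-weighted reward estimates.
import math
import random

def exp3(reward_fn, k, n_rounds, gamma=0.1, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * k
    picks = []
    for t in range(n_rounds):
        total = sum(weights)
        # Mix the weight distribution with uniform exploration
        probs = [(1 - gamma) * w / total + gamma / k for w in weights]
        r, acc, arm = rng.random(), 0.0, k - 1
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                arm = i
                break
        reward = reward_fn(t, arm)   # reward assumed in [0, 1]
        est = reward / probs[arm]    # importance-weighted estimate
        weights[arm] *= math.exp(gamma * est / k)
        picks.append(arm)
    return picks

# Toy adversary that always rewards arm 2
picks = exp3(lambda t, arm: 1.0 if arm == 2 else 0.0, k=3, n_rounds=500)
```

The importance weighting keeps the reward estimates unbiased even though only the played arm's reward is observed, which is what lets Exp3 handle adversarially chosen reward sequences.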
Solutions to the Stanford CS234 Reinforcement Learning 2022 course assignments.
Research project on automated A/B testing of software by evolutionary bandits.
A precise yet detailed presentation of the core concepts of reinforcement learning.
A small collection of Bandit Algorithms (ETC, E-Greedy, Elimination, UCB, Exp3, LinearUCB, and Thompson Sampling)
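The simplest member of such a collection, epsilon-greedy, can be sketched in a few lines (arm probabilities, `eps`, and the function name are illustrative, not this collection's code):

```python
# Minimal epsilon-greedy sketch for a Bernoulli bandit:
# explore uniformly with probability eps, otherwise exploit
# the arm with the highest running mean reward.
import random

def epsilon_greedy(true_probs, n_rounds=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k
    values = [0.0] * k  # incremental mean reward per arm
    for _ in range(n_rounds):
        if rng.random() < eps:
            arm = rng.randrange(k)                         # explore
        else:
            arm = max(range(k), key=lambda i: values[i])   # exploit
        reward = 1 if rng.random() < true_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.3, 0.7])
```

Unlike UCB or Thompson Sampling, the exploration rate here is fixed, so a constant fraction of pulls is "wasted" on inferior arms no matter how much evidence accumulates.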
Online learning approaches to optimize database join operations in PostgreSQL.