From 8e63cee2893e6c6b9492fdb2c1e860ad56bd4a92 Mon Sep 17 00:00:00 2001
From: "C. Benjamins" <75323339+benjamc@users.noreply.github.com>
Date: Tue, 4 Apr 2023 15:24:06 +0200
Subject: [PATCH 1/3] Update README.md

Fix documentation link.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2773f81c..10df90d5 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@ Benchmarks include:
 
 ![Screenshot of each environment included in CARL.](./docs/source/figures/envs_overview.png)
 
-For more information, check out our [documentation](https://carl.readthedocs.io/en/latest/)!
+For more information, check out our [documentation](https://automl.github.io/CARL/)!
 
 ## Installation
 

From 347fac2578ace1569c4139743593ca5194afe562 Mon Sep 17 00:00:00 2001
From: Theresa Eimer
Date: Fri, 9 Jun 2023 14:41:31 +0200
Subject: [PATCH 2/3] Update CITATION.bib (#92)

---
 CITATION.bib | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/CITATION.bib b/CITATION.bib
index 316285c8..f6a2b599 100644
--- a/CITATION.bib
+++ b/CITATION.bib
@@ -1,7 +1,14 @@
-@inproceedings { BenEim2021a,
-    author = {Carolin Benjamins and Theresa Eimer and Frederik Schubert and André Biedenkapp and Bodo Rosenhahn and Frank Hutter and Marius Lindauer},
-    title = {CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning},
-    booktitle = {NeurIPS 2021 Workshop on Ecological Theory of Reinforcement Learning},
-    year = {2021},
-    month = dec
+@inproceedings { BenEim2023a,
+    author = {Carolin Benjamins and
+              Theresa Eimer and
+              Frederik Schubert and
+              Aditya Mohan and
+              Sebastian Döhler and
+              André Biedenkapp and
+              Bodo Rosenhahn and
+              Frank Hutter and
+              Marius Lindauer},
+    title = {Contextualize Me - The Case for Context in Reinforcement Learning},
+    journal = {Transactions on Machine Learning Research},
+    year = {2023},
 }

From af97301ea77689533ba4efcffcb1fbede76498a0 Mon Sep 17 00:00:00 2001
From: Theresa Eimer
Date: Fri, 9 Jun 2023 15:56:23 +0200
Subject: [PATCH 3/3] Some doc updates

---
 docs/index.rst                        | 32 +++++++++++++++++++++++++--
 docs/source/api/autoapi_link.rst      |  1 +
 docs/source/api/index.rst             |  1 +
 docs/source/cite.rst                  | 23 ++++++++++---------
 docs/source/environments/carl_env.rst |  7 +-----
 5 files changed, 45 insertions(+), 19 deletions(-)
 create mode 100644 docs/source/api/autoapi_link.rst

diff --git a/docs/index.rst b/docs/index.rst
index ff21c06d..87b20b54 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -16,12 +16,40 @@ Welcome to the documentation of CARL, a benchmark library for Contextually Adapt
 Reinforcement Learning. CARL extends well-known RL environments with context, making
 them easily configurable to test robustness and generalization.
 
-CARL is being developed in Python 3.9.
-
 Feel free to check out our `paper `_ and our
 `blog post `_ on CARL!
 
+What is Context?
+----------------
+
+.. image:: ../figures/concept.png
+   :width: 75%
+   :align: center
+   :alt: CARL contextually extends Brax' Fetch.
+
+Context can change the goals and dynamics of an environment.
+The interaction interval in Pendulum, for example, can make that environment much easier or harder.
+The same is true for the composition of a Mario level.
+So context is a tool for creating variations in reinforcement learning environments.
+In contrast to other approaches like procedural generation, however, context can easily be defined and controlled by the user.
+That means you have full control over the difficulty and degree of variations in your environments.
+This way, you can gain detailed insights into the generalization capabilities of your agents - where do they excel and where do they fail?
+CARL can help you find the answer!
+
+If you're interested in learning more about context, check out our `paper `_ on context in RL or the corresponding `blog post `_.
+
+What can you do with CARL?
+--------------------------
+
+With CARL, you can easily define train and test distributions across different features of your favorite environments.
+Examples include:
+- training on short CartPole poles and testing if the policy can transfer to longer ones
+- training LunarLander on Moon gravity and seeing if it can also land on Mars
+- training and testing on a uniform distribution of floor friction values on Halfcheetah
+... and many more!
+
+Simply decide on a generalization task you want your agent to solve, choose the context feature(s) to vary, and train your agent just like on any other gymnasium environment.
 
 Contact
 -------
 

diff --git a/docs/source/api/autoapi_link.rst b/docs/source/api/autoapi_link.rst
new file mode 100644
index 00000000..ef1930e7
--- /dev/null
+++ b/docs/source/api/autoapi_link.rst
@@ -0,0 +1 @@
+.. include:: ../../autoapi/src/envs/carl_env/index
\ No newline at end of file

diff --git a/docs/source/api/index.rst b/docs/source/api/index.rst
index 6efa6069..3b957615 100644
--- a/docs/source/api/index.rst
+++ b/docs/source/api/index.rst
@@ -6,6 +6,7 @@ This page gives an overview of all CARL environments.
 
 .. toctree::
 
+    autoapi_link
     ../../autoapi/src/envs/carl_env/index
     ../../autoapi/src/envs/classic_control/index
     ../../autoapi/src/envs/box2d/index

diff --git a/docs/source/cite.rst b/docs/source/cite.rst
index 0246786d..eacdcfcf 100644
--- a/docs/source/cite.rst
+++ b/docs/source/cite.rst
@@ -10,15 +10,16 @@ If you use CARL in your research, please cite us with the following Bibtex entry
 .. code:: text
 
     @inproceedings { BenEim2021a,
-        author = {Carolin Benjamins and
-                  Theresa Eimer and
-                  Frederik Schubert and
-                  André Biedenkapp and
-                  Bodo Rosenhahn and
-                  Frank Hutter and
-                  Marius Lindauer},
-        title = {CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning},
-        booktitle = {NeurIPS 2021 Workshop on Ecological Theory of Reinforcement Learning},
-        year = {2021},
-        month = dec
+        author = {Carolin Benjamins and
+                  Theresa Eimer and
+                  Frederik Schubert and
+                  Aditya Mohan and
+                  Sebastian Döhler and
+                  André Biedenkapp and
+                  Bodo Rosenhahn and
+                  Frank Hutter and
+                  Marius Lindauer},
+        title = {Contextualize Me - The Case for Context in Reinforcement Learning},
+        journal = {Transactions on Machine Learning Research},
+        year = {2023},
     }

diff --git a/docs/source/environments/carl_env.rst b/docs/source/environments/carl_env.rst
index 4ed18b19..23cec924 100644
--- a/docs/source/environments/carl_env.rst
+++ b/docs/source/environments/carl_env.rst
@@ -1,18 +1,13 @@
 The CARL Environment
 ====================
 
-CARL extends the standard `gym interface `_ with context.
+CARL extends the standard `gymnasium interface `_ with context.
 This context changes the environment's transition dynamics and reward function,
 creating a greater challenge for the agent.
 During training we therefore can encounter different contexts and train for generalization.
 We exemplarily show how Brax' Fetch is extended and embedded by CARL.
 Different instantiations can be achieved by setting the context features to different values.
 
-.. image:: ../figures/concept.png
-   :width: 75%
-   :align: center
-   :alt: CARL contextually extends Brax' Fetch.
-
 Here we give a brief overview of the available options on how to create and work with contexts.
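As a rough illustration of the workflow the new index.rst text describes (choose the context features to vary, define train and test distributions, then use the environment like any other gymnasium environment), a minimal sketch might look like the following. The import path ``carl.envs.CARLCartPole``, the ``contexts`` argument, and the ``"length"`` context feature are assumptions for illustration and may differ between CARL versions.

.. code:: python

    from carl.envs import CARLCartPole  # assumed import path; may differ between CARL versions

    # Contexts map an instance id to a dict of context feature values (numbers are illustrative).
    train_contexts = {i: {"length": 0.4 + 0.05 * i} for i in range(5)}  # short poles
    test_contexts = {i: {"length": 0.8 + 0.05 * i} for i in range(5)}   # longer poles

    train_env = CARLCartPole(contexts=train_contexts)
    test_env = CARLCartPole(contexts=test_contexts)

    # The usual gymnasium interaction loop; a context is drawn per episode.
    obs, info = train_env.reset()
    terminated = truncated = False
    while not (terminated or truncated):
        action = train_env.action_space.sample()
        obs, reward, terminated, truncated, info = train_env.step(action)

Evaluating the trained policy on ``test_env`` then shows whether behavior learned on short poles generalizes to longer ones.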