Validity of the reported results and details of the hyperparameters

As is well known, deep offline RL algorithms are highly sensitive to hyperparameters and small implementation details. You can get a sense of this by skimming a few papers and comparing their reported results. Surprisingly, even different DNN libraries are known to produce different results with logically identical code [1].
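
To make this concrete, here is a minimal sketch (our own illustration, not code from [1]) of one well-known source of such discrepancies: Flax and PyTorch use different default weight initializations for a plain dense/linear layer, so two networks with identical architecture and training logic start from differently distributed parameters.

```python
import jax
import jax.numpy as jnp
from jax.nn import initializers

key = jax.random.PRNGKey(0)
fan_in, fan_out = 256, 256

# Flax's Dense default: LeCun-normal kernel initialization.
w_flax = initializers.lecun_normal()(key, (fan_in, fan_out))

# PyTorch's nn.Linear default: Kaiming-uniform with a=sqrt(5), which
# reduces to Uniform(-1/sqrt(fan_in), 1/sqrt(fan_in)) for the weights.
bound = 1.0 / jnp.sqrt(fan_in)
w_torch_like = jax.random.uniform(key, (fan_in, fan_out), minval=-bound, maxval=bound)

# The two "default" layers start with noticeably different weight scales.
print(jnp.std(w_flax))        # ≈ 1/sqrt(256)   ≈ 0.0625
print(jnp.std(w_torch_like))  # ≈ 1/sqrt(3·256) ≈ 0.036
```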

In such a situation, it is difficult to guarantee the same performance across different codebases. In other words, there is no such thing as "the performance of CQL" as a single unified value; what exists is the performance of CQL with particular hyperparameters in a particular implementation. Considering this, we did our best to choose a single reliable existing codebase for each algorithm and to port that codebase into a single file with the same hyperparameters.
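
For illustration, here is a hypothetical sketch of what "a single file with the same hyperparameters" can look like; the field names and values below are made up for this example and are not the actual jax-corl configuration:

```python
from dataclasses import dataclass

@dataclass
class CQLConfig:
    # Hypothetical values: in practice, each field would be copied verbatim
    # from the reference codebase (JaxCQL in the case of CQL), so that
    # results stay comparable to the paper that used that codebase.
    actor_lr: float = 3e-4
    critic_lr: float = 3e-4
    discount: float = 0.99
    cql_min_q_weight: float = 5.0
    batch_size: int = 256
    n_updates: int = 1_000_000

config = CQLConfig()  # one object holds every knob; no hidden defaults elsewhere
```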

Here, for each algorithm, we report:

  • The codebase we referred to (also listed in the README)
  • A published paper that uses the codebase for its baseline experiments (if one exists)
  • The performance reported by that paper (if none exists, an accepted report using a different codebase)

We could run each reference codebase ourselves, but that takes time. Furthermore, for those who would like to use jax-corl as a baseline in their own research, results from published papers provide a more reliable certification.

AWAC

  • Codebase: jaxrl
  • Paper using the codebase: Cal-QL [2]
  • Results

CQL

  • Codebase: JaxCQL
  • Paper using the codebase: Cal-QL [2]
  • Results

IQL

  • Codebase: Original
  • Paper using the codebase: TD7 [3]
  • Results

TD3+BC

  • Codebase: Original
  • Paper using the codebase: TD7 [3]
  • Results

DT
