
Step-by-step process for MMBench dataset collection #4

@zef1611

Description


Hi @nicklashansen and team,

First of all, thank you for the great work on "Learning Massively Multitask World Models for Continuous Control" and for releasing the code!

I am reading through the paper and am very interested in the MMBench benchmark introduced in the work. As I am relatively new to this area, I am trying to get a better sense of how the benchmark is constructed programmatically.

Could you provide some high-level guidance on the step-by-step process used to collect the demonstrations for the 200 tasks?

I would love to understand how to work with the benchmark directly or potentially extend it in the future, so any instructions would be extremely helpful.

Thanks for your time!
