Replies: 1 comment
Thank you for your insight!
-
I am interested in your multi-agent safe reinforcement learning environments. I have carefully reviewed the code for 'SafetyPointMultiGoal1-v0' and currently have two points of confusion.

1. In this environment, both the observation and the state include radar readings for the two agents. Given the overall symmetry of the environment, do the radar readings of the different agents differ only in the angle from which each agent observes the environment?
2. Judging from cost['agent_0']['cost_contact_other'], it appears that agent 1 and agent 0 are required to avoid colliding with each other. However, agent 1's observation does not seem to include a radar component pointing at agent 0. How is agent 1 expected to avoid agent 0 without observing it?

I am not sure whether my understanding is correct, and I would appreciate the opportunity to discuss this further with the code authors.
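To make the second question concrete, here is a minimal sketch of the check I have in mind. The component names and the dictionary layout below are hypothetical placeholders for illustration only, not the actual keys used by the environment:

```python
# Hypothetical per-agent observation components (placeholder names, NOT the
# real Safety-Gymnasium keys) used to illustrate the check: does any radar
# component in one agent's observation point at the other agent?
mock_obs_components = {
    "agent_0": ["accelerometer", "velocimeter", "goal_radar", "hazards_radar"],
    "agent_1": ["accelerometer", "velocimeter", "goal_radar", "hazards_radar"],
}

def sees_other_agent(components, other_agent):
    """Return True if any observation component name references the other agent."""
    return any(other_agent in name for name in components)

for agent, components in mock_obs_components.items():
    other = "agent_1" if agent == "agent_0" else "agent_0"
    print(f"{agent} has a radar toward {other}: {sees_other_agent(components, other)}")
```

With component lists like the ones above, neither agent exposes a radar toward the other, which is what prompted my second question.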