[Question] QuadCopter training with camera #1451
-
Hi, apologies if this question has been asked before or if the answer is obvious. I'm still relatively new to reinforcement learning, robotics, and simulation, having started just a month ago. I'm currently working on a reinforcement learning task for a drone that needs to navigate through a level to reach a desired position on the opposite side. To achieve this, I've added a camera to the drone for perception. My question is: how should I approach training for such an application? In your opinion, what would be the fastest and most effective way to implement this? Would it be better to:
Thanks in advance for your responses
Replies: 3 comments 7 replies
-
I would also really like to know. Currently the quadcopter example only has a direct version, not a manager-based one, and I'm not sure a manager-based version is even feasible, since it doesn't seem implementable through ActionsCfg's EffortActionCfg and similar mechanisms. So I personally think a ready-made visual quadcopter may be difficult to implement.
-
This is a great discussion. I will move it into our Discussions section for the team to follow up on. Thanks for posting this.
-
@JulienHansen, in order to provide an appropriate solution it is necessary to know in detail what observation and action space you expect to use.
Well, I asked for an initial network structure to get an idea of a possible skrl configuration definition.
Again, if you are going to use skrl you need version 1.4.0 (not released yet): the develop branch.
Here is an example:
left network:
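Since the network diagram isn't reproducible here, below is a minimal PyTorch sketch of the kind of pixel-to-action network such a skrl policy might wrap. The 84x84 RGB input resolution, the Nature-DQN-style convolutional stack, and the 4-rotor action head are all assumptions for illustration, not details taken from the example or the diagram.

```python
# Hypothetical CNN policy body for an image-based quadcopter task.
# ASSUMPTIONS: 84x84 RGB camera observation, 4 rotor-thrust actions.
import torch
import torch.nn as nn

class CameraPolicyNet(nn.Module):
    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Convolutional encoder (Nature-DQN-style strides/kernels)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # 84x84 input -> 7x7 feature map with 64 channels = 3136 features
        self.head = nn.Sequential(
            nn.Linear(3136, 256), nn.ReLU(),
            nn.Linear(256, num_actions), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (N, 3, 84, 84), values scaled to [0, 1]
        return self.head(self.encoder(pixels))

net = CameraPolicyNet()
actions = net(torch.zeros(2, 3, 84, 84))  # batch of 2 dummy frames
```

To use this with skrl, you would wrap a network like this inside a `Model` subclass (e.g. with `GaussianMixin` for a stochastic PPO policy) and return its output from the model's `compute()` method; the exact wrapper depends on the agent you pick.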