Hello Dobot Community,
I’m exploring the feasibility of training a Dobot Magician arm (preferably the Magician E6 model) to perform tasks such as rearranging cubes on a table using reinforcement learning. I have a few questions:
- Is DobotStudio Pro adaptable as a simulator for learning environments (i.e., can it run a deep learning framework and access a GPU), or should we look towards custom-built environments for such tasks?
- Is there comprehensive documentation or an official API for the Dobot Magician’s TCP mode? We would need it for direct control and for the data collection that reinforcement learning requires.
- Given the requirements above, does the Magician E6 offer significant advantages over the basic model for educational or research-focused reinforcement learning projects? If so, what are they, specifically in terms of infrastructure and software support?
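For context, here is the kind of setup I have in mind: a minimal Gym-style environment that drives the arm over TCP. Everything protocol-specific below is a placeholder (the IP address, port 29999, and the `Home()` / `SwapCubes()` command strings are my assumptions, not the official Dobot API), which is exactly why I’m asking about documentation. The transport is injectable so the logic can be tested offline without a robot:

```python
import random
import socket


class CubeRearrangeEnv:
    """Minimal sketch of a cube-rearranging environment with a Gym-style API.

    All protocol details are hypothetical placeholders until the official
    Dobot TCP documentation is confirmed.
    """

    def __init__(self, send_command=None, n_cubes=3):
        self.n_cubes = n_cubes
        # Transport is injectable so the env can be unit-tested offline.
        self._send = send_command or self._tcp_send

    def _tcp_send(self, cmd):
        # Hypothetical controller address/port; replace once the real
        # protocol is known from official docs.
        with socket.create_connection(("192.168.1.6", 29999), timeout=2.0) as s:
            s.sendall((cmd + "\n").encode())
            return s.recv(1024).decode()

    def reset(self):
        # Scramble the cube slots until they are out of order.
        self.state = list(range(self.n_cubes))
        while self.state == sorted(self.state):
            random.shuffle(self.state)
        self._send("Home()")  # hypothetical command string
        return tuple(self.state)

    def step(self, action):
        # action = (slot_a, slot_b): ask the arm to swap two cubes.
        a, b = action
        self._send(f"SwapCubes({a},{b})")  # hypothetical command string
        self.state[a], self.state[b] = self.state[b], self.state[a]
        done = self.state == sorted(self.state)
        reward = 1.0 if done else -0.01
        return tuple(self.state), reward, done, {}
```

For offline testing I would stub the transport, e.g. `env = CubeRearrangeEnv(send_command=lambda cmd: "OK")`, then call `env.reset()` and `env.step((0, 1))` as usual.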
Links to guides, official documentation, or personal experiences with similar projects would be incredibly valuable.
Thank you for your time and assistance.
Elen.