Welcome to the official repository of the Dropbear Humanoid Robot, developed by Hyperspawn & Pointblank! Dropbear is an advanced humanoid robot designed to operate in varied environments, showcasing agility, precision, and intelligence.
Dropbear is a cutting-edge humanoid robot featuring advanced AI and high-performance hardware, designed for seamless human interaction, exploration, and task execution in extreme conditions. This project encapsulates our vision at Hyperspawn Robotics for the future of humanoid robots.
- Height: 6 ft 2 in (1880 mm)
- Weight: 45 kg
- Actuators: lightweight brushless DC servo motors with precision planetary gear reduction, driven by MCX500 drivers
- Sensors: Vision, Audio, IMU, Pressure.
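The MyActuator RMD-X series drives listed under hardware below are typically commanded over a CAN bus. As a rough, hedged illustration of low-level joint control, the following sketch sends an absolute position command to one RMD-X drive using python-can. The 0xA4 command byte layout follows MyActuator's published RMD servo CAN protocol, but the motor ID, socketcan channel, and target values are assumptions; verify everything against the datasheet for your firmware before driving real hardware.

```python
# Hedged sketch: absolute position command to a MyActuator RMD-X joint drive.
# The 0xA4 command and its byte layout come from MyActuator's public RMD
# servo CAN protocol; motor ID, channel, and targets below are assumptions.
import struct

import can  # pip install python-can

MOTOR_ID = 1                       # drive's configured CAN ID (1-32)
ARBITRATION_ID = 0x140 + MOTOR_ID  # single-motor command frame ID

def position_command(angle_deg: float, max_speed_dps: int) -> can.Message:
    """Build a position closed-loop frame (command byte 0xA4)."""
    # little-endian: cmd byte, null byte, uint16 speed limit, int32 angle
    # (protocol unit: 0.01 deg/LSB)
    data = struct.pack("<BBHi", 0xA4, 0x00, max_speed_dps, int(angle_deg * 100))
    return can.Message(arbitration_id=ARBITRATION_ID, data=data,
                       is_extended_id=False)

if __name__ == "__main__":
    # "can0" socketcan channel is an assumption about the host setup
    with can.Bus(interface="socketcan", channel="can0", bitrate=1000000) as bus:
        bus.send(position_command(angle_deg=45.0, max_speed_dps=360))
        print(bus.recv(timeout=1.0))  # drive replies with a status frame
```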
Detailed specifications, CAD models, and schematics are available for the following hardware components:
- Actuators: MyActuator RMD-X8 and RMD-X10
- Sensors
  - Vision (cameras): visual perception, object recognition, and navigation
  - IMU (Inertial Measurement Unit): combines accelerometers and gyroscopes for orientation and balance (see the sensor-fusion sketch after this list)
  - Pressure sensors: detect the force exerted on the robot, aiding in gripping and interaction with objects
  - Audio (microphones): voice recognition and environmental sound detection
- Control Units
  - NVIDIA Jetson Orin
  - Custom FPGAs
- Body Frame Material: 3D-printed ABS, extruded aluminium
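As a toy illustration of the accelerometer-and-gyroscope fusion mentioned in the IMU entry above, here is a complementary filter for a single pitch angle. This is a minimal sketch, not Dropbear's actual state estimator: the blend factor, sample rate, and axis conventions are all assumptions.

```python
# Minimal complementary filter: fuse the gyro rate (stable short-term) with
# the accelerometer's gravity direction (stable long-term) into one pitch
# estimate. ALPHA, DT, and the axis conventions are illustrative assumptions.
import math

ALPHA = 0.98   # weight on the integrated gyro estimate
DT = 0.01      # 100 Hz IMU sample period (assumed)

def update_pitch(pitch_deg: float, gyro_y_dps: float,
                 accel_x_g: float, accel_z_g: float) -> float:
    gyro_pitch = pitch_deg + gyro_y_dps * DT                   # integrate rate
    accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))
    return ALPHA * gyro_pitch + (1.0 - ALPHA) * accel_pitch

# Usage: call once per IMU sample.
pitch = 0.0
pitch = update_pitch(pitch, gyro_y_dps=1.5, accel_x_g=0.02, accel_z_g=0.99)
```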
Dropbear uses large language models (LLMs) for their ability to process and understand human language. Vision LLMs extend the capabilities of traditional LLMs by integrating visual data processing, enabling Dropbear not just to "see" but to understand and interpret visual information in a contextually relevant manner.
- Natural Language Understanding: Dropbear understands spoken or written instructions.
- Object Recognition: Dropbear can identify and categorize objects within its visual field.
- Navigation: Dropbear can navigate complex environments by recognizing landmarks and obstacles.
- Interaction: Dropbear can engage in conversational AI, providing responses and acting on user commands.
- Learning: Continuously improves through interactions, adapting to new phrases and contexts.
Dropbear utilizes a pre-trained model (LLaVA-1.6 8B), fine-tuned for robotic applications and enhanced by continuous learning from interactions.
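The fine-tuned weights are not published in this repository, but the sketch below shows how a LLaVA-1.6-class checkpoint can be queried with a camera frame through Hugging Face Transformers. The llava-hf/llama3-llava-next-8b-hf checkpoint name, the image path, and the prompt are assumptions for illustration, not Dropbear's deployed model.

```python
# Hedged sketch: ask a LLaVA-1.6-class VLM about a camera frame using the
# Transformers LLaVA-NeXT classes. Checkpoint name and image path are
# placeholders; Dropbear's fine-tuned weights are not published here.
import torch
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

CHECKPOINT = "llava-hf/llama3-llava-next-8b-hf"  # assumed base checkpoint
processor = LlavaNextProcessor.from_pretrained(CHECKPOINT)
model = LlavaNextForConditionalGeneration.from_pretrained(
    CHECKPOINT, torch_dtype=torch.float16, device_map="auto")

frame = Image.open("camera_frame.jpg")  # e.g. one frame from a head camera
conversation = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "List the objects on the table and where they are."},
    ],
}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=frame, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```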
Work such as RT-2 and Open X-Embodiment shows that vision-language models (VLMs) can be transformed into powerful vision-language-action (VLA) models that directly control a robot, by combining VLM pre-training with robotic data.
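Concretely, RT-2-style VLAs discretize each continuous action dimension into a fixed number of bins so that actions can be emitted as ordinary text tokens. Below is a minimal sketch of the decoding side; the bin count, dimension names, and action ranges are illustrative assumptions, since Dropbear's actual action space is not specified here.

```python
# RT-2-style action de-tokenization sketch: each action dimension is one of
# N_BINS discrete bin indices, mapped back to a continuous command. Bin
# count, dimension names, and ranges are illustrative assumptions.
N_BINS = 256
ACTION_RANGES = {              # (min, max) per dimension, assumed
    "dx": (-0.05, 0.05),       # end-effector translation per step, metres
    "dy": (-0.05, 0.05),
    "dz": (-0.05, 0.05),
    "gripper": (0.0, 1.0),     # gripper open fraction
}

def tokens_to_action(token_ids: list[int]) -> dict[str, float]:
    """Map one discrete bin index per dimension to a continuous command."""
    action = {}
    for (name, (lo, hi)), tok in zip(ACTION_RANGES.items(), token_ids):
        action[name] = lo + (hi - lo) * tok / (N_BINS - 1)
    return action

# e.g. a model that emitted bins [128, 64, 200, 255] commands roughly
# {'dx': 0.0002, 'dy': -0.0249, 'dz': 0.0284, 'gripper': 1.0}
print(tokens_to_action([128, 64, 200, 255]))
```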
Dropbear can be used as a proxy avatar: controlled through VR gear such as a motion-tracking suit, the robot precisely mimics your actions. While you interact with physical objects, VR gloves provide haptic feedback for an immersive, telepresence-style experience.
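As a hedged sketch of how such a teleoperation loop can be wired up: tracked wrist poses are read from the VR SDK and streamed to the robot at a fixed rate, with inverse kinematics and glove haptics handled on the robot side. The message schema, network address, and get_tracker_pose() stub below are hypothetical placeholders, not Dropbear's actual teleop interface.

```python
# Hypothetical teleop sender: stream VR wrist poses to the robot over UDP.
# Schema, address, rate, and the tracker stub are illustrative assumptions.
import json
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 9000)  # assumed robot IP and port

def get_tracker_pose(device: str) -> dict:
    """Stub: replace with your VR SDK call (e.g. OpenXR/OpenVR bindings)."""
    return {"pos": [0.3, 0.1, 1.2], "quat": [0.0, 0.0, 0.0, 1.0]}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    msg = {
        "t": time.time(),
        "left_wrist": get_tracker_pose("left_hand"),
        "right_wrist": get_tracker_pose("right_hand"),
    }
    # The robot-side process is assumed to solve IK to joint targets and to
    # send back haptic events for the VR gloves.
    sock.sendto(json.dumps(msg).encode(), ROBOT_ADDR)
    time.sleep(1 / 60)  # ~60 Hz command stream (assumed)
```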
For a step-by-step guide to assembling the Dropbear Humanoid Robot, refer to the Assembly Guide. To explore low-level control, check out the corresponding folder in this repo! For the head and neck specifically, click here!
Instructions and guidelines for operating the Dropbear Humanoid Robot can be found in the User Manual.
Contributions are welcome! Please refer to the Contribution Guide for details.
Dropbear Humanoid Robot is licensed under the MIT License.
For additional information and inquiries, please visit Hyperspawn Robotics or contact us at contact@hyperspawn-robotics.com.
Join us in advancing humanoid robotics!