Instruction-following vision-language model (VLM): grounded text instructions executed via multi-modal reasoning
Updated Aug 1, 2025 - Python
C7 — A Two-Hemisphere Grounded Cognitive Architecture