SEER (Semantic Extraction for Enhanced Reasoning) enhances GPT-4V's capability to interact with graphical user interfaces (GUIs) by extracting semantic meanings from UI elements and identifying interactable regions. This enables region-grounded actions, making multimodal environments more intuitive and interactive.
SEER pushes the boundaries of multimodal AI by combining vision-based UI parsing and advanced machine learning techniques to create structured, actionable insights from GUI screenshots.
- Semantic Parsing: Extracts UI elements' semantic meanings, categorizing buttons, icons, and text regions.
- Interactive Region Detection: Accurately identifies clickable or interactable regions.
- Local Semantics Integration: Enriches region data with descriptive functionality to improve context comprehension.
- Region-Grounded Actions: Enables action generation contextualized to GUI regions.
- Gradio Demo: Provides an interactive interface for testing SEER's capabilities.
SEER is designed to decompose complex tasks into structured steps, leveraging multiple components to alleviate computational burdens on GPT-4V and enhance decision-making accuracy.
SEER integrates outputs from three key components to produce a structured, DOM-like representation of the UI, overlaid with bounding boxes for interactable elements:
- Finetuned Interactable Icon Detection Model
- Finetuned Icon Description Model
- OCR Module
This parsing process simplifies GPT-4V's tasks by focusing on semantic and functional information extraction.
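The README does not spell out the output schema, so the snippet below is only a hypothetical illustration of what such a DOM-like, region-tagged parse might look like; every field name and value is invented for clarity.

```python
# Hypothetical DOM-like parse of one screenshot; all field names and
# values are illustrative, not SEER's actual output schema.
parsed_ui = [
    {"id": 0, "type": "icon", "bbox": [34, 12, 66, 44],   # from the detector
     "interactable": True,
     "description": "gear icon that opens the settings menu"},  # from the captioner
    {"id": 1, "type": "text", "bbox": [80, 14, 210, 40],  # from the OCR module
     "interactable": False,
     "description": "Search the web"},
]
```

A representation like this lets GPT-4V refer to elements by ID rather than reasoning over raw pixels.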
Identifying interactable regions is a foundational step:
- A custom dataset of 67k UI screenshots was curated, with bounding boxes derived from DOM trees of public web pages.
- Bounding boxes from the interactable-region detection and OCR modules are merged, treating boxes that overlap by more than 90% as duplicates so each element is represented once.
- Each region is assigned a unique ID to facilitate precise action mapping.
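As a rough sketch of the merge-and-deduplicate step described above (the box format, function names, and exact overlap definition here are assumptions, not the repository's code):

```python
# Sketch of merging detector and OCR boxes with >90% overlap treated as
# duplicates; boxes are (x1, y1, x2, y2) tuples. Illustrative only.

def overlap_ratio(a, b):
    """Intersection area divided by the smaller box's area."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    smaller = min(area(a), area(b))
    return inter / smaller if smaller > 0 else 0.0

def merge_regions(detector_boxes, ocr_boxes, threshold=0.9):
    """Keep each box only if it does not overlap an already-kept box by
    more than the threshold; assign each survivor a unique ID."""
    merged = []
    for box in detector_boxes + ocr_boxes:
        if all(overlap_ratio(box, kept["bbox"]) <= threshold for kept in merged):
            merged.append({"id": len(merged), "bbox": box})
    return merged
```

For example, `merge_regions([(0, 0, 100, 100)], [(5, 5, 95, 95)])` keeps only the detector box, since the OCR box is fully contained in it.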
To improve understanding of UI elements:
- A dataset of 7k icon-description pairs was curated using GPT-4o and used to finetune a BLIP-v2 model.
- This finetuned model generates accurate descriptions of icon functionality.
- The descriptions and detected texts are integrated into prompts alongside the UI screenshot.
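A minimal inference sketch for such a finetuned captioner, assuming the checkpoint under `weights/icon_caption_blip2/` (the directory from the setup section) was saved in Hugging Face format; the crop path and prompt text are illustrative:

```python
# Sketch: caption a cropped UI icon with a finetuned BLIP-2 checkpoint.
# Assumes weights/icon_caption_blip2/ holds a model in Hugging Face
# format; the image path and prompt are illustrative.
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

processor = Blip2Processor.from_pretrained("weights/icon_caption_blip2")
model = Blip2ForConditionalGeneration.from_pretrained("weights/icon_caption_blip2")

icon = Image.open("icon_crop.png").convert("RGB")  # one detected region, cropped
inputs = processor(images=icon, text="The image shows", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(ids[0], skip_special_tokens=True))
```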
By incorporating local semantics, SEER addresses limitations in GPT-4V's ability to simultaneously identify semantic information and predict actions. This integration significantly enhances its performance in multimodal tasks.
- Create and activate the Python environment:

  ```bash
  conda create -n "seer" python=3.12
  conda activate seer
  pip install -r requirements.txt
  ```
- Download the pre-trained model checkpoints:

  Access the models from HuggingFace and place them in the appropriate directories:

  - `weights/icon_detect/`
  - `weights/icon_caption_florence/`
  - `weights/icon_caption_blip2/`
- Convert safetensor files to PyTorch format:

  ```bash
  python weights/convert_safetensor_to_pt.py
  ```
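If you are curious what such a conversion involves, here is a generic sketch (not the repository's actual script; the paths are examples only):

```python
# Generic safetensors -> .pt conversion sketch; the repository's
# convert_safetensor_to_pt.py may differ. Paths are examples only.
import torch
from safetensors.torch import load_file

state_dict = load_file("weights/icon_detect/model.safetensors")  # read tensors
torch.save(state_dict, "weights/icon_detect/model.pt")           # re-save for torch.load
```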
Test SEER's capabilities with the Gradio-powered demo:

```bash
python gradio_demo.py
```

Explore SEER further with the sample use cases provided in `demo.ipynb`.
- UI Parsing Model: Decomposes screenshots into actionable semantic data.
- Interactive Region Detection: Recognizes clickable and functional UI areas.
- Action Grounding Framework: Maps detected actions to the corresponding GUI elements.
- Local Semantics Integration: Enriches UI parsing with descriptive labels for improved task performance.
Feel free to fork the repository, make changes, and submit pull requests. Contributions are welcome!
This project is licensed under the CC 4.0 License. See the LICENSE file for details.
Pirate-Emperor
- GitHub: Pirate-Emperor
- Reddit: PirateKingRahul
- Twitter: PirateKingRahul
- Discord: PirateKingRahul
- LinkedIn: PirateKingRahul
- Skype: Join Skype
- Medium: PirateKingRahul
Thank you for visiting the SEER project!