The supported models are listed in the format `Model Name: model name used on the command line`:
- QwenVL-2.5: qwenvl2.5_7b
- InternVL-2.5
- LLaMA3.2-Vision
- Molmo
- ....
- GPT-4o
- Claude
- Gemini-Pro
The generation pipeline is collected into a single file, `generate_response.py`. Run it with the command-line model name, for example:
```bash
python -m generate_response --model qwenvl2.5_7b
```
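For reference, here is a minimal sketch of how the entry point in `generate_response.py` could dispatch on `--model`. Only the `--model` flag and the `qwenvl2.5_7b` name come from the command above; the registry contents and checkpoint identifier below are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch of the generate_response.py entry point.
# The registry mapping and checkpoint name are illustrative assumptions.
import argparse

# Illustrative mapping from command-line model names to checkpoint identifiers.
MODEL_REGISTRY = {
    "qwenvl2.5_7b": "Qwen/Qwen2.5-VL-7B-Instruct",
}

def main():
    parser = argparse.ArgumentParser(description="Generate model responses.")
    parser.add_argument("--model", required=True, choices=sorted(MODEL_REGISTRY),
                        help="command-line model name, e.g. qwenvl2.5_7b")
    args = parser.parse_args()

    checkpoint = MODEL_REGISTRY[args.model]
    # Load the selected model and run the shared generation loop here.
    print(f"Would load {args.model} from {checkpoint} and generate responses.")

if __name__ == "__main__":
    main()
```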
TBD