I am Zhiqiu Lin, a final-year PhD student at Carnegie Mellon University working on the evaluation of generative models. I wanted to share some of our recent work with you, which I hope might be of interest:
1 - VQAScore (ECCV'24): A simple but effective alignment score for text-to-image/video/3D generation that agrees strongly with human judgments. VQAScore can be run with a single line of Python here! Google's Imagen3 adopted VQAScore as the strongest replacement for CLIPScore.
2 - GenAI-Bench (CVPR'24 SynData Workshop): A benchmark of 1,600 complex prompts from professional designers. We also show that VQAScore serves as a strong reward metric for re-ranking DALL-E 3 generated images. GenAI-Bench received the Best Short Paper award at the SynData@CVPR24 workshop and was adopted in Imagen3's report.
Best,
Zhiqiu
KOALA is great work! Congratulations on NeurIPS!