Maintained by: Teo Wu (Haoning Wu)
🌟New: We have published a benchmark for multi-modality large language models (MLLMs) on low-level vision and visual quality assessment!
See our 🖥️codebase and 📑paper!
We are a young research team from Nanyang Technological University (NTU) and SenseTime Research, aiming to build efficient and explainable image and video quality assessment approaches and to explore the perceptual mechanisms behind human quality perception.
Code repositories for our works under this project:
MaxVQA and MaxWell database (ACM MM, 2023) | Paper
- An Extended 16-dimensional Video Quality Assessment Database
- Query Multi-Axis, Dimension-Specific Quality via Language (CLIP); see the sketch after this list
- Try our demo at Github
- State-of-the-art Method for No-Reference Video Quality Assessment
- First Multi-dimensional Dataset for No-Reference Video Quality Assessment
- See our repository at Github
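
The core idea behind MaxVQA's language-based querying can be illustrated with a short CLIP sketch: embed an image (or video frame) together with a pair of antonym text prompts per quality dimension, then read a score from the softmaxed similarities. This is a minimal, illustrative sketch only; the prompt pairs, dimension names, and the frame path are assumptions for demonstration, not the exact MaxVQA pipeline (see the repository above for the real implementation).

```python
# Minimal sketch of language-prompted quality querying with CLIP.
# Prompt pairs and dimensions are illustrative, not MaxVQA's exact ones.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# One antonym prompt pair per (assumed) quality dimension.
dimensions = {
    "sharpness": ("a sharp photo", "a blurry photo"),
    "exposure":  ("a well-exposed photo", "a poorly exposed photo"),
    "noise":     ("a clean photo", "a noisy photo"),
}

image = preprocess(Image.open("frame.png")).unsqueeze(0).to(device)  # placeholder frame

with torch.no_grad():
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    for name, (positive, negative) in dimensions.items():
        tokens = clip.tokenize([positive, negative]).to(device)
        txt_feat = model.encode_text(tokens)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        # Softmax over the two prompts; index 0 corresponds to the "good" prompt.
        score = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)[0, 0].item()
        print(f"{name}: {score:.3f}")
```

In the actual method, quality is queried along many more dimensions over whole videos and the features are further adapted; the sketch only conveys the antonym-prompt idea.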
FAST-VQA / FasterVQA | ECCV 2022 | TPAMI 2023
- The first end-to-end Video Quality Assessment method family!
- Super efficient: real-time even on an Apple M1 CPU (see the fragment-sampling sketch after this list)!
- See our weights/code/toolbox at Github
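
Much of FAST-VQA's efficiency comes from fragment (grid mini-patch) sampling: instead of feeding full-resolution frames to the network, small patches are sampled from a spatial grid and spliced into a compact input. Below is a minimal, illustrative sketch of that sampling idea on a single frame; the grid size, patch size, and function name are assumptions for illustration and do not mirror the toolbox's actual API.

```python
# Illustrative sketch of grid mini-patch ("fragment") sampling on one frame.
# Grid and patch sizes are assumed values, not the official toolbox defaults.
import torch

def sample_fragments(frame: torch.Tensor, grid: int = 7, patch: int = 32) -> torch.Tensor:
    """Splice one random patch per grid cell into a compact 'fragment' frame.

    frame: (C, H, W) tensor, with each grid cell at least `patch` pixels wide/tall.
    Returns a (C, grid * patch, grid * patch) tensor.
    """
    c, h, w = frame.shape
    cell_h, cell_w = h // grid, w // grid
    out = torch.empty(c, grid * patch, grid * patch, dtype=frame.dtype)
    for i in range(grid):
        for j in range(grid):
            # Random top-left corner of a patch inside grid cell (i, j).
            y0 = i * cell_h + int(torch.randint(0, cell_h - patch + 1, (1,)))
            x0 = j * cell_w + int(torch.randint(0, cell_w - patch + 1, (1,)))
            out[:, i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = \
                frame[:, y0:y0 + patch, x0:x0 + patch]
    return out

# A 1080p frame becomes a 224x224 spliced input for the quality network.
fragments = sample_fragments(torch.rand(3, 1080, 1920))
print(fragments.shape)  # torch.Size([3, 224, 224])
```

In the full method the sampled grid positions are kept aligned across neighboring frames so temporal distortions stay visible to the network; refer to the linked repository for the end-to-end pipeline and pretrained weights.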
Contact Teo Wu (Haoning Wu, haoning001@e.ntu.edu.sg or realtimothyhwu@gmail.com) for inquiries related to code and datasets.