
pycvvdp: Introduce saving quality metrics by default #7


Open · wants to merge 1 commit into main

Conversation

@vibhoothi commented Jun 17, 2024

This allows us to have a CSV file (which can be extended to JSON/XML in the future) for the quality metrics computed by the program. By default, it saves to $output/${basename}-metrics.csv.

This is enabled by default because why not

Edit:

Sample:

Frame Number,cvvdp,PU21-PSNR-Y,PU21-PSNR-RGB2020
0,9.959638,39.131298,38.098446
1,9.880101,38.201233,37.380569
2,9.859364,37.959854,37.151367
3,9.830442,37.814854,37.006905
4,9.864499,37.908867,37.078354
5,9.826247,37.806644,37.008781
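
Since the diff itself is not rendered in this thread, here is a minimal sketch of how such a file could be written; the function name, the per_frame_metrics structure and the fixed column list are assumptions based on the sample above, not the PR's actual code:

```python
import csv
import os

def save_metrics_csv(output_dir, basename, per_frame_metrics):
    """Write per-frame quality metrics to $output/${basename}-metrics.csv.

    per_frame_metrics is assumed to be a list of dicts such as
    {"cvvdp": 9.96, "PU21-PSNR-Y": 39.13, "PU21-PSNR-RGB2020": 38.10};
    the real PR may store the results differently.
    """
    path = os.path.join(output_dir, f"{basename}-metrics.csv")
    fields = ["Frame Number", "cvvdp", "PU21-PSNR-Y", "PU21-PSNR-RGB2020"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for i, metrics in enumerate(per_frame_metrics):
            writer.writerow({"Frame Number": i, **metrics})
    return path
```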

@mantiuk (Collaborator) commented Jul 9, 2024

I know that VMAF can output the quality per frame, and that the final score is the mean of those per-frame values.

This is not the case for cvvdp - it computes the normalized L2 norm of per-frame scores in a different space (before regressing to JOD). Therefore, I would be against outputting per-frame scores for cvvdp.

But we will look into creating a text file with the results. Right now, the --quiet option could be an alternative.
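
For illustration only (this is not cvvdp's actual pooling code, which operates in an internal space before the JOD regression), the toy comparison below shows why the mean of per-frame values and a normalized L2 norm over the same values generally disagree:

```python
import numpy as np

# Hypothetical per-frame distortion values, not output from a real cvvdp run.
d = np.array([0.2, 0.2, 0.2, 1.5])

mean_pooled = d.mean()                # simple average of per-frame values
l2_pooled = np.sqrt(np.mean(d ** 2))  # normalized L2 norm, as described above

print(mean_pooled, l2_pooled)  # 0.525 vs. ~0.77: the L2 norm weights the bad frame more heavily
```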

@vibhoothi (Author)

Thanks,

Are you suggesting that the mean of all JODs for a given video is not accurate? Because that is essentially what we get when we use --quiet; it is more or less a CSV of the --quiet option's output in a slightly more readable form.

@mantiuk (Collaborator) commented Jul 9, 2024

The JOD score for a video should be computed by running the metric on video files, not on individual frames.

I suggest combining the frames into video files (with ffmpeg), using lossless compression or no compression; see the sketch below.

It is also important to have correct frames-per-second metadata in the video file.
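
A hedged sketch of that workflow, driving ffmpeg from Python; the frame-name pattern, the 30 fps rate and the choice of the lossless FFV1 codec are assumptions for illustration:

```python
import subprocess

# Assemble an image sequence into a losslessly coded video with explicit fps
# metadata. Adjust the pattern, frame rate and codec to the actual pipeline.
subprocess.run([
    "ffmpeg",
    "-framerate", "30",        # frame rate of the input sequence (assumed)
    "-i", "frame_%04d.png",    # input image sequence (assumed naming pattern)
    "-c:v", "ffv1",            # lossless codec so no extra distortion is added
    "reference_lossless.mkv",
], check=True)
```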

@vibhoothi (Author) commented Jul 9, 2024

I was trying to load linear-light EXR files, which are an RGB image sequence with the correct fps. Wouldn't that be the same as loading a video, since we have a frame array in the operation pipeline either way?

So the source and distortion are EXR image sequences obtained with the standard HDRTools library and the appropriate conversion files (from PQ BT.2020 sources and compression with codecs).

I strongly believe that handling BT.2020 with libavfilter is not a good way to retain the original colours, since it is inherently not well designed at the moment.

@vibhoothi (Author)

Just to add that initially I was trying to load lossless Y4M/YUV, but the tool did not like it (see #6).

@mantiuk (Collaborator) commented Jul 9, 2024

The cvvdp command line currently does not have direct support for what you are trying to do (processing EXR frames as a video). However, the correct way of handling frames stored in EXR files is to create a subclass of video_source (pycvvdp/video_source.py) and implement its abstract methods. Then you can call the predict_video_source method (def predict_video_source(self, vid_source)) from your own Python code to run the metric on the sequences you have (see the examples folder).
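
A hedged sketch of that approach for EXR sequences; the abstract method names and signatures, the EXR loader, the assumed resolution and the display model are illustrative guesses, so the real interface should be taken from pycvvdp/video_source.py and the examples folder:

```python
import torch
import imageio.v3 as iio
import pycvvdp
from pycvvdp.video_source import video_source

def load_exr_frame(path):
    # Hypothetical loader: reads a linear-light RGB EXR frame into a float tensor.
    # Assumes an EXR-capable imageio plugin is installed; the tensor layout and
    # colour space expected by the metric must be checked against the pycvvdp docs.
    return torch.from_numpy(iio.imread(path)).float()

class exr_sequence_source(video_source):
    """Serves test/reference EXR frame sequences as a video (illustrative only)."""

    def __init__(self, test_paths, ref_paths, fps, height, width):
        self.test_paths = test_paths
        self.ref_paths = ref_paths
        self.fps = fps
        self.height = height
        self.width = width

    # The method names below only approximate the abstract interface;
    # check pycvvdp/video_source.py for the exact signatures.
    def get_frames_per_second(self):
        return self.fps

    def get_video_size(self):
        return self.height, self.width, len(self.test_paths)

    def get_test_frame(self, frame, device):
        return load_exr_frame(self.test_paths[frame]).to(device)

    def get_reference_frame(self, frame, device):
        return load_exr_frame(self.ref_paths[frame]).to(device)

# Example call (constructor arguments are assumptions):
# metric = pycvvdp.cvvdp(display_name="standard_hdr")
# jod, stats = metric.predict_video_source(
#     exr_sequence_source(test_paths, ref_paths, fps=30, height=2160, width=3840))
```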

@cosmin (Contributor) commented Aug 20, 2024

> I know that VMAF can output the quality per-frame and the final score is the mean of those.
>
> This is not the case for cvvdp - it computes the normalized L2 norm of per-frame scores in a different space (before regressing JOD). Therefore, I would be against outputting per-frame scores for cvvdp.

I think it can still be useful to get per-frame scores to understand the distribution of quality across the sequence, even if the overall quality is not the average of the frame quality.

For example, I'm working on some encoder tuning right now where I get virtually the same cvvdp score on two sequences, but in one case the quality is worse at the beginning and improves more quickly, while in the other the quality is more consistently average throughout. At the moment I need to eyeball the heatmaps and distograms to understand this, which is less than ideal.

@mantiuk (Collaborator) commented Aug 21, 2024

You could dump the values that are used to plot a distogram:

def export_distogram(self, stats, fname, jod_max=None, base_size=6):
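
A hedged usage sketch; apart from predict_video_source and export_distogram, which are referenced in this thread, the constructor call, the returned values and the output file name are assumptions:

```python
import pycvvdp

metric = pycvvdp.cvvdp()  # default display model (assumed constructor defaults)
# vid_source: e.g. the EXR sequence source sketched earlier in this thread
jod, stats = metric.predict_video_source(vid_source)     # assumed (score, stats) return
metric.export_distogram(stats, "distogram_values.json")  # file name/format is an assumption
```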

There is also an experimental feature for dumping channels as video in the dump_channels branch.

What you need is rather specialized and specific to your particular application, so I suggest that you customise cvvdp.

If we output per-frame scores, most people will misinterpret them and start computing the mean.

@mantiuk (Collaborator) commented Sep 30, 2024

The new version, v0.4.2 (3a746fd), adds an option to save the results and also process a video stored as a sequence of images. I hope it addresses most of the issues that this pull request targets.

@cosmin (Contributor) commented Oct 22, 2024

@mantiuk what if, instead of per-frame scores, there were support for computing a short-term quality score and reporting its value throughout the sequence? I'm not sure what the minimum number of frames would be for a meaningful measure of short-term quality; maybe 0.5 s? 1 s?

It would provide an alternative to looking frame by frame (which ignores the video aspect), while still giving a way to identify the worst section of a sequence and take that into consideration when performing encoder evaluations and tuning.
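
As a hedged prototype of that idea (not an existing feature of the tool), one could run the metric over consecutive fixed-length chunks; make_window_source below is a hypothetical factory returning a video_source restricted to a frame range:

```python
def short_term_scores(metric, make_window_source, n_frames, fps, window_s=1.0):
    """Score roughly window_s-second chunks of a sequence with the metric.

    make_window_source(start, end) is a hypothetical factory returning a
    video_source limited to frames [start, end).
    """
    win = max(1, int(round(window_s * fps)))
    scores = []
    for start in range(0, n_frames, win):
        end = min(start + win, n_frames)
        jod, _ = metric.predict_video_source(make_window_source(start, end))
        scores.append((start, end, jod))
    return scores
```

Overlapping or sliding windows would reduce boundary effects, at the cost of more metric evaluations.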

@mantiuk (Collaborator) commented Dec 13, 2024

Per-frame (and also per-channel and per-band) saving is now in the result_detailed branch and explained here. If it all looks good, I will merge this branch into main.
