From 2eac38f180ece0bbfc670b38e1ebc5b78877ea13 Mon Sep 17 00:00:00 2001
From: nilfm
Date: Tue, 28 Jan 2025 15:38:28 -0500
Subject: [PATCH] #1402: Fix dead link in models.md

---
 resource/doc/models.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/resource/doc/models.md b/resource/doc/models.md
index 1278fb6c9..4fc4630cd 100644
--- a/resource/doc/models.md
+++ b/resource/doc/models.md
@@ -4,7 +4,7 @@ This repository offers a number of pre-trained VMAF models to be used in differe
 
 ## Predict Quality on a 1080p HDTV screen at 3H
 
-The default VMAF model (`model/vmaf_v0.6.1.json`) is trained to predict the quality of videos displayed on a 1080p HDTV in a living-room-like environment. All the subjective data were collected in such a way that the distorted videos (with native resolutions of 1080p, 720p, 480p etc.) get rescaled to 1080 resolution and shown on the 1080p display with a viewing distance of three times the screen height (3H). Note that 3H is the critical distance for a viewer to appreciate 1080p resolution sharpness (see [recommendation](https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2022-0-201208-I!!PDF-E.pdf)).
+The default VMAF model (`model/vmaf_v0.6.1.json`) is trained to predict the quality of videos displayed on a 1080p HDTV in a living-room-like environment. All the subjective data were collected in such a way that the distorted videos (with native resolutions of 1080p, 720p, 480p etc.) get rescaled to 1080 resolution and shown on the 1080p display with a viewing distance of three times the screen height (3H). Note that 3H is the critical distance for a viewer to appreciate 1080p resolution sharpness (see [recommendation](https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2022-0-201208-W!!PDF-E.pdf)).
 
 This model is trained using subjective data collected in a lab experiment, based on the [absolute categorical rating (ACR)](https://en.wikipedia.org/wiki/Absolute_Category_Rating) methodology, with the exception that after viewing a video sequence, a subject votes on a continuous scale (from "bad" to "excellent", with evenly spaced markers of "poor", "fair" and "good" in between), instead of the more conventional five-level discrete scale. The test content are video clips selected from the Netflix catalog, each 10 seconds long. For each clip, a combination of 6 resolutions and 3 encoding parameters are used to generate the processed video sequences, resulting 18 impairment conditions for testing.
 
@@ -31,7 +31,7 @@ From the figure it can be interpreted that due to the factors of screen size and
 
 ## Predict Quality on a 4KTV Screen at 1.5H
 
-As of v1.3.7 (June 2018), we have added a new 4K VMAF model at `model/vmaf_4k_v0.6.1.json`, which predicts the subjective quality of video displayed on a 4KTV and viewed from the distance of 1.5 times the height of the display device (1.5H). Again, this model is trained with subjective data collected in a lab experiment, using the ACR methodology (notice that it uses the original 5-level discrete scale instead of the continuous scale). The viewing distance of 1.5H is the critical distance for a human subject to appreciate the quality of 4K content (see [recommendation](https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2022-0-201208-I!!PDF-E.pdf)). More details can be found in [this](presentations/VQEG_SAM_2018_025_VMAF_4K.pdf) slide deck.
+As of v1.3.7 (June 2018), we have added a new 4K VMAF model at `model/vmaf_4k_v0.6.1.json`, which predicts the subjective quality of video displayed on a 4KTV and viewed from the distance of 1.5 times the height of the display device (1.5H). Again, this model is trained with subjective data collected in a lab experiment, using the ACR methodology (notice that it uses the original 5-level discrete scale instead of the continuous scale). The viewing distance of 1.5H is the critical distance for a human subject to appreciate the quality of 4K content (see [recommendation](https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2022-0-201208-W!!PDF-E.pdf)). More details can be found in [this](presentations/VQEG_SAM_2018_025_VMAF_4K.pdf) slide deck.
 
 To invoke this model, specify the model path using the `--model` option. For example:
 
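For context, the `--model` usage referenced in the patched section looks roughly like the sketch below, using the `vmaf` command-line tool from libvmaf. The input filenames and the output path are hypothetical; only the model path comes from the doc:

    # Score a distorted clip against its reference using the 4K model
    # (assumes .y4m inputs; -r/-d/-o are the tool's reference/distorted/output flags)
    vmaf --reference ref.y4m --distorted dis.y4m \
         --model path=model/vmaf_4k_v0.6.1.json \
         --output vmaf_4k_scores.xml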