-
Hello, in "Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction" there are extra evaluation metrics like rank, sharpness, and artifacts. (CNR is well known.) Which metrics were used for sharpness (there are several sharpness metrics based on gradients) and for artifacts? Best,
-
Hello @mclvnot, I assume you are referring to the results in Table IV. The ranks are not based on actual metrics, but rather on radiologist Likert-score assessments of the quality of the images. For the radiologist evaluation, we used the following scale:
1 = extremely sharp boundaries between areas
2 = clear boundaries between areas, with only minor blurring that is not bothersome
3 = blurring at boundaries that decreases overall image quality
4 = unacceptable blurring for standard clinical imaging
-
Thank you Matthew, but I actually wanted to ask about the artifact and sharpness metrics. According to your answer, radiologists evaluate the examinations for artifacts and sharpness by means of a Likert score. But in Johnson's latest RSNA article, they also evaluate the reconstruction results in terms of sharpness and artifacts. That's why I thought there could be a standard metric for sharpness and artifacts. (For sharpness, average gradient magnitude, etc.; for artifacts, I don't know.) Thank you,
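For illustration, a gradient-based sharpness measure like the average gradient magnitude mentioned above could be computed as in the minimal sketch below. This is only a generic example of the technique, not the challenge's evaluation method (which, per the reply above, used radiologist Likert scores); the function name is hypothetical:

```python
import numpy as np

def average_gradient_magnitude(image: np.ndarray) -> float:
    """Mean gradient magnitude of a 2D image; higher values suggest
    sharper edges. Illustrative sketch only, not the metric used in
    the fastMRI challenge evaluation."""
    # Finite-difference gradients along rows (gy) and columns (gx).
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(np.sqrt(gx**2 + gy**2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    # A smoothed copy should score lower than the original.
    blurred = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0
    print(average_gradient_magnitude(img), average_gradient_magnitude(blurred))
```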