Compute and print mean single-pass uncertainty metrics during model test #4

Open
samrat-rm wants to merge 3 commits into Orion-AI-Lab:main from samrat-rm:feat/single_pass_mean_uncertainity
Conversation

@samrat-rm

This change adds single-pass entropy-based uncertainty estimation to the model testing pipeline.
The model now computes uncertainty from softmax probabilities and reports the mean uncertainty alongside Pixel Accuracy and mIoU in the terminal output.

This implementation currently serves as a prototype for uncertainty estimation during model evaluation; it could later be extended to the training pipeline for deeper analysis and experimentation.

Changes

  • Compute entropy-based uncertainty from model logits.
  • Aggregate mean uncertainty across the test dataset.
  • Print mean uncertainty together with existing evaluation metrics.
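The three steps above can be sketched roughly as follows. This is a minimal NumPy illustration of single-pass, entropy-based uncertainty, not the actual code in this PR; the function names (`entropy_uncertainty`, `mean_uncertainty`) and the `(N, C, H, W)` logit layout are assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits, axis=1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_uncertainty(logits, eps=1e-12):
    # Per-pixel predictive entropy (in nats) from raw logits.
    # logits: (N, C, H, W) -> returns (N, H, W).
    probs = softmax(logits, axis=1)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def mean_uncertainty(logit_batches):
    # Aggregate mean entropy across all pixels of the test set
    # in a single pass, without storing per-image maps.
    total, count = 0.0, 0
    for logits in logit_batches:
        u = entropy_uncertainty(logits)
        total += u.sum()
        count += u.size
    return total / count
```

For a `C`-class output, the entropy ranges from 0 (fully confident) to `log(C)` (uniform prediction), which makes the printed mean easy to interpret next to Pixel Accuracy and mIoU.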
AI usage disclosure:
AI tools such as ChatGPT were used to assist with understanding concepts and drafting parts of the implementation. The final code was reviewed, tested, and validated.
