omerahmed12345elhussien/Explainable-Image-Classifier-case-study-on-Dogs-Breeds


Explainable Image Classifier: case study on Dogs Breeds

This repository contains the source code necessary to reproduce all the results of this work.

Project Abstract

Explainability benefits an AI agent at several stages of its life. When the agent is weak, explanations can direct scientists to its weaknesses. When the agent is as strong as a human specialist, explanations increase others' confidence and trust in it. And when the agent is far stronger than humans, we can learn from it. In this work, we address the explainability of image classifiers, using a dataset of three different dog breeds. We used two approaches: one treats our models as a complete black box, while the other uses the gradient signal to understand what is happening inside the model.
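To make the two approaches concrete, here is a minimal sketch on a toy linear classifier. This is an illustrative assumption, not the repository's actual model or code: the real classifier is an image network, and the 8x8 "images", weight matrix `W`, and helper names below are hypothetical. It shows the black-box idea (occlude pixels and watch the class score drop) and the gradient idea (for a linear score, the input gradient is just the weight row).

```python
import numpy as np

# Toy stand-in for a trained classifier: three classes ("breeds"),
# flattened 8x8 inputs, random weights. All names/shapes are illustrative.
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_pixels))

def score(x, c):
    """Class score of input x for class c under the toy linear model."""
    return float(W[c] @ x)

def occlusion_map(x, c):
    """Black-box explanation: zero out one pixel at a time and record
    how much the score for class c drops. Uses predictions only."""
    base = score(x, c)
    drops = np.array([
        base - score(np.where(np.arange(n_pixels) == i, 0.0, x), c)
        for i in range(n_pixels)
    ])
    return drops.reshape(8, 8)

def gradient_map(c):
    """Gradient explanation: d score_c / d x. For a linear model this
    is exactly W[c]; its magnitude serves as a saliency map."""
    return np.abs(W[c]).reshape(8, 8)

x = rng.normal(size=n_pixels)     # a fake "image"
c = int(np.argmax(W @ x))         # predicted class
occ, sal = occlusion_map(x, c), gradient_map(c)
```

For this linear toy, the occlusion drop at pixel i equals `W[c, i] * x[i]`, so the two maps agree up to the sign and magnitude of the input; in a real deep network they generally differ, which is why the work compares both.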

In this work, we want to identify which parts of the image our model uses to make correct predictions for the different classes in the dataset. The images below illustrate our findings with one classifier: for each breed, the classifier focuses on specific parts of the dog to make a correct prediction.

Repository contents
