🛡️ A simple AI Governance Risk Scoring Tool that evaluates AI models across five criteria: Data Quality, Bias, Explainability, Robustness, and Privacy Compliance. Built for organizations adopting AI governance frameworks such as ISO/IEC 42001 and the NIST AI RMF.
- Model risk scoring system on a 0-25 scale (sketched below the criteria table)
- Streamlit interface for easy use
- Auto-generated PDF risk reports
- Clean, modular Python code
Built with:

- Python 3.x
- Streamlit
- pandas
- fpdf
Each model is scored against five criteria:

| Criterion | Description |
|---|---|
| Data Quality | Completeness and accuracy of the underlying data |
| Bias Presence | Degree of bias detected in model outputs |
| Explainability | How interpretable the model's decisions are |
| Robustness | Resistance to adversarial inputs |
| Privacy Compliance | Adherence to privacy policies and regulations |
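The sketch below shows one way the 0-25 score and the auto-generated PDF report could fit together. It assumes each criterion is scored 0-5 so the five scores sum to the 0-25 total, and it uses the classic fpdf (PyFPDF) API; the function names, report layout, and 0-5 split are illustrative, not the project's actual code:

```python
from fpdf import FPDF

# The five governance criteria from the table above. Assumption: each is
# scored 0-5 by the assessor, so the total falls in the README's 0-25 range.
CRITERIA = [
    "Data Quality",
    "Bias Presence",
    "Explainability",
    "Robustness",
    "Privacy Compliance",
]

def total_risk_score(scores):
    """Sum the five 0-5 criterion scores into a 0-25 model risk score."""
    for name in CRITERIA:
        if not 0 <= scores[name] <= 5:
            raise ValueError(f"{name} must be scored 0-5, got {scores[name]}")
    return sum(scores[name] for name in CRITERIA)

def write_report(model_name, scores, path="risk_report.pdf"):
    """Render a one-page PDF risk report (layout is hypothetical)."""
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Arial", "B", 16)
    pdf.cell(0, 10, f"AI Governance Risk Report: {model_name}")
    pdf.ln(12)
    pdf.set_font("Arial", size=12)
    for name in CRITERIA:
        pdf.cell(0, 8, f"{name}: {scores[name]}/5")
        pdf.ln(8)
    pdf.cell(0, 8, f"Total risk score: {total_risk_score(scores)}/25")
    pdf.output(path)
```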
To get started, clone the repository and install the dependencies:

```bash
git clone https://github.com/yourusername/AI-Governance-Risk-Assessment-Toolkit.git
cd AI-Governance-Risk-Assessment-Toolkit
pip install -r requirements.txt
```
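Then launch the app with Streamlit. The entry point name `app.py` below is an assumption; substitute the repository's actual script:

```bash
streamlit run app.py
```

A minimal sketch of what such a Streamlit front end could look like, with one 0-5 slider per criterion (the widget layout is illustrative, not the project's actual code):

```python
import streamlit as st

# Same five criteria as the table above; slider ranges assume 0-5 per criterion.
CRITERIA = ["Data Quality", "Bias Presence", "Explainability",
            "Robustness", "Privacy Compliance"]

st.title("AI Governance Risk Scoring")

# One slider per criterion; the model's risk score is the 0-25 sum.
scores = {name: st.slider(name, 0, 5, 0) for name in CRITERIA}
st.metric("Total risk score (0-25)", sum(scores.values()))
```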