AI Governance Risk Assessment Toolkit

🛡️ A simple AI Governance Risk Scoring Tool that evaluates AI models on Data Quality, Bias, Explainability, Robustness, and Privacy Compliance. Built for organizations adopting AI governance frameworks such as ISO/IEC 42001 and the NIST AI RMF.

🚀 Features

  • Model risk scoring system (0-25)
  • Streamlit interface for easy use (a minimal sketch of both appears below)
  • Auto-generated risk reports in PDF
  • Clean, modular Python code

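A minimal sketch of how the Streamlit interface and the 0-25 scoring could look, assuming each of the five risk factors is rated 0-5 with a slider and that higher ratings mean higher risk; the labels, thresholds, and risk bands below are illustrative assumptions, not the toolkit's actual code:

import streamlit as st

# Illustrative sketch only: rate each factor 0-5 and sum to a 0-25 total.
FACTORS = ["Data Quality", "Bias Presence", "Explainability",
           "Robustness", "Privacy Compliance"]

st.title("AI Governance Risk Assessment")
ratings = {name: st.slider(name, 0, 5, 3) for name in FACTORS}

total = sum(ratings.values())  # 0-25, higher means more risk (assumption)
level = "Low" if total <= 8 else "Medium" if total <= 16 else "High"  # example bands

st.metric("Total risk score", f"{total} / 25")
st.write(f"Overall risk level: {level}")
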
🧰 Tech Stack

  • Python 3.x
  • Streamlit
  • pandas
  • fpdf

📊 Example Scoring Criteria

Criteria             Description
Data Quality         Completeness, accuracy
Bias Presence        Detected model bias
Explainability       Interpretability of the model
Robustness           Adversarial resistance
Privacy Compliance   Privacy policy adherence

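These per-criterion scores can then be written into the generated PDF report. A minimal sketch using fpdf is shown below; the report layout, file name, and example scores are assumptions for illustration, not the toolkit's actual report format:

from fpdf import FPDF

# Illustrative sketch only: write per-criterion scores and the 0-25 total to a PDF.
scores = {"Data Quality": 4, "Bias Presence": 2, "Explainability": 3,
          "Robustness": 3, "Privacy Compliance": 5}

pdf = FPDF()
pdf.add_page()
pdf.set_font("Helvetica", "B", 16)
pdf.cell(0, 10, "AI Governance Risk Report")
pdf.ln(12)
pdf.set_font("Helvetica", size=12)
for criterion, score in scores.items():
    pdf.cell(0, 8, f"{criterion}: {score} / 5")
    pdf.ln(8)
pdf.cell(0, 8, f"Total risk score: {sum(scores.values())} / 25")
pdf.output("risk_report.pdf")
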
📦 Installation

git clone https://github.com/Abhisan2221/AI-Governance-Risk-Assessment-Toolkit.git
cd AI-Governance-Risk-Assessment-Toolkit
pip install -r requirements.txt

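Then launch the app with Streamlit (the entry-point file name app.py below is an assumption; substitute the script name used in this repository):

streamlit run app.py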