TrustyAI is an open source AI Explainability (XAI) toolkit. It provides Java and Python libraries and an Open Data Hub (ODH) component that deliver XAI explanations of predictive models, for both enterprise and data science use cases.
The project roadmap offers a view of the new tools and integrations the project developers plan to add.
The TrustyAI core library includes the following tools:
- Local explainers
- Global explainers
- Time-series explainers
- Fairness metrics
- Drift metrics
- Language performance metrics
- Generative text model evaluations and benchmarks
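To make the list above concrete, here is a minimal sketch of a local (LIME) explanation using the TrustyAI Python library. It assumes the `trustyai` package from PyPI and its documented `Model` / `LimeExplainer` interfaces; the library wraps the Java core through a JVM bridge, so a Java runtime must be available, and exact names and defaults may vary between versions.

```python
# A hedged example of a local explanation with the TrustyAI Python library.
# Older releases may require an explicit `trustyai.init()` before these imports.
import numpy as np

from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# A toy predictive model: two outputs derived from a four-feature input batch.
def predict(x):
    return np.stack([x[:, 0] + x[:, 1], x[:, 2] * x[:, 3]], axis=-1)

model = Model(predict, output_names=["sum", "product"])

# A single input point whose prediction we want to explain.
input_point = np.random.rand(1, 4)

# LIME attributes each output to the input features around this point.
explainer = LimeExplainer()
explanation = explainer.explain(
    inputs=input_point,
    outputs=model(input_point),
    model=model,
)

# Per-feature saliency scores for each output.
print(explanation.as_dataframe())
```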
TrustyAI also provides the following integrations:
- KServe and ModelMesh support on Kubernetes, OpenShift, and Open Data Hub
- KServe side-car explainer support
TrustyAI follows the ODH governance model and code of conduct. Additional information specific to this project can be found in the community repository.
- Discussion: https://github.com/orgs/trustyai-explainability/discussions
- Contribution Guidelines: https://github.com/trustyai-explainability/trustyai-explainability/blob/main/CONTRIBUTING.md
- Roadmap: https://github.com/orgs/trustyai-explainability/projects/10
The project is made up of the following repositories:
- explainability-core: The core TrustyAI algorithms, including explainers and metrics
- explainability-service: TrustyAI ODH component and ModelMesh integrations
- trustyai-explainability-python: A Python library allowing usage of TrustyAI from the Python ecosystem (including Jupyter notebooks)
- trustyai-service-operator: The TrustyAI Kubernetes Operator
- trustyai-kserve-explainer: TrustyAI as a KServe built-in explainer
- trustyai-explainability-python-examples: Examples on how to get started with the Python TrustyAI library
- trustyai-odh-demos: Demos of the TrustyAI Service within Open Data Hub
- trustyai-service