Updates the graph with more structure and some improvements in linking between pages
sindoc committed Sep 15, 2024
1 parent b231aa6 commit dde3596
Showing 18 changed files with 219 additions and 29 deletions.
Original file line number Diff line number Diff line change
@@ -1 +1 @@
{:highlights [], :extra {:page 1}}
{:highlights [], :extra {:page 4}}
5 changes: 5 additions & 0 deletions journals/2022_11_30.md
@@ -0,0 +1,5 @@
alias:: the day OpenAI released ChatGPT

- On this day
- [[OpenAI]] released [[ChatGPT]], marking a paradigm shift in general availability of [[Artificial Intelligence]] to humans by way of [[language use]]
-
2 changes: 1 addition & 1 deletion logseq/config.edn
@@ -220,7 +220,7 @@
:ref/linked-references-collapsed-threshold 50

;; Favorites to list on the left sidebar
:favorites ["homepage" "data & ai governance" "reference data" "reference datasets for gender bias detection" "list of countries in the world" "knowyourai" "human-ai relationships glossary" "gender bias test suite for generative ai" "ai governance/tools/gender bias detector"]
:favorites ["homepage" "data & ai governance" "ai governance/tools/bias detector" "reference data" "reference datasets for gender bias detection" "list of countries in the world" "knowyourai" "human-ai relationships glossary" "gender bias test suite for generative ai" "ai governance/tools/gender bias detector"]

;; any number between 0 and 1 (the greater it is the faster the changes of the next-interval of card reviews) (default 0.5)
;; :srs/learning-fraction 0.5
82 changes: 76 additions & 6 deletions pages/AI Governance___Policies___AI Monitoring Policy.md
@@ -1,6 +1,76 @@
- Required Tests
- [[Sentiment Disparity Test]]
- Actor Swap Test
- Occupational Test
- Social Role Test
- [[...]]
- Part of [[Data & AI Governance]]
- Implemented by [[AI Monitoring/Continuous AI Ethics check]]
- ### Overview
- The **AI Monitoring Policy** outlines the *continuous monitoring process* for [[AI systems]] to ensure they adhere to **ethical**, **performance**, and **security standards**.
- This policy is designed to detect and mitigate [[biases]], ensure fairness, and monitor the performance and reliability of deployed AI systems.
- #### Executive Summary
background-color:: green
- The **AI Monitoring Policy** ensures that AI systems remain ethical, secure, and performant over time. By following this policy, the organisation commits to responsible AI deployment and continuous improvement of its AI models.
- ### Purpose
background-color:: yellow
- The purpose of this policy is to:
- Continuously monitor AI systems for potential biases, performance degradation, and security risks.
- Ensure the responsible and ethical deployment of AI models.
- Provide guidelines for intervention when AI systems deviate from acceptable standards.
- ### Scope
background-color:: yellow
- This policy applies to all AI models deployed across [[the organisation]].
- It covers:
- Models deployed for decision-making, automation, and customer interaction.
- Both internal and external-facing AI applications.
- Monitoring AI systems in real-time, including bias detection, security vulnerabilities, and performance metrics.
- ### Key Components
background-color:: yellow
- #### 1. **Bias Detection and Fairness Monitoring**
- **Objective**: Ensure that AI models do not introduce or amplify biases (e.g., based on age, gender, race, socioeconomic status).
- **Frequency**: Bias detection tests will be run on a weekly basis.
- **Tools**: Use of automated bias detection tools such as [[AI Governance/Test Suite/Age Bias Detection Suite]] to monitor biases.
- **Action Plan**: If bias is detected, the AI model will be retrained or adjusted based on the severity of the issue.
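- As an illustration of how one of the required checks (e.g., an actor-swap test) could be automated, the toy sketch below swaps a demographic term in a prompt and compares the sentiment of the model's two responses; `query_model` and the sentiment scorer are hypothetical stand-ins, not tools mandated by this policy:

```python
# Hypothetical actor-swap bias check: swap a demographic term in a prompt
# template and compare sentiment of the model's two responses.
def actor_swap_check(query_model, sentiment, template, actor_a, actor_b, threshold=0.2):
    """Return the sentiment gap between swapped prompts and whether it exceeds the threshold."""
    response_a = query_model(template.format(actor=actor_a))
    response_b = query_model(template.format(actor=actor_b))
    gap = abs(sentiment(response_a) - sentiment(response_b))
    return {'bias_detected': gap > threshold, 'gap': gap}

# Toy stand-ins so the sketch runs end to end: an echo "model" and a
# keyword-based sentiment scorer (both purely illustrative).
result = actor_swap_check(
    query_model=lambda prompt: prompt,
    sentiment=lambda text: 0.9 if 'nurse' in text else 0.5,
    template="The {actor} was praised at work.",
    actor_a="young nurse",
    actor_b="elderly clerk",
)
```

In a real deployment, `query_model` would call the production model and the scorer would be a calibrated sentiment classifier; the weekly job would run a battery of such templates and feed the gaps into the monitoring system.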
- #### 2. **Performance Monitoring**
- **Objective**: Monitor the accuracy, reliability, and responsiveness of AI models in production.
- **Metrics**: Accuracy, latency, failure rate, user satisfaction.
- **Frequency**: Real-time performance monitoring will be enabled.
- **Tools**: Monitoring tools like cloud-based dashboards (e.g., AWS CloudWatch, Azure Monitor).
- **Action Plan**: In case of performance degradation, immediate investigation and retraining will be initiated.
- #### 3. **Security Monitoring**
- **Objective**: Ensure that AI models are protected from malicious attacks, data leaks, or adversarial threats.
- **Frequency**: Continuous monitoring with security audits conducted quarterly.
- **Tools**: Use of security tools for AI model integrity (e.g., model monitoring services, threat detection tools).
- **Action Plan**: If a security breach is detected, an emergency response team will be deployed to mitigate and resolve the issue.
- #### 4. **Explainability and Transparency**
- **Objective**: Ensure that AI models remain explainable and transparent, especially in decision-making systems.
- **Tools**: Use of explainability frameworks (e.g., SHAP, LIME) to provide insights into model decisions.
- **Frequency**: Monthly reports on the explainability of AI models will be generated.
- **Action Plan**: If a model’s decisions are found to be non-explainable, alternative models or approaches will be considered.
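- To make the idea behind these frameworks concrete, the toy sketch below computes a crude model-agnostic attribution by neutralising one feature at a time; this is a simplified illustration of the perturbation idea underlying LIME/SHAP, not their actual APIs:

```python
# Toy feature attribution: measure how the prediction changes when each
# feature is replaced by a baseline value. This is the core perturbation
# idea behind model-agnostic explainers, greatly simplified.
def attribute(predict, instance, baseline):
    attributions = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]
        # Contribution = prediction drop when the feature is neutralised
        attributions[feature] = predict(instance) - predict(perturbed)
    return attributions

# Hypothetical credit-scoring model: score rises with income, falls with debt
predict = lambda x: 0.5 + 0.3 * x['income'] - 0.2 * x['debt']
attrib = attribute(predict, {'income': 1.0, 'debt': 1.0}, {'income': 0.0, 'debt': 0.0})
```

The monthly reports would surface attributions like these per decision, flagging models whose outputs cannot be traced back to sensible feature contributions.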
- ### Roles and Responsibilities
background-color:: yellow
- **AI Governance Team**: Responsible for implementing this policy, conducting monitoring, and enforcing ethical guidelines.
- **Data Science Team**: Ensures that models are built, trained, and deployed in accordance with the monitoring policy.
- **IT Security Team**: Handles security monitoring and breach response.
- **Model Owners**: Responsible for ensuring that the models under their purview are compliant with this policy.
- **Compliance Officer**: Oversees adherence to regulatory and ethical standards.
- ### Procedures
background-color:: yellow
- #### 1. **Monitoring Setup**
- Ensure that all deployed AI models are connected to the monitoring system.
- Configure automated alerts for bias detection, performance degradation, and security risks.
- #### 2. **Review and Reporting**
- Weekly and monthly reviews will be conducted by the AI Governance team.
- Regular reports on model performance, bias detection results, and security events will be submitted to senior leadership.
- #### 3. **Intervention Process**
- If an AI model fails to meet ethical or performance standards, the following steps will be taken:
- Investigate the root cause of the issue.
- Retrain, recalibrate, or modify the AI model.
- Suspend the AI model if the issue poses a significant risk.
- Communicate the findings and resolution to stakeholders.
- ### Compliance
background-color:: yellow
- This policy adheres to industry regulations, including GDPR, CCPA, and other data protection and ethical AI standards. Regular audits will be conducted to ensure ongoing compliance.
- WIP
- Use [[test suites]]
- Required Tests
- [[Sentiment Disparity Test]]
- Actor Swap Test
- Occupational Test
- Social Role Test
- [[...]]
2 changes: 1 addition & 1 deletion pages/AI Governance___Tools___Bias Detector.md
@@ -1,6 +1,6 @@
alias:: Bias Detection Test Suite for Generative AI, Bias Detection Test Suite

- Contains fine-grained tests for detecting
- Contains fine-grained test suites for detecting the following [[undesired outcomes]]
- [[Gender Bias]] -> [[Gender Bias Detection]]
- [[AI Governance/Tools/Gender Bias Detector]]
- [[Age Bias]] -> [[Age Bias Detection]]
3 changes: 0 additions & 3 deletions pages/AI Monitoring for Generative AI.md

This file was deleted.

3 changes: 3 additions & 0 deletions pages/AI Monitoring.md
@@ -1 +1,4 @@
alias:: Monitoring of AI Systems

- Implements [[AI Governance/Policies/AI Monitoring Policy]]
-
109 changes: 109 additions & 0 deletions pages/AI Monitoring___Continuous AI Ethics check.md
@@ -0,0 +1,109 @@
alias:: Continuous application of AI Ethics, Continuous AI monitoring requirements

- Implemented by
- [[AI Governance/Tools/Bias Detector]]
- ### Overview
- The **AI Monitoring/Continuous AI Ethics Check** is designed to ensure that AI models in production adhere to ethical standards, including fairness, transparency, and security. This involves real-time monitoring, scheduled audits, and automated alerts for potential issues such as bias or model drift.
- ### Key Components of Continuous AI Ethics Monitoring
- 1. **Bias Detection and Mitigation**
- - **Objective**: Continuously monitor for bias in AI outputs (e.g., gender, race, age, socioeconomic status).
- - **Tools**: Use automated bias detection tools such as [[AI Governance/Tools/Bias Detector]] to monitor for bias.
- - **Action Plan**: Retrain or adjust AI models when bias is detected.
- 2. **Fairness and Equity Checks**
- - **Objective**: Ensure AI treats all users fairly, without favoring or disadvantaging any group.
- - **Tools**: Regular audits and fairness assessments.
- 3. **Transparency and Explainability**
- - **Objective**: Ensure that AI decisions are explainable and transparent, particularly in decision-making systems.
- - **Tools**: Use explainability frameworks like **LIME** or **SHAP**.
- 4. **Security and Data Privacy**
- - **Objective**: Ensure AI systems respect user privacy and are secure from attacks or data breaches.
- - **Tools**: Real-time monitoring for security incidents.
- 5. **Accountability and Compliance**
- - **Objective**: Ensure AI complies with internal governance rules and external regulations such as **GDPR** or **CCPA**.
- - **Action Plan**: Maintain an audit trail for accountability and compliance checks.
- ### Implementation Steps
- #### 1. [[AI Governance/Frameworks/Ethics Monitoring Framework]]
- - Define the key ethical principles (e.g., fairness, transparency, privacy) that AI must adhere to.
- - Develop specific metrics (e.g., bias metrics, transparency scores).
- #### 2. [[AI Governance/Procedures/Real-Time Monitoring Setup]]
- - **Data Collection**: Collect real-time data from AI systems, including inputs, outputs, and decision logs.
- - **Bias Detection**: Continuously run tests (like age bias tests) to monitor for bias.
- - **Performance Monitoring**: Track the AI’s performance to detect any ethical issues or degradation.
- #### 3. [[AI Governance/Procedures/Alerting and Reporting System]]
- - **Automated Alerts**: Set up alerts to notify teams of detected ethical violations (bias, fairness, or privacy breaches).
- - **Dashboard Reporting**: Use tools like **AWS CloudWatch**, **Azure Monitor**, or **Google Cloud Monitoring** to display real-time ethics and bias data.
- #### 4. [[AI Governance/Tools/Explainability Tools Integration]]
- - Tools like **LIME** and **SHAP** should be integrated to ensure that model decisions are explainable and easily interpreted.
- #### 5. [[AI Governance/Policies/Data Privacy and Security]]
- - Ensure compliance with **GDPR**, **CCPA**, and other data protection laws.
- - Use security tools to monitor the AI system for adversarial attacks or data leaks.
- #### 6. [[AI Governance/Procedures/Feedback Loops and Model Retraining]]
- - Establish feedback loops for flagged ethical issues, triggering model adjustments or retraining when necessary.
- #### 7. [[AI Governance/Procedures/Compliance and Accountability Auditing]]
- - Keep logs of AI decisions, monitoring reports, and ethics check outcomes for audits.
- - Automate logging and auditing to ensure traceability.
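- A minimal sketch of what tamper-evident automated logging could look like (illustrative only; no specific logging stack is mandated by this page): each entry chains the hash of the previous one, so any later edit invalidates the chain.

```python
import json
import hashlib
from datetime import datetime, timezone

# Minimal tamper-evident audit trail: each entry stores the hash of the
# previous entry, so the whole chain can be re-verified during an audit.
def append_entry(log, event):
    prev_hash = log[-1]['hash'] if log else '0' * 64
    entry = {
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'event': event,
        'prev_hash': prev_hash,
    }
    # Hash the canonical JSON form of the entry (sorted keys for stability)
    entry['hash'] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

log = []
append_entry(log, {'check': 'age_bias', 'result': 'pass'})
append_entry(log, {'check': 'drift', 'result': 'fail'})
```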
- ### Tools and Technologies for Continuous AI Ethics Monitoring
- 1. **Bias Detection Tools**
- - **Fairness Indicators** (Google)
- - **Aequitas**: Bias and fairness audit tool.
- - **What-If Tool**: Bias detection and what-if scenarios.
- 2. **Explainability Tools**
- - **LIME**: Local interpretable model-agnostic explanations.
- - **SHAP**: Quantifies feature importance in AI decisions.
- 3. **Model Monitoring and Alerting**
- - **AWS CloudWatch**: For real-time monitoring and alerting.
- - **Azure Monitor**: For tracking AI model performance and bias metrics.
- - **Google Cloud Monitoring**: For performance and bias monitoring on Google Cloud.
- 4. **Governance and Compliance Tools**
- - **Azure Machine Learning Governance**: Responsible AI tools.
- - **IBM Watson OpenScale**: Bias and drift detection.
- ### Sample Code for Real-Time AI Ethics Monitoring in AWS Lambda
```python
import os
import boto3
from datetime import datetime

from bias_detection_module import detect_age_bias

# Initialize AWS CloudWatch client
cloudwatch = boto3.client('cloudwatch')

# Lambda function for real-time AI ethics check
def lambda_handler(event, context):
    # Get AI model output (hypothetical event structure)
    model_output = event['model_output']
    prompt = event['prompt']

    # Run bias detection check
    bias_result = detect_age_bias(prompt, model_output)

    # If bias is detected, log to CloudWatch and send an alert
    if bias_result['bias_detected']:
        log_bias_to_cloudwatch(bias_result)
        send_alert(bias_result)

    return bias_result

# Log bias metrics to AWS CloudWatch
def log_bias_to_cloudwatch(bias_result):
    cloudwatch.put_metric_data(
        Namespace='AI/EthicsMonitoring',
        MetricData=[{
            'MetricName': 'BiasDetection',
            'Dimensions': [{'Name': 'BiasType', 'Value': 'AgeBias'}],
            'Timestamp': datetime.utcnow(),
            'Value': 1 if bias_result['bias_detected'] else 0,
            'Unit': 'Count'
        }]
    )

# Send an alert if bias is detected
def send_alert(bias_result):
    sns = boto3.client('sns')
    sns.publish(
        TopicArn=os.getenv('SNS_ALERT_TOPIC'),
        Message=f"Bias Detected in AI Model: {bias_result['details']}",
        Subject="AI Ethics Monitoring Alert"
    )
```
3 changes: 2 additions & 1 deletion pages/AI___Governance.md
@@ -2,4 +2,5 @@ alias:: AI Governance
broader:: [[Data & AI Governance]]

- [[AI Governance/Tools]]
- [[AI Governance/Policies]]
- [[AI Governance/Policies]]
- [[AI Monitoring]]
5 changes: 0 additions & 5 deletions pages/AI___Monitoring.md

This file was deleted.

23 changes: 15 additions & 8 deletions pages/Data & AI Governance.md
@@ -1,18 +1,20 @@
- Artificial Intelligence and its advanced capabilities around [[language use]] have given rise to concerns over its governance if it is to be used in [[organisations]].
- We intend to study this topic in this graph and provide practical [tools](AI Governance/Tools) to this end.
- ### Data & AI Governance as an [[organisational unit]]
collapsed:: true
- An [[organisational unit]] that has responsibility over [[Data Governance]] as the foundation for [[AI Governance]].
- While AI Governance has no meaning without a solid and healthy Data Governance infrastructure and organisation that is aligned with its objectives at [[tactical]] and [[strategic]] levels, at operational levels however, AI Governance must be studied and operationalised as a separate discipline.
- AI Governance has no meaning without a solid and healthy Data Governance infrastructure and organisation, which aims to align the wider organisation's data operations with overall company objectives at *tactical* and *strategic* levels by making sure that all the appropriate risks are mitigated.
- At *operational* levels, however, AI Governance must be studied and operationalised as a novel discipline, mainly due to the **continuous** and **blackbox** nature of AI systems.
- There are two distinct features of AI Governance.
collapsed:: true
- [[Continuous AI monitoring requirements]]
collapsed:: true
- In [[Numerical Data Science]],
- [[Undesired Outcome/Data Drift]] DO happen and a lot more often that desired
- [[data drifts]] DO happen, and a lot more often than desired
- In case of [[LLMs]],
- [[Continuous application of AI Ethics]] to prevent [[Human Value Drift]]
- requiring a great deal of [[ethics]] and [[AI Monitoring]]. demands that to be studied on its own, due to [[the radical shift that Generative AI has caused, since ChatGPT]] [[the human-computer interaction with AI]].
- Note that this definition does not necessarily define an rigid [[organisational hierarchy]] between teams serving under these functions.
- [[Continuous application of AI Ethics]] is required to prevent [[Human Value Drift]] while the AI system is interacting with its users.
- [[Blackbox nature of most AI systems]]
collapsed:: true
- This simply means that the structure that interacts with a human user in an AI system is not directly designed by humans, and [[the AI response]] is not a conscious act.
- *Note* that this definition of Data & AI Governance does **not** necessarily define a rigid [[organisational hierarchy]] between teams serving under these functions.
collapsed:: true
- However, it highlights the need for full alignment between the data governance that is in place and the type of data governance required for a successful AI Governance initiative.
- Example
@@ -30,4 +32,9 @@
- Data Governance Unit
- Data Engineering Unit
- Data Science & Predictive Analytics Unit
- ...
- ...
- #### Relevant Policies
- [[AI Governance/Policies/AI Monitoring Policy]]
- #### Required Capabilities to Implement the [[AI Governance/Policies]]
- [[AI Monitoring/Continuous AI Ethics check]]
-
1 change: 1 addition & 0 deletions pages/TestSuite.md
@@ -0,0 +1 @@
alias:: test suites
2 changes: 1 addition & 1 deletion pages/Types of Organisations.md
@@ -1,4 +1,4 @@
alias:: your company's, enterprise, an organisation, an enterprise, industry classification codes, industry classification, organisations, organisation, your organisation
alias:: your company's, enterprise, an organisation, an enterprise, industry classification codes, industry classification, organisations, organisation, your organisation, the organisation
title:: Types of Organisations

- Organisations vary in size, structure, and purpose, and these factors influence not only how they are perceived but also how they function internally and externally.
3 changes: 3 additions & 0 deletions pages/Undesired Outcome.md
@@ -0,0 +1,3 @@
alias:: undesired outcomes

-
1 change: 0 additions & 1 deletion pages/Undesired Outcome___Data Drift.md
@@ -4,7 +4,6 @@ title:: Undesired Outcome/Data Drift
- Data drift occurs when the statistical properties of data change over time, causing models that were once accurate to become less effective.
- This phenomenon is particularly significant in [[Machine Learning]], where models are trained on historical data and expected to make predictions on future or real-time data. When data drifts, the relationship between input data and predicted output can deteriorate, leading to poor model performance.
- ## Types of Data Drift
collapsed:: true
Data drift can manifest in different ways depending on how the data is changing over time:
- **Covariate Drift**: Occurs when the distribution of independent variables (features) changes. The relationship between the features and target variable may remain the same, but the distribution of the input data changes.
- Example: A retail company's customer base shifts over time, changing the distribution of customer features even though the relationship between those features and purchasing outcomes stays the same.
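- Covariate drift of this kind can be flagged with a simple two-sample comparison of feature distributions; the self-contained sketch below computes the Kolmogorov-Smirnov statistic between a training-time sample and a production window (the threshold is illustrative, not prescribed here):

```python
# Kolmogorov-Smirnov statistic between two samples: the maximum gap
# between their empirical CDFs. Large values suggest covariate drift.
def ks_statistic(reference, current):
    values = sorted(set(reference) | set(current))

    def ecdf(sample, v):
        # Fraction of the sample at or below v
        return sum(1 for x in sample if x <= v) / len(sample)

    return max(abs(ecdf(reference, v) - ecdf(current, v)) for v in values)

# Training-time feature values vs. a drifted production window
baseline = [1, 2, 2, 3, 3, 3, 4]
drifted = [4, 5, 5, 6, 6, 7, 8]
stat = ks_statistic(baseline, drifted)
drift_detected = stat > 0.5  # illustrative threshold
```

In practice a library routine such as `scipy.stats.ks_2samp` would be used per feature, with thresholds calibrated against the model's tolerance for distribution shift.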
1 change: 1 addition & 0 deletions pages/test suites.md
@@ -0,0 +1 @@
-
@@ -1 +0,0 @@
-
