Panel discussion 'Statistics: burden or solution for efficient supervision on algorithms and AI?', Dutch Journal for Supervision and network of competent authorities VIDE
content/english/algoprudence/case-repository.md (60 additions, 19 deletions)
@@ -1,11 +1,14 @@
 ---
+layout: repository
 title: Algoprudence repository
-subtitle: "Stakeholders learn from our techno-ethical jurisprudence, can help to improve it and can use it as to resolve ethical issues in a harmonized manner.\n\nWe are open to new cases. Please <span style=\"color:#005aa7\">[submit</span>](/algoprudence/submit-a-case/) a case for review.\n\nOr read our <span style=\"color:#005aa7\">[white paper</span>](/knowledge-platform/knowledge-base/white_paper_algoprudence/) on algoprudence.\n"
+subtitle: "Stakeholders learn from our case law for algorithms (_algoprudence_), can help to improve it and can use it to resolve ethical issues in a harmonized manner when deploying algorithmic systems"
Case study on how irresponsible driving can be identified and predicted in the database of a car sharing platform. An independent commission issues advice on, among other things, model validity, balancing false positives and false negatives, and meaningful transparency.
Further research into the CUB process of the Education Executive Agency of the Netherlands (DUO)
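The advice on balancing false positives and false negatives comes down to making the relative cost of each error type explicit when choosing the model's decision threshold. A minimal sketch of that trade-off (Python; entirely synthetic scores and made-up cost weights, not data from the case):

```python
import numpy as np

# Synthetic stand-ins for the platform's data: true outcomes (1 = caused damage)
# and noisy risk scores that loosely track them.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.3 * y_true + rng.random(1000), 0, 1)

# Assumed relative costs: wrongly warning a careful driver (false positive)
# versus missing a driver who later causes damage (false negative).
COST_FP, COST_FN = 1.0, 5.0

def expected_cost(threshold: float) -> float:
    """Total cost of warnings issued at a given score threshold."""
    y_pred = y_score >= threshold
    false_positives = np.sum(y_pred & (y_true == 0))
    false_negatives = np.sum(~y_pred & (y_true == 1))
    return COST_FP * false_positives + COST_FN * false_negatives

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=expected_cost)
print(f"cost-minimising threshold: {best:.2f}")
```

Shifting the assumed cost ratio shifts the chosen threshold; making that ratio explicit is exactly the normative choice the commission asks to be taken deliberately rather than left implicit in the model.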
@@ -92,9 +140,9 @@ algoprudences:
   - value: ethical_issue_proxy
     label: proxy discrimination
   - value: owner_public
-    label: public organisation
+    label: DUO
   - value: standard_risk_management
-    label: risk mmanagement
+    label: risk management
     hide: true
   - value: standard_governance_data_quality
     label: governance & data quality
@@ -132,7 +180,7 @@ algoprudences:
   - value: ethical_issue_proxy
     label: proxy discrimination
   - value: owner_public
-    label: public organisation
+    label: DUO
   - value: standard_risk_management
     label: risk management
     hide: true
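The label fixes above sit in a recurring `value`/`label`/`hide` pattern in the page's front matter. A small sketch of a pre-publish check for missing labels and duplicate filter values (Python with PyYAML; the excerpt only mirrors fields visible in this diff and is an assumption about the full structure):

```python
import yaml

# Assumed excerpt of the filter list, mirroring the entries shown in the diff.
FRONT_MATTER = """
algoprudences:
  - value: owner_public
    label: DUO
  - value: standard_risk_management
    label: risk management
    hide: true
  - value: standard_governance_data_quality
    label: governance & data quality
"""

entries = yaml.safe_load(FRONT_MATTER)["algoprudences"]

seen = set()
for entry in entries:
    value = entry.get("value")
    if not entry.get("label"):
        print(f"missing label for {value!r}")
    if value in seen:
        print(f"duplicate value {value!r}")
    seen.add(value)
print("checked", len(entries), "entries")
```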
@@ -150,8 +198,8 @@ algoprudences:
     hide: true
   - title: Risk Profiling for Social Welfare Reexamination
     intro: >-
-      The commission judges that algorithmic risk profiling can be used under
-      strict conditions for sampling residents receiving social welfare for
+      Case study on ML-driven risk predictions of unduly granted social welfare. An independent commission judges that algorithmic risk profiling can be used under
+      strict conditions for sampling residents for
       re-examination. The aim of re-examination is a leading factor in judging
-      The audit commission believes there is a low risk of (higher-dimensional)
-      proxy discrimination by the BERT-based disinformation classifier and that
-      the particular difference in treatment identified by the quantitative bias
-      scan can be justified, if certain conditions apply.
+      Case study on algorithmic detection of fake news on Twitter. An independent advice commission believes there is a low risk of proxy discrimination by the BERT-based disinformation classifier and that the particular difference in treatment identified by the quantitative bias scan can be justified, if certain conditions apply.
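A quantitative bias scan of a classifier like this essentially compares error rates across groups and asks whether any gap can be justified. A minimal illustration (Python; fully synthetic labels and a hypothetical group attribute, not the BERT model or Twitter data from the case):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: classifier verdicts, ground truth, and a hypothetical
# group attribute for each tweet's author.
n = 2000
group = rng.choice(["A", "B"], size=n)
y_true = rng.integers(0, 2, size=n)   # 1 = actually disinformation
y_pred = rng.integers(0, 2, size=n)   # 1 = flagged by the classifier

def false_positive_rate(mask: np.ndarray) -> float:
    """Share of genuine content within `mask` that is wrongly flagged."""
    negatives = (y_true == 0) & mask
    return np.sum(y_pred[negatives] == 1) / max(np.sum(negatives), 1)

rates = {g: false_positive_rate(group == g) for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])
print(f"FPR per group: {rates}, gap: {gap:.3f}")
```

The commission's point is that a non-zero gap is not automatically discrimination; whether the measured difference in treatment is justified depends on the conditions attached to deployment.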
   - title: Type of SIM card as a predictor variable to detect payment fraud
     intro: >-
-      The audit commission advises against using type of SIM card as an input
-      variable in algorithmic models that predict payment defaults and block
-      afterpay services for specific customers. As it is likely that type of SIM
-      card acts as a proxy-variable for sensitive demographic categories, the
-      model would run an intolerable risk of disproportionally excluding
-      vulnerable demographic groups from the payment service.
+      Case study on ML-driven risk profiling to detect after-pay fraud at an e-commerce platform. An independent commission advises against using type of SIM card as an input variable in the algorithmic risk model. As it is likely that type of SIM
+      card acts as a proxy-variable for sensitive demographic categories, the model would run an intolerable risk of disproportionally excluding vulnerable demographic groups from the payment service.
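The proxy concern can be made concrete: if a sensitive attribute is predictable from the candidate feature well above chance, dropping the attribute itself does not remove the demographic signal. A rough sketch of that check (Python with scikit-learn; column names and data are entirely synthetic and for illustration only, not the platform's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-ins: a candidate input feature (SIM type) and a sensitive
# demographic attribute, correlated by construction.
n = 5000
sim_type = rng.integers(0, 3, size=n)          # e.g. prepaid / contract / business
noise = rng.random(n) < 0.3
sensitive = ((sim_type == 0).astype(int) ^ noise).astype(int)

# If the sensitive attribute can be recovered from the feature alone,
# the feature is a candidate proxy and deserves scrutiny before use.
X = sim_type.reshape(-1, 1)
accuracy = cross_val_score(LogisticRegression(), X, sensitive, cv=5).mean()
print(f"accuracy predicting sensitive attribute from SIM type: {accuracy:.2f}")
```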
+- <span style="color:#005aa7; font-weight:600;">Model validity is fundamental</span>\
+  The algorithm must be altered to specifically predict driving behavior that causes damage, not general platform misuse. As for any risk prediction model, alignment between training data and intended purpose is a critical prerequisite.
+
+- <span style="color:#005aa7; font-weight:600;">Balance monitoring with user autonomy</span>\
+  Monitoring irresponsible driving to reduce damage costs is a legitimate business interest but must not become excessive surveillance or veer into paternalistic advice about general driving habits.
+
+Users need specific explanations about what driving behavior triggered the warning and clear guidance for improvement, not generic warnings or confusing technical jargon that means nothing to the average driver.
+
+Speeding has obvious safety implications, but acceleration and similar variables are trickier. They depend on context and may just reflect personal driving preferences. Before including them, there must be solid evidence linking them to actual damage risk, not just different driving styles or environments.
+
+Human analysts currently override 50-60% of the model's recommendations, demonstrating real discretion rather than rubber-stamping. This meaningful human oversight must continue.
+
+#### Summary advice
+
+The commission judges that algorithmic risk prediction for identifying irresponsible driving behavior should take place under strict conditions and should be weighed against alternative methods of reducing damage. The validity of the prediction model is a critical prerequisite, and hence the current mismatch between the stated objective (predicting irresponsible driving) and the target variable in training (user bans for a wide variety of misuse) must first be resolved. The commission emphasizes that while monitoring to reduce damage cost may be a legitimate business interest, it should not become excessive surveillance or be used for paternalistic feedback on users' general driving style. Users should receive specific, meaningful explanations about which driving behaviors triggered warnings, not generic notifications or lists of technical variables that users cannot comprehend. Variable selection must be carefully justified, with speeding as the most legitimate variable, while contextual behaviors like fast acceleration or hard braking require attention to driving context and solid evidence of how they relate to damage risk. The commission recommends maintaining substantial human review of algorithmic recommendations, to mitigate the risk that warnings are unduly sent and to facilitate appeal and redress by users.
+
+#### Source of case
+
+Collaboration with a car sharing platform. Both the commission and Algorithm Audit have conducted this study independently from the car sharing platform. Neither the investigation nor the advice has been commissioned or funded by the platform.
+
+#### Presentation
+
+This case study was published during UNESCO's Expert roundtable II: Capacity building for AI supervisory authorities in Paris on September 30, 2025.
+
+<!-- {{< image id="presentation-minister" image1="/images/algoprudence/AA202302/Algorithm audit presentatie BZK FB-18.jpg" alt1="Presentation advice report to Dutch Minister of Digitalization" caption1="Presentation advice report to Dutch Minister of Digitalization" width_desktop="5" width_mobile="12" >}} -->
Panel discussion 'Statistics: ailment or miracle cure for effective supervision of algorithms and AI?', Tijdschrift voor Toezicht and professional association VIDE