Add ISO definition of bias
part of #17
dontcallmedom committed Mar 27, 2024
1 parent 50499a8 commit cd6ca5c
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions index.html
@@ -243,13 +243,13 @@ <h4>Transparency on AI-mediated services</h4>


<p>
A well-known issue with relying operationally on [=Machine Learning models=] is that they will integrate and possibly strengthen any <a href="https://www.w3.org/TR/webmachinelearning-ethics/#bias">bias</a> in the data that was used during their [=training=]. Bias commonly occurs in other algorithms and human decision processes. Where [=AI systems=] make that bias a bigger challenge is because these models are at this point harder to audit and amend since they operate mostly as a closed box.
A well-known issue with relying operationally on [=Machine Learning models=] is that they will integrate and possibly strengthen any <dfn>bias</dfn> ("systematic difference in treatment of certain objects, people or groups in comparison to others" [[ISO/IEC-22989]]) in the data that was used during their [=training=]. [=Bias=] commonly occurs in other algorithms and human decision processes. Where [=AI systems=] make that [=bias=] a bigger challenge is because these models are at this point harder to audit and amend since they operate mostly as a closed box.
</p>
<p>
Such bias will disproportionately affect users whose expected input or output is less well represented in training data (as e.g., discussed in the <a href="https://www.w3.org/WAI/research/ai2023/">report from the 2023 AI & Accessibility research symposium</a> [[WAI-AI]]), which intuitively is likely to correlate strongly with users already disenfranchised by society and technology - e.g., if your language, appearance or behavior doesn't fit the mainstream-expected norm, you're less likely to feature in mainstream content and thus less visible or misrepresented in training data.
Such [=bias=] will disproportionately affect users whose expected input or output is less well represented in training data (as e.g., discussed in the <a href="https://www.w3.org/WAI/research/ai2023/">report from the 2023 AI & Accessibility research symposium</a> [[WAI-AI]]), which intuitively is likely to correlate strongly with users already disenfranchised by society and technology - e.g., if your language, appearance or behavior doesn't fit the mainstream-expected norm, you're less likely to feature in mainstream content and thus less visible or misrepresented in training data.
</p>
<p>
Until better tools emerge to facilitate at least the systematic detection of such bias, encouraging and facilitating the systematic publication of information on whether a Machine Learning model is in use, and how such a model was trained and checked for bias may help end- users make more informed choices about the services they use (which, of course, only helps if they have a choice in the first place, which may not apply e.g., to some government-provided services).
Until better tools emerge to facilitate at least the systematic detection of such [=bias=], encouraging and facilitating the systematic publication of information on whether a Machine Learning model is in use, and how such a model was trained and checked for [=bias=] may help end-users make more informed choices about the services they use (which, of course, only helps if they have a choice in the first place, which may not apply e.g., to some government-provided services).
</p>
<div id=b2 class=advisement>
<p>
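As a purely illustrative aside (not part of this commit): one minimal way to read the ISO/IEC 22989 definition quoted in the new text, "systematic difference in treatment of certain objects, people or groups in comparison to others", is as a measurable gap in favourable model outcomes between groups, which is roughly what tools for the "systematic detection of such bias" mentioned in the changed paragraphs compute. The sketch below uses invented group labels and outcomes; none of the names or numbers come from the document.

```python
# Hypothetical sketch: compare the rate of favourable model outcomes across
# groups and report the largest gap. All data below is made up for illustration.
from collections import defaultdict

def positive_rate_by_group(groups, outcomes):
    """Return the share of favourable (truthy) outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        positives[group] += bool(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap between any two groups' favourable-outcome rates."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    groups   = ["a", "a", "a", "b", "b", "b", "b", "b"]   # hypothetical group labels
    outcomes = [ 1,   1,   0,   1,   0,   0,   0,   0 ]   # hypothetical model decisions
    rates = positive_rate_by_group(groups, outcomes)
    print(rates)                 # {'a': 0.666..., 'b': 0.2}
    print(max_disparity(rates))  # ~0.47: a systematic difference in treatment
```

Real bias audits involve many more metrics and controls; this only shows the arithmetic behind the quoted definition.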
