Example of Bias in Adversarial Faces #114
Comments
This is a really compelling example, thanks for sharing! We like to have well-reported articles in the news media or research in academic journals to give as full a picture as possible of the examples where something goes wrong. This seems to be pretty cutting-edge at the moment. Could you add it to our list of community-contributed examples here? We're starting to grow that page as an easy way to contribute these: https://github.com/drivendataorg/deon/wiki/Community-contributed-examples
Sure, I've never contributed to a wiki before though. Do I commit and push to this repo? https://github.com/drivendataorg/deon.wiki.git
I may also have found a nice example of a bad visualisation here. That said, there are plenty of these.
Here's some info on updating the wiki; you can do it right in the GitHub UI (permissions used to be restricted, but I just opened them up):
There's a nice example circulating of a system that can de-noise pixelated images. Unfortunately, it defaults to white faces.
There's a small thread of examples on Twitter. The original tweet is here, the Obama example is here, and this thread also highlights issues with Asian faces.
Something tells me this might be a nice example to list here. There's evidence that this might not be a data issue, but rather an algorithm issue; a sketch of the mechanism is below.
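To make the "algorithm issue" point concrete: systems like the de-pixelizer above typically search a face generator's latent space for an image whose downscaled version matches the pixelated input. Every detail the low-res input doesn't pin down gets filled in from the generator's prior, so if that prior is dominated by white faces, the outputs will be too, regardless of who was actually in the photo. Here's a minimal, hypothetical sketch of that latent-inversion loop; the linear `G`, the pooling `downscale`, and all sizes are toy stand-ins, not the real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed linear map from a 16-dim latent
# to a 64-pixel "image". A real system would use a GAN here.
W = rng.normal(size=(64, 16))

def G(z):
    return W @ z

def downscale(x, factor=4):
    # Average pooling: 64 pixels -> 16 low-res values.
    return x.reshape(-1, factor).mean(axis=1)

# The pixelated observation we want to "de-pixelate".
x_lr = rng.normal(size=16)

# Gradient descent on ||downscale(G(z)) - x_lr||^2 over the latent z.
z = np.zeros(16)
lr = 0.01
for _ in range(500):
    residual = downscale(G(z)) - x_lr
    # Chain rule through the (linear) pooling and generator.
    grad = 2 * (W.T @ np.repeat(residual, 4)) / 4
    z -= lr * grad

x_hr = G(z)  # the hallucinated high-resolution image
print(np.abs(downscale(x_hr) - x_lr).max())  # should be near zero
```

The low-res input only constrains 16 of the 64 output values; the remaining detail comes entirely from the generator's range. That's exactly where a biased prior enters, independent of the input data, which is why this reads as an algorithmic rather than a purely data-driven failure.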