Trying to remove gender bias from NLP
The Ethics of AI Ethics: An Evaluation of Guidelines
|                    | Actual Positive     | Actual Negative     |
|--------------------|---------------------|---------------------|
| Predicted Positive | True Positive (TP)  | False Positive (FP) |
| Predicted Negative | False Negative (FN) | True Negative (TN)  |
|                    | Actual Positive                                                  | Actual Negative                                                  |
|--------------------|------------------------------------------------------------------|------------------------------------------------------------------|
| Predicted Positive | True Positive (TP): PPV = TP / (TP + FP), TPR = TP / (TP + FN)   | False Positive (FP): FDR = FP / (TP + FP), FPR = FP / (FP + TN)  |
| Predicted Negative | False Negative (FN): FOR = FN / (TN + FN), FNR = FN / (TP + FN)  | True Negative (TN): NPV = TN / (TN + FN), TNR = TN / (TN + FP)   |
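To make the table above concrete, here is a minimal Python sketch that derives those rates from binary labels. The function name `confusion_rates` and the array inputs are illustrative assumptions, not part of the original material; it also assumes every denominator is non-zero.

```python
import numpy as np

def confusion_rates(y_true, y_pred):
    """Derive the rates from the confusion matrix above (1 = positive class).

    Assumes non-degenerate data, i.e. every denominator is non-zero.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    return {
        "PPV": tp / (tp + fp),  # positive predictive value (precision)
        "FDR": fp / (tp + fp),  # false discovery rate
        "FOR": fn / (tn + fn),  # false omission rate
        "NPV": tn / (tn + fn),  # negative predictive value
        "TPR": tp / (tp + fn),  # true positive rate (recall / sensitivity)
        "FNR": fn / (tp + fn),  # false negative rate (miss rate)
        "FPR": fp / (fp + tn),  # false positive rate (fall-out)
        "TNR": tn / (fp + tn),  # true negative rate (specificity)
    }
```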
Individuals in protected and unprotected groups have an equal probability of being predicted 'positive' by the classifier
Individuals in protected and unprotected groups have equal PPV (the probability that a subject predicted positive is actually positive)
Individuals in protected and unprotected groups have equal predictive accuracy (the probability that a subject, whether actually positive or negative, is classified correctly); see the sketch after this list
No sensitive or protected attributes are used in the decision-making process
The predicted outcome does not depend on the protected attribute (G)
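As a rough illustration of how the first three group-level criteria (often called demographic parity, predictive parity, and accuracy equality) could be checked in practice, the sketch below compares selection rate, PPV, and accuracy per group. The function name `group_fairness_report` and its toy inputs are hypothetical, not from the original material.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compare selection rate, PPV, and accuracy across protected groups.

    Demographic parity  -> equal selection_rate across groups
    Predictive parity   -> equal PPV across groups
    Accuracy equality   -> equal accuracy across groups
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        predicted_pos = int(np.sum(yp == 1))
        tp = int(np.sum((yp == 1) & (yt == 1)))
        report[g] = {
            "selection_rate": predicted_pos / len(yp),
            "PPV": tp / predicted_pos if predicted_pos else float("nan"),
            "accuracy": float(np.mean(yp == yt)),
        }
    return report

# Example usage with toy labels and two groups
print(group_fairness_report(
    y_true=[1, 0, 1, 0, 1, 0, 0, 1],
    y_pred=[1, 1, 0, 0, 1, 0, 1, 0],
    group=["A", "A", "A", "A", "B", "B", "B", "B"],
))
```

In practice the per-group values will rarely be exactly equal, so the comparison is usually about whether the gaps between groups fall within an acceptable threshold.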
Unpopular opinion: the entangled nature of the different measures of fairness is important and useful
We don't (yet) have a single fairness measure to rule them all.
No physics 'Theory of everything' (yet)
Building ML systems in a capitalist, corporate context:
Co-opt these techniques to optimise for fairness?
All production data is validation data
What if we combined a mathematical approach like this with data about real-world outcomes? Could we create a fair, deliberately 'positively biased' model to counteract systemic bias?
"Complex systems are the best gift to finger pointing in the history of humanity"
SOURCE: What is wrong with UX podcast
Empathy is a useful tool, but it's not a full-coverage test
When you other ethics,
you other ethics
"I am an ethical person therefore I build ethical tech"