Hazard: Reinforces Existing Biases
Reinforces unfair treatment of individuals and groups. This may be due to, for example, the input data, choices in algorithm or software design, or society at large.
Note: this is a hazard in its own right, even if the output is never used to harm people directly, because it can, for example, reinforce stereotypes.
- Test the effect of the algorithm on different marginalised groups, considering several definitions of bias and fairness.
- Think about the input data: what intrinsic bias does it contain, and how can this be reduced (for example, by using a more representative data set)?
- Think about the algorithm itself: what bias do its design choices introduce, and how can this be reduced?
- Do not deploy tools that are known to reinforce biases against particular groups (for instance, systemic racism).
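The first precaution above can be made concrete in code. The sketch below is a minimal, hypothetical illustration of comparing two common fairness definitions across groups: demographic parity (do groups receive positive predictions at similar rates?) and equal opportunity (do groups with positive true labels receive positive predictions at similar rates?). The function names and the example data are assumptions for illustration, not part of the original text; real audits should use larger samples and domain-appropriate fairness definitions.

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate between any two groups.

    A gap near 0 means groups are flagged positive at similar rates.
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())


def equal_opportunity_gap(preds, labels, groups):
    """Largest difference in true-positive rate between any two groups.

    Only individuals whose true label is positive are considered.
    """
    tprs = {}
    for g in set(groups):
        # Indices of members of group g whose true label is positive.
        idx = [i for i, grp in enumerate(groups) if grp == g and labels[i] == 1]
        tprs[g] = sum(preds[i] for i in idx) / len(idx)
    return max(tprs.values()) - min(tprs.values())


# Illustrative (fabricated) predictions, true labels, and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))          # gap in positive rates
print(equal_opportunity_gap(preds, labels, groups))   # gap in true-positive rates
```

Note that the two definitions can disagree: an algorithm can satisfy one while badly violating the other, which is why the precaution asks you to consider several definitions of bias and fairness rather than optimising a single metric.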