Hazard: Automates Decision Making
Automated decision making can be hazardous for a number of reasons, which depend heavily on the field in which it is applied. We should ask whose decisions are being automated, what automation brings to the process, and who benefits from or is harmed by this automation.
Example 1: Predictive policing is used to decide where to deploy officers.
Example 2: Credit scores are produced automatically and rarely involve human input.
Safety precautions for this hazard include:

- Include feedback in the system, so that poor or incorrect decisions are not repeated.
- Test rigorously across a broad range of scenarios and edge cases, involving both people with experience of the domain and people about whom decisions are being made.
- Consider human-in-the-loop solutions for higher-risk decisions.
- Ensure the outputs of the decision-making process are interpretable.
- Ensure there is an easy, accessible and timely way to challenge an automated decision, or to avoid having it applied in the first place.
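To make the human-in-the-loop idea concrete, here is a minimal sketch of one possible gating pattern for a scored binary decision: the automated outcome is applied only when the model is confidently far from the decision threshold, and borderline cases are escalated to a human reviewer. All names here (`decide`, `review_band`, the `review` callback) are illustrative assumptions, not part of any particular system.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    approved: bool
    confidence: float  # how far the score sits from the threshold, scaled to [0, 1]
    automated: bool    # False when a human made the final call


def decide(
    score: float,
    review: Callable[[float], bool],
    threshold: float = 0.5,
    review_band: float = 0.1,
) -> Decision:
    """Apply the automated decision only when confidence exceeds `review_band`;
    otherwise route the case to the `review` callback (a human reviewer)."""
    confidence = abs(score - threshold) / max(threshold, 1 - threshold)
    if confidence < review_band:
        # Borderline case: escalate to human-in-the-loop review instead of
        # applying the automated outcome.
        return Decision(approved=review(score), confidence=confidence, automated=False)
    return Decision(approved=score >= threshold, confidence=confidence, automated=True)


# Illustrative usage: a clear-cut score is decided automatically,
# while a borderline score is passed to the reviewer.
clear = decide(0.9, review=lambda s: False)       # automated=True, approved=True
borderline = decide(0.52, review=lambda s: False)  # automated=False
```

In a real deployment the `review` callback would feed a queue with an audit trail, and reviewer overrides would be logged back into the system, which also supports the feedback and contestability precautions above.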