I linked my lecture about how I see "ethics" as another term for "politics" in the academic sense.
Other points to note: All AI will always have bias; a) because neural networks literally have a constant called 'bias' in them; and b) training a prediction algorithm means it is being trained to be biased, and running a clustering algorithm means lumping commonly themed attributes together.
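To illustrate point (a): in most frameworks, each neuron computes a weighted sum of its inputs plus a learned constant, and that constant is literally named "bias" (e.g. the bias vector of a dense/linear layer). A minimal sketch in plain Python; the names here are illustrative, not from any specific library:

```python
import random

# A single "neuron": a weighted sum of inputs plus a learned constant.
# That constant is the "bias" term referred to in point (a).
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]  # one weight per input
bias = 0.5                                           # the bias constant

def neuron(inputs):
    # Even with all-zero inputs, the output is shifted by the bias term.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(neuron([0, 0, 0]))  # prints 0.5: with zero input, the output IS the bias
```

In training, both the weights and this bias constant are adjusted, so in the strictly mathematical sense a trained network always carries bias terms.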
If you mean "BIAS" as in "RACE/ETHNICITY/SOCIOECONOMIC_STATUS", then most groups already do this, state they do it, and still deal with the general public not believing them.
Reread my last sentence - allow me to rephrase it:
If you mean "BIAS" as in avoiding "RACE/ETHNICITY/SOCIOECONOMIC_STATUS", most groups state they do not use those attributes for decision making, and they still deal with the "general public" not believing them because the results do not support their opinions.