Built-In Bias: The Human Fingerprints on Artificial Intelligence
A 42-year-old man, Robert Julian-Borchak Williams, was working in his front yard when the police came and arrested him for theft. He was identified by facial recognition software.
When he arrived at the police station, the pictures did not match the appearance of the man standing in front of them.
Thirty hours later, he was released, as the police finally admitted that the arrest had been made due to faulty facial identification by artificial intelligence. Upon further research, it was revealed that the facial identification model was trained on mostly white faces, thus making it more prone to error in identifying black Americans. This is a clear example of bias in A.I.
Given this case, the question arises: In what other areas is artificial intelligence biased?
Artificial intelligence is, at bottom, a compilation of human information, written and composed by humans. Do these humans possess bias in their judgments and statements? Absolutely. It only follows that the same bias will most likely appear in artificial intelligence and its pronouncements. On the other hand, I don't think we should disregard artificial intelligence as a source of information simply because it is biased. By that standard, we would have to disregard most human knowledge, which carries an inborn implicit bias to one degree or another.
We have to delve into what it means to have a bias. Philosopher Hans-Georg Gadamer states that “prejudices are biases of our openness to the world” (Truth and Method, 1960). He maintains that bias is something we grow up with in our family, culture, and society. There is no getting around our personal biases. However, he holds that these biases are the starting point of our openness to the world, the vantage point from which we confront it. He advocates a critical consciousness of those biases: upon receiving new knowledge, we should confront our old biases and either affirm or discard them.
Gadamer calls this process the Fusion of Horizons — the point where our prior understanding meets new information, and there is a fusion of the two viewpoints to formulate a more just interpretation. In this sense, Gadamer doesn’t see a bias as an evil to be eliminated; rather, it is simply a starting point from which to develop and expand our knowledge.
Let’s apply this to artificial intelligence. A.I. is biased, no doubt about it, but that doesn’t mean it should be discarded as a source of information. A.I.’s bias is a starting point for gathering information, one we can weigh against our own potential biases in a fusion of horizons.
Here is where I believe A.I. possesses a refinable bias. If I watch a biased political commentator, I can rest assured the commentator will never change his own bias, whether from a lifelong commitment to a point of view or from pride and arrogance. In conversations with such a person, opposing opinions are rarely granted any merit. With artificial intelligence, however, you can have a rational “conversation” in which other points of view are acknowledged and common ground can be found. I have yet to ask A.I. a question to which it responded with a deflection. A.I. will also admit error and correct itself when opposing arguments are valid. It doesn’t cling to its initial bias. Its bias is refinable. This is a positive development.
It should not be surprising to find A.I. bias in the area of historical information, given the human propensity toward bias. However, in other areas where artificial intelligence is embedded, bias should be rooted out.
Take the issue of A.I. in health care. A.I.’s predictive results depend on its training data. If A.I. is trained on data that include few patients from a minority group, its health care predictions will be skewed toward the majority. For those predictions to be accurate across the whole population, A.I. must be trained on data that represent it.
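This skew can be demonstrated with a small sketch. The numbers below are invented for illustration: two hypothetical patient groups whose "risk score" distributions differ, and a decision threshold tuned only on the over-represented majority group. The threshold that works well for the majority misclassifies far more of the minority group.

```python
import random

random.seed(0)

# Invented distributions for the sketch: in group "A", sick patients
# score around 70 and healthy around 40; group "B" is shifted lower.
def sample(group, n):
    shift = 0 if group == "A" else -15
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        mean = (70 if sick else 40) + shift
        data.append((random.gauss(mean, 8), sick))
    return data

# "Training": choose the threshold that best separates sick from
# healthy, but using only group A (the majority of the training set).
train = sample("A", 2000)
best_t, best_acc = None, 0.0
for t in range(20, 90):
    acc = sum((score > t) == sick for score, sick in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def error_rate(data, t):
    return sum((score > t) != sick for score, sick in data) / len(data)

err_a = error_rate(sample("A", 2000), best_t)
err_b = error_rate(sample("B", 2000), best_t)
print(f"threshold={best_t}: error on majority A {err_a:.0%}, "
      f"on under-represented B {err_b:.0%}")
```

Running this, the error rate on the under-represented group is several times higher than on the majority, even though the model was "accurate" on the data it was trained on.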
How about A.I. and surveys? If A.I. is trained on a data set drawn primarily from Americans, its output will be skewed toward American perspectives, leaving out opinions from the rest of the world. This could be an issue for international businesses.
Bias in A.I. is not an unknown problem. Many of the top promoters of A.I. are trying to correct the issue. Read this quote from the World Economic Forum:
Currently, the United States and European Union are driving efforts to limit the rising instances of artificial intelligence bias through Equal Employment Opportunity Commission oversight in the US and the AI Act and AI Liability Directive in the EU.
The focus initially should be on certain sectors where AI bias can potentially deny access to vital services. The best examples include credit, healthcare, employment, education, home ownership, law enforcement, and border control. Here, stereotypes and prejudices regularly propagate an inequitable status quo that can lead to shorter life expectancy, unemployment, homelessness, and poverty.
Control of artificial intelligence bias must begin with testing algorithm outcomes before they are implemented. Mistakes of AI bias are most often made when those evaluating algorithms focus on data going into decision-making rather than whether the outcomes are fair.
The authors, Townson and Zeltkevic, propose a threefold solution. They advocate focusing on data selection, algorithm design, and a fairness threshold.
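One simple form a fairness threshold can take is an outcome audit run before deployment, as the quote above suggests. The sketch below is a hypothetical illustration, not the authors' method: it compares a model's approval rates across groups against the "four-fifths rule" used in US employment law, one common choice of threshold. The example data are invented.

```python
# Hypothetical pre-deployment audit: compare approval rates by group
# and flag the model if the lowest rate falls below 80% of the highest
# (the "four-fifths rule"). All names and data here are illustrative.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def passes_fairness_threshold(outcomes_by_group, threshold=0.8):
    """outcomes_by_group maps group name -> list of 1 (approved) / 0 (denied).
    Returns (passed, ratio), where ratio = lowest rate / highest rate."""
    rates = {g: approval_rate(d) for g, d in outcomes_by_group.items()}
    ratio = min(rates.values()) / max(rates.values())
    return ratio >= threshold, ratio

# Invented example: group B is approved far less often than group A.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 30% approved
}
passed, ratio = passes_fairness_threshold(outcomes)
print(f"ratio={ratio:.2f}, passed={passed}")  # 0.30/0.80 = 0.38 -> fails
```

The point of the quote stands: this check looks at the *outcomes* the algorithm produces, not merely at the data that went into it.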
Although A.I. has made remarkable strides, it is not immune to bias. This underscores the importance of cross-referencing information and relying on multiple sources for a well-rounded perspective. Just as human thinking can be biased, A.I. reflects the data and design behind it. Fortunately, developers are actively working to identify and reduce these biases, making systems fairer and more reliable. Until then, A.I. should be used with both curiosity and caution — not as an unquestionable authority, but as a tool that benefits from human oversight.
George Matwijec is an adjunct philosophy teacher at Immaculata University who specializes in teaching knowledge and logic. He is the author of a book entitled My Interview with AI. He can be reached at iteacher101.com.