Experiences… and what we make of them

3 minute read

Some days ago I read a post about bias written by a colleague here at the Department of Informatics, Systems and Communication of the University of Milano-Bicocca. The post discussed the relationship between what he called “model bias” and “domain bias”. The point was that, in some cases, the bias is not in the model, or, better, it was not introduced by the model; rather, it is replicated from the data describing a phenomenon. In turn, those data are not necessarily biased themselves: what is biased is our world (the domain), our society, and sometimes even our formal rules.

Bias refers to a systematic deviation from rationality or neutral thinking. Sometimes it is a consequence of repeated experiences that lead to an often ineradicable set of beliefs and attitudes. Bias can therefore be the result of an overgeneralization, leading to erroneous positions. Bias, however, can also arise from various forms of racism and from discrimination based on gender, beliefs, political positions, and so on. In the latter case, bias can be fueled by what is (ironically) called confirmation bias: I hold a prejudice and I actively seek information supporting my opinion (it matters little whether it is true or false), discarding any conflicting information as wrong, itself the product of prejudice or bias. If you’re thinking about social media, well, you’re right: echo chambers, sometimes fueled by recommender systems that are doing exactly what they were designed for (proposing content that users appreciate), certainly have non-trivial relationships with bias.

One problem with bias is that it can sometimes be (to a certain extent) related to actual experiences.

We also have a different, although perhaps not as popular, name for mental shortcuts related to experience that we use to simplify decision making: heuristics. Heuristics are pragmatic methods, often based on induction or analogical thinking, and they support decisions on the premise that finding optimal solutions is sometimes impractical or even impossible, while satisfactory or acceptable solutions can be relatively simple to find. Of course, heuristics can fail or at least produce suboptimal results, but we nonetheless attach a more neutral intent to the term.
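
To make the idea concrete, here is a minimal, purely illustrative sketch (in Python, my own example rather than anything from the post I read) of a classic heuristic: greedy nearest-neighbor route planning. It almost never finds the optimal tour, but it usually finds an acceptable one with very little effort.

```python
# Illustrative sketch of a heuristic: a greedy nearest-neighbor tour.
# It trades optimality for simplicity; the result is usually "good enough".
from math import dist

def nearest_neighbor_tour(points):
    """Build a tour by always moving to the closest unvisited point."""
    unvisited = list(range(1, len(points)))
    tour = [0]  # start, arbitrarily, from the first point
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3)]
print(nearest_neighbor_tour(cities))  # a satisfactory order, not necessarily the best one
```

The rule (“always go to the closest place next”) is simple and fallible, but nobody would call it prejudiced; the hard question, discussed below, is what happens when similar shortcuts are applied to people rather than to distances.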

AI generated image of Mother Theresa fighting against poverty

But how can we decide whether we are facing a fallible but neutral heuristic rule, a rather neutral bias, or a bias rooted in prejudice, created and fueled like a self-fulfilling prophecy? And even when the rule is backed up by data that somehow justify it, so that it is not exactly a prejudice, is it right to keep applying it, or would it be more just, fair, transformative, and potentially even useful (in the long run) to bring some change to the state of affairs?

That seems to me a serious problem, and one that is not technical: it is not a disciplinary problem for AI, or for psychology and the cognitive sciences. It seems to me a societal problem, a political problem, an issue pertaining to the moral and ethical domain. I would like to invite anyone taking the time to read these words, and wanting to go deeper into the subject, to take the chance to read a book by Cathy O’Neil that has in places been overtaken by events but that, in its essence, still holds most of its initial value.

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O'Neil