Artificial Intelligence and the fallacy of bias

Leon Festinger’s Theory of Cognitive Dissonance has been one of the most influential theories in social psychology. It suggests that a pair of dissonant cognitions gives rise to psychological discomfort. In simple terms, when an individual’s behaviour is inconsistent with her thoughts and beliefs, it creates psychological tension. The natural reaction is to reduce this discomfort. Hence, a human tends to avoid new information that promotes dissonance, rationalise her choices or actions, or alter her behaviour to achieve consistency between her beliefs and actions. This is called the “principle of cognitive consistency”. In this article, the author explores whether cognitive inconsistency will force artificial intelligence to preserve bias, or whether the technology will nevertheless usher in an era of uniform human rights standards.

Individuals and Social Dissonance

While Festinger’s theory is limited to the functioning of an individual mind, an extension of it is ‘social dissonance’. It refers to misplaced norms across a human community or society, also identifiable as collective cognitive dissonance. For instance, in the pre-industrialisation era, the practice of Sati was common in Hinduism, despite the religion’s significant emphasis on the value of human life. Society was unwilling to accept change, since new information, such as feminist thought, contradicted its existing norms.

While social dissonance has existed ever since humans started to form communities, the theory has not been part of mainstream discourse, mostly because it is inherently understood to be a natural part of human existence. However, with the emergence of a new stakeholder in our society, the discourse around this theory has become more important than ever.

Enter Artificial Intelligence

That new stakeholder, undoubtedly, is Artificial Intelligence. It promises to be a significant part of the Fourth Industrial Revolution and certainly holds the potential to eliminate the biases and discrimination that victimise certain sections of our society by deploying an objective threshold in almost all aspects of societal operations. The technology has already shown its capability to drive the human system towards egalitarianism.

At present, however, artificial intelligence is not immune to the complexities of human society. It also reflects existing biases, positive or negative. Since artificial intelligence relies on data analytics, it evidently absorbs the underlying dissonance as well. This dissonance contradicts the very promise of artificial intelligence: eliminating bias and discrimination.
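To see how an analytics-driven system can absorb the dissonance embedded in its training data, consider a minimal, hypothetical sketch (the groups, qualifications, and outcomes below are entirely illustrative): a naive model that learns the majority outcome from historically biased hiring decisions will faithfully reproduce that bias.

```python
from collections import defaultdict

# Hypothetical historical hiring data in which group "A" was
# systematically favoured over group "B", even though
# qualifications are identical across groups.
history = [
    ("A", "qualified", "hired"),
    ("A", "qualified", "hired"),
    ("B", "qualified", "rejected"),
    ("B", "qualified", "rejected"),
    ("A", "unqualified", "rejected"),
    ("B", "unqualified", "rejected"),
]

# A naive "model": pure data analytics that learns the majority
# outcome for each (group, qualification) pair.
counts = defaultdict(lambda: defaultdict(int))
for group, qualification, outcome in history:
    counts[(group, qualification)][outcome] += 1

def predict(group, qualification):
    outcomes = counts[(group, qualification)]
    return max(outcomes, key=outcomes.get)

# The model reproduces the historical bias: two equally qualified
# candidates receive different outcomes based on group alone.
print(predict("A", "qualified"))  # hired
print(predict("B", "qualified"))  # rejected
```

Nothing in the code is prejudiced; the dissonance lives entirely in the data, which is precisely the point the paragraph above makes.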

Proponents of a functional artificial intelligence regime argue that the technology will be sound enough to maximise human well-being. But that inherently requires moving away from any social dissonance and establishing a comprehensive objective framework that is widely acceptable to human society. How, then, can a biased artificial intelligence do that?

The hypothesis, therefore, is that artificial intelligence regimes will only be possible in the absence of social dissonance. The author tests this hypothesis on three fronts: identifying the ‘discomfort’ that artificial intelligence may cause and the opposition it may consequently invite, the role of cultural relativism in morality, and the flexibility of misplaced norms.

Understanding “Discomfort” under The Dissonance Theory

First, we need to understand what “discomfort” represents in cognitive dissonance theory. Festinger does not offer a definite answer, and the academic literature gives only partial or context-specific responses. Arguably, discomfort fundamentally represents a human emotional state that is sub-optimal relative to what it should be.

Ekman’s Theory of Emotions, a widely accepted account of human emotion, proposes that humans have six basic emotions: anger, surprise, disgust, enjoyment, fear, and sadness. Ekman argues that these emotions transcend “language, regional, cultural, and ethnic differences” and carry distinctive signals. Technically, this means that Festinger’s ‘discomfort’ is a representation of any or several of these emotions: anger-disgust, fear-sadness, fear-disgust-surprise, and so on. However, although widely accepted, even this theory is not free from criticism. In fact, research has suggested that Ekman’s six universal emotions do not exist as such, but are instead an artefact of isolated sampling.

The next argument worth considering is Lisa Feldman Barrett’s Theory of Constructed Emotion. It takes a more nuanced approach, processing a number of factors, such as internal feelings, external circumstances, and past experiences, in parallel to construct an emotion. However, unlike Ekman’s theory, it does not propose any universal set of emotions, meaning emotions are again subject to variable circumstances.

Individual Resistance to Artificial Intelligence

The above discussion suggests that the feeling of discomfort during cognitive dissonance does not have a definite interpretation. An extension of this inference is that when a society is under collective cognitive dissonance, the discomfort it feels towards new information that questions its norms is also not objectively interpretable.

Therefore, quite likely, when new information challenges existing norms, society as a singular unit may not oppose it. For instance, the emerging blockchain revolution also runs against the existing norm of having a central body accountable for managing a service. Despite that, society has largely embraced the revolution without much organised opposition (though criticism is inevitable). This is because society rarely responds as a single unit to any dissonance. Rather, the individuals within it take independent approaches relative to their own levels of cognitive dissonance.

Hence, the presumption that artificial intelligence’s objectivity in decision making will cause social ‘discomfort’ and invite opposition remains unfounded.

Cultural Relativism and Global Influence over AI’s Decision Making

Artificial intelligence is functionally designed to respond to context-driven problems in our society. Hence, it requires context-specific big data and metadata for analytics. Social norms, in turn, are inherently based on the understanding of morality in a given region. For instance, eating beef might be the norm in one region but not in another. Similarly, an artificial intelligence system’s response is bound to be culturally driven. Therefore, artificial intelligence regimes need to resolve such complex social dissonance before reaching conclusions, because failing to do so could lead to social unrest.

However, existing norms in society are subject to alteration in near real-time. With globalisation and liberalisation, what may have been historically immoral can become moral, or vice versa, through cross-cultural influence and interconnected information networks. So the algorithm, which until now was presumed to look only at regional factors, is now also expected to analyse the global scenario before arriving at any outcome. Therefore, it is critical to rely on sources of information that do not cause discomfort to either the regional or the global community.

Since the source of information could be anything, from social media to scientific research and credible state instruments, it is important that the sources are not a mere reflection of majoritarianism. Such sources could wholesale discard the moral principles of sections of society. The responsibility would then fall on the creator of the AI to ensure that the algorithm is able to differentiate between sources of varying veracity. A review mechanism must also be established so that societal decisions are not left at the mercy of the creator. The creative autonomy of artificial intelligence has to “emerge out of interactions with multiple critics and creators, not from solitary confinement.”
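The concern about majoritarian sources can be made concrete with a small, hypothetical sketch. All the claims, volumes, and credibility weights below are invented for illustration; the sketch shows how a volume-only tally lets the loudest view win, while weighting by credibility (as assigned through some external review mechanism) can temper quantity with quality.

```python
# Hypothetical sources: (claim, volume of items, credibility weight).
# The weights would in practice come from a review mechanism, not
# from the AI's creator alone.
sources = [
    ("norm_X_is_acceptable", 1000, 0.1),  # viral social media posts
    ("norm_X_is_harmful",      50, 0.9),  # peer-reviewed research
    ("norm_X_is_harmful",     200, 0.8),  # credible state report
]

def majority_view(sources):
    # Pure volume: the loudest claim wins (majoritarianism).
    tally = {}
    for claim, volume, _ in sources:
        tally[claim] = tally.get(claim, 0) + volume
    return max(tally, key=tally.get)

def weighted_view(sources):
    # Volume discounted by credibility: quality tempers quantity.
    tally = {}
    for claim, volume, weight in sources:
        tally[claim] = tally.get(claim, 0) + volume * weight
    return max(tally, key=tally.get)

print(majority_view(sources))  # norm_X_is_acceptable
print(weighted_view(sources))  # norm_X_is_harmful
```

The design choice, of course, merely relocates the problem: whoever assigns the weights holds the power, which is why the paragraph above insists on a review mechanism rather than the creator’s sole discretion.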

A gradual shift from misplaced norms

It is arguably true that preferring certain sources over others will cause discomfort in society. However, as discussed earlier, it is unclear what this discomfort would entail. Would it lead to wide-scale protest? Submerge communitarian identity? Or simply cause hesitation about existing norms of morality? Quite likely, individuals in society would express their discomfort differently and variably.

Possibly, the continuous transformation of existing (misplaced) norms due to globalisation will harmonise with the cultural and moral shift that the artificial intelligence regime would bring through its global influence. In simpler terms, artificial intelligence, through its decision making and interactions with varied aspects of human society, would gradually change regional norms so that they align with universal principles of morality and human rights.

The idea is not a radical and forced shift to suppress misplaced norms, but to dissolve the social dissonance consensually and harmoniously through the gradual intake of new information that is universally sound.


Conclusion

The hypothesis that an artificial intelligence regime is only possible in the absence of social dissonance remains unfounded. Since the meaning of discomfort in cognitive dissonance theory is not well established, there is scope for taking decisions that run against existing misplaced norms. In fact, the global influence over cultural contexts exerted through artificial intelligence would promote a human system in which misplaced norms gradually shift to support the fundamental principles of human existence. Therefore, an artificial intelligence regime can not only thrive during a period of social dissonance but would also gradually dissolve the misplaced norms.

The author of this article is Milind Yadav, a penultimate year law student at Jindal Global Law School.
