Author: Ajit Jaokar
AI bias is in the news, and it's a hard problem to solve.
But what about the other way round?
When AI engages with humans, how does AI know what humans really mean?
In other words, why is it hard for AI to detect human bias?
It is hard because humans often do not say what they really mean, due to factors such as cognitive dissonance.
Cognitive dissonance refers to a situation involving conflicting attitudes, beliefs, or behaviours. The conflict produces a feeling of mental discomfort, which leads to an alteration in one of the attitudes, beliefs, or behaviours to reduce the discomfort and restore balance. For example, when people smoke (behaviour) while knowing that smoking causes cancer (cognition), they are in a state of cognitive dissonance.
From an AI/deep learning standpoint, this means we are trying to use deep learning to find hidden rules where none may exist.
In the future, the same problem may arise when we try to explain our own biases to AI.
In a previous blog, 'AI and algorithmocracy – what the future will look like', I discussed why it would be so hard to explain religion to AI.
All religion is inherently faith based. An acceptance of faith implies a suspension of reason. From an AI perspective, religion hence does not 'compute'. Religion is a human choice (a bias). But if AI rejects that bias, then AI risks alienating vast swathes of humanity.
The next time we talk of AI bias, let's spare a thought for the poor AI that has to work with the biggest 'black box' system of all: the human mind.
But all is not lost for AI.
Affective computing (sometimes called artificial emotional intelligence, or emotion AI) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects (emotions). It is an interdisciplinary field spanning computer science, psychology, and cognitive science. The modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. The difference between sentiment analysis and affective analysis is that the latter detects distinct emotions instead of identifying only the polarity of a phrase. (Adapted from Wikipedia.)
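To make that last distinction concrete, here is a minimal sketch in Python contrasting the two kinds of output. The tiny word lists and the two classify functions are purely illustrative toys, not a real library API:

```python
# Illustrative sketch: sentiment analysis returns a single polarity score,
# while affective analysis returns a distribution over discrete emotions.

def classify_sentiment(text: str) -> float:
    """Toy polarity scorer: +1 positive, -1 negative (illustrative only)."""
    positive = {"love", "great", "happy"}
    negative = {"hate", "awful", "sad"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def classify_emotions(text: str) -> dict:
    """Toy emotion detector over a small set of discrete emotions."""
    lexicon = {
        "joy": {"love", "great", "happy"},
        "sadness": {"sad", "awful"},
        "anger": {"hate", "furious"},
    }
    words = set(text.lower().split())
    hits = {emotion: len(words & cues) for emotion, cues in lexicon.items()}
    total = sum(hits.values()) or 1
    return {emotion: count / total for emotion, count in hits.items()}

text = "I hate waiting but I love the result"
print(classify_sentiment(text))  # 0.0 - the positive and negative words cancel out
print(classify_emotions(text))   # {'joy': 0.5, 'sadness': 0.0, 'anger': 0.5}
```

Note how the mixed sentence washes out to a neutral polarity score, while the emotion distribution still shows that both joy and anger are present.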
Within affective computing, facial emotion recognition is an important topic in the fields of computer vision and artificial intelligence. Facial expressions are one of the main information channels for interpersonal communication. Verbal components convey only about one-third of human communication, and hence nonverbal components such as facial expressions are important for the recognition of emotion.
Facial emotion recognition builds on the fact that humans display subtle but noticeable changes in the face, such as shifts in skin color caused by blood flow, in order to communicate how they are feeling. Darwin was the first to suggest that facial expressions are universal, and other studies have shown that the same applies to communication in primates.
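As a concrete illustration, here is a minimal sketch of the front end of such a pipeline in Python, using OpenCV's bundled Haar cascade for face detection. The predict_emotion function is a hypothetical placeholder standing in for a trained classifier, not a real library call:

```python
# Minimal face-detection front end for an emotion-recognition pipeline,
# using OpenCV's bundled Haar cascade. The emotion classifier itself is
# a hypothetical placeholder (predict_emotion) standing in for a trained model.
import cv2

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

def predict_emotion(face_pixels):
    """Placeholder: a real system would run a trained CNN on the face crop."""
    return "neutral"

def detect_emotions(image_path: str):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    results = []
    # Detect faces, then classify each cropped face region.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = gray[y:y + h, x:x + w]
        results.append(((x, y, w, h), predict_emotion(face)))
    return results
```

A production system would replace the placeholder with a model trained on a labelled facial expression dataset.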
Independent of AI, there has been substantial work on facial emotion recognition. Plutchik's wheel of emotions illustrates the various relationships among the emotions, and Ekman and Friesen pioneered the study of emotions and their relation to facial expressions.
AI's ability to detect emotion from facial expressions better than humans lies in its handling of microexpressions. Macroexpressions last between 0.5 and 4 seconds and are therefore easy to see. In contrast, microexpressions last as little as 1/30 of a second, and AI is better at detecting them than humans are. Haggard and Isaacs (1966) verified the existence of microexpressions while scanning films of psychotherapy sessions in slow motion; microexpressions occurred when individuals attempted to be deceitful about their emotional expressions.
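These timings translate directly into camera requirements, as a quick back-of-the-envelope sketch shows (the durations are the ones quoted above; the frame rates are illustrative choices):

```python
# How many video frames does an expression span at a given frame rate?
# Durations come from the text above; frame rates are illustrative.

MICRO_DURATION = 1 / 30  # seconds: a microexpression can be this short
MACRO_DURATION = 0.5     # seconds: lower bound for a macroexpression

for fps in (24, 30, 60, 120):
    micro_frames = MICRO_DURATION * fps
    macro_frames = MACRO_DURATION * fps
    print(f"{fps:>3} fps: microexpression ~{micro_frames:.1f} frames, "
          f"macroexpression >= {macro_frames:.0f} frames")

# At 24 fps the shortest microexpression spans less than a single frame
# and can be missed entirely; at 120 fps it spans ~4 frames, giving a
# model something to work with.
```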
Thus, while AI currently lacks the ability to detect human bias arising from cognitive dissonance and similar factors, it does have the ability to detect microexpressions. Over time, this ability could help overcome AI's limitations in understanding human behaviour such as cognitive dissonance.
Comments welcome
Image source: Emotion Research Labs