AP: Recognising emotions using AI is “questionable and risky”
Organisations are increasingly using artificial intelligence (AI) to recognise emotions in people. However, emotion recognition is based on controversial assumptions about emotions and their measurability. Using it nonetheless entails risks and ethical issues. This is the conclusion of the Autoriteit Persoonsgegevens (AP) in its latest “AI & Algorithmic Risks Report Netherlands” (RAN).
Your voice, analysed to determine “your emotional state” during a call with customer service. Your smartwatch, measuring whether you are stressed. Or a chatbot that recognises your emotions so it can respond more empathetically. More and more organisations are using AI-based emotion recognition because they believe it will help them improve their products and services – in marketing or customer contact, for example, but also in public spaces and in healthcare.
AP: “Be very cautious with these types of applications”
The AP examined the use of AI-based emotion recognition by customer services, in wearables (such as smartwatches) and in language models. It found that it is not always clear how these AI systems recognise emotions, or whether the results are reliable. Despite the growth of these applications, people are not always aware that emotion recognition is being used, nor do they know what data is used to do so.
The AP concludes that great caution should be exercised with these types of applications. Otherwise, there is a risk of discrimination and of restrictions on human autonomy and dignity.
“Emotions are strongly connected to your human autonomy. If you want to identify emotions, this should be done very carefully, using reliable technology. This is often not the case now”, says Aleid Wolfsen, Chair of the AP.
Emotional expressions are not universal and measurability is controversial
Many AI systems that claim to be able to recognise emotions are built upon controversial assumptions. As a result, biometric characteristics – such as voice, facial expression or heart rate – are translated into emotions in a simplistic manner.
“The idea that everyone experiences an emotion in the same way is incorrect. Let alone that those emotions can be measured using biometrics”, says Wolfsen. “There can be major differences in how people from different cultures experience, express and refer to emotions. There are also differences between individuals, for example because of age. Furthermore, emotions cannot always be interpreted in the same way. After all, a high heart rate is not always a sign of fear, and a loud voice does not always express anger.”
Ethical issue
Various applications of emotion recognition will soon be subject to specific AI regulations and must already comply with privacy legislation, such as the General Data Protection Regulation (GDPR). In education and in the workplace, the use of AI systems for emotion recognition is already prohibited under the European AI Act.
Whether this technology is even desirable is another question. “Whether society considers the recognition of emotions through AI acceptable is an ethical issue”, says Wolfsen. “This requires a societal and democratic assessment: do you want to use these systems and, if so, in what form and for what purpose?”
