
AI is increasingly capable of recognizing human emotions, and this development will have a profound impact on life and work.
Since the introduction of ChatGPT in late 2022, the cybersecurity landscape has witnessed a dramatic change. Boosted by the massive capabilities of generative artificial intelligence (GenAI) technology, many tasks that were once limited to humans can now be executed by AI-powered machines with greater accuracy and cost-effectiveness.
Opponents of AI technology argue that a significant shortcoming of AI solutions is their inability to understand human emotions, an essential skill that, they contend, only humans possess and machines cannot fully replicate.
However, recent advancements in emotional AI are beginning to challenge this thinking. While AI may not express emotions as humans do, it is increasingly capable of recognizing and responding to human emotions. This development will profoundly impact different aspects of life and work, including cybersecurity.
Emotional intelligence explained
Emotional intelligence (EI) refers to the ability to understand and manage our own emotions, as well as to recognize and influence the feelings of others. For example, suppose someone with a high level of EI is on a phone call. This person would likely pick up on unspoken cues in the other person's tone of voice and word choices and respond appropriately.
EI allows people to build stronger relationships, manage their stress effectively, handle critical incidents calmly, resolve conflicts with others constructively (both at work and with friends), and even anticipate future reactions in various situations. EI has become a key factor in personal and professional success, enabling individuals to understand social complexities more efficiently and make more informed decisions.
EI comprises five key elements:
- Self-awareness: Recognize how your emotions impact your decisions and interactions with others. Self-awareness is a critical skill for cybersecurity professionals, for example, because it helps them remain calm during security incidents and interact constructively with colleagues.
- Self-regulation: Learn how to identify and manage your emotions. For example, recognize your anger, sadness, and joy and control them in a healthy way.
- Motivation: Learn how to harness your emotions to achieve your personal and professional goals. For example, security professionals may have entered the field for various reasons, such as financial incentives, a passion for protecting sensitive data, a love of technology, preventing cybercrime, or safeguarding the digital world.
- Empathy: Understanding the emotions of others allows you to create a supportive work environment that fosters solid relationships and enhances collaboration among all employees.
- Social skills: Your ability to communicate effectively with people from diverse backgrounds and environments is a critical skill everyone should develop. In cybersecurity, organizations increasingly outsource work to other companies, so communicating efficiently with employees located in different countries has become essential, and strong social skills make that possible.
Now that we understand EI and its main elements, let us discuss how AI is reshaping this field.
Emotional Intelligence in AI
EI in AI refers to the ability of AI systems to understand and interpret human emotions. Since its invention, AI technology has faced a significant obstacle in understanding human emotions. Interpreting human emotions is crucial to leveraging AI in different areas, such as customer support (e.g., AI-powered chatbots), marketing, banking, and healthcare applications.
How do AI solutions detect people's emotions?
AI systems use three main methods to understand human emotions:
Facial recognition. The AI solution uses pre-trained machine learning (ML) models to scan human faces and infer emotions such as happiness, sadness, anger, or surprise. The system is trained to read visual cues, including mouth shape, eye movements, and facial expression lines, to predict a person's current emotional state. Some AI systems are even trained to infer emotions from physical movements and walking patterns.
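Real systems use trained neural networks over face images, but the underlying geometric intuition can be sketched with facial landmarks. The following toy example, with entirely hypothetical (x, y) pixel coordinates for the mouth, is not a real emotion model, only an illustration of reading expression from landmark geometry:

```python
# Toy illustration: classify an expression from mouth-landmark geometry.
# All landmark values are hypothetical (x, y) pixel coordinates;
# note that y grows downward in image coordinates.

def mouth_curvature(left_corner, right_corner, center):
    """Positive when the mouth corners sit above the center (a smile),
    negative when they sit below it (a frown)."""
    avg_corner_y = (left_corner[1] + right_corner[1]) / 2
    return center[1] - avg_corner_y

def classify_expression(left_corner, right_corner, center, threshold=2.0):
    c = mouth_curvature(left_corner, right_corner, center)
    if c > threshold:
        return "happy"
    if c < -threshold:
        return "sad"
    return "neutral"

# Corners above the mouth center suggest a smile
print(classify_expression((30, 50), (70, 50), (50, 56)))  # happy
# Corners below the mouth center suggest a frown
print(classify_expression((30, 60), (70, 60), (50, 54)))  # sad
```

A production pipeline would first detect the face and extract dozens of landmarks automatically, then feed them (or the raw pixels) to a trained classifier.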
Voice recognition. The AI interprets a person's voice to understand their emotions using pre-trained ML models and natural language processing (NLP) methods. It analyzes keywords, tone, pitch, speaking speed, and vocal intensity, among other indicators, to determine the user's emotional state.
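The kinds of low-level acoustic signals such systems start from can be sketched with two classic features: signal energy (a loudness proxy) and zero-crossing rate (a rough pitch proxy). The synthetic waveforms below are assumptions for illustration; real systems use richer spectral features and trained models:

```python
import math

# Toy acoustic feature extraction on synthetic sine waves.
# Energy approximates loudness; zero-crossing rate roughly tracks pitch.

def features(samples):
    energy = sum(s * s for s in samples) / len(samples)
    zero_crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    zcr = zero_crossings / len(samples)
    return energy, zcr

rate = 8000  # samples per second
# Hypothetical "calm" voice: quiet, low-pitched (120 Hz)
calm = [0.2 * math.sin(2 * math.pi * 120 * t / rate) for t in range(rate)]
# Hypothetical "agitated" voice: loud, higher-pitched (300 Hz)
agitated = [0.8 * math.sin(2 * math.pi * 300 * t / rate) for t in range(rate)]

e1, z1 = features(calm)
e2, z2 = features(agitated)
print(e2 > e1, z2 > z1)  # the "agitated" signal is louder and higher-pitched
```

An emotion classifier would combine many such features over time rather than thresholding any single one.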
Text analysis. In text recognition, the AI system scans written text to understand the writer's emotions. This method, often called sentiment analysis, works by evaluating the overall emotional tone of a piece of text.
Sentiment analysis is a robust technique utilized by AI systems to understand the emotional tone behind the text. It works by examining text for clues that reveal a writer's emotions or opinions. For instance, it searches for the following:
Word choice. Words like love, hate, confidence, anger, joy, and fear give strong hints about sentiment.
Tone. The overall writing tone, whether cheerful, angry, neutral, optimistic, pessimistic, informative, or objective, can help determine the writer's emotions, intentions, and the overall message they are trying to convey.
Context. The bigger picture or environment that gives meaning to the words or phrases. For example, consider the following sentence: "The ball is in the court."
Linguistic context: The word "ball" may refer to the traditional ball used in sports, such as football, basketball, or gambling games. The word "court" could refer to a legal courtroom, a sports arena, or a royal residence.
Situational context: If the sentence is used in the context of a tennis match, then the AI technology would suggest that the words "ball" and "court" refer to the tennis ball and the tennis court. Otherwise, the entire meaning will become different.
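The word-choice, tone, and context clues above can be sketched as a minimal lexicon-based sentiment scorer. Production systems use trained models (or libraries such as NLTK's VADER); the word lists here are hypothetical and deliberately tiny:

```python
# Minimal lexicon-based sentiment analysis sketch.
# The word lists are hypothetical; real lexicons contain thousands of entries.

POSITIVE = {"love", "joy", "confidence", "great", "happy"}
NEGATIVE = {"hate", "anger", "fear", "terrible", "sad"}
NEGATIONS = {"not", "never", "no"}

def sentiment(text):
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = 0
    for i, word in enumerate(words):
        hit = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        # crude context handling: a preceding negation flips polarity
        if hit and i > 0 and words[i - 1] in NEGATIONS:
            hit = -hit
        score += hit
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product"))         # positive
print(sentiment("I do not love this product"))  # negative
```

The negation check is the simplest possible form of context handling; modern sentiment models learn such patterns from data instead of hand-written rules.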
AI systems can perform sentiment analysis on various data sources:
- Social media posts: Analyzing Facebook posts to understand people's opinions about a new product or social case.
- Customer reviews: Understand a customer's opinions about a specific product or service.
- Voice calls: Detecting customer emotions (such as frustration or satisfaction) when talking with customer support staff via phone or internet messaging.
There are numerous tools (many of them free) to perform sentiment analysis on text.
- Free Sentiment Analyzer from danielsoper.com
- Twitter Sentiment Visualization – A project for visualizing the sentiment of tweets posted on Twitter

As with everything in technology, there are risks associated with leveraging EI in AI systems. Here are the most prominent ones:
Challenge #1: Privacy risks
Utilizing EI features in AI requires collecting personal information. While this information may not directly identify each person, combining it with other data points can reveal extensive information about each user, which is very dangerous if that information falls into the wrong hands.
For instance, an AI system that incorporates EI features might analyze the following:
- Users' facial expressions during video calls
- Voice patterns (tone, word choice, accent) in phone conversations
- Text messages, social media posts and email correspondence
- Biological indicators such as heart rate and other biometric data from medical wearable devices
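One common mitigation is to pseudonymize user identifiers before storing emotion-analysis results, so a leaked record cannot be linked directly to a person. The sketch below uses a salted hash; the salt value and record fields are assumptions, and a real deployment would also need secret-salt rotation and proper key management:

```python
import hashlib

# Hypothetical mitigation sketch: replace raw identifiers with a salted
# hash before storing emotion-analysis results.

SALT = b"replace-with-a-secret-salt"  # assumption: kept outside the dataset

def pseudonymize(user_id: str) -> str:
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

record = {
    "user": pseudonymize("alice@example.com"),  # hypothetical user
    "detected_emotion": "frustration",
    "channel": "voice_call",
}
print(record["user"] != "alice@example.com")  # True: no raw identifier stored
```

Pseudonymization reduces, but does not eliminate, re-identification risk: combining enough data points can still single a person out, which is exactly the danger described above.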
Challenge #2: Bias and discrimination risks
AI systems are powered by ML models trained on massive datasets acquired from various sources: the internet, customer feedback and interactions, and specialized databases.
These models face risks in cases such as the following:
- Threat actors may try to poison the data used to train these models by injecting misleading data, causing the AI system to produce inaccurate or wrong outputs. This type of attack is known as data poisoning.
- ML models used to power EI systems may train on data acquired from one culture, making it challenging to understand users from different cultures. This cultural bias can lead to misinterpretations of emotional signs and may produce inappropriate responses when interacting with users from a culture that is not included in the training dataset.
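The data poisoning attack described above can be demonstrated on a toy scale: flipping labels in the training set shifts a nearest-centroid classifier's decision. The single "positivity score" feature and all data points below are hypothetical:

```python
# Toy data poisoning demonstration with a nearest-centroid classifier.
# Each sample is (positivity_score, label); scores are hypothetical.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    pos = [s for s, lbl in samples if lbl == "positive"]
    neg = [s for s, lbl in samples if lbl == "negative"]
    return centroid(pos), centroid(neg)

def predict(model, score):
    pos_c, neg_c = model
    return "positive" if abs(score - pos_c) < abs(score - neg_c) else "negative"

clean = [(0.9, "positive"), (0.8, "positive"), (0.1, "negative"), (0.2, "negative")]
# Attacker injects mislabeled points: high scores falsely marked "negative"
poisoned = clean + [(0.95, "negative"), (0.9, "negative"), (0.85, "negative")]

print(predict(train(clean), 0.7))     # positive
print(predict(train(poisoned), 0.7))  # negative after poisoning
```

A handful of injected points was enough to move the "negative" centroid and flip the prediction, which is why training-data provenance and validation matter for EI systems.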
More broadly, an AI-powered system with EI capabilities will typically learn from human-generated data (e.g., people's visual and voice interactions with AI systems), which often contains inherent biases. For example:
- An AI-powered recruiting system might favor candidates who express emotions in ways typical of one culture while rejecting those from different backgrounds.
- Voice-based customer service AI tools could misinterpret emotional cues from people with particular accents (e.g., non-native speakers) or from those with speech impediments, which could eventually lead to poor service.
Challenge #3: Cultural differences
Human emotions and their expressions are heavily influenced by culture. For instance, what is considered a sign of happiness in one culture may indicate sadness in another. This cultural variance also applies to text, where some cultures have specific uses of language that differ from regular meaning.
Aside from these cultural differences, individuals within the same culture may express their feelings differently. Some people are naturally more expressive, while others are more reserved. This further complicates the process of interpreting emotions in AI systems.
Challenge #4: Context understanding
AI struggles to understand language context. For example, a user may phrase a request to a chatbot with humor or irony, which the chatbot cannot easily interpret without knowing the surrounding context.
Similarly, certain sentences, words, or phrases can have multiple meanings within the same culture. For instance, I asked Google Gemini to provide different interpretations for the sentence: The bank is closed.
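The ambiguity in a sentence like "The bank is closed" can be illustrated with a toy word-sense disambiguation sketch that scores each candidate sense of "bank" by its overlap with the surrounding words. The sense definitions and cue words are hypothetical:

```python
# Toy word-sense disambiguation: pick the sense of "bank" whose cue
# words best overlap the surrounding context. Cue lists are hypothetical.

SENSES = {
    "financial institution": {"money", "account", "loan", "deposit", "teller"},
    "river bank": {"river", "water", "fishing", "shore", "boat"},
}

def disambiguate(context: str) -> str:
    words = set(context.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("the bank is closed so I cannot deposit money"))
# financial institution
print(disambiguate("we sat by the bank watching the river"))
# river bank
```

Modern language models learn such contextual cues from data rather than hand-built sense lists, but they can still fail when the context is sparse or ironic.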
