The relevance of AI agents: What you should know


AI is constantly evolving, and it can be difficult to keep up with all the changes happening in this landscape. One change in particular, however, is getting a lot of attention and is hard to miss: the shift from traditional AI models to more autonomous systems called AI agents, which can carry out independent actions.

As AI agents gradually step into our daily lives, it’s important to stay informed and aware of the implications. In this article, I delve into AI agents in detail, including their types, key characteristics, capabilities, and related legal considerations.

What is an AI agent?


An AI agent is a system that performs actions autonomously by interacting with its environment. An AI agent receives an input, which it processes and then takes a specific action to complete a goal. Compared to a traditional AI model, it’s capable of adapting to a changing environment and making a decision based on new information received from the changes in its environment.
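
As a rough illustration of that input-process-act cycle, here is a minimal sketch in Python. The random “sensor”, the numeric goal, and the two actions are invented for the example; a real agent would consume text, images, sounds, or API responses:

```python
# Minimal perceive -> process -> act loop (illustrative sketch, not a framework).
import random

def perceive() -> float:
    """Stand-in for a sensor reading; real inputs could be text, images, or sound."""
    return random.uniform(0.0, 100.0)

def decide(observation: float, goal: float) -> str:
    """Choose an action from the latest observation rather than a fixed script."""
    return "increase" if observation < goal else "decrease"

def act(action: str) -> None:
    print(f"taking action: {action}")

GOAL = 50.0
for _ in range(3):           # each pass is one perceive/process/act cycle
    obs = perceive()         # new information from a changing environment
    act(decide(obs, GOAL))   # the decision adapts to what was just observed
```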

The evolution of traditional software into more advanced systems has happened in waves. The first wave started back in the 1950s with rule-based systems, which kept evolving into more sophisticated models. The second wave, lasting from the 1990s to the 2010s, introduced machine learning. In the 2020s, machine learning fueled the third wave of AI evolution: AI agents capable of reasoning, learning, and independent action.

Key characteristics of AI agents

AI agents are more sophisticated than traditional AI software and have some key characteristics that set them apart:

  • Autonomy. AI agents are able to operate and make decisions independently with the aim of achieving a goal.
  • Reactivity. AI agents can assess their environment and act on the gathered information to achieve their goals.
  • Reasoning and decision-making. AI agents employ cognitive processes that combine logic with available information. Reasoning permits them to analyze data and identify patterns so they can make decisions based on factual information and context.
  • Learning. AI agents are capable of learning and enhancing their performance via machine, deep, and reinforcement learning methods.

How do AI agents differ from traditional software programs?

The main difference between AI agents and traditional software programs is autonomy. Traditional software needs predefined rules and specific instructions to execute commands. An AI agent, on the other hand, acts autonomously based on real-time data.

Let’s have a closer look at the AI agent vs traditional software comparison in the table below:

                              AI agents                                           Traditional software
Autonomy and decision-making  Analyzes real-time data and environmental inputs    Works based on if-then logic
Learning and adaptation       Continuously improves via machine learning          Static unless manually updated
Architectural designs         Built on natural language processing (NLP),         Relies on rigid UIs, like
                              permitting conversational dialogue                  buttons and forms

The way traditional software functions is great for simple, repetitive tasks that follow predefined rules and scripts. An AI agent, by contrast, is built to adapt to dynamic environments, taking into account the context and information relevant to a particular case.

What are the main types of AI agents?

There are five types of artificial intelligence agents:

  1. Simple reflex agents. These agents respond to direct environmental stimuli based on predefined condition-action rules. A few real-life examples of simple reflex agents are automatic doors, thermostats, and basic spam filters (see the sketch after this list).
  2. Model-based reflex agents. These are more complex agents that maintain an internal model of their environment, which lets them remember past states and anticipate future conditions as new data arrives. An example of a model-based reflex agent in action is an autonomous vehicle.
  3. Goal-based agents. AI agents that consider future consequences and plan actions accordingly in order to achieve specific objectives. A great example is a robot vacuum cleaner – its goal is to clean accessible floor space and navigate obstacles.
  4. Utility-based agents. This type of AI agent is capable of complex decision-making with multiple potential outcomes. It processes large amounts of data, mapping out possible options for the most preferable decision. Utility-based agents are great for high-stakes decision-making, such as financial trading.
  5. Learning agents. This is the only type of agent that is able to adapt and improve over time. Learning agents can change their behavior and strategies based on the changing environment. For example, customer service chatbots can improve their response accuracy over time.
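
To make the first two types concrete, here is a minimal sketch in Python contrasting a stateless condition-action rule with an agent that keeps an internal model. The spam keywords, sender history, and two-strike threshold are all invented for illustration, not how any real filter works:

```python
# Simple reflex agent: a stateless condition-action rule (illustrative).
SPAM_WORDS = {"prize", "winner", "jackpot"}  # made-up keyword list

def simple_reflex_filter(message: str) -> str:
    # Reacts only to the current stimulus; keeps no memory of past messages.
    return "spam" if any(w in message.lower() for w in SPAM_WORDS) else "inbox"

# Model-based reflex agent: also keeps internal state about past percepts.
class ModelBasedFilter:
    def __init__(self) -> None:
        self.strikes: dict[str, int] = {}  # internal model: spam counts per sender

    def classify(self, sender: str, message: str) -> str:
        # First update the internal model with the new percept...
        if simple_reflex_filter(message) == "spam":
            self.strikes[sender] = self.strikes.get(sender, 0) + 1
        # ...then decide using both the stimulus and the remembered state.
        if self.strikes.get(sender, 0) >= 2:
            return "spam"  # repeat offenders are flagged even without keywords
        return simple_reflex_filter(message)

agent = ModelBasedFilter()
print(agent.classify("a@example.com", "You are a winner!"))  # spam (rule fires)
print(agent.classify("a@example.com", "Claim your prize"))   # spam (rule fires)
print(agent.classify("a@example.com", "Hello again"))        # spam (history-based)
```
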
[Image: the five types of AI agents. Credit: OpenAI]

How do AI agents perceive and interact with their environment?

For an AI agent to perceive its environment and interact with it, it needs input data, which can take different forms: text, images, or sounds. After collecting the data, the agent processes it using algorithms, commonly including machine learning, to understand the current state of its environment. Having assessed that state, the agent can make decisions or take action.

Let’s take a robot vacuum cleaner as an example. To perceive its environment, the vacuum uses infrared sensors to scan its surroundings to detect the walls and any obstacles within its environment. Then it creates a map of the room’s layout. When the vacuum is cleaning the room and senses a wall or an obstacle, it changes direction to avoid collision, interacting with its environment in real time.
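
Here is a toy version of that sense-and-steer loop on a grid. The map, the single “sensor” check, and the turn-right policy are simplifications invented for illustration; real robot vacuums fuse several sensors and build much richer maps:

```python
# Toy sense -> steer loop for a robot vacuum on a grid (illustrative).
GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]  # '#' = wall or obstacle, '.' = open floor

DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up

def blocked(row: int, col: int) -> bool:
    """Stand-in for an infrared sensor: is the next cell an obstacle?"""
    return GRID[row][col] == "#"

row, col, heading = 1, 1, 0
for _ in range(10):
    dr, dc = DIRECTIONS[heading]
    if blocked(row + dr, col + dc):
        heading = (heading + 1) % 4      # obstacle sensed: change direction
    else:
        row, col = row + dr, col + dc    # path is clear: keep moving
    print(f"vacuum at cell ({row}, {col})")
```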

Autonomy and decision-making in AI agents


Autonomy in AI agents refers to their ability to act independently with minimal human intervention and supervision. The agents make decisions by employing several key techniques: data analytics, machine learning algorithms, and pattern recognition.

AI agents’ ability to process large amounts of data, combine machine learning with data analytics, create novel solutions, and make forecasts turns them into good decision-makers. In essence, the decision-making process in AI agents works similarly to that of humans. After all, that’s where the inspiration came from: like humans, AI agents use neural networks to learn, solve problems, and continuously improve.

What is a neural network?

A neural network is a machine learning method inspired by how the human brain works. As the name suggests, it uses a layered structure of interconnected nodes, or artificial neurons, resembling the brain’s wiring. Artificial neural networks can solve complex problems with great accuracy thanks to their ability to learn from mistakes and constantly improve.
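
To make the “layered structure that learns from its mistakes” concrete, here is a minimal two-layer network trained on the classic XOR problem using plain NumPy. The layer sizes, learning rate, and step count are arbitrary choices for the sketch:

```python
import numpy as np

# A tiny two-layer neural network learning XOR (illustrative sketch).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input layer -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)   # forward pass through the layers
    out = sigmoid(hidden @ W2 + b2)
    error = out - y                 # the "mistake" the network learns from
    # Backpropagation: push the error back and nudge every weight slightly.
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

print(out.round(2))  # moves toward [[0], [1], [1], [0]] as training proceeds
```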

How do AI agents learn and adapt to new information?

AI agents learn new information from pre-trained models, such as large language models (LLMs). These models encompass vast amounts of data that power AI’s capabilities in natural language processing (NLP) and natural language generation (NLG). As a result, AI agents can use natural language and adapt their communication style to the situation. For example, chatbots used in customer service are interaction-based AI agents capable of assisting customers while adapting to their communication style.

Depending on a specific AI agent’s goal, it can be adapted to a task using one of the common LLM prompting approaches, each illustrated in the sketch after this list:

  • Zero-shot learning. The model is prompted with the task instruction alone, without any examples. This approach is particularly useful when no task-specific data is available beyond the instruction itself.
  • One-shot learning. The model sees a single example before making a prediction, so one worked example is added to the task description to serve as context.
  • Few-shot learning. A few examples are included in the prompt to help the model understand how the task should be solved. Compared to fine-tuning, the amount of task-specific examples required is significantly lower, so this approach suits tasks with smaller data sets.
  • Chain-of-thought (CoT) prompting. Reasoning tasks remain a challenge even for state-of-the-art models. Because tasks such as arithmetic reasoning problems must be solved through intermediate steps in a particular order, CoT prompts the model to spell out that rationale before producing a final answer.
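
The sketch below shows how these prompt styles differ in practice. The `complete()` function is a hypothetical placeholder for whatever LLM API is in use, and the example tasks are invented:

```python
# Building zero-shot, few-shot, and chain-of-thought prompts (illustrative).

def complete(prompt: str) -> str:
    # Hypothetical stand-in: plug in a real LLM provider's client here.
    raise NotImplementedError

# Zero-shot: the task instruction alone, with no examples.
zero_shot = "Classify the sentiment as positive or negative: 'The battery died in an hour.'"

# Few-shot: a handful of worked examples give the model context.
few_shot = (
    "Classify the sentiment as positive or negative.\n"
    "Review: 'Great screen, love it.' -> positive\n"
    "Review: 'Arrived broken.' -> negative\n"
    "Review: 'The battery died in an hour.' ->"
)

# Chain-of-thought: ask for intermediate reasoning steps before the answer.
chain_of_thought = (
    "A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "Think step by step, then state the final answer."
)

for prompt in (zero_shot, few_shot, chain_of_thought):
    print(prompt, end="\n---\n")  # swap print for complete(prompt) with a real API
```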

LLM challenges

Researchers point out that LLMs don’t come without challenges. On several occasions, language models have been found to generate biased outputs and misinformation that can be used with malicious intent. Since LLMs absorb all the information available in their training text, including toxicity and bias, they’re likely to replicate such content in their outputs.

Here are the most common LLM challenges and ways to overcome them:

  • Toxic content. LLMs can generate toxic language, such as hate speech, insults, threats, and profanities. Toxicity can be reduced by removing toxic content from training data, regular testing with specific prompts, and human moderation (see the sketch after this list).
  • Hallucinations. Generating fake or incorrect information is called LLM hallucination. It can be controlled by using high-quality training data and defining systems’ responsibilities and limitations.
  • Biases. LLMs can show bias regarding gender, age, and race. For example, a training data set can contain unbalanced representations of some groups of people, leading the model to favor one group over another in its responses. Dealing with bias in LLMs requires a multilayered approach, including technical, regulatory, and ethical measures.
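
As a deliberately simplified illustration of the first mitigation, removing toxic content from training data, here is a keyword-based filter. The blocklist terms are placeholders; real pipelines rely on trained toxicity classifiers and human review rather than a hand-made word list:

```python
# Naive training-data filter (illustrative only; production systems use
# trained toxicity classifiers plus human moderation, not a fixed blocklist).
BLOCKLIST = {"insult_a", "slur_b", "threat_c"}  # placeholder terms

def is_clean(document: str) -> bool:
    # Keep a document only if none of its words appear on the blocklist.
    return BLOCKLIST.isdisjoint(document.lower().split())

corpus = [
    "A friendly product review about headphones.",
    "Some text containing slur_b that should be dropped.",
]
training_data = [doc for doc in corpus if is_clean(doc)]
print(f"kept {len(training_data)} of {len(corpus)} documents")
```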

What are the legal considerations associated with AI agents?

AI agents are programmed to carry out independent actions without human intervention. Because of this high autonomy, it can become unclear who is accountable when things go wrong. Legal systems are therefore working on frameworks that define liability for AI-related incidents, although the law on such occurrences remains limited.

Currently, in the US, court cases addressing harm caused by AI are assessed under tort law. Each case is handled individually, with the aim of developing precedents that can guide future decisions. While it’s a step forward, many unresolved issues remain. For example, to establish standing, plaintiffs might struggle to prove that they were harmed. Plaintiffs might also bring cases on the grounds that products involving AI are defective; however, it’s not yet clear whether courts will treat AI as a product.

Meanwhile, the EU has introduced the first comprehensive legal framework on AI: the AI Act, signed on June 13th, 2024. The AI Act addresses the risks associated with AI systems, making the EU a global leader on the matter. Its goal is to reduce those risks and protect people who have been unfairly disadvantaged by AI.

The AI Act is based on a 4-level risk-based approach:

[Image: the four risk levels of the AI Act. Credit: OpenAI]
  1. Unacceptable risk. This refers to prohibited practices that are a clear threat to safety, livelihoods, and rights of people.
  2. High risk. This involves cases when AI can pose serious risks to one’s safety, health, and fundamental rights.
  3. Limited risk. This refers to risks that are related to the need for transparency when using AI. The AI Act advocates for the right of human users to be informed whenever they are interacting with a machine, which is necessary to preserve trust.
  4. Minimal risk. The AI Act doesn’t define rules for AI that is considered to be minimal to no risk. However, the majority of AI systems that are already used in the EU fall into this category.

Some provisions of the AI Act are already fully in force. However, the requirements for high-risk AI systems and certain other provisions will only apply in full at the end of a transitional period, which is expected to conclude by August 2nd, 2027.

Conclusion

AI agents mark a big leap in technological advancement. They’re capable of autonomous decision-making, which proves useful not only in specialized areas such as healthcare AI systems and financial trading but also in our everyday lives.


As we find ourselves interacting with customer support chatbots or buying home appliances that utilize AI, it’s important to understand AI agents. I encourage you to stay up to date with the latest advancements and the related legal considerations for safe AI use.