The legal arms race: How do different regions apply AI regulations?


In November 2022, ChatGPT took the world by storm, bringing new capabilities to millions. However, the rise of AI chatbots also exposed a familiar pattern: the law lags behind technology.

Now, governments are in an arms race against giant corporations to create regulations that will protect their citizens against the dangers brought about by the rise of AI. So far, their legal weapons can’t match the chatbots’ rising usage and power. However, this may soon change, as institutions get a handle on AI.

Why are AI regulations needed?

AI touches many aspects of our daily lives, and while most of us use it to our benefit, it doesn’t come without dangers. Here are some key issues that require regulation:

Who owns AI-generated content?

“Ideas take new shape—
machine learns from human thought,
who owns what it dreams?”

I didn’t write this haiku. ChatGPT generated it from a prompt. The question is, who holds the copyright? Is it me, the person who decided to make a point by writing the prompt? Is it OpenAI and the programmers who created the bot? Or is it simply “the bot”?

Right now, there’s no simple answer to this question. With each jurisdiction having a different copyright system, a lot depends on judges and their understanding of both the technology and the law. Currently, works created solely by AI are not copyrightable in the US, as the US Copyright Office grants copyright only to works of human authorship.

While this might seem like a solution, another question concerns how the models were trained. After all, AI’s generative abilities are built on countless pieces of source material that the model learns from. For example, I took my Cybernews profile picture and asked ChatGPT to restyle it after Studio Ghibli – a popular trend not so long ago.

Studio Ghibli-styled image based on my photo

The image is based on a photo of me taken by a former colleague who gave me the rights to use it, and on a prompt I wrote. However, it’s also based on the hours of work of the artists who created the anime in question – the prompt would be impossible to execute without the creative works of others. While corporations have managed to protect their trademarks and characters, style can’t really be protected this way. After all, somebody inspired by Studio Ghibli could create something like this themselves. Yet we instinctively differentiate between somebody’s hard work and an AI’s quick generation.

Untangling this maze of competing interests is one of the challenges lawmakers face when attempting to develop AI regulations.

AIs can be biased and make things up

The way AI models are trained is another issue that shapes regulation. After all, a Large Language Model (LLM) is only as good as its training data. A 2023 Dartmouth study on AI biases showed that AI models carry ingrained biases picked up from the materials they were trained on, including pervasive racial and gender stereotypes that can negatively impact people’s lives. In 2023, for example, AI facial recognition led to false accusations against six men, all of them Black, and Amazon’s early machine-learning hiring tool proved biased against women.

What’s more, even non-biased data isn’t always reliable. An AI-written book about mushroom foraging, for example, allegedly led to the serious poisoning of a family. As more and more AI content is created, cases like these may crop up across ever more areas of life.

Unfortunately, the biases and hallucinations AI produces affect how we think. A 2023 study published in Scientific Reports shows that our brains absorb information relayed by AI, and that this can shape our decision-making – a particularly dangerous effect.

AIs can violate privacy

Of course, another aspect of data training is privacy. There’s a reason most chatbots offer a free version – you pay with your data, which is then used to train the AI. While regulations like the EU’s General Data Protection Regulation (GDPR) technically apply to AI, tracking what actually happens to your data is near-impossible given the complexity and opacity of the models.

Privacy experts also see potential for the training data fed to AI to be abused by threat actors. With technologies like AI voice spoofing and AI-generated video, the line between reality and fabrication blurs further every day, and regulators have little answer beyond stretching existing laws to curb these issues.

Right now, any such limits are left to individual companies. Meta, for example, states that it doesn’t train its AI on people’s private posts, photos, or comments. However, this may not be the case with every company.

Other challenges

Those aren’t the only reasons AI regulation is needed. From ethical questions about AI in military use to the more tangible issue of the environmental cost of AI’s resource-heavy computing, regulators have their work cut out for them in crafting blanket rules for AI.

Who is regulating AI, and how?

Now that you know the key challenges and issues facing regulators, let’s take a look at existing and planned AI laws in jurisdictions all over the world, and how they reflect the challenges ahead.

The European Union comes out guns blazing

So far, the EU has created the most robust set of AI regulations by passing the EU AI Act. The rules were passed in June 2024 and will take full effect in August 2026. The laws introduced under the Act are wide-ranging.

The Act categorizes AI software based on risk levels. The higher the risk, the stronger the regulation introduced by the EU.

The following types of software are categorized as unacceptable risk, and will be banned under the AI Act:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring AI: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people (with some law-enforcement related exceptions in serious cases)
  • Real-time and remote biometric identification systems, such as facial recognition in public spaces (with some law-enforcement related exceptions in serious cases)

High-risk systems, meanwhile, will be heavily regulated and will require evaluation before being approved for the EU market. They are divided into two categories. The first covers AI used in products that fall under EU product-safety rules, such as toys, aviation equipment, cars, medical devices, and lifts.

The second category, which will require registration, covers AI systems related to:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management, and access to self-employment
  • Access to and enjoyment of essential private services, public services, and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Assistance in legal interpretation and application of the law

Finally, the AI Act will require all generative AI systems to comply by:

  • Clearly labeling content as AI-generated
  • Designing models to prevent them from generating illegal content
  • Publishing summaries of the copyrighted data used for training

Violating these laws can result in a fine of up to €35 million or 7% of a company’s annual global turnover, whichever is higher. These are the strictest penalties of any AI regulation to date.
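
To make the “whichever is higher” rule concrete, here’s a minimal illustrative sketch in Python (my own example, not text from the Act) of how the cap works for a hypothetical company:

    def eu_ai_act_max_fine(annual_turnover_eur: float) -> float:
        # The AI Act caps fines for the most serious violations at
        # €35 million or 7% of worldwide annual turnover, whichever is higher.
        return max(35_000_000, 0.07 * annual_turnover_eur)

    # A hypothetical company with €1 billion in annual turnover:
    # 7% of €1,000,000,000 = €70 million, which exceeds €35 million,
    # so the cap is €70 million.
    print(f"€{eu_ai_act_max_fine(1_000_000_000):,.0f}")  # prints €70,000,000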

As you can see, the regulations are robust and cover most of the topics discussed earlier. Some parts, particularly the ban on unacceptable-risk systems, already came into force in February 2025, while others are still waiting to be implemented. Whether the regulation will bring some order to the chaos of AI remains to be seen.

The United States doesn’t want to limit innovation

The US doesn’t plan on introducing federal AI laws, leaving states to draft their own regulations. What’s more, as of the time of writing, the Senate is working on a bill that would cut federal broadband funding to any state that enforces AI laws during a planned 10-year moratorium. Many state legislatures might think twice about regulating AI if this bill passes.

The proposal has angered Republican and Democratic state lawmakers alike and is seen by many as deep federal overreach. The bill is backed by AI industry leaders, including OpenAI CEO Sam Altman, who argues that such legislation could stand in the way of AI innovation.

This is bad news, particularly for states that have already acted. California has enacted a law covering AI in medical settings, and a robust law passed in Colorado is due to come into force in February 2026. Other states, including Utah, Nebraska, and West Virginia, are currently drafting bills, while still others have set up AI task forces to prepare regulations.

The Colorado AI law is by far the broadest introduced in the US. It has much in common with the EU AI Act, focusing in particular on high-risk AI systems that make decisions in areas like employment, education, lending, housing, insurance, healthcare, legal services, and government services.

It also prohibits algorithmic discrimination and requires developers and deployers of AI systems operating in Colorado to exercise reasonable care. However, it does not require the disclosure of AI-generated content. Instead, businesses must disclose when a user is interacting with an AI, such as a customer support bot.

China is following in the EU’s footsteps

China’s regulations aren’t as broad as the EU’s AI Act. Instead, lawmakers have chosen targeted rules for individual industries, underpinned by baseline disclosure requirements. These include disclosing AI use within applications and properly tagging AI-generated content.

Lawmakers in China plan to extend these rules as new issues surface in different sectors, balancing innovation with safety and transparency. The current measures, which come into force in September 2025, are meant to serve in the interim while more detailed laws are written.

China plans to punish serious violations, such as grave privacy breaches, with fines of up to 50 million yuan (around $7 million) or 5% of global turnover. Some cases may even result in prison sentences.

South Korea is taking a less aggressive approach

South Korea joins China and the EU in actually adopting AI regulation. Its law mainly focuses on sectors similar to the EU AI Act’s “high-risk” categories, including healthcare, energy, education, and law enforcement.

The law also establishes an AI Institute and provides a framework for risk assessment, safety, and transparency. Like the EU, it requires the disclosure of AI-generated content.

That said, South Korea’s fines for violating the act are far lower than the EU’s or China’s, with the maximum being around $21,000.

The United Kingdom is leaving everything to agencies

Since the UK left the EU in 2020, European regulations no longer cover British citizens, and so far the UK hasn’t passed any targeted AI laws. Instead, it’s leaving regulation to existing agencies, which are expected to develop rules and procedures for artificial intelligence.

No binding AI laws have been introduced thus far. However, efforts have been made to create a task force responsible for working on a wider set of laws. It remains unclear if and when these will result in any changes.

Other countries are looking for solutions

Many other countries around the world are also implementing their own AI regulations, as are international bodies like the African Union, whose Continental AI Strategy has prompted national AI strategies in countries including Egypt, Benin, Morocco, and Senegal. The continental strategy is more a set of goals and guidelines than a body of strict laws.

Similarly, Japan has adopted AI regulations that are mostly guidelines for developers. There are no fines for breaking them, but companies can be named and shamed publicly.

Countries like Canada, Brazil, and Australia are also in the process of drafting their own legislation. However, as of right now, there isn’t a clear timeline for when these will be introduced.

What is the future of AI regulation?

The future really depends on how AI develops. One big area left virtually untouched by all of these regulations is copyright, with lawmakers seemingly trusting that existing laws will be enough to address such questions.

However, I believe that as AI-generated content proliferates, explicit guidelines will have to be established, not just for AI outputs but also for the data used to train the models.

The current laws also don’t address AI's environmental impact, with only the EU’s AI Act briefly setting out sustainability goals. With LLMs becoming increasingly resource-intensive, I believe that sooner or later, we’ll see some frameworks for AI environmental sustainability.

Finally, as AI takes hold in workplaces, labor laws governing the replacement of human employees with AI agents may have to be introduced to balance innovation with social stability. With some fields already seeing massive AI-driven layoffs, it will be interesting to see how countries move to support job-seekers who may need to change their line of work.

Conclusion

Whenever there’s innovation, regulation sooner or later follows. The invention of the car brought speed limits, and the rise of the internet eventually led to anti-piracy laws and data-protection rules like the GDPR.

AI is no different, and as its influence over our lives grows, so will the laws guiding it. While current regulations focus on broad strokes and worst-case scenarios, I believe we’ll see increasingly detailed laws as we learn more about AI’s impact on our daily lives.