With the final text of the AI Act now approved by EU member states, Alon Yamin, the CEO of Copyleaks, a young AI startup, says he’s concerned that the regulation might hurt smaller firms.
After a prolonged debate, ambassadors of the EU member states last week finally – and unanimously – rubber-stamped the political agreement on the AI Act. The new rules are on track to be approved in April and may enter into force later in 2024.
Much of the regulation contained in the AI Act has been welcomed as sensible by activists around the world.
The EU is seemingly seeking to establish rules not merely for AI technology but for its use in scenarios where societies could suffer serious consequences. The Eurocrats call it a “risk-based” approach.
For instance, AI systems aimed at influencing behavior or exploiting a person's or group's vulnerabilities will be banned. Using biometric data to ascertain a person's race, sexual orientation, or beliefs won't be allowed either.
Real-time facial recognition in public places? Banned, except when law enforcement is dealing with serious crimes or searching for missing people. Predictive policing, already prevalent in the United States, won't be allowed either.
However, most of the lengthy discussions before last week’s agreement revolved around the regulation of foundation models – large machine-learning models that are the cornerstones of the AI industry.
France, Germany, and Italy were opposed to the strict regulation of such models, saying it would hurt the chances of success of promising European AI startups such as Mistral AI and Aleph Alpha.
Others weren’t convinced, and the European Parliament said it was unacceptable to leave all the regulatory burden on smaller actors. The trio finally relented, though not before sending out reminders about the need to avoid double regulation in order to boost competitiveness.
How about heavier regulation for the big players and a bit more breathing space for companies that are just starting out with barely any cash around?
According to Yamin, the co-founder and CEO of Copyleaks, precisely such a small firm that uses AI to detect AI-generated text, this kind of solution would work best.
Another layer of complexity
Alon, are you glad the AI Act is now so close to adoption?
I think it's a move in the right direction. We at Copyleaks are not part of Big Tech – we’re a smaller AI company – so when I look at how the regulation is taking shape, I'm always thinking about the effect on smaller AI startups.
The idea is really to create a path to innovation and to be able to use AI to change markets and create new things.
The question is whether you're creating a process that will make it hard for small companies or startups to actually innovate or whether you’re just creating a lot of processes and bureaucracy that will be hard for small companies to actually comply with.
On the other hand, of course, you have all the risks that come with using generative AI without having some sort of guardrails.
It’s definitely a step in the right direction, but we just need to make sure that smaller players are still able to be part of the game. You don't want only Big Tech – 10, 20 companies – controlling the AI market. You also want to give smaller players the opportunity to be part of it. That’s what I'm a bit worried about.
But when the European Parliament was kind of rebuking the objections of France and Germany, it said that providing less regulation of the larger players would be unfair to the smaller ones, right? Do you not think a compromise has been found?
I do think that it's a compromise. But I also think it's very hard to have something that will be fair for everyone because we’re talking about different types of players with different types of resources.
Yet you're putting them under the same regulation. There's got to be some differentiation between approaching really big companies and smaller firms.
For a smaller company with modest resources, there is a limit to what it can do in order to comply with all these different regulations – just think of the legal fees, the advisors you need to hire, and the processes you need to set up.
It’s very hard to do for a startup with 10, 20, or 30 employees. Of course, the situation is never going to be perfect, but it's important to have the small businesses and tech startups in mind here.
Obviously, lobbying can still happen during the implementation phase, and the AI Act can still change. But so far, where do you think the document goes too far in regulating the technology, and where does it not go far enough?
It's going a little bit too far, and there's no specific focus on the smaller players. It's as if the AI industry is all the same: all the players are equal, and everyone has the same type of resources, needs, and abilities.
"It's very hard to have something that will be fair for everyone because we’re talking about different types of players with different types of resources," – Alon Yamin, CEO of Copyleaks.
What’s really missing is focusing a bit more on distinguishing between the big players and the smaller ones. Right now, innovating is still possible for smaller firms, but we shouldn’t create a playing field that is only available for the big guys.
But the technology companies were already preparing for this one way or another, right? The usual argument has always been that they have to raise their own bar before it's forced on them. If I were an outsider and just heard the news that the AI Act is happening, I’d say, great, the rules are there, on paper, and this is something companies can now work with and prepare. Am I wrong?
I totally agree. Rules are very, very necessary. Obviously, AI is powerful. And just recently, there was really nothing there clearly stating what guardrails AI companies need to have.
We're doing a lot of checks for ourselves, making sure that we're complying with everything that we set as a standard. But it’s great that there is a more universal standard now – it’s an issue when every company creates its own standard, obviously.
Having said that, I and another co-founder started this company with almost no resources. We have more now, and we're able to hire lawyers and advisors.
But I'm just thinking of potential entrepreneurs sitting at home, wishing to use this amazing technology and harboring innovative ideas. I’m wondering whether this will hinder their ability to thrive. It definitely adds another layer of complexity.
I hope there's going to be more specific regulation in the US and in other countries because some of the biggest and brightest breakthroughs start with an idea at a small company.
Precious resources needed
Giant firms obviously have a lot of cash around, they can pay their fines and find loopholes around any regulation. Do you think the bigger players will find a way to bend the rules their way?
Of course. They have the legal resources to fight things like that. And if a smaller company is targeted because it violated some rule, even unintentionally, that's yet another kind of challenge for it. These Big Tech companies have legal teams the size of a small country.
The process of building an AI company and starting to play in a field where much bigger players are playing is difficult. Again, you don’t have the funds to deal with legal or compliance issues.
Unless some adjustments are made, new regulations will definitely hurt us. In the beginning, when you're just releasing a new product, there is a limit to how much compliance you can build into it.
So maybe creating something that is a bit more gradual and dependent on the stage of the company or the status of the company could work better. Again, having the rules on paper is great and important because limits in this technology are needed. However, too strict regulation can hurt innovation in smaller companies.
Copyleaks works as a plagiarism and AI content detection platform, ensuring the responsible adoption of generative AI. The AI Act will regulate all these minute details like labels and the use of copyrighted material. Will the new regulation help you as a company?
To detect someone who is using AI and isn't interested in being found out, you really need dedicated technologies that can accurately identify such content.
You have different types of AI content. You need to have a different strategy for detecting AI-generated text, and you need to have a different method for detecting images, music, and videos.
I definitely think regulation may make things much easier because a certain percentage of companies is always going to be compliant. Actual tools are still needed, though, so regulation combined with technology could be the most comprehensive solution for the transparency of AI use.
Basically, you're worried that the AI Act will regulate Copyleaks, a small firm trying to help detect illegitimate AI use, more than the bigger AI companies, right?
That's exactly it. We're doing AI detection, and our whole focus is being able to provide some level of transparency. Yet, we're now going to be like any other AI company under this regulation.
Luckily, we're already at a stage where we have some resources. We're able to comply. But let's say there is a startup now that is trying to do similar things in video – how are you even bringing your ideas to the market?
On the one hand, you want to have the technologies to oversee Big Tech companies and have visibility into them; on the other hand, you may be hindering innovation and the very solutions capable of doing so.
It's not an even playing field. We need to have different approaches for companies in different stages of their growth. As the company grows, it has more resources. Regulation should be implemented in a gradual way.