The future of AI in business: are companies ready for regulatory realities?


Many companies raced to implement AI tools to stay ahead of the curve and avoid being left behind. But as AI becomes more woven into daily operations, government bodies are looking more closely at how data is collected, processed, and protected.

Organizations are increasingly concerned about handling the risks and regulatory questions surrounding AI. Juan Orlandini, CTO of North America at Insight, breaks the AI ecosystem into three categories: the creators who build foundational AI models, the adapters who refine or tune these models for specific needs, and the consumers who employ ready-made AI tools or services in everyday operations.

Although these three groups might overlap in some respects, Orlandini warned that each faces distinct risks.

"Creators must ensure the data they use does not infringe on intellectual property, while adapters have to confirm the models they source are trustworthy. Consumers, in turn, must vet their third-party providers and consider privacy laws whenever they upload their data."

Despite frequent warnings about data mismanagement or potential biases in AI systems, Orlandini reminded me that this wave of technology has enormous potential.

"It is 100% opportunity. What remains to be seen is where we will derive the most value."

Identifying a few strong use cases is more prudent than spreading AI experiments across the entire organization without a roadmap. Businesses can measure success clearly by focusing on targeted AI initiatives, gathering lessons, and proceeding with broader adoption based on hard evidence instead of buzz.

Image by Cybernews.

Just another tool

Orlandini repeatedly stressed that generative AI is just another tool, albeit a powerful one, in the established world of enterprise software.

"AI, generative AI, is just another algorithm. It's just another tool in your tool belt. Building a new AI application is no different than building any other enterprise application," he said.

In other words, technical leaders should lean on longstanding best practices. Define a problem, pilot small projects, evaluate results, and scale up. Security, compliance, performance, and governance remain as important here as in any other domain. Companies that treat AI as a mysterious black box risk introducing vulnerabilities into production environments.

Businesses jump on board every time a new technology trend emerges, only to discover that rushed deployments create "technical debt." Orlandini has seen this pattern repeat with client-server systems, websites, mobile apps, and cloud computing.

"A pattern happens every time you rush to a technology. Within a very short period after that, it becomes technical debt."

He added that a measured approach is the best way to reduce these headaches. Don't assume AI fixes everything overnight. Instead, identify pressing problems that AI can address, confirm the feasibility, and expand. Organizations can test the waters without incurring massive hidden costs by starting with small, high-value projects.

The opportunities for AI governance and compliance

Bindi Dave, Deputy CISO of DigiCert & Global Ambassador with the Global Council for Responsible AI, recently told me that integrating AI into business operations presents transformative opportunities, from streamlining workflows to driving innovation.

Business leaders are looking to leverage AI to enhance efficiency and decision-making, while governments are actively shaping policies to ensure AI is deployed responsibly.

"Organizations that fail to anticipate and adapt to changes risk non-compliance, reputational damage, cybersecurity risk, and legal liabilities," Dave said.

It doesn't have to end this way. Dave says that proactively integrating AI governance, through comprehensive risk assessments, internal audits, and staff training on ethical AI usage, can mitigate these risks.

"Transparency is essential. Businesses should document AI decision-making processes and employ 'red teaming' strategies to evaluate vulnerabilities in their AI systems."

Dave believes this strong governance approach will mitigate potential risks and build stakeholder trust, ensuring organizations remain resilient against cybersecurity threats while fostering investor and public confidence.

She also added that AI regulations help create uniform standards to prevent bias, safeguard sensitive data, and protect against the misuse of autonomous systems.

"Aligning AI practices with human values, similar to the ethical guidelines followed by cybersecurity professionals, ensures AI serves as a force for good. By embedding responsible AI principles into every stage of AI development and deployment, companies can safeguard against unintended consequences and reinforce digital trust."

Balancing AI risks and rewards

Randy Weakly, Chief AI Architect at ImageSource, believes the recent punctuated equilibrium in AI's evolutionary journey will see many organizations scrambling to create sustainable value with these technologies.

"We see internal security, compliance, and governance groups struggling to create appropriate AI safety policies. It's not really about the mechanics of creating the policies but more about balancing the risks and rewards of AI," Weakly said.

Weakly also noted that in almost every enterprise AI use case, LLMs need access to internal, and typically confidential, information to create real value.

"Allowing poorly regulated access to enterprise data to AI solutions will certainly lead to disaster, but universal restrictions on AI ensure it won't deliver any value. A balanced approach is critical here."

Weakly also believes a watchful eye is needed on the many AI capabilities already spreading throughout organizations, often unnoticed.

"AI-powered capabilities show up daily as new features embedded within popular business applications like HubSpot, Adobe, Grammarly, Slack, GitHub Copilot, etc. These applications are often departmental and fall below the radar of corporate IT and security."

Image by Cybernews.

Are we ready for AI regulations?

Tackling AI without a clear framework will inevitably lead to confusion, wasted budgets, and potential run-ins with the law. However, businesses can minimize uncertainty by treating AI like any other enterprise platform project.

For Manoj Kuruvanthody, CISO at Tredence, the message is clear: AI adoption is accelerating while regulatory frameworks play catch-up, and this is forcing enterprises to rethink governance at scale.

"Companies cannot afford to treat compliance as an afterthought. Instead, they must embed it into AI development lifecycles, ensuring transparency, accountability, and adaptability," Kuruvanthody said.

He added that the challenge isn't just about meeting today's evolving AI regulations. Instead, he says, it's about building a compliance model that can dynamically adjust as policies, ethical expectations, and AI capabilities evolve.

He recommends implementing real-time risk assessments that align with emerging global standards and integrating AI audit frameworks that continuously evaluate model behavior. CXOs must also ensure traceability across AI processes, from data sourcing to decision-making, preventing misalignment between business goals and AI outcomes.

"Businesses must prepare for the complexities of agentic AI, where autonomous systems handle critical decision-making. Traditional compliance mechanisms won't be enough."

AI governance will shift from static policies to self-regulating AI ecosystems, where sub-agents execute specific tasks under stringent compliance layers. But Kuruvanthody warned that leaders' focus must extend beyond avoiding regulatory pitfalls. The real differentiator will be trust.

We are moving beyond simply implementing technology. As AI matures, organizations that embed ethical AI principles such as explainability, fairness, and privacy will gain a competitive edge by fostering customer confidence and industry credibility.

Finally, Kuruvanthody offers a timely reminder that the question isn't whether companies are ready for AI regulation. It's whether they are prepared to lead in a world where responsible AI defines market success.

Striking the right balance

Many challenges lie ahead. Companies that plan for them, by clarifying their data assets, leaning on traditional software development wisdom, and thinking critically about which AI tools deliver real value, will be best equipped to face the regulatory wave.

Whether you're a creator, adapter, or consumer of AI, caution and ambition must go hand in hand. That is the ultimate balance in today's era when the hype around AI must finally meet the reality of accountability and meaningful execution.

There's pressure on business leaders to reflect on where their organizations stand in this rapidly shifting AI environment. But rather than chase every trend, it might be wiser to focus on the data, governance structures, and core objectives that support genuine progress.