We talked to an AI platform's CTO about the biggest security pitfalls of rushed AI adoption


AI has already become part of the core infrastructure for large corporations, with large language models (LLMs) being used everywhere from customer service to strategic decision-making. However, this does not come without risks.

According to recent Cybernews research on how Fortune 500 companies will use AI in 2025, all of them plan to adopt it in some form. We reached out to nexos.ai, an AI orchestration platform, to discuss the biggest security risks of rushed AI implementation within companies.

“On the human side, there is a skill gap and cultural resistance: companies often need to train or hire specialists, communicate AI’s benefits clearly, and manage fears of job disruption,” Emmanuelis Norbutas, Chief Technology Officer at nexos.ai, told Cybernews.


The interview was conducted via email and edited for length.

What do you consider to be the primary challenges associated with the implementation of AI in organizations?

Implementing AI at scale involves many moving parts. A common concern is data: AI models need large volumes of high-quality, well-governed data, and poor data quality can cripple AI initiatives. Integration is another headache: fitting new AI tools into legacy systems and workflows can be complex and resource-intensive. Organizations must also address ethics and compliance by defining acceptable use, auditing outputs for bias, and ensuring regulatory alignment.

On the human side, there is a skill gap and cultural resistance: companies often need to train or hire specialists, communicate AI’s benefits clearly, and manage fears of job disruption. Budgeting is a factor, too: building and maintaining custom AI systems (like a gateway for many LLMs) can be extremely expensive. If done improperly, you end up with a fragmented patchwork of AI tools.

So the toughest challenges are making data usable, securely plugging AI into existing operations, and aligning people and processes while controlling costs and risk.

Recent Cybernews research indicates that approximately 5% of large organizations depend on third-party AI providers. What specific security risks does this reliance entail?

Outsourcing AI capabilities to external providers can accelerate deployment but also shift risks. The biggest worry is data leakage and loss of control. Any sensitive prompt or document sent to a third-party model could become part of that vendor’s training data or a breach if their systems are compromised. A famous example: Samsung’s engineers pasted confidential source code and designs into ChatGPT, accidentally uploading trade secrets to OpenAI’s model. Such inputs become part of the AI service’s database, meaning proprietary intellectual property (IP) and private data leave the company’s firewall without real oversight.

Beyond the risk of handing data to an outside company, there is also the issue of trusting the vendor’s security. Recent analyses show that many popular AI providers have had security gaps. For instance, a recent Cybernews study found breaches in half of ten major LLM services; even though most were due to stolen user credentials, this highlights how weak endpoint security can expose an entire AI pipeline. There are also privacy and compliance concerns: data protection laws may not allow certain information to be processed on overseas servers, and the lack of transparency about how third parties handle inputs can create liability.


To reduce these risks, I recommend treating any external LLM like a critical vendor and applying strict controls.
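What those controls look like will vary by company, but as a rough illustration, here is a minimal Python sketch of an outbound check placed in front of every third-party LLM call. The check_outbound_request() hook, the vendor allowlist, and the regex patterns are hypothetical placeholders, not a complete data-loss-prevention policy.

```python
# A minimal sketch of "strict controls" on an external LLM vendor.
# check_outbound_request(), the allowlist, and the patterns are illustrative placeholders.
import re

APPROVED_VENDORS = {"openai", "anthropic"}          # contractually vetted providers only
BLOCKED_PATTERNS = {
    "api_key": re.compile(r"\b(sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban":    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def check_outbound_request(vendor: str, prompt: str) -> None:
    """Raise before a sensitive prompt leaves the company firewall."""
    if vendor not in APPROVED_VENDORS:
        raise PermissionError(f"{vendor} is not an approved AI vendor")
    for name, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"prompt blocked: contains a pattern tagged '{name}'")

# Example: this request would be rejected rather than sent to the provider.
# check_outbound_request("openai", "debug this: AWS key AKIA0123456789ABCDEF")
```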

How should companies develop and implement a comprehensive strategy for AI adoption?

A solid AI strategy starts by aligning technology with business goals. Begin with leadership buy-in: executive sponsors must champion AI efforts, secure funding, and articulate the vision. Deloitte recommends that management have a “comprehensive strategy for AI adoption and integration” that covers resources, pace of rollout, performance metrics, third-party vendors, and emerging trends. In practice, companies should inventory where AI can add value (for example, in data analytics, customer service automation, or operational efficiency) and pilot small, high-impact projects first to prove ROI.

Governance is equally important. Set up cross-functional oversight, combining IT, legal/compliance, and business units, to define usage policies, ethical guardrails, and risk controls. As one industry guide notes, organizations must “address employee fears… through transparent governance” by establishing clear data privacy protocols and fairness standards. Educating teams is part of this: invest in training so that users understand what AI can and cannot do, and get comfortable with the tools. Celebrate early successes to build momentum, but also share lessons from setbacks.

It’s not mandatory for every enterprise to build its own AI models from scratch. Some organizations will find it more efficient to leverage existing commercial AI services (for instance, using ChatGPT or Google models via an API) rather than retraining in-house models. The key is strategic fit: if AI can dramatically improve a core process or product, it’s worth pursuing. If it’s a marginal gain, resources might be better spent elsewhere. Whatever the approach, the strategy should include a plan for technology infrastructure and security.

What are the most pressing security risks related to the use of AI?

AI introduces several novel attack surfaces and amplifies old ones. Chief among the risks is data exposure. Every time a user queries an AI model, they potentially send corporate data into that model. If that model is compromised or if the AI vendor’s security is weak, sensitive information can leak. Shadow AI is part of this risk: when employees use unapproved AI tools outside official channels, their queries and any uploaded data bypass organizational safeguards and can expose sensitive information without detection.

Then there are adversarial and poisoning attacks. Attackers can craft malicious inputs to trick or corrupt an AI model. For example, an adversary might subtly poison training data so that the model makes wrong decisions. Or they can use adversarial examples: specially designed inputs that cause a model to misclassify or malfunction. These threats are well-known in machine learning security.
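These attacks are easy to reproduce at small scale. Below is a minimal sketch of label-flipping data poisoning, assuming scikit-learn is available and using a synthetic dataset as a stand-in for a real training corpus: the same classifier is trained on clean labels and on labels an attacker has partially flipped, and test accuracy is compared.

```python
# A minimal sketch of a label-flipping poisoning attack; dataset and model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: clean training labels.
clean_acc = train_and_score(y_train)

# Poisoned: an attacker silently flips 20% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]
poisoned_acc = train_and_score(poisoned)

print(f"accuracy on clean labels:     {clean_acc:.3f}")
print(f"accuracy after 20% flipping:  {poisoned_acc:.3f}")
```

The degradation will depend on the model and the flip rate, but the point of the sketch is that nothing in the training pipeline flags the corruption by itself.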

Generative AI also enables new kinds of attacks. A malicious user can feed internal data to an LLM and extract patterns, or use AI to automate phishing. Generative models can produce highly realistic content, fueling more convincing social engineering and deepfakes. For example, AI-generated voice impersonations or video deepfakes can be used to trick employees into revealing information or transferring funds. These capabilities mean that AI both breaks down barriers for attackers and creates new vectors that traditional defenses may miss.

Given these challenges, organizations must layer their defenses. Techniques include strong data encryption, strict access controls on AI systems, watermarking models, and even adopting a zero-trust approach where every AI input and output is monitored. But fundamentally, the most significant risk is human: without clear policies and user training, even the best tech can fail.
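As a rough illustration of pairing access controls with that kind of monitoring, here is a minimal Python sketch. The call_model() function, the role table, and the log destination are hypothetical placeholders, not a full zero-trust implementation.

```python
# A minimal sketch of access control plus input/output monitoring around an AI call.
# call_model(), ALLOWED_ROLES, and the logger are illustrative placeholders.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("ai-audit")

ALLOWED_ROLES = {"analyst", "engineer"}   # roles permitted to query the model at all

def call_model(prompt: str) -> str:
    return "model output"                 # placeholder for the real provider SDK call

def zero_trust_call(user: str, role: str, prompt: str) -> str:
    if role not in ALLOWED_ROLES:                                     # deny by default
        raise PermissionError(f"role '{role}' may not use the AI system")
    audit.info("IN  user=%s role=%s prompt=%r", user, role, prompt)   # monitor every input
    output = call_model(prompt)
    audit.info("OUT user=%s chars=%d", user, len(output))             # monitor every output
    return output
```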


It is widely recognized that the approval process for adopting certain types of LLMs can be lengthy in some organizations. As a result, employees may resort to using AI tools without formal authorization. In light of this, are organizations truly prepared to adapt their processes to keep pace with technological innovation?

Many companies are racing to keep up with AI, but their processes often lag behind the technology. In practice, that means employees frequently turn to unsanctioned tools if approval is too slow. Banning or restricting popular AI tools only creates a shadow AI problem, much like shadow IT. With shadow IT, staff deploy unapproved software to bypass slow or complicated IT processes, exposing uncontrolled data flows and compliance gaps; shadow AI arises the same way, when employees use unsanctioned AI services outside official oversight. In other words, if staff can solve a problem faster with an AI app, many will do it, policy or not.

To adapt, companies need to proactively embrace the change. Rather than playing whack-a-mole, it is better to offer employees a controlled alternative. For example, a secure AI workspace gives teams a sanctioned environment to use multiple models transparently. All queries and file uploads go through company filters, and administrators get full visibility into who did what. This keeps innovation moving while preventing data leaks.
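As a rough illustration of such a workspace, here is a minimal Python sketch of a gateway that only exposes sanctioned models and writes an audit record for every query. The send() helper, the model names, and the log format are hypothetical placeholders standing in for a real multi-provider integration.

```python
# A minimal sketch of a sanctioned multi-model workspace with an admin-visible audit trail.
# send(), SANCTIONED_MODELS, and the audit record format are illustrative placeholders.
import datetime
import json

SANCTIONED_MODELS = {"gpt-4o", "claude-3-5-sonnet"}   # models the company has approved
AUDIT_LOG = "ai_audit.jsonl"

def send(model: str, prompt: str) -> str:
    return f"response from {model}"       # placeholder for the real provider SDK call

def workspace_query(user: str, model: str, prompt: str) -> str:
    if model not in SANCTIONED_MODELS:
        raise PermissionError(f"{model} is not available in the secure workspace")
    response = send(model, prompt)
    with open(AUDIT_LOG, "a") as f:       # every query is visible to administrators
        f.write(json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "model": model,
            "prompt_chars": len(prompt),
        }) + "\n")
    return response
```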

Only a few organizations are 100% prepared, but the answer is to align governance with agility. That means executives must prioritize AI, shorten decision cycles for tool adoption, and empower CISO/CDAO teams to manage risk in real time. Companies that do this, by building formal AI programs and providing safe, compliant ways for employees to experiment, will avoid the messy shadow AI pitfall and stay ahead in the innovation race.