
AI is no longer a niche technology — it’s becoming a fundamental part of business strategy for most Fortune 500 companies in 2025. All of them are now using AI, but they differ in their approaches to implementing it. Cybernews researchers warn of the risks involved as the rulebooks have yet to be written.
-
All Fortune 500 companies use AI – none have declared that they are not using it.
-
Companies showcase their proprietary solutions and are less open about naming third-party LLMs used in their activities. However, vendors disclose that most companies use their solutions.
-
Platform-based solutions provide ease of access but might raise concerns about data control and vendor dependence.
-
Security is paramount – companies must prioritize robust security measures to protect sensitive information and maintain trust.
The question of how many Fortune 500 companies have declared non-use of AI yields a clear answer: none. Quite the opposite, all of them are actively employing or exploring AI solutions, or at least declare that to the stakeholders in one form or another.
Cybernews researchers scraped all the companies' publicly available websites and allowed Gemini Deep Research to scrutinize them for examples of AI use. While this analysis lacks a comprehensive statistical aspect, it still reveals AI becoming a fundamental part of business strategy for many large organizations.
AI is already integrated with core operations, from customer services to strategic decision-making. And this comes with some significant risks.
“While big companies are quick to jump to the AI bandwagon, the risk management part is lagging behind. Companies are left exposed to the new risks associated with AI,” Aras Nazarovas, a senior security researcher at Cybernews, warns.
What does AI find about AI on Fortune 500 companies’ websites?
Disclaimer: This data is based on the Gemini 2.5 Pro Deep Research findings after analyzing the websites of 500 companies. It does not constitute a comprehensive statistical analysis.
Trying to find a single instance of AI non-use yields zero results.
A third of companies (33.5%) focus on broad AI and big data capabilities rather than specific LLMs. They highlighted AI for general purposes like data analysis, pattern recognition, system optimization, and others.
More than a fifth of companies (22%) emphasized their AI adoption for a functional application across various specific domains. These entries describe how AI is being used to address business problems, such as inventory optimization, predictive maintenance, or customer service.
For example, dozens of companies already explicitly mention using AI for customer service, chatbots, virtual assistants, or related customer interaction automation. Similarly, companies say they use AI to automate “entry-level positions” in areas like inventory management, data entry, and basic process automation.
Some companies like to take things into their own hands, developing proprietary models. Around 14% of companies specified their own internal or proprietary LLMs as a focus, such as Walmart’s Wallaby or Saudi Aramco’s Metabrain.
“This approach is particularly prevalent in industries like Energy and Finance, where specialized applications, data control, and intellectual property are key concerns,” Nazarovas noted.
A similar number of companies gave AI strategic importance, indicating AI integration within an organization’s overall strategy.
Fewer companies, only around 5%, proudly declare reliance on external LLM services from third-party providers, leveraging providers like OpenAI, DeepSeek AI, Anthropic, Google, and others.
However, there are also a tenth of the companies that only vaguely mention AI use, without specifying the actual product or its use.
“While only a few companies (~4%) mention a hybrid or multiple approach towards AI, blending proprietary, open source, third-party, and other solutions, it is likely that this approach is more prevalent as the experimentation phase is still ongoing,” Nazarovas notes.
The data suggests companies often don’t want to explicitly name their use of AI tools. Only 21 companies mention the use of OpenAI, DeepSeek (19), Nvidia (14), Google (8), Anthropic (7), META Llama (6), and less for Cohere and others.
Meanwhile, for comparison, Microsoft boasts that over 85% of Fortune 500 companies utilize its AI solutions. Other reports suggest that 92% of the 500 companies use OpenAI products.
AI is here, and so are the risks
YouTube’s algorithm recently flagged tech reviewer and developer Jeff Geerling’s video for violating community guidelines. The automated service determined that the content “describes how to get unauthorized or free access to audio or audiovisual content, software, subscription services, or games.”
The problem is that the YouTuber never described “any of that stuff.” He appealed, but his appeal was rejected. However, after some noise on social media, the video was later reinstated after what Geerling presumes was “a human review process.”
Many smaller creators might never get similar treatment.
This story is just the tip of the iceberg of the risks of AI adoption. Cybernews researchers listed many more:
- Data Security/leakage: This is the most commonly mentioned security concern, appearing in a significant number of entries across all industries. Issues related to protecting sensitive data, including personally identifiable information (PII), health information, and operational data, are consistently highlighted.
- Prompt injection: Vulnerabilities associated with prompt manipulation and insecure inputs are also frequently noted, particularly in the context of chatbots, search engines, and other interactive AI systems.
- Model integrity/poisoning: Concerns about the integrity of LLMs and the potential for poisoning training data are present, especially for proprietary models. This includes risks related to biased outputs and manipulated model behavior.
- Critical infrastructure vulnerabilities: For organizations operating in critical infrastructure sectors (e.g., energy, utilities), the security of AI integrated into control systems and operational technologies is a major risk.
- Intellectual property theft: Protecting proprietary LLMs, algorithms, and AI-related intellectual property is a concern, particularly for companies investing heavily in internal AI development.
- Supply chain/external risks: Risks associated with third-party LLM providers, partner LLMs, and the broader AI supply chain are also mentioned, highlighting the need for secure vendor management and risk assessment.
- Bias/algorithmic bias: Concerns about bias in LLM outputs and algorithmic decision-making are present, emphasizing the need for fairness and ethical considerations in AI development and deployment.
- Insecure output: Risks related to LLMs generating harmful, misleading, or insecure outputs are noted, particularly in applications where the AI's response directly impacts users or systems.
- Lack of transparency/governance: Issues related to the lack of transparency in LLM decision-making processes and the need for robust AI governance frameworks are also highlighted.
“Critical infrastructure and healthcare sectors, for example, often face unique and heightened security vulnerabilities,” Nazarovas said.
“As companies start to grapple with new challenges and risks, it’s likely to have significant implications for consumers, industries, and the broader economy in the coming years.”
Reckless AI adoption
“AI was adopted rapidly across enterprises, long before serious attention was paid to its security. It is like a wunderkind raised without supervision—brilliant but reckless. In environments without proper governance, it can expose sensitive data, introduce shadow tools or act on poisoned inputs. Fortune 500 companies have embraced AI, but the rulebook is still being written,” says Emanuelis Norbutas, Chief Technology Officer at nexos.ai.
Emanuelis adds: “As adoption deepens, securing model access alone is not enough. Organizations need to control how AI is used in practice — from setting input and output boundaries to enforcing role-based permissions and tracking how data flows through these systems. Without that layer of structured oversight, the gap between innovation and risk will only grow wider.”
Common strategies to mitigate the risk
The regulation of artificial intelligence (AI) in the US is currently a mix of federal and state efforts, with no comprehensive federal law yet established.
Several frameworks and standards are emerging to address AI and LLM security.
The National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework (AI RMF), which provides guidance on managing risks associated with AI for individuals, organizations, and society.
The EU has passed the AI Act, a regulation aiming to establish a legal framework for AI in the European Union. The act raises requirements for high-risk AI systems, including security and transparency obligations.
ISO/IEC 42001 is another international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It focuses on managing risks and ensuring responsible AI development and use.
“The problem with frameworks is that AI's rapid evolution outpaces current frameworks and presents additional hurdles, vague guidance, compliance challenges, and other limitations,” Nazarovas said. “Frameworks won’t always provide effective solutions to specific problems, but they surely can strain companies when enforced.”
Cybernews researchers suggest companies clearly identify the risks their specific AI implementation approaches bring and mitigate them accordingly.
Data security/leakage:
- Data classification: Identify and classify sensitive data (PII, health data, etc.) to apply appropriate security controls.
- Encryption: Encrypt data, both at rest and in transit to protect it from unauthorized access.
- Access controls: Implement strict access controls and authentication mechanisms (like multi-factor authentication) to limit data access.
- Data minimization: Collect and retain only necessary data.
- Anonymization/pseudonymization: De-identify sensitive data when possible.
- Data loss prevention (DLP) tools: Use tools to monitor and prevent sensitive data from leaving the organization's control.
Prompt injection:
- Input validation and sanitization: Validate and sanitize all user inputs to prevent malicious prompts from manipulating the LLM.
- Output validation: Validate and filter LLM outputs to ensure they are safe and aligned with intended responses.
- Sandboxing/isolation: Run LLMs in isolated environments to limit the impact of potential prompt injection attacks.
- Clear input/output boundaries: Define clear boundaries between user inputs and LLM outputs to avoid confusion.
Model integrity/poisoning:
- Secure training data pipelines: Ensure the integrity and security of training data to prevent poisoning attacks.
- Model validation and testing: Regularly validate and test models to detect any anomalies or manipulation.
- Model monitoring: Monitor model behavior in production for any unexpected or malicious activity.
- Version control: Maintain version control of models to track changes and roll back to previous versions if needed.
- Adversarial training: Train models to be resilient against adversarial attacks.
Critical infrastructure vulnerabilities:
- Security by design: Integrate security considerations into the design of AI systems from the beginning.
- Network segmentation: Segment networks to isolate critical systems and limit the impact of potential attacks.
- Intrusion detection and prevention systems (IDPS): Deploy IDPS to detect and prevent malicious activity targeting AI systems.
- Regular vulnerability assessments and penetration testing: Conduct regular assessments to identify and address vulnerabilities.
- Incident response planning: Develop and maintain incident response plans for AI-related security incidents.
Intellectual property theft:
- Access controls: Implement strict access controls to protect proprietary models and algorithms.
- Watermarking: Watermark models and data to detect unauthorized copying or use.
- Confidential computing: Use confidential computing environments to protect models and data during processing.
- Legal agreements and contracts: Establish clear legal agreements and contracts to protect intellectual property.
Supply chain/external risks:
- Vendor risk management: Conduct thorough due diligence on third-party LLM providers and establish secure contracts.
- Secure integration: Ensure secure integration of third-party LLMs and APIs.
- Monitoring and auditing: Monitor and audit the activity of third-party providers and integrations.
Bias/algorithmic bias:
- Diverse training data: Use diverse and representative training data to mitigate bias.
- Bias detection and mitigation techniques: Implement techniques to detect and mitigate bias in model outputs.
- Explainability and transparency: Strive for explainability and transparency in model decision-making processes.
- Ethical guidelines and reviews: Establish ethical guidelines for AI development and deployment, and conduct regular ethical reviews.
Insecure output:
- Output filtering and moderation: Filter and moderate LLM outputs to ensure they are safe and appropriate.
- Human review: Implement human review processes for sensitive or critical LLM outputs.
- Safety training: Train models to avoid generating harmful or insecure outputs.
Lack of transparency/governance:
- AI governance frameworks: Establish clear AI governance frameworks with defined roles, responsibilities, and policies.
- Documentation and auditing: Maintain detailed documentation of AI systems and conduct regular audits.
- Transparency mechanisms: Implement mechanisms to increase the transparency of LLM decision-making processes.
Your email address will not be published. Required fields are markedmarked