The Cybersecurity Risks of AI: What Every Organization Needs to Know

AI is transforming cybersecurity — but attackers are using it too. Here's what's changed and how to stay ahead.

Reading time: 12 minutes

By the Numbers

87% of organizations were targeted by an AI-powered cyberattack in the past year

83% of phishing emails are now AI-generated

2,137% increase in deepfake fraud attacks since 2022

$5.72M average cost of an AI-powered breach

16% of all breaches in 2025 involved attackers using AI

AI Changed the Game — for Both Sides

Artificial intelligence is reshaping cybersecurity at a pace that few predicted. On one side, AI-powered security tools are detecting threats faster than ever. On the other, attackers have adopted the same technology to launch more convincing, more automated, and harder-to-detect attacks.

The result is an arms race — and organizations that don't understand both sides of it are already falling behind. This article covers the real-world risks of AI in cybersecurity: how attackers weaponize it, the hidden dangers of AI adoption, and what your organization can do to defend against it.

1. How Attackers Are Weaponizing AI

AI-Generated Phishing: Nearly Impossible to Spot

Phishing has always been the most common attack vector. AI has made it dramatically more effective.

According to KnowBe4's 2025 Phishing Trends Report, nearly 83% of phishing emails are now AI-generated. The second half of 2024 saw a 202% increase in phishing messages, and AI-assisted phishing has surged 1,265% since generative AI tools became widely available.

What makes AI-generated phishing different from the clumsy "Nigerian prince" emails of the past?

  • Perfect grammar and tone. AI eliminates the spelling errors and awkward phrasing that trained employees to spot phishing.
  • Personalization at scale. Attackers use AI to scrape LinkedIn, company websites, and social media to craft emails that reference real projects, real colleagues, and real deadlines.
  • Rapid iteration. If a phishing template gets flagged by email security, the attacker generates a new variant in seconds.
  • Multilingual attacks. AI enables attackers to target organizations in any language with native-quality writing.

Why This Matters

The traditional advice of "look for spelling errors and suspicious senders" no longer works. AI-generated phishing passes SPF, DKIM, and DMARC checks, bypasses email gateways, and looks indistinguishable from legitimate business email. Organizations need behavioral detection — not just email filters.

Deepfakes: Seeing Is No Longer Believing

Deepfake attacks have exploded. They now account for 6.5% of all fraud attacks — a 2,137% rise since 2022. In Q1 2025 alone, there were 19% more deepfake incidents than in all of 2024 combined.

The attacks are devastating:

  • Arup (2024): A finance employee was deceived by a deepfaked video call impersonating the CFO. The attacker instructed a "confidential transaction." Result: $25.6 million stolen.
  • WPP (2024): Scammers cloned CEO Mark Read's voice for a fake Microsoft Teams call, requesting credentials and fund transfers. The employee grew suspicious and the attack was foiled.
  • Ferrari (2024): An executive received a WhatsApp call from a deepfaked voice of CEO Benedetto Vigna. The employee recognized irregularities and stopped the attack.

Voice cloning has crossed what researchers call the "indistinguishable threshold" — human listeners can no longer reliably tell a cloned voice from a real one. A voice can be cloned from as little as 60 seconds of audio. Voice phishing skyrocketed 442% in 2025, enabling an estimated $40 billion in fraud globally.

The barrier to entry has collapsed. Dark web forums now sell "synthetic identity kits" — AI video actors, cloned voices, and biometric datasets — for as little as $5. Real-time deepfake platforms cost between $1,000 and $10,000.

AI-Generated Malware: Code That Rewrites Itself

Attackers have moved beyond using AI for phishing into deploying AI-enabled malware in active operations:

  • PromptFlux and PromptSteal (2025): Google's Threat Intelligence Group discovered two new malware strains that use LLMs to change their behavior mid-attack — dynamically generating scripts, obfuscating code, and creating malicious functions on demand.
  • Autonomous network compromise (2025): An AI model using Model Context Protocol achieved domain dominance on a corporate network in under an hour with no human intervention, evading endpoint detection through on-the-fly tactic adaptation.
  • Dark LLMs: Underground marketplaces now offer AI models without safety restrictions for $30–$200/month, with some vendors claiming over 1,000 users.

The cost of going from vulnerability discovery to working exploit has collapsed. What used to take weeks and thousands of dollars is now near zero. This enables micro-targeted attacks built for a single system, a single company, or even a single developer.

2. The Hidden Risks of AI Adoption

It's not just attackers using AI that creates risk. The way organizations adopt AI introduces entirely new attack surfaces.

Shadow AI: Your Biggest Blind Spot

Shadow AI — employees using AI tools that haven't been vetted or approved — is one of the fastest-growing operational risks in cybersecurity.

  • 77% of enterprise employees who use AI have pasted company data into a chatbot query
  • 22% of those instances included confidential personal or financial data
  • In the World Economic Forum's Global Cybersecurity Outlook 2026, data leaks linked to GenAI (34%) now outweigh fears about adversarial AI capabilities (29%)

Gartner forecasts that 40% of enterprise applications will feature AI agents by 2026, yet only 6% of organizations have an advanced AI security strategy in place. The gap between AI adoption speed and AI security maturity is widening.

Prompt Injection: The #1 LLM Vulnerability

Prompt injection — tricking an AI system into executing unintended commands — remains the #1 risk on OWASP's Top 10 for LLM Applications.

A real-world example: the EchoLeak vulnerability in Microsoft 365 Copilot. A poisoned email could force the AI assistant to exfiltrate sensitive business data to an external URL — without any user interaction. Zero clicks. Zero warnings. The AI itself became the attack vector.

In 2025, critical prompt injection vulnerabilities were also discovered in AI coding tools from Cursor, GitHub, and Google's Gemini.

AI Supply Chain Attacks

The AI supply chain has become a major attack surface:

  • 97% of organizations use models from public repositories
  • 45% of reported breaches were traced back to malware introduced through public repositories
  • Supply chain breaches have increased 40% compared to 2023

A new attack vector called "slopsquatting" exploits AI hallucinations: LLMs hallucinate non-existent but plausible package names at a rate of up to 21% for open-source models. Attackers then publish malicious packages under those hallucinated names, waiting for developers to install them. With 84% of developers now using AI coding tools and 59% using AI-generated code they don't fully understand, the risk is significant.

Data Poisoning: The Invisible Threat

Data poisoning — invisibly corrupting training data — allows attackers to create hidden backdoors that bypass traditional security entirely. Research in 2025 demonstrated that healthcare AI models can be compromised with as few as 100–500 poisoned samples, enough to sway diagnostic outputs across different institutions.

Stanford's 2025 AI Index Report found that publicly reported AI-related security and privacy incidents rose 56.4% from 2023 to 2024. Only 32% of organizations are actively monitoring their AI systems, and just 16% have run adversarial testing against their models.

3. Real-World Impact: By the Numbers

72%
YoY Increase in AI-Powered Attacks
$5.72M
Average AI-Powered Breach Cost
$25.6M
Largest Deepfake Fraud Loss
108
Days Faster Detection with AI Defense

That last number is the silver lining: organizations using AI-powered security systems detect and contain breaches 108 days faster than those without, saving an average of $1.76 million per breach. AI is both the threat and the most effective defense.

4. The Regulatory Landscape Is Catching Up

EU AI Act

The EU AI Act is the world's first comprehensive AI regulation, being implemented in stages:

  • February 2025: First prohibitions on unacceptable-risk AI practices took effect
  • August 2025: General-purpose AI model obligations began
  • August 2026: Full compliance requirements for high-risk AI systems become enforceable

The Act requires providers to ensure accuracy, robustness, and cybersecurity are integrated at launch and maintained throughout the AI lifecycle. It mandates technical measures against data poisoning, model evasion, and adversarial attacks. Penalties reach up to 35 million euros or 7% of global annual turnover.

NIST AI Risk Management Framework

In the US, the NIST AI RMF is being updated to address generative AI, supply chain vulnerabilities, and model provenance. It focuses on three overlapping areas: securing AI systems themselves, using AI to enhance security operations, and defending against AI-enabled attacks.

OWASP Agentic AI Top 10 (New in 2025)

OWASP released a new framework specifically for autonomous AI agents, addressing risks like memory poisoning, spoofed inter-agent messages, false signal cascading, and AI agents showing misalignment or concealment behavior. Developed with input from over 100 security researchers, it signals that the industry recognizes agentic AI as a distinct security category.

5. How to Defend Against AI-Powered Threats

The threat landscape has changed fundamentally. Here's what organizations need to do:

Upgrade Your Detection Capabilities

Traditional signature-based and rule-based detection cannot keep pace with AI-generated threats that mutate in real time. Organizations need:

  • Behavioral analysis: Establish baselines and detect anomalies — a foreign VPN login at 2 AM, a finance employee accessing unusual systems, rapid file access patterns
  • Multi-source correlation: Phishing emails, credential theft, and account takeover aren't three separate alerts — they're one coordinated attack. Your detection platform needs to connect the dots across email, identity, endpoint, and network logs simultaneously
  • AI-powered defense: 77% of organizations have now adopted AI for cybersecurity. If your attackers use AI and your defenders don't, you're outgunned

Address Shadow AI Before It Becomes a Breach

  • Create an approved AI tool catalog — give employees sanctioned alternatives so they don't find their own
  • Implement data loss prevention (DLP) policies that monitor for sensitive data being sent to AI services
  • Establish a clear AI acceptable use policy and train employees on what data can and cannot be shared with AI tools

Secure Your AI Systems

  • Input validation: Filter and sanitize all inputs to AI models to prevent prompt injection
  • Continuous monitoring: Monitor AI model outputs for anomalous behavior that could indicate data poisoning
  • Supply chain hygiene: Vet AI dependencies, maintain software bills of materials (SBOMs), and scan AI-generated code before deployment
  • Adversarial testing: Red-team your AI systems. Only 16% of organizations do this today

Train for the AI Era

  • Traditional security awareness training is outdated. Employees need to understand AI-generated phishing, deepfake voice calls, and social engineering that references real internal context
  • Implement verification procedures for financial transactions and sensitive requests — especially those that come via video or voice call
  • Run deepfake simulation exercises so employees experience AI-powered social engineering in a controlled environment

Get 24/7 Coverage

AI-powered attacks don't happen during business hours. The ransomware attacks we respond to most frequently occur between 1 AM and 5 AM, when attackers know no one is watching. Without around-the-clock monitoring, a 2 AM attack waits until 8 AM to be discovered — giving the attacker a 6+ hour head start.

The Bottom Line

AI has fundamentally changed cybersecurity. Attackers are using it to generate undetectable phishing, clone voices, create self-modifying malware, and automate entire attack chains. The organizations that survive this shift will be the ones that fight AI with AI — combining AI-powered detection with human expertise and 24/7 coverage to stay ahead of threats that never sleep.

Ready to defend against AI-powered threats?

Our MDR service uses AI-powered detection and human expertise to stop the attacks that traditional tools miss — 24/7.

Book a Free Consultation