AI Advantage to the Attackers: The Rising Threat – and What Comes Next

Written by
Russ Andersson
Published on
December 1, 2025

We have been predicting a wave of AI-generated software vulnerability exploits for some time. Last week, we began to see the first outline of that wave. According to recent reporting, nation-state actors used Anthropic's AI models to help execute a sophisticated breach. The incident is an early indicator of where this is headed.

This was not a routine exploit. It demonstrated how AI is now being applied across the entire attack lifecycle rather than in isolated stages.

How AI Is Changing the Modern Breach Lifecycle

A typical system breach follows a series of steps:

  1. Target selection

  2. Reconnaissance and attack surface mapping

  3. Vulnerability discovery and validation

  4. Credential harvesting and lateral movement

  5. Data collection and intelligence extraction

  6. Ransom, espionage, or follow-on actions

Historically, AI was used primarily in step 3 — identifying and understanding vulnerabilities.

In this most recent case, AI was used at every stage, which marks a significant shift.

The Old Workflow (Manual)

Attackers previously had to:

  • Read a vulnerability description

  • Review a proof-of-concept exploit

  • Weaponize it manually

  • Add it to their toolchain

  • Begin searching for victims

The New Workflow (AI-Driven)

Now, the entire process is automated:

  • No POC required

  • No manual adaptation

  • No lengthy analysis cycle

  • AI generates the exploit path directly

Attackers just became faster, cheaper, and far more scalable.

So How Do We Respond?

Several ideas are being discussed across the industry. Some are appealing but unrealistic. Some are technically constrained. A few have practical promise.

1. Hold LLM model providers responsible

A commonly suggested path, but easily bypassed.

Even if commercial models were locked down perfectly, open-source versions running locally would still enable misuse.

2. Use defensive AI agents

This concept surfaced again at a Microsoft security conference I attended — the idea that we will counter offensive AI with a swarm of defensive agents.

The flaw is structural:

  • Offensive AI doesn’t need perfect fidelity. Moderate accuracy is enough to cause real damage.

  • Defensive AI needs near-perfect precision. One incorrect autonomous remediation action can break production.

We are nowhere close to the fidelity required for safe automated defensive action.

3. Use AI to detect breaches

There is real potential here, but false positives remain too high for this to serve as a dependable primary mechanism.

4. Build better software

A far more durable strategy is to reduce the underlying attack surface and eliminate unnecessary components.

This is the direction RapidFort focuses on — using analysis and automation to help teams build software that is smaller, cleaner, and less exposed.
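The core idea behind component reduction can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration of one part of that analysis, not RapidFort's actual implementation: it statically compares a project's declared dependencies against the modules its source code actually imports, flagging anything never used.

```python
import ast
import pathlib

def imported_modules(src_dir: str) -> set[str]:
    """Collect the top-level module names imported anywhere in a source tree."""
    found: set[str] = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
    return found

def unused_dependencies(declared: set[str], src_dir: str) -> set[str]:
    """Declared dependencies that no source file ever imports
    are candidates for removal from the build."""
    return declared - imported_modules(src_dir)
```

Real-world tooling goes further than this sketch (runtime tracing, transitive dependencies, dynamic imports), but the principle is the same: every component that ships without being used is attack surface an adversary gets for free.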

5. Use AI to strengthen existing security frameworks

Rather than replacing what works, we can use AI to:

  • Improve prioritization

  • Enhance visibility

  • Streamline remediation

  • Increase consistency in existing practices

6. Increase deterrence

A policy-driven option: raise penalties for cyber breaches.

This will eventually happen, but it will not close the underlying technical gap attackers are now exploiting.

What Actually Works Today

When we examine these options realistically:

  • Holding model providers accountable is not practical.

  • Swarm-style defensive AI is not feasible for a long time.

  • Policy changes help indirectly but don’t solve the technical challenge.

The viable path forward lies in:

  • Better detection capabilities, and

  • Building fundamentally stronger software from the start

This means:

  • Smaller attack surfaces

  • Fewer unnecessary components

  • Less exposed code

  • Automated hardening

  • Continuous visibility into what software actually contains and uses

AI has accelerated attackers. Our response must be to ensure software is harder to exploit, not easier.

Conclusion: Where RapidFort Fits Into This Future

As attackers become faster and more automated, the only sustainable defense is to reduce the opportunities they can exploit. That means building software that begins secure and remains secure throughout its lifecycle. Achieving this requires understanding what components are truly needed, removing the ones that are not, and continuously maintaining an accurate picture of what is running in production.

This is where RapidFort’s approach fits naturally. By analyzing software deeply, identifying unused components, and reducing exposure, RapidFort helps teams shrink their vulnerability footprint and maintain a more defensible posture over time. The goal isn’t to out-automate attackers, but to give them less to work with in the first place — a practical path in a world where AI has fundamentally changed the economics of exploitation.
