I use AI coding agents every day. Claude Code, OpenClaw, Copilot — they're woven into how I build and how I deliver for clients. I'm not writing this from the sidelines.
So when I tell you these tools have a supply chain problem that most teams aren't addressing, understand it's coming from someone who ships with them, not someone who's afraid of them.
Part 1 of this series covered how AI assistants are being weaponized as C2 channels. This time we're going deeper into the dependency layer — where AI hallucinations meet real-world package registries, and attackers are already exploiting the gap.
TL;DR — What You'll Learn
- Slopsquatting: LLMs hallucinate package names at scale — 19.7% of recommendations don't exist — and attackers are registering them on real package registries.
- Typosquatting, supercharged: AI assistants don't typo like humans — they confuse package names with authority, cross-pollinating ecosystems.
- The discourse problem: Both the fear crowd and the hype crowd are making this harder to solve. The answer is supply chain security fundamentals, applied to a new attack surface.
- What to do: Six practical mitigations you can implement this week — from dependency allowlists to treating AI output as untrusted input.
Slopsquatting: When Your AI Invents a Dependency and Someone Else Registers It
Here's the attack in three sentences: An LLM generates code that imports a package called flask-jwtlib. That package doesn't exist on PyPI. An attacker registers flask-jwtlib on PyPI with a malicious payload, then waits for the next developer who trusts the AI's recommendation and runs pip install.
That's slopsquatting. The term was coined by Seth Larson, the Python Software Foundation's Developer-in-Residence, and it describes what happens when AI hallucinations become a reliable attack vector.
The numbers behind it are worse than most people realize. A research team from UT San Antonio, Virginia Tech, and the University of Oklahoma tested 16 code-generation models across 576,000 code samples. Their findings:
- 19.7% of all recommended packages didn't exist
- Open-source models hallucinated at 21.7% on average; commercial models at 5.2%
- They identified over 205,000 unique hallucinated package names
- 58% of hallucinated names were repeatable across multiple runs — meaning attackers can predict which fake names an LLM will suggest
Why 58% Repeatability Is the Key Number
These aren't random one-off errors. When GPT or CodeLlama hallucinates python-dateutil-extra for the hundredth time, an attacker only needs to register that name once. The LLM does the distribution for them. And the names are convincing — only 13% were simple typos of real packages. Nearly half were entirely fabricated but semantically plausible — names that look like they should exist.
A senior developer might squint at it. A junior developer copy-pasting from Cursor at 11 PM won't.
Typosquatting: The Classic Attack, Supercharged by AI
Typosquatting has been a supply chain threat for over a decade. Register reqeusts instead of requests, wait for fat fingers, harvest credentials. It's well understood.
What's changed is the delivery mechanism. When a human types pip install manually, they might catch a typo. When an AI coding assistant generates an import statement and the developer trusts it wholesale, there's no human in the loop to notice that python-nmap became python-nmap3 or beautifulsoup4 became beautifulsoup-4.
AI assistants don't typo the way humans do. They confuse package names in ways that feel authoritative. They'll recommend a package name that's close to the real thing but subtly wrong — and they'll do it with the same confidence they use for everything else. No hedging, no "I think this might be…" Just a clean import statement in otherwise perfect code.
The research backs this up: 8.7% of hallucinated Python packages actually matched valid npm (JavaScript) packages. The models aren't just making things up — they're cross-pollinating ecosystems, recommending JavaScript package names in Python code. If you've ever wondered how left-pad ends up in a Python requirements file, now you know.
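A cheap first guard against this class of near-miss is a fuzzy comparison of any new dependency name against the most popular packages. Here's a minimal sketch using Python's standard-library difflib; the hardcoded set is a stand-in for a real popularity snapshot, and the cutoff is an illustrative starting point, not a tuned value:

```python
import difflib

# Illustrative subset of popular PyPI names. A real check would load a
# snapshot of the top few thousand packages by download count.
POPULAR = {"requests", "beautifulsoup4", "python-dateutil", "flask", "numpy"}

def near_miss(name: str, cutoff: float = 0.85) -> list[str]:
    """Return popular packages the given name is suspiciously similar to,
    excluding an exact match (which is presumably the intended package)."""
    matches = difflib.get_close_matches(name, POPULAR, n=3, cutoff=cutoff)
    return [m for m in matches if m != name]

# "beautifulsoup-4" is one character away from the real package: flagged.
print(near_miss("beautifulsoup-4"))  # ['beautifulsoup4']
print(near_miss("requests"))         # [] -- exact match, nothing to flag
```

Anything this function flags goes to a human before it goes anywhere near `pip install`.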
The Discourse Is Broken. Both Sides.
Here's where I'm going to annoy everyone.
The fear crowd — you've seen the takes. "AI agents will steal your data, hallucinate malware, destroy your infrastructure." Every week there's another breathless thread about how Claude Code or Codex is going to autonomously compromise your production environment. The framing is always existential, always imminent, always lacking a practical mitigation section.
The hype crowd — equally unhelpful. "Just let the AI agent do everything, it's fine, ship faster, worry later." I've watched teams give AI agents write access to production databases because velocity. The absence of an immediate catastrophe became evidence that controls are unnecessary.
Both are wrong. And both are making the actual problem harder to solve.
The risk from slopsquatting and AI-assisted typosquatting is real. It's measurable. It's being actively exploited. But it's also not fundamentally new. This is supply chain security — the same discipline we've been building for years, applied to a new attack surface.
When npm had the event-stream compromise in 2018, we didn't abandon package managers. We built lockfiles, added npm audit, implemented hash verification. When SolarWinds happened, we didn't stop using build systems. We added attestation, supply chain provenance, SLSA frameworks.
AI-assisted dependency hallucination is the next chapter of the same book. The principles haven't changed. The attack surface has.
The practitioners I respect — the ones actually building with these tools in production — aren't panicking and they aren't ignoring the risks. They're applying supply chain security fundamentals to a new threat model. That's the only conversation worth having.
What to Actually Do About It
Here's what I'm implementing for clients right now. None of this is theoretical — it's all in production environments using AI coding agents daily.
1. Dependency Allowlists, Not Blocklists
Maintain a curated list of approved packages. Any AI-generated code that introduces a dependency not on the list triggers a review. Yes, this adds friction. That's the point. The friction is where the security lives.
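As a sketch, the gate itself can be a set difference; everything hard lives in curating the list. The package names here are illustrative:

```python
# Curated allowlist -- in practice this lives in version control and
# changes only through review.
APPROVED = {"requests", "flask", "sqlalchemy", "pydantic"}

def unapproved(dependencies: list[str]) -> set[str]:
    """Return dependencies not on the allowlist. Comparison is lowercased
    because PyPI names are case-insensitive (full normalization also folds
    '-', '_', and '.', omitted here for brevity)."""
    return {d for d in dependencies if d.lower() not in APPROVED}

# AI-generated code pulled in "flask-jwtlib": not approved, so the build
# stops and a human reviews it.
print(unapproved(["flask", "flask-jwtlib"]))  # {'flask-jwtlib'}
```

Wire this into CI so the check runs on every change to your requirements files, not just the ones a human remembers to inspect.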
2. Pin Versions and Verify Hashes
Every dependency gets pinned to an exact version with a hash in your lockfile. pip install --require-hashes exists for a reason: in that mode, pip refuses to install anything that isn't pinned with a matching hash. A malicious package registered under a hallucinated name after your last audit simply won't install.
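At its core, the check pip performs in hash-checking mode is a digest comparison. A minimal illustration with hashlib, where the wheel bytes are a stand-in for a real downloaded artifact:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Compare a downloaded artifact against the sha256 pinned in the
    lockfile. This is the essence of pip's --require-hashes check."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

wheel = b"...package bytes..."                # stand-in for a real wheel
pinned = hashlib.sha256(wheel).hexdigest()   # what your lockfile records

print(verify_artifact(wheel, pinned))            # True: matches the pin
print(verify_artifact(b"tampered", pinned))      # False: install refused
```

The point is that the trust anchor is the hash you recorded at audit time, not whatever the registry happens to serve today.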
3. Automated Package Provenance Checks
Before any new dependency enters your build, run automated checks: Does this package exist? How long has it existed? Who maintains it? How many downloads? A package registered two weeks ago with 14 downloads that your AI just recommended should trigger every alarm you have.
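A sketch of such a heuristic, assuming you've already fetched the metadata (for PyPI, the registry's JSON API exposes release dates; download counts come from a separate stats service). The field names and thresholds below are illustrative, not from any specific tool:

```python
from datetime import datetime, timedelta, timezone

def provenance_flags(first_release: datetime, downloads_last_month: int,
                     min_age_days: int = 90,
                     min_downloads: int = 1000) -> list[str]:
    """Return human-readable reasons to hold a new dependency for review.
    An empty list means the basic provenance checks passed."""
    flags = []
    age_days = (datetime.now(timezone.utc) - first_release).days
    if age_days < min_age_days:
        flags.append(f"package is only {age_days} days old")
    if downloads_last_month < min_downloads:
        flags.append(f"only {downloads_last_month} downloads last month")
    return flags

# The scenario from the article: registered two weeks ago, 14 downloads.
suspect = datetime.now(timezone.utc) - timedelta(days=14)
for reason in provenance_flags(suspect, downloads_last_month=14):
    print("HOLD:", reason)
```

Maintainer history and repository links are worth checking too; they just don't reduce to a one-liner as cleanly.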
4. Sandbox Your AI Agents
AI coding agents should operate in sandboxed environments with no direct access to production package registries. Let the agent generate code, then run the dependency resolution through your standard supply chain controls. OpenClaw, Claude Code, and Codex all support configurable permission boundaries — use them.
5. Audit Trails for AI-Generated Dependencies
Every package an AI agent recommends should be logged — what model suggested it, what prompt triggered it, when it was introduced. When (not if) a supply chain incident involves an AI-hallucinated package, you want that forensic trail.
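A minimal sketch of one such entry as an append-only JSON line. The field names and the model identifier are illustrative, not a standard schema:

```python
import json
from datetime import datetime, timezone

def ai_dependency_record(package: str, model: str, prompt: str) -> str:
    """Build one JSON line for an audit log of AI-suggested dependencies:
    which model suggested which package, from which prompt, and when."""
    return json.dumps({
        "package": package,
        "model": model,                      # illustrative model name
        "prompt": prompt,
        "suggested_at": datetime.now(timezone.utc).isoformat(),
    })

line = ai_dependency_record(
    "flask-jwtlib", "example-model-v1", "add JWT auth to the Flask app"
)
print(line)
```

Append these lines to a log you don't let the agent write to directly; the forensic value depends on the trail being tamper-evident.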
6. Treat AI Output Like Untrusted Input
This is the mental model shift most teams haven't made. AI-generated code is untrusted input. It should go through the same review gates as a pull request from a new contractor. Code review, dependency scanning, static analysis — all of it.
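Dependency scanning of untrusted AI output starts with listing what the code imports, without executing it. A sketch using Python's ast module; the snippet and the hallucinated flask_jwtlib module are the article's running example:

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Statically list the top-level modules a code snippet imports,
    without running it. Note: mapping module name to PyPI distribution
    is a separate step -- they often differ (e.g. bs4 vs beautifulsoup4)."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

ai_snippet = "import flask\nfrom flask_jwtlib import require_jwt\n"
print(sorted(imported_modules(ai_snippet)))  # ['flask', 'flask_jwtlib']
```

Feed that set into the allowlist and provenance checks above, exactly as you would for a contractor's pull request.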
The Bottom Line
AI coding agents are a permanent part of the development landscape. The companies that will navigate this well aren't the ones avoiding the tools or the ones blindly trusting them. They're the ones applying supply chain security discipline to a new attack surface.
Slopsquatting and AI-amplified typosquatting are real threats with real mitigations. The fundamentals haven't changed: know what's in your build, verify what you install, trust but verify the tools generating your code.
Is Your AI Stack a Backdoor?
Take our 3-minute self-assessment to identify AI governance gaps in your security program — before your auditor does.
Take the Assessment →

If you're deploying AI agents and want to make sure your supply chain security program covers the new attack surface, let's talk.
Peter Hallen is a fractional CISO and compliance strategist helping growing companies build security programs that hold up under pressure. He works with AI coding agents daily and helps clients do the same — securely.