
No time? Jump straight to Protecting yourself in the age of vibecoding or Secure AI-assisted development.
Artificial intelligence has revolutionized software development, with tools like GitHub Copilot, Claude, and ChatGPT generating functional code from natural language prompts. This has led to the rise of “vibecoding” – a development approach where programmers describe what they want and let AI handle the implementation details. While this can dramatically accelerate prototyping, it introduces a dangerous security risk that many developers overlook: slopsquatting.
The dangerous dance of AI trust and verification
Vibecoding, a term coined by AI researcher Andrej Karpathy in February 2025, describes a programming technique where developers rely heavily on AI to generate code based on natural language descriptions, often without thoroughly reviewing the resulting code. As Karpathy put it, vibecoding means:
fully giv[ing] in to the vibes, embrace exponentials, and forget that the code even exists.
AI researcher Simon Willison clarified the distinction between vibe coding and responsible AI-assisted development:
If an LLM wrote the code for you, and you then reviewed it, tested it thoroughly and made sure you could explain how it works to someone else, that’s not vibe coding, it’s software development. The usage of an LLM to support that activity is immaterial.
True vibecoding involves accepting AI-generated solutions without comprehensive review or deep understanding of the underlying implementation.
This approach has democratized programming, allowing even non-technical users to create functional prototype applications. However, it creates the perfect conditions for a new type of supply chain attack called “slopsquatting”.
Slopsquatting is the exploitation of plausible but non-existent package names hallucinated by AI.
This is a specific case of AI hallucination known as package hallucination: a large language model (LLM) generates code that recommends, imports, or otherwise references a package that does not actually exist.
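To make this concrete, here is a minimal sketch of what package hallucination looks like from the developer’s side. The package name fast_json_sanitizer is invented purely for illustration – plausible enough that an LLM could suggest it, and short enough that a rushed developer might install it without checking.

```python
# Hypothetical AI-generated snippet. "fast_json_sanitizer" is an invented
# name used only for illustration; an LLM can recommend it (and the matching
# "pip install fast_json_sanitizer") with complete confidence.
try:
    import fast_json_sanitizer  # hallucinated dependency
except ModuleNotFoundError:
    # Best case: the package does not exist and the import fails loudly.
    # Worst case: an attacker has already registered the name, the install
    # succeeds, and their code runs inside your project.
    print("Package not found - verify the name before running pip install!")
```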
Origin of the term slopsquatting
Andrew Nesbitt introduced the term Slopsquatting:
Slopsquatting – when an LLM hallucinates a non-existent package name, and a bad actor registers it maliciously. The AI brother of typosquatting.
How AI hallucinates your next security nightmare
Large language models frequently suggest packages that don’t actually exist. A 2024 study, “We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs”, by researchers from the University of Texas at San Antonio and the University of Oklahoma, analyzed code samples generated by LLMs. The results were alarming: 19.7% of all recommended packages were hallucinations – completely non-existent libraries that sounded plausible enough to fool developers.
More concerning still, these hallucinations aren’t random. The same study found that 43% of hallucinated package names reappeared consistently across multiple iterations of the same prompt, and 58% appeared more than once. Open-source models hallucinate ~21.7% of package names on average, whereas commercial models do better, with an average of ~5.2%. This predictability makes slopsquatting attacks viable, as attackers can reliably predict which fake packages to create.
The hallucinated names aren’t obviously fake, either. Analysis showed that 38% had moderate string similarity to real packages, suggesting a plausible naming structure that makes them difficult to identify as fabrications.
The perfect storm: where vibecoding meets slopsquatting
The vibecoding workflow creates several conditions that make developers vulnerable to slopsquatting attacks:
Blind trust in AI output
The core philosophy of vibecoding encourages developers to trust AI implicitly. This leads to little or no manual verification of package names and dependencies, minimal package auditing before installation, and the dangerous habit of copy-pasting installation commands directly from AI suggestions.
Speed over security
Vibecoding emphasizes rapid development, which often leads to shortcuts in resilience, security, reliability, and debugging:
- Developers skip security checks to maintain workflow momentum
- Pressure to deliver quickly discourages thorough dependency verification
- The focus shifts to functionality prototyping rather than security validation
Limited code understanding
Vibecoding often leaves developers with an incomplete understanding of the generated code, accepting the proposed implementation as-is:
- AI-generated code may use unfamiliar packages that developers don’t recognize
- Without full comprehension, developers can’t easily identify suspicious dependencies
- Complex package ecosystems (JavaScript, TypeScript, Python) make manual verification challenging
Real-world examples emerging
The risks aren’t merely theoretical. In 2024, security researcher Bar Lanyado demonstrated this vulnerability by uploading an empty package named “huggingface-cli” – a name frequently hallucinated by LLMs, which confuse the huggingface-cli command (shipped with the huggingface_hub package) with a standalone installable package of the same name.
In just three months, this empty package received over 15,000 downloads. It was even referenced in the README file of a research repository maintained by Alibaba. The experiment shows how easily developers, guided by AI recommendations, can be led to install packages that were never supposed to exist.
Why slopsquatting makes vibecoding more dangerous
Slopsquatting transforms vibecoding from a merely risky practice into an actively dangerous one, for several reasons:
- It exploits the trust relationship between developers and AI tools. When developers “vibe code,” they’re effectively outsourcing their technical judgment to AI systems that can hallucinate with confidence.
- It targets the least technical users. Vibecoding has democratized programming by allowing non-technical users to create functional software. These same users often lack the security awareness to verify package authenticity, making them prime targets.
- It creates a silent vulnerability. Unlike many security issues that cause immediate problems, slopsquatting attacks can remain dormant in a codebase for extended periods, making detection difficult.
- It scales efficiently for attackers. By targeting consistently hallucinated package names, attackers can create a relatively small number of malicious packages that will be inadvertently used by many developers.
- It bypasses traditional security controls. Most security tools focus on detecting vulnerabilities in existing code, not preventing the installation of malicious packages that appear legitimate.
Protecting yourself in the age of vibecoding
- Never blindly trust AI-suggested packages. Always verify their existence and reputation on official repositories before installation (a minimal verification sketch follows this list).
- Request code reviews. Ask experienced developers in the relevant domain to review your vibed code.
- Use lower temperature settings in AI models to reduce randomness and improve accuracy (see the example after this list).
- Ask the AI model to verify its own outputs for hallucinations.
- Cross-check outputs against a known list of valid packages, or maintain detailed Software Bills of Materials (SBOMs) to quickly identify unauthorized or unexpected dependencies (a cross-check sketch follows this list).
- Use RAG or fine-tuning: Retrieval-Augmented Generation (RAG) and supervised fine-tuning with real package data can reduce hallucinations.
- Test all AI-generated code in isolated environments before incorporating it into production codebases.
- Enhance developer training about the risks of AI-hallucinated dependencies.
- Implement dependency management tools like Dependabot (GitHub Security) or OWASP Dependency-Check to identify suspicious or newly published packages.
- Integrate automated security scanning tools like OWASP ZAP, Snyk, or SonarQube.
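As a starting point for the first item above, here is a minimal sketch that checks whether an AI-suggested package actually exists on PyPI. It uses only the Python standard library and PyPI’s public JSON API; treat it as a quick sanity check, not a full reputation audit.

```python
"""Check whether a package name exists on PyPI before installing it."""
import json
import sys
import urllib.error
import urllib.request


def check_pypi(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"'{name}' does NOT exist on PyPI - possible hallucination.")
            return
        raise
    info = data["info"]
    print(f"'{name}' exists: latest version {info['version']}")
    print(f"  project URL: https://pypi.org/project/{name}/")
    print(f"  release count: {len(data.get('releases', {}))}")
    # A brand-new package with a single release deserves extra scrutiny:
    # it may have been registered recently to squat a hallucinated name.


if __name__ == "__main__":
    check_pypi(sys.argv[1] if len(sys.argv) > 1 else "requests")
```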
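For the temperature recommendation, the exact knob depends on your provider. The sketch below assumes the OpenAI Python SDK; the model name is a placeholder, not a prescription.

```python
# Minimal sketch, assuming the OpenAI Python SDK ("pip install openai") and
# an OPENAI_API_KEY set in the environment. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder - use whatever model you have access to
    temperature=0.2,      # lower temperature = less randomness in suggestions
    messages=[
        {
            "role": "user",
            "content": "Which Python package should I use to parse TOML files?",
        },
    ],
)
print(response.choices[0].message.content)
```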
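And for cross-checking against a known list of valid packages, a minimal sketch could look like the following. The file names and the flat one-name-per-line allowlist format are assumptions for illustration; in practice this check would hang off your SBOM tooling.

```python
"""Flag declared dependencies that are not on an internal allowlist."""
from pathlib import Path


def read_names(path: str) -> set[str]:
    """Read package names from a file, dropping comments and version pins."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name = line.split(";")[0]  # drop environment markers
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        names.add(name.strip().lower())
    return names


# Assumed file names, for illustration only.
approved = read_names("approved_packages.txt")
declared = read_names("requirements.txt")

unknown = declared - approved
if unknown:
    print("Unapproved dependencies (verify before installing):")
    for name in sorted(unknown):
        print(f"  - {name}")
```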
Secure AI-assisted development
The combination of vibecoding and slopsquatting represents a significant emerging security challenge. As AI continues to transform software development, we must adapt our security practices to address these new risks. The productivity benefits of vibecoding for rapid prototyping are too significant to ignore, but the corresponding security risks require a thoughtful, balanced approach.
By understanding the connection between vibecoding and slopsquatting, and implementing appropriate safeguards, we can continue to leverage AI’s capabilities while minimizing the associated security risks. The key is to maintain human judgment and security awareness even as we embrace the power of AI-assisted development.
Found a funny analogy today on LinkedIn: Waterfall vs Agile vs AI vs Vibe Coding.