Vibecoding and security: the hidden risks of coding with AI
February 6, 2026
Vibecoding — the practice of letting an AI generate code from natural language prompts — has exploded in popularity. Cursor, Copilot, Claude: tools are multiplying and productivity is skyrocketing.
But behind this speed lies a major problem: security.
What AI doesn't tell you
When you ask an AI to "create a login form," it delivers functional code in seconds. What it usually does not do unless you explicitly ask:
- Validate inputs on the server side exhaustively
- Hash passwords with a modern algorithm (bcrypt, argon2)
- Protect against SQL injection in all edge cases
- Manage sessions securely
- Implement rate limiting against brute force
The most common vulnerabilities in AI-generated code
1. SQL and NoSQL injection
AI tends to build queries by string concatenation, especially when the prompt is vague. A query like SELECT * FROM users WHERE id = ${userId} is a wide-open door for attackers.
2. Hardcoded secrets
How many times have you seen AI generate const API_KEY = "sk-..." directly in the source code? Secrets must live in environment variables, never in code.
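The fix is a few lines. A minimal sketch using Node's process.env (the variable name API_KEY is illustrative):

```javascript
// Read the secret from the environment; fail fast if it is missing
// instead of shipping a hardcoded placeholder into production.
function getApiKey() {
  const key = process.env.API_KEY;
  if (!key) {
    throw new Error("API_KEY is not set; define it in the environment, not in code");
  }
  return key;
}
```

Pair this with a .env file that is listed in .gitignore, so the secret never lands in version control.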
3. Lack of validation
AI often generates "happy path" code — everything works when the data is clean. But in production, data is never clean.
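Server-side validation means rejecting bad input before it reaches your business logic. A sketch (the rules and limits here are illustrative, not a complete policy):

```javascript
// Validate untrusted input explicitly and collect every problem,
// rather than assuming the happy path.
function validateSignup(input) {
  const errors = [];
  if (typeof input.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("invalid email");
  }
  if (typeof input.password !== "string" || input.password.length < 12) {
    errors.push("password too short");
  }
  return errors;
}
```

In a real project a schema-validation library does this more thoroughly, but the principle is the same: the server never trusts what the client sends.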
4. Vulnerable dependencies
AI suggests packages it saw in its training data. Some are obsolete, others have known CVEs.
How to protect yourself?
- Never blindly trust generated code
- Run a security audit on your codebase, especially before launch
- Use static analysis tools (ESLint security, Semgrep)
- Test edge cases: what happens with malicious inputs?
- Learn the basics of web security (OWASP Top 10)
Conclusion
Vibecoding isn't the enemy. It's a powerful tool that, like any tool, requires vigilance. Development speed should never come at the expense of your users' security.
Have doubts about your project's security? Get a quick diagnostic to see where you stand.
