Tags: security, vibecoding, AI

AI promises a surge in hacking

June 28, 2025


But not because of some digital Terminator or malicious AGI hacker — for a very simple reason: AIs have no concept of security by default.

Real-world vibecoding examples

While using vibecoding tools, I've encountered the following cases:

Google API key in plaintext on the frontend

Translation: anyone can use Google services at MY expense.
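The usual fix is to keep the key out of the browser entirely: the frontend calls your own backend, and only the server attaches the key when it builds the upstream request. Here is a minimal sketch of that server-side step; the `GOOGLE_API_KEY` variable name and the translation endpoint are illustrative assumptions, not code from the incident.

```typescript
// Hypothetical server-side helper: the key lives only in a server
// environment variable and is appended when the upstream URL is built,
// so it never ships in any bundle the browser can read.
function buildUpstreamUrl(text: string): string {
  const key = process.env.GOOGLE_API_KEY; // stays server-side
  if (!key) throw new Error("GOOGLE_API_KEY is not configured");
  const params = new URLSearchParams({ q: text, key });
  return `https://translation.googleapis.com/language/translate/v2?${params}`;
}
```

The browser only ever sees your own endpoint's response, never the URL containing the key.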

Password verification on the frontend

Translation: everyone knows the super-secret password.
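Password checks belong on the server, against a salted hash, never against a string embedded in client code. A minimal sketch of that server-side pattern, using Node's built-in `scrypt` (the function names here are illustrative):

```typescript
import { scryptSync, timingSafeEqual, randomBytes } from "node:crypto";

// Hash a password with a per-user random salt (server-side only).
function hashPassword(password: string): { salt: string; hash: string } {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 32).toString("hex");
  return { salt, hash };
}

// Constant-time comparison so the check itself leaks no timing info.
function verifyPassword(password: string, salt: string, hash: string): boolean {
  const candidate = scryptSync(password, salt, 32);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

Only the salt and hash are stored; the frontend sends the password over HTTPS and learns nothing but yes or no.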

Database connection on the frontend

Translation: anyone can browse and edit my database.
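The remedy is the same mediation principle: credentials stay in a server-side environment variable (e.g. a `DATABASE_URL`), and the browser can only request operations the backend explicitly allows. A minimal sketch of such an allow-list, with hypothetical query names:

```typescript
// The browser never holds DB credentials; it may only name one of
// these pre-approved operations, which the server runs on its behalf.
const ALLOWED_QUERIES = new Map<string, string>([
  ["listPosts", "SELECT id, title FROM posts ORDER BY id"],
  ["countUsers", "SELECT COUNT(*) FROM users"],
]);

function resolveQuery(name: string): string {
  const sql = ALLOWED_QUERIES.get(name);
  if (!sql) throw new Error(`Unknown operation: ${name}`);
  return sql; // executed server-side with credentials from the environment
}
```

Anything not in the map, including raw SQL smuggled in as a "name", is rejected before it ever reaches the database.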

Unprotected privileged API routes

Translation: anyone can use my API for free without authentication.
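Every privileged route needs a guard that runs before the handler does. The sketch below checks a bearer token in constant time; the token scheme is an illustrative assumption (real apps would more likely use sessions or signed JWTs), but the shape of the check is the point:

```typescript
import { timingSafeEqual } from "node:crypto";

// Guard for privileged routes: reject anything without a valid
// "Authorization: Bearer <token>" header. Constant-time comparison
// avoids leaking how much of the token matched.
function isAuthorized(
  authHeader: string | undefined,
  expectedToken: string,
): boolean {
  if (!authHeader?.startsWith("Bearer ")) return false;
  const presented = Buffer.from(authHeader.slice(7));
  const expected = Buffer.from(expectedToken);
  if (presented.length !== expected.length) return false;
  return timingSafeEqual(presented, expected);
}
```

In an Express-style app this would sit in middleware applied to every privileged route, so forgetting it on one endpoint is impossible by construction.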

Why this is serious

Any service deployed on the internet is constantly scanned by malicious bots probing for every kind of vulnerability. Knowing that, it is terrifying that these solutions ship as-is.

Personally, this isn't a problem: I spot these issues as the code is being written, so they never reach production.

But for beginners who want to prototype, the consequences can be devastating.

The real risk

The danger isn't that AI is malicious. The danger is that it's naive. It produces code that works, but without any consideration for security — unless you explicitly ask for it.

And even then, security best practices can't be summed up in a few prompt rules. It's a profession, constant vigilance, a culture.

Vibecoding without security expertise is like building a house without foundations: it stands until the first gust of wind.


Originally published on LinkedIn.