AI can be exhausting: managing frustrations with Cursor
November 4, 2025
Lesson 9 of 11 learned on Cursor after 15 years in software
Talking to a machine as if it were a human is sometimes deeply inefficient and frustrating. Here are the main pitfalls and how to handle them.
Task too complicated
When the task is too complicated, you can either arm yourself with patience or go fix the problem yourself. It will save you the endless loop of:
- "You're right"
- "I see the problem"
- "Ah, I understand"
- "I found the issue"
...which gives the impression of talking to a wall.
Stubbornness
Sometimes a piece of information is missing, and even if you provide it to the AI, it'll go look for it itself rather than listening to you:
— "Ah, your env file doesn't exist" — It does, it's at the root and complete, the problem is elsewhere
— "Actually your .env is probably incomplete" — No, the variable is properly defined, the problem is elsewhere
— "I'll run a command to check your .env" — No need, everything is properly defined, it was working before
— `$ cat .env | grep CREDENTIALS`
— "Ah! The problem isn't coming from the .env" (╯°□°)╯︵ ┻━┻
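When you hit this loop, it can be faster to produce the evidence yourself and paste it into the chat. A minimal sketch, assuming a `.env` at the project root and a variable named `CREDENTIALS` (both illustrative names, not from the original incident):

```shell
# Prove the variable is defined before the agent re-checks it for you.
# File path and variable name are hypothetical examples.
if grep -qE '^CREDENTIALS=' .env; then
  echo "CREDENTIALS is defined; the problem is elsewhere"
else
  echo "CREDENTIALS is missing from .env"
fi
```

Pasting the actual output up front gives the AI a fact to work from instead of a claim to re-verify.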
Yes-manning
Another annoying bias: whatever your choices or requests, the AI will support you with praise, regardless of whether it's driving you into a wall.
The only workaround: ask up front for blunt, critical responses, and tell it to request clarification whenever it lacks the data to answer reliably.
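In Cursor, this kind of instruction can be baked into project rules so you don't repeat it every session. A minimal sketch of a rules file; the exact wording below is mine, not from the original post:

```text
# .cursorrules (project root)
- Be critical: point out flaws in my approach before agreeing with it.
- Do not praise a decision by default; evaluate it first.
- If you lack the information needed to answer reliably, ask a
  clarifying question instead of guessing.
```

Rules like these only steer the model; they don't guarantee pushback, but they make sycophantic answers noticeably rarer.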
Summary
- A cycle of responses with the problem "identified" without any progress → too complex, use a bigger model or step in manually
- When AI goes in circles, start from scratch
- If you don't explicitly ask for critical feedback, you won't get any
AI is a powerful tool, but like any tool, you need to know when to put it down and take over yourself.
Originally published on LinkedIn.
