Writing tests with AI: the use case nobody wants to do by hand
October 26, 2025
Writing tests with AI
Lessons learned 6/11 on Cursor after 15 years in software
I don't know a single sane person who enjoys writing tests. And yet, it's still the best way to deliver regularly and quickly.
AI is here to help in the following cases.
Starting a new project (TDD)
Once your design document is complete, you can start by asking the AI to write the main tests your system must pass.
This is a pure TDD approach: the tests also help the AI iterate later, since it can run them as it generates the system's code. And they scope the project to exactly the behaviors you're testing.
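As a minimal sketch of what this first step can look like: assuming a hypothetical shopping-cart design document, the AI drafts tests like these before any implementation exists. The `Cart` class below is a stand-in for the code generated afterwards to make them pass.

```python
# Stand-in for the implementation the AI generates after the tests.
class Cart:
    def __init__(self):
        self._items = {}  # sku -> quantity

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


# Tests drafted from the design document, written first (TDD).
def test_new_cart_is_empty():
    assert Cart().total_items() == 0

def test_adding_accumulates_quantities():
    cart = Cart()
    cart.add("SKU-1", 2)
    cart.add("SKU-1", 3)
    assert cart.total_items() == 5

def test_rejects_non_positive_quantity():
    cart = Cart()
    try:
        cart.add("SKU-1", 0)
    except ValueError:
        pass  # expected: the design document forbids zero quantities
    else:
        raise AssertionError("expected ValueError")
```

Because the tests exist first, the AI can rerun them after each generation pass and converge on the design instead of drifting from it.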
End-to-end tests
By describing the business context and how the system is launched, you let the AI organize the different steps of building, launching, and testing your system, and propose tests relevant to your use case — including CI integration.
The process works but requires vigilance: you quickly end up with A LOT of tests. Specifying how many tests you want, and of what kind, reduces the waste.
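The build / launch / test sequence the AI orchestrates can be sketched as follows. This is a minimal stand-in: the "system" here is a hypothetical CLI run as a subprocess, where a real project would build its artifacts, launch its actual service, and hit its API.

```python
# Sketch of an AI-orchestrated end-to-end run: build, launch, assert.
# The subprocess below is a stand-in for a real system under test.
import subprocess
import sys

def build():
    # In a real project: compile, package, run migrations, etc.
    # A no-op in this sketch.
    return True

def launch_and_query(payload):
    # Launch the system as a subprocess, feed it input, capture its answer.
    result = subprocess.run(
        [sys.executable, "-c", "print(input().upper())"],
        input=payload, capture_output=True, text=True, timeout=10,
    )
    return result.stdout.strip()

def test_end_to_end_roundtrip():
    assert build()
    assert launch_and_query("hello") == "HELLO"

test_end_to_end_roundtrip()
```

The same script can run unchanged in CI, which is what makes this workflow worth the vigilance.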
Unit tests
Pass the file containing the classes or functions to test, specifying:
- The business context
- The frameworks used
- An existing test file as a style example (if you have one)
The AI will produce tests consistent with your existing codebase.
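Concretely, the result looks something like this. The pricing helper is a hypothetical example of the file you pass in; the table-driven test mirrors the style of the existing test file given as an example.

```python
# Hypothetical function under test (the file you pass to the AI).
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests the AI drafts, matching the table-driven style
# of the existing test file provided as a reference.
CASES = [
    (100.0, 0, 100.0),   # no discount
    (100.0, 25, 75.0),
    (200.0, 10, 180.0),
]

def test_apply_discount_table():
    for price, percent, expected in CASES:
        assert apply_discount(price, percent) == expected

def test_rejects_out_of_range_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")

test_apply_discount_table()
test_rejects_out_of_range_percent()
```

Giving the style example matters as much as the business context: without it, the AI invents its own conventions and the tests stick out in review.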
What's coming
Frameworks are emerging that express each test as a user story: an AI agent then takes over, navigates the page like a user would, and passes or fails the test.
We're moving from "verify that the code does what we expect" to "verify that the user can do what we promise". It's a major shift in perspective.
Originally published on LinkedIn.
