Why Aren’t Developers Writing Tests? (And How AI Can Change That)

After countless refinement sessions, hours of discussions, and collaborative back-and-forth, the PBI (Product Backlog Item) lands on your plate. You’re eager to dive in. You skim the title, glance at the Acceptance Criteria (AC), and jump straight into the code. The lines flow effortlessly, and by the end of the day, the feature is done. You push it to QA, and the cycle of questions begins:
“Did you notice the first AC isn’t fully implemented?” A 2-minute fix. No big deal.
“The error message here doesn’t match the design spec.” Another 2-minute fix.
What felt like a productive day suddenly turns into a series of small fixes that chip away at your confidence. The elegance of your implementation and the robustness of your API are overshadowed by these “little” details.
Sound familiar? Don’t worry—you’re not alone. At some point, we all start trusting our memory to retain every nuance of the AC, or we convince ourselves we “get” the PBI because we hashed it out for weeks.
But here’s the truth: Missing details are costly.
The fix? A combination of discipline, process, and leveraging modern tools—specifically AI-powered test case generation. Let’s explore how to make this work.
The Problem with Post-Code Unit Testing
Most developers write unit tests after the code. If AI is involved, it’s often used to auto-generate tests based on what has already been written. While convenient, this approach is flawed. Why?
Tests mirror the code, not the intent. AI-generated tests for post-written code often validate what the code does rather than what it’s supposed to do.
Missed edge cases. Without clear test cases derived from the business requirements, you’re more likely to overlook edge cases.
The solution is simple but powerful: Flip the process. Write test cases first, with AI as your partner.
A Step-by-Step Workflow for AI-Driven Test Planning
1. Start with Acceptance Criteria Validation
Use AI to evaluate the AC.
Ask the AI: “Can you summarize the business requirements based on this AC?”
Cross-check: Does this match the intent of the business request?
If not, refine the AC until it clearly reflects the business need.
2. Generate Test Cases Before Coding
Prompt the AI to create test cases for each AC. Include:
Functional cases: Does the feature work as expected?
Edge cases: What happens when inputs are invalid or the system is under stress?
Negative cases: What should not happen?
Example Prompt for AI:
Based on this acceptance criterion: “The system should allow users to reset their password if they provide a valid email,” generate test cases. Include functional, edge, and negative cases, such as invalid email formats, rate limits, or unregistered emails.
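To make the AI’s output reviewable before any code exists, it helps to capture the generated cases as structured data rather than prose. Here is a minimal sketch of what that might look like for the password-reset AC above; the specific case names and the `TestCase` shape are illustrative assumptions, not output from any particular tool:

```typescript
// Hypothetical test-case list an AI might produce for the password-reset AC.
// Case names and the TestCase type are illustrative assumptions.
type TestCase = { name: string; category: "functional" | "edge" | "negative" };

const passwordResetCases: TestCase[] = [
  { name: "valid registered email receives a reset link", category: "functional" },
  { name: "uncommon but valid email format (user+tag@example.com) is accepted", category: "edge" },
  { name: "requests beyond the rate limit are rejected", category: "edge" },
  { name: "malformed email (missing @) is rejected", category: "negative" },
  { name: "unregistered email does not reveal whether an account exists", category: "negative" },
];

// Sanity check: every category from the prompt should appear at least once.
const categories = [...new Set(passwordResetCases.map((c) => c.category))].sort();
console.log(categories.join(",")); // edge,functional,negative
```

Keeping the cases as data makes it easy to diff them against the AC during review, and each entry later becomes the title of a unit test.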
3. Categorize Test Cases
Divide test cases into functional areas:
API-level tests
UI tests
CRUD operations
This step ensures that each part of your system gets adequate attention. A structured test matrix can be helpful here.
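One way to keep the matrix honest is to encode it and check coverage mechanically. The sketch below assumes three hypothetical functional areas matching the list above; the entries themselves are made-up placeholders:

```typescript
// A minimal test-matrix sketch. Areas mirror the categories in the text;
// the individual entries are hypothetical examples.
type Area = "API" | "UI" | "CRUD";
type Kind = "functional" | "edge" | "negative";

interface MatrixEntry { area: Area; kind: Kind; description: string; }

const matrix: MatrixEntry[] = [
  { area: "API", kind: "functional", description: "password-reset endpoint succeeds for a registered email" },
  { area: "API", kind: "negative", description: "password-reset endpoint rejects a malformed email" },
  { area: "UI", kind: "functional", description: "reset form shows a confirmation message on submit" },
  { area: "CRUD", kind: "edge", description: "expired reset token is deleted on use" },
];

// Coverage check: flag any functional area with no test cases yet.
const uncovered = (["API", "UI", "CRUD"] as Area[]).filter(
  (a) => !matrix.some((e) => e.area === a)
);
console.log(uncovered.length === 0 ? "all areas covered" : `missing: ${uncovered.join(",")}`);
```

A check like this can run in CI, so a new functional area added to the matrix without any cases gets flagged before coding starts.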
4. Collaborate with QA
Share the test cases with your QA team (if applicable).
Validate the coverage of edge and negative cases.
Elicit their input—they’re experts at breaking your code.
5. Generate Unit Tests
With categorized test cases, prompt the AI to generate unit tests tailored to specific functionality.
Include relevant files or templates in your prompt.
Review and refine the generated tests to ensure they align with the requirements.
Example:
- AI Output (Basic):

```typescript
it('should return a 200 status for valid input', async () => {
  const response = await myFunction(validInput);
  expect(response.status).toBe(200);
});
```

- Improved by Developer:

```typescript
it('should return a 200 status and correct data structure for valid input', async () => {
  const response = await myFunction(validInput);
  expect(response.status).toBe(200);
  expect(response.data).toEqual({
    id: expect.any(String),
    name: expect.any(String),
    createdAt: expect.any(Date),
  });
});
```
6. Start Coding with Confidence
Now that you have a comprehensive suite of test cases, dive into the code. By the time you’re done, you’ll have confidence in your implementation and far less back-and-forth with QA.
AI Tools to Streamline the Process
Two standout tools for this workflow are ChatGPT and GitHub Copilot. Both excel at generating code snippets, creating test cases, and summarizing requirements. However, for those who want more control and privacy, there are exciting possibilities with fully local AI setups. Tools like Open WebUI or custom fine-tuned LLMs allow developers to integrate AI into their workflow without sharing sensitive data externally.
Curious? A future post will dive into setting up your own local AI environment and exploring its potential in streamlining development workflows.
Why Invest the Time?
Yes, this process takes 2–4 hours upfront. But compare that to the time lost in:
Multiple QA cycles.
Waiting for PR approvals.
Rerunning pipelines.
Retesting across environments.
The investment pays off in spades. Whether you’re part of a team or working solo, the habit of test-first development builds confidence, improves your reputation, and ultimately leads to faster delivery.
Building Your Reputation as a Developer
This workflow isn’t just about writing better tests—it’s about building your reputation. The developer who pushes changes with minimal rework is invaluable. While being seen as a “fixer” is great, being known as a “1-and-done” developer is even better.
AI isn’t just a tool; it’s an opportunity to improve your craft. By using it thoughtfully, you can streamline your process, reduce errors, and focus on delivering value—not debugging the little details.