I Stopped Writing Tests and My Code Got Better

This content originally appeared on DEV Community and was authored by Sahil Sahu

Yeah, I said it. Come at me.

Before you start typing that angry comment, hear me out. I'm not saying "don't test." I'm saying I stopped writing tests the way everyone tells you to.

The Problem Nobody Talks About

For years, I followed the gospel: Write tests first. Test everything. 100% coverage. TDD or bust.

My codebase had 3,247 tests. Coverage was 94%. CI took 23 minutes to run. I felt like a responsible adult developer.

Then I shipped a bug that wiped out $12k worth of data.

The tests? All green. ✅

What Actually Happened

The bug was simple: an edge case in our payment processing where users could submit the same transaction twice within 50ms. Race condition. Classic.

Why didn't the tests catch it? Because I tested what I thought about, not what actually breaks.

Our test suite was massive, but it was testing:

  • Happy paths (90% of tests)
  • Edge cases I imagined (9% of tests)
  • Whatever got me to 94% coverage (1% of tests)

Zero tests for the actual user behavior that broke things.
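For what it's worth, the standard fix for this class of bug is an idempotency key: the client sends a unique key per logical transaction, and the server refuses to process the same key twice. Here's a minimal sketch of the idea; the in-memory `Map` is an assumption for illustration only — a real system needs a unique database constraint or an atomic Redis `SET NX` so the check holds across processes.

```javascript
// Sketch of duplicate-submission protection via an idempotency key.
// The in-memory Map is illustrative; production needs an atomic store
// (unique DB constraint, Redis SET NX) shared across all server instances.
const processed = new Map();

function submitPayment(idempotencyKey, chargeFn) {
  if (processed.has(idempotencyKey)) {
    // Replay within the window: return the first result, never charge twice.
    return { duplicate: true, result: processed.get(idempotencyKey) };
  }
  const result = chargeFn();
  processed.set(idempotencyKey, result);
  return { duplicate: false, result };
}
```

The first submission runs the charge; any replay with the same key gets the cached result back instead of a second charge.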

The Uncomfortable Truth

Most tests are just checking if functions return what you told them to return. That's not testing, that's just... writing the same logic twice.

```javascript
// My old tests looked like this
describe('calculateTotal', () => {
  it('should add tax to subtotal', () => {
    expect(calculateTotal(100, 0.1)).toBe(110);
  });
});

// Cool story. But did I test:
// - What if subtotal is negative?
// - What if tax is a string "10%"?
// - What if this runs 1000 times per second?
// - What if the user's locale formats numbers differently?
```
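Those edge cases are cheap to guard against at the function boundary. Here's a hypothetical hardened version — the validation rules are my assumption, not the original implementation:

```javascript
// Hypothetical hardened calculateTotal: rejects the garbage inputs listed
// above instead of silently producing NaN or string concatenation.
function calculateTotal(subtotal, taxRate) {
  if (typeof subtotal !== 'number' || !Number.isFinite(subtotal) || subtotal < 0) {
    throw new TypeError(`subtotal must be a non-negative finite number, got ${subtotal}`);
  }
  if (typeof taxRate !== 'number' || !Number.isFinite(taxRate) || taxRate < 0) {
    throw new TypeError(`taxRate must be a non-negative finite number, got ${taxRate}`);
  }
  // Round to cents so floating-point drift (0.1 + 0.2 problems) can't leak out.
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}
```

Now `calculateTotal(100, "10%")` fails loudly in a test instead of quietly in production.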

What I Do Now Instead

1. I Write Fewer, Better Tests

Instead of 3,247 tests, I have about 400. But these 400 tests are mean.

```javascript
// New style: Test like users actually break things
describe('Payment processing under stress', () => {
  it('handles rapid duplicate submissions', async () => {
    const userId = 'test-user';
    const paymentData = { amount: 100, card: '4242...' };

    // Fire 10 identical requests simultaneously
    const promises = Array(10).fill(null).map(() =>
      processPayment(userId, paymentData)
    );

    const results = await Promise.all(promises);
    const successful = results.filter(r => r.success);

    // Only ONE should succeed
    expect(successful.length).toBe(1);
  });
});
```

2. I Test Integration, Not Units

Unit tests are overrated. There, I said it again.

Your calculateTax() function works fine in isolation. But does it work when:

  • The database returns null
  • The API times out
  • The user's session expires mid-request
  • Redis goes down

That's what breaks in production. Not your pure functions.
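One way to exercise those failure modes without real infrastructure is to inject failing dependencies. A sketch — `getUserProfile` and its `deps` shape are hypothetical names I'm using for illustration:

```javascript
// Sketch: a handler that depends on a cache and a database, written so
// injected failures (Redis down, DB returning null) degrade gracefully.
// getUserProfile and the deps shape are illustrative, not a real API.
async function getUserProfile(userId, deps) {
  let cached = null;
  try {
    cached = await deps.cache.get(userId); // Redis may be down
  } catch (err) {
    cached = null; // treat a dead cache as a miss, don't crash
  }
  if (cached) return cached;

  const row = await deps.db.findUser(userId);
  if (row === null) {
    // The DB "returned null": answer with an explicit not-found,
    // not a TypeError from dereferencing null.
    return { error: 'not_found' };
  }
  return { id: row.id, name: row.name };
}
```

In a test you pass `deps` whose `cache.get` rejects and whose `db.findUser` resolves to `null`, and assert the handler still returns something sane.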

3. I Use Property-Based Testing

This changed everything.

```javascript
import fc from 'fast-check';

// Instead of testing specific cases, test properties
test('user input never causes crashes', () => {
  fc.assert(
    fc.property(
      fc.string(), // any string
      fc.integer(), // any integer
      fc.object(), // any object
      (name, age, metadata) => {
        // This should NEVER throw, no matter what garbage we pass
        expect(() => {
          createUser(name, age, metadata);
        }).not.toThrow();
      }
    )
  );
});
```

This generates thousands of random test cases. It found 7 bugs in my code that I would NEVER have thought to test.

4. I Test in Production

Controversial? Maybe. Effective? Absolutely.

```javascript
// Feature flags + monitoring = production tests
let result;
if (featureFlags.newPaymentFlow) {
  try {
    result = await newPaymentProcessor.process(payment);
    metrics.increment('new_payment_flow.success');
  } catch (error) {
    metrics.increment('new_payment_flow.error');
    logger.error('New payment flow failed', { error, payment });

    // Fall back to the old flow so the user never sees the failure
    result = await oldPaymentProcessor.process(payment);
  }
}
```

I know in real-time if something's broken. My test environment never caught the issues that production monitoring does.

The Results

6 months after this switch:

  • Tests run in 4 minutes instead of 23
  • Found 3x more bugs before users did
  • Deployments went from scary to boring (in a good way)
  • Onboarding new devs is faster - they understand 400 good tests way easier than 3k mediocre ones

What I'm NOT Saying

I'm not saying "don't test." I'm saying:

❌ Stop writing tests just to hit coverage numbers

❌ Stop testing only happy paths

❌ Stop writing tests that just repeat your implementation

✅ Start testing like users actually use (and break) your app

✅ Start testing the integration points where things actually fail

✅ Start monitoring production like it's part of your test suite

The Backlash I'm Ready For

"But TDD!"

TDD is great for well-defined problems. But most of our work isn't well-defined. Requirements change. You'll rewrite those tests 5 times.

"But code coverage!"

Coverage tells you what you executed, not what you tested. 100% coverage with bad tests is worse than 60% coverage with good tests.

"But best practices!"

Best practices from 2010 don't apply to 2025 codebases. We have better tools now. Use them.

Try This Instead

For your next feature:

  1. Write ONE integration test that exercises the whole flow
  2. Add property-based tests for any user input
  3. Test the failure modes (timeouts, null responses, etc.)
  4. Add monitoring to catch what you missed
  5. Ship it
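For step 3, failure modes like timeouts can be simulated without touching real infrastructure: race the dependency against a timer and assert the caller handles the loss. A minimal sketch — `fetchWithTimeout` and `slowApi` are hypothetical helpers, not from the article:

```javascript
// Sketch for step 3: verify timeout handling by racing a slow dependency
// against a deadline. fetchWithTimeout and slowApi are illustrative names.
function fetchWithTimeout(promiseFactory, ms) {
  return Promise.race([
    promiseFactory(),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timeout')), ms)
    ),
  ]);
}

// A dependency that answers far too late:
const slowApi = () =>
  new Promise(resolve => setTimeout(() => resolve('ok'), 1000));
```

A test then calls `fetchWithTimeout(slowApi, 50)` and asserts it rejects with the timeout error instead of hanging.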

You'll find more bugs, write less code, and ship faster.

Your Turn

Am I completely wrong? Probably partially. Tell me why in the comments.

Already doing something like this? Share your approach. Let's learn from each other.

Still writing unit tests for getters and setters? I'm sorry for your loss.


Hit that ❤️ if this made you question your test suite. Drop a 💀 if you think I'm about to get fired.

#testing #controversial #webdev #javascript #devops


