
Comment by graypegg

3 months ago

Tests are a tool for you, the developer. They have good effects for other people, but developers are the ones who interact with them directly. When something fails, it's a developer who has to figure out which change introduced the regression. They're just tools, not some magic incantation that protects you from bugs.

I think the author might be conflating good tests with good-enough tests. If IOService is owned by a different team, I expect them to ensure IOService behaves as it should, probably using tests. The reason we mock IOService is that it's a variable I can remove, which makes the errors I get from a test run MUCH easier to read: we're looking only at the logic in one module/class/method/function. Conceptually, mocking things in tests is less pure, since I'm not testing the entire app we actually ship, but full no-mocks E2E tests are harder to write and harder to interpret when something goes wrong. I think that makes them a less useful tool.
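To make that concrete, here's a minimal sketch in Python of removing the dependency as a variable. `IOService` and `ReportBuilder` are hypothetical stand-ins for "another team's service" and "the logic under test", not anything from the article:

```python
from unittest.mock import Mock

class ReportBuilder:
    """The one unit we want to test, isolated from its dependency."""

    def __init__(self, io_service):
        self.io_service = io_service

    def build(self):
        rows = self.io_service.fetch_rows()
        return [r.upper() for r in rows]

# The real IOService (network, disk, another team's code) never runs;
# a Mock stands in, so a failing assertion points at build() alone.
io = Mock()
io.fetch_rows.return_value = ["alpha", "beta"]

assert ReportBuilder(io).build() == ["ALPHA", "BETA"]
```

If that assertion fails, the error message names one method and one input, instead of a whole stack of real I/O.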

The thing I do agree on is that your mocks shouldn't model only the happy path. I'd say if something can throw an exception, you should include that in a mock at minimum (as a stubbed method that always throws). But making it mandatory to reimplement your dependencies, or to rely on the real ones in tests, is going to mean you write fewer tests and get worse failure messages.
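A quick sketch of that "stubbed method that always throws" idea, again with a hypothetical `IOService` stand-in and a made-up `build_report` function:

```python
from unittest.mock import Mock

# A stub whose fetch_rows always raises, modelling the unhappy path
# alongside the happy one.
io = Mock()
io.fetch_rows.side_effect = IOError("disk unavailable")

def build_report(io_service):
    # Hypothetical caller: the error-handling branch is what we
    # actually want to exercise here.
    try:
        return io_service.fetch_rows()
    except IOError:
        return []

assert build_report(io) == []
```

Triggering that same failure with a real dependency would mean actually breaking a disk or a network, which is exactly why error paths are where mocks earn their keep.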

Like everything, it depends eh?

This 100%. I'm not sure how the author managed to create consistent failure cases using real service dependencies, but in my code I find mocks to be the easiest way to test error scenarios.

With I/O in general, I've observed that socket, protocol, and serialization logic are often tightly coupled.

If they're decoupled, there's no need to mock protocol or serialization.
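As a sketch of that point: once serialization is a pure function, it can be tested with plain bytes, no socket and no mock involved. (`decode_message` is an illustrative name, not from the thread.)

```python
import json

def decode_message(raw: bytes) -> dict:
    """Pure serialization logic: bytes in, dict out, no I/O."""
    return json.loads(raw.decode("utf-8"))

# Tested directly with a literal payload; nothing to mock.
assert decode_message(b'{"id": 1}') == {"id": 1}
```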

There's a cliché, "don't call me, I'll call you," offered as advice on how to flip the call stack. Sorry, no example handy (on mobile). But the gist is to avoid nested calls by flattening the code paths: less like a Russian doll, more like Lego instructions.
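Since the comment above has no example handy, here's one possible reading of "flattening" in Python. All names (`parse`, `apply_rules`, `handle`) are invented for illustration:

```python
import json

# Flattened call path: a top-level driver calls each pure step in turn
# ("I'll call you"), instead of each step reaching into the next one
# (the Russian-doll shape). Each step can be tested alone with plain values.

def parse(raw: bytes) -> dict:
    """Step 1: decode. Pure: bytes in, dict out."""
    return json.loads(raw)

def apply_rules(msg: dict) -> dict:
    """Step 2: business logic. Pure: dict in, dict out."""
    return {**msg, "valid": msg.get("id", 0) > 0}

def handle(raw: bytes) -> dict:
    # The driver is the only place that sequences the steps.
    msg = parse(raw)
    return apply_rules(msg)

assert handle(b'{"id": 3}') == {"id": 3, "valid": True}
```

Because `parse` never calls `apply_rules` itself, neither step needs a mock of the other: each is just a function you can call with a literal.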

In defense of mocks, IoC frameworks like Spring pretty much necessitate doing the wrong thing.

> E2E tests are harder to write and interpret when something goes wrong.

If the test is hard to debug when it goes wrong, then I assume the system is hard to debug when something goes wrong. Investing in making that debugging easier unlocks more productivity. Of course it depends on how often bugs show up, how often the system changes, the risks of system failure to the business, etc. The productivity boost may not be worth the investment of making the system debuggable. In my cases, it usually is worth it.

  • I think it's always going to be easier to debug one thing than everything, regardless of how a system is built. If you're not mocking anything, then anything could have gone wrong anywhere.

    But also, if you're able to fix things effectively from E2E test results thanks to a focus on debuggability, then that's great! I think it's just the framing of the article I have trouble with. It's not an all-or-nothing thing. It's whatever effectively helps the devs involved understand and fix regressions. I haven't seen a case where going all in on E2E tests made that easier, but I haven't worked everywhere!