Test coverage is almost the perfect illustration of Goodhart’s law. Good programming practices do result in high test coverage, and coverage is very easy to measure, but it is also very easy to fake with useless “tests”. So once coverage becomes the target, the number goes up but stops being meaningful.
While it doesn't eliminate the problem entirely, you can also run mutation tests, which check that your unit tests actually verify behaviour rather than just execute all the code.
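Rough sketch of the idea, using a made-up `is_adult` function and plain pytest-style tests rather than any particular mutation framework: the tool flips an operator and re-runs the suite, and only the test with real assertions notices.

```python
# Hypothetical function under test.
def is_adult(age):
    return age >= 18

# Coverage-only "test": it executes the code but asserts nothing, so it
# passes no matter what the function returns, and every mutant survives it.
def test_is_adult_runs():
    is_adult(18)

# Meaningful test: if a mutation tool rewrites ">=" to ">", is_adult(18)
# becomes False, this test fails, and the mutant is reported as killed.
def test_is_adult_boundary():
    assert is_adult(18) is True
    assert is_adult(17) is False
```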
High coverage isn't enough but, in my experience, it's a great place to start.
I've written a depressingly high quantity of code in my career that blows up literally the first time it runs. I'd much rather that happen in a unit test than in production.
Any test that exercises a given branch is better than nothing.
Coverage can tell you what you didn't test, but it can't tell you what you did test.
> Any test that exercises a given branch is better than nothing.
I disagree with this. If you have a test that doesn't actually test anything, you can't tell that you're not really testing that branch. No test at all is better than a bad test, because a visible gap is easier to notice and fix than false confidence.
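To make that concrete, here's a hypothetical example (invented `apply_discount` function): the member branch shows up as covered, yet the first test would still pass if the discount logic were deleted or wrong, and unlike a missing test, nothing in the report flags it.

```python
import pytest

# Hypothetical function under test.
def apply_discount(price, is_member):
    if is_member:
        return price * 0.9
    return price

# This "test" executes the member branch, so the coverage report marks it
# green, but the assertion would still pass if the discount logic were
# deleted or completely wrong.
def test_member_discount_runs():
    result = apply_discount(100, is_member=True)
    assert result is not None

# For contrast: a test that pins the behaviour. If this test simply didn't
# exist, the branch would show as uncovered and the gap would be visible.
def test_member_discount_amount():
    assert apply_discount(100, is_member=True) == pytest.approx(90)
```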