That's true. But then, engage with the question of whether those promises were reasonable. "Even if we have the capability of complying with a request for information on a specific user, we will resist the courts". There's a reason providers tend not to make that promise: keeping it can involve going out of business.
That's not the promise they made; rather, it was that they had not developed, and would not develop, any means to circumvent the protection/encryption for paying users' accounts.
The 'not complying' is only a side effect; it's really a refusal to do work for the government in opposition to Lavabit's own business promises. I'm not sure the government has a right, in any instance, to compel work to meet its specified ends.
Here my biases as a secure software engineer may be coloring my comments, because to my mind, building an architecture that solicits sensitive data from clients but can only preclude disclosure of those secrets through enormous engineering effort is the same thing as conceding that such disclosure is possible.
Imagine a mail service that operated solely as a Tor hidden service and required all users to use PGP, for instance by checking the contents of mail messages to ensure they were encrypted, and rejecting them if they weren't. That's a service that might reasonably make a promise not to cooperate with a court order.
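As a minimal sketch of what that "reject anything that isn't encrypted" policy might look like, here's a Python fragment using the standard email library. It assumes inline or PGP/MIME ASCII-armored messages; the function names and policy details are illustrative, not anyone's actual implementation.

```python
# Sketch: refuse to accept any message that isn't PGP-encrypted.
# Assumes ASCII-armored PGP (inline or PGP/MIME); illustrative only.
import email
from email.message import Message

PGP_ARMOR_HEADER = b"-----BEGIN PGP MESSAGE-----"

def is_pgp_encrypted(msg: Message) -> bool:
    """Return True only if the message appears to be PGP-encrypted."""
    # PGP/MIME: ciphertext lives inside a multipart/encrypted container.
    if msg.get_content_type() == "multipart/encrypted":
        return True
    # Inline PGP: every text part must carry the armor header.
    for part in msg.walk():
        if part.get_content_maintype() == "text":
            body = part.get_payload(decode=True) or b""
            if PGP_ARMOR_HEADER not in body:
                return False
    return True

def accept_or_reject(raw_message: bytes) -> bool:
    """Gatekeeper a hypothetical mail server could call before queueing mail."""
    return is_pgp_encrypted(email.message_from_bytes(raw_message))
```

A service built this way never holds plaintext it could be ordered to hand over, which is what would make the "we won't cooperate" promise credible.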
Lavabit didn't have that system, and instead had to make a different promise: that they would shutter the enterprise before cooperating with a court. And so they did.
That's absurd, though. The absence of getpeername calls in their frontend does not constitute an inability to comply with a pen register order.
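To make the point concrete, here's a minimal Python socket sketch (an assumed toy server, not Lavabit's actual frontend) showing that the peer address is handed to the server at accept() time, so logging connection metadata for a pen register is a one-line change rather than an architectural impossibility.

```python
# Sketch: a toy TCP frontend. Whether or not the application ever calls
# getpeername(), the peer's address arrives with every accepted connection
# and can trivially be logged. Hypothetical example, not real service code.
import socket
import logging

logging.basicConfig(level=logging.INFO)

def serve(host: str = "0.0.0.0", port: int = 1143) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()            # peer address arrives here
            peer = conn.getpeername()            # ...and is also one call away
            logging.info("connection metadata: %s", peer)  # pen-register-style log
            conn.close()
```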
There are costs of doing business, and one of them is maintaining the ability to comply with lawful court orders. You are completely wrong about the government lacking the means to compel compliance.
Do you think it's more common for businesses to studiously avoid making promises that they might one day be unable to keep, or to make promises for temporary competitive advantage without worrying whether 'unforeseen exigent circumstances' might require those 'promises' to be broken? I think your conclusion is correct, but my assessment would be: personal integrity or business success, choose one.
Edit: To tone that down a bit, "Moral Mazes" by Robert Jackall is an excellent, if academic, work on the ways in which corporate and government ethics differ from commonly espoused personal ethics. I find it a valuable key to understanding attitudes toward conscientious leakers such as Snowden. I am implying a value judgment, but I realize the details are complex.
"Moral Mazes" seems to have become a cultural signifier, meant to evoke a whole series of positions and beliefs about the trustworthiness of large organizations. When someone drops "moral mazes" in a conversation, I read that as a shorthand; a more intellectually credible way of saying "the whole system is out of order!"
Generally: I think that when the core promise of your business is that you'll do everything you can to resist incursions on user privacy, then yes, it should be pretty common for those promises to be scrutinized.
> "Moral Mazes" seems to have become a cultural signifier
I'm sure it's that too, but I've currently got it checked out as an interlibrary loan serving as my bedtime reading, and I'm finding it really insightful. But perhaps this is because I'm starting from a point of bewilderment as to what motivates most people to act as they do.
> a more intellectually credible way of saying "the whole system is out of order!"
But the miracle is that rather than being out of order, the system mostly works, and tends to keep working. What I like about the book is that it strives to explain the situation from the inside as a mostly coherent belief system, rather than critiquing it from outside as untenable.