I'll say what I said the last time something like this came up, which was something like last week: It is impossible for you to prove that you've destroyed the data. That requires a Trent, a trusted party; it is unverifiable.
Especially in the US, Trent is one NSL away from being falsely trusted (see Lavabit). In fact, as far as I can see, you're not even positively claiming you are destroying the messages: "We may retain personal information indefinitely." Are you able to explain that item in the privacy policy?
In general, please try not to design cryptosystems which require Trent in the post-Snowden era. That is a sign that you've failed in your design, are trying to solve the wrong problem, or aren't being creative enough.
I think the point is that, unless you have insider information, it's impossible to trust any third-party product that works with these assumptions.
Really, if you're trying to share a file with someone over the Internet, your best bet is public-key encryption. You can safely share an ASCII-armored GPG file over something like Pastebin (or email) without worrying about the confidentiality of the message. The problem, of course, is that there is still (somewhat public) evidence that the communication took place.
Web-based end-to-end encrypted (and, better yet, ephemeral) messaging isn't really solved yet. I think the "grandparent" poster is just trying to illustrate that publishing the same idea -- all hinging on "but really, trust us!" -- doesn't make much difference.
More or less, yes. There's already plenty of snake oil out there claiming to erase ciphertext or keys - with zero verifiability, and often the claims have been demonstrably false. I can name two right off the bat: Snapchat and CryptoLocker.
If you want ephemeral communications, GPG actually won't do the trick: its asymmetric encryption subkeys are pretty long-lived, while ephemeral communications have key lifespans measured in seconds, not months or years. The closest you can get to what you want right now is probably Axolotl as used by TextSecure/Signal, or perhaps OTR, or perhaps Pond: good designs try not to need to trust third parties to destroy things, and instead try to make sure third parties never see things that would be too useful to Eve or Mallory in the future. (Even there, metadata is still a nightmarish concern: hard to protect and very valuable to Eve, sometimes more so than content.)
Even trusting the first or second party to clear things, you may have some problems. Did you really clear that memory? Are you certain the compiler didn't 'helpfully' optimise out your memset()? (See also: explicit_bzero(), etc.) Did Bob secretly take a copy, or can you trust him not to? (You pretty much have to trust Bob to follow the protocol there; that is the impossible problem behind DRM. If Bob wants to cheat, he has physical access and all the time in the world: he always wins.) And even if Bob is honest, when jackboots bust down his door, does Alice still have to worry about a cold-boot attack? (Probably, yes.) Or Mallory could play really dirty and root Alice's or Bob's machine, which, short of the $5 wrench/rubber-hose attack, is usually the easiest route. (See also: FireWire and Thunderbolt DMA attacks, exploitable USB stack bugs, etc.)
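To make the memset() pitfall above concrete, here's a minimal C sketch of the classic portable workaround: calling memset() through a volatile function pointer, which the compiler may not assume it can elide as a dead store. This is illustrative only, not vetted production code - where available, explicit_bzero() or C11's memset_s() are better choices.

```c
#include <assert.h>
#include <string.h>

/* A plain memset() on a buffer that is never read again can legally be
 * removed by the optimiser (dead-store elimination), leaving key material
 * in memory. Routing the call through a volatile function pointer means
 * the compiler cannot prove which function is called, so it cannot
 * eliminate the store. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

void secure_zero(void *p, size_t n)
{
    memset_v(p, 0, n);
}
```

Note this only addresses the compiler; it does nothing about copies the OS may have made (swap, core dumps, hibernation files), which is a separate fight.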
It's possible to try trusted ICs with tamper-resistant, low-retention EEPROM storage, such as you might find on a specialised smartcard/crypto token or TPM - Pond tries to use a TPM's key storage if one is available, I think? - but these devices are often black boxes with far too little external auditing from the good guys, and proving they are free of tampering remains a blind spot. (There are a couple of open-source efforts to develop trusted cryptography/security cores, https://cryptech.is/ for example, but verifying that hardware came from its claimed sources and wasn't trojaned even as low as the gate-doping level is Very Hard™ - much harder than deterministic assembly/compilation for software.) They may be trusted by some, but they have a long way to go to be trustworthy.
Rolling this out with a pretty website is lovely, but doesn't help with any of these problems. Worse, it may give people a false sense of security. Please, no. The doghouse has enough dogs in it already.
While I don't disagree with anything you've written, depending on your risk appetite there is a difference between compromises that require physical access and those that don't. Not that I think this particular product is a good idea at all, but not everything needs full local security.
For some users a secure messaging protocol that protects messages as long as Alice & Bob's machines aren't rooted is Good Enough, and I don't see anything wrong with creating a product that offers this (as long as it's made explicit that this is the case - Google's End-to-End is a good example of providing this level of security and being clear about this limitation).
Also, re: metadata, I really liked the work in Pond to massage the data stream so that it doesn't look like encrypted data - which, while it certainly doesn't solve the metadata problem, is one step along that path. And yes, it does use a TPM if available.
I'd be very interested to hear more about TPMs that are open, well audited and trusted, if such a thing exists.