Making email more modern with JMAP (fastmail.blog)
327 points by mfsch on Aug 16, 2019 | 166 comments


It will be interesting to see how this develops.

I've spent the last month implementing and testing IMAP support for my app [1].

It has... not been very fun.

Before, we were limited to providers with a REST API, so implementing IMAP should in theory allow us to support a much wider range of email servers with a much lower support burden.

Although the protocol spec is tight, it seems like some providers randomly bug out or hang [2], respond with custom error messages, offer bespoke functionality [3], or generally don't comply with the spec in breaking ways [4].

As a result I'm more worried about this going forward than I had hoped.

I'll now also happily support anything that can help modernize email, so I'll apply to the developer program and will be watching JMAP closely.

[1]: https://leavemealone.app/

[2]: Yahoo Mail always down (https://downdetector.com/status/yahoo-mail)

[3]: https://developers.google.com/gmail/imap/imap-extensions

[4]: https://github.com/mscdex/node-imap/issues/775


Having worked with IMAP myself, the spec is vague in a lot of places. It's a bad spec.

We had our own IMAP server implementation and had all sorts of problems, even between different versions of Outlook.

Each project had an inbox which you could cc in, resulting in huge inboxes (for the 2000s at least). You could also drag and drop an email you'd received into the project folder to share it with colleagues. And we had mysterious disappearing emails, but only at some clients.

One (of the many) problems I remember is that messages could have an ID, but the spec declared that it could change between sessions, so one version of Outlook stored it in a short uint (max 65535), because why would you need more for a temporary ID? So have fun if you had anything more than that. And of course the client had different versions of Outlook at different sites, so it took ages to even figure out what was happening.

Another was that every request required several back-and-forths, messages about the messages, announcing the payload and the number of messages, but the spec was gleefully vague about the order in which this should happen. So back then, what worked for one email client resulted in an empty inbox for another.

I don't think we ever got it working properly with Firebird.


Validation of UIDs is thankfully something I don't have to worry about. It looks like a total nightmare.


Outlook is known to have a horrible IMAP implementation.


I've found Outlook's not too bad actually, and at least they support ActiveSync and MAPI for desktop and mobile. I always had the most problems with Gmail over IMAP.


Could you highlight what you consider vague?


Your app looks really cool! I've been looking for something like this for a while now to deal with my crazy inbox. You've made a lot of right choices like opt-in cookies, credit-based purchases, and audit logs, so keep on keeping on! Thanks and best of luck!


Hey thanks! :) Glad to hear you like it!


In the past I've used a third-party component [1] for IMAP; it was like outsourcing all the work of dealing with the quirks of each server.

[1]: https://sockettools.com


I hope it does go somewhere, as IMAP isn't well suited to mobile, and neither is webmail.


It's still far better than POP. It's really nice how you can let the server decode the structure of a message and then selectively download only the parts you actually want to display.

(Yes, I once wrote a mobile IMAP client.)


This is ironic, as IMAP was designed specifically for mobile and remote machines.

The definition of those terms is of course quite different today, so my statement isn't meant to contradict yours. I doubt Mark ever saw a laptop, at least while working on IMAP. IMAP was a real step forward at the time.


It's literally a virtual filesystem and a terrible one at that.


Why is IMAP not well suited to mobile?


Too many network round trips! Imagine an account with 3 folders that you want to synchronize. You need at least to select each folder, examine whether there is something new, and fetch the needed data. That's at least 6 network calls. You can do all of that with just one call in JMAP.

Some servers support only basic IMAP and do not offer the QRESYNC and CONDSTORE extensions. So to detect a deleted or moved email you need to fetch the entire list of UIDs to see whether it is still there. A lot of unneeded network calls just to fully synchronize client state.

I don't even want to start on the fact that some providers like Outlook don't offer the UTF8 extension, so they cannot implement good search across different languages.
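
To make the batching concrete, here is a rough sketch (in Python, using method names from RFC 8620/8621; the account id, mailbox id and API URL are invented for illustration) of one JMAP request that refreshes mailbox state, queries the inbox and fetches headers in a single round trip:

    import json

    request = {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [
            # 1. refresh mailbox counters (unread counts etc.)
            ["Mailbox/get", {"accountId": "a1", "ids": None}, "0"],
            # 2. find the newest messages in the inbox
            ["Email/query", {"accountId": "a1",
                             "filter": {"inMailbox": "mb-inbox"},
                             "sort": [{"property": "receivedAt", "isAscending": False}],
                             "limit": 30}, "1"],
            # 3. fetch headers of whatever step 2 returned, via a back-reference
            #    instead of another round trip
            ["Email/get", {"accountId": "a1",
                           "#ids": {"resultOf": "1", "name": "Email/query", "path": "/ids"},
                           "properties": ["subject", "from", "receivedAt"]}, "2"],
        ],
    }
    body = json.dumps(request)  # POST this once to the server's apiUrl over HTTPS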


> Although the protocol spec is tight

Are you talking about the IMAP RFC? It's the worst spec I have ever seen and so many parts are open to interpretation. If you think it's tight, it must be because you haven't realized all the ambiguities.


My use case is pretty specific, I'm only dealing with a small subset.

I guess I was just lucky and can remain happily ignorant.


Though I have no complaints about the way they went about implementing it (open standards hooray!), as a mail client author and someone with lots of experience with mail in general - I'm NACK to JMAP. I think that it comes with some questionable design decisions (HTTP? JSON?) which on their face I imagine look fine to the typical HNer, but in many ways is a regression from IMAP. I would prefer to see the limitations of IMAP addressed with a few carefully written extensions to the spec - which is already extensible and has a thriving set of useful extensions in the wild.

By no means is IMAP a particularly good protocol, but it is extremely broadly supported and designed more with email in mind than JMAP appears to be, which seems to focus more on bringing shiny web tech to email regardless of its utility in such a context. It's also one of many closely interlinked mail-related specifications, and JMAP fails to integrate well with many related specs imo.


I think HTTP and JSON are great.

You might not like it, but every programming language in existence has easy to use HTTP and JSON parsing libraries. It's basically a solved problem.

As a software developer I dread working with protocols that are not over HTTP, because more often than not the clients are badly designed and leaky, and I can't fix it by changing the client.

And I love JSON in spite of its inefficiencies, because it is easy to debug and there are really good JSON parsing libraries for the statically typed languages I'm using (Scala, Haskell). You basically declare your high level types and the library will derive (de)serialization and validation logic for you.

Also consider that this isn't necessarily about building full fledged email clients. Maybe you want an automated way to read an Inbox and import certain messages in your database. Maybe you build a browser extension that counts the number of unread messages. There are many, many micro tasks that developers would be interested in doing, if they had an easy way to do it.

And do you know what protocol does NOT have good libraries for every language in existence? IMAP.

It's basically not a competition from my perspective, HTTP/JSON wins by default.


"We have libraries for these technologies" is a poor substitute for "These technologies don't really solve the problems that need to be solved."

The nice thing about the old internet protocols is that they tried to distinguish between abstraction and implementation layers. (Not always successfully or effectively, but they did at least try.)

Modern protocol designs seem to do the opposite. They start from a default implementation layer - usually REST, HTTP, and JSON - and then try to build the abstraction layer out of that.

This isn't necessarily a good approach. But it is a reflection of the difference in culture - the change from hackers who were comfortable with open-ended bottom-up development, and modern developers who seem to prefer snap-together development using standard libraries.


> They start from a default implementation layer - usually REST, HTTP, and JSON - and then try to build the abstraction layer out of that.

Also known as the Unix philosophy ;-)

This isn't about comfort, but about doing what needs to be done.

Every webmail interface in existence has implemented its own half-baked, proprietary protocol that does what JMAP is doing. Or do you think Google's Gmail web interface is using IMAP?

---

> "hackers who were comfortable with open-ended bottom-up development"

That's because they lacked basic tools. Hackers in those days weren't getting much work done either.


> That's because they lacked basic tools. Hackers in those days weren't getting much work done either.

Wat.


I grew up with ZX Spectrum, MS-DOS, Windows 3.x, started programming in secondary school and while in high school watched as the Internet took over the world.

I was one of those hackers ;-)


Well, if you're really going to go down that route, you should remember that the "unix philosophy" favours plain text as the data-passing mechanism. JSON itself does not have rules for how things like malformed UTF-8 decoding should work, which means that every implementation of this JMAP protocol is going to need to refer to a specific JSON spec for starters, which already destroys the credibility of the "simple" argument. Also, I don't see how your claims about hackers "in those days" have any basis other than speculation here.


> every programming language in existence has easy to use HTTP and JSON parsing libraries

That argument doesn't hold much water. Every programming language in existence also has easy to use socket libraries. Are you implying that HTTP and JSON are less error-prone than whatever-over-TCP? There's nothing magical about either HTTP or JSON that makes them the right tool for every job. But I guess when all you have is a hammer ...


I would be willing to bet a lot of money that the same application built on HTTP/JSON will be much, much less error prone than the same app built on raw TCP sockets.

The tooling is better; debugging is easier; the interfaces are less finicky; the transfer is less brittle; dealing with objects and documents is much easier conceptually than streams; very powerful performance optimizations and caching can just be turned on while being transparent to the app; crappy networking equipment won't ruin your day; corporate firewalls won't get in your way; encryption is basically drop-in with transparent tooling support.

In no way is HTTP/JSON the best protocol for every situation, but thinking you can do better with something home grown on top of TCP is ignoring all the human effort that's gone into solving the problems of straight-TCP protocols. So yes, any protocol that's had as much love put into it would probably be just as good, but few things have.


You're very welcome to join the EXTRA working group at IETF and propose the extensions that you think would make IMAP better. Apart from working on JMAP, I have also written an extension to IMAP and may write another one next year bringing some of the push stuff from JMAP back to IMAP as well. But JMAP is based on many years of working on servers, middleware and (at least on the web) clients.

We did actually extend IMAP with some experimental stuff first (cross-folder threading and thread collapsing, for example), but it never solved the batching and proxying difficulties, or the cost of instantiating message numbers per session, which are unavoidable parts of the IMAP model. By the way, engineers who work on other servers have said similar things to us - IMAP is costly to implement in terms of server resources when you're doing true concurrency, high availability, etc.


> IMAP is costly to implement in terms of server resources when you're doing true concurrency, high availability

Yes, IMAP requires expensive coordination among multi-master servers to generate UIDs in such a way that won't mess with client synchronization. In other words, IMAP is not a fully distributed design, it assumes a centralized control plane (think Raft or Paxos), i.e. a single central server with many distributed clients, which is a shame for new systems building on CRDTs.

How does JMAP address this achilles heel of IMAP? Merkle tree sync?

More detail on the problem: https://news.ycombinator.com/item?id=20479011


Individual unique IDs and /changes lists all the IDs that might have changed. One good thing about IMAP is that there's a fair bit of data which is immutable, and JMAP retains that - emails with the same ID must have the same content, so there's a limited amount of metadata that you need to resync.

I wrote something about how you fix UID clashes in a split brain while staying within the IMAP data model many years ago...

https://lists.andrew.cmu.edu/pipermail/cyrus-devel/2010-July...
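
For the curious, a /changes exchange looks roughly like this (a sketch only; the field names follow RFC 8620/8621, but the state strings and ids are invented):

    # A client asks for everything that changed since the state it last saw:
    call = ["Email/changes",
            {"accountId": "a1", "sinceState": "s1234", "maxChanges": 500},
            "0"]

    # The server replies with just the ids to refetch:
    reply = ["Email/changes",
             {"accountId": "a1",
              "oldState": "s1234",
              "newState": "s1301",
              "hasMoreChanges": False,
              "created": ["e901", "e902"],
              "updated": ["e310"],      # flags or mailboxes may have changed
              "destroyed": ["e105"]},   # the content behind an id never changes
             "0"]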


> I wrote something about how you fix UID clashes in a split brain while staying within the IMAP data model many years ago...

Thanks for this. It was a good read and promoting UIDs instead of bumping UIDVALIDITY is a great idea.

Surely, it would be even better if IMAP had support for distributed Merkle Tree synchronization, so that clients can sync with different server replicas without the server replicas having to coordinate or promote UIDs at all in the first place?

I don't know of any algorithm other than a Merkle Tree that will do this correctly and simply and efficiently in a distributed setting?


That would be a pretty massive change to the entire data model of IMAP. It's never going to happen[tm]. IMAP4rev2 is much more conservative than that.

https://datatracker.ietf.org/doc/draft-ietf-extra-imap4rev2/

A Merkle tree would be great for full history - I built something like that with digital signing for tracking clinical data over 15 years ago, and it would have been fine for email as well - but it's way over-expensive for email.

Which leads back to the key design principle I try to use when working on protocols, which is something like "make the weird edge cases sane and possible, but don't optimise for them!" You don't want to build the Merkle tree complexity into a client and have all the servers be completely independent, because it makes the protocol a pain to use for the simple case.

(What we use for IMAP replication in Cyrus is much dumber than all this - it uses a per-message CRC and XORs them together for the whole mailbox to create the sync_crc value. If those don't match, it does a resync. Fixes up split brain just fine while being cheap and dumb for the common case.)
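
A toy version of that XOR-of-CRCs idea, for illustration only (this is not Cyrus's actual code, and the real sync_crc covers more per-message fields than this):

    import zlib

    def message_crc(uid: int, flags: frozenset, guid: str) -> int:
        # each message contributes a CRC over its identifying fields
        data = f"{uid} {' '.join(sorted(flags))} {guid}".encode()
        return zlib.crc32(data)

    def mailbox_sync_crc(messages) -> int:
        # the mailbox checksum is the XOR of all message CRCs, so adding or
        # removing a message just XORs its CRC in or out - cheap to maintain
        crc = 0
        for uid, flags, guid in messages:
            crc ^= message_crc(uid, flags, guid)
        return crc

    # if replica_sync_crc != master_sync_crc: fall back to a full resync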


> A Merkle tree would be great for full history - [...] but it's way over-expensive for email.

Does JMAP then not support syncing full history efficiently as a key design priority?

> That would be a pretty massive change to the entire data model of IMAP. It's never going to happen[tm].

Sure, and that's why I had hoped JMAP would, especially in a distributed master setting, since this was something IMAP overlooked.

I guess it's because JMAP still needs to be backwards-compatible with the strict consistency master-slave restriction of IMAP?

> What we use for IMAP replication in Cyrus is much dumber than all this - it uses a per-message CRC and XORs them together for the whole mailbox to create the sync_crc value. If those don't match, it does a resync. Fixes up split brain just fine while being cheap and dumb for the common case.

You're implementing the first layer of an incremental Merkle tree right there. How many bits are those CRCs just out of interest?


Not full history as in "you can rewind back in time and see every possible past state", no. If I wanted one of those, I'd probably use git as the substrate.

The thing JMAP doesn't have, that I think you're trying to ask for, is the ability to talk to divergent endpoints and resolve the state on the client. JMAP expects to talk to a server which is merging the state into a linear world for it. That linear world view can be different from different servers.

For example, if you implemented your server as a series of git commits, complete with merges whenever you'd had changes on two different servers, then you could just use a commit hash as the state string - and if you got an oldState which was a commit that didn't happen on your server you could still use git to calculate the series of changes on your branch since the fork point plus changes on the remote branch after that commit - and you'd be able to issue a full set of changes. If you DON'T have at least that commit and its predecessors locally, you can't calculate the diff, so you have to return cannotCalculateChanges.

I think what you're wanting here is for the server to get from the client what the last fork point was, and give the client just the changes it knows about since that fork point (like a git fetch) and then the client calculates the merged changes. That's not how JMAP is designed to work. It's an interesting idea, but it does lead to very high client complexity, so it's not what we wanted JMAP to be.

The CRCs are 32 bits, but they're not the only signal. There's also modseq and uidnext, so the CRC doesn't need to be cryptographically strong - if somebody deliberately creates a replica state that comes up with the same CRC but has different content then they could make servers believe they are in sync, but in that attack vector they probably also have enough control to just make the replication layer lie.

We're also using sha1 for message content integrity, which I'd like to move away from eventually. There are plans, but we have other things to finish first. We do insert some randomness into a header on message delivery which makes it much harder for someone to calculate what will be injected when sending somebody an email via SMTP!


> Not full history as in "you can rewind back in time and see every possible past state", no. If I wanted one of those, I'd probably use git as the substrate.

No, not "history" in the git sense, just "history" in the simpler email sense, i.e. "email history", all past emails in their final converged state.


> /changes lists all the IDs that might have changed.

How does it do this in a distributed setting? When clients might be connecting to multiple servers with different states?


Well, it depends on how you architect your system - you can embed whatever you like in your state. I'd probably do something cheap and nasty like embed a modseq counter for each of the different servers into the state string, and if any of them went backwards I'd reject the /changes with a cannotCalculateChanges error.

I mean, you can do other things - but at the point where you're switching between multiple masters in a split brain from your client, all bets are off - so you're going to need to fall back to resyncing any mutable data.
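
A sketch of that "cheap and nasty" idea, for illustration (the encoding and server names are invented; the state string is opaque to the client anyway):

    def encode_state(modseqs: dict) -> str:
        # pack a modseq counter per backend server into the opaque state string
        return ";".join(f"{srv}:{seq}" for srv, seq in sorted(modseqs.items()))

    def decode_state(state: str) -> dict:
        return {srv: int(seq) for srv, seq in
                (part.split(":") for part in state.split(";"))}

    def can_calculate_changes(client_state: str, current: dict) -> bool:
        # refuse /changes (cannotCalculateChanges) if any server's counter
        # appears to have gone backwards, i.e. we crossed a split brain
        old = decode_state(client_state)
        return all(current.get(srv, 0) >= seq for srv, seq in old.items())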


> but at the point where you're switching between multiple masters in a split brain from your client, all bets are off

CRDTs handle split brain fine. They eventually converge. You just don't want your clients flapping in the meantime by resyncing tons of data. That's where a distributed sync algorithm would help keep things working in the interim.


> Yes, IMAP requires expensive coordination among multi-master servers to generate UIDs in such a way that won't mess with client synchronization. In other words, IMAP is not a fully distributed design

Just use NNTP! /s

:)


JMAP is literally the only positive thing that has happened to email in decades, please don't dismiss it (or do you know of anything else in the works that has any chance of success?).

HTTP and JSON are such an insignificant price to pay for the sliver of hope that email can be saved and not end up completely controlled by Google et al.


The real positive thing that needs to happen to email is putting an end to the near-monopoly of Gmail. Email is great. It's decentralized. You can simply roll your own. You can use whatever client you like. Apparently even one using HTTP/JSON.

But you can't send mail to anyone using Gmail, which is nearly everyone, unless you play by Google's rules. And even then they'll still put you in their spam folder, or just randomly blackhole you without any explanation or recourse.

JMAP vs IMAP vs POP is moot as far as I'm concerned. As long as my mail client continues to work, I'm good. If this gains any traction, I'm sure dovecot and the likes will add support for it (if they haven't already). But no amount of protocol fiddling can fix the fundamental issues of email centralization.


That's what I meant. And we need something better than IMAP/POP to have a fighting chance of challenging Google. Because without it IMAP will, without doubt, lose even more market share, further cementing the dominance of the larger players.

That is why protocol fiddling is an essential part of it. Not enough on its own, but without something better than IMAP we might as well give up yesterday (as most already have done).


> we need something better than IMAP/POP to have a fighting chance in challenging google

How's that? I don't see how messing with IMAP/POP/JMAP will make a difference there. Mail delivery, over SMTP, is what the Gmails and Outlooks of this world are so nasty about. They claim to protect their users from spam, but small mail servers get killed in the crossfire. Mail clients downloading mail over whatever protocol is a trivial problem by comparison, and they have zilch to do with SMTP abuse.


Yes, but it is all about market share.

If 99.9% of all mail originate from gmail+outlook then they can (and will) pretty much ignore the rest.

The less dominant they are the more they are forced to be reasonable.

JMAP will help diversify the market as well as simplify the process of creating competing products. And has the potential to vastly increase demand for native clients and proper support from webmail vendors.

All of the above will work towards lessening the dominance of Gmail etc.

It isn't anything by itself but I believe it is a requirement going forward.

Because IMAP doesn't cut it, everyone has to create their own crap, which won't be as "good/easy/cheap" as Gmail.


> HTTP and JSON are such an insignificant price to pay for the sliver of hope that email can be saved and not being in complete control of google et al.

How would JMAP accomplish that, in a way that other protocols don't?


The shortcomings of IMAP and POP are reflected in all mail clients and are a large part of what has made webmail so popular.

Another reason is that webmail vendors of today typically deliver a subpar IMAP/POP experience. So combining a native client and webmail when on the go is rarely ideal.

JMAP is the first chance in a long time to get a decent experience that doesn't involve an expert and (even if you are willing to put in the time) tons of compromises.

And aside from all that, we have an opportunity and a reason to revive native clients, which have gone stagnant (which is pretty much all of them).


> The shortcomings of IMAP and POP are reflected in all mail clients and is a large part in what has made webmail so popular.

Could you elaborate on this? I personally find using a mail client like Thunderbird to access my mail account far more responsive and seamless compared to using the webmail interface. If I don't have much bandwidth to work with, the local mail client using IMAP and SMTP will work while the webmail client will experience timeouts.


So do I.

But syncing mail with IMAP isn't good enough, and many features of webmail clients do not map to IMAP clients. There are also tons of caveats during configuration, so plenty of people, rightfully so, don't bother.


IMAP is incredibly annoying. To send a message, you send it via SMTP and then upload the same message to your Sent folder, so it's transferred twice. For some reason the IMAP implementation I use means that when I move a message to a different folder, it uploads the message to the new folder, deletes the original message, and then downloads the message from the new folder. I have no idea if it's my server, my client or the spec, but it's very wasteful and often fucks up.
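
The double transfer looks something like this with the Python standard library (host names and credentials are placeholders; a sketch, not a recommendation):

    import imaplib, smtplib, time
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "me@example.com", "you@example.com", "Hi"
    msg.set_content("Hello!")

    # transfer #1: submit the message for delivery
    with smtplib.SMTP_SSL("smtp.example.com") as smtp:
        smtp.login("me@example.com", "app-password")
        smtp.send_message(msg)

    # transfer #2: upload the very same bytes again, into the Sent folder
    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("me@example.com", "app-password")
    imap.append("Sent", None, imaplib.Time2Internaldate(time.time()), msg.as_bytes())
    imap.logout()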


Some interesting work was done on adding presence and chat to email, which I'm disappointed not to see in JMAP. It would be great if we could chat to anyone with a name and a DNS domain instead of being stuck in a silo like Facebook.


That would indeed be awesome. Every email server a chat server. But they are not done with JMAP yet, still working on contacts and calendar. Maybe chat could be next.


> questionable design decisions (HTTP? JSON?)

Those are the premises for JMAP, from which the rest follows.


Yes, which is why the above post finds it not particularly desirable.


What's unappealing about HTTP and JSON? What's better about having a completely custom protocol and format?


HTTP is not designed for persistent connections (WebSockets were hacked on later, but JMAP does not use them anyway). This turns a push protocol into a poll protocol, which is a stark regression. It's also got all sorts of other crap built-in which is really not necessary for this use-case but which an intrepid implementer will have to deal with regardless.

JSON does not cope well with binary data, which is common in emails. It fails to elegantly deal with the various email encodings which exist in the wild. Consider as well that all JSON numbers are floating point numbers, despite the fact that floating point numbers provide absolutely nothing of value to this spec and in fact are more likely to introduce bugs than not. And embedded devices can't deal with floats quickly or elegantly, but still need to implement them if they want to use JSON-based protocols. And for what gain!

In short, JMAP reinvents the wheel but worse for the sake of making it easier for web developers to build web shit around email. This kind of stupid change for change's sake is an epidemic in the webshit scene. IMAP is warty but it's fine. Let it be. When your only hammer is JavaScript, everyone else's thumb looks like a nail.


Some remarks as a developer on Fastmail’s webmail, using JMAP daily but not having been much involved in its development:

IMAP really isn’t fine. If it was, mail providers wouldn’t have kept on making their own protocols/APIs for their mobile apps or their webmail. JMAP has various concrete benefits over IMAP. JMAP has been made by people that have been working with IMAP for decades, and have been actively involved in IMAP improvements over that time as well.

Pretty much all of your objections seem to me to either be misinformed, or addressed at https://jmap.io/. Strongly relevant are headings 2, 3, 4, 5, 6 and 8. (I could write more about individual issues, but I’ll just leave it at that.)

The main thing that is only kind-of addressed there is your objection to JSON number representation. JMAP does not state that JSON is the best thing out there; but rather that it is good enough. JMAP doesn’t use floating-point numbers, uses opaque strings for IDs (where many APIs people home-bake would use numbers), and expressly limits its integers to the −2⁵³+1 to 2⁵³−1 range where it does actually use integers. (And none of those places should ever be anywhere near that.)


I will preface this by admitting my only direct experience with IMAP as a protocol was with writing a few scripts to synchronise mail and having to fix bugs in the IMAP libraries I was using.

I must be missing something obvious -- how is IMAP a push protocol? All experiences with IMAP I've had are in the form of basic request-response flows.

Also, regarding JSON floats, most languages have the ability to give you errors if you encounter a float in a JSON field that is meant to be an integer. So it really shouldn't matter at all that JSON's floats are awful. I haven't looked, but does JMAP use floats?

I don't particularly love HTTP nor JSON, but if that's the only problem with JMAP then it's such a massive improvement over IMAP that we should have started switching to it yesterday.


Basically all IMAP servers found in the wild support IDLE:

https://tools.ietf.org/html/rfc2177


The problem with IDLE is that it requires one TCP connection per folder you want to watch. The better alternative is NOTIFY, but in my experience, it is not as widely supported by server and client implementations.

https://tools.ietf.org/html/rfc5465


But that is an extension so you cannot count on it being there. Thus a well-written client would need to handle servers that do not support IDLE. The X11 protocol is similar in that it has a lot of extensions that are theoretically optional but de facto required to deliver a good user experience. It makes implementing such protocols complex because you need to duplicate code paths, one for when the extension is present and one for when it isn't.


IDLE isn't sufficient, which is why Lemonade was defined:

https://tools.ietf.org/html/rfc5550


Except AOL and Yahoo.


The IDLE command is a push type of command, isn't it? It's used heavily by clients instead of polling.

The way Microsoft does it with RPC is just a mess: they make an HTTP call with 2 billion bytes in the request size, let the connection terminate at the timeout, and fire up a new call. I for one hope this is not how JMAP solves it :(


If you submit an app to Apple with IDLE support, I would bet it will be rejected. JMAP has PushSubscription, which will play nicely with the Apple / Google push services.


JMAP doesn't solve it at all.



JSON explicitly does not require floating point (https://tools.ietf.org/html/rfc8259#page-7). As a data-interchange format, it's not interested in most of the issues surrounding floating point numbers (operations, rounding, exception handling, etc), but it doesn't even use the interchange formats of floating point either.

I'm sure that as a practical matter, it will be a rare implementation that does not work by deserializing non-integer decimal numbers into floating point numbers, but the JMAP spec does not require this behavior, and I believe it uses no decimals.


I do hate this fashion of tunneling everything through HTTP, but then:

- Email protocols are all shit. The email ecosystem still hasn't evolved into a format where we can safely move binary data around. It is still risky to rely even on octet encodings, so there is very little binary data in emails anyway; it's mostly 7-bit strings.

- The numbers in emails are nearly all strings. The very few integers encode content lengths and are not necessary with HTTP.

- We have much better odds of having message push that actually works everywhere with an HTTP protocol, mostly because many of our computers are configured to serve ads first, and ads use HTTP push.

That said, I do look forward to all the interesting problems HTTP will create for large emails.


> The email ecosystem still didn't evolve into a format where we can safely transit binary data around.

The only significant characters in email are carriage return, line feed, and period. There's also a line length limit in the SMTP protocol specification. Other than that, bytes sent during the DATA phase are sent unaltered.

Base64 encoding is meant to address this, but it results in a 33% overhead in attachment size. On the usenet side, people came up with an encoding scheme called yEnc that actually only escapes the characters mentioned above, with only a 2 to 3% overhead over the original file size.
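
A quick sanity check of the overhead figure (base64 emits 4 output bytes for every 3 input bytes, before MIME's line breaks are even added):

    import base64
    payload = bytes(3000000)               # a 3 MB attachment
    encoded = base64.b64encode(payload)
    print(len(encoded) / len(payload))     # 1.333..., i.e. ~33% bigger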


If you are talking about SMTP, servers may fail on receiving any character larger than 127, unless the client started the session with EHLO and the server announced either the 8BITMIME or the BINARYMIME extension, where the first one only allows valid UTF-8 strings, and the second requires a completely different mechanism that does not use DATA.


> Consider as well that all JSON numbers are floating point numbers, despite the fact that floating point numbers provide absolutely nothing of value to this spec

JMAP seems to not use any floats, so implementers do not need floating point logic and can just reject JSON containing floats.

Aren't most email attachments base64 encoded anyway? If so, JSON not supporting binary is not really a problem.
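
One way to enforce that when parsing, at least in Python (the json module lets you intercept non-integer numbers; this is just an illustration, not something the JMAP spec mandates):

    import json

    def no_floats(s: str):
        raise ValueError(f"unexpected non-integer number: {s}")

    ok = json.loads('{"totalEmails": 42}', parse_float=no_floats)    # fine
    # json.loads('{"totalEmails": 4.2}', parse_float=no_floats)      # raises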


Converting to base64 generally increases the payload size. Being able to convert it that way is a workaround - not a solution to being able to transmit binary information.


Well yes, but all emails I've ever seen already have their binary attachments encoded as base64


I received two emails yesterday with application/octet-stream as part of the inner body, compliant with RFC2046 [0]. Having a Content-Transfer-Encoding of base64 is entirely optional.

It's the default for certain larger email hosts - that's the reason you see it frequently, but that in no way means that it is the _best_ option for a protocol that suggests it's solving the problems of the old one.

[0] https://tools.ietf.org/html/rfc2046#section-4.5.1


This isn’t your main point, but FWIW, there is a draft in the IETF for JMAP over websockets. It was approaching trivial to implement; I think our (Fastmail’s) main Cyrus developer wrote it from scratch over a weekend.


Pretty cool protocol. I've been following this for a couple of years. I had already written a library for an earlier version of JMAP (back then references worked differently / didn’t exist); and it was interesting to see how that improved in the IETF process. I wrote a library [1] and a POC email client for Android [2] earlier this year. It takes a moment to fully understand a now fairly complex protocol but when you get the hang of it it becomes very powerful.

Sadly the server support isn't really there yet. Support in Cyrus hasn't been released yet (you need to build from git), and some vital functionality like push [3] is still missing. Also no word from Dovecot yet.

[1]: https://github.com/iNPUTmice/jmap

[2]: https://github.com/iNPUTmice/lttrs-android

[3]: https://github.com/cyrusimap/cyrus-imapd/issues/2714


I've started defining the bits necessary for storing push subscriptions: see

https://github.com/cyrusimap/cyrus-imapd/commit/7144a747267c...

But it's rudimentary - it will need sql_db and dblookup components to make it fast enough to integrate with an mboxevent reading component to do the actual pushes. At least - that's how we'll probably implement it at Fastmail, and there's a plan to extract the key parts of our non-blocking push daemon and lift them up into the Cyrus project.


Do you use Lttrs daily? I'm wondering if it works in practice (even when using non-standard setup) or if it's just a proof-of-concept (that'd mean it may take some time to mature).


No, I don't use it daily. It can't even send emails yet and you have to set your login credentials at compile time. I got frustrated having to deal with HTML emails and now I'm waiting for the servers to mature a bit. And honestly also for Dovecot to announce support. I mean, I love what the Fastmail + Cyrus team is doing; but if Dovecot is not going to support it, it is probably not worth it to build a JMAP-only MUA.


I'm a believer in open protocols, but in recent years, I'm also loving Exchange and ActiveSync. I know that's not going to be a popular opinion here. To be honest, I don't know much about how the magic works, but the configuration is simple, the push is reliable, and everything just works. It doesn't drain my mobile and it doesn't lose its indexing and download for hours like IMAP always seems to. And besides, it's mature -- Exchange and EAS have been around for many years.

So here's to JMAP becoming the open standard that replaces that.


Yes, but I can't run it on my own low-end hardware at home for free.


Sure you can, I use z-push [0] integrated with Kopano which works great. You can also use z-push with IMAP, CalDAV and CardDAV. The only issue I know of using IMAP is that the z-push server polls your mailbox, so it's not true push, but vastly better than the battery drain of IMAP IDLE.

[0] http://z-push.org


I dislike the details of JMAP’s design.

While I appreciate the value it brings, I can see the 5 years of struggle in the sheer size of the multiple specs needed to define some pretty rudimentary mail client/server functionality (retrieve, send, push, pagination, etc).

At some points they use JSON (a data format) to describe procedures... like a programming language written in JSON. It feels a bit like using a screwdriver handle as a hammer.

How did we end up with JMAP instead of something simpler or modular, especially with GraphQL and other similar innovations on the radar?


> At some points they use JSON (a data format) to describe procedures... like a programming language written in JSON. It feels a bit like using a screwdriver handle as a hammer.

And folks laugh at me for suggesting S-expressions until I'm blue in the face!

Seriously, this is one of the areas in which they excel (ref.: literally every line of Lisp ever written). They're a pretty nifty data format too (cf. https://sites.google.com/site/steveyegge2/the-emacs-problem).


From what I gather it should be possible to use 2FA with JMAP. I'm looking forward to no longer having to decide between using third-party email clients and properly securing the account that's probably most worth securing.


What would be nice is if the servers supported using client side certificates as an authentication factor. It's possible to configure postfix and dovecot to support this [1], but I haven't looked into whether it's possible to require both the certificate and the username and password to authenticate.

[1] https://blog.mortis.eu/blog/2017/06/dovecot-and-postfix-with...


For IMAP already, there are three main approaches to logging in that get used: use your main password for the service to access IMAP, use a separate token for each client (“app passwords” is the most common term of art, and what we call them at Fastmail), and OAuth, which can then have whatever restrictions you like, 2FA or whatever. IMAP providers each support one or more of these techniques. (At this time, we support the second. Gmail supports the third and, if you have “less secure apps” turned on, the first.)

The JMAP core deliberately doesn’t express opinions about authentication technique, because that’s a divisive topic where there is no clearly right answer. We shall see what happens there; conventions will rise.

For Fastmail’s internal JMAP usage, we have our own authentication flow, which will entail 2FA if you have that set up on your account. I cannot remark at this time about what our plans are around authentication for public JMAP access.

But this is my conclusion here: look, I love JMAP, but I don’t believe it actually changes anything on this front over existing email protocols.


Would WebAuthn support be possible with JMAP? That would really move things forward.


One high level problem that jumps out at me is: When do we authenticate?

For web sites WebAuthn drops nicely into the existing login flow. The user knows what they're trying to do is log in, and so when they're prompted to touch the button or whatever it makes sense in that context.

But today most email apps just silently update all the time. So are we authenticating when the app starts and then re-using some credentials obtained to re-connect as necessary? Is that safe? Even if that's all we do (which might be safe) it means the user gets prompted for WebAuthn (to press the button) when the mail app starts, even if they aren't currently interested in email. If they dismiss it, when do we prompt them again? Because meanwhile they of course don't get any new email.


OAuth2 can handle this; you authenticate the mail client with an offline token that's valid for a long time. The mail client can refresh its tokens regularly and use a short-lived token to authenticate its JMAP requests.

OAuth2 doesn't particularly care how the token is obtained, so it can handle any arbitrary authentication flow, including WebAuthn.

And it's a widely supported protocol; there are libs for C++, Java, Go, JS and others, so it should be easy to integrate.
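
A rough sketch of that flow (the endpoint URLs, client id and field handling are placeholders; real providers differ in the details, e.g. whether a client secret is required):

    import json, urllib.parse, urllib.request

    def refresh_access_token(token_url, client_id, refresh_token):
        # exchange the long-lived refresh token for a short-lived access token
        data = urllib.parse.urlencode({
            "grant_type": "refresh_token",
            "client_id": client_id,
            "refresh_token": refresh_token,
        }).encode()
        with urllib.request.urlopen(urllib.request.Request(token_url, data=data)) as resp:
            return json.load(resp)["access_token"]

    def jmap_call(api_url, access_token, body: dict):
        # every JMAP request carries the short-lived token as a Bearer header
        req = urllib.request.Request(
            api_url,
            data=json.dumps(body).encode(),
            headers={"Authorization": f"Bearer {access_token}",
                     "Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)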


Probably the same as now. Authentication with user-supplied credentials grants you a long-lived session for the current device and current mail application via a locally stored secret, unlocked by the user logging on to their computer or unlocking their device (which too could be done via WebAuthn or a similar 2FA approach; e.g., a fingerprint on a smartphone).

It can be invalidated by exceeding an age limit or by the user logging out or otherwise retracting the access grant.


Fastmail should throw some engineering resources towards implementing JMAP support in some mail clients like Mutt.


The fastest route to MUA support is compatibility with existing MDAs like procmail, maildrop, and the like. At the very least a JMAP client that populates a local Maildir would get you Thunderbird, Evolution, mutt, and more.


Could a FUSE filesystem do the trick for Maildir emulation? Assuming that would be beneficial compared to a full Maildir, but I guess it would only be so at large sizes.


Couldn't a hypothetical FUSE fs emulate any file-based store for email?


The strong suit of JMAP (among other things) is getting access to specific properties of an email the server has already parsed. It will do the MIME parsing for you, separate attachments and text bodies from each other, and for example allow a client to fetch only the subject and senders instead of the entire email. By brainlessly dumping all that into a Maildir you'd be giving up a lot of the benefits of JMAP. I'm a big fan of JMAP but I don't see it replacing IMAP for (desktop) clients that will keep a fully synced Maildir locally.
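
Roughly what that looks like on the wire: ask only for the parsed pieces you want and let the server do the MIME work (property names per RFC 8621; the ids are invented):

    call = ["Email/get",
            {"accountId": "a1",
             "ids": ["e310"],
             "properties": ["subject", "from", "textBody", "bodyValues"],
             "fetchTextBodyValues": True},
            "0"]
    # The response carries the decoded text parts in "bodyValues", plus a list
    # of attachment parts you can fetch later (or never) by their blobId.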


Nobody said anything about doing brainless things. There's a lot of sound and fury from Fastmail about JMAP, but the only client is ... Fastmail's web interface.

I don't care what JMAP wants to do for me as long as it's not actually available to me. IMAP supports downloading headers too -- that doesn't mean everyone uses IMAP 24/7. The IMAP + local synced maildir use case is extremely common and there's no reason JMAP can't do it better.

I merely suggest that starting by making small shims for interoperability would encourage protocol adoption so that when clients arise they'll be in a position to be successful out of the gate.


Maybe, but mutt seems like a poor choice?

They are providing an open source server implementation, which is a great way to increase adoption. Adding support to widely used open source mail clients is also a great option, but I doubt usage of mutt is anywhere near high enough to provide a good return on investment.

Implementing in the default mail client on Ubuntu might be worth it, or maybe the most popular open source web mail client that exists could be worth it. Ideally though they need adoption in something like Outlook, Gmail, Apple Mail/iCloud, etc, but these are all closed source so a) impossible to do without support from the vendor, and b) of limited use to increasing the community of JMAP implementors.


That's what the IRCv3 guys did. I think it's a cancerous attitude: you get your protocol implemented regardless of whether it's good or not, since most managers of open source software can't say no to free code.


I don’t think anyone in the IRCv3 team has provided free code to any other project just for implementing IRCv3 functionality. In fact, the opposite is true, most projects have to spend significant effort to support it — but believe that it is worth it.


That seems like a pretty entitled attitude.


If they want it to be taken seriously by the open source community, they might as well start with the mail clients used by the top contributors to things like the Linux kernel and the core open source libraries used in most Linux distros. This is how to help ensure adoption and make it a replacement for IMAP.


"Ensure adoption" sounds so strong. I won't say I don't know mutt, but it just doesn't strike me as their best bang for buck, in terms of user base, does it? Would it not make more sense for them to invest in, dunno, say, Thunderbird?


I think the point OP was trying to make was that they should invest in MUA support for this feature.. mutt was merely an example.


One thing I find weird (probably because I’m new to JMAP and still ignorant) is having position AND anchor+anchorOffset. Why have both? Surely the latter is sufficient and safer (as any position is only guaranteed to be valid when the last response was written)? From a glance of the spec I didn’t see any explanation of why. If anyone could shed more light it’d be much appreciated.


Both are necessary because you may not have the necessary IDs.

anchor + anchorOffset is useful when you use pagination and new entries might appear at the head of the list but you don’t want them to mess with your pagination once you’re inside.

position is useful when you have a total count and wish to jump in at the middle; as a concrete example, when you use lazy loading instead of pagination. For example, if I have a mailbox of 100,000 emails, and jump to the bottom: I don’t want to need to fetch all the IDs (probably well over 1MB to fetch), just so that I can specify an anchor. I want instead to ask “give me the thirty messages from position 99,970”.

(Note that when I speak of lazy loading I’m not talking of infinite scrolling, but rather of when you know that you have N messages, and that each message is allocated M pixels of vertical space, so you can allocate an exact space, and provide a proper scrollbar. This is how Fastmail does it and how desktop email clients have historically done it.)
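
Concretely, the two ways to page an Email/query look roughly like this (argument names per RFC 8620/8621; the ids and numbers are invented):

    # (a) jump straight to a known position, e.g. the scrollbar case above
    by_position = ["Email/query",
                   {"accountId": "a1",
                    "filter": {"inMailbox": "mb-inbox"},
                    "position": 99970,
                    "limit": 30},
                   "0"]

    # (b) page relative to an id you already hold, so new messages arriving
    #     at the top of the list don't shift your window
    by_anchor = ["Email/query",
                 {"accountId": "a1",
                  "filter": {"inMailbox": "mb-inbox"},
                  "anchor": "e310",
                  "anchorOffset": 1,
                  "limit": 30},
                 "1"]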


I read the IMAP description a long time ago, and I know what it does in principle (along with the issues ITT), but can someone explain what value it serves today? I have two confusing thoughts.

First, IMAP seems to be overkill for a line faster than 64 kbps. Text does not consume much traffic and an entire mailbox can easily be served to everyone. There are some huge mailboxes out there, but messengers have this problem too and they solve it somehow (I suspect a trivial &last_date=..., judging from programming Telegram bots).

Second, and this may be due to bad clients (I haven't seen a good one, however), IMAP is just sloooow. It doesn't update instantly; instead I have to enter every folder and pull down with my finger to update it. It can show outdated messages that I deleted on a PC for ten seconds before it figures out that they were deleted. My guess is that it has something to do with the endless round trips this protocol is built on. Occasionally it shows N unread when there are none. Even push notifications fail to work properly most of the time (same server-side issues?). And with all these complications, I still have to download every attachment by hand. How come heavy sites/forums/boards/messengers always show new data instantly and mail cannot? I know that I can simply use webmail, but wtf?

I'm a developer, but when I see software, I always try to evaluate it from a user-only perspective. And from that perspective, mail tech seems simply incompetent. I wish there were an SMTP-to-Telegram gateway (and back), and special per-subject chats, so I could leave all that legacy for someone else to deal with and just talk to people without missing messages or waiting for updates.

(Not even considering mail message formatting, spam mismatches, size restrictions and non-delivery laws)


The slow mail client issue you’re describing is very probably an artefact of how IMAP IDLE is designed; JMAP fixes this design, as explained at https://jmap.io/#push-mechanism.


"Modern", the darling adjective of startup lingo. We thank you for your service and efficacy. It's time to abdicate the throne. All hail "more modern" the rightful heir, protector of the realm.

Pre-modern -> modern -> more modern -> post modern?

Jokes aside, I enjoyed the experience of Fastmail, with the exception of the migration tools. Very responsive. Kudos for submitting JMAP as an open standard.


I believe the filtering is for user-initiated search, right?

It would be nice to have some standardized support for Sieve-like server-side prefiltering. That's how I sort my mail into mailboxes, send spam (labeled elsewhere) to the spam trap, and the like.


What would this support look like, in the context of JMAP? By the time the email is in your inbox it's already gone through server-side prefiltering. So the only integration I can imagine is a way to fetch and store Sieve scripts so you can edit them from your mail client, but that seems like a very provider-specific thing.

For example, Fastmail themselves support Sieve, except they don't give you a single editable script, instead they convert your structured rules and spam settings into 8 labelled sections in a Sieve script and give you 4 free-form edit boxes to insert your own code at specific points (so you can't edit the code that they synthesize for you). I don't know how a protocol like JMAP would expose this in a generic fashion.


Two mechanisms: 1. the ability to write arbitrarily nested if all/any/none conditions over various parameters (any or a specific header, with restricted regex) for server-side searches. The Mac mail app has these for the client side, as does my RSS reader. A simpler grammar than needing inMailboxOtherThan, yet more powerful. It could be represented as a JSON tree (see the sketch below). 2. the ability to run that filter on incoming mail, basically where Sieve is run today, with a similar small grammar for specifying a small set of commands to run, like file into mailbox, tag, etc.
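
For what it's worth, JMAP queries already allow that kind of nesting via FilterOperator (RFC 8620: an "operator" of AND / OR / NOT over "conditions"); the wish here is to run the same shape of tree against incoming mail. A sketch, with a purely hypothetical action object:

    rule = {
        "operator": "AND",
        "conditions": [
            {"from": "billing@example.com"},
            {"operator": "OR",
             "conditions": [
                 {"subject": "invoice"},
                 {"hasAttachment": True},
             ]},
        ],
    }
    # hypothetical action to apply when the tree matches an incoming message:
    action = {"fileInto": "mb-receipts", "keywords": ["$seen"]}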


I didn't see anything about security. I understand sending is outside the scope. But if storage and downloading is to be portable, doesn't encryption have a place?


The encryption of the connection is orthogonal. That's what HTTPS is for.

And client-side encryption is managed by solutions like PGP. Doing it on the server-side is incompatible with the email protocol IMO.


So server side is secured how?


JMAP (or IMAP for that matter) is the transport format. JMAP runs over HTTPS, so the connection is encrypted. The transport protocol says nothing about how data is stored on the MDA (the server). That's up to the implementation of the MDA, of which no standard exists. The MDA is something that you should trust; if you cannot trust the MDA, you shouldn't use it in the first place.

What you need to worry about is transport, and that has been fixed by mandating HTTPS.


Not sure I understand the question. What's your threat model?


On the server, are the emails all encrypted like, say, passwords? Or are they stored as text?


Passwords aren't encrypted, they are hashed.

Every email service out there stores email in what by password standards is plaintext, but hopefully encrypted at rest.


So, how long until everything is JSON-shit over port 80 and how long until port 80 is old news and only port 443 is used for anything and everything? I can't claim to be familiar with IMAP, but it can't be worse than this garbage that requires an HTTP client and a JSON parser.

To process JSON, you must inspect every character, which is reason enough it's an awful ''format'' that only an ignorant could hold dear. Douglas Crockford brags about how smart he is for disallowing comments, but there's whitespace allowed everywhere. Let's not forget this asinine integer nonsense, because JavaScript lacks real integers.

I'm sure most people think HTTP clients and JSON parsers are standard-library affairs, nowadays, though; let's just entirely ignore matters of licensing and whatnot.

It's disgusting to see this as an IETF standard, but people have been complaining about how that moves along for decades; take a look at Google's robots.txt ''standard'' that doesn't do anything useful and effectively makes every existing file standard, rather than do much of anything worthwhile. The WWW and JavaScript are nothing more than a continuation of this worse-is-better filth that UNIX and C previously championed; decades ago, the IETF revised the RFC for the Finger protocol and warned about security, solely because some idiot wrote a shitty Finger server in C; now, we have useless and unnecessary standards such as this that only serve to further this next generation of idiocy.

I wonder when all of this will finally collapse under its own weight.


Can you provide some actual arguments as to what's wrong with JSON? It's a text format that can be inspected by humans, unlike binary formats. (Maybe IMAP is a text format too, so this isn't inherently an advantage.) And JMAP may have better semantics than IMAP, and there's nothing inherently wrong with JSON (other than edge cases).


The problems with JSON are pretty obvious:

* It's inefficient compared to schema-based formats like Protobuf - you have to encode key names again and again.

* It's inefficient compared to binary equivalent formats like CBOR - both in space and speed.

* It's less reliable than explicit schema-based formats because it can't be validated (yes I know about JSON-Schema, nobody uses it). You essentially end up with an implicit schema that everyone has to recreate themselves from the documentation - especially if using statically typed languages.

* It's a pain to use from statically typed languages. You have to write the schema in your language, whereas with things like Protobuf it will generate the code for you.

* It doesn't have a proper type for arbitrary binary data. Most people (but not all!) encode it as a base64 string which is not very efficient and not actually a different type to a string, so again you rely on the implicit schema.

* Maps can only have strings as their key type.

* Minor annoyance, but trailing commas would be nice!

* Its number type is not well specified. E.g. does it support uint64?

"Human readable" is overrated. CBOR and similar formats can easily be inspected by humans just by converting them to JSON. The Smile format even starts with a :) smilie as its magic number to make it easily recognisable.

Would you recommend not using compression or encryption on HTTP requests because it stops them being human readable?


I use JSON-Schema. I hate it, but I use it.


>Can you provide some actual arguments as to what's wrong with JSON?

It's a wasteful, textual format derived from a programming language vomited forth in under a week. It has asinine integer characteristics for no good reason; it's not a random-access format; validating it is even more work; hash table keys that appear multiple times, instead of being an error, overwrite each other, defeating most any kind of optimization; it is thus wildly inefficient to manipulate, as manipulating it correctly in all cases requires full and comprehensive parsing with checking. It is an asinine ''format''.
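
(The duplicate-key point, concretely; Python's json module, like most parsers, silently keeps the last value rather than reporting an error:)

    import json
    print(json.loads('{"to": "a@example.com", "to": "b@example.com"}'))
    # {'to': 'b@example.com'} -- detectable only via object_pairs_hook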

>It's a text format that can be inspected by humans, unlike binary formats.

Even then, there are much better textual formats that could be used, but I'm of the opinion that numerical (binary) formats are strictly superior, especially as they're used increasingly often. It's better to have good tools that will provide a human-readable representation, rather than pretend a single, inefficient representation for both man and machine is somehow the best option.

>(Maybe IMAP is a text format too, so this isn't an advantage inherently) And JMAP may have better semantics than IMAP, and there's nothing inherently wrong with JSON (other than edge cases).

Computer programs written by anyone competent are built around edge cases; that's much of what any reliable program is, accounting for edge cases. To disregard edge cases, of which there are many in JSON, suggests to me that you have little concern for writing software correctly, but I imagine you'd deny this, as who doesn't want to write correct software? Simply understand that using a format with as many edge cases and other defects as JSON is foolhardy for much of anything.


>numerical (binary) formats are strictly superior, especially as they're used increasingly often. It's better to have good tools that will provide a human-readable representation, rather than pretend a single, inefficient representation for both man and machine is somehow the best option.

I'm working on a chiptune music composing program (FamiTracker), which saves projects in a binary format designed by my predecessor. I once repaired a corrupted file for someone else. It's borderline impossible to repair binary formats by hand, since inspecting the data on-disk provides no clues about the structure, which I don't know since I didn't write the program. I ended up merging the corrupted file and an older backup in Audacity (an audio editor) by sliding the two files back and forth relative to each other, because hex editors are awful at visualizing and aligning binary files (whereas diff tools are good at visualizing and aligning text files).

Additionally, the on-disk layout changes between versions, and the reader code is a mess of branching between different on-disk layouts based on the version number.

Additionally, it's impossible to check in your music project (binary blob) into Git, then get a meaningful diff between different versions, let alone being able to merge changes made in 2 branches. (Yes, I use Git for music. It's actually useful.)

Needless to say, I'm using a text format if I ever write a replacement. A text format may have trouble encoding binary blobs, but I don't like the idea of "transcoding .wav into binary files, and throwing away the parameters used". I'm more inclined to just use pointers to on-disk .wav, and store all metadata needed to reproduce the binary blobs, and possibly cache the binary blobs in base64 for speed and convenience.
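
Roughly what I have in mind, as a hedged sketch (the field names are invented for illustration, not FamiTracker's actual format):

    project = {
        "version": "0.1",
        "samples": [
            {
                "name": "kick",
                "path": "samples/kick.wav",     # pointer to the on-disk .wav
                "resample_rate": 33144,         # params needed to rebuild the blob
                "cached_base64": None,          # optionally filled in for speed
            },
        ],
        "patterns": [],                         # the actual note data, all text
    }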


> I'm working on a chiptune music composing program (FamiTracker), which saves projects in a binary format designed by my predecessor. I once repaired a corrupted file for someone else. It's borderline impossible to repair binary formats by hand, since inspecting the data on-disk provides no clues about the structure, which I don't know since I didn't write the program.

That's interesting. If you're interested in discussing that more at some point, email me. Now, I've designed a numerical format for metadata involving machine code programs and, while I've not written this piece yet, an important aspect of this will be a repairing tool. The only reason a textual format would be easier to repair by hand is because you can use a text editor and it's more redundant. Now, good design is another quality; ideally, a numerical format would lack holes that are invalid states and wouldn't be riddled with pointers or other things if feasible; it varies based on what you're doing, though.

> I ended up merging the corrupted file and an older backup in Audacity (an audio editor) by sliding the two files back and forth relative to each other, because hex editors are awful at visualizing and aligning binary files (whereas diff tools are good at visualizing and aligning text files).

That's again a simple matter of tooling that could be corrected.

> Additionally, the on-disk layout changes between versions, and the reader code is a mess of branching between different on-disk layouts based on the version number.

A runaway format revision like that isn't ideal, I can agree. You can get the same mess with textual formats, however.

> Additionally, it's impossible to check in your music project (binary blob) into Git, then get a meaningful diff between different versions, let alone being able to merge changes made in 2 branches. (Yes, I use Git for music. It's actually useful.)

I don't use git at all, so this is another difference between us. I'm largely of the opinion that the focus on textual formats for UNIX-likes was due to sloth more than anything, as there's generally a lack of coherency between /etc/fstab, /etc/passwd, and so on; I believe the focus was largely due to the advantage of ostensibly generic tooling, but reading or writing a configuration in a text editor, with the manual handy, and getting errors and other things upon loading it is worse than having a tool write it for you and show it to you at a high level.

> Needless to say, I'm using a text format if I ever write a replacement. A text format may have trouble encoding binary blobs, but I don't like the idea of "transcoding .wav into binary files, and throwing away the parameters used". I'm more inclined to just use pointers to on-disk .wav, and store all metadata needed to reproduce the binary blobs, and possibly cache the binary blobs in base64 for speed and convenience.

I don't know the details of what you're dealing with, but again, if you're interested, email me and I can perhaps work on a repair tool or help design a replacement numerical format that will be easier to manipulate.


The problem with being clever and efficient is that you wind up having to write things like:

https://tools.ietf.org/html/rfc6868

Which is the kind of horror case of trying to do everything with clever little pieces of micro-format.

Now, the argument between XML and JSON is less of a big deal, except XML gets wishy-washy with whitespace in some ways which frustrate me. JSON number handling is screwy, it's true. I'd be tempted by CBOR if there were no other considerations - and I'd love to write a JMAP-over-CBOR or JMAP-over-ASN.1 or whatever at some point.

But you can very easily structure non-binary data in and out of JSON reliably, and everyone can get a parser that works. That's a big win. It's fine to disregard edge cases if you stay away from the edges - for example, if you're serialising integers between 0 and 1000, the fact that your datatype isn't 256-bit clean isn't a big deal. In the same way you can use Newtonian physics when dealing with sprinters - I don't think race organisers have little concern for running a race correctly just because they don't factor in relativity.

So we kept JMAP in the safe subset of JSON, because it's the best format for a whole lot of other reasons.


Specifically, JMAP explicitly requires I-JSON (https://tools.ietf.org/html/rfc8620#section-1.5).

If, for example, you try sending a request with duplicated keys in an object, the JMAP server will reject your request with the urn:ietf:params:jmap:error:notJSON error.
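
Something along these lines, as a hedged sketch (the endpoint URL and credentials are placeholders, and the exact status code may vary by server):

    import requests

    # Syntactically valid JSON, but the duplicated key makes it invalid I-JSON.
    bad_body = '{"using": ["urn:ietf:params:jmap:core"], "methodCalls": [], "methodCalls": []}'

    r = requests.post("https://jmap.example.com/api",
                      data=bad_body,
                      headers={"Content-Type": "application/json"},
                      auth=("user@example.com", "app-password"))
    print(r.status_code)      # expect 400
    print(r.json()["type"])   # expect "urn:ietf:params:jmap:error:notJSON"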


> hash table keys that appear multiple times, instead of being in error, instead overwrite each other

I don't think that's true. The behavior with multiple keys is not defined. A JSON reader can choose to do whatever it wants (error, use first, use last, use all, etc). An interoperable JSON writer should never output duplicate keys.
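
Python's json module is a typical example: by default it silently keeps the last value, but object_pairs_hook lets you turn duplicates into an error yourself:

    import json

    def reject_duplicates(pairs):
        keys = [k for k, _ in pairs]
        if len(keys) != len(set(keys)):
            raise ValueError("duplicate keys: %r" % keys)
        return dict(pairs)

    json.loads('{"a": 1, "a": 2}')                                       # -> {'a': 2}
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=reject_duplicates)  # raises ValueError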


You won't believe what happened next: RFC 6902 comes along and explicitly prohibits multiple keys. Programmers want to reuse JSON parsers, but can't. This is not a good position to be in.


RFC 6902 is not a standard for JSON; it's a standard for "JSON Patch", which appears to be another data format layered on top of JSON. It has no bearing on JSON in general.

The relevant RFC is 8259, which neither explicitly allows nor prohibits duplicate keys. Instead it says:

> An object whose names are all unique is interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings. When the names within an object are not unique, the behavior of software that receives such an object is unpredictable. Many implementations report the last name/value pair only. Other implementations report an error or fail to parse the object, and some implementations report all of the name/value pairs, including duplicates.

That isn't a prohibition on duplicate keys, it's just a warning on what will happen in practice if you try to do that.


JMAP explicitly requires I-JSON, which is RFC 7493, and restricts this exact case:

> Objects in I-JSON messages MUST NOT have members with duplicate names.


Thanks! I was commenting just from the perspective of JSON in general, didn't check the JMAP spec.

(Does JMAP require validating that the input is valid I-JSON? Or does it just say that a JMAP endpoint is allowed to assume it's I-JSON, but doesn't need to validate it?)


If the input is not valid I-JSON, a JMAP server that does not reject it is non-compliant.

(Related: Postel’s law is wrong. See https://tools.ietf.org/html/draft-iab-protocol-maintenance-0... for some explanation.)


> (Maybe IMAP is a text format too, so this isn't an advantage inherently)

Actually, IMAP is a binary format that looks like a text format. This distinction is extremely important when you want to build a reliable email client. There are also multiple ways of encoding text within it that you need to keep track of.
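
A hedged illustration of a few of those encodings, using only Python's standard library (the folder-name example is the classic one from RFC 3501):

    from email.header import decode_header

    # 1. Mailbox names use "modified UTF-7": the folder "Entwürfe" travels
    #    on the wire as b"Entw&APw-rfe".

    # 2. Header values use RFC 2047 encoded-words:
    text, charset = decode_header("=?utf-8?q?Caf=C3=A9?=")[0]
    print(text.decode(charset))   # -> Café

    # 3. Message bodies arrive as length-prefixed literals, e.g.
    #      * 1 FETCH (BODY[] {342}
    #    followed by exactly 342 raw octets, which is why IMAP behaves more
    #    like binary framing than a line-based text protocol.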

Personally, I wish they used a structured binary encoding. This becomes especially important when data size efficiency actually matters. With something like protobuf, you can even have the parser for it generated for you. (However, I have worked with binary formats that are more loosely structured.)

Of course, I am a fan of well-defined and strongly typed formats.


- Anything outside of int32 must be encoded and decoded as a float. Oh the bugs/crashes this has caused.

- It is way too easy to change the schema and wreak havoc with downstream users.

- Some libraries will encode empty strings as null, for instance, and some as empty strings. Some will emit empty objects, some null. Switch some configuration, or change your serializer, ORM, or anything else that touches your data: everything will work perfectly in your code, but the JSON output can change and you will again break your downstream users.

If you are designing an API, use Protocol Buffers. Offer JSON, maybe, as an optional input/output to your service. Keep it as a discouraged, dirty optional feature and you will save yourself a whole lot of debugging and wasted time.


Google defined the WebSocket protocol not long ago. We didn't actually need it. They did. With such a huge browser market share, all they crave is information. All of it. And most of us (developers) embrace this flow, because each and every "fault" they "discover" and use to create a new idiotic standard is an opportunity for us to profit. And we feed it. I don't need JMAP; none of us do. They need it, so they wrote 200 pages of stupid, extremely complicated specs in the hope that it will gain traction and we'll start developing. They can't do it alone. They need us to embrace it. I won't, and I hope none of you will. I hope this standard dies with a whimper.

I am doing my best to keep my e-mails where they belong: far away from the browser. I use SMTP. There's nothing wrong with it. It works. I use MIME. There's nothing wrong with it. It works. I personally don't use IMAP; ODMR is all I actually need. But I configure and maintain IMAP servers, and I see nothing wrong with it either. It works. What baffles me is the Cyrus project accepting this idiocy. I just hope it won't become the new deal and the others (Dovecot et al.) will stay away from it. Panic and fear can and will escalate the process.

So how long until this "everything is JSON" shit takes over? I'm 40 years old and my bet is I will live to see it happen. This is actually scary. And not because of JSON or hydras.


To be honest, JSON is pretty reasonable. The parsing speeds are very good, the grammar is a lot simpler than SMTP/HTTP/IMAP, and it actually has numeric and boolean types.

It is no surprise to me that the two people here insisting that IMAP is somehow acceptable don't even use it.

JMAP solves real problems inherent with IMAP, particularly on clients that are not suitable for local archiving of messages (hence, ODMR/POP is not an option), and where IMAP sends considerably more data than the client actually uses (making it slow).

I don't like the use of JSON for local IPC, as is showing up in some systems, but for RPC over an unreliable network, JSON is more convenient than not.

As for the use of HTTP, it's not ideal in my view, and I tend to think it's overused. The one argument, though, is that it allows JMAP to go over port 443 to a host that has an existing HTTPS service on that port (since you likely have a reverse proxy, probably NGINX, as your listener anyway) without having to send a bespoke upgrade request - which would defeat the simplicity benefits of a custom protocol, since you'd have to implement HTTP to do the upgrade anyway.


I tend to agree with you; even so, if not for these moves we wouldn't have any change at all. Email is not a solved problem; we just have a lot of fixes that make it sort of stable. If nothing else, we'll get a competing extension to IMAP because someone doesn't want JMAP.


So a few years back, everything was super fast on IMAP on my phone. But now it just crawls. On the Gmail App it's still pretty fast. So I just assumed it was slow because Google was being a dick and trying to get people to use their app instead of Apple Mail or Outlook. Is the protocol slow, or is Google throttling it?


I don't know about throttling, but later Android versions are known to mess with email notifications.

See: https://github.com/k9mail/k-9/issues/3950


IMAP is fine. It's likely your email provider's fault.


Having just this month implemented an IMAP client I can tell you it's blazingly fast for Gmail & Outlook.

It's actually much faster and more efficient than using the Gmail API.


I just clocked it at 16 seconds to check for new mail in Apple Mail checking against a Gmail Account on an iPhone SE connected to Google Fiber. It feels pretty slow...

Meanwhile, same update vs same account in Outlook took about 6 seconds.

And in the Gmail App, about 4 seconds.

Not super scientific, just me looking at my watch and opening each app and seeing how long it took to update.

So maybe something with Apple Mail?


The slow part is opening the IMAP connection and authenticating, after that it's fast.

I guess it might depend on how these clients are maintaining their connections.
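
Rough sketch of how you could see this for yourself with Python's imaplib (host and credentials are placeholders):

    import imaplib, time

    t0 = time.monotonic()
    conn = imaplib.IMAP4_SSL("imap.example.com")      # TCP + TLS handshake
    conn.login("user@example.com", "app-password")    # auth round trips
    conn.select("INBOX", readonly=True)
    t1 = time.monotonic()

    conn.search(None, "UNSEEN")                       # cheap once connected
    t2 = time.monotonic()

    print(f"connect/auth/select: {t1 - t0:.2f}s, search: {t2 - t1:.2f}s")
    conn.logout()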


Wondering if it could be your ISP deprioritizing the traffic because it's not on an HTTP(S) port.


Well, IMAP works fine, so it's not the protocol.


Gmail has its own sync engine.


Give me the speed of Fastmail with the encryption of ProtonMail, make the two talk to each other out of the box, and actually make something new instead of just the same old thing.

Actually, loan some of your devs to ProtonMail, because both their mobile app and their IMAP bridge are embarrassing.


They list the wrong RFC number "8261", but the linked RFC is actually "8621".


Thanks! Have fixed the copy. The link was going to the right place, but somebody (possibly me) put the digits in the wrong order.

For your efforts, please enjoy a joke: https://comic.browserling.com/tag/multithreading

(that's my excuse - brain was running multithreaded output to the fingers)


https://jmap.io/#push-mechanism JMAP is going to use RFC 8030 push for mobile (a third-party service for notifications, really?) and EventSource (a kind of HTTP long polling) for desktop clients, as I understand it. I don't like that idea; it's even worse than IMAP4 IDLE. Why not just WebSockets for mobile and desktop clients?

Or just build it on top of the XMPP protocol (or any other chat protocol).


HTML5 Server-Sent Events are better than long polling, and they're about as old as WebSockets, but simpler, as they keep the concept of HTTP resources whereas WebSockets replace it. HTML5 SSE is also easier to implement.

Not hating on WebSockets, but saying HTML5 SSE is outdated tech is just wrong. If you were making a real-time app that works over HTTP, either would be fine.
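
To show how little an SSE endpoint takes, here's a minimal sketch using Flask (an arbitrary framework choice; the event payload is made up and only loosely shaped like JMAP's StateChange object):

    import json, time
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/eventsource")
    def eventsource():
        def stream():
            while True:
                # Each event is just "event:"/"data:" lines ending in a blank line.
                payload = json.dumps({"changed": {"account-1": {"Email": "state-123"}}})
                yield f"event: state\ndata: {payload}\n\n"
                time.sleep(30)
        return Response(stream(), mimetype="text/event-stream")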


> HTML5 Server Sent Events are better than long polling

But they work the same way: a long-lived HTTP connection.


> The JMAP protocol is transport agnostic and can be easily transported over a WebSocket, for example, as well as HTTP

From section 10. of the jmap.io link you referenced. I haven't used JMAP or looked into the spec in any detail, but wanted to provide that context for anyone coming here and seeing the parent comment first. Sounds like you can use WebSocket or any other transport mechanism.


Is the protocol itself important to users? or is it mostly important just to developers?

If it makes a practical difference to a user, how so?


Let's ignore the biggest issue (spam) and focus on the bike shed.


No one's stopping you from coming up with your own anti-spam technology.

In fact, it's possible for individuals and organisations to work towards multiple goals at once. For example, Fastmail and the IETF have concurrently been working on RFC 8617, which allows for stricter spam filtering:

https://dmarc.org/2019/07/arc-protocol-published-as-rfc-8617...


The reasons people choose Gmail or Office instead of smaller providers are 1) they get less spam, and 2) they are less likely to get their own mail marked as spam. This centralization is killing e-mail. But that's just e-mail. What protocols do people use to send messages? Facebook Messenger, WhatsApp, etc. More proprietary, centralized solutions.

We need a simple decentralized protocol, as simple as SMTP, but it needs to solve the spam issue, and it must not be over-engineered (like XMPP). And no, it should not be built on top of SMTP. SMTP has so many issues, like being able to send mail pretending to be anyone. There are solutions like SPF, DKIM, etc., but 1) they are built on top of SMTP, and 2) they are not stopping spam. We need a new protocol that replaces SMTP, with spam prevention, identity, etc. built in.


Email could be free from spam if you configured your email client to only accept emails from people you manually added (and if SPF/DKIM was enforced). It's not clear how replacing SMTP with a different protocol would make the situation any better.

The reason why centralised chat systems can limit spam is that one company gets to decide who can and can't send messages to whom. Recreating that in a decentralised architecture is precisely the problem that needs to be solved, and the solution could then be used by email providers.

Now, there are no technical reasons not to use SPF/DKIM, and recipients can enforce them more strictly; the problem seems to be determining which mail providers (i.e. which domains) can be trusted to stop their users from sending spam. Spammers have realised this too, of course, which is why a lot of spam comes from domains that are registered cheaply and quickly abandoned.
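
For reference, enforcement mostly comes down to publishing (and honouring) a few DNS TXT records; the values below are illustrative placeholders, not recommendations for any particular domain:

    # Illustrative DNS TXT records (placeholder domains and values).
    records = {
        "example.com.":                      "v=spf1 mx include:_spf.mailhost.example -all",
        "selector1._domainkey.example.com.": "v=DKIM1; k=rsa; p=<base64 public key>",
        "_dmarc.example.com.":               "v=DMARC1; p=reject; rua=mailto:dmarc@example.com",
    }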

One solution that might be worth exploring is requiring that domain registrants provide a bond to their registrar if they want to send mail from that domain (grandfathering in all existing domains). The bond could be refunded after, say, 12 months of sending enough emails that recipients don't mark as spam.

The difficulty with this is creating a reporting mechanism that can't be gamed, either to unfairly cost a domain its bond, or to flood the system with fake positive reports to drown out the true negative ones. Fortunately, with DKIM, there is at least cryptographic proof that a given email was sent by a given domain.


Messages should be sent from client to client; the server layer can be skipped (though the client could of course be a cloud-based email solution). Messages would need double opt-in, which would be baked into the protocol. IDs would be a random hash. Personal messages could be trust based, e.g. "do I know this person?", and corporates could use certificates. Once a user has opted in to receive messages from an ID/hash, the receiver would generate a challenge at the start of each session, using either a shared secret or public-key crypto (also baked into the protocol). The protocol should have a standard for sending simple text, but then allow extension protocols, for example for voice/video, with capability negotiation, e.g. the client tells the other client what formats it accepts.
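
A hedged sketch of the challenge/response part using public-key signatures (PyNaCl is just one arbitrary library choice; nothing here is a real protocol):

    import os
    from nacl.signing import SigningKey

    sender_key = SigningKey.generate()
    sender_id = sender_key.verify_key.encode().hex()   # the "random hash" identity

    # The receiver has already opted in to sender_id; each session starts
    # with a fresh challenge:
    challenge = os.urandom(32)

    # The sender proves control of that identity by signing the challenge:
    signed = sender_key.sign(challenge)

    # The receiver verifies against the opted-in key before accepting anything:
    sender_key.verify_key.verify(signed)   # raises BadSignatureError if forged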


FINALLY


Super happy customer of Fastmail for 7 years now.

It’s a joy to use. Fast, responsive, and reliable. No pointless redesigns or 10-second loading screens like Gmail.


.. what does this comment have to do with the actual article other than keying off of 'fastmail' in the title?


I mean, JMAP is Fastmail's project, and this is a post on Fastmail's site. Still a little off topic, though.


Fastmail's web interface uses JMAP under the hood.


Hmm. I'm a happy FastMail customer of >2 years, but sometimes the web app loses the connection with the host (especially when on flaky wifi) and fails to re-establish it. This is really aggravating when you hit send on an email, which disables the input text box but then just hangs indefinitely in a "Sending..." state. I've already reported this to customer support. Symptom of JMAP, or just a run-of-the-mill bug somewhere?



