We've spent the last couple of years building a local-first, end-to-end encrypted, multiplayer notes app (Thymer). We also explored running SQLite in the browser but ultimately concluded that we didn't really need SQLite, or a real database at all, in the browser.
What does a document system need? A few simple indexes that track links and backlinks, mentions, and hashtags. You’re broadcasting change notifications anyway (for reactive DOM updates), so updating the indexes for those manually isn’t much extra work. You apply updates eagerly and then notify other users so they can apply the same updates, with some rules to guarantee that all users end up with the same state regardless of the order in which they receive the updates. But a relational database doesn’t help with any of this. Document systems tend to be versioned, so every user action turns into another entry in a transaction log. Even queries like “last Monday’s version of my document” don’t map naturally to SQL. You can query for transactions in a given time period, but unless you have snapshots, you’re still forced to re-apply transactions all the way from t=0 if you want to re-create the document state at a given date.
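The "replay from t=0 unless you have snapshots" point can be sketched concretely. This is not Thymer's actual data model — just a toy illustration where the document is a list of lines, every user action is a timestamped transaction, and `replay`, `Txn`, and `Snapshot` are made-up names:

```typescript
// Toy versioned document: every action is an entry in a time-ordered log.
type Txn =
  | { t: number; op: "insert"; line: string }
  | { t: number; op: "delete"; index: number };

type Doc = string[];

// Without snapshots: "last Monday's version" means replaying the whole
// log from t=0 up to the requested timestamp.
function replay(log: Txn[], asOf: number, base: Doc = []): Doc {
  const doc = [...base];
  for (const txn of log) {
    if (txn.t > asOf) break; // log is ordered by time
    if (txn.op === "insert") doc.push(txn.line);
    else doc.splice(txn.index, 1);
  }
  return doc;
}

// With a snapshot: start from the materialized state and only replay
// the transactions that happened after the snapshot was taken.
interface Snapshot { t: number; state: Doc; }

function replayFromSnapshot(snap: Snapshot, log: Txn[], asOf: number): Doc {
  const tail = log.filter((txn) => txn.t > snap.t);
  return replay(tail, asOf, snap.state);
}
```

This is also why such queries map awkwardly to SQL: the relational query can find the transactions in a time range, but the fold over them is application logic either way.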
With Thymer we really care about performance, but Thymer is also end-to-end encrypted because we don't want to compromise on privacy. And it's real-time collaborative and offline first.
Thymer has optional self-hosting, so you can upgrade (or not) at your leisure, or intentionally stick with an older version you like better. Enshittification is a big problem in our industry. We've all been burned by it -- we certainly have -- and being able to opt out of a "new and improved!" version is a real feature.
Thymer will also be very extensible. Today we launched our plugin SDK: https://thymer.com/plugins and https://github.com/thymerapp/thymer-plugin-sdk/ with a bunch of examples. With Thymer you'll be able to "vibe code" simple plugins, and with VSCode/Cursor you can build more complex plugins with hot-reload.
Rule of thumb: every 10% increase in complexity cuts your potential user base in half.
This is why people make backups by copy-pasting files. This is why Excel is so dominant. This is why systems like HyperCard and git are not mainstream and never will be.
There is a large universe of tools people would love if only they would bother to learn how they worked. If only. Most people will just stick to whatever tools they know.
For most people the ability to go back and forward in time (linear history) is something they grasp immediately. Being able to go back in time and make a copy also requires no explanation. But having a version tree, forking and merging, having to deal with multiple timelines and the graphs that represent them -- that's where you lose people.
I wouldn't frame it as "complexity", I would frame it as "cognitive load". You can lower cognitive load despite having high complexity. For example, you could (and many companies have done so) build a user-friendly version management system and UI on top of git, which on its surface is just "version 1", "version 2", "version 2 (final) (actually)" but under the hood is using commits and branches. You can have submenus expose advanced features to advanced users while the happy path remains easy to use.
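That "linear surface over a version graph" idea can be sketched in a few lines. All names here (`VersionStore`, `save`, `checkout`) are invented for illustration — the point is that the storage layer is a commit graph with parent pointers, while the happy-path UI only ever shows a numbered linear list:

```typescript
// Under the hood: commits with parent pointers (a DAG, git-style).
interface Commit { id: number; parent: number | null; label: string; }

class VersionStore {
  private commits = new Map<number, Commit>();
  private head: number | null = null;
  private nextId = 1;

  // "Save" from the user's point of view; a commit under the hood.
  save(label: string): number {
    const id = this.nextId++;
    this.commits.set(id, { id, parent: this.head, label });
    this.head = id;
    return id;
  }

  // Power users could branch by checking out any old version...
  checkout(id: number): void {
    if (!this.commits.has(id)) throw new Error("unknown version");
    this.head = id;
  }

  // ...but the default UI just shows "version 1", "version 2", ...
  linearHistory(): string[] {
    const out: string[] = [];
    for (let c = this.head; c !== null; c = this.commits.get(c)!.parent) {
      out.push(this.commits.get(c)!.label);
    }
    return out.reverse().map((label, i) => `version ${i + 1}: ${label}`);
  }
}
```

The advanced submenu exposes `checkout`; everyone else only ever sees `linearHistory`.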
> Rule of thumb: every 10% increase in complexity cuts your potential user base in half.
I agree this is an accurate rule of thumb. However if the complexity lets users achieve more, then the complexity can earn its keep. Using version control is so beneficial that software engineers deal with the complexity. The ability to maintain a more complicated model in one's head and use it to produce more value is not something that all users are able to do. More sophisticated users can afford to use more complicated tools.
However the sophisticated users are reined in by network effects. If you want to work with people then everyone needs to be able to deal with the complexity. Programmers are more sophisticated than most office workers, which is why we ubiquitously version codebases, but not so much spreadsheets.
> This is why systems like hypercard and git are not mainstream and never will be.
We are moving towards a world where fewer humans are needed, and the humans that are needed are the most sophisticated operators in their respective domains. This means weaker network effects and less drag from unsophisticated users holding back the tooling. The weakest users drop off and the population average rises.
I would not be surprised to see an understanding of version control and other sophisticated concepts become commonplace among the humans that still do knowledge work in the next few years.
> But having a version tree, forking and merging, having to deal multiple timelines and the graphs that represent them -- that's where you lose people.
There's no good reason why that should be the case. E.g., one could imagine the guts of the "copy-pasting files" UI being a VCS. That would keep the original 100% of the userbase plus allow whatever percentage to level up if/when the need arises (or go "back in time" in the event of a major screw-up).
It's just that software UX in 2025 is typically very bad. The real axiom: the longer you run an application, the more likely it is to do the opposite of its intended purpose.
Oops, the word "stash" in git has an idiosyncratic meaning. That content has been removed from the history I was trying to keep. Fuck.
Oops, "Start" in Windows pauses interactivity and animation until ads are ready to be displayed in the upcoming dialog. Fuck!
Especially in the latter case, I don't think users are deterred by the cognitive load required to interact with the interface. It's probably more a case of them being deterred because the goddamned stupid thing isn't doing what it's supposed to.
In theory you can have these "zero cost abstractions" but in practice I don't think so. The user manual gets thicker. Concepts like 'delete permanently' and backup/restore get more complicated. Users will get confronted by scary "advanced users only" warnings in the interface. Some enthusiast blogger or youtuber will create content highlighting those advanced features and then regular users will get themselves in trouble. Customer support gets way more complicated because you always have to consider the possibility that the user has (unknowingly) used these advanced features. If you put buttons in the interface users will press those buttons. That's just a fact of life. Advanced features always come at a cost. Sometimes that cost is worth it, but only sometimes.
Part of the reason why so many people are disillusioned by AI: we are attempting to tame complexity that shouldn't exist in the first place.
I'm guessing a lot of the code that was getting written was verbose boilerplate, and automating that doesn't move the productivity needle all that much. It shouldn't have existed at all to start with.
I think the author's ideas are likely too complex for a wide audience, but they could be a game changer for those who can handle that kind of complexity.
It's still early, but we have a checkpointing system that works very well for us. And once you have checkpoints you can start dropping inconsequential transactions in between checkpoints, which, you're right, can be considered GC. However, checkpointing is desirable anyway; otherwise new users have to replay the transaction log from T=0 when they join, and that's impractical.
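A minimal sketch of that checkpointing idea (names are illustrative, and the "document" is just a counter for brevity): a checkpoint materializes the state at some point in the log, transactions older than the checkpoint can then be garbage-collected, and a newly joining client starts from the checkpoint plus the tail instead of replaying from T=0.

```typescript
// Toy "document": a counter; each transaction applies a delta.
interface Txn { seq: number; delta: number; }

interface Checkpoint { seq: number; state: number; }

// Materialize the state covered by the log up to `upTo`.
function makeCheckpoint(log: Txn[], upTo: number): Checkpoint {
  let state = 0;
  for (const t of log) if (t.seq <= upTo) state += t.delta;
  return { seq: upTo, state };
}

// GC: drop everything the checkpoint already covers.
function compact(log: Txn[], cp: Checkpoint): Txn[] {
  return log.filter((t) => t.seq > cp.seq);
}

// What a newly joining client does: checkpoint + tail, not replay from 0.
function join(cp: Checkpoint, tail: Txn[]): number {
  return tail.reduce((state, t) => state + t.delta, cp.state);
}
```

In a real system you'd only compact once every client has acknowledged the checkpoint, but the shape of the mechanism is the same.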
As long as all clients agree on the order of CRDT operations then cycles are no problem. It's just an invalid transaction that can be dropped. Invalid or contradictory updates can always happen (regardless of sync mechanism) and the resolution is a UX issue. In some cases you might want to inform the user, in other cases the user can choose how to resolve the conflict, in other cases quiet failure is fine.
Unfortunately, a hard constraint of (state-based) CRDTs is that merging causally concurrent changes must be commutative. I.e., it is possible that clients will not be able to agree on the order of CRDT operations, and they must still arrive at the same state after applying them in any order.
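That commutativity constraint is easiest to see with the simplest state-based CRDT, a grow-only counter (G-Counter): each replica increments its own slot, and merge takes the per-slot maximum. Merging is commutative (and associative and idempotent), so replicas converge no matter the order in which they see each other's states. A minimal sketch:

```typescript
// G-Counter: per-replica counts; the value is the sum of all slots.
type GCounter = Record<string, number>;

function increment(c: GCounter, replica: string): GCounter {
  return { ...c, [replica]: (c[replica] ?? 0) + 1 };
}

// Merge = element-wise max. Order of arguments doesn't matter.
function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [k, v] of Object.entries(b)) out[k] = Math.max(out[k] ?? 0, v);
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((s, v) => s + v, 0);
}
```

The cost of this property is that only certain operations (grow-only, last-writer-wins, etc.) fit the model, which is exactly why richer conflicts become a UX question.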
I don't think that's required, unless you definitionally believe otherwise.
When clients disagree about the order of events and a conflict results, clients can be required to roll back (apply the inverse of each change) to the last point in time where all clients were in agreement about the world state. Then all clients re-apply all changes in the new, now-agreed-upon order. Now all changes have been applied, there is agreement about the world state, and the process starts anew.
This way multiple clients can work offline for extended periods of time and then reconcile with other clients.
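The rollback-and-replay scheme described above can be sketched like this (illustrative names, and a counter stands in for the document so the inverse of an op is trivial): each client keeps the ops it applied locally, rolls the disputed suffix back newest-first, then re-applies everything in the newly agreed order.

```typescript
// Toy invertible operation on a counter document.
interface Op { id: string; delta: number; }

function apply(state: number, op: Op): number { return state + op.delta; }
function invert(state: number, op: Op): number { return state - op.delta; }

function reconcile(
  state: number,
  applied: Op[],        // ops this client applied, in its local order
  agreedPrefix: number, // how many of them everyone already agrees on
  agreedOrder: Op[],    // the agreed order for everything after the prefix
): number {
  // Roll back the disputed suffix, newest first.
  for (let i = applied.length - 1; i >= agreedPrefix; i--) {
    state = invert(state, applied[i]);
  }
  // Re-apply in the agreed order.
  for (const op of agreedOrder) state = apply(state, op);
  return state;
}
```

After `reconcile`, two clients that applied the same ops in different local orders end up at the same state; the hard parts in practice are agreeing on the order and making every op invertible.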
1. Planned, but our first focus is the web app (plus desktop Electron)
2. Yes. We have a bunch of default views like table, kanban, photo gallery, and calendar. You can also create your own views with a JS plugin, like this silly example of spinning globe view: https://x.com/wcools/status/1898828593255346287
3. Our aim is a full-featured todo app. But we won't have every feature on day 1.
In the short term a free open source govt alternative may be a net positive for society. I don't think it is in the long run. Government projects like these are not likely to really push the state-of-the-art forward. This project even advertises itself as a FOSS Notion alternative. Do government-sponsored clones encourage or stymie innovation? I think the latter.
Every week we read in the news that the EU struggles with entrepreneurship. That our tech industry is languishing. That the EU gets out-competed by the US on software and by China on everything else. Europe should be making industry-leading apps. Europe should produce software startups that make products that get used worldwide. EU subsidized clones of popular American products feels like admitting defeat.
I'm obviously biased because I'm also working on a product in this space. But if Notion developers must become farmers because innovation no longer pays that is a loss to the world in my book.
There are plenty of projects pushing the state of the art forward.
A very specific example: basically all interactive theorem proving tooling is built in public research labs. This has allowed CompCert, a C compiler with “no bugs”[0], to exist.
The CompCert case is interesting because private funding is also involved. Publicly funded research can still pull in private funds! We are not entirely throwing in the towel!
[0] “no bugs” here means “we have defined a spec for C, and this compiler is guaranteed to compile your C code according to the spec we defined, so long as your program terminates”. There’s some hand-waving around the theorem prover’s own validity, but all CompCert bugs have been of the “we miswrote a chunk of spec” variety.
Your whole argument is based on neomania: progress is always good and there is no point in working on something unless it advances the state of the art.
Certainly not. I don't believe progress is always good. But subsidies should be reserved for ambitious projects that push the state of the art forward. For those projects that realistically will not get funded commercially. CERN, for instance.
If that's true in a large organization, how do SaaS companies actually make a profit?
If you develop an in-house tool, you have very predictable user numbers so you can go on-prem versus cloud for the compute and save ~10x on that side.
You also have the benefit of being second, the other guys already did the hard work of UX research etc. and your in-house team just needs to replicate a slightly complicated CRUD app.
The one significant roadblock I can see is being able to put together the right team for the job. But cost-wise it has to be a no-brainer that in-house is cheaper.
They are putting their resources into the development of a product that can be universally shared and used. There is no favored party.
Also, I completely disagree about the "ambitious projects". I would actually favor the government leaving all the risky ventures to private enterprise, focusing only on tried-and-true developments, and making them universally available to its citizens.
>government projects like these are not likely to really push the state-of-the-art forward.
Why would it need to be state of the art? It needs to be stable and 'good enough'. This isn't rocket science, nor quantum mechanics -- this is literally a glorified CRUD app that focuses on documentation.
As of 2025, any US-based services are persona non grata for national-security reasons. Which other nation's services could the EU switch to that aren't from the US?
> Government projects like these are not likely to really push the state-of-the-art forward.
Well, if a government project can easily push you out, then you're not really state of the art.
> EU subsidized clones of popular American products feels like admitting defeat.
Governments need to think long-term. And one danger of relying on something like Notion is vendor lock-in. You can't easily migrate your data out of Notion, with all the rich content preserved (edit history, text comments, etc.)
The EU can try to mandate a common interoperability standard, but that takes years and the end result always ends up behind the state of the art.
The government could act like an immortal mega corp if it had the authority to do so. Such as pushing out competition via loss leaders. And as a bonus, with the government, every program can be a loss leader.
The funding potential for this pattern is constrained today, which is why government projects that compete with private industry are generally terrible. But, clearly, the money is there to be captured by this segment out of government funding generally, if the government is allowed to enter business directly.
The solid argument I see against allowing such actions is a slippery slope towards the above. Slippery slope arguments aren’t always correct, of course, but they aren’t always wrong either; they just point out a risk. Depending on one’s risk tolerance, it is wise to avoid slippery slopes when you can’t quantify just how steep it is.
One limiting factor: the government-produced software will be open source. So the barriers for innovation will be significantly lower for _everyone_.
Right now, I can't fix that one small bug in Notion that keeps bothering me. I have to raise an issue and hope that they add the API required to do that. In the case of open source base produced by the government, I can make a small (perhaps paid) add-on with that functionality.
Yeah totally I think this instance is fine too. I’m kind of speculating why some people seem to get a spooky feeling around stuff like this, even though on the surface it seems totally innocuous.
Government crowding out companies is absolutely a concern. I don't want the government running grocery shops or making video games.
But it works fine for infrastructure where competition is not only rare, but often is counter-productive, like for sewer and water delivery. Can this include software infrastructure? Maybe.
> Europe should be making industry-leading apps. Europe should produce software startups that make products that get used worldwide.
I've kind of lost hope when it comes to commercial services and proprietary apps. They're sadly all sooner or later enshittified. We need something different, not by promises but by design (FOSS).
> EU subsidized clones of popular American products feels like admitting defeat.
I think it's a fresh and needed take on the financing of our common digital infra.
Typically a FOSS community seems to take a while to get started, but once it gets going (Blender, Linux, etc) it tends to stick around and even seriously gain traction.
Maybe you are not building something in this sector, but do you have any idea how shitty collaborative work is for civil servants?
The possibility of data being siphoned back to the US if they use American cloud services leaves millions of civil servants unable to collaborate online.
Some of them try to provide on-premise versions of the software, but Microsoft wants you to pay for 365 or Teams so badly that they are willing to maintain only super old versions.
I spoke with a guy responsible for 100k civil servants who told me his only choice is to host SharePoint 2011 (in 2025!)
So maybe Docs is not as innovative as Notion, but hey, we need our public servants to be as efficient as they can be. And we will get there by providing modern tools they can use online with their colleagues.
+ When we think of Microsoft we think of the Office Suite, but in a lot of cases they also handle authentication with Active Directory. Good luck doing interoperability or SSO across agencies when all of them rely on closed-source code and are locked in by vendors...
We're actually solving this with an OIDC identity federation called ProConnect.
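For readers unfamiliar with what an OIDC federation buys you: every participating app kicks off login with the same standard authorization-code request, so any agency's identity provider can serve any agency's app. This is not ProConnect's actual API — just the standard request from OIDC Core, with placeholder URLs and a made-up helper name:

```typescript
// Build a standard OIDC authorization-code request URL.
// Parameter names (response_type, client_id, redirect_uri, scope,
// state, nonce) come from the OpenID Connect Core spec.
function buildAuthorizeUrl(opts: {
  issuerAuthorizeEndpoint: string;
  clientId: string;
  redirectUri: string;
  state: string;
  nonce: string;
}): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: opts.clientId,
    redirect_uri: opts.redirectUri,
    scope: "openid email",
    state: opts.state, // CSRF protection, echoed back on the callback
    nonce: opts.nonce, // bound into the ID token to prevent replay
  });
  return `${opts.issuerAuthorizeEndpoint}?${params.toString()}`;
}
```

The vendor lock-in point above is exactly this: with Active Directory the equivalent handshake is proprietary glue, while here any compliant IdP can sit on the other end.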