Back in 2014, I was rebuilding the core of an event processing engine. At the time, the decision was between Apache Kafka and rolling our own. After investigating Zookeeper, we decided to roll our own, choosing ZeroMQ as the messaging layer, since our on-prem customers probably didn't want to own and manage Zookeeper.
ZeroMQ was absolutely solid and stable; incredibly trouble free and the only issues we ran into were when IT teams didn't open the ports we documented in the configuration procedure. (The resultant architecture actually looks a lot like Flink)
In any case, ZeroMQ is a fantastic piece of technology that I feel like I don't see out in the wild quite enough. It's dead simple and incredibly stable from my experience.
The problem is that it started out as anything but stable and reliable. It asserted on received data, which in a network application is a super newbie mistake. When I looked at it, the pub/sub socket would hang forever if the other end interrupted the connection. So the ZeroMQ guide which said "look how easy it is" was only true if you ignored errors. If you are writing network code and ignore errors, well, good luck. That was a long time ago (~10yrs), so if it is better now, good for them. Also, both founders have left the project. One passed from cancer, the other didn't like what he built and started over in C. Not that they can't be replaced, but transitions can be hard and take time.
What I mean is it literally had an assert with incoming data as the parameter:
> assert(data_buf[4] < 8);
While your protocol might guarantee that data_buf[4] should always be a value less than 8, you don't use assert() to check it because it aborts the program if the check fails. The proper thing to do is a range check that returns an error for a protocol error (malformed data, etc.).
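To make the contrast concrete, here's a minimal sketch in C (the frame layout and the MAX_FRAME_TYPE constant are hypothetical, just mirroring the assert above):

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_FRAME_TYPE 8  /* hypothetical protocol rule: byte 4 is a type tag, 0..7 */

    /* Wrong: aborts the whole process on malformed network input.
       assert(data_buf[4] < MAX_FRAME_TYPE); */

    /* Right: validate and return a protocol error the caller can handle. */
    static int parse_frame_type(const uint8_t *data_buf, size_t len, int *type_out)
    {
        if (len < 5)
            return -EPROTO;                 /* frame too short */
        if (data_buf[4] >= MAX_FRAME_TYPE)
            return -EPROTO;                 /* malformed type tag */
        *type_out = data_buf[4];
        return 0;                           /* valid frame */
    }

A failed check becomes an error the caller can log or use to drop the peer, instead of taking the whole process down.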
ZeroMQ literally called assert and any bad data coming in over the wire would cause your app to abort. Insane.
I literally meant the library would call assert() on incoming data. I am fairly certain that has been removed for a long time, but it can be hard to get past first impressions.
At a company I work for, they decided to do the same around the same time. I believe it was the wrong call. Over time requirements grew, and we ended up bolting all kinds of Kafka features on top of the ZeroMQ thing, but of course much crappier. And in the meantime Kafka doesn't require Zookeeper anymore and is the de facto standard.
Of course, ZeroMQ and Kafka are two very different tools that serve different purposes and one needs to understand the tradeoffs.
For us, delivering an on-prem commercial off the shelf solution, it was untenable to expect the customer IT team to operate a separate, relatively huge piece of tech (remember, this is 2014). Maybe the heuristics would be different today with K8s and the advancement of Kafka. But ZeroMQ as an in-process, distributed messaging layer is dead simple. If your use case requires anything else on top of that, it's on the team to design the right solution for things like resiliency, statefulness, etc.
For a high throughput, distributed compute focused use case, I think ZMQ is still a great choice for coordinating execution. But Kafka and other options on the market now are great choices for higher order abstractions.
I worked on a similar greenfield project around the same time and we looked at RabbitMQ and Kafka, eventually going the RabbitMQ route. We were also developing an on-prem COTS product, and zookeeper played a big role in our decision to go with RMQ, not to mention at the time CloudAMQP had a very generous free tier (not so much anymore, but it's still okay-bordering-decent). No single install would ever hit the scale where Kafka makes sense, so I still think it was a good call pushing 10 years later.
A company I worked for had the same problem. Messages were being dropped, and no one on the backend team knew how to investigate, or wanted to. I was on the data team and we just had to deal with it.
I remember we had a similar use-case. This was for collecting live statistics from a sizable Varnish Cache cluster. We wrote an in-memory database to store the data. It's been chugging away for about 10 years now, and last I heard the zmq traffic alone was about 3Gbps with zero issues.
Aside from some scaling issues, it still is a great solution for having real-time insight into the performance of Varnish and the cache.
We even went as far as writing a Golang version of the VCS server to better handle some of the scaling challenges. I don't remember the exact library we're using to call into ZeroMQ, but the CGO overhead was minimal.
IIRC it does all I/O on separate background threads which means every operation actually goes through an inter-thread queue. Which can be good and bad.
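If you want to see where that lives in the API: the background pool is per-context and tunable. A tiny sketch with the libzmq C API (the pool size of 4 is arbitrary):

    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();

        /* All sockets in this context are serviced by a shared pool of
           background I/O threads; zmq_send()/zmq_recv() just move messages
           across an inter-thread queue. Default pool size is 1.
           Must be set before any sockets are created. */
        zmq_ctx_set(ctx, ZMQ_IO_THREADS, 4);

        /* ... create sockets and do work ... */

        zmq_ctx_term(ctx);
        return 0;
    }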
The problem with ZeroMQ is that it is highly opinionated about threading and concurrency, in a way that doesn't necessarily mesh well with other components of your application stack that have their own opinions.
There's nothing wrong with the opinions presented there -- but it really wants to own your whole threading / serving stack, and isn't really compatible with e.g. a Rust async tokio binary where tasks can "move around" between threads, etc.
Well, the original non-thread-safe sockets (e.g., ZMQ_SUB, etc.) were that way because they support multi-part messages. And yes, the fact that they are not thread-safe is a bit of a stumbling-block for ZeroMQ newbies (including me, back in the day).
In any case, newer socket types (e.g., ZMQ_CLIENT) have since been defined that are thread-safe, but necessarily don't support multi-part messages. (They tend to be different in other ways too -- e.g., ZMQ_RADIO/ZMQ_DISH are "sort of" replacements for the original ZMQ_PUB/ZMQ_SUB sockets, but come with their own constraints.)
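A quick sketch of the thread-safe flavor (note ZMQ_CLIENT/ZMQ_SERVER are draft APIs, so this assumes a libzmq built with draft support; the endpoint is made up):

    #define ZMQ_BUILD_DRAFT_API
    #include <zmq.h>
    #include <string.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();

        /* ZMQ_CLIENT sockets may be shared across threads, but each
           message is a single part -- no ZMQ_SNDMORE multi-part frames. */
        void *client = zmq_socket(ctx, ZMQ_CLIENT);
        zmq_connect(client, "tcp://127.0.0.1:5555");

        zmq_send(client, "hello", 5, 0);

        zmq_close(client);
        zmq_ctx_term(ctx);
        return 0;
    }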
If you have played around with GNU Radio, ZeroMQ is baked-in for communications with outside applications. I've played with it a bit, and found it to be very much fire-and-forget once you have it set up properly.
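For anyone curious what "baked-in" looks like from the consuming side: GNU Radio's ZMQ sink blocks publish raw sample buffers over a plain PUB socket, so an outside app just needs a SUB socket. A rough sketch (the endpoint and buffer size are whatever your flowgraph uses):

    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);

        /* Must match the ZMQ PUB Sink block's address in the flowgraph. */
        zmq_connect(sub, "tcp://127.0.0.1:5556");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);  /* all messages */

        char buf[65536];
        for (;;) {
            /* Each message is a raw buffer of samples, in whatever item
               type (float, complex, ...) the sink was configured with. */
            int n = zmq_recv(sub, buf, sizeof buf, 0);
            if (n < 0)
                break;
            printf("received %d bytes of samples\n", n);
        }

        zmq_close(sub);
        zmq_ctx_term(ctx);
        return 0;
    }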
That's cool. I've played with GNU Radio some but wasn't aware it used ZeroMQ. Actually, I've used several *MQs but not ZeroMQ; guess I should look into it more.
I wonder if this is one of those problems Erlang and the Beam VM solve out of the box. Native clustering, message passing from one node to the other, no need for a separate dependency... That's the same design, isn't it?
Disclaimer: I don't work with any of that tech right now, but I'm looking forward to working with Elixir, which is in the same ecosystem.
Having used both (from Python) I found NATS much better. ZeroMQ in particular caused a lot of problems around the HWM (High Water Mark) limit for me or around process starting order (who creates what channel). Having a separate server helps with the last problem.
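For anyone who hasn't hit it: the HWM is a per-peer cap on queued messages, set per socket, and what happens when it's reached depends on the socket type (a PUB socket silently drops, a PUSH socket blocks). A minimal sketch of where the knob sits:

    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *pub = zmq_socket(ctx, ZMQ_PUB);

        /* Send high-water mark: max messages queued per subscriber
           (default 1000). Must be set before bind/connect. Once a slow
           subscriber's queue is full, a PUB socket silently drops. */
        int hwm = 100000;
        zmq_setsockopt(pub, ZMQ_SNDHWM, &hwm, sizeof hwm);
        zmq_bind(pub, "tcp://*:5557");

        /* ... publish ... */

        zmq_close(pub);
        zmq_ctx_term(ctx);
        return 0;
    }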
ZMQ for me fills a very specific use case of needing high-throughput, _in-process_ distributed messaging.
I think once the _in-process_ constraint is lifted -- you can install or rely on an external messaging server -- then the field becomes much wider in terms of solutions to pick from.
BTW, the way we solved a similar HWM issue is that we decoupled the ingestion of events from the distribution of said events (with ZMQ). So one process was ingesting events and would send them to a coordinator process that would then send events to processing nodes. The coordinator would reply with its current queue size, and the ingest process would back off once it reached a threshold. This allowed us to stop/start any node in the cluster in any order.
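The ingest side of that scheme might look roughly like this (a sketch only; the endpoint, threshold, and reply format are made up for illustration):

    #include <zmq.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    #define BACKOFF_THRESHOLD 10000   /* queue depth where we slow down */

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *coord = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(coord, "tcp://coordinator:6000");

        const char *event = "example-event";
        for (;;) {
            /* Ship one event; the coordinator's reply carries its current
               queue depth, which doubles as a back-pressure signal. */
            zmq_send(coord, event, strlen(event), 0);

            uint32_t queue_depth = 0;
            if (zmq_recv(coord, &queue_depth, sizeof queue_depth, 0) < 0)
                break;

            if (queue_depth > BACKOFF_THRESHOLD)
                usleep(100 * 1000);   /* back off for 100 ms */
        }

        zmq_close(coord);
        zmq_ctx_term(ctx);
        return 0;
    }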
Sure, to get the most out of NATS you should run it as a server (which frankly isn't difficult, as it's a single Go binary).
But you can embed it if you so wish. Indeed, if you look at the NATS source code[1], that is exactly how the NATS team does their testing. They use an embedded NATS server in their test routines.
Exactly, zmq is far more flexible in deployment options because of this.
We (at work) use ZMQ + our own special sauces to run vehicle automation messaging stuff. Running some enterprise-ish broker in that environment just isn't on the menu.
Found Pieter and his books a couple years ago and they made a big impact. The Collective Code Construction Contract is a brilliant piece of community engineering and should be adopted/adapted by every FOSS project on the planet. We can accomplish so much more together if we have C4 and similar methods of coordination. Next time you hear of a burnt-out maintainer, have them read https://rfc.zeromq.org/spec/42/
They legally have to ask every contributor. The exception/loophole used by big companies (but also the FSF, for example) is that every contributor has to sign a CLA where they legally reassign ownership of their code to the project owner.
> If you believe in software freedoms, then there will never be any reason to need to relicense, nor would you want to.
The Tivo-ization process of the 90s shows that while this might be frequently true, it isn’t without exception. From a practical standpoint, continuing to provide for user freedom would have been best accomplished (personal opinion) if many projects had been able to move to a more AGPL style license.
Yeah, the entire security posture of Android would be massively different if Linux could have been relicensed away from GPL 2.0 to a license that says "you have to give users a way to compile your code and install it". Then the community could fix old phones that no longer get security updates.
> For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
My amateur understanding is that the major kernel copyright holders are essentially comfortable with Tivoization and aren't looking to rock the boat with a lawsuit.
What if there was an extreme license that simply said you have to share it upon request from anyone, even private versions? Ignoring whether that's annoying or whether it's enforceable, would that be non-free?
I've seen an argument that the particular way the AGPL is worded makes it non-free, which seems pretty plausible, but I don't think that's an argument against "a more AGPL style license".
Most of the world is developing closed-source software.
When you publish an open-source project, people are going to assume you want your project to be open source. This often provides an enormous boost to the project, as people are way more willing to contribute to a collaborative community project than just donating time to some for-profit company.
I am totally fine with companies making proprietary for-profit software, but don't leech off the open-source community by pretending to be something you are not. I am at a point where I assume any company-backed project with a CLA is going to do a bait-and-switch as soon as that becomes the more profitable option. Remember kids: corporations are not your friend.
Isn't this very topic, the relicensing of ZeroMQ, a proof that there is a need and desire to relicense by an organization that believes in software freedom?
People constantly make this comparison, but it's stupid: the Linux kernel's DCO is just a CLA by a slightly different name with a slightly different signatory procedure; giving it a slightly different acronym doesn't make it something else. The very fact it's mandatory makes the exact opposite case: the world's most popular free software project sees it as important. Putting a Signed-off-by is literally a legal statement that you have the right to contribute the given changes to the project, and that you affirm the right for them to be redistributed. This is exactly what most CLAs do; most don't assign or transfer ownership or copyright in any way, because it isn't necessary.
Ironically, despite all the (unequivocally 100% wrong) yammering about this topic on places like this forum, many of the bigger "evil" companies like Meta and Google don't require transfer of copyright to contribute to their FOSS projects, while places like the FSF do require it so they can relicense under potential future FSF licenses e.g. a practically stronger version of the GPL 3's "or later versions" clause. And there are even more agreements like the FSFe's FSA that can stipulate exactly a fixed set of licenses that might be used in the future, as a sort of middleground.
The one time I was asked to sign a CLA, it wanted me to guarantee patent indemnification as well, forever. Not only is this inadvisable, there was no way the legal counsel of my employer would permit it.
Seen this quite a few times as well, but we’ve managed to strike the indemnification clause from various third party CLAs by putting our legal teams in touch.
This is why there should be an easier way to auto-sign CLAs, or better yet, programmatically declare that all my contributions are CC0/public-domain, so nobody has to contact me to find out that my code doesn't come with strings attached.
As the author of the changes, I thought you could license them however you wish. What you're contributing is basically a diff; I don't think that counts as a derivative work, as you wrote all the content. If you distributed your change with the original repo, that sounds more like a derivative work to me.
What you're describing would make end-running the GPL absurdly easy. Vendors would just distribute vanilla Linux source code and in another file distribute their not-a-derivative patches under their own proprietary license.
Patches are obviously a derivative work. No one spontaneously describes deleting several lines of code and then replacing them with other lines of code, except in reference to the original.
My understanding is that that would be legal. The problem for those companies is they can't build and distribute a Linux kernel that contains those patches, because then that is a derivative work. So in practice they have to release their changes under the GPL as it's not feasible to ask users to compile their own kernel with their custom patches.
Yes, most also require confirming that you're allowed to make contributions. That's because most tech companies make you sign "everything you do on the clock is ours" documents; the CLA here is meant to protect the project from your employer claiming your contribution wasn't actually yours to make.
CLA’s don’t fully transfer ownership (hence the L) and don’t unilaterally allow changing the software license. Though that does of course depend on the nature of each CLA since there’s no singular contract associated.
A CTA (transfer vs license) does allow unilateral license changes after the fact.
Ironically, a CLA backed by a nonprofit like the FSF is probably one of the few ways to do it properly. At least they are guaranteed to act in the interest of the community.
The other "loophole" is not to relicense, but instead for a corporation to make their own future contributions with a different license. This doesn't work too well from GPL-like licenses, but is fine for file-based copyleft.
The people contacted were those who made major modifications, not every single contributor. And yeah, if a contributor doesn't agree to the new license, you can either skip the license change or remove the code they wrote.
Sometimes companies who do "FOSS" make you sign some sort of agreement that they own the code you produce and you won't have any say about re-licensing, so maybe the projects you're thinking about have done that?
IANAL, but if the licenses aren't materially compatible (which perhaps is the case for LGPL and MPL; I don't know the MPL well enough to say), I would imagine this is the only legal way to do it. When I contribute to a project, I release my code under the agreed-upon license. You can't change that without my permission.
Yes very common. I’ve helped some projects relicense in the past (because the original developer didn’t actually understand the license they chose) and it’s arduous having to contact each individual (and sometimes companies) to relicense.
You can do license changes without that however as long as:
1. You license per file in the repository. This can be quite arduous but many projects will move the old licensed stuff into a sub project to make that easier to grok.
2. Your new license is compatible with the old license.
Yeah. This is how Squeak changed the license from SqueakL to Apache and MIT. That code has a lot of history, so it was a pretty big effort, but worth it in the end.
Fun ZeroMQ fact: bitcoind has used zeromq since 2015. This is used by client software including lightning node daemons to get notification of incoming transactions and blocks in a highly reliable and rapid way.
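A client listening for new blocks is just a SUB socket. Sketch below, assuming bitcoind was started with -zmqpubhashblock=tcp://127.0.0.1:28332 (the port is whatever you configure):

    #include <zmq.h>
    #include <stdio.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);

        zmq_connect(sub, "tcp://127.0.0.1:28332");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "hashblock", 9);

        for (;;) {
            /* bitcoind publishes multi-part messages:
               topic, payload (32-byte block hash), 4-byte sequence no. */
            char topic[16], hash[32], seq[4];
            if (zmq_recv(sub, topic, sizeof topic, 0) < 0)
                break;
            int n = zmq_recv(sub, hash, sizeof hash, 0);
            zmq_recv(sub, seq, sizeof seq, 0);
            printf("new block, %d-byte hash received\n", n);
        }

        zmq_close(sub);
        zmq_ctx_term(ctx);
        return 0;
    }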
Oh wow! It's been a while since the last release; in the back of my mind I had the idea that the project was slowly dying because of Pieter Hintjens' untimely demise... good to see it's still kicking. Very solid software, an absolute joy to work with.
This. I have read the guide a few times through the years; it's such an interesting dive, especially if one follows the programming exercises... I must admit that any time I work with Tokio or other message passing libraries I end up doing some sort of Majordomo. I do not know if this is good, but MDP got stuck in my mind for better or for worse... For example, with Tokio mpsc I end up creating a Majordomo coordinator, clients, and services in separate threads. I don't know if this is good tbh, but it's just so easy to split (and so easy to shoot myself in the foot with synchronizing shutdown across threads).
Do you know of other guides like this?
The main difference is that the LGPL works at the "library" level, and requires that the source for the library be open. In particular, it requires that you can replace the library, which means you can't really statically link to it from a closed-source program (or an open-source one under an LGPL-incompatible license).
The MPL works at the "source file" level. You have to release any changes you make to MPL-licensed files, but you can link those files into a closed-source program any way you like.
So I could compile a program and dynamically link an LGPL library and that's fine, but the moment I statically link the same program I'd be violating the LGPL. I never considered that; doesn't that make the LGPL a pretty poor license for pretty much anything?
Not necessarily, because it means LGPL libraries are suitable for use with non-free programs. As the library user, this is good, because you potentially get access to free libraries that you'd otherwise be unable to use. (GPL libraries are off limits for non-free programs.) As the library author, this is also good, because your potential user base is widened, but your library can remain free software.
(Of course, you could equally spin it that these are bad things - an exercise for the reader.)
Fwiw a lot of legal scholars have had their doubts on the virality of the GPL when it comes to dynamic linking.
The FSF takes a pretty major logical leap by considering dynamically linking a work to a GPL library to be creating a work that falls under the GPL.
Both EU and US scholars doubt that mere dynamic linking constitutes making a derivative work. (Specifically for the US, Galoob v. Nintendo ruled that a derivative work "must incorporate a portion of the copyrighted work in some form"; which obviously isn't the case with dynamic linking. - Legal scholars in the EU have come to similar conclusions when it comes to the various EU copyright directives.)
Generally speaking it's untested enough ground to kinda avoid the GPL for this usecase anyway, but the FSF's Legal FAQ presents things as fact in a way mostly only benefitting their cause.
Could you give a few examples of what you mean as "a lot of legal scholars have had their doubts"?
A key question is whether different aspects of a work should be considered separate independent works communicating with each other, or a single copyrighted work. In games, people often talk about DLL files (in terms of modding), game content like images, video and sound, game engines, and game and server code. How much and what aspects can be modified without the permission of the copyright holder?
There are generally three arguments I have heard in favor of a "single work". One is that everything will eventually be copied into memory, and thus, while the parts may have individual copyrights on their own, the combined work which the author calls "The Game" is a single work.
The second argument is that all these technical details don't matter to a judge or jury. What matters is what those people perceive as a single work. Technical aspects, like whether the copying arrived there through the internet, a CD, a DLL file, or what have you, aren't that important in answering the question of a single work vs. interoperability between different independent works. It is all about the experience for the end-user.
The third argument I hear is that DLL files, or programs that depend on them, are not independent. One cannot run them independently, they are generally not developed independently, nor can the "single work" even start if parts are missing. Putting code into a DLL is just a form of splitting the work into multiple files for technically convenient reasons, which is not a basis for a legal distinction between a single work and multiple independent works. If the technical aspect alone could do this, then anything sent over the internet would lose copyright, since content is split into thousands of IP packets which individually might not be large enough to be copyrighted.
Main ones I know of (being European) are, well, EU ones. The main one I'm deferring to is the EU's own words[0] on the matter, which explain that "the parts of the programs known as ‘interfaces’, which provide interconnection and interaction between elements of software and hardware may be reproduced by a legitimate licensee without any authorisation of their rightholder".
As for a realistic example of where this applies, I'd pick the age-old GNU Readline library. Readline is infamous for being a standard library on Linux distros (because it's a bash dependency) that is easy to accidentally link into a C project, and which is licensed under the GPL. The FSF, from what I can tell, loves to parade this library around as a way to "gotcha" developers; it's to the point where even Readline's Wikipedia article mentions this[1].
In the case of EU law, this is just straight up not an issue[2]. As long as you're not distributing your software with Readline, but rather with a dynamic link to Readline's .so file (which for Linux can be safely assumed, since it's a bash dependency and the overwhelming majority of Linux PCs have bash installed), Readline's license doesn't apply, since a user can just supply their own library, and as long as it's compatible, it will work. It's hard to argue someone is distributing Readline or making a derivative work from Readline just by linking against its public API.
To put it in a slightly different form: the idea of linking creating a derivative work has to stop somewhere, because otherwise the Linux kernel itself would force every program ever written for it to be under GPL-2.0-only (which obviously isn't true, not even in US law from what I can tell), since every Linux program is technically a derivative of the kernel. The EU's interpretation seems to be that it ends exactly at the moment the code in a program stops being run from the files with which it's distributed.
---
Game mods are probably split down the middle, if we just look at them "as code" (so without going into asset patches -- those would probably be a derivative work regardless; I'm thinking here of, say, editing a loot table in a game, basically just number tweaks). Games with officially supported mod loading can likely claim that mods are plugins, which would make them derivative works. That said, most recent games don't ship with mod loaders and rely on patching a DLL file shipped with the game[3], which would likely make an individual mod not a derivative work, given it's just an interface re-implementation with user-defined side effects.
Then you have the really old-school IPS files, which are straight up binary patches. I have absolutely no clue how those fit into the mix, given an IPS patch is literally a series of data offsets plus what data to dump at those bytes. Those mostly apply to old ROMs though, since IPS patches were abandoned due to inherent size limits plus a magic word bug.
---
That said, it's ultimately important to keep in mind that law isn't computer code. It's not the case that if function foo takes argument bar and produces result foobar, you always get result foobar with the law[4]. Not even in the US, which almost always defers to precedent ("case law"), is that the case, and even less so in the EU, where precedent is just treated as another argument rather than something to defer to. There are a zillion edge-cases to each example, and a judge can rule differently in the end for most situations.
This is simply what the EU has written on the matter and from what I know about CJEU rulings, the CJEU tends to side on the interpretation that unless the goal is extremely blatant copyright violation, it's probably fine.
The EU law, also linked in the sibling comment, has some direct conditions. Interoperability of interfaces is allowed on the condition that it does not prejudice the legitimate interests of the copyright holder, does not conflict with a normal exploitation of the program, and is independent. This is likely the reason why Linus Torvalds has said that non-free drivers may be legal in some cases and not in others, and that it depends on the exact details.
A driver that was itself intertwined with the internals of the kernel, or that conflicted with the exploitation of the Linux kernel, or with the legitimate interests of the copyright holders, might be a derivative work. The Linux kernel has published clear boundaries for this (https://www.kernel.org/doc/html/latest/process/license-rules...), like the syscall interface. Drivers that do not respect those boundaries may be more likely to fall outside of the EU law, but as with most legal discussions it would be a sliding scale.
Readline is one of those odder theoretical cases where the specific main work is the library itself. This makes questions like "legitimate interest of the copyright holder" and "normal exploitation of the program" a bit more complex. It would, however, never become a real legal case, since anyone accused of infringement can just replace the small library with a compatible alternative and stop distributing the GNU Readline library. Since compliance is generally the goal for the FSF, I doubt they or any company would be willing to spend money on lawyers to fight over it.
In contrast, Unity sells their game engine library, so companies that tried to bypass copyright (by not paying) and "link" their games against existing installed versions on users' computers would likely still end up in court. My money would also be on Unity winning that battle.
That article makes an interesting argument. If the code is for the purpose of interoperability, and the use does not prejudice the legitimate interest of the copyright holder, and it does not conflict with a normal exploitation of the program, and is independent software, then it may not be a derivative work.
It would take a very special situation for a company to rely on fulfilling all those conditions as a legal defense.
> GPL libraries are off limits for non-free programs
But since GPL and LGPL are compatible licenses, can't the non-free author just fork the GPL to an LGPL version and then use it? A bit of inconvenience and technicality involved but still a workable workaround.
No, this would undermine the entire point of the GPL. The compatibility is one way.
In an extremely simplified version, LGPL says you can only use the software if you guarantee A and B, while GPL says you can only use the software if you guarantee A, B, and C. Since {A,B} is a subset of {A,B,C}, licensing the LGPL software under something that requires A, B, and C guarantees A and B and so is fine by the LGPL. However, since the LGPL doesn't require you to guarantee C, then licensing software under the LGPL will not maintain all the requirements you must maintain to use GPL software.
Good point. I think it's a good thing in some way as it will force the downstream (or end user) of that library to also go GPL and not a non-free license. I think more and more software should be produced as FOSS anyway and we should move there using both advocacy and FOSS licensing.
No, it makes it quite usable for dual licensing scenarios.
Those that want to take the work of others for free get the same payment that they are willing to give upstream developers.
Otherwise they can dynamically link it and take it as it is, or if it doesn't suit them, pay for the commercial license instead, and share their gold coins with upstream.
That does make a difference though, because if you dynamically link it the user can modify the library and put it back in place of the original. Being able to modify things is one of the main aims of the GPL
>> You can link statically, see my reply to parent.
You can statically link code under the equivalent version of the GPL. The point of the LGPL was to compromise so non-free software could still use free libraries. I was unaware of the static compilation aspect of the MPL -- that's interesting.
I think the other commenter is wrong. You are restricted from statically linking LGPL code into non-free programs unless you follow the requirements in section 4(d) of the license.
If you statically link against an LGPLed library, you must also provide your application in an object (not necessarily source) format, so that a user has the opportunity to modify the library and relink the application.
I'm being more than a bit mean here, in my attempt to be slightly amusing, because there's actually an obvious workaround: dynamically link with the LGPL library, like people usually do, and then it's nice and easy. The sort of systems where this would be difficult are the sort of systems where actually distributing a GPL program is just going to be annoying anyway, and you're probably better off not even trying.
But it is actually an interesting idea! I assume for the average program you'd be including more of your symbol information than you might like, though, as object files have to find their external symbols somehow! (I imagine LTCG will add a lot of additional information as well. All this would add up to a lot of useful info that would assist in the sort of reverse-engineering effort that proprietary software vendors would like to make more tedious, rather than less so.)
But, if you really wanted to do it, and didn't mind putting a bit of effort in, you could probably do something. An enormous non-LTCG translation unit containing all of the code, probably.
Improved interop with other open source programs, because they used to have a 'one-off' license. That makes using ZeroMQ harder than it should be, because there is always the question of whether their license is more or less permissive than the one you've picked for your project, which risks future unintended side effects. This move should lay all that to rest.
Lawyers have mostly looked at the common open source licenses long enough to know what they say: what is compatible with what, what is compatible with their [corporate/client] other needs, and otherwise how they work. While they have mostly not been tested in court, there's general consensus that they will hold up somehow (nobody knows for sure, and there are a couple hundred countries that could each decide their own thing).
When you write your own license, though, lawyers need to figure everything out for just your project.
How does ZMQ compare to recent message passing libraries? Does it have a place in Rust's tokio ecosystem? Are there projects that implement the HA transaction-store-to-disk protocol "out of the box"? Forgot the name... Binary Star something? I always think of working with ZMQ, as I really enjoyed it, but I can't see it being as easy to deploy as Kafka, with so much ACL tooling, easy auth and debugging, dashboards, metrics...
Maybe one more question... What are recent projects that use ZMQ behind the scenes that would be a good place to learn more about message passing techniques and their tradeoffs?
Did you actually work seriously with / inside 0MQ?
It's indeed an intuitive library, but it's far, far from perfect, which is why it has been rewritten countless times: by Sustrik (crossroads), then by D'Amore (nanomsg).
The state machine in 0MQ is a nightmare of maintenance and a constant source of tricky bugs. The overall "all sockets in one" design makes things such as reconnection and peer identification pretty much impossible.
Don't get me wrong, there's a lot of good in 0MQ. But calling it a jewel that will be in use for decades is far from reality.
Worked on a team that used ZeroMQ for a trading system. It worked well, but you can get into trouble if you do not read the guide and get a good grip on what guarantees ZeroMQ provides.
No. I wanted to say that it is very easy to shoot yourself in the foot with zeromq if you put no thought into designing your protocols.
It's a task that you have to do, since the basic guarantees of 0MQ are almost never enough; this is very well explained in the guide. But you have to read it carefully.
I need to look, but I think one of my clients might owe me 6 figures for this one... I'll never get paid, but I'm definitely going to write the invoice.
I talked with Pieter at some length about his cancer, which he was very open and forthcoming about, and his ultimate decision for euthanasia. He was very stoic about the whole ordeal. He was somewhat depressed about it to start, but eventually came around to realizing the accomplishments and contributions he made throughout his life in both the technical realm and outside.
I say all that to mean that anyone being sad about his death is something that Pieter would not have wanted.
I was at his wake party in Brussels. He did that before he died, so he was there. I only have the most positive memories, even though it was complicated. To give you all a bit of insight into his wonderful way of reasoning with his end: there was a T-Shirt we all could sign. It read „I was at my own wake party and all I got was this lousy T-Shirt“. That was Pieter. I can never NOT end with a smile on my face whenever I think about him.
One of the things I am proud of is to be listed as a contributor in the ZeroMQ O'Reilly (paper) book -- just because Pieter was this generous dude who never doubted about spreading good will. I still miss him.
I met him once at a conference in 2013. One of the most fun and human people in greater tech I had the honor to talk with in person. I can highly recommend his writings.