But invoking No True Scotsman would imply that the focus is on gatekeeping the profession of programming. I don’t think the above poster is really concerned with the prestige aspect of whether vibe bros should be considered true programmers. They’re more saying that if you’re a regular programmer worried about becoming obsolete, you shouldn’t be fooled by the bluster. Vibe bros’ output is not serious enough to endanger your job, so don’t fret.
I’m currently engineering a system that uses an actor framework to describe graphs of concurrent processing. We’re going to a lot of trouble to set up a system that can inflate a description into a running pipeline, along with nesting subgraphs inside a given node.
It’s all in-process though, so my ears are perking up at your comment. Would you relax your statement for cases where flexibility is important? E.g. we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Data flow graphs could arguably be called structured concurrency (granted, of nodes that resemble actors).
FWIW, this has become a perfectly cromulent pattern over the decades.
It allows highly concurrent computation limited only by the size and shape of the graph while allowing all the payloads to be implemented in simple single-threaded code.
The flow graph pattern can also be extended into a distributed system by giving certain nodes side effects that transfer data to other systems running in other contexts. This extension does not require any particularly advanced design changes and, most importantly, the changes are limited to just the "entrance" and "exit" nodes that communicate between contexts.
I am curious to learn more about your system. In particular, what language or mechanism do you use for the description of the graph?
We’re using the C++ Actor Framework (CAF) to provide the actor system implementation, and then we ended up using a stupid old protobuf to describe the compute graph. Protobuf doubles as a messaging format and a schema with reflection, so it lets us receive pipeline jobs over gRPC and then inflate them with less boilerplate (by C++ standards, anyway).
Related to what you were saying, the protobuf schema has special dedicated entries for the entrance and exit nodes, so only the top-level pipeline has them. Thus the recursive aspect (where nodes can themselves contain sub-graphs) applies only to the processor-y bit in the middle. That let us encourage side effects to stay at the periphery - sneaking one into the middle is still possible in principle, but at least the design gently guides you away from it.
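To make that concrete, here's a rough sketch of the shape such a schema might take - this is a hypothetical illustration, not our actual schema; all the message and field names are made up:

```proto
syntax = "proto3";

// Top-level pipeline: I/O side effects live only at the entrance and exit.
message Pipeline {
  Node entrance = 1;       // receives data from outside this context
  Node exit = 2;           // ships results back out
  Graph body = 3;          // pure processing in the middle
}

message Graph {
  repeated Node nodes = 1;
  repeated Edge edges = 2;
}

message Node {
  string name = 1;
  oneof kind {
    Operator op = 2;       // a leaf business operator
    Graph subgraph = 3;    // the recursive bit: a node may itself be a graph
  }
}

message Edge {
  string from = 1;
  string to = 2;
}

message Operator {
  string type = 1;         // resolved via reflection at inflation time
}
```

Because only `Pipeline` carries `entrance` and `exit`, the recursion through `Node.subgraph` can never reintroduce them further down - which is the "gentle guidance" in schema form.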
After having created our system, I discovered the Reactor framework (e.g. Lingua Franca). If I could do it all over, I think I would have built using that formalism, because it is better suited for making composable dataflows. The issue with the actor model for this use case is that actors generally know about each other and refer to each other by name. Composable dataflows want the opposite assumption: you just want to push data into some named output ports, relying on the orchestration layer above you to decide who is hooked up to that port.
To solve the above problem, we elected to write a rather involved subsystem within the inflation layer that stitches the business actors together via "topic" actors. CAF also provides a purpose-built flows system that sits on top of the actors, which lets us write the internals of a business actor in a functional, Rx-ish style. When all is said and done, our business actors don't really look much like actors - they're more like MIMO dataflow operators.
When you zoom out, it also becomes obvious that we are in many ways re-creating gstreamer. But if you’ve ever used gstreamer before, you may understand why “let’s rest our whole business on writing gstreamer elements” is too painful a notion to be entertained.
Since you still have C++ involved, and if you are still looking for composable dataflow ideas, take a look at TBB's "flow_graph" module. Its graph execution is all in-process while what you describe sounds more distributed, but perhaps it is still interesting.
> we don’t want to write one particular arrangement of concurrent operations, but rather want to create a meta system that lets us string together arbitrary ones. Would you agree that the actor abstraction becomes useful again for such cases?
Actors are still just too general and uncontrolled, unless you absolutely can't express the thing you want any other way. Based on your description, have you looked at iteratee-style abstractions and/or something like Haskell's Conduit? In my experience those are powerful enough to express anything you want to (including, critically, being able to write a "middle piece of a pipeline" as a reusable value), but still controlled and safe in a way that actor-based systems aren't.
As Douglas Adams and xkcd #1227 have pointed out, the older generations have complained about this sort of thing since Plato. However, I do not believe this observation settles the matter, because it does not seriously contend with the null hypothesis: that we really have been steadily enshittifying the human experience since Plato.
Who has the right of it? Do the new generations simply not know what they are missing? Or is there something in human nature that makes us unavoidably crotchety as we get older and, thus, not to be taken too seriously? In my opinion, it is simply an open mystery.
On the one hand, many tangible, measurable things have improved over the last 2000 years, or, indeed, since the 90's. Steven Pinker has made this point somewhat convincingly by looking at unambiguously positive things like reduced infant mortality.
On the other hand, every single generation can give detailed accounts of how much more real and alive and authentic the world was a few decades ago. The accounts have similarities across the generations, but they are also rooted in specifics. To argue that we're all mis-remembering or failing to appreciate what the new decade has to offer is to insist on a rather fantastic level of self-doubt. If our entire lived experience is this untrustworthy, it kind of makes it impossible to rule on anything - good OR bad. Why should we default to trusting the younger generation?
I think the surrounding technological context of our age has brought this long-simmering matter to a boil. Now that our electronic communication is so sophisticated that we can essentially build "anything", it starts to re-focus our attention from "CAN we build it" to "SHOULD we build it". This question about digital society is complementary to the broader, long-standing civilizational question. Have the trillions of hours the human race has expended shaping our society resulted in _better_ life, or just life with a deeper tech tree?
One novelty of our time is how certain human enterprises play out at 10x speed in cyberspace. This lets us watch the entire lifecycle as they Rise and Fall, over and over. This lends perspective, and allows patterns to emerge. Indeed, this is exactly how Doctorow came to coin the term enshittification. If there's any truth to the life-really-is-getting-worse theory, you'd want to find some causal mechanism - some constant factor that explains why we've been driving things in the wrong direction so consistently. Digital life lets us see enough trials to start building such an account. You can imagine starting to understand the "physics" of why all human affairs eventually lead to an Eternal September. Wherever brief pockets of goodness pop up, they are like arbitrage opportunities: they tautologically attract more and more people trying to harvest the goodness until it's pulverized - a tragedy of the commons. Perhaps some combination of population growth and the inevitable depletion of Earth's natural resources leads to such a framework.
Whatever you think about it, I mostly just wish people would acknowledge that it is an unresolved debate and treat it as such. It is critical to understanding what it is worth spending our time on, and it is the kernel of many comparatively superficial disagreements (e.g. the red-blue culture war in US politics).
> On the other hand, every single generation can give detailed accounts of how much more real and alive and authentic the world was a few decades ago.
I don’t know that this is true, and I doubt it meant the same thing to Plato as it did to us; I read somewhere that in ancient times, nostalgia would have been for the world of the gods, not a specific time and place.
The thirties and forties were probably not more alive and authentic than the fifties and sixties. The 1920s were a cultural peak that retreated until the 1960s, and prior to the Industrial Revolution, things didn't change fast enough for decades to be significant units. The original documented example of nostalgia was soldiers nostalgic for home, not explicitly for their youth or another time.
All these feelings we file under "nostalgia" are going to hit different without shared cultural experiences, and amid a changing technological and aesthetic context.
You're saying it's _rare_ for developers to want to advance a dependency past the ancient version contained in <whatever the oldest release they want to support>?
Speaking for the robotics and ML space, that is simply the opposite of a true statement where I work.
Also doesn't your philosophy require me to figure out the packaging story for every separate distro, too? Do you just maintain multiple entirely separate dependency graphs, one for each distro? And then say to hell with Windows and Mac? I've never practiced this "just use the system package manager" mindset so I don't understand how this actually works in practice for cross-platform development.
Check it in and build it yourself using the common build system that you and the third party dependency definitely definitely share, because this is the C/C++ ecosystem?
I find this sentiment bewildering. Can you help me understand your perspective? Is this specifically C or C++? How do you manage a C/C++ project across a team without a package manager? What is your methodology for incorporating third party libraries?
I have spent the better part of 10 years navigating around C++'s deplorable dependency management story with a slurry of Docker and apt, which had better not be part of everyone's story about how C is just fine. I've now been moving our team to Conan, which is also a complete shitshow for the reasons outlined in the article: there is still an imaginary line where Conan lets go and defers to "system" dependencies, with a completely half-assed and non-functional system for communicating and resolving those dependencies which doesn't work at all once you need to cross compile.
For most C and C++ software, you use the system packaging which uses libraries that (usually) have stable ABIs. If your program uses one of those problematic libraries, you might need to recompile your program when you update the library, but most of the time there's no problem.
For your company's custom mission critical application where you need total control of the dependencies, then yes you need to manage it yourself.
Ok - it sounds like you’re right, but I think despite your clarification I remain confused. Isn’t the linked post all about how those two things always have a mingling at the boundary? Like, suppose I want to develop and distribute a c++ user-space application in a cross platform way. I want to manage all my dependencies at the language level, and then there’s some collection of system libraries that I may or may not decide to rely on. How do I manage and communicate that surface area in a cross platform and scalable way? And what does this feel like for a developer - do you just run tests for every supported platform in a separate docker container?
Genuine question - are there examples (research? old systems?) of the interface to the operating system being exposed differently than a library? How might that work exactly?
> examples ... of the interface to the operating system being exposed differently than a library
Linux syscalls, MS-DOS 'software interrupts'...
But that's not the issue, operating system interfaces can be exposed via DLLs, those DLLs interfaces just must be guaranteed to be stable (like on Windows).
Tbh, I'm not sure why I can't simply tell the gcc linker some random old glibc version number from the late 1990s and have it check whether I'm using any functions that weren't yet available in that old version (erroring out if so). That would be the most frictionless solution, and surely it can't be too hard to annotate glibc functions in the system headers with the version in which each first appeared.
But if I get to Bring My Own Dependencies, then I know the exact versions of all my dependencies. That makes testing and development faster because I don't have to expend effort testing across many different possible platforms. And if development is just generally easier, then maybe it's easier to react expediently to security notices and release updates as necessary.
You would need to monitor all your dependencies (and their dependencies), compile new binaries for every supported platform each time there is an issue (which you will likely learn about late), notify all your users, and distribute the fixed binaries. I think this is far more effort than using dynamic libraries and compiling for a couple of Linux distributions. And I would be surprised if entities distributing statically linked binaries actually do this (properly).
That’s fine as a “second order” rebuttal, but you’re leaving out _third_ order effects which are where all the action is in terms of the unique horribleness of real estate rental.
The world is full of goods that share many of the nasty features that the real estate rental market has. For example, it’s not hard to find goods where:
- The value partially derives from the limited supply
- The supply is artificially limited by forces that the market cannot correct for (either because law prevents entry of new competitors, or because would-be competitors collude in a cartel that deliberately restricts it)
For instance, taxi medallions and diamonds meet these criteria.
What makes rental housing special is other qualities:
- The vast majority of a rental property’s value derives from its proximity to publicly funded resources which the seller did not create themselves. If your tax dollars pay for a new park, the value of that park is vacuumed up by the landlords near the park. (This is what it IS to be an economic rent… thus the name.)
- Demand at the low end is extremely inelastic. People have to live somewhere if their life is entangled with that city. Compare with diamonds or taxi medallions, which you can opt out of.
- In theory, most landlord-tenant relationships operate on a year-long cadence because it mixes flexibility with predictability. The renter doesn’t have to commit their life to staying in a particular city for multiple years just to please some landlord, and the landlord gets to re-auction the rental rights by re-setting the price once a year, keeping up with the going market rate. However, in practice, most renters end up wanting to stay more than one year, and are not mentally or logistically preparing to move. Thus, a substantial price increase is disruptive. You might be tempted to say that the real problem is that the renter went in blind without guarantees about what they were really getting signed up for, and thus a fix could be to secure much longer leases which schedule the rent increases up front. However, as the lease duration goes up, the chances go up that the renter experiences changes in life circumstance that make it impossible or intolerable to continue renting. Barring the creation of a society of debt prisoners, the landlord will inevitably end up enduring lease breaks. Because the switching cost is uniquely high, this creates a fundamental dilemma: people don’t want to move until they do, yet they need to be prepared to move frequently - unless they secure longer leases, which they can’t realistically promise to honor.
So yes, you have cartel behavior and supply distorted by out-of-band zoning restrictions that the market can’t correct, but those are par for the course. The real anger comes from the fact that a place to live isn’t really a “good” in the first place - everybody needs one, and while a roof over your head and good plumbing is worth _something_, the rent you’re paying is driven primarily by a segment of our society _preventing_ you from being able to live close to the public center unless you pay their troll toll. This is where the perceived injustice comes from. When you layer in the Gordian knot of lease duration, rent increase, and the high switching costs, that’s when people really start to hate you.
Right - but coming back to the original question, if I'm not mistaken, the explanation is that the blogpost is measuring information gained from an actual outcome, as opposed to _expected_ information gain. An example will help:
Say you're trying to guess the number on a 6-sided die that I've rolled. If I wanted to outright tell you the answer, that would be 2.58 bits of information I need to convey. But you're trying to guess it without me telling, so suppose you can ask a yes or no question about the outcome. The maximum _expected_ information gain is 1 bit. If you ask "was it 4 or greater?", that is an optimal question, because the expected information gain is min-maxed. That is, the minimum information you can gain is also the maximum: 1 bit. However, suppose you ask "was it a 5?". This is a bad question, because if the answer is no, there are still 5 numbers it could be. Plus, the likelihood of it being 'no' is high: 5/6. Despite these downsides, it is true that 1/6 of the time the answer WILL be yes, and you will gain all 2.58 bits of information in one go. But the downside case more than counteracts this and preserves the rules of information theory: the _expected_ information gain is still < 1 bit.
EDIT: D'oh, nevermind. Re-reading the post, it's definitely talking about >1 bit expectations of potential matchings. So I don't know!