Judging by your post history, you might be a software engineer. As such, you might benefit from a few specific tactics I haven't seen in other comments yet:
1. Being an interviewer.
2. Being a mentor. (onboarding mentor, for example.)
3. Join your workplace's "Donut" program, if it has one. (I don't know what these are normally called, but at my job it's a thing where you get matched with a new random coworker every two weeks for a half-hour chat.)
The common thread of these tactics is you have the opportunity to meet new people in a mostly-familiar workplace context where you probably have more confidence and a greater sense of belonging. The more supportive environment lets you learn conversational skills that you can then deploy in more unfamiliar contexts like meetups or gyms.
Not only that, but for (1) and (2) you -- hopefully -- get training/shadowing opportunities before being thrown in the deep end.
Between those three things, I've had 1-1 (or 1-1-1, for interviews) conversations with 100+ different people over ~5 years. In retrospect, this has considerably reduced my social anxiety, although that had never been my explicit intent (I was just trying to help / learn / etc.)
As with all things in life, YMMV. Obviously these tactics are workplace-dependent. And if the idea of mentoring or interviewing puts you in the "panic zone" (brain shuts down), you might be advised to try some intermediate steps first.
I'm not disagreeing with your point, and I'm sure you already know this -- I just wanted to point out (for the benefit of people who don't have other options) that it is possible to build "webhooks" in such a way that you're confident nothing is dropped and nothing goes (permanently) out of sync. (At least, AFAIK -- correct me if this sounds wrong!)
Conceptually, the important thing is each stage waits to "ACK" the message until it's durably persisted. And when the message is sent to the next stage, the previous stage _waits for an ACK_ before assuming the handoff was successful.
In the case that your application code is down, the other party should detect that ("Oh, my webhook request returned a 502") and handle it appropriately -- e.g. by pausing their webhook queue and retrying the message until it succeeds, or putting it on a dead-letter queue, etc. Your app will be "out of sync" until it comes back online and the retries succeed, but it will eventually end up "in sync."
Of course, the issue with this approach is most webhook providers... don't do that (IME). It seems like webhooks are often viewed as a "best-effort" thing, where they send the HTTP request and if it doesn't work, then whatever. I'd be inclined to agree that kind of "throw it over the fence" webhook is not great and risks permanent desync. But there are situations where an async messaging flow is the right decision and believe it or not, it can work! :)
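The "wait for an ACK before forgetting the message" idea can be sketched like this (all names are hypothetical; a real sender would keep the message in a durable queue rather than in memory, and treat any 2xx response as the ACK):

```python
import time

def deliver_with_retries(send, message, base_backoff=1.0, max_backoff=60.0):
    """Keep retrying until the next stage ACKs the message.

    `send` is a hypothetical function that returns True only once the
    receiver has durably persisted the message (e.g. returned HTTP 200).
    Until then, the sender must not assume the handoff succeeded.
    """
    backoff = base_backoff
    while True:
        if send(message):
            return  # ACK received -- only now is it safe to forget the message
        time.sleep(backoff)
        backoff = min(backoff * 2, max_backoff)  # exponential backoff

# Simulate a receiver that is down ("502") for the first two attempts.
attempts = []
def flaky_send(msg):
    attempts.append(msg)
    return len(attempts) >= 3

deliver_with_retries(flaky_send, {"event": "order.paid"}, base_backoff=0.01)
```

The receiver is "out of sync" for the duration of the retries, but eventually converges, which is exactly the property plain fire-and-forget webhooks lack.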
This misses the problem explained in the article, which is that there are scenarios where events are "acked" but things still go wrong because of bugs.
For example, you rolled out code on the receiver side that did the wrong thing with each message. Now there's no way to replay the old webhooks events in order to reinstate the right behaviour; there's no way to ask the producer to send them again.
The only way around this is to store a record of every received message on the receiver side, too, which the article author thinks is an unnecessary burden compared to polling.
Personally, I think push is an antipattern in situations where data needs to be kept in sync. The state about where the consumer is in the stream should be kept at the consumer side precisely so it can go back and forth.
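A sketch of that consumer-held cursor (endpoint and names invented for illustration): the consumer remembers the ID of the last event it processed and asks the producer for everything after it, so it can always rewind and replay.

```python
# Producer side: an append-only event log, queryable by cursor.
EVENT_LOG = [{"id": i, "data": f"event-{i}"} for i in range(1, 6)]

def fetch_events(after_id, limit=100):
    """Hypothetical polling endpoint: return events with id > after_id."""
    return [e for e in EVENT_LOG if e["id"] > after_id][:limit]

# Consumer side: the cursor lives here, so the consumer -- not the
# producer -- decides where in the stream it is.
cursor = 0
processed = []
while True:
    batch = fetch_events(cursor)
    if not batch:
        break
    for event in batch:
        processed.append(event["data"])  # apply the event
        cursor = event["id"]             # advance only after processing succeeds
```

If a bad deploy mangles some events, recovery is just resetting `cursor` to an earlier ID and re-running the loop.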
If you want to be 100% sure that you get all the webhooks, the sender could implement an incrementing "webhook ID". If the receiver knows the last webhook ID was 53 and the sender sends one for 55, you can tell one has been dropped. There are some other concerns around that like if 54 has been sent but they arrived out of order, or if they arrive almost simultaneously. Nothing that isn't solvable afaict though.
Of course, then you need a way for the receiver to retrigger or view the webhook if one gets missed, which starts to look like you have to have a polling endpoint anyways, though.
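A toy sketch of the incrementing-ID scheme, including the out-of-order case (a real receiver would also time out on a gap and fall back to that polling endpoint):

```python
def make_receiver():
    """Toy receiver that detects dropped webhooks via sequential IDs.

    Assumes the sender attaches an incrementing integer ID to each
    webhook. Out-of-order arrivals are buffered until the gap fills.
    """
    state = {"next_id": 1, "buffer": {}, "delivered": []}

    def receive(webhook_id, payload):
        state["buffer"][webhook_id] = payload
        # Drain the buffer in order for as long as there is no gap.
        while state["next_id"] in state["buffer"]:
            state["delivered"].append(state["buffer"].pop(state["next_id"]))
            state["next_id"] += 1
        return state

    return receive

receive = make_receiver()
receive(1, "a")
receive(3, "c")          # 2 is missing, so "c" is held back in the buffer
state = receive(2, "b")  # gap filled: "b" and "c" both drain, in order
```

Anything still sitting in `buffer` after a timeout is your signal that a webhook was dropped and needs to be re-fetched.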
We have a system that pushes loads of messages (thousands a minute), and one consumer insists on having the messages pushed to their HTTP backend.
Their system is down every once in a while, sometimes for quite some time.
We're using an async queueing solution, but you can't keep those messages forever.
We sometimes have millions of messages for them sitting in their queue, which takes up space...
If all of our consumers had those problems, we'd have to buy loads of storage.
So we simply drop messages older than X, and provide an endpoint they can call to retrieve the 'latest state of things'.
That way, when they come back from a failure, they simply fetch the latest state and then continue with updates from our end.
It's far from perfect, but it works really well.
I know the goal for most systems is just to be 'up to date', not to get the entire history.
So in most cases you don't need to stash all the messages; you just need to be able to retrieve the latest state of stuff.
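A toy sketch of that pattern (all names invented): queued updates expire after a TTL, and a consumer that comes back from an outage falls back to a latest-state snapshot instead of replaying history.

```python
import time

TTL_SECONDS = 0.05  # tiny TTL so the demo runs fast; real TTLs are hours/days
STATE = {}          # producer's source of truth: key -> latest value
QUEUE = []          # pending (timestamp, key, value) updates for the consumer

def publish(key, value):
    """Record the new truth, and queue an update for push delivery."""
    STATE[key] = value
    QUEUE.append((time.time(), key, value))

def expire_old():
    """Drop queued messages older than the TTL instead of storing forever."""
    cutoff = time.time() - TTL_SECONDS
    QUEUE[:] = [m for m in QUEUE if m[0] >= cutoff]

def latest_state():
    """The 'latest state of things' endpoint consumers call after an outage."""
    return dict(STATE)

publish("order-1", "paid")
time.sleep(0.06)               # consumer is down; this message expires meanwhile
publish("order-2", "shipped")
expire_old()
# Consumer recovers: the queue no longer has order-1, but the snapshot does.
snapshot = latest_state()
```

The trade-off is exactly the one described above: you lose the event history, but the consumer still converges to the correct current state.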
> "Of course, the issue with this approach is most webhook providers... don't do that"
Embedded systems don't do that for webhooks because they can't (very little RAM or non-volatile storage) but customers clamor for webhooks anyway because it's what their web developers know how to use. So inevitably they're going to lose data but they're only getting what they asked for.
Yep -- under the hood, "immutability" in Clojure is implemented with data structures that provide pretty good performance by sharing sections of their immutable structure with each other.
For example, if you have a vector of 100 items and you "mutate" it by adding an item (actually creating a new vector), the language doesn't copy all 100 items into a new 101-length vector. Instead, it takes advantage of immutability to "share structure" between the two vectors: the new vector reuses almost all of the old vector's internals, and only a small amount of new structure is allocated (the new item plus links back into the old vector -- in Clojure's case the vectors are wide trees with 32-way branching, so an append copies just one short path of nodes). The same idea is used to share structure in associative data structures like hash-maps.
I'm no expert on this, so my explanation is pretty anemic and probably somewhat wrong. If you're curious, the book "Purely Functional Data Structures" [0] covers these concepts in concrete detail.
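The "link back to the old structure" idea can be sketched as a toy persistent list in Python (Clojure's real vectors are trees, but the sharing principle is the same):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass(frozen=True)
class Node:
    """One cell of an immutable linked list."""
    value: Any
    rest: Optional["Node"] = None

def conj(lst: Optional[Node], value: Any) -> Node:
    """'Add' an item by allocating one node that links to the old list."""
    return Node(value, lst)

def to_list(node: Optional[Node]) -> list:
    """Walk the links and return a plain list, oldest element first."""
    out = []
    while node is not None:
        out.append(node.value)
        node = node.rest
    return out[::-1]

a = conj(conj(None, 1), 2)  # the "vector" [1, 2]
b = conj(a, 3)              # the "vector" [1, 2, 3] -- one new node allocated
shared = b.rest is a        # True: b reuses a's entire structure unchanged
```

Because `a` can never be mutated, it is safe for `b` to point into it -- that assumption of immutability is what makes the sharing sound.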
w.r.t. long-form writing, two suggestions -- depending on whether you prefer keyboard or handwriting:
- Keyboard: With a phone that supports USB host mode, you can plug your keyboard of choice into the USB port and use that for input. With a small mechanical keyboard like [0], you can have a great mobile typing experience.
- Handwriting: E-ink tablets like the reMarkable 2 [1] are quite cool if you want a paper-like writing experience.
I've been involved in rewrite-from-scratch projects that worked, because the scope of the system being rewritten was relatively small. So I've concluded that "rewrite from scratch" projects get a bad rap because they tend, almost by definition, to be very big, nebulously scoped projects. It has little to do with the fact that they're rewriting a codebase, and everything to do with the size and scope of the project.
Either way, though, we're led to similar conclusions as the article regarding refactoring. If a codebase/system is too large to rewrite in a single project, you have to break the work into incremental steps. Theoretically, the only difference between "refactor" and "rewrite" is the scope of the affected code.
Most writes we do as engineers are rewrites. The only difference is whether we're rewriting a statement, a function, a module or an entire system. You are correct. It is the size of the rewrite that matters.
As someone who's experienced something very similar, I've built a theory around why it happens:
1. The floating from task to task is almost always avoidant behavior. The experience is usually accompanied by a "slippery" thought -- something you don't want to think about, something that makes you feel sick or causes you stress, and so your thoughts are trying to latch onto something to distract them.
2. One solution is to eliminate distractions, rest calmly in yourself, and practice mindful attention. Meditate. Engage rationally with the slippery thought. Or just observe it, practice letting it happen without interrupting it. Teach yourself: that thought is not to be feared.
3. The long-term solution is to resolve the concerns that your slippery thoughts are related to. For example, In the case of the person you replied to, they mentioned this phrase: "thoughts that center around a perceived lack of meaning in my life." In that case, there might be actual life changes that you can make to introduce a greater sense of meaning. On the other hand, if the underlying thought is "I have a presentation tomorrow," that's a case where you may want to look at reducing your performance anxiety in general. And so forth.
Of course every case is different, but when this happens to you next I'd strongly recommend looking inward to see if there are any of those slippery, "hot potato" thoughts that you're subconsciously trying to avoid. If so -- I suggest addressing that. If not -- well, maybe you've got a dopamine addiction :)
Not a theory per se, but my "lightbulb moment" with jq came when I thought about it like this:
jq is basically a templating language, like Jsonnet or Jinja2. What jq calls a "filter" can also be called a template for the output format.
Like any template, a jq filter will have the same structure as the desired output, but may also include dynamically calculated (interpolated) data, which can be a selection from the input data.
So, at a high level, write your filter to look like your output, with hardcoded data. Then, replace the "dynamic" parts of the output data with selectors over the input.
Don't worry about any of the other features (e.g. conditionals, variables) until you need them to write your "selectors."
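To make that concrete (a toy example; the input shape here is invented for illustration):

```jq
# Input:  {"user": {"name": "Ada"}, "items": [{"id": 1}, {"id": 2}]}
# Step 1: write the filter to look like the output, with hardcoded data:
#   {who: "Ada", ids: [1, 2]}
# Step 2: replace the hardcoded parts with selectors over the input:
{who: .user.name, ids: [.items[].id]}
# Output: {"who": "Ada", "ids": [1, 2]}
```

The filter has the same shape as the output; only the leaves changed from literals to selectors.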
I recently got a GergoPlex and it's so fun to be able to actually reprogram the keyboard itself.
In fact, I got so addicted to keybindings that I tweaked the chording engine to make chords QMK-layer-dependent. The ability to do "modal chords" is just great.
Anyway, thank you for making these things -- I may never have heard of QMK if it wasn't for your boards!
Sure! I don't have the GergoPlex keymap on GitHub yet (still a WIP, that one...) but I used the modal chords concept in my Faunchpad keymap as well. Here's the .def file with "modal chords" in it (the first parameter is the layer, aka mode; the rest are the usual ones.)
The only change is that all the chording macros (PRES, KEYS, etc.) accept a QMK layer number as the first parameter, and will only activate when that layer is active. The end result is similar to the "sticky bits" functionality, but without the side-effect of masking the signal of those bits.
This is my first foray into embedded programming stuff, so I can't promise I didn't do anything dumb in there. It seems to work for me, though!
I think of this as an optimization kind of problem. The word "efficiency" itself is only meaningful in context of what's being made more efficient.
A system could be "more efficient at becoming stable," for example.
But if by "efficiency" we mean "the time-cost of a set of actions" (i.e. the most efficient path is the one that takes the least time), we quickly run into the conflict between maximizing usage of time and absorbing unexpected work -- which leads to the anti-stability you mentioned.
The way I think about it is that a 100% time-efficient process has zero time-flexibility. If you want to gain time-flexibility (e.g. the ability to pivot, or to work on different things too, or to introduce error bars to your calculations), you lose time-efficiency.
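Queueing theory makes the same trade-off concrete: in the simple M/M/1 model, the average time a task spends in the system is W = 1/(mu - lambda), which explodes as utilization lambda/mu approaches 100%. A quick illustration (the rates are arbitrary, and a team is of course not literally an M/M/1 queue):

```python
def avg_time_in_system(arrival_rate, service_rate):
    """M/M/1 queue: mean time a task spends waiting plus being served.

    W = 1 / (mu - lambda); grows without bound as utilization -> 100%.
    """
    assert arrival_rate < service_rate, "an overloaded queue never stabilizes"
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # tasks per hour the process can handle
waits = {}
for utilization in (0.5, 0.9, 0.99):
    waits[utilization] = avg_time_in_system(utilization * service_rate,
                                            service_rate)
# At 50% utilization a task takes 0.2 h end-to-end; at 99% it takes 10 h.
# Squeezing out the last bits of time-efficiency is exactly what destroys
# the flexibility to absorb unexpected work.
```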
Interesting article. Are there any wireheading proponents in this crowd? I'm curious to understand the phenomenon.
The following quote in the article illustrates what they describe as "Wireheading done right":
"Their primary state of consciousness cycles over a period of 24 hours. Here is their routine: They wake up and experience intense zest for life and work at full capacity making others happy and having fun. Then they go crazy creative in the afternoon, usually spending that time alone or with friends, and explore (and share) strange but always awesome psychedelic-like states of consciousness. Finally, at night they just relax to the max (the healthy and genetically encoded phenomenological equivalent of shooting heroin)."
I can see the appeal of this type of existence. But taking a step back, I question the value in experiencing these states unless they correspond to real events in the world.
For example, focus on "They wake up and experience intense zest for life and work at full capacity making others happy and having fun." This is a perfectly fine sentiment to have. But things should feel better because they're better things to do. Wouldn't a wireheader working in a cheap toxic factory be just as happy as one working in an expensive, safe factory? How might that ultimately impact the factories we design?
With utilitarianism, we attempt to maximize pleasure (very roughly speaking). But part of that is because pleasure has been tied to good events by our built-in wiring. If we have the ability to make any event pleasurable, it feels like we need a new ethical system that employs a full gradient of emotions, including low-valence ones, to appropriately reflect the difference between the desired and the actual reality, and avoid a dystopia where everyone is happy. How does wireheading take this into account?
"Are there any wireheading proponents in this crowd?"
I have never heard the term and this is my first introduction to the concept, at least framed in this manner ...
But ... aren't we all, already, "wireheads" ? Our tools and heuristics might be a little blunt, or ineffectual, but the quote you provide:
"Their primary state of consciousness cycles over a period of 24 hours. Here is their routine: They wake up and experience intense zest for life and work at full capacity making others happy and having fun. Then they go crazy creative in the afternoon, usually spending that time alone or with friends, and explore (and share) strange but always awesome psychedelic-like states of consciousness. Finally, at night they just relax to the max (the healthy and genetically encoded phenomenological equivalent of shooting heroin)."
... sounds a lot like the better days that I have - it's just that I accomplish it with Caffeine, meditation, intense exercise, good sleep hygiene and (sometimes) alcohol.
While I haven't formally evaluated my day-to-day life against a happiness-maximization metric, I did not come to these tools accidentally, or randomly - I've slowly tailored them, and my own habits, to achieve maximum happiness on a specific time horizon ...
It's an interesting thing, because I also take caffeine, and exercise, and meditate. Maybe it's just a matter of degrees, a sliding scale. But "too much of a good thing" isn't unheard of, and I have a strong suspicion that "pleasure control" is one of those things that's tolerable in small doses, but ultimately isn't conducive to survival or satisfaction, especially taken to the extreme of avoiding negative emotions entirely.
Good points. FWIW, I took the article to assume a sort of post-human lifestyle/environment in which physical needs (food, safety, etc.) are met. E.g. "In principle the whole economy may eventually be entirely based on exploring the state-space of consciousness and trading information about the most valuable contents discovered doing so." I.e., all problems are solved except hedonism.
That makes sense, thanks. I was definitely thinking in a nearer-term context.
Even in that distant future though, wireheading would be a practice that fundamentally damages the emotional feedback loops that led to that type of society being formed in the first place. (i.e. the feedback loops that cause people in a society to reject agents that want to change or destroy it.)
Without those feedback loops, a perfect utopia would become an unstable equilibrium, because nobody has any reason to prefer that society over any other society. Thus, you could argue that wireheading is long-term incompatible with a perfect utopia.
There is an "out", which is to have only some of the population wirehead, and have society be steered by the individuals who don't wirehead and can therefore still make ethical decisions. Alternatively, you could emulate Iain M. Banks's Culture and remove decision-making power from human hands entirely via automation. But really, even in that world, I'd rather be one of the un-wireheaded who retained their ethical agency, even if it came with suffering. At least, I think I would... although I'm not sure exactly why.
> With utilitarianism, we attempt to maximize pleasure (very roughly speaking). But part of that is because pleasure has been tied to good events by our built-in wiring. If we have the ability to make any event pleasurable, it feels like we need a new ethical system that employs a full gradient of emotions, including low-valence ones, to appropriately reflect the difference between the desired and the actual reality, and avoid a dystopia where everyone is happy. How does wireheading take this into account?
TFA is entirely about addressing this. Wireheading is a common argument against utilitarianism. Furthermore, if everyone is happy, is it a dystopia?
If someone steals my wallet, and it doesn't make me unhappy, then am I really hurt? Money that can't increase my happiness isn't useful.
A dystopia where everyone is happy is a bit of an oxymoron. I can imagine this "dystopia" being unsustainable though. Like everyone is happy but forgets to eat or has no motivation to reproduce or everyone is happy but they are quickly killing the planet (still better than our current state, lol).
But I think the article deals with the unsustainability.