Isn't Frozen something you do to a set or dictionary to say, "I'm not going to add any more values; please give me a version of this that's optimized for lookup only"?
Do people reading this post not understand that this is the output of a prompt like 'analyze <event> with <perspective> arriving at <conclusion>'? Tighten up your epistemology if you're arguing with an author who isn't there.
The very fact that people are arguing with a non-existent author signals that whatever generated the content did a good enough job to fool them today. Tomorrow it will do a good enough job to fool you. I think the more important question is what this means for what we actually value, and what we should invest in to stay anchored in it.
This has been happening a lot recently, where an article immediately sets off all my AI alarm bells but most people seem to be happily engaging with it. I’m worried we’re headed for a dystopian future where all communication is outsourced to the slop machine. I hope instead there is a societal shift to better recognize it and stigmatize it.
I've noticed some of this in recent months. I've also noticed people editing out some of the popular tells, like replacing em-dashes with commas. At least I think that's what's happening, because there's odd formatting or errors in places where an LLM would probably have used a dash.
But at this point I'm not confident that I'm catching most LLM-generated text, nor that I'm avoiding false positives.
My feeling when I found this blog was "so I'm NOT the only one!". It's the painful truth about staff+ engineering that I've also experienced, but haven't felt safe to talk about.
You're not wrong that there's something cynical or nihilistic about it. The core thesis is "do what the company wants, even if it's not what they should want". That idea may be unpalatable, but getting ground up in the corporate gears is worse.
My personal feeling is that there's a way to acknowledge the underlying issues Sean writes about while being more optimistic and offering better alternatives. That's something I feel I should start writing about more.
Recent experience report: I updated four of the five microservices my team owns to .NET 10 over the past two weeks. All were previously on .NET 8 or 9. The update was smooth: for the .NET 9 services, I only had to update our base container images and the csproj target frameworks. For the .NET 8 services, I also had to update the Mvc.Testing reference in their integration tests.
It's hard for me to imagine a version increment being much easier than this.
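For reference, the csproj side of that kind of upgrade is typically a one-line target framework bump. A minimal sketch, with the SDK and surrounding file illustrative rather than copied from the services above:

```xml
<!-- Illustrative project file; only the TargetFramework line is the upgrade itself. -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <!-- Previously net8.0 or net9.0; .NET 10 uses the net10.0 TFM. -->
    <TargetFramework>net10.0</TargetFramework>
  </PropertyGroup>
</Project>
```

The base container image change is the analogous one-line edit in the Dockerfile, pointing the runtime image at the 10.0 tag.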
We instrument JWT libraries directly (jsonwebtoken, jwks-rsa). Both `jwt.sign()` and `jwt.verify()` are captured during recording and replayed with the original results. During replay, you get back the recorded verification result. So if the token was valid during recording, it stays valid during replay, even if it would be expired "now". The test runs in the temporal context of when it was recorded.
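For anyone wondering what that instrumentation could look like mechanically, here is a minimal sketch of the record/replay pattern, assuming a simple monkey-patch of jsonwebtoken's synchronous verify() and an in-memory store. The function and store names are made up for illustration; this is not the actual implementation being described.

```typescript
// Illustrative record/replay shim around jsonwebtoken's verify().
// A sketch of the general pattern, not the actual tooling described above.
import * as jwt from "jsonwebtoken";

type Mode = "record" | "replay";
type Recording = { ok: boolean; payload?: string | jwt.JwtPayload; error?: string };

// Hypothetical in-memory store keyed by token; a real tool would persist this alongside the test.
const recordings = new Map<string, Recording>();

export function instrumentJwtVerify(mode: Mode): void {
  const originalVerify = jwt.verify;

  // Patch the synchronous overload of verify(); sign() can be wrapped the same way.
  (jwt as any).verify = (
    token: string,
    key: jwt.Secret,
    options?: jwt.VerifyOptions & { complete?: false }
  ) => {
    if (mode === "replay") {
      const rec = recordings.get(token);
      if (!rec) throw new Error("no recording for this token");
      if (!rec.ok) throw new Error(rec.error);
      // A token that was valid at record time stays valid, regardless of the current clock.
      return rec.payload;
    }

    // Record mode: call the real library and remember the outcome, success or failure.
    try {
      const payload = originalVerify(token, key, options);
      recordings.set(token, { ok: true, payload });
      return payload;
    } catch (err) {
      recordings.set(token, { ok: false, error: (err as Error).message });
      throw err;
    }
  };
}
```

The same pattern extends to jwt.sign() and the jwks-rsa key lookups, with each recorded result keyed to its original call.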