I'm just slightly younger than you, but have the exact same sentiment. Hell, maybe even more so, because what I realized is that "writing code to implement interesting ideas" isn't really what I enjoy - it's coming up with the interesting ideas and experimenting with them. I couldn't care less about writing the code - I only did it because I had to...if I wanted to see my idea come to life.
AI has also been a really good brainstorming partner - especially if you prompt it to disable sycophancy. It will tell you straight up when you are over-engineering something.
It's also wonderful at debugging.
So I just talk to my computer, brainstorm architectures and approaches, create a spec, then let it implement it. If it was a bad idea, we iterate. The iteration loop is so fast that it doesn't matter.
Ever regret a design choice you'd normally just live with, because so much code would have to change? Not with agentic coding tools - they are great at implementing changes throughout the entire codebase.
And it's so easy to branch out into technologies you're not an expert in, and still be really effective while you gain that expertise.
I honestly couldn't be happier than I am right now. And the tools get better every week, sometimes a couple times a week.
Even with human junior devs, ideally you'd maintain some documentation about common mistakes/gotchas so that when you onboard new people to the team they can read that instead of you having to hold their hand manually.
You can do the same thing for LLMs by keeping a file with those details available and included in their context.
You can even set up evaluation loops so that entries can be made by other agents.
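The "keep a file with those details in their context" idea can be sketched very simply. This is just an illustration, not any particular vendor's API: the file name, message format, and function are all made up here.

```python
from pathlib import Path

def build_messages(task, gotchas_path="GOTCHAS.md"):
    # Prepend the team's running list of known mistakes/gotchas to
    # every request, so the model sees it alongside the actual task.
    system = "You are a coding agent for this repo.\n"
    p = Path(gotchas_path)
    if p.exists():
        system += ("\nKnown gotchas -- do not repeat these mistakes:\n"
                   + p.read_text())
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]
```

The evaluation-loop idea then just means other agents append new entries to `GOTCHAS.md` when they catch a recurring mistake.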
The problem with LLMs is they have a de facto recency bias, so they only really remember the last few things you said. I've yet to see any real LLM reliably follow a proper list of rules.
> “No computer, no AI can replace a human touch,” said Amy Grewal, a registered nurse.
I don't believe this to be true in the long run. We will almost certainly have Westworld-like robots in the future. Researchers have also found that, on average, ChatGPT has a better bedside manner than doctors.
Step 3.5: Doctor rewrites the analysis to match whatever the AI spits out because:
* Too busy (assumptions were made that patient load could increase with ML support)
* Too scared of being sued even if the doctor disagrees
* Too scared of being fired because the doctor keeps overriding the "cost effective" ML verdicts
We've seen this play out over and over in the medical field. I've watched my Primary Care physician go from angst over patient outcomes to angst over statistical outcomes for his department. It took 15 years, but the system got him.
True, and if they're leading with their fucking feels when making major life decisions, their arrested emotional development is liable to land them in deeply regrettable circumstances.
... even if the tech materializes, is Westworld the thing you want to reference if you're arguing that AI replacing humans is a good thing? I think all incarnations of that book/film/show involve robots that (a) don't work as intended, (b) kill people, and (c) make humans behave badly.
CPAP has been life changing for me. For my entire life I struggled getting up in the morning, and I never felt "refreshed". Getting up before 9-10AM was super difficult. I also was often in a sort of fugue state when I would first wake up, I'd often have no memory of any interactions, and apparently was often mean.
Now I can easily get up, even at hours that were previously unthinkable, and more often than not feel fully recharged.
I have no doubt that played a part in me being unhealthy - though by no means was it the sole reason.
As an aside, also getting on a dose of Semaglutide has been similarly life changing. The damn near elimination of "food noise" has been incredible.
I know there are a number of folks of the opinion that it's somehow cheating. But for me, I am left wondering, "Is this just how normal people feel?"
It always bothers me when someone says it’s "cheating" to use a GLP-1 agonist.
It helps but the person losing weight still has to clean up their diet and start an exercise routine. There are still major changes they need to make to become successful. GLP-1 agonists help a lot of people make better decisions due to how they fight hunger. Less hunger means less chances to make bad choices when eating, and weight loss progress is a virtuous mental cycle where you keep doing what you’re doing because you see results.
None of that is cheating. There are still major changes one must make. Taking Ozempic but continuing to eat a trash-tier diet will yield little to no progress.
Definitely agree, but man, the sheer number of folks who leave just horrible comments on videos/posts people make about how they lost weight on a GLP-1 agonist is so disheartening.
In addition to saying that it's cheating, they will actively wish harm on the person by saying "just wait till you get X," where X is some side effect (real or imagined). Or just "well, once you stop taking it you'll just get fat again."
The people who say "you'll get fat again once you stop taking it" also baffle me.
The most difficult part of losing weight for me personally is changing my routine and habits. Setting myself up with a kitchen that's ready to cook. Figuring out what kind of meals I'm happy to have on a weeknight that don't require a lot of cooking. Preparing parts of meals over an hour or two on the weekend to complete some of the more time consuming parts when I'm not so constrained on time. Learning to deal with the urge I (used to) have to "eat my feelings".
All of those things don't just magically go away when you stop taking a GLP-1 agonist. Losing a lot of weight isn't just about self-control like if you're just trying to lose five pounds to make your pants more comfortable for going to your 20th high school reunion; you have to rewire your habits and mind and make a life long commitment to those changes.
Rewiring your habits and rewiring your brain are things that persist if your intention going into the weight loss was to change your habits instead of just moving the number on the scale. If you are looking to do it only temporarily and are unwilling to lock in those behavioral changes then you're likely going to fail, and that has much more to do with mindset than medication.
Now, we need price to go down and availability to go up, but people who think all the weight bounces back as a sort of "gotcha" are silly. It's possible, but is a testament to the difficulty of dealing with obesity -- not some sort of gotcha of the drug.
I personally think that Musk is kind of a dipshit when it comes to his political views, that he is a bit of a charlatan (FSD this year, I swear), that Tesla jumped the shark tank about 2-3 years ago, and I laugh every time I see more bad news about X/Twitter...
BUT I am really rooting for SpaceX and Starlink. Honestly I hope the shiny toy that is X/Twitter keeps his attention for a while and he leaves those orgs to run as they have been.
It's pretty easy to learn the rules. Once you've done that, you can watch a game and know roughly what is happening. But it's still a hard game to master.
One thing I found interesting: if you go with PEWMA and create a scenario where the cluster is stressed, then add one server, it pummels the shit out of the new server and you get a brief surge in failed requests.
Not sure if that is a real world issue, or just with the simulation...
This is very likely a bug in the simulation. My simplified implementation of PEWMA prioritises servers that have had no traffic, in order to send at least 1 request to all servers. There will be a window, until this new server serves its first request, where it is considered the highest priority server.
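If it helps to see the mechanism, here's a toy version of that failure mode. This is not the article's actual code; the names, scoring rule, and smoothing factor are all made up for illustration:

```python
class Server:
    def __init__(self, name, ewma=0.0, samples=0):
        self.name, self.ewma, self.samples = name, ewma, samples

    def record(self, latency_ms, alpha=0.2):
        # Standard exponentially weighted moving average update.
        self.ewma = latency_ms if not self.samples else (
            alpha * latency_ms + (1 - alpha) * self.ewma)
        self.samples += 1

def pick_server(servers):
    # Lowest smoothed latency wins. A server with no samples scores
    # 0.0 ("unknown = best") so it gets at least one request -- but
    # that also means it wins EVERY pick until its first response lands.
    return min(servers, key=lambda s: s.ewma if s.samples else 0.0)

warm = [Server(f"s{i}", ewma=50.0, samples=1000) for i in range(3)]
fresh = Server("new")
cluster = warm + [fresh]

# During the window before "new" reports any latency, it absorbs
# every request the balancer hands out:
stampede = [pick_server(cluster).name for _ in range(5)]  # all "new"

fresh.record(50.0)  # first response arrives; the window closes
```

Under load, everything the balancer sends during that window piles onto the cold server at once, which is exactly the surge in failed requests described above.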
I doubt very much that this would be part of any real-world implementation.
I'm not familiar with PEWMA, but real load balancers sometimes have this problem. Either because of dynamic weighting that slams the new server which shows zero load, or because the new server needs to do some sort of cache warming, whether that's disk or code or jit or connection establishment or ???, a lot of times early requests are handled slowly.
Most load balancers should have a way to do some sort of slow start for newly added or newly healthy servers. That could be an age factor to weighting, or an age factor on max connections or ???. Some older load balancers are just not great at this, so you develop experienced based rules like 'always use round robin, leastconn will kill your servers with lumpy loads'. All that said, and a repeated theme across my comments in this thread, the more sophisticated your load balancing is, the harder your load balancer needs to work, and the sooner you need to figure out how to load balance your load balancers.
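A minimal sketch of that age-factor-on-weighting idea. The linear shape and 30-second ramp here are arbitrary assumptions, not any specific load balancer's behavior (HAProxy's `slowstart` server option does something in this spirit):

```python
import time

def effective_weight(base_weight, added_at, ramp_seconds=30.0, now=None):
    # Linearly ramp a server's weight from 0 up to its configured
    # weight over ramp_seconds after it is added (or becomes healthy
    # again), so a fresh node isn't slammed with full traffic at once.
    now = time.monotonic() if now is None else now
    age = max(0.0, now - added_at)
    return base_weight * min(1.0, age / ramp_seconds)
```

The same ramp could be applied to a max-connections cap instead of the weight; either way the point is that "newly healthy" and "fully trusted" are treated as different states.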
It should happen in the real world as well, at least that's what I've been told when I started my first job as a system admin.
The reason people cited to me back then was that the balancer usually isn't particularly smart: it just sees a free node, so every new request gets routed to it. The errors (mostly timeouts) only show up once those requests actually start getting processed.
Normally, a node gets a steady stream of requests over time, so the load is constant (generally speaking, a request requires the most resources at the same relative point in its lifecycle). When all the requests are fresh, they all hit the same load bottleneck at the same time, causing all the timeouts.
The answer is to both aggressively scale horizontally and then quickly decommission until you're back to baseline.
Or just accept the failed requests
It's been over 10 years though; it might've been improved since.
I don't know anything about this subject, but my first thought (which may be wrong) would be to just set the weight of the new server to be the same as one of the other servers that are receiving messages (perhaps one of the lower ranks). In that way, it would not be overloaded so easily and adjust its ranking after a while
I guess my explanation was lacking then, as that wouldn't help. Reducing the weight below the old nodes' might work, but it would also extend how long you're overloaded, which would also cause requests to fail.