> supervised by a human who occasionally knew what he was doing.
Seems in jest, but I could be wrong. If it were omitted, or flagged as actual sarcasm, I would feel a lot better about the project overall. As long as you're auditing the LLM's outputs and doing a decent code review, I think it's reasonable to trust this tool during incidents.
I’ll admit I did go straight to the end of the readme to look for this exact statement. I appreciate they chose to disclose.
Nobody who was writing code before LLMs existed "needs" an LLM, but they can still be handy. Procfs parsing trivialities are the kind of thing LLMs are good at, although apparently it still takes a human to say "why not using an existing library that solves this, like https://pkg.go.dev/github.com/prometheus/procfs"
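For context, the sort of hand-rolled procfs parsing being discussed is only a few lines. A minimal sketch (assuming a Linux-style /proc mount; the `<pid>/comm` path is standard procfs):

```python
import os

def list_processes(proc="/proc"):
    """Hand-rolled procfs walk: map PID -> process name via /proc/<pid>/comm."""
    procs = {}
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue  # skip non-process entries like 'meminfo' or 'cpuinfo'
        try:
            with open(os.path.join(proc, entry, "comm")) as f:
                procs[int(entry)] = f.read().strip()
        except OSError:
            continue  # process may have exited between listdir() and open()
    return procs

print(len(list_processes()), "processes")
```

This is exactly the kind of triviality an existing library like prometheus/procfs already covers, along with the stat/status/fd handling and the edge cases a quick sketch like this one misses.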
Sometimes LLMs will give a "why not..." or just mention something related; that's how I found out about https://recoll.org/ and https://www.ventoy.net/ But people should probably prompt them explicitly to suggest alternatives more often, before diving in to produce something new...
Neither do you need an IDE, syntax highlighting, or third-party libraries, yet you use all of them.
There's nothing wrong with a software engineer using LLMs as an additional tool in their toolbox. The problem arises when people stop doing software engineering because they believe the LLM is doing the engineering for them.
Every IDE I've used just worked out of the box, be it Visual Studio, Eclipse, or anything using the language server protocol.
Having things like method auto-completion, go-to-definition, and symbol renaming is a net productivity gain from the minute you start using them, and I couldn't imagine this being a controversial take in 2025…
> I don't know what “tarpit” you're talking about.
Really? You don't know software developers that would rather futz around with editor configs and tooling and libraries and etc, etc, all day every day instead of actually shipping the boring code?
I'd not trust any app that parses /proc to obtain process information (for reasons [0]), especially if the machine has been compromised (unless by "incident" the author means something else):
I’m struggling with the utility of this logic. The argument seems to be "because malware can intercept /proc output, any tool relying on it is inherently unreliable."
While that’s theoretically true in a security context, it feels like a 'perfect is the enemy of the good' situation. Unless the author is discussing high-stakes incident response on a compromised system, discarding /proc-based tools for debugging and troubleshooting seems like throwing the baby out with the bathwater. If your environment is so compromised that /proc is lying to you, you've likely moved past standard tooling anyway.
Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcome rather than process (though hopefully they cared a bit about process/architecture too).
I was looking into LLM prompt and context efficacy and found this article. Having one language model snitch on the other might be effective?
I think that the practical uses of LLMs will be in more controlled environments and that on-device NPUs will make good use of domain-specific Small Language Models.
Thanks, funny enough the last 2 are on Udemy (my first courses) whereas the others are on my main site.
I've kept the Flask one up to date for almost 10 years, all free updates.
I have so many course ideas but starting a new one is tough because I've lost all search traction to my site and courses in general. I don't want it to end but I also have to be real.
I've put a decade into writing blog posts, hundreds of free YouTube videos (without ads or sponsors), and a 100+ episode podcast related to programming, and none of it has grown an audience in 5-10 years. I mean sure, I have 21k subs on YouTube, but most videos get like 200 views. I do it because I enjoy it, but that doesn't mean it's wrong to also want to be able to sustain myself doing it again, like I did between 2015 and 2021.
I took your Build A SaaS course on Udemy some years back, it was really good. I didn't realize it's been updated to this day. The Udemy version is still the 10 hour one though, so perhaps that's why.
Which is fine! I don't mind if it's down for a few hours. It reminds me that it's just a place to stop by for a bit before moving on. Like a digital coffee shop that sometimes has a leaky pipe and isn't open right at 7am.
Not every OS or every SSH daemon supports byte ranges, but most up-to-date Linux systems and OpenSSH absolutely do. One should not assume this exists on legacy systems and daemons.
I agree but there are legacy daemons that do not follow the spec. Most here will never see them in their lifetime but I had to deal with it in the financial world. People would be amazed and terrified at all the old non-standard crap that their payroll data is flying across. They just ignore the range and send the entire file. I am happy to not have to deal with that any more.
I use lftp a lot because of its better UI compared to sftp. However, for large files, even with scp I can pin GigE with an old Xeon-D system acting as a server.
I'm going to assume it is "more than you think; not as much as you'd like" because I don't have the time to burn this morning to replicate your research.
ChatGPT offered a "robotic" personality which really improved my experience. My frustrations were basically decimated right away and I quickly switched to a more "You get out of it what you put in" mindset.
And less than two weeks in they removed it and replaced it with some sort of "plain and clear" personality which is human-like. And my frustrations ramped up again.
That brief experiment taught me two things:
1. I need to ensure that any robots/LLMs/mech-turks in my life act at least as cold and rational as Data from Star Trek.
2. I should be running my own LLM locally to not be at the whims of $MEGACORP.
I approve of this, but in your place I'd wait for hardware to become cheaper when the bubble blows over. I have an i9-10900, bought an M.2 SSD and 64GB of RAM for it in July, and get useful results with Qwen3-30B-A3B (some 4-bit quant from unsloth running on llama.cpp).
It's much slower than an online service (~5-10 t/s), and lower quality, but it still offers me value for my use cases (many small prototypes and tests).
In the mean time, check out LLM service prices on https://artificialanalysis.ai/ Open source ones are cheap! Lower on the homepage there's a Cost Efficiency section with a Cost vs Intelligence chart.
I have a 9070 XT (16 GB VRAM) and it is fast with deepseek-r1:14B but I didn't know about that Qwen model. Most of the 'better' models will crash for lack of RAM.
This works if you can connect your actions directly with the outcomes. How would you assess the efficacy of preventative actions whose consequences are delayed and uncertain?
"I think we should X because it will probably contribute to Y."
What if Z happens? You could say "Doing X was pointless - Z happened anyway!" but then you are discounting at least two things:
1. the possibility that the magnitude of Z would be much higher
2. that it's a numbers game: sometimes you lose despite making the right decision
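Point 2 is just expected value. With made-up numbers (these probabilities are purely illustrative): suppose doing X cuts the chance of Z from 30% to 10%. X is clearly the right decision, yet Z still happens in roughly 1 of 10 trials, as a quick simulation shows:

```python
import random

random.seed(42)

def z_happens(p):
    """One trial: does bad outcome Z occur, given probability p?"""
    return random.random() < p

TRIALS = 10_000
z_with_x = sum(z_happens(0.10) for _ in range(TRIALS))     # decided well
z_without_x = sum(z_happens(0.30) for _ in range(TRIALS))  # decided badly

print(f"Z occurred {z_with_x} times with X, {z_without_x} times without")
```

Even having done X, Z still occurs around a thousand times out of ten thousand, so "Z happened anyway" in any single trial says almost nothing about whether X was the right call.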
I don't really understand your examples in the context of decision making - they feel more like execution lapses than strategic choices.
Choosing to park my car correctly because I used to get tickets is a reactive action. Helping someone because they asked for help is a reactive action. Being late and then doing things to stop being late is also reactive.
I’m not talking about preventing hypothetical consequences of events that could happen but haven’t happened yet.
Do you mean "Static Mappings"?
I have a couple dozen of those and had no issue during my pfSense upgrade.
I also rely heavily on two settings in "Services > DHCP Server":
- [x] Enable DNS Registration (leases will auto-register with the DNS Resolver)
- [x] Enable Early DNS Registration (static mappings will auto-register with the DNS Resolver)
I do not use the "Create a static ARP table entry for this MAC & IP Address pair." option for individual static mappings.
> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.
This seems contradictory to me.