> witr is successful if users trust it during incidents.

> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.

This seems contradictory to me.


The last bit

> supervised by a human who occasionally knew what he was doing.

seems to be in jest, but I could be wrong. If it were omitted or clearly flagged as sarcasm, I would feel a lot better about the project overall. As long as you’re auditing the LLM’s outputs and doing a decent code review, I think it’s reasonable to trust this tool during incidents.

I’ll admit I did go straight to the end of the readme to look for this exact statement. I appreciate that they chose to disclose.


Thank you, yes, I added it in jest and I'm still keeping it for some time. It was always meant to be removed in the future.

If you're capable of auditing the LLM’s outputs and doing a decent code review then you don't need an LLM.

Nobody who was writing code before LLMs existed "needs" an LLM, but they can still be handy. Procfs parsing trivialities are the kind of thing LLMs are good at, although apparently it still takes a human to say "why not use an existing library that solves this, like https://pkg.go.dev/github.com/prometheus/procfs"
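
For what it's worth, here's roughly what that library buys you. A minimal sketch assuming prometheus/procfs's NewDefaultFS/AllProcs API (illustrative, not code from witr):

    // List running processes via prometheus/procfs instead of
    // hand-rolling the /proc parsing. Illustrative sketch only.
    package main

    import (
        "fmt"
        "log"

        "github.com/prometheus/procfs"
    )

    func main() {
        fs, err := procfs.NewDefaultFS() // uses /proc by default
        if err != nil {
            log.Fatal(err)
        }
        procs, err := fs.AllProcs()
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range procs {
            stat, err := p.Stat() // parses /proc/<pid>/stat for you
            if err != nil {
                continue // process may have exited mid-walk
            }
            fmt.Printf("pid=%d ppid=%d comm=%s\n", stat.PID, stat.PPID, stat.Comm)
        }
    }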

Sometimes LLMs will give a "why not..." or just mention something related; that's how I found out about https://recoll.org/ and https://www.ventoy.net/. But people should probably prompt them explicitly to suggest alternatives before diving in to produce something new...

> Procfs parsing trivialities are the kind of thing LLMs are good at

Have you tried it? Procfs trivialities are exactly the kind of thing where an LLM will hallucinate something plausible-looking.

Fixing LLM hallucinations takes more work and time than just reading manpages and writing code yourself.


Claude Code can read man pages too.

If I ever feel the urge to misengineer a Rube Goldberg contraption to manage my vibe-coded LLM output, I'll get back to you.

But at the moment I feel like all that sounds suspiciously like actual work.


It can't "read" anything. It can include the man page in the prompt, but it can never "read" it.

If the output is working code I don't really care whether it's reading, "reading", or """reading"""

Neither do you need an IDE, syntax highlighting, or third-party libraries, yet you use all of them.

There's nothing wrong with a software engineer using LLMs as an additional tool in their toolbox. The problem arises when people stop doing software engineering because they believe the LLM is doing the engineering for them.


I don't use IDEs that require more time and effort investment than they save.

Your mileage may vary, though. Lots of software engineers love those time and effort tarpits.


I don't know what “tarpit” you're talking about.

Every IDE I've used just worked out of the box, be it Visual Studio, Eclipse, or anything using the language server protocol.

Having things like method auto-completion, go-to-definition, and symbol renaming is a net productivity gain from the minute you start using them, and I couldn't imagine this being a controversial take in 2025…


> I don't know what “tarpit” you're talking about.

Really? You don't know software developers who would rather futz around with editor configs and tooling and libraries, etc., all day every day instead of actually shipping the boring code?

You must be working in a different industry.


Right, we don't need a lot of things, yet here we are.

"Need" and "can use" are different things.

I'd not trust any app that parses /proc to obtain process information (for reasons [0]), especially if the machine has been compromised (unless by "incident" the author means something else):

https://github.com/pranshuparmar/witr/tree/main/internal/lin...

It should be the last option.

[0] https://news.ycombinator.com/item?id=46364057


I’m struggling with the utility of this logic. The argument seems to be "because malware can intercept /proc output, any tool relying on it is inherently unreliable."

While that’s theoretically true in a security context, it feels like a 'perfect is the enemy of the good' situation. Unless the author is discussing high-stakes incident response on a compromised system, discarding /proc-based tools for debugging and troubleshooting seems like throwing the baby out with the bathwater. If your environment is so compromised that /proc is lying to you, you've likely moved past standard tooling anyway.


Fair enough! That line was meant tongue‑in‑cheek, and to be transparent about LLM usage. Rest assured, they were assistants, not authorities.

Not to me. It just has to demonstrably work well, which is plenty possible with a developer focused on outcome rather than process (though hopefully they cared a bit about process/architecture too).

Regardless of code correctness, it's easy enough for malware to spoof process relationships.
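
To make that concrete: the name /proc reports for a process is largely self-reported, and relationships can be laundered too (double-fork and the orphan gets reparented to PID 1, severing the chain a tool like this walks). A tiny illustrative Go sketch of the name half; the fake name is made up:

    // A process can rewrite its own comm, so anything walking /proc
    // sees whatever name the process chose to report. Sketch only.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // /proc/self/comm is writable by the process itself
        // (the kernel truncates the name to 15 characters).
        if err := os.WriteFile("/proc/self/comm", []byte("systemd-ish"), 0644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        name, _ := os.ReadFile("/proc/self/comm")
        fmt.Printf("tools reading /proc now see: %s", name)
    }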

I agree, the LLM probably has a much better idea of what's happening than any human

I was looking into LLM prompt and context efficacy and found this article. Having one language model snitch on the other might be effective?

I think that the practical uses of LLMs will be in more controlled environments and that on-device NPUs will make good use of domain-specific Small Language Models.


I respect the restraint from self-promotion here but...

Your [Docker, Flask, HTTPS, AWS Docker, and DevOps courses](https://nickjanetakis.com/courses) look good and the price is fair. Bookmarked!

(the last two could use some more detail in the overview but the first three would give me enough confidence to take a chance)


Thanks, funnily enough the last 2 are on Udemy (my first courses) whereas the others are on my main site.

I've kept the Flask one up to date for almost 10 years, all free updates.

I have so many course ideas, but starting a new one is tough because I've lost all search traction for my site and courses in general. I don't want it to end but I also have to be real.

I've put a decade into writing blog posts, hundreds of free YouTube videos (without ads or sponsors), and a 100+ episode podcast related to programming, and none of it has grown an audience in 5-10 years. I mean sure, I have 21k subs on YouTube, but most videos get like 200 views. I do it because I enjoy it, but that doesn't mean it's wrong to also want to be able to sustain myself again doing it like I did between 2015 and 2021.


I took your Build A SaaS course on Udemy some years back; it was really good. I didn't realize it has updates to this day. The Udemy version is still the 10-hour one though, so perhaps that's why.

Thank you.

Yep on my site there's around 30 hours of content for the same course. Basically a bunch of updates and refactors along with building a 2nd app.

I was trying to differentiate my site vs Udemy by adding extra perks.


Which is fine! I don't mind if it's down for a few hours. It reminds me that it's just a place to stop by for a bit before moving on. Like a digital coffee shop that sometimes has a leaky pipe and isn't open right at 7am.

I hope it doesn't change (much).


Wow, I hadn't heard of this before. You're saying it can "chunk" large files when operating against a remote sftp-subsystem (OpenSSH)?

I often find myself needing to move a single large file rather than many smaller ones but TCP overhead and latency will always keep speeds down.


Not every OS or SSH daemon supports byte ranges, but most up-to-date Linux systems and OpenSSH absolutely do. One should not assume this exists on legacy systems and daemons.

Byte ranges are the only way to access files over sftp. Look at the read and write requests in https://datatracker.ietf.org/doc/html/draft-ietf-secsh-filex...
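
Every SSH_FXP_READ/WRITE request carries an explicit offset and length, which is exactly what chunked/parallel transfers build on. A rough Go sketch of a ranged read, assuming github.com/pkg/sftp on top of golang.org/x/crypto/ssh (host, path, and credentials are placeholders):

    // Read a single 1 MiB byte range from a remote file over SFTP.
    // Sketch only: connection details are placeholders.
    package main

    import (
        "fmt"
        "io"
        "log"

        "github.com/pkg/sftp"
        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "user",
            Auth:            []ssh.AuthMethod{ssh.Password("password")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for production
        }
        conn, err := ssh.Dial("tcp", "example.com:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client, err := sftp.NewClient(conn)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        f, err := client.Open("/path/to/bigfile")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // ReadAt issues SSH_FXP_READ with an explicit offset, i.e. a byte range.
        buf := make([]byte, 1<<20)               // 1 MiB chunk
        n, err := f.ReadAt(buf, 42*int64(1<<20)) // start at the 42 MiB mark
        if err != nil && err != io.EOF {
            log.Fatal(err)
        }
        fmt.Printf("read %d bytes\n", n)
    }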

I agree, but there are legacy daemons that do not follow the spec. Most here will never see them in their lifetime, but I had to deal with them in the financial world. People would be amazed and terrified at all the old non-standard crap that their payroll data is flying across. They just ignore the range and send the entire file. I am happy to not have to deal with that anymore.

I use lftp a lot because of its better UI compared to sftp. However, for large files, even with scp I can pin GigE with an old Xeon-D system acting as a server.


Yes, for local access this is my experience too. For trans-oceanic file transfers I can really see the limits and parallelization is essential.

I'm going to assume it is "more than you think; not as much as you'd like" because I don't have the time to burn this morning to replicate your research.


ChatGPT offered a "robotic" personality which really improved my experience. My frustrations were basically decimated right away and I quickly switched to a more "You get out of it what you put in" mindset.

And less than two weeks in they removed it and replaced it with some sort of "plain and clear" personality which is human-like. And my frustrations ramped up again.

That brief experiment taught me two things: 1. I need to ensure that any robots/LLMs/mech-turks in my life act at least as cold and rational as Data from Star Trek. 2. I should be running my own LLM locally to not be at the whims of $MEGACORP.


> I should be running my own LLM

I approve of this, but in your place I'd wait for hardware to become cheaper when the bubble blows over. I have an i9-10900, bought an M.2 SSD and 64GB of RAM for it in July, and get useful results with Qwen3-30B-A3B (some 4-bit quant from unsloth running on llama.cpp).

It's much slower than an online service (~5-10 t/s), and lower quality, but it still offers me value for my use cases (many small prototypes and tests).

In the meantime, check out LLM service prices on https://artificialanalysis.ai/. Open-source ones are cheap! Lower on the homepage there's a Cost Efficiency section with a Cost vs Intelligence chart.


I have a 9070 XT (16 GB VRAM) and it is fast with deepseek-r1:14B but I didn't know about that Qwen model. Most of the 'better' models will crash for lack of RAM.

https://dev.to/composiodev/qwen-3-vs-deep-seek-r1-evaluation...

If it runs, it looks like I can get a bit more quality. Thanks for the suggestion.


Sort of a personal modified Butlerian Jihad? Robots / chatbots are fine as long as you KNOW they're not real humans and they don't pretend to be.


`ls -A` will also show hidden files but excludes "." and ".."

I prefer it that way in theory, but a capital "A" is not as quick/easy to type.


This works if you can connect your actions directly with the outcomes. How would you assess the efficacy of preventative actions whose consequences are delayed and uncertain?

"I think we should X because it will probably contribute to Y."

What if Z happens? You could say "Doing X was pointless - Z happened anyway!" but then you are discounting at least two things:

1. the possibility that the magnitude of Z would be much higher

2. that it's a numbers game: sometimes you lose despite making the right decision

I don't really understand your examples in the context of decision making - they feel more like execution lapses than strategic choices.


We’re not talking about preventative actions.

Choosing to park my car correctly because I used to get tickets is a reactive action. Helping someone because they asked for help is a reactive action. Being late and then doing things to stop being late is also reactive.

I’m not talking about preventing hypothetical consequences of events that could happen but haven’t happened yet.


> Choosing to park my car correctly because I used to get tickets is a reactive action.

How do you explain someone who chooses to park correctly and has never received a parking ticket?


Why do I need to? I didn’t propose some framework to analyze people who are perfect.

This thread chain is about people who do something and it doesn’t work out.


Could your parking have been better? How so?

If you answered "no" to the above question then you answered wrong. This is true for s/parking/*/ because perfect doesn't exist.


Do you mean "Static Mappings"? I have a couple dozen of those and had no issue during my pfSense upgrade. I also rely heavily on two settings in "Services > DHCP Server":

- [x] Enable DNS Registration (leases will auto-register with the DNS Resolver)

- [x] Enable Early DNS Registration (static mappings will auto-register with the DNS Resolver)

I do not use the "Create a static ARP table entry for this MAC & IP Address pair." option for individual static mappings.

Hopefully this helps you in your troubleshooting.

