NoPicklez's comments

Sort of related, but here in Australia pet food manufacturers are not required to list the nutritional content of their foods, whereas in the US, as I understand it, they do.

They do, but the nutritional information guides you to feeding your dog 20,000 kcal a day. The suggested serving size on every brand I've seen is about 5 cups for a 70 lb dog, whereas my dog gains weight on anything more than one cup.

At least the "grain free" labels appear to be accurate.


I've found it helpful to knock down a glass of Metamucil (psyllium husk) morning and night, most days where I can.

It's a good perspective; however, perhaps your body just cannot deal with a higher amount of fiber compared to others.

In the same way that diabetics need to tightly control their sugar intake, it doesn't therefore mean that the need for sugar is up for debate.


My greater point is that there is no one-size-fits-all here. Generalizations that are supposed to cover everyone, like the one above, are not always helpful and can actually be harmful.

It would've been more fun to do this via a conversation with a real human

Semi-polished Response: Oh, believe me, this very old but bored squishy human brain dreamed up the whole pigeon relay twist. Mr. AI just polished my prompt into coherent words since aging has an effect on people's ability to think clearly, especially in the "Information Superhighway." Sorry if it robbed you of that authentic chat vibe, that always-in-a-hurry young'uns seem to thrive on.

Unpolished Comment: In other words, my comment was not levity. Learn to read the room, listen, close your mouth (e.g. flies may get in) and try to understand the deeper meaning in what others say or post before making an assumption.

More AI Polishing: (oh, I know you know) In case you missed the point, "pigeon relay" is a historical messaging system where homing pigeons carried notes in stages across long distances. Pigeons were raised at stations along a route. A message was attached to one bird and released; it flew home to the next station. There, the note was transferred to a fresh pigeon headed to the following station, and so on—like a relay race. Genghis Khan used this to span Asia and Europe. It was fast, hard to intercept, and worked when other methods failed.

Another Unpolished Comment: In other words, this could be a viable option to transfer information in places where regimes have instituted information blackouts that block all forms of modern electronic and digital communications, such as "Iran, Cuba, Venezuela, North Korea" and other places where crime is a daily occurrence, and civil and human rights violations continue and remain unchecked or unacknowledged by "Biased People" covertly embedded inside media outlets, especially the Western Media.

Extremely Unpolished Point of View: Inexperienced and younger people--hint, hint--are usually the first to criticize points of view like mine because they have been taught to think or feel a certain way by listening to a single source of news or point of view. Since I have lived thousands, hundreds, er tens of decades, I have learned the game, thus I prefer to bypass all that crap, ignore TV, ignore most if not all media outlets, and use word of mouth (e.g. people, shortwave radio) to verify what's going on in the world.

Hopefully this should convince you that I am a cyborg and not an AI :)

Sigh, my brain hurts :(


Why not, in that case, provide an example to rebut and contribute, as opposed to knocking someone else's example, even if it was against the use of agentic coding?

Serious question - what kind of example would help at this point?

Here is a sample of (IMO) extremely talented and well-known developers who have expressed that agentic coding helps them: Antirez (creator of Reddit), DHH (creator of RoR), Linus (creator of Linux), Steve Yegge, Simon Willison. This is just randomly off the top of my head; you can find many more. None of them claim that agentic coding does a year's worth of work for them in an hour, of course.

In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some form of speed up, most of them significant. The "AI doesn't help me" crowd is, as far as I can tell, an online-only phenomenon. In real life, everyone has used it to at least some degree and finds it very valuable.


Those are some high profile (celebrity) developers.

I wonder if they have measured their results? I believe that the perceived speed up of AI coding is often different from reality. The following paper backs this idea https://arxiv.org/abs/2507.09089 . Can you provide data that objects this view, based on these (celebrity) developers or otherwise?


Almost off-topic, but got me curious: How can I measure this myself? Say I want to put concrete numbers to this, and actually measure, how should I approach it?

My naive approach would be to just implement it twice, once together with an LLM and once without, but that has obvious flaws, most obvious that the order which you do it with impacts the results too much.

So how would I actually go about and be able to provide data for this?


> My naive approach would be to just implement it twice, once together with an LLM and once without, but that has obvious flaws, most obvious that the order which you do it with impacts the results too much.

You'd get a set of 10-15 projects, and a set of 10-15 developers. Then each developer would implement the solution with LLM assistance and without such assistance. You'd ensure that half the developers did LLM first, and the others traditional first.

You'd only be able to detect large statistical effects, but that would be a good start.

If it's just you then generate a list of potential projects and then flip a coin as to whether or not to use the LLM and record how long it takes along with a bunch of other metrics that make sense to you.
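The coin-flip approach above can be sketched as a tiny self-experiment log. All task names and timings below are invented for illustration; the point is the randomized assignment plus per-condition averages:

```python
import random
import statistics

# Self-experiment log: each entry is (task, used_llm, minutes).
log = []

def assign_condition():
    """Flip a coin before starting each task, as suggested above."""
    return random.choice([True, False])  # True -> use the LLM

def record(task, used_llm, minutes):
    log.append((task, used_llm, minutes))

# Example entries after a few weeks of tasks (fabricated data):
record("add csv export", True, 45)
record("fix auth bug", False, 90)
record("write migration", True, 60)
record("refactor parser", False, 120)

def mean_minutes(used_llm):
    """Average time per task under one condition."""
    times = [m for _, u, m in log if u == used_llm]
    return statistics.mean(times)

print(f"LLM:    {mean_minutes(True):.1f} min on average")
print(f"No LLM: {mean_minutes(False):.1f} min on average")
```

The key design choice is that the coin flip happens *before* you see how hard the task turns out to be, so you can't (consciously or not) steer easy tasks toward one condition.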


The initial question was:

> wonder if they have measured their results?

Which seems to indicate that there would be a suitable way for a single individual to be able to measure this by themselves, which is why I asked.

What you're talking about is a study and beyond the scope of a single person, and also doesn't give me the information I'd need about myself.

> If it's just you then generate a list of potential projects and then flip a coin as to whether or not to use the LLM and record how long it takes along with a bunch of other metrics that make sense to you.

That sounds like I can just go by "yeah, feels like I'm faster", which I thought exactly was parent wanted to avoid...


> That sounds like I can just go by "yeah, feels like I'm faster", which I thought exactly was parent wanted to avoid...

No it doesn't, but perhaps I assumed too much context. Like, you probably want to look up the Quantified Self movement, as they do lots of social science like research on themselves.

> Which seems to indicate that there would be a suitable way for a single individual to be able to measure this by themselves, which is why I asked.

I honestly think pick a metric you care about and then flip a coin to use an LLM or not is the best you're gonna get within the constraints.


> Like, you probably want to look up the Quantified Self movement, as they do lots of social science like research on themselves.

I guess I was looking for something bit more concrete, that one could apply themselves, which would answer the "if they have measured their results? [...] Can you provide data that objects this view" part of parents comment.

> then flip a coin to use an LLM or not is the best you're gonna get within the constraints.

Do you think trashb who made the initial question above would take the results of such evaluation and say "Yeah, that's good enough and answers my question"?


> I guess I was looking for something bit more concrete, that one could apply themselves, which would answer the "if they have measured their results? [...] Can you provide data that objects this view" part of parents comment.

This stuff is really, really hard. Social science is very difficult as there's a lot of variance in human ability/responses. Added to that is the variance surrounding setup and tool usage (claude code vs aider vs gemini vs codex etc).

Like, there's a good reason why social scientists try to use larger samples from a population, and get very nerdy with stratification et al. This stuff is difficult otherwise.

The gold standard (rather like the METR study) is multiple people with random assignment to tasks with a large enough sample of people/tasks that lots of the random variance gets averaged out.

On a 1 person sample level, it's almost impossible to get results as good as this. You can eliminate the person level variance (because it's just one person), but I think you'd need maybe 100 trials/tasks to get a good estimate.
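A rough back-of-envelope supports the ~100 figure. Assuming (my numbers, not the commenter's) that task times vary about 50% around their mean and you want to detect a 20% speedup at the conventional alpha = 0.05 with 80% power, the standard two-sample size formula gives roughly 100 tasks per condition:

```python
# Back-of-envelope power calculation (two-sample comparison of mean task time).
# Assumed inputs: sd and effect are expressed as fractions of the mean task time.
z_alpha, z_beta = 1.96, 0.84   # standard normal quantiles for alpha=0.05, power=0.80
sd, effect = 0.5, 0.2          # ~50% variability, 20% speedup to detect

# n per arm = 2 * (z_alpha + z_beta)^2 * (sd / effect)^2
n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2
print(round(n_per_arm))
```

With these assumptions it comes out just under 100 tasks per condition, which is why a single-person sample is so hard to pull off.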

Personally, that sounds really implausible, and even if you did accomplish this, I'd be sceptical of the results as one would expect a learning effect (getting better at both using LLM tools and side projects in general).

The simple answer here (to your original question) is no, you probably can't measure this yourself as you won't have enough data or enough controls around the collection of this data to make accurate estimates.

To get anywhere near a good estimate you'd need multiple developers and multiple tasks (and a set of people to rate the tasks such that the average difficulty remains constant).

Actually, I take that back. If you work somewhere with lots and lots of non-leetcode interview questions (take homes etc) you could probably do the study I suggested internally. If you were really interested in how this works for professional development, then you could randomise at the level of interviewee and track those that made it through and compare to output/reviews approx 1 year later.

But no, there's no quick and easy way to do this because the variance is way too high.

> Do you think trashb who made the initial question above would take the results of such evaluation and say "Yeah, that's good enough and answers my question"?

I actually think trashb would have been OK with my original study, but obviously that's just my opinion.


To wrap this up, what I was trying to say is that the feeling of being faster may not align with reality. Even for people who have a good understanding of the matter, it may be difficult to estimate. So I would say be skeptical of claims like this, and try to somehow quantify it in a way that matters for the tasks you do. This is something managers of software projects have been trying to tackle for a while now.

There is no exact measurement in this case, but you could get an idea by testing certain types of implementations: for example, whether you finish similar tasks on average 25% faster over a longer testing period with AI than without. Just the act of timing yourself doing tasks with or without AI may already give a crude indication of the difference.
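As a sketch of that crude indication, with entirely made-up timings:

```python
from statistics import mean

# Crude speedup estimate from timed tasks. The timings are invented
# for illustration; in practice they'd come from your own task log.
with_ai    = [40, 55, 35, 50]   # minutes per task
without_ai = [60, 70, 50, 60]   # minutes per task

# Relative speedup: 1 - (mean AI time / mean non-AI time)
speedup = 1 - mean(with_ai) / mean(without_ai)
print(f"~{speedup:.0%} faster with AI")
```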

You could also run a trial implementing coding tasks like LeetCode; however, you will introduce some kind of bias due to having done them previously. Additionally, the tasks may not align with your daily activities.

A trial with multiple developers working on the same task pool with or without AI could lead to more substantial results, but you won't be able to do that by yourself.


So there seems to be a shared understanding of how difficult "measure your results" would be in this case, so could we also agree that asking someone:

> I wonder if they have measured their results? [...] Can you provide data that objects this view, based on these (celebrity) developers or otherwise?

isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.


> isn't really fair? Because not even you or I really know how to do so in a fair and reasonable manner, unless we start to involve trials with multiple developers and so on.

I think in a small conversation like this, it's probably not entirely fair.

However, we're hearing similar things from much larger organisations who definitely have the resources to do studies like this, and yet there's very little decent work available.

In fact, a lot of the time they are deliberately misleading people ("25% of our code generated by AI", when that's mostly Copilot or other autocomplete). That 25% stat was probably true historically with JetBrains products and any form of code generation (for protobufs et al), so it's wildly deceptive.


> I wonder if they have measured their results?

This is a notoriously difficult thing to measure in a study. More relevantly though, IMO, it's not a small effect that might be difficult to notice - it's a huge, huge speedup.

How many developers have measured whether they are faster when programming in Python vs assembly? I doubt many have. And I doubt many have chosen Python over assembly because of any study that backs it up. But it's also not exactly a subtle difference - I'm fairly sure 99% of people will say that, in practice, it's obvious that Python is faster for programming than assembly.

I talked literally yesterday to a colleague who's a great senior dev, and he made a demo in an hour and a half that he says would've taken him two weeks to do without AI. This isn't a subtle, hard to measure difference. Of course this is in an area where AI coding shines (a new codebase for demo purposes) - but can we at least agree that in some things AI is clearly an order of magnitude speedup?


A lot of comments read like a knee-jerk reaction to the Twitter crowd claiming they vibe-code apps making $1M in 2 weeks.

As a designer I'm having a lot of success vibe coding small use cases, like an alternative to lovable to prototype in my design system and share prototypes easily.

All the devs I work with use Cursor; one of them (a frontend dev) told me most of the code is written by AI. In the real world, agentic coding is used massively.


I think it is a mix of ego and fear - basically "I'm too smart to be replaced by a machine" and "what I'm gonna do if I'm replaced?".

The second part is something I think a lot about now after playing around with Claude Code, OpenCode, Antigravity and extrapolating where this is all going.


I agree it's about the ego. As for the other part, I am also trying to project a few scenarios in my head.

Wild guess nr.1: a large majority of software jobs will be complemented (or mostly replaced) by AI agents, reducing the need for as many people doing the same job.

Wild guess nr.2: demand for creating software will increase but the demand for software engineers creating that software will not follow the same multiplier.

Wild guess nr.3: we will have the smallest teams ever, with only a few people on board, leading perhaps to more companies being instantiated than ever before.

Wild guess nr.4: in the near future, the pool of software engineers as we know them today will be drastically downsized, and only the ones who can demonstrate they bring substantial value over using the AI models will remain relevant.

Wild guess nr.5: getting the job in software engineering will be harder than ever.


Nit: s/Reddit/Redis/

Though it is fun to imagine using Reddit as a key-value store :)


That is hilarious.... and to prove the point of this whole comment thread, I created reddit-kv for us. It seems to work against a mock, I did not test it against Reddit itself as I think it violates ToS. My prompts are in the repo.

https://github.com/ConAcademy/reddit-kv/blob/main/README.md


Typo-Driven Development!

Aaarg I was typing quickly and mistyped. :face-palm:

Thanks for the correction.


You haven't provided a sample either... But sure, let's dig in.

> Antirez

When I first read his recent article, https://antirez.com/news/158 (don't buy into the anti-AI hype), I found the whole thing uncompelling. But I gave it a second chance and re-read it. I'm gonna have to resist going line by line, because I find some of it outright objectionable.

> Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career.

Setting aside the rhetorical/argumentative deficiencies, and the fact that this is just FUD (he next suggests that if you disagree, you should just keep trying it every few months, which suggests to me that even he knows it's BS): he writes that in the context of the ethical or moral objections he raises. So he's suggesting that the best way to advance in your career is to ignore the social and ethical concerns and just get on board?

Gross.

Individual careers aside, I'm not impressed by the correctness of the code emitted by AI and committed by most AI users. I'm unconvinced that AI will improve the industry and its reputation as a whole.

But the topic is supposed to be specific examples of code, so let's do that. He mentions adding UTF-8 support to his toy terminal input project: https://github.com/antirez/linenoise/commit/c12b66d25508bd70... It's a very useful feature to add, without a doubt! His library is better than it was before. But parsing UTF-8 is the kind of thing that's very easy to implement carelessly or incompletely, i.e. very easy to trip over; the implementation specifics are fairly described as a solved problem. It's been done so many times that, if you're willing to re-implement from another existing source, it wouldn't take very long to do this without AI. (And if you're not willing, why are you using AI? I'm ethically opposed to the laundered provenance of source material.) Then, verifying that the code is correct absolutely takes more time than if you'd written it by hand. Everyone keeps telling me I have to ensure that the AI hasn't made a mistake, so either I trust the vibes, or I'm still spending that time. Even Simon Willison agrees with me[1].

> Simon Willison

Is another of the suggestions, so he's perfect to go next. I normally would exclude someone who's clearly best known as an AI influencer, but he's without a doubt an engineer too, so fair game. Especially given he answered a similar question just recently: https://news.ycombinator.com/item?id=46582192 I've been searching for a counterpoint to my personal anti-AI hype, so I was eager to see what the experts are making... and it's all boilerplate. I don't mean to say there's nothing valuable or nothing useful there, only that the vast majority of the code in these repos is boilerplate that has no use out of context. The real value is just a few lines of code, something that I believe would only take 30m to write without AI for a project you were already working on. It'd take a few hours to make any of this myself (assuming I'm even good enough to figure it out).

And I do admit, 10m on BART vs 3-4 hours on a weekend is a very significant time delta. But also, I like writing code. So what was I really gonna do with that time? Make shareholder value go up, no doubt!

> Linus Torvalds

I can't find a single source where he's an advocate for AI. I've seen the commit, and while some of the GitHub comments are gold, I wasn't able to draw any meaningful conclusions from the commit in isolation. Especially not when the last thing I read about it said he used it because he doesn't write Python code. So I don't know what conclusions I can pull from this commit, other than that AI can emit code. I knew that.

I don't have enough context to comment on the opinions of Steve Yegge or his AI-generated output. I simply don't know enough, and after a quick search nothing other than "AI influencer" jumped out at me.

Then, I try to care about who I give my time and attention to, and who I associate with, so this is the end of the list.

I contrast these examples with all the hype that's proven over and over to be a miscommunication if I'm being charitable, or an outright lie if I'm not. I also think it's important to consider the incentives leading to these "miscommunications" when evaluating how much good faith you assign them.

On top of that, there are the countless examples of AI confidently lying to me about something. Explaining my fundamental, concrete objection to being lied to would take another hour I shouldn't spend on a HN comment.

What specific examples of impressive things/projects/commits/code am I missing? What output, makes all the downsides of AI a worthwhile trade off?

> In addition, pretty much every developer I know has used some form of GenAI or agentic coding over the last year, and they all say it gives them some form of speed up

I remember reading that, when tested, they're not actually faster. Any source on this other than vibes?

[1]: https://simonwillison.net/2025/Dec/18/code-proven-to-work/


That's like saying "Everyone be friendly and helpful to one another"

Easier said than done


Memes

That's no meme - it's a fully operational crazy custom build. A remarkable video that doesn't end in "like and mash subscribe".

I think they meant "store memes on it".

That too! There seem to be a lot of YT videos that are memes on this theme as well though. He alludes to the fact that people make build videos that are essentially adding a couple of pre-existing more or less built components together. This is another level.

Yeah "memes" was the answer to the question of the post.

Archival of the memes


Amazed by how many things he built in the process of making his NAS.

High quality vid!


Can't respond to your EU tariff comment, but just wanted to tell you that VAT and luxury tax apply to EU-made products as well, so it doesn't seem relevant to add when talking about tariffs.

Why not the same thing for alcohol?

I think it's fairly obvious why there are certain age restrictions for younger groups of people, as they are more vulnerable.


Because people actually want alcohol, whereas advertising is generally something they're stuck with.

Google Translate says it's just "Dead?"

So it translates literally to "Dead" as opposed to the non-literal "Are you dead yet?"


Yes, in the Chinese phrase "死了么", "死" means "die", "死了" means "dead" (or "has died"), and "么" is a modal particle marking a question.

These are the sorts of small details that I remember Apple being known for and putting a lot of thought into, oftentimes a little obsessively.

Not to say this isn't the case anymore but

