Hacker News | Tarsul's comments

I once read a study about this during the height of covid[1], which is why I loaded up on Metformin. I was lucky enough not to get covid in the meantime (or didn't notice), but better safe than sorry.[2]

[1] 2022: https://www.nejm.org/doi/full/10.1056/NEJMoa2201662
[2] 2024: https://www.cidrap.umn.edu/covid-19/common-diabetes-drug-low...


I've been thinking: Trump won't settle for less than Greenland unless it's the Nobel Peace Prize. So... why not give it to him, but with caveats? E.g. it would be presented to him in an extraordinarily pompous celebration (to tickle his ego) but would remain in Norway until the day Trump's presidency ends. On that day he would receive it again, in another majestic ceremony, and could keep it!


That would also be the end of the Nobel Peace Prize, no? If you can 'win' it through blackmail.


The Nobel Peace Prize committee can then award it to themselves the following year, for averting a hostile takeover of Greenland.


I understand the pragmatism, but wouldn’t that just fuel his desire for more?


It'd be interesting to see what Trump would do if the Nobel committee promised to give him a peace prize if he stopped all tariffs and gave up on Greenland (or better yet, if he resigned).


Appeasement doesn’t work.


Because every other agreement we have made with him on tariffs or Ukraine, every other appeasement, has done nothing to sway him from his actual course.

Contrary to popular thinking (and it is a nicer fantasy), he is not an inconsistent, emotionally manipulated short-termist with no attention span.

He is actually smarter than we thought (or wanted to think), OR someone actually is a bona fide Trump whisperer.

His main foreign policy aims and beliefs seem remarkably fixed.

All of this is to say: no further appeasement. There is no need to completely undermine the Nobel Peace Prize as well for five minutes of respite; he will literally be back at this within a fortnight.


Humans are not rational. Even if you are 99% of the time, with a smartphone in your pocket there's a good chance you will use it for your emotional 1% within two hours (and unravel). Read Rutger Bregman's goal for 2026: https://www.theguardian.com/lifeandstyle/2026/jan/04/lifes-t...


Yes. It's propaganda, not speech. Also, the algorithms favor this sh*t, and this massive generation of content floods the zone[1]. There is nothing "freedom of choice" about it if it resurfaces all the time. On most social media, upvotes and views count disproportionately against downvotes and "not interested" signals. (TikTok is better, but even there you can't downvote AI videos enough to stop them resurfacing, probably because the algorithm isn't good enough to tell what is AI and what isn't, so those downvotes often don't count against AI.)

[1]https://en.wikipedia.org/wiki/Flood_the_zone
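For illustration, here's a toy scoring function showing the asymmetry described above. The weights are invented for this sketch and don't come from any real platform's ranking algorithm; the point is just that when positive signals (upvotes, views) are weighted far more heavily than negative ones, content keeps resurfacing even when many viewers reject it.

    # Toy engagement score; all weights are hypothetical.
    def toy_score(upvotes, views, downvotes, not_interested):
        return (1.0 * upvotes + 0.1 * views
                - 0.2 * downvotes - 0.3 * not_interested)

    # 50 downvotes and 10 "not interested" barely dent the score
    # contributed by 1000 passive views.
    print(toy_score(upvotes=20, views=1000, downvotes=50, not_interested=10))  # 107.0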


The country is Niger, not Antigua. Thanks for playing; better luck next time.


D’oh, that’s what I get for squeezing my reading and commenting in between compiles. But this is worse: Americans have little reason to go to Antigua; they have no reason whatsoever to immigrate to Niger. So what was achieved here? How is US policy affected in any way? It’s a theatrical, empty gesture.


Visas are not just for immigration. You do know that, right?


No. No, you don't understand. This is actually pro-consumer, because if the patent is enforced, other car manufacturers cannot pull this stunt. So thanks, BMW, good job keeping anti-competitive practices at bay by patenting them.


After watching the video: it feels like this is basically the same result you would have gotten from ChatGPT in December 2022 with a custom prompt. OK, probably with more back and forth to break it, but in the end... it feels like nothing's really changed, has it? (And yes, programmers might argue otherwise, but for the general "chatbot" experience for a general audience I really feel like we are treading water.)


If my hunch is correct, people are focusing on "happy cases" and have kinda decided to ignore the failure cases.


It's not just you. Despite the claims to the contrary by the companies trying to sell you AI, I haven't noticed any serious improvement in the past few years.


They are better at programming and generating pictures.


they are better at generating convincing agit-prop and destroying internet discourse.

and nudes of celebs.

coding utility is up a little, but it was useless for unique situations


Interesting. Do you have any specific evidence for your claims, or is it just that they got a bit better in general?

> and nudes of celebs.

Well, they got better at not giving people six fingers etc. in general, so I can believe that they also got better at producing pictures of naked people.

> coding utility is up a little, but it was useless for unique situations

They can't code up everything, just like a hammer can't turn a screw. But there are many situations in which many people find them useful.


LLMs really can't be improved all that much beyond what we currently have, because they're fundamentally limited by their architecture, which is what ultimately leads to this sort of behaviour.

Unfortunately the AI bubble seems to be predicated on just improving LLMs and really, really hoping that they'll magically turn into even weakly general AIs (or even AGIs, as the worst Kool-Aid drinkers claim they will), so everybody is throwing absolutely bonkers amounts of money at incremental improvements to existing architectures instead of doing the hard thing and trying to come up with better ones.

I doubt static networks like LLMs (or practically all other neural networks currently in use) will ever be candidates for general AI. All they can do is react to external input; they don't have any sort of "inner life" outside of that, i.e. the network isn't active except when you throw input at it. They literally can't even learn, and (re)training them takes ridiculous amounts of money and compute.

I'd wager that for producing an actual AGI, spiking neural networks or something similar would be what you'd want to lean into, maybe with some kind of neuroplasticity-like mechanism. Spiking networks already exist and can do some pretty cool stuff, but nowhere near what LLMs can do right now (even if LLMs do it kinda badly). Currently they're harder to train than traditional static NNs because spikes aren't differentiable, so you can't do backpropagation directly, and they're still relatively new, so there are a lot of open questions about e.g. the uses and benefits of different neuron models and such.
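To make the "not differentiable" point concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest common spiking model. It's plain Python with made-up parameter values, not any particular SNN framework: the membrane potential leaks toward rest, integrates input, and emits an all-or-nothing spike at a threshold crossing, which is the discrete event that backpropagation can't flow through directly.

    # Minimal leaky integrate-and-fire (LIF) neuron; parameter values
    # are illustrative, real models are tuned per task.
    def simulate_lif(inputs, dt=1.0, tau=20.0, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0):
        v = v_rest
        spikes = []
        for current in inputs:
            # Leak toward the resting potential, then integrate the input.
            v += (dt / tau) * (v_rest - v) + current
            if v >= v_thresh:
                spikes.append(1)   # discrete spike: no useful gradient here
                v = v_reset        # hard reset after firing
            else:
                spikes.append(0)
        return spikes

    # A constant drive makes the neuron fire periodically.
    print(simulate_lif([0.15] * 30))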


I think there is something to be said about the value of bad information. For example, pre-AI, how might you come to the correct answer for something? You might dig into the underlying documentation or whatever "primary literature" exists for that thing and get the correct answer.

However, that was never very many people, only the smart ones. Many preferred shouting into the void at reddit/stackoverflow/quora/yahoo answers/forums/irc/whatever, seeking an "easy" answer that was probably not entirely correct, rather than going straight to the source of truth.

There is a ton of money in controlling that pipeline and selling expensive monthly subscriptions to people to use it. Even better if you can shoehorn yourself into the workplace and get employers to pay a premium per user. Get people to rely on it and have no clue how to deal with anything without it.

It doesn't matter if it's any good. That isn't even the point. It just has to be the first thing people reach for and therefore available to every consumer and worker, a mandatory subscription most people now feel obliged to pay for.

This is why these companies are worth billions: not for the utility, but for the money to be made off the people who don't know any better.


But the thing is that they aren't even making money; e.g. OpenAI lost $11 billion in one quarter. Big LLMs are just fantastically expensive to train and operate, and they ultimately aren't as useful to, e.g., businesses as they've been evangelised to be, so demand just hasn't picked up. Plus, the subscription plans are priced so low that most if not all "LLM operators" (OpenAI, Anthropic, etc.) apparently lose money even on the most expensive ones. They'd lose all their customers if the plans actually cost as much as they should.

Apropos of that, I wonder if OpenAI et al. are losing money on the API plans too, or if it's just the subscriptions.

Source for the OpenAI loss figure: https://www.theregister.com/2025/10/29/microsoft_earnings_q1...

Source for OpenAI losing money on their $200/mo sub: https://fortune.com/2025/01/07/sam-altman-openai-chatgpt-pro...
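As a back-of-the-envelope illustration of how a flat subscription can end up underwater (every number here is hypothetical; per-user inference costs aren't public):

    # Hypothetical unit economics for a flat-rate plan. All figures
    # below are invented for illustration, not published numbers.
    price_per_month = 200.00        # e.g. a top-tier subscription
    cost_per_1k_tokens = 0.06       # assumed blended inference cost
    tokens_per_month = 5_000_000    # a heavy user's monthly usage

    margin = price_per_month - cost_per_1k_tokens * tokens_per_month / 1000
    print(f"monthly margin on this user: ${margin:.2f}")  # $-100.00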


To lose 11 billion means you have successfully convinced some people to give you 11 billion to lose. And the money wasn't lost, either; it was spent. It was used for things, making people richer and buying hardware, which also makes people richer.


He started in 2018. By 2021 he had shipped 30 (to Iraq) and wanted to ship 7,500 over the next three years. Fast forward to 2025: he has shipped 500 across 13 countries. Hopefully, with his partnerships and local production (in India), his ramp-up will speed up. I wish him luck.


Just read Treasure Island. It's a classic but one that is easy to comprehend and also timeless.


In Germany you can get 20 euros for blood donations (at least at the Deutsches Rotes Kreuz).


