
No "max" or "pro" equivalent? I wanted to get a new Macbook Pro, but there's no obvious successor to the M4 Max available, M5 looks like a step down in performance if anything.




No doubt the "wider" versions of the M5 are coming.

My hope is that they are taking longer because of a memory system upgrade that will make running significantly more powerful LLMs locally more feasible.


I assume that would come with the next release cycle of the MacBook? Isn’t that supposed to be early next year?


Apparently not until early next year. I was surprised by this too, but I hadn't been following the rumors at all, so I had no real grounds for surprise.


Does the "caching containers for Codex Cloud" mean I have some chance of being able to reuse build artifacts between tasks? My Rust project takes around 20 minutes to set up from scratch in a new Codex environment, which seems extremely expensive.


I think Cursor tab-completion is entirely in-house, right? That feature on its own is worth at least $5/month, it's super well done.


I think this is up to the user. I actually found tab so annoying that it was a big reason I quit Cursor and cancelled my sub. I couldn't think straight with it constantly suggesting things after every keystroke, and it caused a few annoying bugs for me.

I find pure Claude and Neovim to be a great pair. I set up custom Vim commands to make sharing file paths, line numbers, and code super easy; that way I can move quickly through code when developing manually while also having Claude right there with the context it needs.
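
For example, a minimal sketch of one such mapping (the <leader>yp name is just an illustration) that yanks a path:line reference for pasting into a Claude prompt:

    " Copy the current file path plus line number (e.g. src/main.rs:42)
    " into the system clipboard register; requires a build with +clipboard.
    nnoremap <leader>yp :let @+ = expand('%') . ':' . line('.')<CR>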


I'm paying $20/m just for tab, and I'd be willing to pay $40/m just to have it in Rider so I can go back to using a single IDE.


Doesn't Rider have JetBrains AI? It's basically the same thing as Cursor.


It doesn't have Junie (the JetBrains AI agent) yet, but I'm talking about the agents. I'm happy with Claude Code; I just want Cursor Tab in there. I use it for quick edits & refactoring, not writing new code, and it's damn good at what it does.


And dare I say their only remaining moat.


JetBrains IDEs also have that.


I agree, their tab completion is magical.


I didn't think much of it until I canceled Cursor to try out Copilot, which is slower and yet also worse quality. I reluctantly resubscribed to Cursor.


Are we close to having generic semaglutide available, e.g. in India? Or are we locked into high prices for the foreseeable future?


Generic semaglutide is already produced on a massive scale throughout the world. However, it is unlawful to import and sell in the USA, and will remain so until 2032.

In other markets where it is still under patent, it is significantly cheaper than the $500/month or more currently charged in the US. For example, in the UK it is roughly $150/month USD privately (i.e. not through the NHS).

In China it will be out of patent within two years.


There's a whole little online subculture of people in the US importing the precursors and making it themselves at home for dirt-cheap.

I gather it's extremely easy and basically fool-proof, as far as producing the desired drug and not producing some other, undesired drug. Much easier than, say, home-brewing beer. The risk is all in contamination, which presents a vector for infection.

[EDIT] I don't mean to downplay the risks or suggest people go do this, only to highlight that there's enough demand for this that we're well into "life, uh, finds a way" territory, and also just how lucky (assuming these hold up as no-brainers to take for a large proportion of the population) we are that these things are so incredibly cheap and simple to make, if you take the patents out of the picture.


Not just the current generation of drugs; they also import and use the next generation that is still in clinical trials and won't be on the market for at least a year. I had it recommended to me online in a very casual way, as if it were a supplement. The risk with this is not just contamination, but also that if you get side effects there's no recourse to sue, because you bought it from a chemical factory in China. The new generation of GLP peptides is similar to the old one, but can still have unintended side effects, as they work on three receptors rather than the two that the current generation does.


It's not the precursors; it's the freeze-dried powder form of the drug plus an excipient, generally mannitol.

You just reconstitute it with BAC water and inject it.


IIRC, that pricing will change in the US, as Trump will require that the price of drugs to Medicaid patients match or be less than that in any other developed nation.

Since about 1/4 of the people in the US are on Medicaid (close to 90 million people), the drug manufacturers will probably raise the price for everyone else in the US, because they've got to get their profits somehow...

https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-pr...


Unfortunately, per the link, it sounds like a voluntary arrangement. Essentially they're asking drug companies nicely to stop ripping off Americans.

If they were serious about this, they would introduce legislation rather than send strongly worded letters to pharma companies.


That would be pretty unusual - Congress has shown an exceptionally strong bias toward supporting medical industry profits over health care through the decades.

I wonder if the bribery (campaign donations) has anything to do with it?


It has always been available


Which of these statements do you disagree with?

- Superintelligence poses an existential threat to humanity

- Predicting the future is famously difficult

- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.

Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.


You could use the exact same argument to argue the opposite. Simply change the first premise to "Superintelligence is the only thing that can save humanity from certain extinction." Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.

So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.


That's not how logic works. The GP is applying the precautionary principle: when there's even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.

Changing the premise to "superintelligence is the only thing that can save us" doesn't invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way; the real question is which scenario is more likely, not whether the risk-based logic is flawed.

Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US administration is an example of us currently doing the contrary.


Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.

Also, @tsimionescu's reasoning is spot on, and exactly how logic works.


It literally isn't; changing/reversing a premise without addressing the point that was made is not a valid way to counter the initial argument logically.

Just like your proposition that any "small" chance justifies investing "everything" disregards the same argument regarding the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering this argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.


Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger for example than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).

I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.


The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.

ASI is to humans as humans are to rats or ants.


Isn't the question you're posing basically Pascal's wager?

I think the chance they're going to create a "superintelligence" is extremely small. That said I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades or even ever.

> Predicting the future is famously difficult

That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"

We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.


> I think the chance they're going to create a "superintelligence" is extremely small.

I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that the human brain somehow achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.

Most forecasters on prediction markets are predicting AGI within a decade.


Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?

> that progress on AI will just stop for some reason

Yeah it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.

Is it certain that every mystery will be solved eventually?


Aren't we already past 1/1000th of the performance we would classify as superintelligence?

There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.

We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.

I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".


I agree, the 1/1000 ratio was a bit too extreme. Like you said, almost any way it's measured, it's probably fair to say ChatGPT is already there.


Yes, this is literally Pascal's wager / Pascal's mugging.


> Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.


Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.


>And we have no way of knowing the true chance of superintelligence arising from the current path of AI research

What makes people think that future advances in AI will continue to be linear instead of falling off and plateauing? Don't all breakthrough technologies develop quickly at the start and then fall off in improvements once all the 'easy' improvements have already been made? In my opinion, AI and AGI are like the car and the flying car. People saw continuous improvements in cars and thought this rate of progress would continue indefinitely, leading to cars that could not only drive but fly as well.


We already have flying cars. They’re called airplanes and helicopters. Those are limited by the laws of physics, so we don’t have antigravity flying vehicles.

In the case of AGI we already know it is physically possible.


There are lots of data points of previous AI efforts not creating superintelligence.


You bring up the example of an extinction-level asteroid hurtling toward Earth. Gee, I wonder if this superintelligence you're deathly afraid of could help with that?

This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.

How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?

If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.


Since the internet's inception there have been a few wrong turns taken by the wrong people (and lizards, ofc) behind the wheel, leading to the sub-optimal, enshittified(tm) experience we have today. I think GP just doesn't want to live through that again.


You mean right turns. The situation that we have today is the one that gets most rewarded. A right move is defined as one that gets rewarded.


I think of the invention of ASI as introducing a new artificial life form.

The new life form will be to humans as humans are to chimps, or rats, or ants.

At this point we have lost control of the situation (the planet). We are no longer at the top of the food chain. Fingers crossed it all goes well.

It's an existential gamble. Is the gamble worth taking? No one knows.


> Superintelligence poses an existential threat to humanity

I disagree at least on this one. I don't see any scenario where superintelligence comes into existence, but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine-intelligence would settle there. It's a vanishingly low chance event. It considerably changes the later 1-in-n part of your comment.


So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?


More fundamental than that, I assume that a superintelligence on that level wouldn't have resource-contention with humans at all.


> There are almost no statements about the future which I'd assign this level of confidence to.

You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.


Is this necessarily Linux for dependency reasons, or could it be on macOS in the future?


Yes, I think it's possible to support macOS. However, the main challenge isn't the operating system itself but rather the architecture.


Also copyright duration when Star Wars was created was a maximum of 56 years, and obviously George Lucas felt that was sufficient incentive to create it!


this doesn't seem to provide much of a justification for an extraordinary claim like "the widely accepted number for the age of the universe is wrong by 4 billion years"?


officer she told me she was 4 billion years old


Kind of surprised that this book could be published by O'Reilly and also freely available online? Seems unusually generous.


Possibly a sign of confidence. After browsing this for a few minutes, I'm very convinced of its quality and will probably buy it.

Wouldn't have happened with a book with just sample pages.


Why buy it if it's completely free, as your post implies?


Because I have written a book and thus know how much work it is to write even a mediocre one.

Also as a way to increase my motivation to read it.

Plus I have money. This book costs about as much as a good bottle of wine or a bad bottle of whiskey.


> Plus I have money. This book costs about as much as a good bottle of wine or a bad bottle of whiskey.

Exactly.

A few years ago I did a really aggressive weeding out of my bookshelves as things were getting far too cluttered. In the process I threw out what must have been - at cover price - several thousand pounds worth of IT related books.

On the resale market they were all too stale to have any value (though I did manage to give a handful away to friends). In one way it was a bit painful, but those few thousand pounds worth of books have given me a huge (financial) return on that investment!

Cheap at the cost of a good bottle of wine ... for the foundations of a career!


> a good bottle of wine or a bad bottle of whiskey.

I don't enjoy either, but I have friends who decided to specialise, so I'm confident you can easily reverse this split if you've decided you care more about one or the other.


To support the author. And as a way of saying thank you.


The last 2 books I've bought (OSTEP and nand2tetris) are available online. Hard copies are nice, and personally, seeing them on my desk gives me more motivation to finish them.


Because we all know what happens if we're not the customer.

I have this; I bought it because I want to reward the author for producing a quality work, and because I want to encourage the publishers to produce other works that would appeal to me.

I also happen to like physical texts, so I bought the paperback, but I have both it and the digital edition. The latter is convenient for when I'm travelling and is appropriately formatted for an eReader (not just the raw HTML from these pages).


Because the people want to show appreciation for the good work the author has done?


True for digital copies; I've never yet bought one of those.

I have no trouble paying for physical books though.


The book isn't free; its contents are published online by the author. Yes, nitpicking. But (1) I like a well-formatted epub and (2) the author/publisher still hold copyright.


I want to read on Kindle or own the book.


hm, wouldn't you almost by definition think you were doing a good job of flagging them at any level of actual effectiveness?


Not if I were seeing a bunch of them get through.

