Hacker News | anon373839's comments

Exactly. Apple operates at a scale where it's very difficult to deploy this technology for its sexy applications. The tech is simply too broken and flawed at this point. (Whatever Apple does deploy, you can bet it will be heavily guardrailed.) With ~2.5 billion devices in active use, they can't take the Tesla approach of letting AI drive cars into fire trucks.

This is so obvious I'm kind of surprised the author used to be a software engineer at Google (based on his LinkedIn).

OpenClaw is very much a greenfield idea, and there are plenty of startups like Raycast working in this area.


Being good at leetcode grinding isn’t the same as being a good product person.

iOS 26 is proof that many product managers at Apple need to find another calling. The usability enshittification in that release is severe and embarrassing.

Or maybe, while being as good as they are at their jobs, they were forced to follow a broken vision with a non-negotiable release date.

And simply chose to keep their jobs.


Which also suggests that they need a new calling

shots fired!

Ouch. You could have taken a statistical approach: "Google is not known for high-quality product development, and therefore likely does not select candidates for qualities in the product-development domain." I'm talking too much to Gemini, aren't I?

I'm not that surprised because of how pervasive the 'move fast and break things' culture is in Silicon Valley, and what is essentially AI accelerationism. You see this reflected all over HN as well, e.g. when Cloudflare goes down and it's a good thing because it gives you a break from the screen. Who cares that it broke? That's just how it is.

This is just not how software engineering goes in many other places, particularly where the stakes are much higher and can be life altering, if not threatening.


It is obvious if viewed through an Apple lens. It wouldn't be so obvious if viewed through a Google lens. Google doesn't hesitate to throw whatever it's got out there to see what sticks, quickly cancelling anything that doesn't work out, even if some users come to love the offering.

Regardless of how Apple will solve this, please just solve it. Siri is borderline useless these days.

> Will it rain today? "Please unlock your iPhone for that."

> Any new messages from Chris? "You will need to unlock your iPhone for that."

> Please play YouTube Music. "Playing YouTube Music... please open the YouTube Music app to do that."

All settings and permissions granted. Utterly painful.


"You'll need to unlock your iPhone first." Even though you're staring at the screen and just asked me to do something, and you saw the unlocked icon at the top of your screen before/while triggering me, please continue staring at this message for at least 5 seconds before I actually attempt Face ID to unlock your phone and do what you asked.

I think half your examples are made up, or not Apple's fault, but it sounds like what you really want is to disable your passcode.

I LOVE the "complaining about apple ux? no way, YOU'RE the problem / you're doing it wrong / you must not be a mac person".

Thanks for keeping this evergreen trope going strong!


Well, if you're making complaints that aren't true, or asking for functionality that already exists, your complaints don't seem very credible to me.

"Will it rain today? Sorry, I can't do that while you're driving."

Do you want people to be able to command your phone without unlocking it? Maybe what you want is to disable phone locking altogether.

I want a voice control experience that is functional. I don't want an ever-shrinking range of capability, circumscribed by every bad thing that could happen (especially things that would only happen if I were careless to begin with), often justified by contrived examples and/or guarding against things much more easily accomplished through other methods.

That would be very useful but is not a trivial problem.

Oh no, what if they put on a Christmas music playlist in February? The horror!

There should be something between "don't allow anything without unlocking the phone first" and "leave the phone unlocked for anyone to access", like "allow certain voice commands for anyone, even with the phone locked".


Playing music doesn't require unlocking, though; at least not from the Music app. If YouTube requires an unlock, that's actually a setting YouTube sets in its SiriKit configuration.

For reading messages, IIRC it depends on whether you have text notification previews enabled on the lock screen (they don't document this anywhere that I can see). The logic is that if you block people from seeing your texts from the lock screen without unlocking your device, Siri should be blocked from reading them too.

Edit: Nope, you’re right. I just enabled notification previews for Messages on the lock screen and Siri still requires an unlock. That’s a bug. One of many, many, many Siri bugs that just sort of pile up over time.
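For what it's worth, the lock-screen gating for third-party Siri intents lives in the app's Intents extension Info.plist: the `IntentsRestrictedWhileLocked` key lists intent classes that only work on an unlocked device. A sketch of what a media app's configuration might look like (the key names and extension point are real SiriKit plumbing; whether YouTube Music actually lists `INPlayMediaIntent` there is an assumption):

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <!-- Intents this extension handles -->
        <key>IntentsSupported</key>
        <array>
            <string>INPlayMediaIntent</string>
        </array>
        <!-- Intents that require the device to be unlocked.
             Listing INPlayMediaIntent here is what would produce the
             "please unlock your iPhone" behavior for a play request. -->
        <key>IntentsRestrictedWhileLocked</key>
        <array>
            <string>INPlayMediaIntent</string>
        </array>
    </dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.intents-service</string>
</dict>
```

Leaving an intent out of `IntentsRestrictedWhileLocked` is how first-party-style "works while locked" behavior is opted into, which is why the choice sits with the app developer rather than Apple.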


Can it not recognize my voice? I had to record the pronunciation of 100 words when I set up my new iPhone. Isn't there a voice signature pattern that could be the key to unlock?

It certainly should have been a feature up until now. However, I think at this point anyone can clone your voice and bypass it.

But as a user, I want to be able to give it permission to run selected commands even with the phone locked. I don't care if someone searches Google for something or plays a song via Spotify. If I don't hide notifications when locked, what does it matter that someone who has my phone reads or listens to them?


Personal Voice learns to synthesize your voice, not to identify it.

Probably need VoiceID so only authorized people can talk to it.

Not really. Giving the weather forecast or playing music seems pretty low risk to me.

Siri doesn't make me unlock the phone to give a weather report.

Right, but you understand why allowing access to an unauthenticated voice is bad for security, right?

But you understand why if I don't care about that, I should be able to run it, right?

You can: you can turn locking off.

But the point is, you are a power user who has some understanding of the risk. You know that if your phone is stolen and it has any cards stored on it, they can easily be transferred to another phone and drained, because your bank will send a confirmation code, the transfer will still be authorized, and you will be held liable for that fraud.

The "man in the street" does not know that, and needs some level of decent safe defaults to avoid such fraud.


I understand why you'd want to do it.

Oddly enough, I also understand Apple telling you: good luck, find someone's platform that will allow that, because that's not us.


Re: YouTube Music, I just tried it on my phone and it worked fine... maaaybe because you're not a YouTube Premium subscriber and Google wants to shove ads into your sweet, sweet eyeballs?

The one that kind of caught me off guard was asking "Hey Siri, how long will it take me to get home?" => "You'll need to unlock your iPhone for that, but I don't recommend doing that while driving..." The logic makes sense: if you left your phone unattended at a bar, someone could figure out your home address without an unlock.

...I'm kind of with you; maybe, similar to AirTags and "Trusted Locations", there could be a middle ground of "don't worry about exposing rough geolocation or summary PII". At home, or in your car (connected to a known CarPlay), a kind of in-between "Geo-Unlock"?


I pay for YouTube Music and I see really inconsistent behavior when asking Siri to play music. My five-year-old kid is really into an AI slop song that claims to be from the KPop Demon Hunters 2 soundtrack, called Bloodline (can we talk about how YT Music is full of trashy rip-off songs?). He's been asking to listen to it every day this week in the car, and prior to this morning, saying "listen to KPop Demon Hunters Bloodline" would work fine, playing it via YT Music. This morning, I tried every variation of that request I could think of, and I was never able to get it to play. Sometimes I'd get the response that I had to open YT Music to continue, and other times it would say it was playing, but it would never actually queue it up. This is a pretty regular issue for me. I'm not sure if the problem is with Siri or YT Music.

It's true that open models are a half-step behind the frontier, but I can't say that I've seen "sheer intelligence" from the models you mentioned. Just a couple of days ago Gemini 3 Pro was happily writing naive graph traversal code without any cycle detection or safety measures. If nothing else, I would have thought these models could nail basic algorithms by now?
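For context, guarding a traversal against cycles takes only a visited set; a minimal Python sketch of the kind of safety measure the generated code lacked (the graph shape and function name here are illustrative, not from the original prompt):

```python
def reachable(graph, start):
    """Return all nodes reachable from `start`, safe on cyclic graphs.

    `graph` maps each node to a list of neighbor nodes.
    """
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue  # cycle or diamond: this node was already expanded
        visited.add(node)
        stack.extend(graph.get(node, []))
    return visited

# A graph with a cycle (a -> b -> c -> a) terminates fine:
cyclic = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(reachable(cyclic, "a"))  # {'a', 'b', 'c'}
```

Without the `visited` check, the same loop never terminates on that three-node cycle, which is exactly the failure mode being complained about.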

Did it have reason to assume the graph to be a certain type, such as directed or acyclic?

Yeah. Q2 in any model is just severely damaged, unfortunately. Wish it weren’t so.


The numbers you stated sound off ($500k capex plus electricity per 3 concurrent requests?), especially now that the frontier has moved to ultra-sparse MoE architectures. I've also read a couple of commodity inference providers claiming that their unit economics are profitable.

You're delusional. I didn't even include the labor to install and run the damn thing. More than $500k.

Much of these gains can be attributed to better tooling and harnesses around the models. Yes, the models also had to be retrained to work with the new tooling, but that doesn’t mean there was a step change in their general “intelligence” or capabilities. And sure enough, I’m seeing the same old flaws as always: frontier models fabricating info not present in the context, having blindness to what is present, getting into loops, failing to follow simple instructions…

> Much of these gains can be attributed to better tooling and harnesses around the models.

This isn't the case.

Take Claude Code and use it with Haiku, Sonnet and Opus. There's a huge difference in the capabilities of the models.

> And sure enough, I’m seeing the same old flaws as always: frontier models fabricating info not present in the context, having blindness to what is present, getting into loops, failing to follow simple instructions…

I don't know what frontier models you are using, but Opus and Codex 5.2 don't ever do these things for me.


> But then I decided I'm just a chemical reaction

That doesn’t address the practical significance of privacy, though. The real risk isn’t that OpenAI employees will read your chats for personal amusement. The risk is that OpenAI will exploit the secrets you’ve entrusted to them, to manipulate you, or to enable others to manipulate you.

The more information an unscrupulous actor has about you, the more damage they can do.


I have seen ~1,300 tokens/sec of total throughput with Llama 3 8B on a MacBook Pro. So no, you don't halve the performance. But running batched inference takes more memory, so you have to use shorter contexts than if you weren't batching.
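A rough back-of-the-envelope shows why batching trades context length for memory. Assuming Llama 3 8B's published architecture (32 layers, 8 KV heads via GQA, head dim 128) and an fp16 KV cache; the batch and context sizes below are illustrative, not from the comment:

```python
layers, kv_heads, head_dim = 32, 8, 128  # Llama 3 8B (GQA) per its model card
bytes_per_elem = 2                       # fp16

# KV cache cost per token: keys + values, across all layers
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(per_token / 1024)   # 128.0 -> 128 KiB of cache per token

# A batch of 8 requests at an 8192-token context each:
total = per_token * 8 * 8192
print(total / 2**30)      # 8.0 -> 8 GiB of KV cache alone
```

So on a fixed-memory machine like a MacBook, each extra concurrent request in the batch directly eats into the maximum context you can afford per request.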

I think this is 100% in your mind. The article does not in any way read to me as having AI-generated prose.


You can call me crazy or you can attack my points: do you think the first example logically follows? Do you think the second isn't wordy? Just to make sure I'm not insane, I copy-pasted the article into Pangram, and lo and behold: 70% AI-generated.

But I don't need a tool to tell me that it's just bad writing, plain and simple.


You are gaslighting. I 100% believe this article was AI-generated, for the same reason as the OP. And yes, they do deserve negative scrutiny for trying to pass off such a lack of human effort in a place like HN!


Either this article was written by AI or someone deliberately trying to sound like AI.


> protect their investment

Viewed another way, the preferential pricing they're giving to Claude Code (and only Claude Code) is anticompetitive behavior that may be illegal.


This is a misunderstanding of the regulations.

They’re not obligated to give other companies access to their services at a discounted rate.


They may, however, be obligated not to give customers access to their services at a discounted rate either; predatory pricing is, at least some of the time and in some jurisdictions, illegal.


Predatory pricing? They have a public API that anyone can use for a public rate. There is no predatory pricing here.

The Claude Code endpoint is a private API. They’re free to control usage of their private API.


Predatory pricing is selling something below cost to acquire/maintain market dominance.

The Claude subscription used for Claude Code is, to all appearances, being sold substantially below the cost to run it, and it certainly seems this is being done to maintain Claude Code's market dominance and force out competitors, such as OpenCode, who cannot afford to subsidize LLM inference in the same way.

It's not a matter of there being a public API; I don't believe they are obligated to offer one at all. It's a matter of the Claude subscription being priced fairly, so that OpenCode (on top of, say, Gemini) can be competitive.


> Predatory pricing is selling something below cost to acquire/maintain market dominance.

Yet they have to acquire market dominance in a meaningful market first if you want to prosecute; otherwise it's just a failed business strategy, like that company selling movie tickets below cost.


The modern consumer benefit doctrine means predatory pricing is impossible to prosecute in 99% of cases. I’m not saying it’s right, but legally it is toothless.


This is true... in the US (though there is still that 1%). Anthropic operates globally and the US isn't the only country who ever realized it might be an issue.


Claude Code is so successful that they could shut down the API to protect the moat.

I'm surprised they didn't go with the option of offering Opus 4.6 to Claude Code only.


The API is really expensive compared to a Max subscription! So they're probably making a lot of money (or at least losing much less) via the API. I don't think it's going anywhere. Worst case scenario they could raise the price even more.


That’s what OpenAI is doing with GPT-5.2-Codex


What makes it predatory?


The Claude subscription (i.e. the Pro and Max plans, not the API) is sold at what appears to be well below cost, in what looks like a blatant attempt to preserve/create market dominance for Claude Code, destroying competitors by making it impossible to compete without a war chest of money to give away.


You’re making a big assumption. LLM providers aren’t necessarily taking a loss on the marginal cost of inference. It’s when you include R&D and training costs that it requires the capital inputs. They’ve come out and said as much.

The Claude Code plans may not be operating at a loss either. Most people don't use up 100% of their plan; a lot of it goes idle.


If you check the actual token consumption and compare it to the API, you will find a factor of 10.

Training models costs tens of millions; their revenues from subscriptions + API are well above hundreds of millions.

If you look at MiniMax's IPO data, you can see that they spent 3x their revenue on "cloud bills".

So yes, it's probable that they do subsidize inference through subscriptions in order to capture the market.

No inference provider is profitable, and most run on VC money to serve customers.


Now dig into Copilot Plus and how they price the premium requests. The math is not mathing; they are aggressively trying to capture the market.


Are you suggesting Anthropic has a “duty to deal” with anyone who is trying to build competitive products to Claude Code, beyond access to their priced API? I don’t think so. Especially not to a product that’s been breaking ToS.


No, but I think they should. Or that anti-trust were enforced through some other means. Or at all, really.

Citing the ToS is circular logic. They set the terms and can change them whenever they want!


A regulatory duty to deal is the opposite of setting your own terms. Yes, citing a ToS is acceptable in this scenario. We can throw ToS out if we all believed in duty to deal.


Do other companies have a similar "duty to deal"? For example, if Microsoft or Apple ToS forbade the use of open source software with their software? Or if the VS Code ToS forbade people from using VS Code to work on a competitor?

