CharlesW's comments | Hacker News

That's been my experience as well. I like the idea of Beads, but it's fallen apart for me after a couple weeks of moderate use on two different projects now. Luckily, it's easy to migrate back to plain ol' Markdown files, which work just as well and have never failed me.


Home Depot does support Apple Pay and other standard forms of tap-to-pay now. (It’s possible that some areas are still in transition.)

Must have been within the past year. I forgot my wallet on a trip there and was surprised that I couldn't pay with my phone at their self-checkout.

Yes, recent for my local stores — Q3 or very early Q4.

> Not a single demonstration of contrast?

The nano-texture has less contrast.

"The nano-texture adds a filter-like appearance, resulting in a lower contrast ratio than the glossy panel. That said, there are differing opinions about the subjective appearance of the raised blacks. Some say it's a dealbreaker, while others prefer it, arguing that it looks more like what you would see on paper. The glossy panel produces a deeper, more Google Pixel HDR type of contrast that some find unnatural." — https://www.rtings.com/laptop/learn/apple-nano-texture


I like to use MIST (macOS Installer Super Tool) to grab old macOS versions: https://github.com/ninxsoft/Mist

Apple also provides instructions for downloading many older macOS versions via your terminal: https://support.apple.com/en-us/102662#terminal


Nice, thanks for sharing! It'd be interesting to integrate MIST into lume's ipsw command - right now Apple's native features in Apple Vz only provide download links for the latest version supported by the host, so grabbing older versions requires workarounds like this.

"I don't like the style of Apple's icons. It's like they're not even trying!"

Meanwhile, at Adobe: https://www.adobe.com/products/catalog.html


> Back in the day, light mode wasn’t called “light mode”. It was just the way that computers were…

Things people born after Macintosh say.


True. This is how my first computer looked [1]

I didn’t call it dark mode though!

[1] https://upload.wikimedia.org/wikipedia/commons/0/0e/Hard_res...


I spent an enormous amount of time in DOS, color 7 on color 0, gray on black. The IDE everybody used was aggressively gray on blue [1].

Which is #AAAAAA on #000000 to you kids with your fancy Super VGA monitors with megabytes of video memory and 24-bit color.

This white background stuff is the invention of Microsoft or somebody's marketing department, who decided people would be less afraid of computers if they made the screen look like a piece of paper.

Back in my day we only used 16 colors at a time [2], because you had to quarter the resolution if you wanted more than that, because of course video memory has to fit in a 64k segment -- why would anyone even want to go bigger, wouldn't that consume way too much conventional memory? And if you did decide you wanted to use 8-bit mode, if you wanted square pixels you had to read Michael Abrash's book and do terrible black magic involving directly programming VGA registers and bank-switched bit-planes.

If you don't know what any of that means, it means you kids've got it way too easy these days and don't even know it. The real programmers who knew all this stuff and made brilliant masterpieces like Master of Magic and the original X-COM were scattered to the winds when the original Microprose folded. Now get off my lawn.

[1] https://en.wikipedia.org/wiki/QBasic#/media/File:QBasic_Open...

[2] https://en.wikipedia.org/wiki/Color_Graphics_Adapter#With_an...
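P.S. for the kids: "color 7 on color 0" falls straight out of the 16-color RGBI scheme. Each R/G/B bit contributes 0xAA to its channel, the intensity bit adds 0x55 to all three, and color 6 is special-cased to brown by the hardware. A minimal Python sketch of that derivation (my own illustration, not lifted from any period documentation):

  def rgbi_color(index):
      """Derive the classic 16-color CGA/EGA text-palette entry for 0-15."""
      i = (index >> 3) & 1                                    # intensity bit
      bits = [(index >> 2) & 1, (index >> 1) & 1, index & 1]  # R, G, B
      rgb = tuple(0xAA * bit + 0x55 * i for bit in bits)
      if index == 6:                     # hardware quirk: dark yellow -> brown
          rgb = (0xAA, 0x55, 0x00)
      return "#%02X%02X%02X" % rgb

  assert rgbi_color(7) == "#AAAAAA"  # "color 7": light gray foreground
  assert rgbi_color(0) == "#000000"  # "color 0": black background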


>> Back in the day, light mode wasn’t called “light mode”. It was just the way that computers were…

>Things people born after Macintosh say.

Things people born after teleprompters say.


Yep, the first OS I (the author) used was the delightful beige of Windows 98.

Exactamundo.

> I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.

Setting aside the obligatory HN dig at the end, LLMs are now commodities and the least important component of the intelligence system Apple is building. The hidden-in-plain-sight thing Apple is doing is exposing all app data as context and all app capabilities as skills. (See App Intents, Core Spotlight, Siri Shortcuts, etc.)

Anyone familiar with Apple's rabid aversion to being bound by a single supplier understands that they've tested this integration with all the leading foundation models, that they can swap Google out for another vendor at any time, and that they have a long-term plan to eliminate this dependency as well.

> Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own.

I'd be interested in a citation for this (Apple introduced two multilingual, multimodal foundation language models in 2025), but in any case anything you hear from Apple publicly is what they want you to think for the next few quarters, vs. an indicator of what their actual 5-, 10-, and 20-year plans are.


My guess is that this is bigger lock-in than it might seem on paper.

Google and Apple together will post-train Gemini to Apple's specification. Google has the know-how as well as the infra, and will happily do this (for free-ish) to continue the mutually beneficial relationship - as well as to lock out competitors that asked for more money (Anthropic).

Once this goes live, provided Siri improves meaningfully, it is quite an expensive experiment to then switch to a different provider.

For any single user, the switching costs to a different LLM are next to nothing. But at Apple's scale, they need to be extremely careful and confident that the switch is an actual improvement.


It's a very low baseline with Siri, so almost anything would be an improvement.

The point is that once Siri is switched to a Gemini-based model, the baseline presumably won't be low anymore.

I’m not so sure. Just think about coding assistants with MCP-based tools. I can use multiple different models in GitHub Copilot and get good results with similarly capable models.

Siri’s functionality and OS integration could be exposed in a similar, industry-standard way via tools provided to the model.

Then any other model can be swapped in quite easily. Of course, they may still want to do fine tuning, quantization, performance optimization for Apple’s hardware, etc.

But I don’t see why the actual software integration part needs to be difficult.
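To make that concrete, here's a hedged sketch in Python (play_album, call_model, os_play_album, and fake_model are all hypothetical names, not real Apple or Google APIs): the OS capabilities get declared once as schema-described tools, and the model behind them is just an interchangeable parameter.

  # Hypothetical sketch: OS capabilities exposed as model-agnostic tools.
  TOOLS = [{
      "name": "play_album",
      "description": "Play an album from the user's music library.",
      "parameters": {
          "type": "object",
          "properties": {"album": {"type": "string"}},
          "required": ["album"],
      },
  }]

  def os_play_album(album):
      # Stand-in for the real OS capability.
      return "playing %s" % album

  def handle_request(utterance, call_model):
      """call_model can be any backend (Gemini, an in-house model, ...)
      that accepts (utterance, tools) and returns a requested tool call."""
      tool_call = call_model(utterance, TOOLS)
      if tool_call["name"] == "play_album":
          return os_play_album(tool_call["arguments"]["album"])
      raise ValueError("unknown tool: %s" % tool_call["name"])

  def fake_model(utterance, tools):
      # A stand-in model that always picks the first tool.
      return {"name": "play_album", "arguments": {"album": utterance}}

  print(handle_request("Body Talk", fake_model))  # "playing Body Talk"

Swapping Gemini for anything else then means passing a different call_model; the tool layer doesn't change.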


> But I don’t see why the actual software integration part needs to be difficult.

That’s not the issue. The issue is that once Gemini is in place as the intelligence behind Siri, the bar is now much higher than today and so you have to be more careful if you consider replacing Gemini, because you’re as likely as not to make Siri worse. Maybe more likely to make it worse.


Oh well that’s a good problem to have, isn’t it? Siri being so good that they don’t want to mess it up.

That gives them plenty of runway to test and optimize new models internally before release and not feel like they need to rush them out because Siri sucks.


Doubt it. Of all the issues I run into with Siri none could be solved by throwing AI slop at it. Case in point: if I ask Siri to play an album and it can't match the album name it just plays some random shit instead of erroring out.

Um, if I ask an LLM about a fake band, it literally says "I couldn't find any songs by that band; did you type it correctly?", and it's about a million times more likely to guess correctly. Why do you say it doesn't solve loads of things? I'm more concerned about the problems it creates (prompt injection, hallucinations in important work, bad logic in code); the actual functionality will be fantastic compared to Siri right now!

  Why do you say it doesn't solve loads of things? 
Because I'm sitting here twiddling my thumbs waiting for random pages to go through their anti-LLM bot crap. LLMs create more problems than they solve.

  Um, if I ask an LLM about a fake band, it literally says "I couldn't find
  any songs by that band; did you type it correctly?", and it's about a
  million times more likely to guess correctly
Um, if Apple wrote proper error handling in the first place, the issue would be solved without LLM baggage. Apple made a conscious decision to handle "unknown" artists this way; LLMs don't change that.
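To make "proper error handling" concrete: a fuzzy match with a cutoff that fails loudly when nothing clears it gets you most of the way, no LLM required. A minimal Python sketch using the standard library's difflib (the album list and the 0.6 cutoff are illustrative, not Apple's actual matcher):

  import difflib

  LIBRARY = ["Robyn", "Body Talk", "Honey"]

  def find_album(query, cutoff=0.6):
      """Return the closest album title, or None when nothing clears the
      similarity cutoff - instead of silently playing something random."""
      matches = difflib.get_close_matches(query, LIBRARY, n=1, cutoff=cutoff)
      return matches[0] if matches else None

  album = find_album("Body Tlak")  # a typo still matches "Body Talk"
  if album is None:
      print("Sorry, I couldn't find that album.")  # error out, don't guess
  else:
      print("Playing", album)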

Ollama! Why didn't they just run Ollama and a public model? They've spent the last 10 years with a Siri that doesn't know any contact named Chronometer, only to now require the best-in-class LLM?

The other day I was trying to navigate to a Costco in my car. So I opened Google Maps on Android Auto on the screen in my car and pressed the search box. My car won't allow me to type, even while parked... so I have to speak to the Google voice assistant.

I was in the map search, so I just said "Costco" and it said "I can't help with that right now, please try again later" or something of the sort. I tried a couple more times until I switched to saying "Navigate me to Costco", at which point it finally did the search in the text box and found it for me.

Obviously this isn't the same thing as Gemini but the experience with Android Auto becomes more and more garbage as time passes and I'm concerned that now we're going to have 2 google product voice assistants.

Also, tbh, Gemini was great a month ago but since then it's become total garbage. Maybe it passes benchmarks or whatever but interacting with it is awful. It takes more time to interact with than to just do stuff yourself at this point.

I tried Google Maps AI last night and, wow. The experience was about as garbage as you can imagine.


Siri on my Apple Home will default to turning off all the lights in the kitchen if it misunderstands anything. Much hilarity ensues

Would be worse if it turned off your car.

Share and Enjoy

Same issue with Apple CarPlay: "Hey Siri, please play 'Call Your Girlfriend' on Spotify by (artist)."

"Sorry, I don't know anyone called 'your girlfriend'." The kids find it hilarious.


I have been getting great results with Gemini 3 Deep Think, though I am not using it as my personal assistant.

I'm genuinely curious about this too. If you really only need the language and common sense parts of an LLM -- not deep factual knowledge of every technical and cultural domain -- then aren't the public models great? Just exactly what you need? Nobody's using Siri for coding.

Are there licensing issues regarding commercial use at scale or something?


Pure speculation, but I’d guess that an arrangement with Google comes with all sorts of ancillary support that will help things go smoothly: managed fine tuning/post-training, access to updated models as they become available, safety/content-related guarantees, reliability/availability terms so the whole thing doesn’t fall flat on launch day etc.

Probably repeatability and privacy guarantees around infrastructure and training, too. Google already has very defined splits between its Gemma and in-house models, with engineers and researchers rarely communicating directly.

> Why didn’t they just run Ollama and a public model

Same reason they switched to Intel chips in the 2000s. They were better. Then Cupertino watched. And it learned. And it leapfrogged.

If I were Google, my fear would be Apple launching and then cutting the line at TSMC to mass produce custom silicon in the 2030s.


> provided Siri improves meaningfully

Not a high bar…

That said, Apple is likely to end up training their own model, sooner or later. They are already in the process of building out a bunch of data centers, and I think they have even designed in-house servers.

Remember when iPhone maps were Google Maps? Apple Maps has been steadily improving, to the point that it is as good as, if not better than, Google Maps in many areas, like around here: I recently had a friend send me a GM link to a destination, and the phone used GM for directions. It was much worse than Apple Maps. After a few wrong turns, I pulled over, fed the destination into Apple Maps, and completed the journey.


> what their actual 5-, 10-, and 20-year plans are

Seems like they are waiting for the "slope of enlightenment" on the Gartner hype curve to flatten out. Given that you can just lease or buy a SOTA model from the leading vendors, there's no advantage to training your own right now. My guess is that the LLM/AI landscape will look entirely different by 2030, and any 5-year plan won't be in the same zip code, let alone playing field. Leasing an LLM from Google with a support contract seems like a pretty smart short-term play as things continue to evolve over the next 2-3 years.


This is the key. The real issue is that you don't need superhuman intelligence in a phone AI assistant; you don't need it most of the time, in fact. Current SOTA models do a decent job of approximating college-grad-level human intelligence, let's say 85% of the time, which is helpful and cool but clearly could be better.

But the pace at which the models are getting smarter is accelerating, AND they are getting more energy- and memory-efficient. So if something like DeepSeek is roughly 2 years behind the SOTA models from Google and the other leading labs, then in 2030 you can expect 2028-level performance out of open models. There will come a time when a model capable of college-grad-level intelligence 99.999% of the time will be able to run on a $300 device.

If you are Apple, you do not need to lead the charge on a SOTA model; you can just wait until one is available for much cheaper. Your product is the devices and services consumers buy. If you are OpenAI, you have no other products. You must either become THE AI to have in an industry that will in the next few years be dominated by open models that are good enough, close up shop, or come up with another product that has more of a moat.

"pace at which the models are getting smart is accelerating". The pace is decelerating.

My impression is that solar (and maybe wind?) energy has benefited from learning-by-doing [1][2] that has resulted in lower costs and/or improved performance each year. It seems reasonable to me that a similar process will apply to AI (at least in the long run). The rate of learning could be seen as a "pace" of improvement. I'm curious: do you have a reference for the deceleration of pace that you refer to?

[1] https://emp.lbl.gov/news/new-study-refocuses-learning-curve

[2] https://ourworldindata.org/grapher/solar-pv-prices-vs-cumula...


Why would the curve of solar prices be in any way correlated with the curve of AI improvements?

The deceleration of pace is visible to anyone capable of using Google.


u/ipaddr is probably referring to:

  1) the dearth of new (novel) training data, hence the mad scramble to hoover up, buy, or steal any potentially plausible new sources, and

  2) the diminishing returns from embiggening compute clusters for training LLMs and the size of their foundation models.

(As you know) You're referring to Wright's Law, aka the experience/learning curve.

So there's a tension.

Some concerns that we're nearing the ceiling for training.

While the cost of applications using foundation models (implementing inference engines) is decreasing.

Someone smarter than me will have to provide the slopes of the (misc) learning curves.


I was not aware of (or had forgotten) the term "Wright's law" [1], but that indeed is what I was thinking of. It looks like some may use the term "learning curve" to refer to the same idea (efficiency gains that follow investment); the Wikipedia page on "Learning curve" [2] includes references to Wright.

[1] https://en.wikipedia.org/wiki/Experience_curve_effect

[2] https://en.wikipedia.org/wiki/Learning_curve
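And for the slopes the parent asked for: Wright's law says unit cost falls by a fixed fraction with every doubling of cumulative production, i.e. cost(n) = cost(1) * n^b with b = log2(1 - learning_rate). A quick back-of-the-envelope in Python (the 20% learning rate is a made-up illustration, not a measured figure for AI):

  import math

  def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
      """Cost of the nth unit under Wright's law: each doubling of
      cumulative production cuts unit cost by learning_rate."""
      b = math.log(1 - learning_rate, 2)  # negative exponent
      return first_unit_cost * cumulative_units ** b

  # With a hypothetical 20% learning rate, the 1000th unit costs ~11%
  # of the first: 1000 ** log2(0.8) is about 0.108.
  print(wrights_law_cost(100.0, 1000))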


> It seems reasonable to me that a similar process will apply to AI

If it's reasonable, then reason it out, because it's a highly apples-to-oranges comparison you're making.


I don't think anyone really knows, because there's no objective standard for determining progress.

Lots of benchmarks exist where everyone agrees that higher scores are better, but there's no sense in which going from a score of 400 to 500 is the same progress as going from 600 to 700, or less, or more. They only really have directional validity.

I mean, the scores might correspond to real-world productivity rates in some specific domain, but that just begs the question -- productivity rates on a specific task are not intelligence.


$300 college student in your pocket sure sounds like the Singularity to me.

That's not an "obligatory HN dig" though; you're watching, in medias res, X escape removal from the App Store and Play Store. Concepts like privacy, legality, and high-quality software are all theater. We have no altruists defending these principles for us at Apple or Google.

Apple won't switch Google out as a provider for the same reason Google is your default search provider. They don't give a shit about how many advertisements you're shown. You are actually detached from 2026 software trends if you think Apple is going to give users significant backend choices. They're perfectly fine selling your attention to the highest bidder.


There are second-order effects of Google or Apple removing Twitter from their stores.

Guess who's the bestie of Twitter's owner? Any clues? Could that be a vindictive old man with unlimited power and no checks and balances to temper his tantrums?

Of course they both WANT Twitter the fuck out of the store, but there are very very powerful people addicted to the app and what they can do with it.


That further proves my point that they are monopolies that cannot survive without protectionist intervention.

In the current US environment, no one can survive going against Trump, and as recently evidenced, this is meant literally.

The US, for all intents and purposes, is now a kleptocracy. Rule of law, freedom of speech, even court orders, all of that doesn't matter any more in practice. There will always be some way for the federal government to strong-arm anyone into submission.


Not with that attitude they can't. Let's see what happens to the first person to call his bullshit. If JPow folds or is actually indicted, you may be right. Let's see what happens with Exxon, though; I think they're gonna bend the knee.

Why would you not bend the knee? There's a reasonable chance Trump is gone in less than 3 years and MAGA tears itself apart. Better to bet on that than to take a stand now and lose everything.

> Let’s see what happens to the first person to call his bull shit

Well, ICE just executed a woman in broad daylight with multiple cameras filming, and a day later you got Kristi Noem standing at a podium with a slogan referencing an OG Nazi massacre [1][2], and half the US government gaslighting the country, spreading outright lies [3] without consequences so far.

When they can get away with this level of lies, they can get away with anything. Trump's infamous "I Could ... Shoot Somebody, And I Wouldn't Lose Any Voters" quote [4] wasn't a joke - it was a clear prediction of what he intended to enable eventually.

[1] https://www.billboard.com/music/music-news/tom-morello-trump...

[2] https://www.deutschlandfunk.de/80-jahre-massaker-lidice-100....

[3] https://www.abc.net.au/news/2026-01-08/what-happened-in-minn...

[4] https://www.npr.org/sections/thetwo-way/2016/01/23/464129029...


I do not usually comment on politics but just this one time, and hopefully I can wordsmith it without taking a political stance.

When Trump started his campaign, circa 2011 with the birth certificate, he did not know whether he would win, but he made it his life's mission.

Countering him will take the same zeal. I know we have a precedent of presidents retiring, but unless Obama (and Hillary and Biden and Kamala) hits the streets as the leader of the resistance, the resistance will be quelled easily by constant distraction. Yeah, maybe AOC, maybe Bernie, maybe someone else, but no... Trump is smart and dedicated (despite the useful-idiot role he plays); he cannot be countered by mid-term and full-term campaigns. We are not in Kansas anymore. Been a while. The opposition needs a named resistance leader whose full-time job is to engage Trump.


I'm not American, but in my opinion Newsom+AOC might have the zeal and voter base to MAYBE do something.

Newsom (or his PR team) knows how to play the troll game correctly, hitting low blows and not sticking to the fucking high ground.

AOC, on the other hand, will make the MAGA base _so_ irrationally angry that they might do something actually stupid. She's also got Bernie's views, which might make America a place I'd want to visit again some day in the next decade. I've literally turned down all-expenses-paid company trips to the USA a few times because I just don't want to risk either not getting into the country or not getting back out.


Caveat: as long as it doesn’t feel like you’re being sold out.

Which is why "privacy theater" was an excellent way to put it.


Apple's various privileged device-level ads, instant-stop-on-cancel trials, and special notification rules for their paid add-on services (Fitness+, Music, Arcade, iCloud+, etc.) are all proof that they no longer care about the user.

> LLMs are now commodities and the least important component of the intelligence system Apple is building

If that was even remotely true, Apple, Meta, and Amazon would have SoTA foundational models.

Why? Grain is a commodity, but I buy flour at the store rather than grow my own. The "commodity" argument suggests that new companies should stay away from model training unless they have a cost edge.

Are you not aware that all of the above have invested billions trying to train a SoTA foundational model?

> And actually, why do we have both 48kHz and 44.1kHz anyway?

Those two examples emerged independently, like rail standards or any number of other standards one can cite. That's really just the top of the rabbit hole, since there are 8-20 "standard" audio sample rates, depending on how you count.

This isn't really a drawback, and it does provide flexibility when making tradeoffs for low bitrates (e.g. 8 kHz narrowband voice is fine for most use cases) and for other authoring/editing vs. distribution choices.


> This isn't really a drawback

But that's only true because people freely resample between them all the time and nobody knows or cares about it.
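And that resampling is cheap and exact: 48000/44100 reduces to 160/147, so a standard polyphase resampler converts between the two rates without approximating the ratio. A minimal sketch with numpy/scipy (the 440 Hz test tone is just an illustration):

  import numpy as np
  from scipy.signal import resample_poly

  fs_in, fs_out = 44100, 48000             # 48000/44100 reduces to 160/147
  t = np.arange(fs_in) / fs_in             # one second of samples
  x = np.sin(2 * np.pi * 440 * t)          # 440 Hz test tone

  y = resample_poly(x, up=160, down=147)   # rational-rate conversion
  print(len(x), "->", len(y))              # 44100 -> 48000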


The nice thing about standards is, there are so many from which to choose! :)

As is pointed out elsewhere in the thread, there are at least three official ways to download your own photos. This complements those.
