I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of the old Hemingway line: "How did you go bankrupt?" "Two ways. Gradually, then suddenly."
I agree with all the limitations of the current state of AI and LLMs that you've written about. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean progress won't still happen even when/if those limits are broadly reached.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
For what it's worth, the decline in use of horses was much slower than you might expect. The Ford Model T reached peak production in 1925 [0], and for an inexact comparison (I couldn't find numbers for the US), the horse population of France started to decline in 1935 but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].
> For what it's worth, the decline in use of horses was much slower than you might expect.
Not really, given that the article goes into detail about this in the first paragraph, with US data and graphs: "Then, between 1930 and 1950, 90% of the horses in the US disappeared."
Eyeballing the chart in the OP against the French data shows a comparable pattern. While the OP's data is horses per person and the French data is total number of horses, both show a decline in horse numbers starting about 10 years after widespread adoption of the motor vehicle and falling to 50% of their peak in the mid-to-late 1950s, with the French data lagging the US data by perhaps a bit over 5 years. That is, it took 25 to 30 years after Ford started mass production of automobiles for 50% of "horsepower" to be replaced.
The point isn't to claim that motor vehicles did not replace horses, they obviously did, but that the replacement was less "sudden" than claimed.
Yes, I considered that. Someone using a horse-drawn wagon to deliver goods about town would likely not consider buying a truck until the cart horse needed replacing.
The working life of a horse may be shorter than its realistic lifespan. Searching for "horse depreciation" gives a figure of 7 years for a horse under age 12, with the prime years for a horse being between ages 7 and 12, depending on what it is used for.
I'm willing to accept the input of someone more knowledgeable about working horses, though!
I read the abstract (not the whole paper) and the great summarizing comments here.
Beyond the practical implications of this (i.e. reduced training and inference costs), I'm curious if this has any consequences for "philosophy of mind"-type stuff. That is, does this sentence from the abstract, "we identify universal subspaces capturing majority variance in just a few principal directions", imply that all of these various models, across vastly different domains, share a large set of common "plumbing", if you will? Am I understanding that correctly? It just sounds like it could have huge relevance to how various "thinking" (and I know, I know, those scare quotes are doing a lot of work) systems compose their knowledge.
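For anyone who wants to poke at the idea, here's a minimal sketch of the kind of analysis the abstract seems to describe, under my own assumptions (the activation matrices below are random stand-ins, not real model data): take each model's top principal directions and measure how much the resulting subspaces overlap.

```python
import numpy as np

def top_subspace(acts, k=10):
    """Top-k principal directions of an (n_samples x dim) activation matrix."""
    centered = acts - acts.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var_explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return vt[:k], var_explained  # rows of vt are orthonormal directions

def subspace_overlap(u, v):
    """Mean squared cosine of the principal angles between two subspaces."""
    cosines = np.linalg.svd(u @ v.T, compute_uv=False)
    return float((cosines ** 2).mean())  # 1.0 = identical; ~k/dim if random

# Hypothetical stand-ins for two models' activations on the same inputs
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(1000, 256))
acts_b = rng.normal(size=(1000, 256))

dirs_a, var_a = top_subspace(acts_a)
dirs_b, var_b = top_subspace(acts_b)
print(f"variance captured: {var_a:.2f} / {var_b:.2f}")
print(f"subspace overlap:  {subspace_overlap(dirs_a, dirs_b):.2f}")
```

With real activations, a high overlap on the top few directions would be the shared "plumbing" the abstract seems to be gesturing at; random matrices like these should score near k/dim.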
Somewhat of a tangent, but if you enjoy the philosophy of AI and mathematics, I highly recommend reading Gödel, Escher, Bach: an Eternal Golden Braid by D. Hofstadter. It is primarily about the incompleteness theorems, but it does touch on AI and what we understand intelligence to be.
Insofar as this article is about the 4 execs leaving Apple, this is a total non-story, and the "What the heck is going on at Apple" framing is just clickbait:
- Lisa Jackson, Apple's vice president of environment, policy and social initiatives, and general counsel Kate Adams are set to retire. While these may be high-level execs, they don't really have much to do with the overall direction and success of the company. And given the change in the political environment, you've seen tons of changes in roles like these at many companies in the past 11 months.
- Alan Dye, vice president of human interface design, is leaving to join Meta as its chief design officer. Sounds like he won't really be missed: https://9to5mac.com/2025/12/04/gruber-apple-employees-giddy-.... Assuming he was responsible for Liquid Glass, I say good riddance.
- John Giannandrea, senior vice president of machine learning and AI strategy, is also retiring. He had basically already been demoted, taken off leading Siri due to Siri's competitive failures.
So yeah, it's pretty obvious that Apple is behind the AI wave, but honestly, they may end up having the last laugh given how much backlash there is from consumers about trying to shoehorn AI into all these places where it's just an annoyance.
It's more than just 4 execs, and IMO an unprecedented level of turnover for a historically very stable company. It's multiple senior leaders across legal, policy, AI, design, hardware, and operations leaving within a short period, making it one of Apple's most significant leadership shakeups in years, which is why several outlets are finding it newsworthy.
1) John Giannandrea, Senior VP of Machine Learning & AI Strategy and Apple's AI chief, is leaving in 2026 after setbacks with Siri; his entire team is being reorganized and cut.
2) Alan Dye, VP of Design and the person responsible for Liquid Glass, left for Meta (per Bloomberg)
3) Kate Adams, the top lawyer and general counsel is leaving
4) Lisa Jackson, VP of Policy & Social Initiatives also leaving
5) Johny Srouji, hardware/chip head, said he is "seriously considering leaving" which is really interesting seeing as he actually said that out loud for press to report on.
6) Jeff Williams, COO retired
7) Luca Maestri, the CFO, left earlier this year
8) Ruoming Pang the AI foundation leader left for Meta
9) Ke Yang, head of Siri search also left for Meta.
I was reading that Srouji is 61 years old. While that is not too old, it does explain why he may not be the choice for next CEO (besides any other considerations). You want someone to helm the ship for a decade.
Apple has been, for a very long time, essentially a hardware company, so all the contrived drama about not embracing AI is perhaps just Apple-style accumulation of data as it refines successive generations of "neural cores" to efficiently serve wherever the industry is careening.
While Apple wants its hardware to best run popular apps (AI included), it's premature to presume these people leaving for Meta (Dye in particular) have any impact other than tribal knowledge in their departures.
(disclaimer: was an engineer in an inner sanctum of apple for several years)
"many people" are mostly stupid (go to your local DMV to see "many people"), so that is irrelevant. sales are through the roof, profits the same, cash on hand to buy many countries, life is good…
Honestly some of the posts defending "it could be true!!" when nearly any rational reading of it would deem it "fake beyond a reasonable doubt" are just tiresome at this point.
Like you say, it's easy to have a rational discussion about these adverts being dumb and annoying, and presenting this fan fiction as truth just weakens the case.
Why would ICE leave the number as low as "70%" if it could be higher? Every illegal alien is a criminal as far as the law is concerned. Every illegal alien arrested is "charged with a crime". Otherwise ICE is openly stating to its supporters that they arrest illegal aliens and then release them, something their supporters are vocally against and the administration believes and claims to be a serious problem.
A direct reading of ICE's claims (which seem to be contrary to information obtained through FOIA?) is that 70% of the people they arrest are criminals, which, by their own definitions, would imply 30% of the people they arrest are not illegally here. But that's reading between the lines, and it's hard to lend any credence to anything said by an administration that treats public statements as a fun gaslighting game.
But essentially, if ICE COULD claim everyone they arrest is an illegal alien (and literally a criminal they are legally allowed to arrest and deport), why wouldn't they?
I think this is a misinterpretation of the document. The claim is:
> 70% of ICE arrests are of criminal illegal aliens charged with or convicted of crimes in the U.S.
I believe the claim here is that 70% of the people ICE arrests have been charged with or convicted of crimes other than being present in the USA illegally. I don't think this is at all meant to imply that 30% of arrests are of people who are present in the USA legally. I think it's just sloppy writing.
I'm glad I asked the question, and I thank you for responding, but come on: don't you think it's not just a stretch but flat out false to go from Homeland Security's quote of "Despite FALSE claims by sanctuary politicians and the media, 70% of ICE arrests are of illegal aliens who have been charged or convicted of a crime in the U.S." to "ICE cannot legally arrest people who are citizens for no reason, and yet they have done exactly that 30% of the time by their own admission"? It's hard for me to even assume good faith if that's the stretch you made.
As the other commenter wrote, ICE is saying that 70% of arrests involve a criminal charge or conviction, implying something other than just being in the country illegally. First, many illegal aliens (e.g. those who overstay their visas) have not committed any criminal offense - overstaying a visa is a civil charge.
Yes, I do admit there is wiggle room for ICE to make it sound like all the people they are arresting are rapists and murderers (crossing the border illegally is itself a criminal offense), and as you point out, the Cato Institute and many others have pointed out that high percentages of those deported don't have other criminal convictions. And given how much wide reporting there's been about how the administration is dissatisfied with the pace of deportations, it's clear there is pressure and incentive for ICE to deport as many people as possible.
So you can make all those valid arguments. Falsely stating (i.e. "making up" or "lying") that 30% of ICE arrests are citizens with no convictions doesn't help your point.
Technically not a word, but the US government uses "lawfully present individuals" in its policy docs. In addition to US citizens, this covers lawful permanent residents, people with valid non-immigrant visas/visa waivers, some country-specific exceptions (e.g. Canadian citizens visiting for short-term business and pleasure), and various humanitarian categories (refugees, people seeking asylum who have filed the proper paperwork, etc).
In short, an unfortunately very wide field of people for ICE to chew through without touching any citizens (even if one takes the most uncharitable interpretation, i.e. only 70% of arrests have been of unlawfully present individuals)
Just a minor correction, but I think it's important because some comments here seem to be giving bad information: OpenAI's model page, https://platform.openai.com/docs/models/compare, says that the knowledge cutoff for gpt-5 is Sept 30, 2024, which is later than GPT-4.1's cutoff of June 1, 2024.
Now I don't know if this means that OpenAI was able to add those 3 months of data to earlier models by tuning or if it was a "from scratch" pre-training run, but either way it's a substantial difference between the models.
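If you want to sanity-check a cutoff yourself, here's a minimal sketch using the official OpenAI Python SDK; the model IDs are the ones from the docs page above (assuming they're available to your account), and the probe question is just an illustrative example of asking about the window between the two dates.

```python
# Probe whether each model knows about events between the two claimed
# cutoffs (June 1, 2024 vs Sept 30, 2024). Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

PROBE = ("Without searching the web, what major news events from "
         "August 2024 do you know about? Answer from training data only.")

for model in ("gpt-4.1", "gpt-5"):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROBE}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content[:300])
```

It's not a rigorous test (models hallucinate and refuse inconsistently), but a model with a later cutoff should generally recall more from that window.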
IBM did a lot of pretty fragmented and often PR-adjacent work, and got into some industry-specific (e.g. healthcare) things that didn't really work out. But my understanding is that it's better standardized and embedded in products these days.
Not to be rude, but that didn't answer my question.
Taking a look at IBM's Watson page, https://www.ibm.com/watson, it appears to me that they basically started over with "watsonx" in 2023 (after ChatGPT was released) and what's there now is basically just a hat tip to their previous branding.
I think that's essentially accurate even if some work from IBM Research in particular did carry over. As I recall my timelines, yes, IBM rebooted and reorganized Watson to a significant degree while continuing to use a derivation of the original branding (and took advantage of Red Hat platforms/products).
So you experienced a bug, which happens with software. I've traveled a lot and have never had an issue with my ChatGPT subscription. I'm not doubting you, but I don't think your anecdote adds much to the conversation of OpenAI vs Google.
This is pretty laughably false. Sure, the CEO has a lot of power and I've certainly seen companies relocate so they are basically within walking distance from the CEO's house.
But "execs" covers a lot of people, and nobody gives a shit where the CIO or VP of engineering lives. If anything, these folks are more career driven and are expected to up and move at the drop of a hat if business conditions warrant.
Look what happened in Austin, TX, which has much less housing regulation tamping down construction than CA (despite a good deal of local NIMBYism).
Prices spiked during the pandemic, and in response a shit ton of housing was built, much of it multifamily residential. Rents went down significantly and home prices are down 20% since the peak.