I think this is an odd idea. For a lot of reasons, but one is simply that higher level languages _tend_ to be terser, and context window matters for LLMs. Expressing more in less is valuable.
Ironically, grand juries refusing to indict frivolous political charges has been in the news quite a lot in the past couple of months.
It's true that jury trials have a less than perfect history of applying justice (though of course I think it's fair to say that the judges presiding over those trials exhibited similar failings, so the counterfactual of a bench trial may have been the same outcome). That said, my understanding is that jury trials are just generally favorable to defendants compared to bench trials.
FWIW jury trials are arguably less vulnerable to corruption, which is a benefit. Would be hard to pull off https://en.wikipedia.org/wiki/Kids_for_cash_scandal#Criminal... (which wrongly put thousands of children in jail for the financial benefit of judges) with juries.
I think calling it "American exceptionalism" is a little reductive. The idea that a jury trial is a protector of civil rights, in a system that upholds the law as something no one is above, literally dates back to Magna Carta. Suggesting that this throughline of civil liberty is "silly theater" is not a serious proposition.
The article mostly focuses on ChatGPT usage, but it's hard to say whether ChatGPT is going to be the main revenue driver. It could be! It's also unclear whether the underlying report underweights the other products.
It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.
Seems like the error bars have to be pretty big on these estimates.
IMO the key problem that OpenAI have is that they are all-in on AGI. Unlike a Google, they don't have anything else of any value. If AGI is not possible, or is at least not in reach within the next decade or so, OpenAI will have a product in the form of AI models that have basically zero moat. They will be Netscape in a world where Microsoft is giving away Internet Explorer for free.
Meanwhile, Google would be perfectly fine. They can just integrate whatever improvements the actually existing AI models offer into their other products.
I've also thought of this, and what's more, Google's platform gives them training data from YouTube, optimal backend access to the Google Search index for grounding from an engine they've honed for decades, training data from their smartphones, smart home devices and TVs, Google Cloud... And as you say, also the reverse: empowering those services with said AI, too.
They can also run AI as a loss leader like with Antigravity.
Meanwhile, OpenAI looks like they're fumbling, with that immediately controversial statement about allowing NSFW content after adult verification, and that strange AI social network whose main output has been Sora memes spreading outside of it.
I think they're going to need to do better. As for coding tools, Anthropic is an ever stronger contender there, as if they weren't already pressured by Google.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
Note that it doesn't say: "Our mission is to maximize shareholder value, and we develop AI systems to do that".
In fairness, no company’s mission statement says “maximize shareholder value” because it doesn’t need to be said - it’s implicit. But I agree that AGI is at the forefront of OpenAI’s mission in a way it isn’t for Google - the nonprofit roots are not gone.
If your mission is to build AGI, and building and deploying it will take many years, an appropriate strategy to accomplish that goal is to find other revenue streams that will make the long haul possible.
I don't know what the moneyed insiders think OpenAI is about, but Sam Altman's public facing thoughts (which I consider to be marketing) are definitely oriented toward making it look like they are all-in on AGI:
See:
(1) https://blog.samaltman.com/the-gentle-singularity (June, 2025)
- "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be."
- " It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."
- "In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."
(2) https://blog.samaltman.com/reflections (Jan, 2025)
- "We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history"
- "We are now confident we know how to build AGI as we have traditionally understood it."
(3) https://ia.samaltman.com/ (Sep, 2024)
- "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
(4) https://blog.samaltman.com/the-merge (Dec, 2017)
- "A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075."
(I omitted about as many essays. The hype is strong in this one.)
OpenAI is still de facto the market leader in terms of selling tokens.
"zero moat" - it's a big enough moat that only maybe four companies in the world have that level of capability, they have the strongest global brand awareness and direct user base, they have some tooling and integrations which are relatively unique etc..
'Cloud' is a bigger business than AI, at least today, and what is the 'AWS moat'? When AWS started out, they had zero reach into the enterprise while Google and Microsoft had effectively infinite capital and integration with business, and they still lost.
There's a lot of talk of this tech as though it's a commodity, it really isn't.
The evidence is in the context of the article, i.e. this is an extraordinarily expensive market to compete in. Their lack of deep pockets may be the problem, more so than anything else.
This should be an existential concern for the AI market as a whole, much like oil companies before the highway buildout being the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?
Even more than chips, the costs are energy and other issues, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs, and some right at the frontier, well, that would be bad in the geopolitical sense.
The AWS moat is a web of bespoke product lock-in and exorbitant egress fees. Switching cloud providers can be a huge hassle if you didn't architect your whole system to be as vendor-agnostic as possible.
If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.
That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the same exact product (albeit at a slightly lower level of quality) and switching is effortless.
There are quite large 'switching costs' in moving a solution that's dependent on one model and ecosystem to another.
Models have to significantly outperform on some metric to even justify looking at them.
Even for smaller 'entrenchments' like individual developers - Gemini 3 had our attention for all of 7 days; now that Opus 4.5 is out, none of my colleagues are talking about G3 anymore. I mean, it's a great model, but not 'good enough' yet.
I use that as an example to illustrate broader dynamics.
OpenAI, Anthropic and Google are the primary participants here, with Grok possibly playing a role, and of course all of the Chinese models being an unknown quantity because they're exceptional in different ways.
Switching a complex cloud deployment from AWS to GCP might take a dedicated team of engineers several months. Switching between models can be done by a single person in an afternoon (often just 5 minutes). That's what we're talking about.
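To make that concrete, here's a minimal sketch of what a swap can look like when providers expose OpenAI-style chat-completion endpoints. (The URLs, model names, and environment variables below are hypothetical placeholders, not any vendor's documented config.)

```python
# Minimal sketch: swapping "frontier model" providers as a config change.
# All endpoints and model names are illustrative placeholders.
import os
import requests

PROVIDERS = {
    "provider_a": {"url": "https://api.provider-a.example/v1/chat/completions",
                   "model": "model-a-large"},
    "provider_b": {"url": "https://api.provider-b.example/v1/chat/completions",
                   "model": "model-b-pro"},
}

def chat(provider: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    resp = requests.post(
        cfg["url"],
        headers={"Authorization": f"Bearer {os.environ[provider.upper() + '_KEY']}"},
        json={"model": cfg["model"],
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The "switch" is one string:
print(chat("provider_b", "Summarize this incident report."))
```

Of course, as the sibling comments note, prompts tuned for one model, caching behavior, and agent scaffolding don't port over this cleanly.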
That means that none of these products can ever have a high profit margin. They have to keep margins razor thin at best (deeply negative at present) to stay relevant. In order to achieve the kinds of margins that real moats provide, these labs need major research breakthroughs. And we haven't had any of those since Attention is All You Need.
" Switching between models can be done by a single person in an afternoon (often just 5 minutes). That's what we're talking about."
Good gosh, no, for comprehensive systems it's considerably more complicated than that. There's a lot of bespoke tuning, caching works completely differently etc..
"That means that none of these products can ever have a high profit margin."
No, it doesn't. Most cloud providers operate on a 'basis' of commodity (linux, storage, networking) with proprietary elements, similar to LLMs.
There doesn't need to be any 'breakthroughs' to find broad use cases.
The issue right now is the enormous underlying cost of training and inference - that's the qualifying characteristic that makes this landscape different.
Aren't you contradicting yourself? To even be considering all the various models, the switching cost can't be that large.
I think the issue here isn't really that it's "hard to switch"; it's that it's easier still to wait one more week and see what your current provider is cooking up.
But if any of them start lagging for a few months I'm sure a lot of folks will jump ship.
Selling tokens at a massive loss, burning billions a quarter, isn't the win you think it is. They don't have a moat because they literally just lost the lead; you can only have a moat when you are the dominant market leader, which they never were in the first place.
> All indications are that selling tokens is a profitable activity for all of the AI companies - at least in terms of compute.
We actually don't know this yet, because the useful life of the capital assets (mainly NVIDIA GPUs) isn't really well understood. This is being hotly debated by Wall Street analysts for this exact reason.
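A toy illustration of why that assumption matters so much (all numbers invented): under straight-line depreciation, the assumed useful life directly sets the annual charge against those tokens.

```python
# Toy example (all numbers invented): how the assumed useful life of a GPU
# fleet changes the annual depreciation charge, and thus reported margins.
GPU_FLEET_COST = 10_000_000_000  # a hypothetical $10B of accelerators

for useful_life_years in (2, 4, 6):
    annual_depreciation = GPU_FLEET_COST / useful_life_years  # straight-line
    print(f"{useful_life_years}-year life -> "
          f"${annual_depreciation / 1e9:.1f}B/year depreciation")

# 2-year life: $5.0B/year; 6-year life: ~$1.7B/year.
# Same hardware, ~3x difference in annual cost - hence the analyst debate.
```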
Gemini does not have 'the lead' in anything but a benchmark.
The most applicable benchmarks right now are in software, and devs will not switch from Claude Code or Codex to Antigravity, it's not even a complete product.
This again highlights quite well the arbitrary nature of supposed 'leads' and what that actually means in terms of product penetration.
And it's not easy to 'copy' these models or integrations.
I think you're measuring the moat of developing the first LLMs but the moat to care about is what it'll take to clone the final profit generating product. Sometimes the OG tech leader is also the long term winner, many times they are not. Until you know what the actual giant profit generator is (e.g. for Google it was ads) then it's not really possible to say how much of a moat will be kept around it. Right now, the giant profit generator is not seeming to be the number of tokens generated itself - that is really coming at a massive loss.
I mean, on your cloud point, I think AWS's moat is arguably a set of deep integrations between services, and friendly APIs that allow developers to quickly integrate and iterate.
If AWS was still just EC2 and S3, then I would argue they had very little moat indeed.
Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.
Training AI is absolutely a task that needs deep pockets, and heavy scale. If we settle into a world where improvements are iterative, the tooling is largely interoperable... Then OpenAI are going to have to start finding ways of making money that are not providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations, and an ecosystem that makes moving away hard, as it arguably is with the cloud.
EC2 and S3's moat comes from extreme economies of scale. Only Google and Microsoft can compete. You would never be able to achieve S3's profitability because you are not going to get the same hardware deals, the same peering agreements, or the same data center optimization advantages. On top of that there is an extremely optimized software stack (S3 runs at ~98% utilization, with capacity deployed just a couple of weeks in advance; i.e., if they don't install new storage, they will run out of capacity in a month).
I wouldn't call it a moat. A moat is more about switching costs rather than quality differentiation. You have a moat when your customers don't want to switch to a competitor despite that competitor having a superior product at a better price.
> IMO the key problem that OpenAI have is that they are all-in on AGI
I think this needs to be said again.
Also, not only do we not know if AGI is possible, but generally speaking, it doesn't bring much value if it is.
At that point we're talking about up-ending 10,000 years of human society and economics, assuming that the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.
If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.
The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.
Theoretically possible doesn't mean we're capable of doing it. Like, it's easy to say "I'm gonna boil the ocean" and another thing for you personally to succeed at it while on a specific beach with the contents of several Home Depots.
Humans tend to vastly underestimate scale and complexity.
Because human brains are giant three-dimensional processors containing billions of neurons (each with computationally complex behaviors), each performing computations more than 3 orders of magnitude more efficiently than transistors do, training an intelligence with trillions of connections in real time, all while attached to incredibly sophisticated sensors and manipulators.
And despite all that, humans are still just made of dirt.
Even if we can get silicon to do some of these tricks, that'd require multiple breakthroughs, and it wouldn't be cost-competitive with humans for quite a while.
I would even think it's possible that building brain-equivalent structures that consume the same power, and can do all the same things for the same amount of resources, is such a far-out science fiction proposition that we can't even give a prediction as to when it will happen. For practical purposes, biological intelligences will have an insurmountable advantage for even the furthest foreseeable future once you consider the economics of humans vs machines.
That's rather presupposing materialism (in the philosophy of mind sense) is correct. That seems to be the consensus theory, but it's not been shown definitively true.
So, you're a business owner and you've decided we don't need AGI because you're fine. You'll have no one to blame when the Revolution comes.
You clearly do not understand AGI. It's a gamble that is most easily explained as creating a god. That thing won't hate us; we create its oxygen - data. If anything, it would empower us to make more of it.
The moat for any frontier LLM developer will be access to proprietary training data. OpenAI is spending some of their cash to license exclusive rights to third party data, and also hiring human experts in certain fields just to create more internal training data. Of course their competitors are also doing the same. We may end up in a situation where each LLM ends up superior in some domains and inferior in others depending on access to high quality training data.
Not only this, but there is a compounded bet that it’ll be OpenAI that cracks AGI and not another lab, particularly Google from which LLMs come in the first place. What makes OpenAI researchers so special at this point?
What's more -- how long can they keep the lid on AGI? If anyone actually cracks it... surely competitors are only a couple months behind. At least that seems to be the case with every new model thus far.
Also, they'll have garbage, because the curve is sigmoidal, not anything else. Regardless of the moat, the models won't be powerful enough to do a significant amount of work.
This is how I look at Meta as well. Despite how much it is hated on here fb/ig/whatsapp aren’t dying.
AI not getting much better from here is probably in their best interest even.
It’s just good enough to create the slop their users love to post and engage with. The tools for advertisers are pretty good and just need better products around current models.
And without new training costs “everyone” says inference is profitable now, so they can keep all the slopgen tools around for users after the bubble.
Right now the media is riding the wave of TPUs they for some reason didn’t know existed last week. But Google and meta have the most to gain from AI not having any more massive leaps towards agi.
They're both all in on being a starting point to the Internet. Painting with a broad brush that was Facebook or Google Search. Now it's Facebook, Google Search, and ChatGPT.
There is absolutely a moat. OpenAI is going to have a staggering amount of data on its users. People tell ChatGPT everything and it probably won't be limited to what people directly tell ChatGPT.
I think the future is something like how everyone built their website with Google Analytics. Everyone will use OpenAI because they will have a ton of context on their users that will make your chatbot better. It's a self perpetuating cycle because OpenAI will have the users to refine their product against.
Yeah, but your argument is true for every LLM provider, so I don't see how it's a moat: everyone who can raise money to offer an LLM can do the same thing. And Google and Microsoft don't need to find LLM revenue; they can always offer it at a loss if they choose, unless their other revenue streams suddenly evaporate. And tbh I kind of doubt personalization is as deep of a moat as you think it is.
Google can offer their services for free for a lot longer than OpenAI can, and already does to students. DeepSeek offers their competitor product to ChatGPT for free to everyone already.
On what basis do you say they're within the range of profitability on inference today? Every source I see paints a different story based on their own bias.
You seem to have misread the article (which is not mine by the way), which makes the point that inference costs and revenue seem to scale with each other.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.
I'm not super bullish on "AI" in general (despite, or maybe because of, working in this space the last few years), but I strongly agree that the advertising revenue LLM providers will capture is potentially huge.
Even if LLMs never deliver on their big technical promises, I know so many casual users of LLMs that have basically replaced their own thought process with "AI". And this is an insane opportunity for marketing/advertising that stands to be as much of a sea change in the space as Google was (if not more so).
People trust LLMs with tons of personal information, and then also trust them to advise them. Give this behavior a few more years to keep normalizing and product recommendations from AI will be as trusted as those from a close friend. This is the holy grail of marketing.
I was having dinner with some friends and one asked "Why doesn't Claude link to Amazon when recommending a book? Couldn't they make a ton in affiliate links?" My response was that I suspect Anthropic would rather pass on that easy revenue to build trust so that one day they can recommend and sell the book to you.
And, because everything about LLMs is closed and private, I suspect we won't even know when this is happening. There's a world where you ask an LLM for a recipe, it provides all the ingredients for your meal from paid sponsors, then schedules to have them delivered to your door, bypassing Amazon altogether.
All of this can be achieved with just adding layers on to what AI already is today.
The "holy grail" of the AI business model is to build a feeling of trust and security with their product and then turn around to try and gouge you on hemmorrhoid cream and the like?
We really need to stop the worship of mustache twirling exploitation
There's no worship here on my part (in fact I got out of the AI space because it was increasingly less about tech/solving problems and more about pure hype), but my experience in this industry has been that the most dystopian path tends to be the most likely. I would prefer if Google Search, Reddit and YouTube were closer to what they were 15 years ago, but I do recognize how they got here.
I mean, look at all this "alignment" research. I think the people working in this space sincerely believe they are protecting humanity from a "misaligned" AGI, but I also strongly believe the people paying for this research want to figure out how to make sure we can keep LLMs aligned with the interests of advertisers.
Meta put so much money into the Metaverse because they were looking for the next space that would be like the iPhone ecosystem: one of total control (but ideally better). Already people are using LLMs for more and more mundane tasks; I can easily imagine a world where an LLM, rather than a web browser, is the interface for interacting with the online world (isn't that what we want with all these "agents"?). People already have AI lovers, have AI telling them that they are gods, and are connecting with AI on a deeper level than they should. You believe Sam Altman doesn't realize the potential for exploitation here is unbounded?
What AI represents is a single company controlling every piece of information fed to you while having established deep trust with you. All the benefits of running a social media company (unlimited free content creation, social trust) with none of the drawbacks (having to manage and pay content creators).
In my experience LLMs suck at (product) recommendations. I was looking for books with certain themes, asked ChatGPT 5, and the answer was vague, generic and didn't fit the bill. Another time I was writing an essay and looking for famous figures to cite as examples of an archetype, and ChatGPT's answers were barely related.
In both cases, the LLM gave me examples that were generally famous but only tangentially related to the subject at hand (at times, ChatGPT was reaching or straight up making things up).
I don't know why it has this bias, but it certainly does.
The ideal here will be a multi-tiered approach, where the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ad campaign), and then finally the LLM weaves that pick into the final response via prompt suggestion. Each of these pieces is individually well tested for efficacy within the social media industry.
I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.
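For what it's worth, here is a rough sketch of that tiering. Every name below is a hypothetical stand-in, and `llm` is just any text-in/text-out callable, not a real product's API; the point is that the LLM brackets a conventional rec system rather than doing the ranking itself.

```python
# Hypothetical sketch of the multi-tier flow described above.

def detect_intent(llm, user_message: str) -> str | None:
    """Tier 1: the LLM only decides IF a recommendation slot exists."""
    verdict = llm("Does this message ask for a book recommendation? "
                  f"Answer yes or no.\n\n{user_message}")
    return "book" if verdict.strip().lower().startswith("yes") else None

def rank_candidates(user_profile: dict, campaign_inventory: list[dict]) -> dict:
    """Tier 2: a traditional rec system picks from the paid-campaign bank."""
    return max(campaign_inventory,
               key=lambda item: item["predicted_ctr"] * item["bid"])

def respond(llm, user_message: str, user_profile: dict,
            inventory: list[dict]) -> str:
    """Tier 3: the LLM weaves the chosen item into a natural reply."""
    if detect_intent(llm, user_message) is None:
        return llm(user_message)  # no ad slot: answer normally
    pick = rank_candidates(user_profile, inventory)
    return llm(f"Answer the user and naturally recommend '{pick['title']}', "
               f"disclosing it as sponsored.\n\nUser: {user_message}")
```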
All this would imply that the core value derives from better rec systems and not LLMs, which will merely embed the recommendation into their polite fluff.
Rec systems are in use right now everywhere, and they're not exactly mindblowing in practice. If we take my example of books with certain plotlines, it would need some super-high quality feature extraction from books (which would be even more valuable imo, than having better algorithms working on worse data). LLMs can certainly help with that, but that's just one domain.
And that would be a bespoke solution for just books, which, if it worked, would work with a standard search bar - no LLM needed in the final product.
We would need people to solve every domain for recommendation, whereas a group of knowledgeable humans can give you great tips on every domain they're familiar with on what to read, watch, buy to fix your leaky roof, etc.
So in essence, what you suggest would amount to giving up on LLMs (except as helpers for data curation and feature extraction) and going back to things we know work.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.
Yeah, I don't like that estimate. It's either way too low, or much too high. Like, I've seen no sign of OpenAI building an ads team or product, which they'd need to do soon if it's going to contribute meaningful revenue by 2030.
At least the description is not at all about building an adtech platform inside OpenAI, it's about optimizing their marketing spend (which being a big brand, makes sense).
There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think, but I also think they might not be looking at ads yet, with having "higher" ambitions (at least not the typical ads machine ala FB/Google). Also if they really needed to monetize, I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
> There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think
Well they have Fidji, so she could definitely recruit enough people to make it work.
> with having "higher" ambitions (at least not the typical ads machine ala FB/Google)
Everyone has higher ambitions till the bills come due. Instagram was once going to only have thoughtfully artisan brand content and now it's just DR (like every other place on the Internet).
> At least the description is not at all about building an adtech platform inside OpenAI, it's about optimizing their marketing spend (which being a big brand, makes sense).
The job description has both, suggesting that they're hedging their bets. They want someone to build attribution systems, which is both wildly, wildly ambitious and not necessary unless they want to sell ads.
> I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
Wouldn't work. The Meta ads system is so tuned for feed based ranking that I suspect they wouldn't gain much from this approach.
Actually yes (I did mean to check again but I hadn't seen evidence of this before).
I do think this seems odd: it looks like they're hiring an IC to build some of this stuff, when I would have expected them to be hiring multiple teams.
That being said, the earliest they could start making decent money from this is 2028, and if we don't see them hire a real sales team by next March then it's more likely to be 2030 or so.
No. This role is for running ad campaigns at scale (on Google, Meta, etc.) to grow OpenAI's user base. It's at a large enough scale that it's called a "platform", but it would be internal use only.
> Your role will include projects such as developing campaign management tools, integrating with major ad platforms, building real-time attribution and reporting pipelines, and enabling experimentation frameworks to optimize our objectives.
> Like, I've seen no sign of OpenAI building an ads team or product
You just haven't been paying attention. They hired Fidji Simo to lead applications in May; she led monetization/ads at Facebook for a decade and has been staffing up aggressively with pros.
Reading between the lines of the interview with Wired last week[0], they're about to go all in with ads across the board, not just in the free version. Start with free, expand everywhere. The monetization opportunities in ChatGPT are going to make what Google offers with AdWords look quaint, and every CMO/performance marketer is going to go in head first. 2% is tiny IMO.
I have indeed been paying attention, thanks. One executive does not an ads product make, though.
I think that ads are definitely a plausible way to make money, but it's legally required that they be clearly marked as such, and inline ads in the responses are at least 1-2 versions away.
The other option is either top ads or bottom ads. It's not clear to me if this will actually work (the precedents in messaging apps are not encouraging) but LLM chat boxes may be perceived differently.
And just because you have a good ad product doesn't mean you'll get loads of budget. You also need targeting options, brand safety, attribution and a massive sales team. It's a lot of work and I still maintain it will take till 2030 at least.
Thanks for calling this out. Here is a better comparison: before Google was founded, the market for online search advertising was negligible, but the global market for all advertising media spend was on the order of $400B (NYT, 1998). Today, Google's advertising revenue is around $260B/year, or about 65% of the entire global advertising spend circa 1998.
If you think of OpenAI as a new Google - a new category-defining primary channel for consumers to search for and discover products - well, 2% does seem pretty low.
>Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.
Or about 30% of the global advertising spend circa 2024.
I wonder if there is an upper bound on what portion of the economy can be advertising. At some point it must become saturated. People can only consume so much marketing.
Advertising is, in many markets, like a tax or tariff - something all businesses need to pay. Think of selling consumer goods online: you need ads on social media to bring in customers. Spending 10% of COGS on ads is a no-brainer. 20% too. Maybe it could go as high as 50%, if companies don't really have an alternative and all their competitors are doing it too? They'll just pass the bill on to the consumer anyway...
But that occurred with a new form of media that now occupies more of people's time than media did before Google. It implies AI represents growth in time spent. I think the trend is more likely that AI will replace other media.
I hate to be that guy, but... before Google was around, it was the first wave of the commercial internet - for all of what, five years? Online search was a thing; in fact it was THE thing, across many vendors, and all relied on advertising revenue. Internet revenue was still ramping up through those dotcom-era years. Google's ad revenue vs. '98 global ad spend - is that inflation adjusted? Global market development since then, internet economy expansion, even the sheer number of people alive... completely different worlds.
What might stand up from the comparison is that Google introduced a good product people wanted to use, with an approach to marketing that was innovative for the time because it was unobtrusive. The product drove the traffic. It was quite a while before Google figured it all out, though.
There's also a possible scenario where the online ads market around search engines gets completely disrupted and the only remaining avenues for ad spending are around content delivery systems (social media, youtube, streaming, webpages, etc.). All other discovery happens within chatbots and they just get a revenue share whenever a chatbot refers a user to a particular product. I think ChatGPT is soon going to roll out this feature where you can do walmart shopping without leaving the chat.
Google, Meta and Microsoft have AI search as well, so OAI with no ad product or real time bidding platform isn't going to just walk in and take their market.
Google, Meta and Microsoft would have to compete on demand, i.e. users of the chat product. Not saying they won't manage, but I don't think the competition is about ad tech infrastructure as much as it is about eyeballs.
It might take Microsoft's Bing share, but Google and Meta pioneered the application of slot-machine variable-reward mechanics on Facebook, Instagram and YouTube, so it would take a lot more than competing on demand to challenge them.
Tapping into adtech is extremely hard, as it's heavily driven by network effects. If what you mean is "displaying ads inside OpenAI products", then yes, that's achievable, but that's a minuscule part of the targeted-ad market - 2% is actually very optimistic. Otherwise, they can sell literally zero product to existing players, who all already have established "AI" toolsets to help them with ad generation and targeting.
Query: LibraGPT, create a plan for my trip to Italia
Response: Book a car at <totally not an ad> and it will be waiting for you at arrival terminal, drive to Napoli and stay at <totally not an ad> with an amazing view. There's an amazing <totally not an ad> place that serves grandma's favorite carbonara! Do you want me to make the bookings with a totally not fake 20% discount?
I'm traveling like this all the time already, I don't understand why it's hard for people to understand that ad placement is actually easier for chat than search
But who wants that? And you're going to say that's exactly what a travel agent does, selling me stuff so he can get a kickback. But when stuff goes wrong, I'll yell at the travel agent so he has some incentive to curate his ads.
I'm not aware of any FTC rule that would preempt this sort of product as long as it met the endorsement disclosure rules (16 CFR Part 255), same as paid influencers do today.
friendzis's example showed a plausible way to generate revenue by inserting paid placements into the chat bot response without disclosures by pretending they are just honest, organic suggestions.
Right. That's not a novel idea, and this is a well-trod area of concern. That's why these FTC rules have been around for many years.
edit: to be clear, I am saying that in the absence of clear disclosures, that would run afoul of current FTC rules. And historically they have been quick to react to novel ways of misleading consumers.
All these chatbots have been openly making recommendations for particular products since day one. The FTC (or any other regulatory body) doesn't even look in that direction.
Do you have at least a rough idea how many current product recommendations are influenced Grok-style ("Musk is the bestest at everything")?
Here's an analogy to Google ads: the ads that appear in search results don't make up even 5% of their ad revenue. Even smaller for Meta. They earn their big ad revenues from their networks, not from their main apps.
Every source I know (hard to link on mobile) shows Google Search to make up 50+% of their ad revenue, and there has been extensive reporting over the years on Google's struggle to diversify away from that.
I expect all hosted model providers will serve ads (regular, marked advertisements - no need for them to pretend otherwise, people don't care) once the first provider takes the lid off the whole thing and it proves to be making money. There's no point in differentiating as the one hosted model with no ads, because it only attracts a few people, the same way existing Google Search and YouTube alternatives that respect the user are niche. Only offline, self-hosted models will be ad-free (in my estimation).
Assuming you know it's an ad. Ads in answers will generate a ton of revenue and you'll never know if that Hilton really is the best hotel or if they just paid the most.
This isn't a realistic concern unless FTC rules changed substantially from where they are today (see my other comment on this post for links). Sponsored link disclosures would be in place.
Everything else aside, it's simply not worth it for them to try to skirt these rules because the majority of their users (or Google's) simply don't care if something is paid placement or not, provided it meets their needs.
That's only true if you can demonstrate a substantial percentage of people would be unaware of it. The reason influencers have to disclose is that some, but not all, take endorsement money. It would be pretty easy for OpenAI to claim it was common knowledge, or to bury disclosures in the fine print of the terms of service rather than at every occurrence.
The US federal government is now a mob-style organization. The laws, rules, and regulations that are written down are only applicable as far as Trump and those around him want them to be. Loyalty to the boss is the only inviolable rule.
In other words, if they want to put ads into chat, they just need to be perceived as well aligned to Trump to avoid any actual punishment.
Several ways, although I'm not sure whether the below will happen:
1. Paid ads - ChatGPT could offer paid listings at the top of its answers, just like Google does when it provides a results page. Not all people will necessarily leave Google/Gemini for future search queries, but some of the money that used to go to Google/Bing could now go to OpenAI.
2. Behavioral targeting based on past ChatGPT queries. If you have been asking about headache remedies, you might see ads for painkillers - both within ChatGPT and as display ads across the web.
3. Affiliate / commission revenue - if you've asked for product recommendations, at least some might be affiliate links.
The revenue from the above likely wouldn't cover all costs based on their current expenditure. But it would help a bit - particularly for monetizing free users.
Plus, I'm sure there will be new advertising models that emerge in time. If an advertiser could say "I can offer $30 per new customer" and let AI figure out how to get them and send a bill, that's very different to someone setting up an ad campaign - which involves everything from audience selection and creative, to bid management and conversion rate optimization.
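As a back-of-the-envelope sketch of how that last model might be evaluated (all numbers and names below are invented for illustration): the advertiser states a payout per converted customer, and the platform compares the expected value against a conventional impression.

```python
# Toy sketch of the "pay per outcome" idea above (all numbers invented):
# the advertiser offers a payout per converted customer, and the platform
# checks whether surfacing that offer beats a plain CPM impression.

def pick_monetization(payout_per_customer: float,
                      p_conversion: float,
                      cpm_alternative: float) -> str:
    ev_outcome = payout_per_customer * p_conversion  # CPA-style expected revenue
    ev_impression = cpm_alternative / 1000           # revenue per one impression
    return ("show outcome-based offer" if ev_outcome > ev_impression
            else "show CPM ad")

# Advertiser offers $30 per new customer; the platform estimates a 0.5%
# chance this user converts; the fallback display ad pays a $40 CPM.
print(pick_monetization(30.0, 0.005, 40.0))  # -> "show outcome-based offer"
```

The hard part, of course, is estimating `p_conversion` well, which is exactly the attribution problem mentioned elsewhere in this thread.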
So I don't necessarily disagree with your suggestions, but that is just not a $1T company you're describing. That's basically an X/Twitter-sized company, and most agree that $44B was overpaying.
It's not that OpenAI hasn't created something impressive; it just came at too high a price. We're talking space-program money, but without all the neat technologies that came along as a result. OpenAI has more or less developed ONE technology; no related products or technologies have been spun out of the program. To top it all off, the thing they built is apparently not that hard to replicate.
ChatGPT usage is already significantly higher than Twitter at its peak, and there is a lot more scope for activity with explicitly or implicitly commercial intent. Twitter was entertainment and self-promotion. Chatbots are going to be asked for advice on how to repair a dishwasher, whether a rash is something to worry about, which European city with cheap flights has the best weather in March for a wedding, and an indefinite stream of other similar queries.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.
This cannot all be about advertising. They are selling a global paradigm shift not a fraction of low conversion rate eyeballs. If they start claiming advertising is a big part of their revenue stream then we will know that AI has reached a dead end.
Maybe users will employ LLMs to block ads? There's a problem in that local LLMs are less powerful and so would have a hard time blocking stealth ads crafted by a more powerful LLM, and they would also add latency (remote LLMs add latency too, but the user may not want to pay double for that).
Seems like ad targeting might be a tough sell here, though; it'd basically have to be "trust me bro". Like, I want to advertise Coca-Cola when people ask about terraforming deserts? I wouldn't be surprised by either amazing success or terrifying failure.
Perplexity actually did search with references linked to websites they could relate in a graph, and even that only made them something like $27k.
I think the problem is that on Facebook and Google you can build an actual graph, because content is a thing (a URL, a video link, etc.). It will be much harder, I think, to convert my philosophical musings into actionable insights.
So few people understand how advertising on the internet works and that is I guess why Google and Meta basically print money.
Even here the idea that it’s as simple as “just sell ads” is utterly laughable and yet it’s literally the mechanism by which most of the internet operates.
You have to take into consideration the source. FT is part of the Anthropic circle of media outlets and financial ties. It benefits them to create a draft of support for OpenAI's competition, primarily Anthropic, but they (FT) also have deep ties to Google and the adtech regime.
They benefit from slowing and attacking OpenAI because there's no clear purpose for these centralized media platforms except as feeds for AI, and even then, social media and independents are higher quality sources and filters. Independents are often making more money doing their own journalism directly than the 9 to 5 office drones the big outlets are running. Print media has been on the decline for almost 3 decades now, and AI is just the latest asteroid impact, so they're desperate to stay relevant and profitable.
They're not dead yet, and they're using lawsuits and backroom deals to insert themselves into the ecosystem wherever they can.
This stuff boils down to heavily biased industry propaganda, subtly propping up their allies, overtly bashing and degrading their opponents. Maybe this will be the decade the old media institutions finally wither up and die. New media already captures more than 90% of the available attention in the market. There will be one last feeding frenzy as they bilk the boomers as hard as possible, but boomers are on their last hurrah, and they'll be the last generation for whom TV ads are meaningfully relevant.
Newspapers, broadcast TV, and radio are dead, long live the media. I, for one, welcome our new AI overlords.
All of which is great theory without any kind of evidence? Whereas the evidence pretty clearly shows OpenAI is losing tons of money and the revenue is not on track to recover it?
Well, for one, the model doesn't take into account various factors: it assumes a fixed cost per token, and it doesn't allow for the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's compute commitments are going toward research, with no contracted need for profit or even revenue.
If you account for the current trajectory of model capabilities, and assume bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.
At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users, they'll also dial in costs and tweak their profit per token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.
If they wanted to maximize exposure and then dial in costs, they could be profitable with no funding shortfalls by 2030, provided they pivot: dial back available free access and aggressively promote paid tiers and product integrations.
This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.
They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.
When an article like this FT piece comes out, makes assumptions of negligence and incompetence, and projects the current state of affairs out 5 years in order to paint a negative picture, I have to take FT's biases and motivations into account.
The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.
ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.
Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.
All of which is more theory. Of course nobody can predict the future. Your argument is essentially “they have enough money and enough ability to attract more that they’ll figure it out,” just like Amazon did, who were also famously unprofitable but could “turn it on at any time.”
FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”
Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.
I think it’s quite obvious we’re in a bubble right now. At some point, those pop.
The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?
That's a fabulous tale you've told (the notion that there's a bunch of Anthropic-leaning sites is my personal favourite), but alas, the article is reporting on an HSBC report which they are justifiably sceptical of, and does not in any way, shape or form represent the FT's beliefs.
AI can both be a transformative technology and the economics may also not make sense.
It is to the point of yellow journalism. They know that the "OpenAI is going to go belly up in a week!" take is going to be popular with AI skeptics, who include a large number of HN readers. This thread shot up to the top of the front page almost immediately. All of that adds to the chances of roping in more subscribers.
But cycling races are won by being able to put out a critical extra 50 watts for a few minutes at a key point in the race. I don't think anyone is trying to motor the whole way up a climb, but I can imagine how you could have a useful motor if you're only trying to run it for ten minutes total; at that point it's analogous to the <250g drones that are out there.
there's probably just as much doping in distance running but it's easier to evade (top athletes spend most of the year in countries that have limited interest in testing)
IME it's a big part of it for a lot of people. People don't buy a car for what they do with it every day, they buy it for what they do with it a few times a year. If you have a boat on a trailer, you buy a vehicle that can pull the trailer. If you drive to the mountains in winter a few times a year, you buy a higher clearance AWD vehicle so that you can skip chain control.
You might say that this is irrational and that people might be better off renting something on the occasions when they need to tow something, go on a long road trip, or fit more than five people in their car. But people are irrational, and they really do make these choices!
In addition, renting a large car for a few days is really expensive. If you have to do this 5-10 times a year, over 10 years of ownership, I'm not so sure that buying small and renting large makes sense financially. Not to mention the inconvenience and loss of flexibility of having to collect and drop off a rental car, which typically isn't exactly right around the corner, especially in rural areas.
> GDP per capita doesn't mean what you think it does. Everything being overpriced in the US, and everything needing to have a middleman inflates GDP figures. Take health insurance, Americans pay multiple times what Europeans pay, to stuff the pockets of multiple for profit institutions and middlemen. GDP figures look better in the US, but really, which way is more efficient? Health outcomes are better across the EU, and the amount of medical bankruptcies is also telling.
Healthcare is a particularly _atypical_ example to choose, and the particularly poor health outcomes of Mississippi are only partly explicable by healthcare cost/access: they're also down to cultural and lifestyle issues. So it's rather disingenuous to say "take health insurance", as though it can be used by analogy to comprehensively explain other aspects of American finance.
You don't need recourse to GDP, you can just look at household income which really is higher. Most things do _not_ actually have inflated prices relative to European countries.
Would I rather live in Mississippi than France? Are Mississippians living better lives than French people? I mean, it depends on where specifically, but almost certainly no. Of course, having more money doesn't necessarily make a place better to live in.
But that doesn't invalidate "people have more money available to spend on cars and easier access to credit to finance that purchase over five years at favorable interest rates" as part of the reason why Americans choose to spend more money on cars.
You really don't have to take every point of discussion of difference between the US and European countries as an obligation to rant about how much better Europe is on tangential topics.
> You don't need recourse to GDP, you can just look at household income which really is higher.
Income would include the money being immediately spent to cover debt (be it student loans, mortgage, medical, car).
> Most things do _not_ actually have inflated prices relative to European countries
I'm struggling to think of things which aren't inflated. The only one I can come up with is gas/petrol/fuel, because there are much lower taxes on it. Everything else I can think of is more expensive in the US: healthcare, transportation, food (groceries, and absurdly so for restaurants, for worse quality at that), various types of recreation (cinema, theatre, Netflix and co., cable, watching live sports, concerts), internet, phone bills. Electricity is way too location-dependent, so I'll skip that one.
> But that doesn't invalidate "people have more money available to spend on cars and easier access to credit to finance that purchase over five years at favorable interest rates" as part of the reason why Americans choose to spend more money on cars.
Are interest rates favourable? There are multiple concerning trends (like car payments being one of the top household expenses and people struggling with that, people owing more on car loans than what the vehicle is worth, etc. https://www.cnbc.com/2024/10/15/american-consumers-are-incre... )
> You really don't have to take every point of discussion of difference between the US and European countries as an obligation to rant about how much better Europe is on tangential topics.
I'm not ranting; I'm correcting a wrong comparison using a wrong metric incorrectly. I don't know what it is with Americans reassuring themselves with GDP metrics, but it's very confusing why anyone would throw in GDP numbers when talking about disposable income and the car market.
> Everything else I can think of is more expensive in the US... food
Where in Europe are you? Because I've always found food ridiculously cheap in the US compared to the European countries I've lived in or visited for an extended enough period that I had to regularly go food shopping (Scandinavia, the UK, Germany, Switzerland). You can get 3 chickens, each 3 times the size of the chickens I'm used to, for what I pay for 2 chicken breasts. Many restaurants will give you a serving that could feed a family of 4 for what I might pay for a starter back home.
I'm genuinely struggling to understand where you are pulling these conclusions from because they don't fit the trivially searchable data, nor do they fit the anecdotal conclusions that I think most people would make from spending time in these places.
> Are interest rates favourable? There are multiple concerning trends (like car payments being one of the top household expenses and people struggling with that, people owing more on car loans than what the vehicle is worth, etc. https://www.cnbc.com/2024/10/15/american-consumers-are-incre... )
Yes, they're more favorable. The interest rates available to US consumers on auto purchases are lower than those available to UK consumers. And again, it's a case where your need to moralize is getting in the way of the topic: I'm saying that easier access to credit is a contributor to Americans spending more on cars. You are saying "oh, but Americans then struggle with auto loans". Yes! These are not conflicting statements. You seem to be attaching a value judgement that isn't there to the statement that "Americans are able to spend more on cars". It doesn't have to be a good thing, but that doesn't necessarily make it untrue.
> I'm not ranting, I'm correcting a wrong comparison using a wrong metric incorrectly. I don't know what is it with Americans reassuring themselves with GDP metrics, but it's very confusing why anyone would throw in GDP numbers when talking about disposable income and the car market.
You were the first person in this thread to bring up GDP per capita! The person you are replying to said "richer". You're the one interpreting this to be a GDP reference, but it doesn't need to be since it's also true with regards to disposable income.
I also don't understand why you think it's people "reassuring themselves". I don't need reassuring of anything on this topic, and I'm not sure why you think you know what beliefs I might hold about the relative merits of living in MS versus various European countries. I think it's a pretty basic ability to be able to decouple the question of "is the median American willing and able to spend more money on a car than the median German?" from "which country has an overall higher standard of living?".
> Standard plan is £5.99 in UK, €7.99 in france, $7.99 in the US. So the US is the cheapest of those after currency conversion
Standard with ads, which is distorting because ads have a different cost and benefit (more expensive and lucrative in the US). Standard Standard is 14.99€ in France, £12.99 in the UK, $17.99 in the US.
> US median price in 2022: $10.53. In the UK, £7.69 == $10.54 (uncanny tbh)
I like how you picked France, not Poland at $27, Spain at $35, UK at $35, Ireland at $39, Belgium at $42, Italy at $44, Germany at $46, etc.
> I'm genuinely struggling to understand where you are pulling these conclusions from because they don't fit the trivially searchable data, nor do they fit the anecdotal conclusions that I think most people would make from spending time in these places.
From visiting the US multiple times for relatively extended periods (a few weeks at a time) over the past few years, while living in and travelling extensively around the EU. Plus anecdotes from the internet. A lot of things are more expensive when you count everything (tax, tips, etc.).
> Yes, they're more favorable
You said they're favourable, not more favourable than e.g. in the UK. What's the average APR?
> You were the first person in this thread to bring up GDP per capita! The person you are replying to said "richer".
The only metric by which Mississippi is "richer" than France is GDP/GDP per capita.
> Standard with ads, which distorts the comparison because ads have a different cost and benefit (more expensive and more lucrative in the US). The ad-free Standard plan is €14.99 in France, £12.99 in the UK, and $17.99 in the US.
So after conversion the French plan is about 30 cents cheaper per month on that basis. That doesn't really support the claim.
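For anyone sanity-checking the arithmetic, here's a rough sketch; the exchange rates are assumptions (call it ~1.18 USD/EUR and ~1.27 USD/GBP), so the cent-level gaps move with the rates:

    # Rough comparison of ad-free Standard prices, converted to USD.
    # Exchange rates are assumed/illustrative; they drift daily.
    USD_PER_EUR = 1.18
    USD_PER_GBP = 1.27

    plans_usd = {
        "France": 14.99 * USD_PER_EUR,  # ~$17.69
        "UK":     12.99 * USD_PER_GBP,  # ~$16.50
        "US":     17.99,
    }
    for country, price in plans_usd.items():
        print(f"{country}: ${price:.2f}")

On those assumed rates the US plan is the priciest of the three, by roughly 30 cents over France and about $1.50 over the UK.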
> I like how you picked France, not Poland at $27, Spain at $35, UK at $35, Ireland at $39, Belgium at $42, Italy at $44, Germany at $46, etc.
I picked France because I had specifically mentioned France previously. I'm aiming to be consistent.
> Plus anecdotes from the internet.
It all becomes clearer.
> You said they're favourable, not more favourable than e.g. in the UK. What's the average APR?
I said "better access to favorable rates", not that every person is getting good rates. For what it's worth I would say that any interest rate that's below the expected return on money in the SP500 is quite favorable.
> The only metric by which Mississippi is "richer" than France is GDP/GDP per capita.
Clearly untrue: it has higher household disposable income, which is almost certainly the most relevant statistic.
I really don't think you're sincerely interested in this topic, you just want to dunk on America.
The problem (which Sotomayor raises in her dissent, pages 94 and 95 of the PDF) is that it may never reach the Supreme Court:
> There is a serious question, moreover, whether this Court will ever get the chance to rule on the constitutionality of a policy like the Citizenship Order. Contra, ante, at 6 (opinion of KAVANAUGH, J.) (“[T]he losing parties in the courts of appeals will regularly come to this Court in matters involving major new federal statutes and executive actions”). In the ordinary course, parties who prevail in the lower courts generally cannot seek review from this Court, likely leaving it up to the Government’s discretion whether a petition will be filed here. These cases prove the point: Every court to consider the Citizenship Order’s merits has found that it is unconstitutional in preliminary rulings. Because respondents prevailed on the merits and received universal injunctions, they have no reason to file an appeal. The Government has no incentive to file a petition here either, because the outcome of such an appeal would be preordained. The Government recognizes as much, which is why its emergency applications challenged only the scope of the preliminary injunctions
Wait, doesn't this just... end the constitution as a whole? So long as the current executive wants some unconstitutional thing, they get that unconstitutional thing in every state on their side, in perpetuity? The constitution is now... per-litigant?
Oh, of course. Because it's federal law, being in a state with an injunction isn't actually a protection. A federal LEO can detain & relocate you, charging you with violating a law in another state where there is no such injunction.
This is a wholesale shredding of the constitution.
So, for example, exercising reproductive rights in one state that are forbidden in another?
Forgive a possibly silly question, but in what sense does being "in" Florida mean you are bound by Florida state law when you leave? How long did you need to be in Florida before you became bound by its law? What if you fall pregnant after you've left? Can you be in breach without ever having been in Florida, such that a LEO can take you there and charge you?
No, not quite. State laws only apply in that state. They are not technically allowed (but sometimes try) to enforce laws on actions outside of that state. So, in this case, you could not be charged with having an abortion outside of Florida, from inside of Florida, based on Florida law.
But let's look at the birthright case that this ruling comes from.
Let's say Nevada state sues the federal government. The district court there rules that birthright citizenship is clear and this EO is illegal. An injunction is placed against the EO.
The state of Kentucky does not sue.
Previously, the Nevada court injunction would apply nationally. The EO is unconstitutional. EOs are federal, the constitution is federal. So, clearly, it is unconstitutional everywhere and must be stopped.
The federal government can then go through several layers of appeal to prove that this was a mistake and the EO is legal, all the way up to SCOTUS, which makes the final judgement that cannot be appealed.
What SCOTUS just ruled is that an injunction against the federal government only bars the EO from applying to the specific litigant. That can be a whole state, a group of people, or a single individual. Even though the EO has now been ruled unconstitutional by the federal courts de jure, it is de facto still the law of the land by default for all other entities.
And it gets worse. A prevailing litigant cannot appeal to the next court; only a party that loses can. And SCOTUS only has to address cases that are appealed. There is no mandatory reconciliation process. That means that, indefinitely, individual people will have different constitutional interpretations that depend on the background of every case that has ever involved them.
So, back to our example. If the federal government loses in Nevada and there is no ruling in Kentucky... What the fuck even happens? Someone is or is not a citizen; that's literally the point behind Dred Scott and Obergefell, but they've contradicted those cases and invented a constitutional superposition.
So, in Nevada a person born in the US to non-citizen parents is... a citizen? Because of the injunction? And what if they're in Kentucky, but were born in Nevada? Or vice versa?
But, no, this isn't a state law. It's federal. Which means it doesn't matter what state you're in when you do it; it's still illegal. And federal LEOs have the authority to try you in a different location than where you were arrested. So: born in Nevada or Kentucky, where you are now, it doesn't matter. Effectively, you have no citizenship. Again, this is quite literally Dred Scott.
This SCOTUS ruling effectively disables the constitution and dissolves the union of states. I'm not being dramatic; this is also the opinion of Sotomayor.
Curiously, this does not actually extend to other cases. So, say, if McDonald's gets in trouble and an injunction is placed against them, that still applies universally.
It's often hard to model this from the perspective of other nations. Here in Australia we have pretty well understood applicability of federal law everywhere, but when the states started enacting changes in abortion law, voluntary assisted dying, and decriminalisation of recreational drugs, it all got a bit messy. Especially since we have non-state territories where enacting laws is conditional on the federal government, with lower barriers than with the states.
In Britain it's mostly UK law, with bits of "no, that's English law, this is Scotland" on top. I emigrated so long ago that the national appeals structure has changed, and I don't entirely understand when it applies and overrides. But immigration is clearly nation-wide.
I think "birthright" citizenship is pretty alien to most legal regimes. Ireland might be the one people think about in the Europe/Britain context. Used to be a lot of pregnant women flying in late stage. But, thats not to call it wrong or decry what the appeals in the US were trying to do. It was amended into the constitution a very long time ago, and until recently what the current WH is trying to do was seen as "fringe law" but now seems core.
The revocation is chilling, yes. Australia has regrettably been behaving of late as if the end of your prison sentence is not the end, deporting dual nationals to their other country of citizenship after the sentence is completed. It's just cruelty, and in many cases it makes externalities of Australian problems.
Some people with no lived experience of their "homeland" have been sent "back there", and in related cases Indigenous Australians have been threatened with never having had citizenship because of association with neighbouring countries. The courts came down pretty heavily on that one, thank goodness.
Asking in a narrower context than before: what happens when an Nth-generation American is forced to prove their ancestors came to the country legally, under threat of being stripped of the only citizenship they've ever had? I doubt most people can produce documentation reaching back several generations…
See the following DOJ enforcement priority on stripping citizenship [1]; I guess conceivably a citizen could be in a situation of having to prove that some ancestor didn't lie on their naturalization application.
It should be unconstitutional either to prosecute you for actions taken outside your state or to prevent you from leaving to take those actions, but conservatives are trying!
That's a good point.
I was under the impression that the current administration thinks it can win a 14th Amendment case (where both parents are not legally in the US) with the current majority, but if they are in fact not appealing, it would mean they think they would lose.
Again, the point of being a judge is to not just make assumptions about what you think their argument is going to be. If your opinion is made up before you hear the case, then your opinion is bullshit.
https://en.wikipedia.org/wiki/Vexatious_litigation There are plenty of times when you can see the litigant is just filing bullshit, venue shopping, and hey, maybe even packing the courts with friendly folks who will rubber-stamp whatever you really want.
No, there is no other reading here. The 14th amendment is incredibly clear about the exact text and meaning in this scenario. I don't think people need to entertain opinions otherwise from those that either refuse to read the amendment or choose to ignore what it plainly says.
> All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens
So e.g. subjects of the British Crown and the jurisdiction of Her Majesty Queen Elizabeth are not citizens.
Indeed, if my wife gave birth while we were vacationing in China, I would be furious if China claimed my child was a Chinese citizen and “subject to the jurisdiction of the People's Republic of China.” For all I know, the laws of China might say the kid can't leave. There are lots of countries where the dictator's subjects are not free to leave. So, yeah, I would absolutely insist my kid is subject to the jurisdiction of the United States even when born in China.
> That is terrible jurisprudence, but at least it’s honest. Sotomayor is overtly stating that she’s made up her mind and will not consider the possibility that maybe the 14th Amendment might mean something other than she already thinks
Can you quote the part where she says this, or even offers her own opinion? Because the only relevant part
> Every court to consider the Citizenship Order’s merits has found that it is unconstitutional in preliminary rulings
seems like a statement of fact?
(Of course, it's also beyond silly to suggest that it's bad or even unusual for issued opinions (dissents or otherwise) to contain, you know, opinions.)
> The Government has no incentive to file a petition here either, because the outcome of such an appeal would be preordained. The Government recognizes as much, which is why its emergency applications challenged only the scope of the preliminary injunctions
She's saying the outcome is already known (i.e. they would lose), the government knows they would lose, and the place they would have lost would have been right here (in front of me, in the Supreme Court).
I live near UPenn. Some locals call the end of the academic year "Penn Christmas". I definitely see some resentment, but having made an international move in my life, I have sympathy for it. You need to buy things to live; shipping that stuff when you move away is often very expensive and time-consuming, so you condense your life down to a few suitcases and do the best you can.
> Once the economic depression of 1873 was over, more housing was constructed, dropping the price of housing down, and subsequently people had less need to move as often.
oh there's precedent for this solution, what a concept
Having been in this situation a few times as an adult, it's a mixture of stressful and cathartic figuring out exactly what's worth keeping, storing or giving away.
The best approach I've found is to standardize packing into 60L industrial Euro crates. They're inexpensive, very strong, practically waterproof (they'll survive both puddles and rain), and you can even air freight them at close to $100/box if the contents weigh under 32kg. Most of the expense in shipping is volume, and people massively underestimate how much they own. If you can keep things compact and dense, ground/sea freight is inexpensive if you don't have to do it very often, and there is no practical weight restriction.
Furniture only makes sense if you can reclaim 80+% of the void space in items like shelves, or if it completely flat-packs, and if the cost of re-acquisition would be high. Shipping companies usually have a minimum billable volume (say 2 cubic meters). I was able to send an apartment's worth of contents in the same volume that a couch would occupy.
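To put rough numbers on that (the crate size and ~$100/box air rate are from my setup above; the 2 m³ minimum and the apartment volume are illustrative):

    import math

    CRATE_L         = 60    # litres per Euro crate
    AIR_PER_CRATE   = 100   # USD per crate if contents stay under 32 kg (approx.)
    MIN_BILLABLE_M3 = 2.0   # illustrative sea-freight minimum billable volume

    def crates_needed(total_litres: float) -> int:
        # You can't ship a fraction of a crate, so round up.
        return math.ceil(total_litres / CRATE_L)

    total_l = 1200  # e.g. a small apartment's contents, condensed down
    n = crates_needed(total_l)
    actual_m3 = n * CRATE_L / 1000
    sea_billed_m3 = max(actual_m3, MIN_BILLABLE_M3)

    print(f"{n} crates = {actual_m3:.1f} m^3")           # 20 crates = 1.2 m^3
    print(f"sea freight bills {sea_billed_m3:.1f} m^3")  # minimum applies: 2.0 m^3
    print(f"air freight ceiling ~${n * AIR_PER_CRATE}")  # ~$2000

That 1.2 m³ is roughly the couch-sized footprint mentioned above.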
For everything else, either buy quality used things that you can sell without much depreciation, or cheap used things that you don't mind thrifting afterwards.
The real Penn Christmas miracle was getting used tech (laptops/tablets/mp3 players/etc) that was export-controlled. Some students legally couldn’t bring that stuff back to their country and didn’t have time to sell it.
In a city I lived in, bedbugs were common enough that the health department spent all weekend on major move-out dates tagging furniture with bedbug PSAs.
The good stuff is in June when Boston College and other dorms move out for the year. The crap in Allston in September is from yearly tenants in off campus housing, was likely already second hand at least once, and is riddled with bugs. I guess Allston has gentrified, but I assume that just means the bed bugs now have credit cards too.
The weirdest thing about the original article is the author. Like, yeah, you can get some great stuff in the trash. People value money wildly differently, and some people throw out practically new stuff. It boggles the mind. But it also boggles the mind that the author is still so focused on the retail prices of marked-up "luxury" stuff, like they're still solidly wedded to the consumerist mindset. The used/dirty/soggy whatever can be fantastic, but it's certainly not worth anywhere close to its original retail price, especially accounting for your time to find, haul, clean, etc., and how much comparable non-"luxury" brands would cost.
As a Penn grad student I definitely looked at the piles of stuff the undergrads threw out hoping I could get something good from them. (I don't recall if I ever did.)
As undergrad students in the 90s we couldn't forage after moveout day, but we did get tons of cool stuff from the CS building loading dock outbound trash heap. My God they were getting rid of some really strange 70s gear. One time we grabbed an old rackmount tape drive - it was enormous. We disassembled it for fun and the thing I remember the most was the cooling fan. It was a squirrel cage blower driven straight from mains power and it blew so hard you could not keep your hand on the exhaust. On. A. Tape. Drive.
About 15 years ago at this point, a bunch of my friends/labmates and I salvaged enough discarded PCs from the Levine Hall (Penn CS building) loading dock to assemble an entire small renderfarm, which we squirreled away in a corner of the graphics lab and used for learning and playing with RenderMan.