Another way to think about it: a good that we once bought for private use, where it sat around underutilized most of the time, is instead being allocated in data centers where we rent slices of it, allowing RAM to be used more efficiently.
Yes, it sucks that demand for RAM has led to scarcity and higher prices, but those resources moving to data centers is a natural consequence of a shareable resource becoming expensive. It doesn't have to be a conspiracy.
How is this different from Linux? People happily spend hours customizing defaults in their OS. It's usually a point of praise for open source software.
They have 800M weekly active users who have yet to be monetized, and enormous capital costs. It makes sense they'd be looking to raise large amounts of money in an IPO.
A capital investment is predicated on there being future profits. I don't understand why they can't be profitable already. If 800M AU is not a critical mass for flipping on the revenue switch, I can't imagine what is.
So what stops them from monetizing those users today? Why would it be any different in the future if they can't?
I say further limit the free tier and more aggressively push those free users to a paid plan. Raise the prices for Business users and API access. An IPO isn't going to raise the trillion they need to keep running off capital.
Once OpenAI reaches the eventual pricing needed to break even, I suspect we'll see that it no longer makes sense for many of their customers to replace humans with AI after all. As it stands now, their investors are essentially paying OpenAI to put employees of other businesses out of jobs by masking the true costs. The sooner they can reach the sustainable pricing phase the better.
Everyone. People are becoming dependent on ChatGPT. They literally cannot function professionally or even socially without it. They will pay their last 20-30 dollars if needed. It's like a drug, especially when it asks you if you want to follow up/continue.
Everyone also uses Google, YouTube or Instagram. No one would ever pay for any of it though and it is financed through ads. So far it is unclear if this is also a viable option for ChatGPT.
Out of about 2.7 billion users. So about 5% of all users are subscribed to YouTube.
If the same were true, then OpenAI, at 1 billion weekly users, should have about 50 million subscribers. Right now that number sits at 20 million, though, and growth is slowing down. [1]
So people are more willing to pay for YouTube than ChatGPT, and that is ignoring that YouTube still relies largely on ad revenue and can afford to show more and more ads because there are no alternatives. OpenAI has plenty of competitors that would love to offer users free access if ChatGPT were to start showing ads.
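For what it's worth, the arithmetic above checks out; a quick sanity check using the figures as cited in this thread (the figures themselves are from the comments, not independently verified):

```python
# Back-of-the-envelope using the figures cited in this thread.
youtube_rate = 0.05              # ~5% of YouTube's ~2.7B users subscribe
chatgpt_users = 1_000_000_000    # ~1B weekly users
chatgpt_subs = 20_000_000        # ~20M paid subscribers [1]

expected = youtube_rate * chatgpt_users        # 50,000,000
actual_rate = chatgpt_subs / chatgpt_users     # 0.02

print(f"Expected at YouTube's rate: {expected / 1e6:.0f}M")  # 50M
print(f"Actual ChatGPT conversion:  {actual_rate:.0%}")      # 2%
```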
> Everyone also uses Google, YouTube or Instagram. No one would ever pay for any of it though and it is financed through ads.
A lot of people have been paying for YouTube since the introduction of Premium. Google/Facebook don't push for paid versions of these products because the data from billions of "free" users is more valuable to them than payments from millions of paid users.
If Google search were paywalled (pre-AI) the most likely outcome would be a separation of consumers into "premium" customers paying for Google, some people paying for cheaper but not quite as good alternatives, and everyone else getting by with the free alternatives. There would also likely be some kind of enterprise tier for indexing your corporate resources or some such.
There's a reason Adobe is still extracting billions from its ~~victims~~ users despite many great free (or reasonably priced) alternatives existing.
This has nothing to do with what I said. I said they are addicted. The free limits are designed this way. If OpenAI suddenly removed the free plan, I guarantee you a lot of people would buy. They don't have an alternative; they cannot think independently anymore.
At least one of us is inside a bubble. Nobody I interact with regularly uses ChatGPT for anything more than novelty. Even people who used it as a glorified Google for looking things up have reduced their use.
There are definitely big bubbles where everyone has outsourced their thinking to AI. I’d like to think it’s mostly at the lower end of “knowledge work” - think Deloitte, but it seems that even people / orgs that you would expect more critical thinking of are using it uncritically.
Of course this all occurs in a very small segment of society, I think the majority of people don’t really use it, and certainly haven’t moved any of their day-to-day thinking over to it.
As someone who implemented phone verification at a company I worked for, it’s 100% for preventing spam signups intending to abuse free tiers. API companies can get huge volumes of fake signups from “multiplexers” who get around free tier limits by spreading their requests across multiple accounts.
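To illustrate, the gating logic is conceptually just a cap on accounts per verified number; a minimal sketch, not any company's actual implementation (names are made up):

```python
import hashlib

# Cap free accounts per verified phone number so "multiplexers" can't
# fan requests out across many accounts.
MAX_ACCOUNTS_PER_PHONE = 1
accounts_by_phone: dict[str, int] = {}  # in production: a database

def phone_key(phone_e164: str) -> str:
    # Store a hash rather than the raw number. Note: phone numbers have a
    # small keyspace, so an unsalted hash is brute-forceable; this limits
    # casual exposure in a leak, not determined attackers.
    return hashlib.sha256(phone_e164.encode()).hexdigest()

def allow_signup(phone_e164: str) -> bool:
    key = phone_key(phone_e164)
    if accounts_by_phone.get(key, 0) >= MAX_ACCOUNTS_PER_PHONE:
        return False  # this number already backs a free-tier account
    accounts_by_phone[key] = accounts_by_phone.get(key, 0) + 1
    return True
```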
I would caution any reader against generalizing your statement. Just because you used it at your company to limit abuse, and yes that is a lazy approach and 100% what's going on with Anthropic and most API companies, doesn't mean that every company uses phone number gating for this purpose.
And it's not enough to say "well we don't use it for that". One, you can't prove it. And two, far more importantly, by taking and saving the phone number (necessarily, otherwise there's no account gating feature, unless you're just adding fake friction), you expose the user, in an information leak, to the risk of another dot being connected. I would never give my phone number to some rinky-dink company.
Now that said, I don't use lazy pejoratively. Products must launch.
Because bypassing SMS verification is so cheap (a phone number for a one-time validation costs attackers under a dollar; an ongoing number under $10/mo), this approach really only makes sense for ultra-low-value services, where e.g. $0.50 per account costs more than the service itself is worth.
Because of this low-value dynamic, there are many techniques that can be used to add "cost" to abusive users while infringing far less on user privacy: rate limiting, behavioral analysis, proof-of-work systems, IP restrictions, etc.
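For example, per-IP rate limiting is a few lines of code and uses nothing beyond what the server already sees. A minimal token-bucket sketch (parameters are illustrative):

```python
import time
from collections import defaultdict

# Token-bucket rate limiter keyed by client IP: adds "cost" to abusers
# without collecting anything the server doesn't already see.
RATE = 1.0   # tokens refilled per second
BURST = 5.0  # maximum burst size

buckets: dict[str, tuple[float, float]] = defaultdict(
    lambda: (BURST, time.monotonic())  # (tokens, last refill time)
)

def allow_request(ip: str) -> bool:
    tokens, last = buckets[ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1.0:
        buckets[ip] = (tokens, now)
        return False  # over the limit; make the client back off
    buckets[ip] = (tokens - 1.0, now)
    return True
```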
Using privacy-invasive methods to solve problems that could be easily addressed through simple privacy-respecting technical controls suggests unstated ulterior motives around data collection.
If your service is worth less than $0.50 per account, why are you collecting such invasive data for something so trivial?
If your service is worth more than $0.50 per account, SMS verification won't stop motivated abusers, so you're using the wrong tool.
If Reddit, Wikipedia, and early Twitter could handle abuse without phone numbers, why can't you?
Firstly, I can tell you phone number verification made a very meaningful impact. The cost of abuse can be quite high for services with high marginal costs like AI.
Second, all those alternatives you described are not great for user privacy either. One way or another you have to try to associate requests with an individual entity. Each has its own limitations and downsides, so typically multiple methods are used for different scenarios, with the hope that all together it's enough of a deterrent.
Having to do abuse prevention is not great for UX and hurts legitimate conversion; I promise you most companies only do it when they reach a point where abuse has become a real problem, and sometimes well after.
>Firstly, I can tell you phone number verification made a very meaningful impact. The cost of abuse can be quite high for services with high marginal costs like AI.
Nobody has made the argument that it's not a deterrent at all. The core argument is that it's privacy-infringing when it doesn't need to be, and the cost it imposes on attackers is extremely low. If your business is offering a service at a price below its own costs, the business itself is choosing to inflict that cost asymmetry upon itself.
>Second, all those alternatives you described are not great for user privacy either.
This is plainly and obviously false at face value. How would blocklisting datacenter IPs, or doing IP-based rate limiting, or a PoW challenge like Anubis be "also not great" for user privacy, particularly when compared to divulging a phone number? Phone numbers are linked to far more commercially available PII than an IP address by itself is, and PoW challenges don't even require you to log IP addresses. Behavioral analysis like blocking more than N sign-ups per minute from IP address X, or blocking headless UAs like curl, or blocking registrations that use email addresses from known temp-mail providers, is nowhere remotely close to being as privacy-infringing as requiring phone numbers.
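To make that concrete, here is roughly what those checks look like stitched together; a sketch only, with placeholder blocklists you'd source yourself (cloud-provider IP ranges, disposable-email domain lists, etc.):

```python
import ipaddress

# Layered, privacy-respecting signup checks along the lines described
# above. The blocklist entries below are illustrative placeholders.
DATACENTER_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # example (TEST-NET-3)
TEMP_MAIL_DOMAINS = {"mailinator.com"}                      # example entry
HEADLESS_UA_PREFIXES = ("curl/", "python-requests")

def signup_allowed(ip: str, email: str, user_agent: str) -> bool:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in DATACENTER_NETS):
        return False  # signup from a known datacenter range
    if email.rsplit("@", 1)[-1].lower() in TEMP_MAIL_DOMAINS:
        return False  # known disposable-email provider
    if user_agent.lower().startswith(HEADLESS_UA_PREFIXES):
        return False  # obvious headless client
    return True       # none of this requires a phone number
```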
The privacy difference between your stated practice and my proposed alternatives isn't a difference of degree; it's a fundamental difference of kind.
Being generous, this is lazy, corner-cutting engineering: rather than implementing a better control, it imposes an unknown amount of privacy risk on end users by piggybacking off an existing channel that only good-faith users won't forge (the phone number), at the possible expense of those good-faith users' privacy.
Of course, there's no reason to be generous to for-profit corporations - the much more plausible explanation is that your business is data mining your own customers via this PII-linked registration requirement through a coercive ToS that refuses service unless customers provide this information, which is both entirely unnecessary for legitimate users and entirely insufficient to block even a slightly motivated abusive user.
...not that you'd ever admit to that practice if you were aware of it happening, or would even necessarily be aware of it happening if you were not a director or officer of the business.
This makes sense for free tiers of products, but if you provide CC info for a paid tier, you shouldn't also have to provide a phone number. One or the other.
I think people can use stolen / one-time use / prepaid / limited purchase size credit cards fairly easily, too. And you might not find out until after they've racked up a non-trivial amount of costs.
- Account creation usually happens before plan selection & payment. Most users start on the free tier, then add a CC later, either during onboarding or after finishing their trial.
- Virtual credit cards are very easy to create. You can sign up with a credit card that has a very low limit and just use the free-tier tokens.
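One partial mitigation is capping accounts per card rather than per phone number; a sketch, assuming your payment processor exposes a stable per-card fingerprint (Stripe does, on PaymentMethod.card.fingerprint). It won't stop fresh virtual cards, which is the point made above:

```python
# Cap free-tier/trial accounts per card fingerprint. Assumes the
# processor gives a stable fingerprint for the underlying card number;
# a determined abuser can still mint new virtual cards.
MAX_ACCOUNTS_PER_CARD = 2
accounts_by_card: dict[str, int] = {}  # in production: a database

def allow_paid_signup(card_fingerprint: str) -> bool:
    count = accounts_by_card.get(card_fingerprint, 0)
    if count >= MAX_ACCOUNTS_PER_CARD:
        return False  # same card already backs too many accounts
    accounts_by_card[card_fingerprint] = count + 1
    return True
```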
Even after reading this comment I went searching for the button and it took a while to find it.
And the homepage says in bold letters “The PeerTube mobile app for Android & iOS is out!” but it’s not a link! There’s a link further down but it goes to this article where you then have to scroll.
Not every site has to be super conversion optimized but it’s just common sense to put a CTA at the head of an announcement. joinmastodon.org gets it!
FWIW, I have USAA and had almost exactly the same thing happen to me in 2020 - car stolen and totaled. They paid me a really fair cash settlement for the 2-year-old Crosstrek.
It probably helped that it was a cut-and-dried case - the thief took the keys from inside my house, and I called the cops seconds after I saw him drive away with it.
> The West Hollywood, California-based company also gave a severance package to staff who were unable to relocate, in what the CWA alleged was an attempt “to silence workers from speaking out about their working conditions,” according to a statement from the organization.