spenvo's comments

I'm about to launch a new (now free) version of my Mac app, CurrentKey, which helps you organize workflows across macOS Spaces and track how you use your Mac. https://www.currentkey.com It had been a subscription app (4.5 stars) pulling in a few thousand dollars per year, but I recently decided to broaden its appeal and make it free. The new version will launch within a day or two (the launch build is "Waiting for Review" in App Store Connect).


I'm working on Argon Chess, a deterministic chess variant with some degree of cheat resistance (it's hard to describe to chess engines like Fairy Stockfish) and tons of variety. A week ago, I added a way to play friends online (a Discord Activity) and a simple "Play a Dumb AI" feature on its website. You can also print the cards for free for offline play. https://argonchess.com/


On that $13.5B: how much of their massive spend on datacenters is obscured through various forms of special purpose vehicle (SPV) financing? (https://news.ycombinator.com/item?id=45448199)


I thought to post this after seeing it in the news today:

OpenAI CEO Sam Altman warned the financial industry of a “significant impending fraud crisis” because of AI voice print fraud.

https://finance.yahoo.com/news/openais-sam-altman-warns-ai-1...

I guess if it takes someone like Sam Altman saying something obvious for bank execs to listen (I'm sure security people at these companies have already told them these things), then so be it.


OpenAI's tight spot:

1) They are far from profitability.

2) Meta is aggressively making their top talent more expensive, and outright draining it.

3) Deepseek/Baidu/etc. are dramatically undercutting them.

4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.

5) Altman is becoming less likeable with every unnecessary episode of drama, and OpenAI carries most of the stink from the initial (valid) grievance that "AI companies are stealing from artists." The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us."

6) Its original, core strategic alliance with Microsoft is extremely strained.

7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.

8) Musk is sniping at its heels, especially through legal actions.

Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is that they drop the frontier-model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns, that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:

Brand loyalty from the average person to ChatGPT, and OpenAI successfully eating into Google's search market. Their numbers there have been truly massive from the beginning and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.


They have the majority of the attention and market cap. They have runway, and that is the most important thing. Others don't have the users to test developments at grand scale.


I'm not so sure they have runway.

xAI has Elon's fortune to burn, and SpaceX to fund it.

Gemini has the ad and search business of Google to fund it.

Meta has the ad revenue of IG+FB+WhatsApp+Messenger.

Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using their APIs.

To stay at the forefront of frontier models, you need to keep burning money like crazy. For OpenAI that means raising rounds repeatedly, whereas the tech giants can simply fund it from their fortunes.


They definitely have a very valuable brand name even if the switching costs are low. To many people, AI == ChatGPT


But that's just one good marketing campaign away from changing.


Ok, others have more runway, and less research talent.

OpenAI has enough runway to figure things out and place themselves in a healthier position.

And come to think of it, losing a few researchers to other companies may not be so bad. Like you said, others have cash to burn. They might spend that cash more liberally and experiment with bolder, riskier products, and either fail spectacularly or succeed exponentially. And OpenAI can still learn from those experiments and benefit, even though it was never their cash.


Good analysis; my counter is that OpenAI has one of the leading foundational models, while Meta, despite being a top-paying tech company, continued to release subpar models that don't come close to those of the other big three.

So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?


For one thing, all the trade secrets going from OpenAI and Anthropic to Meta.


> how do they prevail through all of this and become a sustainable frontier AI lab and company?

I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.


The biggest problem OAI has is that they don't own a data source. Meta, Google, and X all have existing platforms for sourcing real time data at global scale. OAI has ChatGPT, which gives them some unique data, but it is tiny and very limited compared to what their competitors have.

LLMs trained on open data will regress because there is too much LLM-generated slop polluting the corpus now. In order for models to improve and adapt to current events, they need fresh human-created data, which requires a mechanism to separate human from AI content, which in turn requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human-created content.


OAI has a deal to use Reddit's corpus of data.

They will either have to acquire a data source or build their own moving forward imo. I could see them buying reddit.

Sam Altman has also owned something like ~10% of Reddit's stock since it went public.


The flip-flop on regulation sounds like: "please regulate us (in a way that builds a moat for incumbents, out of fear of an imagined future doom scenario)" and "please don't regulate us (in a way that prevents us from stealing and causing actual harm now)."


If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.

I can't imagine how they will compete if they need to continue burning and needing to raise capital until 2030.


The interest and actions are there now: hiring Fidji Simo to run "applications" strongly indicates a move to an ad-based business model. Fidji's meteoric rise at Facebook came because she helped land the pivot to the monster business that is mobile ads on Facebook, and she was supposedly tapped as Instacart's CEO because their business potential lay more in ads for CPGs than in skimming delivery fees and marking up groceries.


Maybe employees realised this and left OpenAI for this reason.


OpenAI has no shot without a huge cash infusion to offer similar packages. Meta opened the door.


ah snap, didn't know it was a dupe. Now I've hidden it, tx


'hide' only applies for you. Not for everyone.


So, ChatGPT-like tools are accelerating (or locking in) the decline of community sites like Stack Overflow, which used to be an indispensable tool both for junior developers troubleshooting issues and for technology builders seeing where users were hitting pain points with their tech (so they could improve it).

A couple of major ramifications from its decline:

#1 (the bigger one): The decline of Stack Overflow-like sites will (imo) degrade or cap the quality of ChatGPT-like tools themselves on questions pertaining to code post-2022. I doubt that advances like "reasoning" or other AI breakthroughs are going to fully make up for the oncoming drought of quality training data. Sites like SO were a crutch that companies who underinvested in documentation leaned on (their attitude essentially being: "we've done enough, let the coders figure it out amongst themselves, b/c it works"). I doubt companies are going to suddenly realize they need to invest more in solid docs (for both the developers and AI companies). -- While many initially saw this coming (AI killing the web it trained on), now we have pretty dramatic data showing that it has happened.

#2: questions about new technologies and their shortcomings will be asked in the dark, giving AI companies valuable data that used to otherwise exist in public forums. Among other things, this will make it harder for tech-builders to know what to improve, therefore preventing it from improving as quickly, and keeping people more reliant on AI tools for troubleshooting. This seems to be another example of AI companies _creating_ problems that they are best positioned to "solve".


Ah good, so one was finally not marked Dupe or Dead. I believe even that thread was marked 'dead' for the first 50 minutes or so


This is new reporting that a hacker has breached TeleMessage, the parent company of TM SGNL and other modified chat apps. The breach includes live data being passed across servers in production.


It's new reporting but it's already in the frontpage thread about the existing reporting.


There is new reporting that a hacker has breached the parent company, TeleMessage, including live data being passed across servers in production.

https://www.404media.co/the-signal-clone-the-trump-admin-use...

It was marked as a DUPE of this discussion (https://news.ycombinator.com/item?id=43890034), despite being a major new development. Hopefully that decision can be reconsidered.


You can just link the new development in an ongoing story that's already on the front page, just like you did. The alternative would be a second front page thread which splits the discussion and is worse all-round.


That's a fair point, and it's your call - however, if the new (major) development is covered in this way then 1) users on the front page won't see mention of it at headline level and 2) the discussion of that development on HN will be affected by/limited to the time-decay of a post that is 12 hours older. I understand that there are tradeoffs at play; it really comes down to whether the development at hand is big enough to justify another post, and, again, that's your call.


I concur. An analysis of potential risks and vulnerabilities is a different beast from actual proof that the app has indeed been hacked. I call for the other discussion to be restored.

Edit: Wanted to respond to the top-level comment but you get the point.


It's not my call, I'm just explaining how HN typically works. If you want some story handled differently, you should send an email to hn@ycombinator.com. But 'two or more things about the same thing on the fp at the same time' is a big barrier to overcome, it almost never happens.

There is mod commentary on 'people might miss things because of the title' as well; it's mostly 'it's ok for people to click through the story or thread to figure things out', and that's also a fairly longstanding 'how HN works most of the time' thing.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

The operating assumption here is that people are smart enough to follow the developments in the story themselves - in the thread and outside it.


> The data includes apparent message contents; the names and contact information for government officials; usernames and passwords for TeleMessage’s backend panel; and indications of what agencies and companies might be TeleMessage customers.


http://archive.today/HqMvy

It's insane that this isn't front page news. This takes the original Signalgate breach to an order of magnitude higher level of severity.


There seems to be a coordinated and consistent campaign to bury submissions from 404 Media on HN. Hopefully something can be done about that, too.


In August last year I got this from dang when reporting a dead 404 link: "The site 404media.co is banned on HN because it has been the source of too many low-quality posts and because many (most?) of their articles are behind a signup wall."

Not that I've really seen the low quality, and the signup requirement doesn't stop other domains. There are quite a few stories that originated from 404, so I hope HN gets over whatever it was that annoyed them originally.


The main issue is the (sometimes) hard signup wall. I've been a moderator on HN for longer than 404media has existed, and I know from experience that this changes from time to time or article to article. Other paywalled sites that appear on HN (WSJ, NYT etc) have a porous paywall; you can (almost) always get around it by using an archive site like Archive.today.

If it's a good article (contains significant new information and can be a topic of curious conversation) and a paywall workaround works for that article, we'll happily allow it.


If they do their own, original, investigative reporting, you may want to be a bit more permissive.


Since HN doesn't really facilitate any workarounds anyway and we've been doing manual archive links and content reposting as needed in other cases... I suspect we can handle 404 as well as a community.


Even porous paywalls can have a marked effect on story performance on HN.

The New York Times tightened its paywall markedly in August 2019, with a net effect that appearances in the top-30 stories on HN's front-page archive (the "Past" links in the site header) fell to ~25% of their previous level.

I'd asked dang at the time if HN had changed any of its own processes. Apparently not.

I suspect then that this reflects frustrations and/or inability to access posted articles behind the paywall.

See: <https://news.ycombinator.com/item?id=36918251> (July 2023)
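For anyone wanting to sanity-check numbers like these, the public HN Algolia search API is one way to approximate them. A minimal sketch, with the caveat that the API has no true "front page" filter - the `front_page_query_url` helper and the points threshold used as a proxy are my own assumptions, not how the above figures were computed:

```python
import urllib.parse

API = "https://hn.algolia.com/api/v1/search"

def front_page_query_url(domain, since_ts, until_ts, min_points=100):
    """Build an HN Algolia search URL counting stories mentioning
    `domain` in a Unix-time window, using a points threshold as a
    rough proxy for front-page reach."""
    params = {
        "query": domain,           # matches the domain string in the story
        "tags": "story",           # stories only, no comments
        "numericFilters": (
            f"created_at_i>{since_ts},"
            f"created_at_i<{until_ts},"
            f"points>{min_points}"
        ),
        "hitsPerPage": 0,          # we only need nbHits from the response
    }
    return API + "?" + urllib.parse.urlencode(params)

# Example: NYT stories in 2019 (before/after the August paywall change)
url = front_page_query_url("nytimes.com", 1546300800, 1577836800)
```

Fetching that URL and reading the `nbHits` field of the JSON response gives a story count per window; comparing windows before and after a paywall change gives a rough version of the drop described above.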


How does this happen when signal itself is open source?


They used an internal fork delivered via MDM. There are no guarantees that Signal can make about the software running on those phones and per the reports it’s a lot of phones.

