Hacker News: jillesvangurp's comments

My home country, the Netherlands, became a republic after a long war with the Spanish, who controlled the territory from Spain after inheriting it through the various wars and conflicts that divided up the remains of the Carolingian empire. The Austrians ended up with a lot of states across what is now Germany and Belgium, and France emerged as a country as well.

The Netherlands was too far away from the courts in Spain for them to govern effectively. Travel time was measured in weeks. So, remote regions like that necessarily had a large degree of autonomy. That became the basis for power to centralize around Amsterdam, which was favorably located for trading. There were a lot of grievances over religious issues (Catholicism vs. Protestantism), taxation, etc. But the Spanish failure to project power from a distance had everything to do with the centralized nature of their empire and its long communication channels.

In the so-called Golden Age (the 17th century), the Netherlands got filthy rich on global trade and expansion. Information and knowledge flowed to and from Amsterdam from all over the world.

The Dutch naval forces dominated the North Sea for quite some time, and it's only later that the British emerged as the better/bigger empire. Navies and ships were the fastest way to move information around at the time, until the British finally upgraded to cables and telegrams, which enabled them to maintain colonies on all continents. They really nailed command and control across their empire for a while.

The Romans had their roads to move armies and information. Shipping and navigation technology leveled that up from the 1400s or so. These days, low latency communication is a commodity of course.


I don't see a bubble, I see a rapidly growing business case.

MS Office has about 345 million active users. Those are paying subscriptions. IMHO that's roughly the total addressable market for OpenAI for non-coding users. Coding users are another 20-30 million.

If OpenAI can convert double-digit percentages of those into $20 and $50 per month subscriptions by delivering AI that works well enough, they should be raking in billions per month, adding up to close to the projected 2030 cash burn per year. And that would be just subscription revenue. There is also going to be API revenue. And those expensive models used for video and other media creation are going to be indispensable for media and advertising companies, which is yet more revenue.

The office market at $20/month is worth about 82 billion per year in subscription revenue. Add maybe a few premium tiers at $50/month and $100/month, and the projected 130 billion per year cash burn by 2030 suddenly seems quite reasonable.
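As a back-of-envelope check on that figure (using the ~345 million Office-scale user base and $20/month price point from above; these are the comment's assumptions, not OpenAI's actual numbers):

```python
# Back-of-envelope: annual subscription revenue at Office-scale adoption.
users = 345_000_000      # ~MS Office active subscriber count (assumed)
price_per_month = 20     # USD, base tier (assumed)

annual_revenue = users * price_per_month * 12
print(f"${annual_revenue / 1e9:.1f}B per year")  # ≈ $82.8B
```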

I've been quite impressed with Codex in the last few months. I only pay $20/month for that currently. If that goes up, I won't lose sleep over it, as it is valuable enough to me. Most programmers I know are on some paid subscription to that, Anthropic's Claude, or similar. Quite a few spend quite a bit more than that. My ChatGPT Plus subscription feels like really good value to me currently.

Agentic tooling for business users is currently severely lacking in capability. Most of the tools are crap. You can get models to generate text, but forget about getting them to format that text correctly in a word processor. I'm constantly fixing bullets, headings and whatnot in Google Docs for my AI-assisted writing. Gemini is close to ff-ing useless with both the text and the formatting.

But I've seen enough technology demos of what is possible to know that this is mostly a UX and software development problem, not a model quality problem. It seems companies are holding back from fully integrating things mainly for liability reasons (I suspect). But unlocking AI value like that is where the money is. Something similarly useful to Codex for business usage, with full access to your mail, drive, spreadsheets, slides, word processors, CRMs, and whatever other tools you use, running in YOLO mode (which is how I use Codex in a virtual machine currently, with --yolo). That would replace a shit ton of manual drudgery for me. It would be valuable to me and lots of other users. Valuable as in "please take my money".

Currently, doing stuff like this is very scary because it might make expensive/embarrassing mistakes. I do it for code because I can contain the risk to the VM. It actually seems to be pretty well behaved; the VM is just there to make me feel good. It could do all sorts of crazy shit, but it mostly just does what I ask it to. Clearly the security model around this needs work and instrumentation. That's not a model training problem though.

Something like this for business usage is going to be the next step in agent-powered utility that people will pay for at MS Office levels of user numbers and revenue. Google and MS could do it technically, but they have huge legal exposure via their existing SaaS contracts and seem scared shitless of their own lawyers. OpenAI doing something aggressive in this space in the next year or so is what I'm expecting to happen.

Anyway, the bubble predictors seem to be ignoring the revenue potential here. Could it go wrong for OpenAI? Sure, if somebody else shows up and takes most of the revenue. But I think we're past the point where that revenue looks unrealistic. Five years is a long time for them to get to 130 billion per year in revenue; ChatGPT did not exist five years ago. OpenAI can mess this up by letting somebody else take most of that revenue. The question is who? Google, maybe, but I'm underwhelmed so far. MS seems to want to but appears unable. Apple is flailing. Anthropic seems increasingly like an also-ran.

There is a hardware cost bubble though. I'm betting OpenAI will get a lot more bang for its buck in terms of hardware by 2030. It won't be Nvidia taking most of that revenue; they'll have competition and enter a race to the bottom on hardware cost. If OpenAI is burning 130 billion per year, it will probably be getting a lot more compute for it than currently projected. IMHO that's a reasonable cost level given their total addressable market. They should be raking in hundreds of billions by then.


With many traditional automakers, you have to wonder at this point whether they'll still be around in ten years. Companies like Ford, Toyota, BMW, etc. are not looking so great. They each face the dilemma of one market shrinking by double-digit percentages year on year (ICE cars) while another grows by the same percentage (EVs).

The way Toyota and Ford deal with this is by reducing investments in EVs while at the same time meeting increased EV demand by heavily leaning on other companies to make EVs for them. Ford is working with VW and Renault in Europe. Toyota is working with big Chinese manufacturers in China. So is Ford. BMW has had some success with its recent EV models, but it is taking big hits in demand for its overall products in markets like the US and China.

The US is clearly lagging the EU and China when it comes to electrification. It's not at all clear that Tesla is doing much better. Their market share has tanked in markets where EVs do well (China, the EU). However, Tesla does have its own tech and still plenty of money. Where other manufacturers are leaning on outside suppliers, Tesla is pushing its own technology hard for just about everything, including self-driving cars and batteries. It's a different strategy at least, and one that isn't dependent on the ICE market doing well or on Chinese manufacturers doing all the technical heavy lifting.

Tesla's stock price is based on investor expectations that some of those bets work out eventually. Even if a lot of that stuff seems to be struggling right now, it's too early to write all of it off as failed. The 4680 is still expected to be a big part of the Semis Tesla is expected to finally start mass producing in 2026. Self-driving tests are still continuing and might eventually add up to something that works well enough. And it's also a relevant format for LFP-based chemistries.

The problem all of them have right now (especially Tesla) is that the Chinese are moving full steam ahead and doing really well on technology and growth, including things like self driving and of course batteries. The 4680 seems like old news when solid state is happening and chemistries other than NMC are starting to dominate. And FSD, while impressive, has plenty of competition from other vendors at this point. Rivian has its own version. So do several Chinese vendors. And of course Waymo is actually moving lots of passengers autonomously at this point.


AI employees are not employees. They are tools, not people.

The fallacy here is applying a closed-world assumption where the amount of code doesn't change but more of it is written by AIs and sold for the same dollar amount, and therefore less tax gets paid. That assumption is wrong, and so are the conclusions and sand castles built on it.

The reality is that whenever we give programmers better tools, we get more programmers, not fewer. And then they start creating even more software than before. The value of software drops, but the amount spent on software grows. That's because economies grow when you drop the cost of something and create more demand for that stuff; in this case, software. Where in the past some companies might not have bothered with an app or a website or automation, many now do have some of that stuff. And now they get to raise the ambition level and maybe see if they can automate some more of their internal processes.

Most companies won't do that themselves. They'll pay others to do this for them. And those people and companies will compete with each other on who does the best job most cheaply for a given return on investment. Which is nothing new. The winners will likely be leaning heavily on AI tools to earn handsomely for solutions that deliver some kind of value. They'll want better/smarter apps, deeper integrations with stuff, more automation, a sprinkling of AI, and whatever else makes their companies run better and earn more money.

AI tools themselves are already a commodity and just a means to an end. Anyone can use them, but only some know how to use them well. The skill is in understanding how to use them, what to fix, where to apply them, etc. Those wielding them best will line up more happy customers and revenue. And then they pay their taxes on the value they created.

"They" here being the companies that increase their profits, the software companies that charge for helping them do that, the people employed by those companies, and the AI infrastructure and other suppliers that are part of the solution. Economies grow over time, and so do the taxes: VAT, profit tax, income tax, etc.

There's a lot of work that will need to happen over the next few decades to pull a lot of this planet's industries out of the last century. Anyone who believes it's all going to be AIs doing that by themselves, more or less unprompted, is dreaming. This is going to be a lot of work involving a lot of investment for potentially very big returns. A lot of that work wasn't happening because it was too hard or expensive. It just got cheaper, so more of it will start happening now. There's plenty left to do.


I agree. It's not like we're ever going to get to a state where we say "oh wow, all potential work is done, there's literally nothing left to do".

Like pretty much every technical innovation in history, when we have access to more tools, we just figure out how to solve bigger problems. People might have felt bad for horse breeders who lost out when planes, trains, and automobiles became ubiquitous, but people adapted around it. Now people can work and travel around the world, and there are industries around all these things. It's generally applied to parallelism, but I think it applies here: https://en.wikipedia.org/wiki/Gustafson%27s_law

While I've had my issues with the "vibe coding" performance right now, ultimately if I can get something to handle the boring and tedious parts of programming, then that frees up time for me to focus on stuff that I find more fun or interesting, or at the very least it frees me up to work on more complicated problems instead of spending half a day writing and deploying yet another "move stuff from one Kafka topic to another Kafka topic" program.


I think we'll see a lot more of this in the coming months. A similar recent example was Anthropic buying Bun, also for an undisclosed amount.

Anthropic and Bun shared a major investor. Looking at this, it's not clear whether Meta actually invested in Manus. But Manus clearly isn't showing many signs of turning into a unicorn, meaning its investors would have been looking for some kind of exit. An acquisition by Meta counts as a win. Meta has a lot of fingers in a lot of pies in terms of investors, and big companies like that helping out friendly investors is quite common. They all need each other in different contexts.

The reason I'm expecting more of this is that investors have been sinking a lot of money into all sorts of AI startups in the past few years. Most of those will most likely not stay independent or make it to an IPO. Short of letting them fail, acquisitions for undisclosed amounts are a nice way out for investors and founders to liquidate their investments and save some face in the process.

Meta gets some fresh talent and tech; investors get some return on investment and can claim some kind of exit happened. I doubt a lot of cash changed hands here; share swaps are a common tool in deals like this.

It will be interesting to see what Meta does with Manus. I don't expect they'll do a lot with it. Just speculating but I just don't see a great fit here for Meta. Unless it is to breathe some life into their Llama strategy.


This goes in the right direction, but it could go further. Types are indeed nice. So why use a language where using them is optional? There are many reasons, but most have to do with people and their needs/wants rather than tool requirements. AI agents benefit from good tool feedback, so maybe switch to languages and frameworks that provide plenty of that, and quickly. Switching used to be expensive because you had to do a lot of the work manually. That's no longer true; we can make LLMs do all of the tedious stuff.

That includes using more rigidly typed languages, making sure things are covered with tests, using code analysis tools to spot anti-patterns, addressing all the warnings, etc. That was always a good idea, but we now have even fewer excuses to skip it.
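As a tiny, hypothetical illustration of the tool feedback rigid typing buys: the annotations below let a checker such as mypy verify call sites and return types without running anything, which is exactly the fast, precise feedback loop an agent can act on (the function itself is made up for the example).

```python
def total_cents(prices: list[float]) -> int:
    """Sum dollar prices and return whole cents."""
    # The annotations make the contract machine-checkable: pass a string,
    # or return a float, and a type checker flags it before anything runs.
    return round(sum(prices) * 100)

print(total_cents([1.25, 2.50]))  # 375
```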


I use an O2 DSL connection in Berlin. The domains I tested seem to resolve fine. And you can of course configure an alternate DNS, which apparently I hadn't done yet on my new laptop. So that is fixed now. Mostly that's just a performance fix: operator DNS tends to be a bit slow to respond, and it's nice to get back a few milliseconds. But I also don't mind my operator not spying on me.

Of course I also use Firefox, which mostly bypasses the system DNS entirely and uses DNS over HTTPS.


I'm sure the Chinese, Russians, and other adversaries of the west will welcome any intentional weakening of network security to "protect children".

Any back doors, crippled encryption, etc. are a way in for their intelligence services. I find it baffling that politicians are so careless with their national sovereignty. It's especially worrying that a lot of populist support for this nonsense is indirectly supported by the aforementioned adversaries. There's a well documented history of especially Russian and Chinese propaganda aimed at supporting fringe populist parties. The agenda with that is complex but it isn't necessarily with friendly intentions.

Both Russia and China have isolated their own populations from the normal internet, and their countries effectively run on centralized infrastructure where private VPNs are no longer allowed and traffic is monitored, filtered, and analyzed. Additionally, China especially has long targeted academic and enterprise network security for industrial espionage reasons. Weak government security has caused a few embarrassing situations, especially across EU governments (e.g. Germany), with scandals related to over-reliance on Chinese technology for telecommunications (Huawei) and components for energy, automotive, etc.

The point here is that those countries calling for this the most are also the most at risk of being compromised like this.


> There's a well documented history of especially Russian and Chinese propaganda aimed at supporting fringe populist parties. The agenda with that is complex but it isn't necessarily with friendly intentions.

You can add the USA to that list now, who follow the exact same strategy in the EU.


Decouple your planning cycles and development cycles. You develop at a constant pace, release whenever it is time/convenient to release. You plan regularly.

Planning is hard, but not doing it is not a great plan. Conflating development cycles and planning cycles, which is what a lot of teams end up doing with sprints, sets the pace either too aggressively or not aggressively enough. If it's too aggressive, you end up shipping stuff that isn't ready. If it's not aggressive enough, you end up sitting on ready-to-ship code for too long.

In a company with multiple teams, planning gets harder, especially if they span multiple time zones. Company-wide sprints are a thing in some companies, but they're not necessarily very effective or scalable.

Calendar-driven planning cycles, where you ship whatever is in a shippable state, are much more scalable and predictable. A lot of large OSS projects (most of them, actually) practice this, and it works in large companies too. It allows teams to self-organize around known deadlines and work towards them.

That doesn't mean there is no planning, but it is acknowledged that plans sometimes don't work out and that that's generally not a reason to stop a release train. If some planned thing isn't ready, park it on a branch and try to get it in next time. Many OSS projects are actually very strict on this and ship regularly, like clockwork, at a scale and quality level that puts most enterprise teams to shame. A lot of the large companies typically involved with such OSS work do this internally as well. They are too large to orchestrate company-wide sprints, so they rally around the calendar instead.

It doesn't actually exclude some teams in such contexts using e.g. Scrum or other agile methodologies; it just doesn't require it. And if you know your agile history, a lot of the Agile Manifesto signatories were very much in favor of teams electing to use an agile methodology rather than having one imposed, as is the practice in a lot of companies. It's just that a lot of OSS teams don't seem to bother with that.


There are a couple of things that are being glossed over:

Hardware failures and automated failovers. That's a thing AWS and other managed hosting solutions handle for you. Hardware will eventually fail, of course. In AWS this would be a non-event: it fails over, a replacement spins up, etc. Same with upgrades and other maintenance.

Configuration complexity. The author casually outlines a lot of fairly complex design involving all sorts of configuration tweaks, load balancing, etc. That implies skills most teams don't have. I know enough to know that I'd have quite a bit of reading up to do if I ever decided to self-host PostgreSQL. Many people would make bad assumptions about things being fine out of the box because they are not experienced PostgreSQL DBAs.

Vacations/holidays/sick days. Databases may go down when it's not convenient for you. To mitigate that, you need several colleagues who are equally qualified to fix things while you are away from the keyboard. If you haven't covered that risk, you are taking on real risk. In a normal company, at least 3-4 people would be a good minimum. If you are just measuring your own time, you are not being honest, or not being as diligent as you should be. Either it's a risk you are covering at a cost or a risk you are ignoring.

With managed hosting, covering all of that is what you pay for. You are right that there are still failure modes beyond that which need covering. But an honest assessment of the time you and your team put in for this adds up really quickly.

Whatever the reasons you are self hosting, cost is probably a poor one.

