As a worker at one of the companies above, I can tell you that we aren't willing to do just "whatever" to win contracts. We have real responsible-AI reviews and so on. We would not just hand over data to the US government. From the outside, Palantir doesn't seem to work that way.
But the issue is (and I am not someone saying we should or can throw everything out) that if the US government demands it, you have to hand over the data, right? If your HQ is in the US? Palantir, for me, is worse because of what they are and how they communicate, as you say, but all of these companies, when compelled by the courts, have to hand data over, right?
Not sure why you are getting downvoted, but this IS the key worry: that people lose contact with the code and really don't understand what is going on, increasing "errors" in production (for some definition of error), which results in much more production firefighting, which in turn reduces the amount of time available to write code.
Losing contact with the code is definitely on my mind too. Just as writing can be a method of thinking, so can programming. I fear that only by suffering through the implementation will you realise the flaws of your solution. If an LLM does the implementation, you are robbed of that opportunity and produce a worse solution.
Still, I use LLM-assisted coding fairly frequently, but this is a nagging feeling I have.
Very soon, because clearly OpenAI is in very serious trouble. They are at scale, have no business model, and face a competitor that is much better than them at almost everything (ads, hardware, cloud, consumer, scaling).
There have been decades of propaganda in the United States about how unions destroy jobs, and most software engineers have grown up in those decades.
I'm not trying to argue that unions are the exact right answer (perhaps something like workers' councils would be better), but the underlying issue is that collective action in the United States has been effectively demonized for a very long time (going back to blaming unions for our uncompetitive cars vs. Japan).
A neutral observation: the pro-union camp really needs better messaging if it wants any hope of overcoming these objections.
Nearly every pro-union discussion I see online, or even a politician speaking to a crowd, feels like full-on preaching-to-the-choir mode, where they don't even consider how to address anyone skeptical of unionization. It's always presented as the obvious choice. Any skepticism or critical questions are dismissed as the result of consuming propaganda (like the comment above).
If the hardcore pro-union people want to get anywhere, they need to stop treating anyone with critical questions or skepticism as being misinformed or the victim of propaganda.
Speakers like Pete Buttigieg are a good model for addressing mixed audiences without alienating the other side right off the bat. Not everyone is going to agree with him, but he does a much better job than most of speaking to a mixed audience as a group of people with differing opinions.
It's almost like all of the forms of communication and media people pay attention to are owned by billionaires with a vested interest in promoting anti-union views.
As a group we're probably the most profoundly ignorant people on the planet when it comes to labor relations. We can't even reason about this because we (again, as a group) have practically no experience and even less interest in the subject.
The unions-vs.-Japan issue is a perfect example because you only need to sit in the cars both countries were making at the time to understand why we were uncompetitive.
There have also been decades of corruption in management (see donations to ballrooms), and yet nobody is saying it will take decades to overcome management.
The problems with management aren't the result of any one person - it is the ownership class, their lack of any feeling of societal obligation, the lack of consequences for their actions, and their ownership of media and messaging.
I think streaks are a good thing (consistency) if you push the user to look at them in aggregate (a la GitHub's green contribution squares), not in terms of punishment for missing a day (i.e., a single number).
I like how Anki does it for example.
Also, guide the user to find a non-burnout rate. It is easy to set yourself up for destruction with learning apps, and I liked how Anki told me "slow down, cowboy" about my new-card rate, because I hadn't worked out that going too fast would result in an avalanche of review cards two weeks later.
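To make the avalanche concrete, here's a toy simulation in Python. This is not Anki's actual scheduler (which adjusts each card's interval based on your answers); the interval ladder and the 50-cards-a-day rate are made-up assumptions. It only shows how a steady new-card rate compounds into review load:

    # Toy model of review load, NOT Anki's real algorithm.
    # Assumes every review passes and uses a fixed interval ladder.
    NEW_CARDS_PER_DAY = 50            # hypothetical "too fast" rate
    INTERVALS = [1, 3, 7, 14, 30]     # days after a card is introduced (assumed)
    HORIZON = 45                      # days to simulate

    reviews_due = [0] * (HORIZON + max(INTERVALS) + 1)
    for intro_day in range(HORIZON):
        for gap in INTERVALS:
            reviews_due[intro_day + gap] += NEW_CARDS_PER_DAY

    for day in (1, 7, 14, 30):
        print(f"day {day:2d}: {reviews_due[day]} reviews due")

Under these assumptions the daily review load climbs from 50 on day 1 to about 250 by day 30, on top of the 50 new cards you keep adding. In real Anki, failed cards come back at short intervals, so the pile-up can be even steeper.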
Something doesn't square about this picture: either this is the best thing since sliced bread and it should be wildly profitable, or ... it's not, and it's losing a lot of money because they know there isn't a market at a breakeven price.
They're losing money because they are in a training arms race. If other companies weren't training competitive models, OpenAI would be making a ton of money by now.
They have several billion dollars of annual revenue already.
I think it's also a cultural thing... I mean, it takes time for companies and professionals to get used to the idea that it makes sense to pay hundreds of dollars per month to use an AI, and that that expense (which for some is relatively affordable and for others can be a serious one) actually converts into much higher productivity or quality.
Google is always going to be training a new model and is doing so while profitable.
If OpenAI is only going to be profitable (i.e., have an actual business model) if other companies aren't training a competitive model, then they are toast. Which is my point. They are toast.
Are Google (and Meta) funding AI training from the profits of their ad businesses, eating the losses in order to prevent pure-AI companies from making a profit? And is this legal?
In principle, I mean. Obviously there's a sense in which it doesn't matter if they only get fined for cross-subsidising/predatory pricing/whatever *after* OpenAI et al run out of money.
I do think this is a bubble and I do expect most or all the players to fail, but that's because I think they're in an all-pay auction and may be incentivised to keep spending way past the break-even point just for a chance to cut their losses.
Do we know Google is operating at a loss? It seems most likely to me that they are paying for model development straight out of search, where the model is employed on almost every query.
Fair question. That's the kind of "who knows?" which might make it hard to defeat them in litigation, unless Google staff have been silly enough to write it down in easy-to-find-during-discovery emails.
But as a gut-check, even if all the people not complaining about it are getting use out of any given model, does this justify the ongoing cost of training new models?
If you could delete the ongoing training costs of new models from all the model providers, all of them look a lot healthier.
I guess I have a question about your earlier comment:
> Google is always going to be training a new model and are doing so while profitable.
While Google is profitable, or while the training of new models is profitable?
Anki IS amazing and DOES suck at the same time. I am very glad it exists, and this is not meant as a dig at the maintainer, for whom I am very grateful.
In particular, the UX is a mess. It is very hard for a beginner, and frankly, whenever you want to do something new, the difficulty makes it feel like you are in an escape room.
Once you are over that hump and just internalize its warts, it is AMAZING, but it IS a huge hurdle for a lot of people.