You'd think the experience of ChatGPT would have taught them that the average person isn't blindly worshipping the output of LLMs, and has the requisite critical thinking skills to keep the useful bits and discard the rest. Instead we get this belittling rebuttal highlighting the immense danger of letting the public use a service that may, occasionally, be wrong, as though humans don't encounter and filter out incorrect information every day of their lives (not least on Google's own existing services). Instead of pontificating about "earning the public trust", maybe stop positioning yourself as the unassailable source of truth and give people tools that will materially improve their lives today, if you really want to follow through on your mission.
I do think they sort of have a point... (lay) people are used to engaging their critical thinking when using computers to interact with other people.
They aren't used to thinking about a computer's own output in this sort of fuzzy way, as it hasn't been a problem until recently... I can't see how my parents could suddenly conceptualise the idea that their computer may be lying to them, or is confidently confused and wrong.
I submit that people could learn, provided the model output is framed appropriately. It does come across as rather patronizing from Google. Little wonder given the company’s DNA. Also just very risk averse, which goes without saying.
> You'd think the experience of ChatGPT would have taught them that the average person isn't blindly worshipping the output of LLMs
They aren't wrong though. Imagine Google had released an LLM-based chatbot first. There'd have been a lot of negative coverage when (not just if) things went wrong: bias, inaccuracy, misinformation, dangerous information. From Google, people would expect to see "correct" information, and would expect it to be unbiased and safe.
For example, imagine if Google's bot gave out the recipe for making meth [1]. Every big publisher would be talking about the dangers of AI, and how Google is being irresponsible. It's not just Google; you could replace Google with Microsoft and it would still apply.
ChatGPT is a beta service, so users understand it can be wrong. Turn it into a product from another company and it becomes a liability issue.
It seems as though you've conflated the management of perception with the practice of ethical research.
Google is very rich and that enables them to control resources and build things that make them richer. They're in a positive feedback loop, and their job is to protect their advantage. Yep, sometimes capitalism gets out of hand.
It's the government's job (on behalf of their citizens) to apply the brakes, and Google has become pretty skilled at managing perceptions to avoid the brakes and maintain their advantage. This document is one part of their strategy.
So, apologizing for Google on the grounds of "they'd get bad press if they shared their toys" rings a bit hollow. They've dressed up self-interest as social responsibility, and exploited this weird, radical left-wing moment of ours to their advantage.
I think Google is hobbled by this belief. They place their "ethics" above doing well in the market. This is evidenced by the degradation of their search results in the name of preventing the spread of misinformation: because of it, they dropped most results that come from forums. The market will just have to correct them. They are even saying they might stop releasing research because they are worried about the ethics of their competitors. That will just be a disaster for technology. No wonder Sergey and Larry aren't that excited about the company anymore.
This means nothing. AI will invade any space it can. Any use case found by any company will immediately be copied, and the MVP mass-deployed, by competitors. The idea of responsible profit maximization is absurd. This is a shitty attempt to explain the lack of progress.
> We are so focused on safety/diversity/risk/equity/wtf that we forgot to make cool profitable stuff.
To me it looks like Google never intended to turn its AI research into products. It probably sees these LLMs, which it itself was instrumental in inventing, as a competitor to search, and as something that could never get green-lit as a product due to internal politics. I see OpenAI as a deliberate effort, led by SV insiders in the know on what Google was sitting on, to free this tech from Google's smothering.
The classic innovator's dilemma. I think you're on to something. They just couldn't stand the ethical headache of this whole thing. I think this is the first really upfront case of highly useful technology being suppressed. Sure, there have been rumors of all sorts of things being suppressed, but this is the first time it's really out in the open.
I think it is quite similar to EVs. No established player in the auto industry had any interest in developing them, because they were seen as a risk to established business and partnerships. The EV1 was a great early start from 1996 that was literally crushed (see the documentary Who Killed the Electric Car?). Dealerships fight against EVs to this day; Wyoming just called for EV sales to be banned by 2035. Without Tesla sending the whole industry into a panic, I just don't think there would be any practical EVs on the market today.
Usually, if it's a letter that's signed by the C-suite and reviewed by 5 layers of internal and external PR, it won't say anything interesting.
This is because, to be interesting, it has to be something unexpected.
But the job of PR is to minimize disagreeableness.
It's rare that something is both unexpected and minimally disagreeable. A sudden advance in medicine resulting in a cure for cancer would be both (though even then, if it were, e.g., an mRNA vector, some people might find it disagreeable). But a company's position on a technological advance almost never meets both criteria. [Google PR: "Google also supports cures for cancer!" Yawn.]
Interesting response from Google, signed off by all the top dogs, including Sundar. It's hard not to read between the lines and see this as a rebuke of OpenAI and StabilityAI for letting the public have relatively unfettered access.