It's more efficient anyway, because inference is what everyone will use for forecasting. Researchers will be using huge amounts of compute to develop better models, but that's already the case today, and it isn't the majority of weather-simulation use.
There's an interesting parallel to Formula One, where there are limits on the computational resources teams can use to design their cars, and where they can use an aerodynamic model that was previously trained to get pretty good outcomes with less compute use in the actual design phase.
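To make the shared pattern concrete: pay the compute cost once to fit a surrogate, then evaluate many candidates nearly for free at inference time. A minimal, purely illustrative sketch; the "simulator", the features, and the numbers are made up, not any team's actual pipeline.

```python
import numpy as np

# Hypothetical illustration: fit a cheap surrogate for an expensive simulator
# once, then reuse it to screen many design candidates at inference time.

rng = np.random.default_rng(0)

def expensive_simulation(x: np.ndarray) -> float:
    """Stand-in for a costly CFD or weather run (pretend this takes hours)."""
    return 2.0 * x[0] - 0.5 * x[1] + 0.1 * x[0] * x[1]

# One-time training cost: a limited budget of expensive runs.
X_train = rng.uniform(-1, 1, size=(50, 2))
y_train = np.array([expensive_simulation(x) for x in X_train])

# Cheap surrogate: least-squares fit on simple hand-picked features.
features = np.column_stack([X_train[:, 0], X_train[:, 1], X_train[:, 0] * X_train[:, 1]])
coeffs, *_ = np.linalg.lstsq(features, y_train, rcond=None)

# Inference is nearly free, so thousands of candidates can be screened.
candidates = rng.uniform(-1, 1, size=(10_000, 2))
cand_features = np.column_stack([candidates[:, 0], candidates[:, 1], candidates[:, 0] * candidates[:, 1]])
scores = cand_features @ coeffs
best = candidates[np.argmax(scores)]
print("best candidate under surrogate:", best)
```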
It varies a lot by store. I’ve been to HDs where they’re all useless, and others where there’s a good number of knowledgeable DIYers working there.
I think a lot of people just expect too much from a big box store employee making $17/hr… You work at HD because it's an easy job, and you shop at HD because you're as cheap as their MBAs. If you need help, go to a supply house or an Ace Hardware or something.
Fully this. Every Ace or Do It Best I've been to in Washington has had at least one Rugged Grandpa ™ on staff who could have given me a PhD-level essay on whatever I asked them about; at Home Depot I'm lucky if the folks there have any idea what an impact-rated bit is or why I specifically need one and NO please stop trying to sell me this other crap if you're sold out of the impact bits, they are NOT the same!
(It gets worse the further from the power tools section you get, I find. I had to explain the difference between a three-prong and four-prong 240V plug once at HD and promptly told my friend to stop asking the staff for "help" finding things.)
> It gets worse the further from the power tools section you get, I find. I had to explain the difference between a three-prong and four-prong 240V plug once at HD and promptly told my friend to stop asking the staff for "help" finding things.
The best feature of Home Depot is order pickup. No need to explain to someone that some appliances use 120V for control power and 240V for the motor or heating element, or that you're installing a receptacle to backfeed a 120/240V panel from a 120/240V generator and therefore need a 4-wire NEMA 14-series receptacle with a neutral conductor; you just buy one and pick it up from a locker. It's made buying things from Home Depot tolerable for me. I'm used to buying material from supply houses where the folks are knowledgeable; I know that's not the case at HD, so I don't even bother asking.
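(For the curious, here's a tiny illustrative sketch of the split-phase arithmetic behind that: the two hot legs of a NEMA 14 circuit are 180° out of phase, which is why one 4-wire receptacle can serve both 120V and 240V loads. The chosen instant in time and the names are arbitrary.)

```python
import math

# Hypothetical illustration of why a 120/240V split-phase circuit needs a
# neutral: the two hot legs are 180 degrees out of phase, so leg-to-leg
# gives 240V while either leg-to-neutral gives 120V.

V_LEG = 120.0  # RMS volts, each hot leg to neutral

def leg_voltage(t: float, phase: float) -> float:
    """Instantaneous voltage of one hot leg relative to neutral (60 Hz)."""
    return V_LEG * math.sqrt(2) * math.sin(2 * math.pi * 60 * t + phase)

t = 0.002  # an arbitrary instant
l1 = leg_voltage(t, 0.0)
l2 = leg_voltage(t, math.pi)  # opposite phase

print(f"L1-N: {l1:+.1f} V, L2-N: {l2:+.1f} V, L1-L2: {l1 - l2:+.1f} V")
# A NEMA 14 receptacle carries L1, L2, neutral, and ground, so one device
# can draw 240V leg-to-leg and 120V leg-to-neutral at the same time.
```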
The store I worked at for a while had a surprising number of real bearded experts, alongside at least a few younger folks who really understood the internal systems. It was great, but clearly was eroding as the experts retired and young folks with no experience were hired to replace them.
I asked an employee for something by part number and described it. The answer he gave was "Why the hell would you want that anyways? I've worked here 13 years and never seen one." I found it on a shelf a few levels up and used a grounding rod from the electrical section to spear it and bring it down to ground level.
It's unfortunate that there's so little mention of the Turing Test (none in the article, just one comment here as of this writing). The whole premise of the paper that introduced it was that "do machines think" is such a hard question to define that you have to frame the question differently. And it's ironic that we seem to talk about the Turing Test less than ever, now that systems almost everyone can access can arguably pass it.
> “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ~ Edsger W. Dijkstra
The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. It is not an argument about whether a machine (with linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness.
I kind of agree, but I think the point is that what people mean by words is vague, so he said:
>Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
which is: can you tell the AI answers from the human ones in a test? It then becomes an experimental result, rather than a matter of what you mean by 'think' or maybe by 'extrinsic difference'.
The Chinese Room is a pretty useless thought exercise, I think. If you believe machines can't think, it seems like an utterly obvious result; if you believe machines can think, it's just obviously wrong.
People used to take it surprisingly seriously. Now it's hard to argue that machines can't understand, say, Chinese, when you can hand a Chinese document to a machine, ask it questions about it, and get pretty good answers.
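Concretely, that experiment is now a few lines of code. A rough sketch using the OpenAI Python SDK; the model name and the one-line document are placeholders, and any chat-style API would do:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A short Chinese document and an English question about it.
document = "杭州位于中国浙江省，以西湖闻名。"  # "Hangzhou is in Zhejiang, famous for West Lake."
question = "Which lake is the city in this document known for?"

# Ask the model to answer a question grounded in the Chinese text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "Answer questions about the given document."},
        {"role": "user", "content": f"Document: {document}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```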
>And it's ironic that we seem to talk about the Turing Test less than ever, now that systems almost everyone can access can arguably pass it.
Has everyone hastily agreed that it has been passed? Do people argue that a human can't figure out it's talking to an LLM if the user is aware that LLMs exist in the world, is aware of their limitations, and the chat log is able to extend to infinity? ("Infinity" is a proxy here for any sufficient time: it could be minutes, days, months, or years.)
In fact, it is blindingly easy for these systems to fail the Turing test at the moment. No human would have the patience to continue a conversation indefinitely without telling the person on the other side to kindly fuck off.
No, they haven't agreed because there was never a practical definition of the test. Turing had a game:
>It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

>We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?
(some bits removed)
It was intended more as a thought experiment. As a practical test it would probably be too easy to fake with ELIZA-type programs to be a good one. So computers could probably pass, but it's not really hard enough for most people's idea of AI.
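Worth noting that the criterion in the quoted passage is statistical: is the interrogator wrong about as often with the machine playing A as in the man-vs-woman baseline? A toy sketch of scoring such a trial (the round data and function are invented for illustration; a real test would need many rounds and a significance test):

```python
# Minimal sketch of scoring Turing's imitation game as a blind trial.
# All names and numbers here are hypothetical illustration, not a real study.

def error_rate(verdicts: list[bool]) -> float:
    """Fraction of rounds where the interrogator guessed wrong."""
    return sum(verdicts) / len(verdicts)

# Each entry: True if the interrogator misidentified which player was which.
man_vs_woman_rounds = [True, False, False, True, False]     # baseline game
machine_vs_human_rounds = [True, True, False, True, False]  # machine plays A

baseline = error_rate(man_vs_woman_rounds)
with_machine = error_rate(machine_vs_human_rounds)

# Turing's question: is the interrogator wrong about as often with the
# machine as in the baseline game?
print(f"baseline error rate: {baseline:.0%}, with machine: {with_machine:.0%}")
```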
The definition seems to suffice if you give the interrogator as much time as they want and don't limit their world knowledge, neither of which the definition you cited seems to constrain. By "world knowledge" I mean any knowledge, including but not limited to knowledge about how the machine works and its limitations. So if the machine can't fool Alan Turing specifically, it fails, even though it might have fooled some random Joe who's been living under a rock.
Hence, since current LLMs are bound to hallucinate given enough time and seem unable to maintain a conversation's context window as robustly as humans, they would inevitably fail?
I’d argue the goalposts have moved substantially over the past decade. The LLMs we casually use in ChatGPT today would have been described as AGI by many people 15, 10, maybe even 5 years ago.
When choosing what my life's work would be, I filtered out tasks that involved genetically engineering humans so that my solution could compete with "eating a nice, fresh orange". Maybe I'm just lazy and unambitious.
> we want to operate like the world’s largest startup
This is a phrase I hear repeated by leadership a lot, and it's usually code for "why doesn't everyone else just make the business grow faster?" It is almost always, as in this case, followed by statements suggesting they don't understand what is actually different about how a startup functions, or why they stopped operating that way in the first place.
Sounds like some marketer got tasked with trying to convince a group of people that the company is pursuing aggressive growth and unrealized markets for as long as they're willing to entertain that delusion.
Google seems to have always said this externally, but every time I met with Google pre-COVID, all their engineers dialed in from home. Managers in the office. I would even travel to their office. Engineers? Nope. At home.
Okay, but I can think of 5 teams, all of which showed this behavior. Even if it is "team by team," that is not the same as what Google seems to generalize about itself.
Not sure if you meant this as a counterpoint or as confirmation, but let's be clear that that ALSO doesn't make solidly-entrenched one-party rule sound good.
I wasn't sure if the person I was responding to was expressing a sincere belief, or just one of the conservatives that love hypocritically pointing out SF's flaws. Figured with this we'll find out.
Not gonna go hunt for the link right now, but I think Minix 3 was intended to be more industrially applicable than its predecessors: there's a talk somewhere where Tanenbaum discusses the need for a more fault-tolerant kernel in all sorts of applications, and I think he got a grant from some European institution for that purpose.