> I would state there is one other group that has any academic rigour, and is actually making significant and important progress.
I agree there are a lot of poorly written papers and a lot of unrigorous research.
I'm at the beginning of my PhD, so I haven't vetted every group yet. Could you share your area, and which groups to follow (yours and the other good one)?
No, Fortnite is/was made directly by Epic Games and is their best-known product/service.
Better analogies would be Azure as "I have to use Windows" or AWS as "I have to use the Amazon store", which sound a lot less ridiculous than your analogies.
It's the only model provider that has offered a decent deal to students: a full year of Google AI Pro.
Granted, this doesn't give API access, only what Google calls their "consumer AI products", but it makes a huge difference when ChatGPT only allows a handful of document uploads and deep research queries per day.
Hallucination rate is hallucination / (hallucination + partial + ignored), while omniscience is correct - hallucination.
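For concreteness, here is a minimal sketch of how I read those two definitions, assuming the raw figures are per-question counts and that omniscience is the correct fraction minus the hallucinated fraction (the numbers below are made up for illustration, not benchmark data):

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical per-question tallies, purely for illustration. */
    double correct = 70, hallucinated = 10, partial = 15, ignored = 5;
    double total = correct + hallucinated + partial + ignored;

    /* hallucination rate = hallucinated / (hallucinated + partial + ignored) */
    double hallucination_rate = hallucinated / (hallucinated + partial + ignored);

    /* omniscience = correct fraction - hallucinated fraction (my reading) */
    double omniscience = correct / total - hallucinated / total;

    printf("hallucination rate: %.2f\n", hallucination_rate);
    printf("omniscience:        %.2f\n", omniscience);
    return 0;
}
```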
One hypothesis is that Gemini 3 Flash refuses to answer less often than other models when it is unsure, but is also more likely to be correct when it is sure. This is consistent with it having the best accuracy score.
The opposite is also true: the tech world views itself as more sacred than any other part of humanity.
You say it's obvious that the existence of AI is valuable enough to offset a few artists' jobs, but it is far from obvious. The benefits of AI are still unproven (a more hallucinatory Google? a tool to help programmers make architectural errors faster? a way to make ads easier to create and sloppier?). The discussion about whether AI is valuable at all is common even on Hacker News, so I really don't buy the "it's obvious" claim. Furthermore, the idea that it is only offsetting a few artists' jobs is also unproven: the future is uncertain, and it may devastate entire industries.
I'm having a hard time understanding this article.
First of all, just to clarify the title: a quantum annealer is not a universal quantum computer.
Then, it seems like they are comparing a simulation of p-computers to a physical realization of a quantum annealer (likely D-Wave, though for some reason it isn't named outright).
If this is true, it doesn't seem like a very relevant comparison, because D-Wave systems actually exist, while their p-computer sounds like it is just a design.
But I may have misunderstood, because at times they make it sound like the p-computer actually exists.
Also, they talk about how p-computers can be scaled up with TSMC semiconductor technology. From what I know, this is also true for semiconductor-based (universal) quantum computers.
University press releases should not be posted on HN. A press release is just a published paper + PR spin. If the PR spin were true, it would be in the paper. Just link to the paper.
Personally, I'm downvoting the comment because it is literally just restating the parent comment, but more generically. It does not contribute to the conversation.
And I'm downvoting you because you are breaking the site guidelines:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
No, that is a terrible analogy. High-level languages are deterministic, fully specified, non-leaky abstractions. You can write C and know for a fact what you are instructing the computer to do. This is not true for LLMs.
I was going to start this with "C's fine, but consider more broadly: one reason I dislike reactive programming is that the magic doesn't work reliably and the plumbing is harder to read than doing it all manually", but then I realised:
While one can in principle learn C as well as you say, in practice there are loads of cases of people being surprised by undefined behaviour and by all the famous classes of bug that C has.
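As a hedged illustration of that point (my own example, not from the thread): a small C program can compile cleanly yet not mean what it appears to say, because it runs into undefined behaviour.

```c
#include <stdio.h>
#include <limits.h>

/* Signed integer overflow is undefined behaviour in C, so the compiler
 * may assume it never happens and optimize this check away entirely
 * (many compilers fold it to 0 at -O2). */
int will_overflow(int x) {
    return x + 1 < x;   /* looks like an overflow check, but is UB when x == INT_MAX */
}

int main(void) {
    printf("%d\n", will_overflow(INT_MAX));  /* may print 1 or 0 depending on compiler and flags */
    return 0;
}
```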
There is still the important difference that you can reason with precision about a C implementation’s behavior, based on the C standard and the compiler and library documentation, or its source or machine code when needed. You can’t do that type of reasoning for LLMs, or only to a very limited extent.
Maybe, but buffer overflows would also occur in assembler written by experts. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.
I believe it is likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).
Scientific publishing is a decentralized system; there's no specific ban in place, just that publishers will likely decline to publish your paper.
And the reason for that is neither accuracy nor bias, just that Wikipedia is not a primary source. You don't generally cite any encyclopedias in scientific papers.
> So? You aren't off the hook because someone did something unexpected or "was exercising poor operational security."
You might be. If it had been Hezbollah's guns that exploded and not their pagers, I would expect most people to agree that you would be "off the hook" if someone else had been handling the gun.
Not saying pagers = guns, but it's a spectrum surely.