When I was a physics student, there were four forces: strong, weak, EM, and gravity. That picture seemed neat and clean. Strong kept the nucleus together, EM kept molecules and atoms together (or broke them apart), gravity kept astronomical bodies together, weak was some kind of momentum-accounting device.

Recently, GPT informed me that the strong force is really a tiny after-effect of the "QCD force" (in the same way that the Van der Waals forces are an after-effect of EM). Also, more and more questions about "dark matter" seem to be piling up, suggesting that the standard Newton-Einstein story of gravity is far from the complete picture.

25 years ago it seemed like physics was mostly complete, and the only remaining work was exploring the corner cases and polishing away the imperfections. It doesn't feel that way anymore! The confusing part is that modern physics is so unbelievably successful and useful for technology - if the underlying theory were way off, how could the tech work?


> Recently, GPT informed me that the strong force is really a tiny after-effect of the "QCD force"

Maybe you should not take everything GPT tells you at face value? I have no idea what this "QCD force" is supposed to be. The strong force is _the_ force of QCD. The Standard Model still comprises the electromagnetic, weak, and strong forces. The descriptions of the weak and EM forces can be unified into the electroweak force, and there are theories that try to also unify it with the strong force and even gravity, but there are issues on the theory side and no clear evidence on the experimental side as to which direction is the correct one.

The Standard Model and General Relativity are still our most successful theories. It is clear that they don't tell the whole story, but (annoyingly?) it is not at all clear where this is going.

For dark matter alone there are probably a dozen proposed hypothetical particles, but so far we have found none of them. Or maybe it's something completely different...


> 25 years ago it seemed like physics was mostly complete, and the only remaining work was exploring the corner cases and polishing out all the imperfections

Around 125 years ago, many thought the same about physics: that it was mostly complete, and all that remained was explaining a few edge cases and polishing our measurements. There were just two things that were a little bit puzzling, the "looming clouds" over physics (in Kelvin's description). Those clouds, black-body radiation and the Michelson–Morley experiment, later led to both quantum theory and relativity, and to a fundamental change in our understanding of physics.

So I would not take that position. Does this mean we are in a similar moment? Maybe, who knows?


"QCD force" is the same thing as the "strong" force. There is no reason whatsoever to invent any new name.

There are several hierarchical levels at which the strong interaction and the electromagnetic interaction bind the components of matter.

The electromagnetic interaction attempts to neutralize the electric charge. To a first approximation this is achieved in atoms. The residual forces caused by imperfect neutralization bind atoms into molecules. Between molecules there remain still weaker residual attractive forces, the Van der Waals forces, which thus sit at the third hierarchical level.

For the strong interaction, there are only two hierarchical levels: approximate charge neutralization is achieved in nucleons, which are then bound into nuclei by residual attractive forces.

So the forces between the nucleons of a nucleus correspond to the interatomic forces inside a molecule, not to the Van der Waals forces between molecules.


> 25 years ago it seemed like physics was mostly complete, and the only remaining work was exploring the corner cases and polishing out all the imperfections. It doesn't feel that way anymore!

Physicists thought the same thing c. 1900, but then one of the "corner cases" turned into the ultraviolet catastrophe[1]. The consequences of the solution to that problem kept the whole field busy for a good part of the 20th century.

I'm highly skeptical of the idea that physics is anywhere near complete. The relative success of our technology gives us the illusory impression that we're almost done, but it's not obvious that physics even has a single, complete description that we can write down. We assume it does for convenience, in the same way that we assume the laws are constant everywhere in spacetime. I view this as both exciting and terrifying, but mostly exciting.

[1]: https://en.wikipedia.org/wiki/Ultraviolet_catastrophe


> Recently, GPT informed me that the strong force is really a tiny after-effect of the "QCD force"

This is kind of just semantics. QCD describes both the force binding quarks inside protons and neutrons, and the residual force binding protons and neutrons to each other. This is all part of the Standard Model, which has been essentially unchanged for the last 50 years. The big theoretical challenge is to incorporate gravity into this picture, but that is almost impossible to explore experimentally, because gravity is very weak compared to the other three forces. That's why the Standard Model is so successful even though it doesn't incorporate gravity.

You might enjoy https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_p...


> The confusing part is that modern physics is so unbelievably successful and useful for technology - if the underlying theory was way off, how could the tech work?

Who says "way" off? It doesn't explain everything, but it explains a lot, correctly enough to use for calculations, predictions, and practical effects. The same way Newtonian mechanics was and remains useful, and the way people were using maths and technology to solve problems long before Newton was born.


I think physics has felt pretty incomplete since the confirmation of QM non-locality in the '60s.

Second sentence of abstract:

> Leave rates are lower in the life sciences and higher in AI and quantum science but overall have been stable for decades

The US has been completely dominant in technology innovation for the last several decades. So the answer is no: the loss of 1/4 of STEM scientists is not important.


Do quant traders use Mathematica? I would guess this is a great use case for a tool that lots of people love. Pretty language, a huge boatload of built-in tools, high-powered mathematics, great visualization capabilities. Quant firms should be able to live with the price tag. I assume they have a compiler that can produce fast executables for HFT.


They do


Hypothesis C: failure of human memory. A human read Stephenson's book(s) 20 years ago, remembers that the endings were a bit unsatisfying. The same human also read some other book many years ago, which ends mid-sentence. In that person's mind, the two are conflated.


If I were writing a book review for my company (a big, famous VC that cares about its reputation), I would probably have at least popped the book open and read a few chapters if it had been years since I read it.


Hypothesis D-for-Delany: The human thought Stephenson wrote Dhalgren.

"Waiting here, away from the terrifying weaponry, out of the halls of vapor and light, beyond holland into the hills, I have come to"


Hypothesis A is much more likely, if you ask me.


It's a16z; they definitely had an LLM recommend a set of books that nobody there has actually ever read. Except maybe Snow Crash.


Another hypothesis: have AI generate a top-50 list of books, and add a book you want your website to promote into the mix somewhere near the top to increase its sales. Cheap marketing. It wouldn't be the first time.


> AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.

Persuasion tip: if you write comments like this, you are going to immediately alienate a large portion of your audience who might otherwise agree with you.


Wait a minute - the attackers were using the API to ask Claude for ways to run a cybercampaign, and it was only defeated because Anthropic was able to detect the malicious queries? What would have happened if they were using an open-source model running locally? Or a secret model built by the Chinese government?

I just updated my P(Doom) by a significant margin.


> What would have happened if they were using an open-source model running locally? Or a secret model built by the Chinese government?

In all likelihood, the exact same thing that is actually happening right now in this reality.

That said, local models specifically are perhaps more difficult to install given their huge storage and compute requirements.


If plain open-source local models were able to do what Claude API does, Anthropic would be out of business.

Local models are a different thing than those cloud-based assistants and APIs.


> If plain open-source local models were able to do what Claude API does, Anthropic would be out of business.

Not necessarily. Oracle has made billions selling a database that's less good than plain open-source ones, for example.


It wasn't originally less good. For at least 20 years it was much better.


Why would the increase be a significant margin? It's basically a security research tool, but with an agent in the loop that uses an LLM instead of another heuristic to decide what to try next.


I mean, models exhibiting hacking behaviors have been predicted by cyberpunk for decades now; it should be the first thing on any doom list.

Governments will of course have models specially trained on their corpora of unpublished exploits, making them better at attacking than public models.


I think AWS should use, and provide as an offering to big customers, a Chaos Monkey tool that randomly brings down specific services in specific AZs. Example: DynamoDB is down in us-east-1b. IAM is down in us-west-2a.

Other AWS services should be able to survive this kind of interruption by rerouting requests to other AZs. Big company clients might also want to test against these kinds of scenarios.
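
To make the idea concrete, here's a minimal sketch of the client-side failover such a chaos test would exercise. The per-AZ endpoints and service name are made up for illustration (most real AWS services expose regional rather than per-AZ endpoints), so this is a sketch of the pattern, not of any actual AWS API:

    import urllib.error
    import urllib.request

    # Hypothetical per-AZ endpoints for some internal service.
    AZ_ENDPOINTS = [
        "https://svc.us-east-1a.example.internal",
        "https://svc.us-east-1b.example.internal",
        "https://svc.us-east-1c.example.internal",
    ]

    def call_with_az_failover(path, timeout=2.0):
        """Try each AZ in turn; a chaos tool downing one AZ should be invisible."""
        last_error = None
        for endpoint in AZ_ENDPOINTS:
            try:
                with urllib.request.urlopen(endpoint + path, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError) as exc:
                last_error = exc  # this AZ is down or unreachable; fail over
        raise RuntimeError("all AZs failed: %s" % last_error)

The chaos tool's job is just to make one of those endpoints start failing at random; any service whose clients are written like this shouldn't notice.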



At some point AWS has so many services it's subject to a version of xkcd Rule 34 -- if you can imagine it, there's an AWS service for it.


I used to tell people there that my favorite development technique was to sit down and think about the system I wanted to build, then wait for it to be announced at that year's re:Invent. I called it "re:Invent and Simplify". "I" built my best stuff that way.


A big chunk, perhaps the majority, of the "Accidents" is from cars. Another infographic I saw recently showed that, for children, the risk of death due to traffic accidents was greater than all other risks combined.

People should be raving and screaming for faster rollout of self-driving cars. If self-driving cars were an experimental drug undergoing a clinical trial, they would cancel the trial at this point because it would be unethical to continue denying the drug to the control group.


> People should be raving and screaming for faster rollout of self-driving cars.

People should be raving to get rid of cars, period. Proper mass transit is always a better option.

Just because cars become self-driving doesn't mean that they are not a negative externality.


> People should be raving and screaming for faster rollout of self-driving cars

That's assuming it'll meaningfully reduce the rates of child deaths due to automobiles.

You know what will reduce the rate of child fatality due to automobiles for sure and to an even higher degree? Massively reducing the odds kids and automobiles mix. How do we do that? Have more protected walkable and bikeable spaces. Have fewer automobiles driving around. Design our cities better to not have kids walking along narrow sidewalks next to roads where speed limits are marked as 40 but in reality traffic often flows at 55+.

It's insane to me that there are neighborhoods less than a mile from their associated public schools that have to have bus service because there is no safe path for kids to walk. What a failure of city design.


Looks good to me. I use a tab-completion trick where the tab-completer tool calls the script I'm about to invoke with special arguments, and the script reflects on itself and responds with possible completions. But because of slow imports, it often takes a while for the completion to respond.

I could, and sometimes do, go through all the imports to figure out which ones are taking a long time to load, but it's a chore.
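
For anyone curious, here's a minimal sketch of the self-reflecting completion pattern, with a hypothetical script name, flag, and subcommand list. The key point is that the completion path has to return before any slow imports happen:

    #!/usr/bin/env python3
    # mytool.py -- hypothetical script illustrating the trick.
    # Register in bash with:  complete -C 'python3 mytool.py --_complete' mytool
    import os
    import sys

    SUBCOMMANDS = ["build", "deploy", "status"]  # made-up subcommands

    def print_completions():
        # Bash sets COMP_LINE to the command line being completed.
        line = os.environ.get("COMP_LINE", "")
        prefix = "" if (not line or line.endswith(" ")) else line.split()[-1]
        for name in SUBCOMMANDS:
            if name.startswith(prefix):
                print(name)

    def main():
        if "--_complete" in sys.argv:
            print_completions()  # fast path: no heavy imports yet
            return
        import numpy  # stand-in for a slow import, deferred on purpose
        print("running", sys.argv[1:])

    if __name__ == "__main__":
        main()

As for finding the slow imports: CPython's `python -X importtime` flag prints a per-module import-time breakdown on stderr, which takes most of the chore out of it.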


To me the reason ARC-AGI puzzles are difficult for LLMs and possible for humans is that they are expressed in a format for which humans have powerful preprocessing capabilities.

Imagine the puzzle layouts were expressed in JSON instead of as a pattern of visual blocks. How many humans could solve them in that case?
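
To make that concrete, here's a made-up toy grid in both representations:

    import json

    # A made-up ARC-style grid: integers are colors, 0 is background.
    puzzle = "[[0,0,3],[0,3,0],[3,0,0]]"  # roughly what a language model sees

    # What a human sees after visual preprocessing:
    for row in json.loads(puzzle):
        print("".join("#" if cell else "." for cell in row))
    # ..#
    # .#.
    # #..

The diagonal is instantly obvious in the second form and invisible in the first, unless you do the rendering step in your head.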


We have powerful preprocessing blocks for images: strong computer vision capabilities predate LLMs by several years. Image classification, segmentation, object detection, etc. All differentiable and trainable in the same way as LLMs, including jointly. To the best of my knowledge, no team has shown really high scores by adding in an image preprocessing block?


Everyone who had access to a computer that could convert JSON into something more readable for humans, and who knew that was the first thing they needed to do?

You might as well have asked how many English speakers could solve the questions if they were in Chinese. All of them. They would call up someone who spoke Chinese, pay them to translate the questions, then solve them. Or failing that, they would go to the bookstore, buy books on learning Chinese, and solve them three years from now.


Bingo. We simply made a test for which we are well trained. We are constantly making real-time decisions with our eyes. Interestingly, certain monkeys are much better at certain kinds of visual pattern recognition than we are. They might laugh and think humans haven't reached AGI yet.

