Can anyone with specific knowledge in a sophisticated/complex field such as physics or math tell me: do you regularly talk to AI models? Do you feel like there's anything to learn? As a programmer, I can come to the AI with a problem and it can come up with a few different solutions, some I may have thought about, some not.
Are you getting the same value in your work, in your field?
Context: I finished a PhD in pure math in 2025 and have transitioned to being a data scientist; I do ML/stats research on the side now.
For me, deep research tools have been essential for getting caught up with a quick lit review about research ideas I have now that I'm transitioning fields. They have also been quite helpful with some routine math that I'm not as familiar with but is relatively established (like standard random matrix theory results from ~5 years ago).
It does feel like the spectrum of utility is pretty aligned with what you might expect: routine programming > applied ML research > stats/applied math research > pure math research.
I will say ~1 year ago they were still useless for my math research area, but things have been changing quickly.
I don't have a degree in either physics or math, but what AI helps me to do is to stay focused on the job before me rather than having to dig through a mountain of textbooks, Wikipedia pages, or scientific papers trying to find an equation that I know I've seen somewhere but did not register the location of and did not copy down. This saves a huge amount of time, day after day. Even then I still check the references once I've found it, because errors can and do slip into anything these pieces of software produce, sometimes quite large ones (though those are easy to spot).
So yes, there is value here, and quite a bit, but it requires a lot of forethought in how you structure your prompts, and you need to be super skeptical about the output as well as able to check it minutely.
If you just plug in a bunch of data, formulate a query, and then use the answer uncritically, you're setting yourself up for a world of hurt and lost time by the time you realize you've been building your castle on quicksand.
I do / have done research in building deep learning models and custom / novel attention layers, architectures, etc., and AI (ChatGPT) is tremendously helpful in facilitating (semantic) search for papers in areas where you may not quite know the magic key words / terminology for what you are looking for. It is also very good at linking you to ideas / papers that you might not have realized were related.
I also found it can be helpful when exploring your mathematical intuitions on something, e.g. how a dropout layer might affect learned weights and matrix properties. Sometimes it will find some obscure rigorous math that can be very enlightening or relevant to correcting clumsy intuitions.
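As a minimal sketch of the kind of sanity check you can run alongside that conversation (all the numbers and shapes here are made up for illustration, not from any real model): inverted dropout zeroes each activation with probability p and rescales survivors by 1/(1-p), so the expected pre-activation is preserved.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.5                              # drop probability (assumed for illustration)
    x = rng.normal(size=(100_000, 64))   # a batch of activations
    W = rng.normal(size=(64, 32))        # a "learned" weight matrix

    mask = rng.random(size=x.shape) > p  # keep each unit with probability 1-p
    x_drop = x * mask / (1 - p)          # inverted dropout rescaling

    # On average, the dropped-and-rescaled pre-activations match the clean ones.
    print(np.abs((x_drop @ W).mean(axis=0) - (x @ W).mean(axis=0)).max())

Running something like this next to the chat is a cheap way to catch the model (or yourself) when the intuition drifts from the math.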
I'm an active researcher in TCS. For me, AI has not been very helpful on technical things (or even technical writing), but has been super helpful for (1) literature reviews; (2) editing papers (e.g., changing a convention everywhere in the paper); and (3) generating TikZ figures/animations.
I talk to them (math research in algebraic geometry), but they're not really helpful outside of literature search, unfortunately. Others around me get a lot more utility, so it varies. (The most powerful models I tried were Gemini 2.5 Deep Think and Gemini 3.0 Pro; not sure if the new GPTs are much better.)
I did a theoretical computer science PhD a few years ago and write one or two papers a year in industry. I have not had much success getting models to come up with novel ideas or even prove theorems, but I have had some success asking them to prove smaller and narrower results and using them as an assistant for reading papers (why are they proving this result, what is this notation they're using, expand this step of their proof, etc.). Asking one to find any bugs in a draft before putting it on arXiv also usually turns up some minor things to clarify.
Overall: useful, but not yet particularly "accelerating" for me.
I work in quantum computing. There is quite a lot of material about quantum computing out there that these LLMs must have been trained on. I have tried a few different ones, but they all start spouting nonsense about anything that is not super basic.
But maybe that is just me. I have read some of Terence Tao's transcripts, and the questions he asks LLMs are higher complexity than what I ask. Yet, he often gets reasonable answers. I don't yet know how I can get these tools to do better.
This often feels like an annoying question to ask, but what models were you using?
The difference between free ChatGPT, GPT-5.2 Thinking, and GPT-5.2 Pro is enormous for areas like logic and math. Often the answer to bad results is just to use a better model.
Additionally, sometimes when I get bad results I just ask the question again with a slightly rephrased prompt. Often this is enough to nudge the models in the right direction (and perhaps get a luckier response in the process). However, if you are just looking at a link to a chat transcript, this may not be clear.
I have an OpenRouter account, so I can try different models easily. I have tried Sonnet, Opus, various versions of GPT, and DeepSeek. There are certainly differences in quality, and I do rephrase prompts all the time. But ultimately, I can't quite get them to work for quantum computing. It's far easier to get them to answer coding- or writing-related questions.
"I don't yet know how I can get these tools to do better."
I have wondered if he has access to a better model than I do, the way some people get promotional merchandise. A year or two ago he was saying the models were as good as an average math grad student, when to me they were like a bad undergrad. With the current models I don't get solutions to new problems. I guess we could do some debugging and try prompting our models with this Erdős problem and see how far we get. (edit: Or maybe not; I guess LLMs search the web now.)
I’m a hobbyist math guy (with a math degree) and LLMs can at least talk a little talk or entertain random attempts at proofs I make. In general they rebuke my more wild attempts, and will lead me to well-trodden answers for solved problems. I generally enjoy (as a hobby) finding fun or surprising solutions to basic problems more than solving novel maths, so LLMs are fun for me.
As the other person said, Deep Research is invaluable; but hypothesis generation is not as good at the true bleeding edge of research. The OG ChatGPT 4.0 with no guardrails briefly generated outrageously amazing hypotheses that actually made sense. After that they have all been neutered beyond use in this direction.
My experience has been mixed. Honestly though, talking to AI and discussing a problem with it is better than doing nothing and just procrastinating. It's mostly wrong, but the conversation helps me think. In the end, once my patience runs out and my own mind has been "refreshed" through the conversation (even if it was frustrating), I can work on it myself. Some bits of the conversation will help but the "one-shot" doesn't exist. tldr: ai chatbots can get you going, and may be better than just postponing and procrastinating over the problem you're trying to solve.
It really does not take any technical literacy to install FF and then uBlock Origin. Nothing else is needed; the default settings work just fine. Am I missing something?
A large portion of users (a majority, imo) think "web browser" is a specific app they open, rather than a type of app, and don't even understand that there are multiple different ones to choose from.
You need to be savvy enough to know how to deal with the inevitable "broken" site you run across (ideally by leaving and never returning, but sometimes that isn't an option).
Lots of ideas in here that fail to properly identify and address the root cause, which is disappointing given how many of us are programmers.
Trace the problem back. Where does it start?
Why is PE investing in homes? Because they can make money. Why does buying a home make money? Because the value appreciates. Why does the value appreciate on an asset that literally deteriorates over time? Because of restricted supply. Why is there a supply shortage?
The root cause of this issue is supply. Zoning, mostly, is to blame. There are downstream issues, like the capacity of the construction industry, but that is an effect of the current environment, not a root cause in and of itself: if there is money to be made, construction will grow as fast as it can.
Fix supply, and you fix the issue. This is a uniquely English-speaking western-world problem: compare housing starts in English-speaking western countries vs. non-English-speaking ones. It's just a cultural failure. You can blame corporations all you want, but at the end of the day most people in the anglosphere expect a return on the largest purchase of their lives, even if it comes at the cost of materially worsening the lives of those around them.
Usually a root-cause analysis asks for five whys, so your analysis seems a bit short.
One should ask why housing/land is more profitable than the investments banks and private equity previously made. Follow that line of reasoning (too much capital and credit -- thanks, quantitative easing! -- poor performance on bonds due to low interest rates, and restrictions on the quality of investment vehicles) and you'll see that the root cause is probably more financial rather than _just_ too little housing.
Debts need to be cleared, losses need to be accepted, and leverage unwound. Building more housing just gives banks and private equity more stuff to buy.
That there may be "too much capital and credit" is a red herring, because investors won't pour money into assets that aren't lucrative. The main reason housing has been so lucrative is that there's more demand than supply, so building more housing is what needs to happen!
You're absolutely right of course. Even a child could look at the present situation and determine that the solution is to simply build more housing, but for some reason many grown adults cannot accept such a simple solution and insist that what's actually needed is more government regulation or something.
What you'll find is that a subset of players defines a behaviour, and now you have to prove that that behaviour is cheating. Most behaviours that could indicate cheating will overlap with what skilled players do.
Examples would be pre-aiming corners and >99th percentile reaction time.
It's estimated that CoD Warzone has 45 million players; a 0.1% false positive rate at that player count is 45,000 people. That's a _lot_. It needs to be orders of magnitude less than that.
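To make the scaling concrete (the player count and rates here are the rough estimates from above, not official figures), a quick back-of-the-envelope:

    players = 45_000_000
    for fpr in (1e-3, 1e-4, 1e-5, 1e-6):  # false-positive rate per player
        print(f"FPR {fpr:.0e}: ~{players * fpr:,.0f} innocent players flagged")

Even at one in a million you would still falsely flag around 45 people, which is why the bar for fully automated bans has to be so high.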
Anti-cheat is a necessity for an enjoyable game experience. If you are a casual who doesn’t care about game integrity, you probably aren’t the target audience.
I don’t want any cheaters in my games. I don’t care if a rootkit is required. Riot has a kernel level anti-cheat and it’s _really_ good. It’s so good in fact that it deters most cheaters from even trying. This is the dream for anyone who wants fair games.
I agree with you, but I think the best solution is just to let people run the game without anti-cheat, with the restriction that they can only play with other people who have also opted out of anti-cheat (or who choose to allow themselves to be matched with people who opt out).
Then people can choose to either accept that they have to install a rootkit anti-cheat, or risk facing cheaters in return for not having to install it.
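A minimal sketch of that pool rule, with hypothetical names (this is my own illustration, not any real matchmaking API):

    from dataclasses import dataclass

    @dataclass
    class Player:
        name: str
        runs_anticheat: bool   # installed the kernel anti-cheat?
        accepts_optouts: bool  # willing to be matched with opt-outs?

    def pool(p: Player) -> str:
        # Protected pool: anti-cheat on and unwilling to face opt-outs.
        # Everyone else (opt-outs, plus opted-in players who accept the
        # risk) shares the unprotected pool.
        if p.runs_anticheat and not p.accepts_optouts:
            return "protected"
        return "unprotected"

    for p in [Player("a", True, False),
              Player("b", False, False),
              Player("c", True, True)]:
        print(p.name, "->", pool(p))

The design choice is that opting out never leaks into the protected pool; the only people who ever face a non-anti-cheat client are those who explicitly accepted that risk.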
I wouldn't be able to enjoy life knowing there's a rootkit installed on my machine, developed by the same people who make video games and who resist accountability at every level, riddled with vulnerabilities that could grant an attacker the same ridiculous level of permissions.
> If you are a casual who doesn’t care about game integrity, you probably aren’t the target audience.
Friendly reminder: 90% of games are not competitive multiplayer and don't need any anti-cheat to be enjoyable.
My main entertainment is video games and books (no TV) in equal proportions so I'm far from "casual". I play zero competitive multiplayer due to the "communities" being invariably toxic.
Last time I played something like that it was Starcraft 2 when it was new. Enjoyed being called a stupid noob when I won.
Yeah. I've found a nearly 1:1 correlation between "Does it try hard to be 'competitive' or an 'e-sport'?" and "Is a huge section of the playerbase just godawful toxic assholes?".
As the years ground on, I've learned to avoid games billing themselves as an "e-sport" or indicating that they are extremely focused on the "competitive" scene, unless there's something very compelling to offset the asshole players that will inevitably be pulled in.
Yea! There was a pretty popular /r/gamedev post about how something like 95% of the Linux bugs existed on Windows too; it's just that Linux users are trained to report them and provide quality logs/evidence.
But that did not happen if you used a book and never executed any command you did not understand.
(In my own newb days of Linux troubleshooting, though? Copy-pasting any command on the internet loosely related to my problem, which I believe was, and still is, how most people do it. And AI in "Turbo mode" seems to have mostly automated that workflow.)
Once you internalize that flat-Earther-ism isn’t about the Earth being flat you realize that rational arguments are pointless.
To expand on that, it’s about community and finding people who share your interests. The movie Behind The Curve explores this idea and it’s quite revealing.
And the ego boost of it all - being one of the special few who sees "the truth" that others are too brainwashed/dumb/whatever to see. Makes one feel quite important.
Those are the simple cows to be milked, but numerous 'gurus' in these communities are well aware of the bullshit they propagate to the weak and gullible; they're just such easy, uncritical prey. You can always go deeper into the paranoia.
Makes me think that Mr. Trump switched from Democrat to Republican and courted MAGA-esque folks, who often love him to the death, on a very similar principle: just spit out some populist crap that stirs core emotions (the worse the better), make them feel like victims, find an easy target to blame that can't defend itself well (immigrants), and add some conspiracy (of which he, as a Wall Street billionaire, is actually a part).
The extreme left wouldn't easily swallow that ridiculous mix from a nepotistic billionaire who managed to bankrupt casinos and avoided military duty (on top of some proper hebephilia with his close friend Mr. E and who knows what else).
But what do I know; I'm just an outside observer. Still, nobody around the world has an umbrella thick enough that this crap doesn't eventually fall on them too.
I think Trump's just been running a simple popularity-seeking loop for a while. Do a thing; if his people like it, do it more; otherwise do it less.
I've heard that even Hitler was like this: that he didn't start out hating Jews, but repeatedly reacted to the fact that he got louder cheers whenever he blamed things on Jews. But I don't know how to verify if this is true.
What would the "shared interests" be of a community organized around supposedly believing something that it isn't actually about believing?
It's since been replaced by similar -isms, like climate-change hoaxism: a very similar way of arguing, of dealing with contradicting evidence, and of seeing a conspiracy whenever a large body of scientists reaches a consensus.
Unfortunately, climate change deniers in all their forms have made it much further than flat earthers, with support in politics and a real impact on people's lives.
Just the mere fact that my post here could be interpreted as political (which it really isn't) is evidence of this.
It's more about discrediting conspiracy theories to shift the Overton window, so the real ones with the flavor of 'the government is spying on you' also seem crazy to most people.