- If an LLM knows about your content, people don't really need to visit your site
- LLM crawlers can be pretty aggressive and eat up a lot of bandwidth
- Google won't surface its misleading AI "summaries" of your site in search results
- Some people just hate LLMs that much ¯\_(ツ)_/¯
But LLMs still can't reason... in a reasonable sense. No matter how you look at it, an LLM is still a statistical model that guesses the next word; it doesn't think or reason per se.
Reasoning is the act of figuring out how to solve a problem for which you have no previous training set. If an AI can reason, and you give it a standard task like "write me a Python file that does x and y", it should be able to complete that task without ever having been trained on Python code. Or on English in general.
The way it would solve that problem would look more like some combination of Hebbian learning and MuZero: it starts to explore the space around it in terms of interactions, information gathering, information parsing, and forming associations, until it eventually understands that your task involves writing bytes to a file in a certain structure that, when executed, produces certain output, along with the rules around that structure that make it produce that output.
And it would be able to do this whether running as a model on your computer or as a robot that can type on a keyboard, all from the same code.
LLMs appear to "reason" because most people don't actually reason - a lot of people, even in technical fields, operate on a principle of information lookup. I.e. they look at the things they have been taught to do, figure out which known problem fits closest, and repeat those steps with a few modifications along the way. LLMs pretty much do the same thing. If you operate like this, then sure, LLMs "reason". But there is a reason LLMs are barely useful in actual technical work - under the hood, to make them do things autonomously, you basically have to write wrapper code/prompts that often take as long to write and fine-tune as the actual code itself.
It is insane to think this in 2025 unless you define "reasoning" as some mechanical information lookup. This thinking (ironically) degrades the meaning of reasoning and of intelligence in general.
The game is just trying to tell you that brute force might not work: use tools, use specific charms, use crests that are better suited for a specific arena/boss. Some tools (such as spiked traps) can literally one-shot most mobs and leave you 1v1 with the boss.
Most AAA videogames seem to teach people to "mash the attack button until the enemy goes away, move to the next shiny dot on the map, repeat". It's crazy that when a game asks the player to actually take stock of the situation and learn from their mistakes, it gets billed as "brutally difficult".
For what it's worth, I find Silksong challenging, but I really don't get the "impossible difficulty" complaints. I think I'm about halfway through the main game and the most I've spent on a single boss is ~30 minutes.
And I don't think I'm a crazy pro gamer with insane reflexes - I only play a few videogames a year (I mostly care about the art), and every time I've tried a multiplayer FPS I've been utterly destroyed by the other players.
Why don't libraries such as Requests or HTTPX support this out of the box? It would be really useful to get an automatic warning or Sentry event after a deprecation response.
I understand that this functionality can easily be added as a plugin, but not everyone is aware that such a thing even exists. With default support it would be easier to upgrade to new API versions and keep stuff up to date; something like the sketch below would be enough.
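For what it's worth, Requests' existing response-hook machinery is already enough to build this yourself. A minimal sketch, assuming you want a Python warning rather than a Sentry event (the APIDeprecationWarning class is made up here; Requests ships no such thing):

```python
import warnings
import requests

class APIDeprecationWarning(UserWarning):
    """Hypothetical category for responses that announce deprecation."""

def warn_on_deprecation(response, *args, **kwargs):
    # Sunset (RFC 8594) and Deprecation are plain response headers,
    # so detection is just a case-insensitive dictionary lookup.
    for header in ("Deprecation", "Sunset"):
        if header in response.headers:
            warnings.warn(
                f"{response.url} sent {header}: {response.headers[header]}",
                APIDeprecationWarning,
            )
    return response

session = requests.Session()
session.hooks["response"].append(warn_on_deprecation)
# Every request made through this session is now checked, e.g.:
# session.get("https://api.example.com/v1/things")
```

HTTPX has the equivalent `event_hooks` argument on its clients, so the same idea ports over almost verbatim.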
I suggested exactly that (for the closely related Deprecation header) to Requests a few years ago; they feel it's the application's responsibility. Discussion here: https://github.com/psf/requests/issues/5724
How would it behave? The standard as written doesn't suggest any appropriate client behaviour.
It explicitly doesn't have to mean deprecation; the standard says it could also be returned for any short-lived resource. There's no way to tell whether the header applies to the whole server, just the specific resource, or even specific query parameters, and no way to deduplicate so that known warnings can be ignored (see the sketch below for what a client would have to improvise).
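To make the deduplication point concrete, here's roughly what a client would have to improvise. The keying scheme is an arbitrary guess on my part (host + path + header value), precisely because the standard doesn't define the header's scope:

```python
from urllib.parse import urlsplit

_seen_sunsets = set()

def warn_once(response):
    value = response.headers.get("Sunset")
    if value is None:
        return
    # str() so this works with both Requests (str URLs) and HTTPX (URL objects)
    parts = urlsplit(str(response.url))
    # Arbitrary scope choice: the spec doesn't say whether Sunset covers
    # the host, the path, or a query-parameter variant of the resource.
    key = (parts.netloc, parts.path, value)
    if key not in _seen_sunsets:
        _seen_sunsets.add(key)
        print(f"Sunset announced for {parts.netloc}{parts.path}: {value}")
```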
I'll argue that if these features were more widely known and respected, we wouldn't need to keep re-inventing these kinds of elegant solutions with clunky, thick stacks over and over again.
I'll argue that the major problem with everything in software is that there is no place for random developers to discuss standards they might want to implement.
For example, if drag-and-drop and copy-and-paste didn't exist, they probably wouldn't be created today, because you need two programs to agree on accepting the format (you can't even drag and drop from most software except file managers...). And even conventions that ALREADY exist are being forgotten with every year.
Isn't there though? I feel as though there usually is a place to ask if you bother to do so, and the trouble is more that people don't even check.