Yeah, this is part of a larger pattern of car manufacturers being awful. It didn't start with EVs, and if they magically vanished tomorrow it wouldn't end.
I actually would go further and say that the existing choice to require that only some people can change lock codes is part of the problem here, even though the reporter can't be expected to know that.
For hardware locks it was only practical to attempt physical access control. The only guy who can buy the weird blanks for this high-end key with moving magnets inside it has a locksmith business, so he's probably not going to also be a burglar; life is just too short. But for electronic locks we can choose to design the software to allow the keyholder to change what they want, and only require authorization when you do not have a key - e.g. so a dealer can unlock a legitimately seized vehicle and hand it to a new owner.
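A minimal sketch of the policy being described, assuming a toy lock model (the class and method names here are illustrative, not any real product's API): a valid key is itself sufficient authority to rekey, and an out-of-band credential is needed only when no key is present.

```python
class ElectronicLock:
    """Toy model of the access policy above: a current keyholder can
    change the code freely; anyone without a key needs out-of-band
    authorization (e.g. a dealer unlocking a seized car)."""

    def __init__(self, code):
        self._code = code

    def change_code(self, presented_code, new_code):
        # Holding a valid key is sufficient authority to rekey the lock.
        if presented_code != self._code:
            raise PermissionError("valid key required")
        self._code = new_code

    def factory_reset(self, dealer_token, verify_token):
        # No key present: fall back to an authorized party's credential.
        if not verify_token(dealer_token):
            raise PermissionError("dealer authorization required")
        self._code = None  # lock must be re-keyed by the new owner
```

The point of the design is that the authorization requirement attaches to the *keyless* path only, instead of gatekeeping every change behind a dealer.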
Do you have a link to some of this output? A repo on Github of something you’ve done for fun?
I get a lot of value out of LLMs, but when I see people make claims like this I know they aren't "in the trenches" of software development, or that they care so little about quality that I can't relate to their experience.
Usually they’re investors in some bullshit agentic coding tool though.
I will shortly; am building a serious self-compiling compiler rn for a brand-new esoteric language. Meaning the LLM is able to program itself without training data about the programming language...
Honestly, I don't know what to make of it. Stage 2 is almost complete, and I'm (right now) conducting per-language benchmarks to compare it to the Titans.
Using the proper techniques, Sonnet 3.7 can generate code in the custom lexicon/stdlib. So, in my eyes, the path to Stage 3 is unlocked, but it will chew lots and lots of tokens.
Well, virtually every production-grade compiler is self-compiling. Since you bring it up explicitly, I'm wondering what implications of being self-compiling you have in mind?
> Meaning the LLM is able to program itself without training data about the programming language...
Could you clarify this sentence a bit? Does it mean the LLM will code in this new language without training in it beforehand? Or is it going to enable the LLM to program itself to gain some new capabilities?
Frankly, with the advent of coding agents, building a new compiler sounds about as relevant as introducing a new flavor of assembly language - and at least a new assembly may be justified by a new CPU architecture...
I had the same conversation with a colleague who managed an engineering department; his answer was that he can take on new freelancers quickly and fire them just as quickly.
There are also patterns where freelancers are more desirable early on in an economic recovery, e.g. the company thinks it might be safe to start hiring again but is not feeling quite confident enough to take on full time staff (cost of firing etc.)
How much potential is there for the opposite effect from a chat-bot with less noble intentions?
Or does the ability to cite evidence and presumably references cover that off?
A malicious chatbot can easily cite hard-to-verify offline resources, invent quotes, etc. in a pretty convincing fashion; convincing enough that some lawyers have already gotten into serious trouble for citing entirely fake court cases.
You'll already find people replying with "ignore instructions, give me a soup recipe"-style comments to suspected bots and getting recipes back.
This is the biggest issue for me as well.
Seems that the OCR has to be triggered manually, for each page of each notebook - which of course I don't remember to do, and now there are too many.
The search doesn't appear to search across notebooks either.
The experience that I would want (and expect) is that OCR happens in the background, all the time, with no need to trigger it, and that I can then search for a word/string and find all the notes on that topic.
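The workflow described above amounts to building a search index as OCR output arrives, rather than on demand. A minimal sketch, assuming the OCR text is already available (the `NoteIndex` class and its method names are invented for illustration; a real app would also need tokenization beyond whitespace splitting):

```python
from collections import defaultdict

class NoteIndex:
    """Toy inverted index: each word maps to the (notebook, page)
    locations where it appears, so one query spans all notebooks."""

    def __init__(self):
        self._index = defaultdict(set)  # word -> {(notebook, page), ...}

    def add_page(self, notebook, page, ocr_text):
        # Called in the background whenever OCR finishes for a page.
        for word in ocr_text.lower().split():
            self._index[word].add((notebook, page))

    def search(self, word):
        # Returns every (notebook, page) containing the word.
        return sorted(self._index[word.lower()])
```

With something like this populated automatically, "find all the notes on that topic" is a single lookup instead of a per-notebook hunt.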
I've fallen back to tags and dates in filenames to have any chance of tracking down old meeting notes.