jjk caused a lot of problems for me when using multiple agents. It was running some sort of jj command that snapshotted the working copy and caused divergence (it might have benefited from `--ignore-working-copy`). I'm not sure what the precise details were, but I gave up and uninstalled it after a week.
Running multiple agents is definitely tempting fate. Concurrent modification of the same git repo by multiple entities?
At that point you should use multiple repos so they can merge & resolve.
EDIT: of course, if a single agent uses git instead of jj to modify the repo, jj may have trouble understanding what happened. You could compare it to using an app backed by an SQLite database, and then also editing that database by hand.
The point of jj is that it supports lockless concurrent writes to the same repo out of the box. That's what makes it far more suitable than git for agent workflows.
The OP is talking about the `jj workspace add` command, which creates a separate working copy backed by the same repository. It’s not a bad way to work with multiple agents, but you do have to learn what to do about workspace divergence.
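For anyone who hasn't used it, the flow looks roughly like this (a sketch; the paths here are made up):

```sh
# Create extra working copies backed by the same repo, e.g. one per agent:
jj workspace add ../agent-a
jj workspace add ../agent-b

# See which workspaces are attached to the repo:
jj workspace list

# If another workspace changed things underneath you, the working copy
# becomes stale; this brings it back in sync:
jj workspace update-stale
```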
Even with multiple workspaces (like git worktrees), once you use something like jjk, both the agent and jjk in the associated VS Code are operating on the same workspace, so that doesn't isolate enough. I don't think jjk uses `--ignore-working-copy` for read-only status updates, so it's snapshotting every time it checks the repo status while the agent is editing.
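For context, this is the difference I mean (a rough sketch; I haven't verified what jjk actually runs):

```sh
# A plain status check snapshots the working copy first,
# which is exactly what races with an agent mid-edit:
jj st

# The same check without snapshotting or touching the working copy:
jj st --ignore-working-copy
```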
On top of that, throw in whatever Claude does if you "rewind" a conversation that also "reverts" the code, and agents wrongly deciding to work on code outside their focus area.
It's possible Watchman helps (I need to look into that), but I so rarely use jj in VS Code (all I really want is inline blame) that it was easier to remove jjk than to try to debug it all.
Divergence won't hide or lose any work, but it's an annoying time-suck to straighten out.
Modern EVs with thermal management simply haven't been around long enough for a significant number of them to reach 1 million miles, especially those with LFP cells.
There are some taxis and limo service Teslas that famously did make it to 300-400k+ miles on their original pack.
Battery swapping is a dead technology; it is simply not economical. It is too expensive, much harder to scale, and incompatible with cell-to-chassis designs. The industry barely managed to agree on a charging connector!
Meanwhile, battery longevity is essentially a solved problem. Manufacturers do have an incentive to improve it due to customer demand, and modern NMC chemistry, cooling, and BMSes have improved significantly, to the point where packs are expected to retain 70-85% capacity after 10 years[1], which is far from worthless. At this point, components like the motor are likely to fail before the battery does.
Given the much lower failure rate of everything else in an EV, TCO is dramatically better than that of ICE cars, even accounting for degradation[3].
Manufacturers like Mercedes even guarantee 70% health after 8 years (a worst-case estimate).
There is a significant commercial incentive for aftermarket battery repair shops. EVClinic[2] is very successful and a glimpse into the future.
The Tesla Model S has been out for almost 13 years, so you can already see how this plays out.
Your phone doesn't have liquid-cooled thermal management and is probably recharged daily. With a car that has 300 miles of range, a lot of people probably only go through a full charge cycle once a week.
So 7000 to 8000 euros to replace the battery of an 80 to 100k car?
It depends on how many miles it has been driven and how much other maintenance the car has had. It's a big expense, but a battery dying is probably comparable to a timing belt breaking; those aren't cheap either, and that's not even on luxury cars...
First of all, as your article shows, batteries rarely need replacement at all, even at very high mileage. And those are old vehicles; battery management and cell chemistry are much better now.
We had a 2010 Ford Transit van (diesel), and after 189,000 km we sold it because parts were becoming too hard to source (disclaimer: in New Zealand).
13-year-old dead luxury cars are worthless, yes, especially when the tech is evolving quickly. But that doesn't say anything about how long it takes for them to die or how reliable the tech is.
The minimum capital requirement is the whole point of a GmbH. If you don't want it, you can found a UG, which is the same thing with no capital requirement.
Trustworthiness. You know a GmbH has at least €25,000 you can sue them for. And a UG has to put part of its profits toward becoming a GmbH, so eventually everyone big enough is a GmbH.
If Kernel Lockdown is enabled, a zero-day exploit is required to bypass module restrictions without a reboot.
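For reference, you can check whether lockdown is actually active on a given box (assuming the lockdown LSM is built in; the module path below is hypothetical):

```sh
# The active mode is shown in brackets, e.g. "none [integrity] confidentiality":
cat /sys/kernel/security/lockdown

# In integrity or confidentiality mode, loading an unsigned module
# is refused even as root (this is expected to fail):
sudo insmod ./example.ko
```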
Unfortunately, threat actors tend to have a stash of them, and the initial entry vector often involves one (a container or browser sandbox escape via a kernel bug), and once you have that, you are already in ring 0 and one flipped bit away from loading the module.
The Linux kernel is not really an effective privilege boundary.
A KVM (or, in Qubes' case, Xen) hypervisor is not perfect; sandbox escape has been demonstrated even against https://qubes-os.org/. On modern AMD/Intel/ARM64 consumer processors it is not possible to completely prevent keys from bleeding across isolation domains.
Only the old Sun systems with hardware-encrypted MMU pages could actually enforce context isolation.
If performance is not important, and people are dealing with something particularly nasty... then running an emulator on another architecture is a better option. For example, macOS on an M4 with a read-only Windows amd64 backing image as the guest OS is a common configuration.
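A minimal sketch of that kind of setup, assuming QEMU on an Apple Silicon host and a hypothetical win-amd64.qcow2 image (a real Windows guest typically needs more, e.g. UEFI firmware and drivers):

```sh
# Full x86_64 emulation (TCG) on an arm64 host; -snapshot keeps the backing
# image read-only by writing guest changes to a temporary overlay instead.
qemu-system-x86_64 \
  -m 4096 \
  -smp 4 \
  -snapshot \
  -drive file=win-amd64.qcow2,format=qcow2
```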
It was a PoC from shortly after the Spectre CVE dropped, and I'm not sure if the source code ever made it into the public. I heard about the exploit in a talk by Joanna Rutkowska, where she admitted the OS could no longer fully meet TCSEC standards on consumer Intel CPUs. YMMV
The modern slop-web makes things harder to find now, and I can't recall specifically whether it was anything more than a common hypervisor guest escape. =3
They have coexisted with humans just fine over the past couple years.