- agents tend to need (and usually already have) a filesystem anyway to be useful (not technically required, but generally true: they're already running somewhere with a filesystem)
- LLMs have a ton of CLI/filesystem stuff in their training data, while MCP is still pretty new (FUSE is old and boring)
- MCP tends to bloat context (not inherently, but in practice it usually does)
The UNIX philosophy is really compelling here (more so than MCP being bad). If you can turn your context into files, agents likely “just work” for your use case.
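As a rough illustration of "turn your context into files" (entirely made-up layout and file names, just to make the idea concrete):

```cpp
// Made-up illustration of "turn your context into files": dump whatever the
// agent needs into plain text so it can ls/cat/grep its way around with the
// CLI skills it already has. Paths and contents here are invented. C++17.
#include <filesystem>
#include <fstream>

int main() {
    std::filesystem::create_directories("context/tickets");

    // One small, greppable file per record instead of an MCP tool call.
    std::ofstream("context/tickets/1042.md")
        << "# Ticket 1042\nstatus: open\nsummary: login page times out\n";

    // A README so the agent can discover the layout on its own.
    std::ofstream("context/README.md")
        << "Tickets live in context/tickets/, one markdown file each.\n";
    return 0;
}
```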
Do I really need to care about this? I'd really hoped I could just not bother wrapping things in std::move and let the compiler figure it out.
I.e. if I have
```
std::string a = "hi";
std::string b = "world";
return {a, b}; // std::pair
```
I always assumed the compiler figures out that it can move these things?
If not, why not?
My IDE tells me I should move; surely the compiler has more context to figure that out?
I think there's a difference in consequences between the IDE being sure enough that a std::move is warranted to issue a lint, and the compiler needing to be 100% provably certain that inserting a move won't cause any issues.
Sure, but by the sound of the article, the compiler won't do the right thing?
Effectively: I'm a C++ novice, so should I ever sprinkle in std::move (under the constraints of the article)? Or will the compiler figure it out correctly for me, so I can write my code without caring about this?
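To make the earlier snippet concrete (a sketch of my understanding, not a definitive answer): in `return {a, b};` the names a and b are lvalues inside the braced initializer, so the pair's members are copy-constructed; the implicit "move on return" only applies when you return a single local by name. An explicit std::move changes that:

```cpp
// Sketch of the question above: does `return {a, b};` copy or move?
// Assumes C++17 or later; function names are just for illustration.
#include <string>
#include <utility>

std::pair<std::string, std::string> copies() {
    std::string a = "hi";
    std::string b = "world";
    // a and b are named lvalues inside the braced initializer, so the pair's
    // members are copy-constructed; implicit "move on return" only applies
    // when returning a single local variable by name (e.g. `return a;`).
    return {a, b};
}

std::pair<std::string, std::string> moves() {
    std::string a = "hi";
    std::string b = "world";
    // Explicit std::move turns a and b into xvalues, so the pair's members
    // are move-constructed instead of copied.
    return {std::move(a), std::move(b)};
}
```

For tiny strings like "hi" the small-string optimization makes the copy cheap anyway, which is part of why opinions differ on whether the lint is worth following.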
I do want a setup like this; however, most of my development is on Windows, which means the license cost is usually higher than the cost of the VM. I could run VMs on my home machine, but even then the terminal experience is quite poor. You also want something mobile-native to check the code and read the plans. So far I have been using TeamViewer to access my home desktop, which works, albeit it's annoying to use and I don't get proper notifications. Perhaps a web-first approach with a mobile-responsive web app would work, showing the project files as well as the terminal.
The anti-cheat streams executable code into the client, and that code is mostly for detecting tampering with the game, injected modules, etc.
Not sure they care about it running in an emulated environment.
They do effectively allocate an executable memory region, copy the machine code that was streamed into it, and jump to it.
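Roughly this pattern, as I understand it (a minimal POSIX sketch, not the anti-cheat's actual code; assumes plain x86-64 Linux where a single RWX page is allowed, while hardened or macOS/Rosetta setups need W^X or MAP_JIT handling):

```cpp
// Sketch of "allocate executable memory, copy streamed machine code in, jump to it".
#include <sys/mman.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    // Stand-in for the streamed bytes: x86-64 for `mov eax, 42; ret`.
    const uint8_t code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    // Allocate a page that is readable, writable, and executable.
    void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;

    // Copy the code in and jump to it via a function pointer.
    std::memcpy(mem, code, sizeof(code));
    auto fn = reinterpret_cast<int (*)()>(mem);
    std::printf("%d\n", fn());  // prints 42

    munmap(mem, 4096);
    return 0;
}
```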
I guess in this case the emulation is an actual VM, rather than "rewrite x86 instructions into ARM" (I don't know much about this subject, but I assumed that was how Rosetta worked).
Rosetta 2 rewrites x86 instructions into ARM, but it does this on the fly for generated instructions too. When you put x86 machine code into a buffer and then jump to execute it, Rosetta 2 dynamically translates those generated instructions into ARM before executing them.
At least that's what I gathered around the time it was released. It seems to hold up; JITed x86 applications work great under Rosetta 2.
In my experience they don't work great. The JVM straight up crashed when Rosetta 2 was released, and a few years later it worked, but with a huge performance drop. Better than nothing, for sure.
It looks like I can find a Teradici card for $50-200 (used to new), which is in a similar range as the JetKVM. However, according to the installation manual that I found [0], you still need to plug the DisplayPort connector on the Teradici host card into the GPU output port(s).
I guess this could be a good contender for replacing Spark. However, I suspect the fact that Spark is free and open source, which builds a community around it, means that dpolars might struggle to gain traction when it's gated behind a credit card.