Everything that's visible to the compiler is subject to automatic inlining. That is all code in the current crate (compilation unit), all concrete instantiations of generics (regardless where defined), and all functions marked inline (regardless which crate).
Stdlib containers are all in the generics category.
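A quick illustration of those cross-crate rules, with hypothetical functions (names are made up for the example): a plain function needs `#[inline]` for its body to be visible to other crates (absent LTO), while a generic is instantiated in the calling crate and is therefore always a candidate.

```rust
// Imagine these live in a library crate.

// Without #[inline], callers in other crates only see a symbol,
// not the body, so they can't inline it (unless LTO is enabled).
#[inline]
pub fn square(x: u64) -> u64 {
    x * x
}

// A generic function is monomorphized in the *calling* crate for
// each concrete type, so it's inlinable there regardless of #[inline].
pub fn twice<T: std::ops::Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    assert_eq!(square(7), 49);
    assert_eq!(twice(21), 42);
}
```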
> all functions marked inline (regardless which crate).
Is this a problem in Rust? Is too much or too little marked inline? And what if you really need to inline some function from a library that the author didn't mark inline?
> Get this, say a project takes a year to complete. The concept is saying a 10x engineer can do this in about a month.
The true 10x engineer looks at the project, sees the inherent needless complexity, goes back to the sponsor, and uses his business knowledge to renegotiate the specs, leading to a reduced scope with 98% of the business value and 10% of the work.
I don't think so, no, or rather, I think you might lose out on "the web" part of it. That is, that the web really is a bunch of stuff, all accessible in one "thing", and stuff can and does "link" to various other stuff.
E.g., consider an OIDC log in. It's really one app (the relying party) redirecting to a whole different app (your SSO of choice). You can't exactly do that in another app without, I think, really running into issues of "is it my SSO, or a phish?". The browser provides that trusted layer of "I am looking at this app" (via the URL bar). And even then … that's still fraught with immense peril.
It's also a distribution mechanism: I don't have to download Slack, Discord, Postman, etc. — I just go to a URL, and the browser downloads the code needed. (I can and do download some of these, and there are some advantages to doing so. But extend that to every app I use on the web: my bank, Turbotax, my email, my three different loan payment sites, my landlord's payment site … that'd be far too many downloads.)
Never combine new tech with new functionality. If you want to learn new tech, use it to rewrite an old project that was due anyway. If you want to build new functionality, use tech that you know.
This has nothing to do with Rust. I've seen the exact same thing happen with golang in a C++-only environment. Long project, took forever, failed slowly, took a week to rewrite in C++.
You get the same with e.g. tokio::spawn, which runs the future concurrently and returns something that you can await and get the future's output. Or you can forget that something and the future will still run to completion.
Directly awaiting a future gives you more control, in a sense, as you can defer things until they're actually needed.
Then the app relies purely on the server's TLS cert for MITM mitigation. This way, the QR can contain a signed reply to the code, which adds a layer.
Wait, I don't get it. I understand that the server is signing a challenge with a key presumably known to the client. But why can't the app submit the challenge programmatically upon scanning a QR code? It would still verify the signature!
Apart from CGAT being a 2-bit alphabet, changing the alphabet does not change the information density. Expect this kind of transformation to have no impact on the compressed result with most general-purpose algorithms.
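To see why: Shannon entropy per symbol is invariant under a one-to-one relabeling of the alphabet, which is why a general-purpose compressor gains nothing from the remapping. A small stdlib-only sketch (the sequence and mapping are made up for illustration):

```rust
use std::collections::HashMap;

// Shannon entropy in bits per symbol of a byte string.
fn entropy(s: &[u8]) -> f64 {
    let mut counts: HashMap<u8, usize> = HashMap::new();
    for &b in s {
        *counts.entry(b).or_insert(0) += 1;
    }
    let n = s.len() as f64;
    counts
        .values()
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum()
}

fn main() {
    let dna = b"CGATCGATTACG";
    // A 1:1 remap (C->0, G->1, A->2, T->3) permutes the symbols but
    // leaves the frequency distribution, and hence the entropy, intact.
    let remapped: Vec<u8> = dna
        .iter()
        .map(|&b| match b {
            b'C' => 0,
            b'G' => 1,
            b'A' => 2,
            _ => 3,
        })
        .collect();
    assert!((entropy(dna) - entropy(&remapped)).abs() < 1e-12);
}
```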
First, this is about data access and not programming.
Second, that quote is from a time when compiler optimizations did not exist, and the programmer was expected to use all kinds of clever tricks to speed up code (what today you get for free with -O3). That kind of hand-optimization is the context of the quote, and it's hardly ever appropriate to just throw it into some discussion about optimization.
https://en.m.wikipedia.org/wiki/Lift_(force)