>if you run code on your computer, it can run code on your computer
For the love of God, will someone please just make a web browser that isn't a web browser, just a cross-platform multimedia sandbox with a couple of APIs in it, where you can run programs written in Rust or something, and it doesn't let the programs touch your file system without explicit permission? That would solve 99% of the application use cases. That's literally everything I want. I want the safety of the browser, outside the hell that is web development.
I think the rest of your sentence was "by default" which is the same thing the comment you're replying to said: "security gets in the way of everything"
The problem is that defining a reasonable policy for any modern app is a gargantuan pain -- as is the case with any security policy language -- so, as the GP said, people hated it and now it's dead: https://openjdk.org/jeps/411
I think a key part of solving that is not thinking of it as a set of security enforcement rules on top of the preexisting platform, but as a new platform (that just runs everywhere). So, instead of writing ACLs for what files can be accessed, shove the app in a sandbox where it has its own files, and the platform's open-file dialog lets the user authorize one-time access to individual files.
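That model can be sketched in a few lines. This is a toy illustration only, with made-up names (`AppSandbox`, `user_grants_file`) standing in for platform APIs; a real platform would enforce the boundary below the language, and the "dialog" here is simulated:

```python
import os
import tempfile

class AppSandbox:
    """Toy sketch: each app gets its own private directory, and can only
    reach outside files via a handle the platform hands it."""

    def __init__(self, app_id: str, root: str):
        # Every app sees only its own directory; no global filesystem API.
        self.private_dir = os.path.join(root, app_id)
        os.makedirs(self.private_dir, exist_ok=True)

    def open_private(self, name: str, mode: str = "r"):
        # Reject paths that resolve outside the sandbox directory.
        path = os.path.realpath(os.path.join(self.private_dir, name))
        if not path.startswith(os.path.realpath(self.private_dir) + os.sep):
            raise PermissionError("path escapes sandbox")
        return open(path, mode)

def user_grants_file(path: str):
    """Stand-in for the platform's open-file dialog: the *user* picks a
    file, and the app receives an already-open handle, never the path."""
    return open(path, "r")

# The app works freely inside its own sandbox...
sandbox = AppSandbox("image-editor", tempfile.mkdtemp())
with sandbox.open_private("settings.txt", "w") as f:
    f.write("theme=dark")
# ...but touching anything else requires a user-granted handle.
try:
    sandbox.open_private("../../etc/passwd")
except PermissionError as e:
    print("blocked:", e)
```

The key design point is that the app never holds raw paths outside its own directory, only handles the user explicitly granted, so there is no policy language to get wrong.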
You basically can't take a complex thing and write complex security rules for it and expect success & real world adoption.
It's called iOS. Browsers are also NOT safe. You know what was safe? Not letting random endpoints ship you code to run. HTML was safe, though implementations at the time likely had security flaws.
You cannot make a Turing-complete language that JIT-compiles into machine code and verify it as "safe". Machine code is not safe, so anything that lets you generate arbitrary machine code cannot be proven to be safe. If you take away the arbitrary machine code generation from JavaScript, it's too slow to run the modern web.
Then don't compile it into machine code? The problem is in application development, not low-level programming. If a random person on the internet makes an application, there's a non-zero chance it's malware if you try to run it. It shouldn't be that dangerous. It's ridiculous that it still is that dangerous after decades of desktop computing and that the only way to avoid it is anti-virus heuristics.
All we want is to get rid of the possibility of an application developer including evil code.
We could have a fully interpreted language layer running on a platform that never lets application code touch the file system. How do applications do fast stuff like GUI then? You just have a package manager with libraries that can do low-level stuff but are vetted so they don't expose APIs that let application code interact with the file system. That way, in order to exploit a user's computer, you need to exploit a flaw in a library thousands of other programmers use instead of just importing std io.
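The shape of that idea can be shown in miniature. Note the loud caveat: Python's `exec` is NOT a real security boundary (it is famously escapable), so this is only a sketch of the architecture, with `vetted_draw_rect` as a hypothetical stand-in for a vetted low-level library:

```python
# Toy illustration: application code runs in an interpreter that exposes
# no filesystem primitives, only vetted library functions.
# (Python's exec is NOT a real sandbox -- this only sketches the shape.)

def vetted_draw_rect(x, y, w, h):
    # A "vetted library" call: does low-level work (here, pretend GUI)
    # without ever handing the app a file or socket handle.
    return f"rect({x},{y},{w},{h})"

# Only these names exist for app code: no open, no __import__.
SAFE_BUILTINS = {"len": len, "range": range, "print": print}

def run_app(source: str):
    # App code simply has no way to name the filesystem APIs,
    # so "evil code" has nothing to call.
    env = {"__builtins__": SAFE_BUILTINS, "draw_rect": vetted_draw_rect}
    exec(source, env)

run_app("print(draw_rect(0, 0, len('ab'), 4))")  # works
try:
    run_app("open('/etc/passwd')")               # no such name here
except NameError as e:
    print("blocked:", e)
```

In a real system the interpreter boundary would be enforced by the runtime (as WASM does with its import model), not by filtering builtins.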
A lot of security seems geared toward server environments where you are only dealing with code you fully trust, like the left-pad library. If bad code broke your server, you could really just load a backup. But most people using computers are on their personal computers, a majority of them have no backup, and they are downloading and running random programs all the time. It makes it harder for both desktop application developers and their users if there isn't a sandboxing layer in the middle. It's probably one of the factors killing desktop apps in the first place, since most users can trust a website that is an image editor, but fewer would install an image editor, because it could contain a cryptominer, or ransomware, or a virus, or whatever.
You're skipping over a lot of pragmatic middle ground between "full hardware access" and "verifiably safe" (i.e. formally proven?) here.
An absence of Turing completeness and JIT compilation is neither necessary (see sandboxing) nor sufficient (see various exploits against media codecs, PDF parsers, etc.) to ensure safe processing of untrusted data, whether that data happens to be "actual data" or code.
You can make your own life easier or harder with your choice of sandboxing target, though: x86 Win32 binaries are probably harder to sandbox in a working and secure way than e.g. WASM/WASI.
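WASI makes the deny-by-default model concrete: a module sees no filesystem at all unless directories are explicitly preopened. A sketch using the wasmtime CLI (module name is a placeholder):

```shell
# Module runs with no filesystem access at all by default:
wasmtime run module.wasm

# Grant access to exactly one directory, nothing else:
wasmtime run --dir=./appdata module.wasm
```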
I still can't use a password manager to keep my Apple account secure. You must memorize your password, and be able to type ... uh, I mean, draw, no, write? your password on a watch as well (if you get one of those).
iOS is not exactly safe until I can use it without knowing my Apple password.
My watch somehow became unpaired from my phone and needs my password. I just ignore the prompt because all attempts to enter the password fail for one reason or another. Even moving my wrist too much or taking too long clears the prompt.
On a related note, I appreciate the ability to specifically disable JavaScript JIT in GrapheneOS' browser, Vanadium. Theoretically, it's a nice balance of maintaining site compatibility (as opposed to disabling JS entirely) and reducing one's attack surface.
AppData is specifically where apps store data, and there are and were plenty of legitimate examples where you want some code to access data from an app in there.
The entire point is that it is not meant to be a secure location, was never meant to be a secure location, has no intended security features etc. If you store your passwords in a text file on the desktop, that is also insecure but you would be wrong to say Notepad has a security vulnerability. Similarly, if you stored your passwords in the Windows registry unencrypted, that would also be insecure, but does not demonstrate a flaw in the Windows registry.
If you want to be able to leave your secrets in the open without them being compromised, then you encrypt them.
Browser password managers are not secure. That is not Windows' fault.
It isn't full unrestricted disk access for all users and all code. Any OTHER user, or code running with that user's permissions cannot access YOUR appdata directory. The appdata stuff was the running user's appdata. They already had total control of the user's machine, and in fact, had control of that user's domain administrator! This attack is only possible if you have control of the user's domain administrator AND data access to the user's machine so that you can use both the locally stored Bitwarden data AND the domain's backup decryption keys. The phone OS model wouldn't work here. The security compromise happened when the domain administrator account was breached.
I tell myself and other people: if you have it saved in your browser, are you okay with bad people knowing that password? It also makes it easy for people in authority to get that password with a simple court order.
Most average people are wary of password managers because the idea of losing the god password and losing access to EVERYTHING is terrifying, and there is mathematically no way to recover your secrets. Most normal people have lost a password before, so that's something they think about.
Also for most normal people, an unencrypted note on their desktop with plaintext passwords that are DIFFERENT FOR EVERY SITE is STILL more secure than the SOP of using one strong password for everything. For that to be compromised, someone needs to be able to run code on my local machine, in which case, they can just install a keylogger, so encrypted passwords are no increase in security. I genuinely don't care if App1 on my computer can fiddle with App2's bits, because I chose to run App1 and App2, they are trusted.
Yes, otherwise known as "if you run code on your computer, it can run code on your computer".
If a random python program can "decrypt" the passwords, that's not encryption. And browser password management isn't about security, but convenience.
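That point can be made concrete with a toy model. The cipher, key name, and file layout below are all made up for illustration; the point is only that when the key sits next to the ciphertext (as it must for silent autofill), any program that can read the profile can "decrypt" it, so no secret is actually protecting anything:

```python
# Toy model of "any program can decrypt it, so it isn't really encryption".

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Trivial reversible transform standing in for the real cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The "password manager" writes both pieces to the same profile dir,
# because it must decrypt without prompting the user:
key = b"local-profile-key"
ciphertext = xor_bytes(b"hunter2", key)
profile = {"key": key, "vault": ciphertext}

# Any random program that can read the profile gets the plaintext:
stolen = xor_bytes(profile["vault"], profile["key"])
print(stolen)  # b'hunter2'
```

Real protection would tie the key to something the malware doesn't have, like a user-entered master password or OS-level hardware-backed key storage.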