Assume your devices are compromised (go350.com)
317 points by _oe8s on April 17, 2022 | hide | past | favorite | 188 comments


These are fun thought experiments, but I think having a personal Disaster Recovery plan is a far more applicable security exercise. What would you do if you lost your phone? If you were locked out of your google account? If you forgot your password manager master password? If your home was destroyed in a fire? Having a secure plan for quickly recovering from these scenarios is more important than trying to keep state actors or cybergangs out of your system, unless you are a VIP.


My plan is just printed backup recovery codes for things that need 2FA, written master passwords, and normal hard disk backups.

The extra hidden part of the plan is that I try to avoid things that aren't traceable to a trusted human help desk. Anything that involves the words "manage your own private key" is a point of failure that needs a lot of care.


> The extra hidden part of the plan is that I try to avoid things that aren't traceable to a trusted human help desk. Anything that involves the words "manage your own private key" is a point of failure that needs a lot of care.

How do you handle your password manager (assuming you use one)?


I just wrote up instructions for accessing my backups for my wife and friend, and my critical accounts have dead-man switches to give my wife access to everything she would need.

Assuming what she needs is access to my email (yes) and a gigabyte of photos from my drunk college days (no).


What are you using for a deadman switch?


The system I'd trust most is to have 2 attorneys given sealed envelopes with instructions to give them to your wife on your death. One attorney is given an envelope with the passwords to your accounts (or to your password manager) encrypted with a one-time pad. The other attorney is given an envelope with the key to it. Each envelope includes instructions to decrypt. If you're giving her a password to a password manager, make sure she has access to the latest password archive and the software to open it.

Tedious, yes. But fairly reliable and you don't have to place any trust in your attorneys at all, unless they find out who their counterpart is and start working together (very unlikely). If this system wouldn't work for you, you've probably got bigger problems than having to worry about your wife getting into your email after you die.
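For the curious, the two-envelope construction is just XOR with a random pad. A minimal Python sketch (the example secret is illustrative; a real pad must be truly random, at least as long as the message, and never reused):

```python
import secrets

def otp_encrypt(plaintext):
    """XOR the message with a fresh random pad of equal length.
    Ciphertext goes in one envelope, the pad in the other."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext, pad):
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

secret = b"vault password: hunch-quart-lemon"  # illustrative
ciphertext, pad = otp_encrypt(secret)
assert otp_decrypt(ciphertext, pad) == secret
```

Neither attorney alone learns anything: without the pad, the ciphertext is information-theoretically indistinguishable from random bytes.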


And for those of us who aren't wealthy enough to have two different attorneys on retainer, giving your spouse the encrypted password store along with the key to a safety deposit box containing the decryption key would have to do.

Personally, my wife would wonder why I'm going to so much trouble to keep my passwords secret from her until I die; but then my personal password store is for services we share like banking, and any passwords she doesn't know are benign things like my email addresses and various website logins that wouldn't matter anyway when I die. Of course I also have my work passwords (I'm the IT manager), but my supervisor and the company owner each have a secured store of all of my work passwords as well, plus the master password to access them, in the event something happens to me (or I'm just on vacation for a week and temporarily unreachable when access is needed).


This is more of a thought experiment, but would also serve as a decent backup in case of natural disaster, house burning down, etc. As for safety deposit boxes, those can be closed out for nonpayment, bank branches will shut down, etc. Not the best hands off long term solution. An attorney will let you know if there's an issue because it's their ass on the line if they don't.


Unless they die.


That's why you use Shamir's Secret Sharing...

https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing
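The core of Shamir's scheme fits in a few lines — a toy Python sketch over a fixed 127-bit prime field (not constant-time or audited; use a vetted library for real secrets):

```python
import secrets

PRIME = 2**127 - 1  # Mersenne prime; the secret must be smaller than this

def _eval_poly(coeffs, x):
    # Horner's rule, all arithmetic mod PRIME
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret, n, k):
    """Split an integer secret into n shares; any k of them recover it."""
    assert 0 <= secret < PRIME
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def combine(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789  # any 3 of the 5 suffice
```

Any k-1 shares reveal nothing about the secret, which is exactly what makes it attractive for the dead-attorney problem: no single holder is a point of failure.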


If they work at a firm, that's all taken care of.


I'd just write down my accounts and passwords on an index card, laminate it, and give it to my wife to store in the safety deposit box.

Much less tedious, and concentrates trust in the one person who should have it - the spouse!


Last I paid attention, some jurisdictions make it very difficult to access the contents of a safe deposit box after one of the owners has died. (Supposedly to discourage cheating on inheritance taxes with gold coins, or ...)

A bit of research might be indicated, before trusting this strategy to perform when needed.


She could store it in her safety deposit box. Or in a fire safe we keep at home. Or buried in a tin can in the backyard.


Google has Inactive Account Manager https://myaccount.google.com/inactive


Google and Bitwarden both have built-in features for this.


So does Lastpass.


For sure. I first focused on setting up a homelab and de-googling my phone and whatnot for privacy reasons, but it's nearly impossible to not be spied on. You can certainly reduce it. In the end I'm still doing this for data sovereignty more than for privacy now, but I don't mind the boost in (imperfect) privacy.


Well, you gain privacy from corporations. You have to keep your software up-to-date, though, or you risk losing privacy to hackers.


Or you can simply run an application level default-deny firewall and never use a web browser. You don't need to keep your software up-to-date in a homelab.


But then you can't access your data outside your local network. That may be acceptable depending on your use case, but at that point it's not a 1-to-1 alternative to cloud services.


SSH tunnel with passwords disabled; use ed25519 keys. Or WireGuard if you are feeling adventurous.


So then you need to keep SSH or WireGuard up-to-date (at least in terms of security patches).

Also, are you going to SSH in every time you need to access a document from your phone? Again, use-cases differ, but that's not a 1-to-1 alternative to, say, Dropbox.


You would use wireguard for that use case, possibly on a regularly updated computer, and update your network firewall rules to accommodate that setup.


Yes, but you agree you need to apply security patches in that case, right?

Your original comment amounted to "you don't need to apply updates if you firewall everything", to which I replied "that's not a replacement for a cloud service". Your subsequent comments then amount to "well you can just poke a hole in your firewall for WireGuard". So which is it, do you need to apply updates (e.g. to WireGuard) or not?


I suppose you can maintain secure remote access if you run a very minimal WireGuard server on a low-power device similar to a Raspberry Pi running an updated/patched distro. You can still keep 99% of your gear running in the back without updates. This way the amount of update churn can be minimized.


I forgot my master password two days after setting it. I thought I had stored it somewhere, and my guess is the phone shook after I typed the master key and undid some of the characters. FML moment. Spent a week resetting passwords. I only set passwords once. We use Bitwarden. It was pretty much a fuckery.


This is why I have so far avoided password managers: single point of failure. What if you forget your master password. Or worse - what if your master password is stolen?



... or what if you are traveling and your phone and laptop are lost or stolen?


Your Bitwarden vault is hosted, and only an encrypted copy is stored on your phone/laptop. So unless the thief knows your master password, you should be fine.
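For what it's worth, the "encrypted with your master password" part works roughly like this — a hedged sketch of the kind of key derivation Bitwarden uses (PBKDF2-SHA256 with the account email as salt; the parameters here are illustrative):

```python
import hashlib

def derive_vault_key(master_password, email, iterations=600_000):
    """Stretch the master password into a 256-bit vault key.
    The salt (the account email, as Bitwarden uses by default)
    prevents precomputed rainbow-table attacks across accounts."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        email.lower().encode("utf-8"),
        iterations,
        dklen=32,
    )

# Same inputs always derive the same key; the server only ever
# stores the vault ciphertext, never this key or the password.
key = derive_vault_key("correct horse battery staple", "me@example.com")
assert len(key) == 32
```

The point is that the decryption key is re-derived on each device from the master password, so a stolen laptop holds only ciphertext.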


I understand that the thief won't have access to my accounts, but now I don't either - correct?


You would still have access to your accounts as the credentials are hosted by BitWarden or by yourself (on a personal server for example).


It depends on whether you have access to the second factor. I have Authy installed on two phones for that reason.


I debate this with myself often. Short of renting a security box and telling people I trust about it, I haven’t come up with a strategy for the master password. At the moment, I’ve resigned myself to the feeling that if I lose my memory, maybe it’ll be the opportunity for a fresh start, and so losing everything is a feature not a bug.


Shamir Secret Sharing could be the answer you're looking for... I'm sleeping better at night

here's a quick blog post I wrote with my plan. The app is trivial to write if you find a library for your preferred language

https://g3rv4.com/2022/04/a-plan-for-my-secrets


If you lose your memory, you'd better have instructions written clearly on paper, not in some app which you won't remember how to build or install.


instructions are on my blog :)


And if you can't update your blog, or can't pay for it because you lost your memory? The low-tech solutions can't be beat, especially if you expect others to help you pick up the pieces with minimal technical sophistication.


I agree, if I lost my memory and can't remember how to access my blog or run docker then I'd be out of my digital life.

In that scenario though, I'd also be out of my digital life even if I had access to 1password.


I wrote a tool for this[1,2], though it's still a work in progress (all the features work but I still need to finalise the QR data format and work on user-friendly interfaces).

[1]: https://github.com/cyphar/paperback [2]: https://youtu.be/GI9rKdM9rB8


I wish I found your app before I wrote mine [1] :) you seem to be way better versed in cryptography than I am. What's the advantage of having the main document and the keys separated?

[1]: https://g3rv4.com/2022/04/using-shamir-secret-sharing


I wouldn't say I'm very well-versed in cryptography. The reason they're separated is that it allows you to:

  * Further split up the trust, such that the key shards can be held by one group who don't have access to the document (maybe you keep a copy of the document with a lawyer but distribute the keys among your friends and family, so that if your lawyer is hacked or bribed they can't reveal the secrets; same goes for if your friends conspire against you).
  * Make the shards small, independent of the document size, so that they're always practical for friends to store even if you have a very large document to save.
  * Do a quorum expansion (create new shards that are compatible with the existing shards) without revealing the secret.

To be fair, for practical uses this is not super necessary, but it adds flexibility without losing anything in return (I would argue the quorum expansion point is actually a useful feature).


Bitwarden (premium) has a really nice service of granting access to emergency contact based on prespecified wait time [1]. This can be used in any emergency situation.

[1] https://bitwarden.com/help/emergency-access/


Just send four trusted family members half of the passphrase in a sealed envelope and tell them what it's for. If your family has a lawyer or safe deposit box, trusting that instead is a 1000x better option.


Lawyer yes, safe deposit hell no.

Three reasons:

- Banks fubar safe deposit boxes all of the time, in a variety of ways.

- Once the bank figures out that you’re dead, it’s sealed without a court order.

- As you get older it’s more likely that you’ll screw up payments, lose keys or codes, etc.

Also, the attorney will advise your loved ones on what they can do. For example, you need a power of attorney for many things.


Never, ever use a safe deposit box at banks:

https://www.nytimes.com/2019/07/19/business/safe-deposit-box...


>Never, ever

Despite the issues, there are still valid uses for a safe deposit box. I live in a highly fire-prone area and keep a backup drive with family photos and documents in a safe deposit box in a local place that won't burn when I do.


So long as you don’t expect the drive to be there when you retrieve it, sure. You should probably also encrypt any sensitive document on it. Here’s one example. Just one:

https://abc7news.com/archive/8973198/

Note the police are not classifying this as a criminal case (theft), but a civil case.


Can lawyers be trusted with this? Do they also properly manage their own death and other events? I don't have any experience and I'm genuinely curious how all this works.


> - Banks fubar safe deposit boxes all of the time, in a variety of ways.

Which is why you need to put a tamper-proof box INSIDE a security box at the bank. The key to that box will be in your house, far away from bank personnel.


excellent point!


I wrote mine down and put it in an envelope containing a few other secrets in a small fire-resistant, waterproof safe which my wife knows how to open.


Which safe did you get?


Fire safes are so shitty you'd probably be better off buying a small one to keep your documents/backups in and then a larger one to put that safe in for double insulation.


It is a small portable one; more akin to an outrageously bulky, heavy, awkward briefcase than a bank vault.


So basically you bet everything on eternal love.


I publish or write down most of the things I want to keep in that scenario.


yeah, I've been thinking a lot about it... Shamir Secret Sharing and splitting the shares in a way that makes sense to me gives me peace of mind.

I even wrote a trivial console app to let my wife restore my secrets if I were to drop dead tonight.


I was once locked out of some pretty important accounts while traveling overseas. Ever since then, I've been thinking about the importance of being able to "shard" both secrets and authority.

If I were to be imprisoned, for example, I might want my lawyer and family to be able to access all of my emails from two years ago up to one week ago. If I were to suddenly die, I would want my family to have full access to all of my accounts, with little hassle.

I would like to be able to tell my email provider (through my account settings) that if at least two people out of each of these three groups agree that such-and-such condition has been met, then these people will be granted this sort of access. The process would notify the other members of the groups I defined and have a delay to allow some kind of veto/vote if there is any disagreement. It may be a bit fiddly, but if a standard were defined for how the interaction works from a user's perspective (including steps to make sure you understand the consequences of how you've configured it), at least it could work consistently across all kinds of accounts.
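The approval policy described above — at least two members of each group must agree — reduces to a small predicate. A hypothetical sketch (the groups and names are made up for illustration):

```python
def access_granted(approvals, groups, per_group=2):
    """Grant access only when at least `per_group` members of
    every group have approved the request."""
    return all(len(group & approvals) >= per_group for group in groups)

# Hypothetical groups: family, friends, attorneys.
groups = [
    {"alice", "bob", "carol"},
    {"dan", "erin", "frank"},
    {"lawyer_g", "lawyer_h", "lawyer_i"},
]

# Two approvals from each group: access is released.
assert access_granted({"alice", "bob", "dan", "erin", "lawyer_g", "lawyer_h"}, groups)
# Only one family member approved: the request stays blocked.
assert not access_granted({"alice", "dan", "erin", "lawyer_g", "lawyer_h"}, groups)
```

The notification-and-veto delay would wrap around a check like this; the hard part is getting providers to agree on the standard, not the logic itself.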


here you can see how I'm using Shamir Secret Sharing, I gave clear instructions on how to use the shares and in what circumstances.

based on their dynamics, I'm feeling pretty good. I know I have some people there that are tech savvy + some that will take good care of their shares and when they should send those to whom.

Implementation is trivial (especially if you find a library) but maybe you can be inspired by my plan https://g3rv4.com/2022/04/a-plan-for-my-secrets


I was wondering, how much do you trust these tools? Cryptography can be extremely tricky to implement. For example, has the tool been checked for side-channel attacks? Has it had any other audits? (On the GitHub page of the library, it says it's no longer maintained)


I trust it enough to solve my DR scenario. If I were targeted by the NSA, I wouldn't expect this to keep me safe.

I don't think SecretSharingDotNet has had any audits, and I'm pretty sure it hasn't been checked for side-channel attacks. I couldn't find anything on GitHub [1] saying it's no longer maintained, though.

I'm pretty sure a well-funded attacker would be able to hack me, but I think it's orders of magnitude more likely that I'll forget my master password, get robbed, have my apartment catch fire, or just die. Those are the scenarios I'm preparing for.

[1]: https://github.com/shinji-san/SecretSharingDotNet


Dark Crystal [https://darkcrystal.pw/] does Shamir Secret Sharing over several protocols.


All of my passwords are in the "pass" command line utility, where they're encrypted with gpg. I added my brother's gpg key as an encryption target, and his ssh key onto the server where the git repo is stored, locked down to the git shell command. In the event of my untimely demise, my wife tells him the url of the git repo.


I personally wouldn’t go to the extent of using CLI tools, as my next of kin and family members aren’t at all technical. A printout of my 1Password emergency kit in a safe deposit box is probably doable, but then what - 598 passwords to projects on an old git repo on an ancient Synology NAS, or a throwaway account for some random website?

There is probably a lot to be said to curate your accounts to assist those sifting through your estate.

The ability to pass on your information legacy is important, and complicated. The trope of your mother going through her mother’s papers and finding a long-lost love letter - or an unfinished manuscript - is equally plausible today. What secrets lurk in your DMs, Messenger and Signal history? Does your draft blog post actually contain some amazingly insightful observation?

Maybe your family’s memory of you could be enriched with this information? …maybe not?

At the end of (your) day(s), you might take those secrets to your grave, and it’s unlikely that your tombstone will include your GUID, or the Glacier storage URI where your online self will remain until the TOS states otherwise.

REST In Blob

EDIT: RAM-mento Moar-i(sorry, got carried away.. couldn’t help myself :)


>What secrets lurk in your DMs, Messenger and Signal history? Does your draft blog post actually contain some amazingly insightful observation?

Things that need to stay secret. That's why they are secrets. If my passing means that these things are no longer accessible to anyone ever again? Perfect. Works as intended.


> If you forgot your password manager master password?

Short of brain damage, I don't think that would ever happen.

It would be a hassle for my family if I died, though. I'm young, but I should still get that scenario worked out.


> Short of brain damage, I don't think that would ever happen.

I once forgot my phone's unlock pattern. The very same one that I had used daily for years. I'm not someone that normally has memory problems, but I guess a few synapses just refused to do their job for some reason. I actually had an ex tell it to me; otherwise the phone would be bricked (I tried recalling it for basically 2-3 days). Now I have it and the master password for my password database written down and given to a person that I trust.

Kind of a silly and worrying situation, so it helps to have contingencies for even cases like that. One might worry about Alzheimer's and whatnot after a situation like that, but even healthy "HDDs" occasionally get "bad sectors". Of course, there have also been cases where I forget something that was almost a subconscious memory (e.g. muscle memory) just to remember it a while later.

For context: I'm in the 20-30 age bracket, with no other memory problems or history of memory problems in my family tree.


> Short of brain damage, I don't think that would ever happen.

Funny you say that. A year ago I went outside to break up a domestic violence situation. I woke up later face down with a brick next to my head. Due to the concussion I forgot my phone's password and that of my ATM card. It took me six months to remember them, although by then I had replaced both.

Shit happens.


When I am on a longer vacation and do not use a certain password, I struggle to remember it.

If I were chucked into a prison and let out after several years with no computer use in between, I would likely forget all my passwords in the meantime. No brain damage needed, just disuse.


With a death certificate, you don't need passwords, or even account numbers, to access savings and accounts at financial businesses.

It helps, of course, to have account info, but just knowing the place of business is typically enough.

For clarity, living people lose account numbers and access all the time. The death cert. gives you this same power.


Not for, eg, lastpass.

Your master password is the key that decrypts your password vault.

Some sort of escrow would be good, that unlocks a document with access instructions upon receipt of a valid death certificate.


Yes but lastpass contains your password to something like your bank account. You don’t need your password for your bank account if you have a death certificate is what the poster is saying.


It can contain passwords to much more, and other information besides.


Can you offer some examples, besides financial accounts, of things that a person would prevent others from accessing while alive and grant access upon death? In sifting through the list of things in my password manager, none of them (besides finances) seem to have this quality. Seems like anything that should be seen by family after death could be seen by them before death as well.


Social media accounts with private DMs are the first to pop to mind. Cloud storage like Dropbox/iCloud/Drive/etc. Lots of things, really, if you think about it for just a minute or so.


If someone DM'd me, they were probably expecting me to not share what they said. If I have stuff in cloud storage that could be useful to others, I'll share it now.

I'm sure there are use cases, but it's actually very hard for me to think of them, let alone in just a minute or so.


You can reset your lastpass password if you have access to any machine that was recently logged into it.


I forgot mine twice, after having used it for years. And no brain damage yet (at least I think so). Fortunately, in both cases it came back after a few days. The problem with a piece of paper: in case of brain damage, I will forget where that piece of paper is hidden ...


I actually forgot my master password last year - luckily I had my partner's 1Password set as a recovery. About 3 weeks later I remembered it


I wish Google would sell me a letter with my Gmail recovery passwords on a nice durable laminated card.


Or you could just print it up yourself. Why is it G's responsibility and not yours?


I'm not asking them to do it for free I'm like "please let me pay you for this".


Your printer is connected to the internet these days, and quite probably has backdoors ...


Pretty easy to disconnect a printer from the internet though....


Wrong, we need mainstream, stable quantum encryption to prevent cyber attacks.


The lack of per-application isolation with desktops is one of those ugly truths people try and sweep under the rug.

I foresee two potential solutions to this.

1) Run everything in a VM like Qubes (essentially nerfs certain application like 3D acceleration without major R&D)

2) Utilize some container runtime to provide isolation for legacy applications and stub out features such as filesystem calls so they do not need to be aware of its existence.

Microsoft tried to produce a crippled application runtime for Windows (UWP) with more security, but considering its lack of backwards compatibility and lesser feature-set it is not that surprising that adoption has been an uphill battle.


Apple will likely launch Armv9 CPUs (iDevice A16 and MacBook M2) this year. If they don't enable CCA and memory tagging, then we have to wait for Armv9 support in QEMU and a future Qualcomm SoC, https://www.anandtech.com/show/16584/arm-announces-armv9-arc...

> CCA introduces a new concept of dynamically created “realms”, which can be viewed as secured containerised execution environments that are completely opaque to the OS or hypervisor. The hypervisor would still exist, but be solely responsible for scheduling and resource allocation. The realms instead, would be managed by a new entity called the “realm manager”, which is supposed to be a new piece of code roughly 1/10th the size of a hypervisor.

> Applications within a realm would be able to “attest” a realm manager in order to determine that it can be trusted, which isn’t possible with say a traditional hypervisor. Arm didn’t go into more depth of what exactly creates this separation between the realms and the non-secure world of the OS and hypervisors, but it did sound like hardware backed address spaces which cannot interact with each other.


That's great! The processor's hypervisor-like firmware should handle task switching, page table manipulation, etc, and the OS kernel should use upcalls to the firmware instead of needing to have various special-case paths for various minor hardware variants. Had the x86 BIOS been a bit better designed (and a bit more performant), we likely would have seen OS kernels leaning much harder on firmware that shipped with the processor instead of having to make as many assumptions about the hardware and special-case checks.

Besides allowing for more easily isolated security domains, this allows things like (if properly designed) not needing to wait for kernel improvements to take advantage of more/wider vector registers or other changes that change the amount of processor state to serialize/deserialize when task switching.

The DEC Alpha AXP worked somewhat like this with its PALCode firmware. The Tru64 UNIX (and Linux, *BSD, etc.) and VMS kernels actually were unable to execute the privileged CPU instructions. The OS kernel needed to make upcalls to the PALCode, which then could use privileged instructions and could see model-specific registers, etc. The PALCode version used for Tru64 emulated two protection rings, and the PALCode version used with VMS emulated more (I think 4) rings of protection by just keeping an extra integer around for each task, and using that to determine which tasks could currently make which upcalls. One could (and probably should) extend this ring emulation to a bit vector of per-task revokable capabilities that could be passed to child tasks/processes/threads.

Hopefully we see something like this for RISC-V, using seL4 for the "realm manager". This would probably require an extra userspace driver process running to intermediate realm setup and manipulation, but wouldn't be in the critical path for system calls or other userspace drivers.

We're already running hypervisors in so many places that it makes sense to run a formally verified separation kernel everywhere, and run hypervisors and OS kernels as userspace daemons. This avoids the hypervisor needing to emulate hardware as an ad-hoc upcall mechanism, and instead simplifies both the hypervisor and the OS kernel. The overhead of modern microkernels is so low that your cell phone's baseband processor is likely running an L4 microkernel.

It's called paravirtualization when the OS kernel is modified to use upcalls to the hypervisor instead of trying to perform privileged operations that will be trapped (and then emulated) by the hypervisor. Paravirtualization improves VM performance and potentially sidesteps hypervisor emulation bugs, but it would simplify the kernel (and potentially make it easier to optimize) if OS kernels ran paravirtualized even when there is one guest OS per physical computer.

Edit: Of course, there's a small performance hit in the single guest OS case, but if that's the common code path, presumably both hardware and the kernels could be better optimized. Also, if you're supporting OS-opaque realms, you're already paying this hypervisor cost all the time anyway.


Apple actually ships a proprietary ARM extension for lateral exception levels to help enforce kernel integrity, which includes gating access to code that fiddles with page tables.


That only works if you are running a trusted OS atop that hardware, and you simply cannot trust proprietary software. Apple do not permit iOS users to run any software they desire, and attempted to run spyware on all iOS devices; I think one cannot trust them not to have a backdoor in any 'secure' execution environment.

Now, if Linux or OpenBSD released support for that hardware, you might be able to trust it.


That's a ridiculous take. Are you personally going to go through each line of that open source code, or are you just going to trust that someone else did?


I think that the browser is going to eat the desktop/OS and that most apps will eventually be browser-based.

PWAs are the initial movement in that direction. As browser APIs expand and support more use-cases through WebAssembly, WebGPU, native filesystem APIs, etc. more and more apps that were primarily or only available as native can be supported in the browser.

I know that many people hate web apps because they're often slow, clunky, bloated, etc. but a lot of that is changing as the frontend ecosystem embraces new and more efficient frameworks and technologies. The browser provides everything one needs to build fast and responsive applications - It's an issue with incentives and culture more than anything to do with the fundamental tech.


This puts 100% of the trust on shared, high value server farms controlled by organizations that have a financial incentive to misuse people's data.

Edit: I guess that meshes with the article title; assume other people's servers are compromised too.


Will JavaScript-based web apps ever run as smoothly as FamiTracker (and to some degree Telegram Desktop) on a Core 2 Duo machine, or take up negligible memory (so you won't have to close some apps to start others) on today's 4-8GB machines?

But then again, modern native apps are dog slow on old hardware; Visual Studio 2022 would hang for over 10 seconds at a time, but it's arguably excusable since it doesn't support running on Windows 7 which I was doing.



I do all my monetary transactions on an OpenBSD desktop with their Chrome port.

This version of Chrome is a bit old (v93) but it is built with pledge().

I am running it on an older Core 2 Quad Q9550 where I have been able to completely remove the Intel ME malware (I posted the wiped bios elsewhere).

I hope that this is enough.


Fuschia from Google also looks to have a very good solution to this problem but is probably still a couple of years away.


They should really pick a name that's easier to spell...

https://en.wikipedia.org/wiki/Fuchsia_(operating_system)


I just spell it the rude word it shares the first three letters with, swap out the k with an h, and then add "sia" to the end.


I should stop posting while drinking wine too it seems :)


I don't think it's so much "ugly truths people try and sweep under the rug" as "we do not yet appear to have a practical way to actually do anything about it without vastly reducing the usefulness of the system". There are ways to improve things a bit with your choice of sandboxing tech, but those are frequently either ineffective (oh good, an attacker who compromises one app can only get to my bank account, but not my SSH keys), high-friction (flatpak portals are cool so long as you don't mind manually approving all file access), or both.


> flatpak portals are cool so long as you don't mind manually approving all file access

And by “manually approving all file access” you mean “opening the file in the file picker like normal”, right? There are some apps where using a file picker at all is awkward, but I’d argue in most applications it’s basically what you’d do anyway. Certainly most applications that non-developers would use.

The bigger problem is that lots of Flatpak applications still don’t use portals.


I remember my first time using a photo application in a flatpak, I had no idea how to get images it saved to a place I could then upload with my browser. It was rather frustrating.


Worth noting that Chromium upstream has support for the portals now, so this should be mostly fixed for that specific scenario.


> 2) Utilize some container runtime to provide isolation for legacy applications and stub out features such as filesystem calls so they do not need to be aware of its existence.

I've been thinking about doing this on my laptop at work. With a bit of thought, it shouldn't be too hard to run for example software compilation or in fact most CLI/TUI tools using a minimal disk and network namespace using systemd or a container runtime.

Practically, this would allow me to put e.g. the ever-beloved NPM into a mount namespace where ~/.ssh, or even the .git of the repo it is in, just doesn't exist, and a network namespace in which the company VPN doesn't exist (or which just has a route for the NPM repository host). This can also be used to label the process using SELinux or AppArmor as a second line of defense against, and possibly after, an escape of something bad.

However, time hasn't been available for this so far. And no, it wouldn't be end-user-friendly.
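
To make the idea concrete, here is a minimal sketch with bubblewrap (assuming bwrap is installed; the bind paths are assumptions about a typical distro layout, and the function name is made up):

```shell
# Run npm with a throwaway $HOME and no network: ~/.ssh simply doesn't
# exist inside, and only the current project directory is writable.
sandbox_npm() {
  bwrap --unshare-net \
        --ro-bind /usr /usr --ro-bind /bin /bin \
        --ro-bind /lib /lib --ro-bind /lib64 /lib64 \
        --proc /proc --dev /dev \
        --tmpfs "$HOME" \
        --bind "$PWD" "$PWD" --chdir "$PWD" \
        npm "$@"
}
# usage: sandbox_npm run build
```

With `--unshare-net` even `npm install` can't phone home, so in practice you'd drop that flag (or proxy only the registry host) for installs and keep it for builds.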


Per-application isolation sounds completely unworkable for a developer.

Maybe per-customer isolation, or per-usecase isolation.

Isolating customer work (or use case like "production deployment") into separate UNIX user accounts works fairly reasonably.


Does per-application isolation actually stop local privilege escalation in practice on any popular operating system?

Even hypervisors routinely have security issues. How often does qubes sandbox get broken by a zero day?


> How often does qubes sandbox get broken by a zero day?

The last time hardware virtualization (which Qubes uses) was broken was in 2006, and it was done by the Qubes founder: https://en.wikipedia.org/wiki/Blue_Pill_(software).

See also: https://www.qubes-os.org/security/xsa/.


Something like fsverity could be a decent half solution. https://fedoraproject.org/wiki/Changes/FsVerityRPM

Of course you won't have that layer of isolation if something becomes compromised, but it should make it harder for malicious code to persist on your system without you knowing.


Does chrome OS do this?


Chrome OS has sandboxed developer environments, which is pretty neat: https://www.youtube.com/watch?v=pRlh8LX4kQI


For using normal Linux applications, Chrome OS uses Linux Containers (LXD) running inside a VM.


FireJail does at least some isolation


Or, just don't install viruses.

On Linux I'm sure some AppArmor or flatpak whatever will be the norm one day, once all the kinks are worked out... but for now it seems to work surprisingly well to just not install stuff that isn't popular and trusted.


Maintainers of popular, trusted projects can get compromised. Hackers steal their publishing tokens and then publish a new, malicious version.


When did this last happen with Debian or Ubuntu? (which actively vet contributors, at least compared to pip and npm)


It's not perfect, but it's still pretty good. Plus, if you update manually every few days and read tech news all the time, most malware will probably be discovered before you get it.


That's why I never update anything taps brain


I struggle a lot with this.

Secure isn't a binary state, it's a spectrum.

At the same time, what is my risk model? Are my NSFW activities THAT interesting? What about my personal notes that contain health details?

I keep an inventory of stuff in my home. Is that ok to keep in Dropbox? Sure the government can access it.. but even if a remote attacker does, is that useful to them?

And of course, as things get more secure they become less accessible. My "very secure" documents archive almost never gets updated.. cuz it's a pain to update it. My daily notes are just chucked in dropbox and get updated all day long...


The way I see it, if you're not a hot target then it's less about the individual and more about sweeping up millions of individuals' data for bulk selling.

What the buyers can actually do with a million dropbox contents I'm not sure. But it's obviously better not to let that happen.

Best defense is the same as securing your home: Don't be the easy target on your block. Even just the bare minimum on all your sites (2FA, good password system, anti-virus on your computer) will stop you from being low hanging fruit.

I'd also love to hear anyone knowledgeable in this area to chime in!


So, the question is why current systems are architected to make an unnecessary tradeoff between privacy/security and convenience, and then how to make something that's competitive with current systems, and doesn't make that tradeoff.


It's not really an unnecessary tradeoff but rather a very natural one. Convenient means easy to access, and easy to access means insecure. Of course, what people really want from convenience is ease of access for only themselves, but creating this notion of "you" seems to be the hard part.


Sandboxing without any overhead is pretty accessible on any Linux distro, so I don't see why QubesOS should be the go-to choice. For example, I use firejail for all internet facing applications that I use (also for Wine and any proprietary software): https://wiki.archlinux.org/title/firejail#Using_Firejail_by_... The setup takes just a couple of minutes.


I routinely prune /home of sensitive info, and often move secrets into a Cryptomator or VeraCrypt vault. I also compartmentalize my workflow: one compartment for NSFW stuff, another for work, another for playing games, and the list goes on... I do this because a compromise of one compartment does not mean a compromise of my entire system. Virtual machines are great for this, alongside Chrome/Firefox profiles for different things. How you slice and dice up your own system(s) is entirely up to you.


How do you compartmentalize?

With VMs or setting up different profiles?


I think there's varying degrees depending on what your goal is. Besides VMs you could have a different user login for each compartment which means things like browser profiles, shell history and other things will get their own settings.

I record a lot of videos and wrote a little script[0] to help backup and restore my shell history to avoid auto-complete and CTRL + r searches from showing sensitive info (client work, etc.) while recording. I only use one browser for recording which has its own history too.

For my use case that's enough separation, for others it might not be. For example I still need to be careful about running commands like `docker image ls` on video because it has potential to show client work. I just remember to black out sensitive info during editing if it happens to come up.

[0]: https://github.com/nickjj/dotfiles/blob/0076e508403c9981e393...
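
For anyone curious, the core of such a script can be as simple as swapping the history file in and out (a minimal sketch; the backup filename and function names are made up here):

```shell
# Stash the real shell history before recording, restore it afterwards.
hide_history() {
  cp "$HOME/.bash_history" "$HOME/.bash_history.real"  # keep the real one safe
  : > "$HOME/.bash_history"                            # record with a clean slate
  history -c 2>/dev/null || true                       # clear the in-memory list too
}

restore_history() {
  cp "$HOME/.bash_history.real" "$HOME/.bash_history"
}
```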


While true on a theoretical level this is largely impractical. To quote House

>Cuddy: "How is it that you always assume you're right?"

>House: "I don't, I just find it hard to operate on the opposite assumption."

If you're on a personal desktop at home you've got to place some level of trust in it.

Same with local LAN.

Once you get to more sophisticated server microservices then you can start thinking of the various components as mutually untrusted (until proven otherwise)


> Same with local LAN.

Why? I agree that you have to trust something in order to function, but I would think you could distrust the LAN pretty easily, at least for certain levels of internal service. That is, it might be a struggle to distrust the LAN if you need, say, NFS, or HTTP without an internal domain name (to get certs), or maybe some games? But if all you need is internet access you could fully block internal connections; if you need some access you can probably rely purely on SSH; and failing all else you could run WireGuard or such and force everything over that.


> If you’re not a cyber criminal or don’t have a lot of crypto to steal, this will probably never happen to you…

This is a misunderstanding of the threat in two ways.

First, malware is not purely, or even primarily, a targeted threat. It's actually a shockingly easy attack to scale, and by far most victims are not any kind of high-profile target. They are either unsophisticated or careless computer users who installed something they shouldn't have. And the thing is, from most malware authors' perspective it doesn't matter much whom they compromise. All victims can be monetised to some extent, and there is an elaborate ecosystem to make sure that monetisation happens in practice, not just in theory.

Second, the list of high value targets is definitely not limited to criminals and cryptocurrency owners. They might be the only people for whom the risk model is specifically the theft of a key file from the local disk.

But you know what else is a file on the local disk? The browser cookie jar, full of bearer tokens granting access to all your online services. Have a short Instagram name? An established but not particularly popular YouTube channel? Do your banking online? Have an account on Steam with some bought games? All of that is worth money to an attacker, and them realising that value will hurt you.

As for what to do about it? Hardware crypto is the technical answer, but it will take ages to move the ecosystem there. Until then, segregate the things whose compromise would be really harmful onto devices separate from your day-to-day ones, ideally devices that are actively supported and have a good security model (e.g. an iPad or Chromebook).


The workstations of software developers and system/network/CI/AD admins are also high value targets for supply chain attacks.


I contemplated building an airgapped secret machine that could only communicate data with outside machines via qr codes and a webcam.

The main reason to do this isn't that the airgapped computer isn't compromised, but that even if it is, I could monitor all data moving in and out of it.

Even a USB drive passed back and forth could secretly transfer data I don't know about. Secret data is so small compared to the size of modern storage that data could easily hide in too many places.

Is that system a little paranoid? Maybe, but I haven't fully trusted any computer since heartbleed.
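
The plumbing for that is mostly coreutils: chunk the payload, checksum every chunk, and each chunk is small enough to render as a QR code (e.g. with qrencode, if available). A sketch, with made-up function and file names:

```shell
# Split a secret into QR-sized chunks and checksum each one, so you can
# eyeball exactly how much data crosses the gap.
prepare_chunks() {
  base64 "$1" | split -b 512 - chunk.   # ~512 bytes per chunk / QR code
  sha256sum chunk.* > chunks.sha256     # manifest to verify on the other side
}

# On the receiving machine, after scanning the codes back into files:
reassemble() {
  sha256sum -c chunks.sha256 &&
  cat chunk.* | base64 -d > "$1"
}
```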



Wow, that article has a huge list of "data exfiltration channels" that are possible attack vectors.

This is an interesting problem, and I hope that I'm somehow able to trust again.


Honestly the only 100% secure way to store something is to write it down on physical paper and then guard that.


The problem with that is usability. If I need to sign a transaction, that security is lost the moment the key is entered into the signing computer.

Proximity seems to be key to most of these attacks, so maybe physically excluding any possible eavesdroppers, and adding noise sources would create a shell equivalent to guarding that piece of paper.

I also anticipate gathering old/very limited electronics that can be visually inspected or don't have extra capacity to run malicious code to allow auditing the mechanisms of computation.


If only there were a method used daily in industry to move data back and forth from secure systems in a write-once fashion. Hear me out - you could construct some sort of polycarbonate disc that would contain a substrate. You could then permanently encode your data onto this substrate - so it couldn't be changed - with a "laser" perhaps. Then said disc could be read on another machine without worry of sneaky things hiding in your USB. On second thought that sounds way too complicated and I'd probably stick with the QR code-camera thing.


Ok, the key advantage is being able to visually see the amount of data being transmitted. With a CD, how do you know an extra kilobyte of data didn't hitch a ride on your disc?

You do realize that viruses existed before networks, right? Your "method used daily in industry" can very easily carry an unwanted payload.

I'm trying to explore the intersection of high security and utility.


Air-gapping really does seem like the only option for true security. It also makes it quite difficult to do anything of use with the machine. Since you can't control the supply-chain you should assume that the air-gapped machine is malicious/compromised and the only protection you have is the air-gap.

Thus any USB used to transfer data/software to the air-gapped machine should be destroyed immediately afterwards and you should probably use something like pen & paper as your only allowed output method.


Air gapping doesn't really work these days. The machine could make noises (even just via its capacitors) to transmit data, or cause voltage fluctuations that something else could read.

I can imagine bootstrapping a system with trusted hardware (assuming you could get it) by typing in a bootloader + SHA implementation by hand, then using a narrow hardware interface to copy a trustworthy, audited operating system kernel (assuming that also existed) from some other host. The bootloader could check the SHA of that, and then bootstrap the system.
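
The final hash check is the easy part once you trust sha256sum itself; demonstrated here with a stand-in file (in reality it would be the copied kernel image):

```shell
cd "$(mktemp -d)"
printf 'pretend kernel image' > vmlinuz   # stand-in for the copied kernel
sha256sum vmlinuz > vmlinuz.sha256        # digest recorded on the trusted side
sha256sum -c vmlinuz.sha256               # re-checked after crossing the narrow link
# prints: vmlinuz: OK
```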


You can't trust the compiler, so you would have to type it in as assembly, and even that is questionable since huge amounts of the hardware's microcode are now reprogrammable.

I still think air-gapping works; it's just that you need a pretty large air gap. Turn on the shower, fire up the microwave, move around, and hit some incorrect keys with lots of deleting when entering passwords.


> use something like pen & paper

Stainless steel and stamp/engraver


Bought a desktop recently. Thought about setting up verified boot and disk encryption, but...

[M]y biggest takeaway was that all of this was quite complicated and did not really have anything to do with what I bought this system for. So I decided to throw in the towel and flip SecureBoot off.

(See last section here, on trust): https://cameronnemo.gitlab.io/posts/lagomorpha/


SecureBoot was as much an attempt by Microsoft to put in place a method to lock out other OSes as it was for local physical security. Most consumer desktop and laptop computer users don't password protect their BIOS/EFI, so an attacker with physical access can simply turn off SecureBoot themselves and then boot whatever toolkit they want to take over the system.


If this is the kind of protection you want, you should be running everything in separate VMs and containers. Preferably you would run the hypervisor (perhaps a hardened bare-metal hypervisor) on your server and remotely connect to the instances from your client. The client is solely used for connecting to those instances.

The hypervisor itself will need to be well protected, and you do not want it accessible from your client or from the VMs and containers - use a separate NIC or VLAN. This is the reason why you want a separate server - the only things the client will see are the shared containers. Let's assume here that VMs and containers are secure - if they aren't, you can replicate this with separate physical machines.

On the server you can set firewall rules to control access between the different containers. Network storage etc. can also be setup for the containers that need it, with different permissions depending on the situation.

Depending on the stuff you are running, you may want to go the VDI or SSH route. There are also other options like Xpra, etc., depending on your requirements. The more segmentation you do (i.e. one VM for the dev environment of a specific app, another for chat and email, etc.), the more your security will increase, at the cost of usability.

I personally do this in a limited fashion (I have secure workstations and VDIs for handling private/financial information), but do not go the full route of separating everything out for day-to-day computing.


Or just use Qubes OS


It's simple as a system engineer - nobody else can compromise your machine because it's already too broken for even you to use it.


People on Hacker News love discussing how to secure their information, but it's not clear if they have anything worth stealing.


Identities and financial information are worth stealing.

People, before you go nuts securing your computers, routers and phones, talk to your doctor's office about adding additional security to your medical records, and freeze or add fraud alerts to your credit reports - including NCTUE, which I had never heard of until someone walked into Verizon and walked out with 4 unlocked iPhones after opening a new account in my name.


If it's worth stealing, then it's unlikely that they'd talk about it on the open Internet just to justify their security scheme to some skeptic commenter on HN.


A compromise of a device used to maintain a web site or a software package or product is a big step towards compromising the site, package or product itself.


My 20 million SHIB tokens must be protected until they reach the moon.


I thought this article was going to be about what people do when they assume their devices are compromised and yet realize it is too difficult to rebuild everything (which would just get compromised again, or the devices are compromised by design), so they alter their behavior to deal with the knowledge that the devices are compromised.

That would mean not keeping your most important secrets on any device and altering your behavior on all devices: assuming that communications are being viewed by someone, assuming location tracking is happening whenever your phone is on, and so on, which leads to some rational adjustments. I think we're already there for anyone who has been paying attention (do you take your fitness tracker off when having sex? Do you try to manage when location services are turned on?).

It's been called "the chilling effect" when applied to free speech; maybe we need a new term for the same effect applied to behavior on devices. I'd guess that one consequence is reduced productivity in all digital aspects of life, because people can't take full advantage of their digitally enhanced lives. Another would be an increased level of chronic stress from worrying about being tracked, which everyone knows is ubiquitous. Maybe that should be the new name for IoT: "Ubiquitous Tracking". It all started with the "mother of all demos".


While it's quite likely that a device I own will be compromised at some point, it's less likely for everything to be compromised at once. My phone can access some backends, my laptop can access some others. Full backups are accessible from either. 2-factor authentication makes compromise of all accounts less likely.

It should be possible, for someone who wants a very low chance of losing all their data, to remember 2 or 3 passphrases and compartmentalize access to servers and backups such that most backups are pull instead of push (or have restricted permissions à la 'zfs allow') and compromising everything requires attacking multiple platforms all at once.
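
For the pull side, a restricted key on the machine being backed up keeps the backup host's access read-only. A sketch (rrsync ships with rsync; the path and key below are placeholders):

```
# ~/.ssh/authorized_keys on the source machine
restrict,command="rrsync -ro /home/me" ssh-ed25519 AAAA... backup-puller
```

The backup host then runs something like `rsync -a source:data/ /backups/data/` on its own schedule; even if the backup host is later compromised, that key can only read the one subtree.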

Make sure it's possible to access everything starting from fresh installs on fresh hardware; once it's clear that one device has been compromised it's best policy to begin fresh on all devices as soon as possible and then start restoring from backups. Have some offline backups.

To be fair, convenience trumps some of these guidelines. Security is hard and only organizations can achieve a high level of resilience since brain backups don't exist yet.


For workloads which don't require persistence (e.g. web browsing), you can boot a PC from an external storage device with write-blocking firmware. Kanguru sells flash, SATA and NVMe drives with a physical write-protect switch.

Are there good tools for anomaly/intrusion detection on Linux? Even something as simple as comparing current resource usage with a baseline record of disk/network/CPU utilization.
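
Nothing turnkey comes to mind beyond AIDE/Tripwire for files, but a poor man's baseline check is a few lines of shell (a sketch; what you snapshot, and the baseline path, are up to you):

```shell
# Record a baseline of running process names and listening sockets,
# then diff the current state against it later.
snapshot() {
  ps -eo comm= 2>/dev/null | sort -u   # names of running processes
  ss -tln 2>/dev/null | sort           # listening TCP sockets, if ss is installed
}

snapshot > /var/tmp/baseline.txt       # run once, on a known-good day

check_baseline() {
  snapshot | diff -u /var/tmp/baseline.txt - || echo "deviation from baseline!"
}
```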


QubesOS is obviously right and feels like home. Next we need a wave of great QubesOS-targetting hardware.

And an equivalent for phones, too.


> And an equivalent for phones, too.

Or to stop putting so much trust in them, go back to "using them for communication with other people directly," and assume they're being evil, because if they're not at the OS level, then some app on them is.

I know they're convenient, I've had a smartphone for years, and have in the past 6 months or so gone back to a flip phone, on which the most interesting thing is a halfway complete contacts list and some regularly pruned text message threads (regularly pruned because whatever KaiOS uses for an SMS database gets slow if you don't do that).


Add network isolation to your defense in depth strategy. Close all link listeners and inbound firewall ports. Open authorized-only, outbound-only, ephemeral sessions.
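
In nftables terms, the inbound half of that is only a few lines (a sketch of the policy, not a drop-in config):

```
# /etc/nftables.conf sketch: drop everything inbound, allow replies
# to sessions this host opened itself.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
  }
  chain output {
    type filter hook output priority 0; policy accept;
  }
}
```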


DNS and HTTPS are wide-open ports. Would be nice to have a subscription service that maps popular web services to known-good destination IP address ranges for firewall rules.

Is Suricata a good option for network intrusion detection?


You can somewhat mitigate that with something like Little Snitch (macOS) / OpenSnitch (Linux). I'm quite sure I've heard of something similar on Windows, but I can't recall what it was. But I think the included Windows Firewall is able to do per-application filtering, although I don't know if it's dynamic.

The main advantage over a classic "filter firewall" is that it's able to work at the app level.

Forbidding outgoing HTTPS traffic is going to be painful if you regularly use a browser. But that doesn't mean random_local_only.app should be able to reach anything on the internet.


I like that Suricata is open source but haven't used it. You can close all your inbound ports and link listeners, and allow only authorized outbound sessions - e.g. locally resolved DNS and outbound-only HTTPS.


air gapped tails running on a 1999 x86 box ftw

https://tails.boum.org/


To that point, how many people run browser proxies in the cloud to obfuscate their location and minimize the blast radius if compromised?

One could filter much of the crapology somewhere safe, and then have a relatively tidy local browsing experience.

I'm too busy to take this idea past the handwaving stage, but it seems like someone should have already done the homework.


I run the browser under different OS user accounts, and make my display manager display the user account in the window title bar.

https://megous.com/dl/tmp/8eaa15e187fa9a2e.png (ff1 being the user)

I trust it more than browser's internal isolation solutions, like tab containers, which I like to use for other things, like testing web apps using different login sessions at once.

I'd hate a remote solution. ;) Though this is not for privacy but more for protection and better low effort isolation.
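
The launcher side of this is tiny (a sketch; it assumes the dedicated user, e.g. "ff1", already exists and your display server permits it):

```shell
# Run a browser as an isolated OS user; on X11 you'd first allow it with
# e.g. `xhost +SI:localuser:ff1`.
browse_as() {
  sudo -u "$1" -H firefox --no-remote
}
# usage: browse_as ff1
```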


Huh, I guess that, combined with a reasonable umask, would be a neat idea.

Does drag and drop work between windows owned by different users?


How would a browser proxy do anything aside from hiding from tracking, and maybe a tiny bit of protection from sites that use browser exploits (not that it matters much, since the only relevant ones are 0-days that might not be in blocklists)?

It doesn't seem like it would do much against a device compromise.


Cloudflare (and others) are working on it: https://www.cloudflare.com/products/zero-trust/browser-isola...


That looks industrial strength, i.e., not necessarily packaged for individual sale.


> I'm too busy to take this idea past the handwaving stage, but it seems like someone should have already done the homework.

Indeed. https://www.mightyapp.com/


I wonder if they could use firecracker or something to reboot the browser (and nuke persistent/ephemeral storage) on each page load. They could even bounce each user across all their hosts by default.

For trusted sites (ones the user logged into), this could be disabled.


As for repository access, you could have one qube with GitHub access. Your dev qube would not have GitHub access, but it would have access to a private repository. Your GitHub qube can pull from the private repo and push to GitHub without ever running or installing any of the code.
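
The relay itself is just bare-repo fetch-and-push; nothing is checked out, so nothing can execute (a sketch; the remote names and URLs are placeholders):

```shell
# In the "github qube": mirror refs between the private repo and GitHub.
relay() {
  git -C "$HOME/relay.git" fetch internal '+refs/heads/*:refs/heads/*'
  git -C "$HOME/relay.git" push github --all
}
# one-time setup:
#   git init --bare ~/relay.git
#   git -C ~/relay.git remote add internal <private-repo-url>
#   git -C ~/relay.git remote add github git@github.com:you/project.git
```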


If I assumed all my devices are compromised, I would be getting no work done and instead would simply be formatting the devices over and over.

If you come to the conclusion that a device is compromised, the device should be wiped clean, if possible, or binned if not.


Tangentially related - how secure are password-protected rars compressed with, say, 7-Zip? Assuming that the password is long, secure, etc.? In the back of my head, password-protected rars don't seem all that secure.


Sends me back to Ken Thompson's paper, Reflections on Trusting Trust, from 1984.


I see a lot of trust in VMs, but jailbreaks exist, and in the future I anticipate more Spectre/Meltdown-type vulns, or even physics-based attacks like Rowhammer. Assuming my threat model is infinite, can I really trust VMs?


If your threat model is infinite can you trust anything?


The good news is that I think no one human is at risk of such an attack. The way I’d frame it: if you assume the NSA can break AES-256 in reasonable timeframes, or the CIA runs every single Tor node, that knowledge gets compartmentalized to such a degree that next to nobody knows, and attacks would only be used at the highest levels against the most significant state level threats. Hell, I wager they wouldn’t even risk parallel construction for fear that it’d tip their hand.


I have MACsec (L2) separation for my dev hosts and out-of-band keying with Nitrokeys, so lateral movement in my LAN cannot reach my dev station at the IP level. I don't trust my router's (Zyxel) Chinese firmware.


My home drive is constantly churning and I have no idea what is going on. And there is a lot of software on there - every apt update is a risk that some new supply-chain attack is on the way.


My golden policy is that anything I do on a digital device is public.

Obviously there are times you have to break that rule, but I find it takes a lot of the stress out of privacy issues for me.


Been doing this a long time. The device is the compromise.


Er, I’m confused. Is the point of this essay that endpoints might be compromised?

I mean, yeah? I agree, I guess? But also, that’s not an interesting observation, is it?


It runs counter to popular belief. The author, however, makes a good case for it going the way of "deleting cookies / using VPNs ... anonymizes you" soon. You get a while of "this is only theoretical" till one day it's common knowledge.

I blame complexity, btw. Burn it all down and we might be able to start over in a rather acceptable digital stone age.


Your title is SMART. I will read this now and comment no more.


I use a separate user account for games.


Air-gap your workstation.


xkcd made the same observation years ago: https://xkcd.com/1200/



