These are fun thought experiments, but I think having a personal Disaster Recovery plan is a far more applicable security exercise. What would you do if you lost your phone? If you were locked out of your Google account? If you forgot your password manager master password? If your home was destroyed in a fire? Having a secure plan for quickly recovering from these scenarios is more important than trying to keep state actors or cybergangs out of your system, unless you are a VIP.
My plan is just printed backup recovery codes for things that need 2FA, written-down master passwords, and normal hard disk backups.
The extra hidden part of the plan is that I try to avoid things that aren't traceable to a trusted human help desk. Anything that involves the words "manage your own private key" is a point of failure that needs a lot of care.
> The extra hidden part of the plan is that I try to avoid things that aren't traceable to a trusted human help desk. Anything that involves the words "manage your own private key" is a point of failure that needs a lot of care.
How do you handle your password manager (assuming you use one)?
I just wrote up instructions for accessing my backups for my wife and friend, and my critical accounts have dead-man switches to give my wife access to everything she would need.
Assuming what she needs is access to my email (yes) and the gigabytes of photos from my drunk college days (no).
The system I'd trust most is to have 2 attorneys given sealed envelopes with instructions to give them to your wife on your death. One attorney gets an envelope with the passwords to your accounts (or to your password manager) encrypted with a one-time pad. The other attorney gets an envelope with the key. Each envelope includes decryption instructions. If you're giving her a password to a password manager, make sure she has access to the latest password archive and the software to open it.
Tedious, yes. But fairly reliable and you don't have to place any trust in your attorneys at all, unless they find out who their counterpart is and start working together (very unlikely). If this system wouldn't work for you, you've probably got bigger problems than having to worry about your wife getting into your email after you die.
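For anyone wondering, the crypto side of this is trivial. A minimal Python sketch of the one-time-pad split (the two hex strings are what would go in the two envelopes; the function names are mine, not any standard tool):

```python
import secrets

def split_secret(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (ciphertext, pad); either one alone reveals nothing."""
    pad = secrets.token_bytes(len(plaintext))  # truly random, used exactly once
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def recover_secret(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

if __name__ == "__main__":
    secret = b"master password: correct horse battery staple"
    ct, pad = split_secret(secret)
    print("envelope for attorney A:", ct.hex())
    print("envelope for attorney B:", pad.hex())
    assert recover_secret(ct, pad) == secret
```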
And for those of us who aren't wealthy enough to have two different attorneys on retainer, giving your spouse the encrypted password store along with the key to a safe deposit box containing the decryption key would have to do.
Personally, my wife would wonder why I'm going to so much trouble to keep my passwords secret from her until I die; but then my personal password store is for services we share like banking, and any passwords she doesn't know are benign things like my email addresses and various website logins that wouldn't matter anyway when I die. Of course I also have my work passwords (I'm the IT manager), but my supervisor and the company owner each have a secured store of all of my work passwords as well, plus the master password to access them, in the event something happens to me (or I'm just on vacation for a week and temporarily unreachable when access is needed).
This is more of a thought experiment, but it would also serve as a decent backup in case of natural disaster, your house burning down, etc. As for safe deposit boxes, those can be closed out for nonpayment, bank branches shut down, etc. Not the best hands-off long-term solution. An attorney will let you know if there's an issue, because it's their ass on the line if they don't.
Last I paid attention, some jurisdictions make it very difficult to access the contents of a safe deposit box after one of the owners has died. (Supposedly to discourage cheating on inheritance taxes with gold coins, or ...)
A bit of research might be indicated before trusting this strategy to perform when needed.
For sure. I first focused on setting up a homelab and de-googling my phone and whatnot for privacy reasons, but it's nearly impossible to not be spied on. You can certainly reduce it. In the end I'm still doing this for data sovereignty more than for privacy now, but I don't mind the boost in (imperfect) privacy.
Or you can simply run an application level default-deny firewall and never use a web browser. You don't need to keep your software up-to-date in a homelab.
But then you can't access your data outside your local network. That may be acceptable depending on your use case, but at that point it's not a 1-to-1 alternative to cloud services.
So then you need to keep SSH or WireGuard up-to-date (at least in terms of security patches).
Also, are you going to SSH in every time you need to access a document from your phone? Again, use-cases differ, but that's not a 1-to-1 alternative to, say, Dropbox.
Yes, but you agree you need to apply security patches in that case, right?
Your original comment amounted to "you don't need to apply updates if you firewall everything", to which I replied "that's not a replacement for a cloud service". Your subsequent comments then amount to "well you can just poke a hole in your firewall for WireGuard". So which is it, do you need to apply updates (e.g. to WireGuard) or not?
I suppose you can maintain secure remote access if you run a very minimal WireGuard server on a low-power device like a Raspberry Pi running an updated/patched distro. You can still keep 99% of your gear behind it running without updates. That way the update churn is minimized.
I forgot my master password two days after setting it. I thought I had stored it someplace, and my guess is the phone shook after I typed the master key and the shake-to-undo removed characters. FML moment. Spent a week resetting passwords. I only set passwords once. We use Bitwarden. It was pure fuckery.
This is why I have so far avoided password managers: single point of failure. What if you forget your master password? Or worse, what if your master password is stolen?
Your Bitwarden vault is hosted and is only stored encrypted on your phone/laptop. So unless the thief knows your master password, you should be fine.
I debate this with myself often. Short of renting a security box and telling people I trust about it, I haven’t come up with a strategy for the master password. At the moment, I’ve resigned myself to the feeling that if I lose my memory, maybe it’ll be the opportunity for a fresh start, and so losing everything is a feature not a bug.
And if you can't update your blog, or can't pay for it because you lost your memory? The low-tech solutions can't be beat, especially if you expect others to help you pick up the pieces with minimal technical sophistication.
I wrote a tool for this[1,2], though it's still a work in progress (all the features work but I still need to finalise the QR data format and work on user-friendly interfaces).
I wish I had found your app before I wrote mine [1] :) You seem to be way better versed in cryptography than I am. What's the advantage of having the main document and the keys separated?
I wouldn't say I'm very well-versed in cryptography. The reason they're separated is that it allows you to:
* Further split up the trust such that the key shards can be held by one group but they don't have access to the document (maybe you keep a copy of the document with a lawyer but distribute the keys among your friends and family so that if your lawyer is hacked or bribed they can't reveal the secrets, same goes for if your friends conspire against you).
* Make the shards small, independent of the document size, so that they're always practical for friends to store even if you have a very large document to save.
* Do a quorum expansion (create new shards that are compatible with the existing shards) without revealing the secret.
To be fair, for practical uses this is not super necessary but it adds flexibility without losing anything in return (I would argue the quorum expansion point is actually a useful feature).
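For the curious, the overall shape of that design in a toy Python sketch - a textbook Shamir split over a prime field, shown for illustration only (the actual encryption of the document under the key is elided, and a real tool should use a vetted library rather than this):

```python
import secrets

PRIME = 2**521 - 1  # Mersenne prime, comfortably larger than a 256-bit key

def split(secret: int, threshold: int, n_shares: int) -> list[tuple[int, int]]:
    """Shamir split: any `threshold` of the returned points recover `secret`."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def combine(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for xj, yj in points:
        num = den = 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

# The document itself is encrypted under a random 256-bit key; only the
# *key* is sharded, so shares stay tiny no matter how big the document is.
key = secrets.token_bytes(32)
shares = split(int.from_bytes(key, "big"), threshold=3, n_shares=5)
recovered = combine(shares[:3]).to_bytes(32, "big")
assert recovered == key
```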
Bitwarden (premium) has a really nice feature of granting access to an emergency contact after a prespecified wait time [1]. This can be used in any emergency situation.
Just send four trusted family members half of the passphrase each in a sealed envelope and tell them what it's for. If your family has a lawyer or a safe deposit box, trusting that instead is a 1000x better option.
Despite the issues, there are still valid uses for a safe deposit box. I live in a highly fire-prone area and keep a backup drive with family photos and documents in a safe deposit box in a local place that won't burn when I do.
So long as you don't expect the drive to be there when you retrieve it, sure. You should probably also encrypt any sensitive documents on it. Here's one example. Just one:
Can lawyers be trusted with this? Do they also properly manage their own death and other events? I don't have any experience and I'm genuinely curious how all this works.
Fire safes are so shitty you'd probably be better off buying a small one to keep your documents/backups in and then a larger one to put that safe in for double insulation.
I was once locked out of some pretty important accounts while traveling overseas. Ever since then, I've been thinking about the importance of being able to "shard" both secrets and authority.
If I were to be imprisoned, for example, I might want my lawyer and family to be able to access all of my emails from two years ago up to one week ago. If I were to suddenly die, I would want my family to have full access to all of my accounts, with little hassle.
I would like to be able to tell my email provider (through my account settings) that if at least two people out of each of these three groups agree that such-and-such condition has been met, then these people will be granted this sort of access. The process would notify the other members of the groups I defined and have a delay to allow some kind of veto/vote if there is any disagreement. It may be a bit fiddly, but if a standard were defined for how the interaction works from a user's perspective (including steps to make sure you understand the consequences of how you've configured it), at least it could work consistently across all kinds of accounts.
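As a toy model of the approval rule being described - purely illustrative, no provider exposes anything like this today, all the names are made up, and the delay/veto window isn't modeled:

```python
def access_granted(approvals: dict[str, set[str]],
                   groups: dict[str, set[str]],
                   per_group_quorum: int = 2) -> bool:
    """True only if at least `per_group_quorum` members of *each* group approve."""
    return all(
        len(approvals.get(name, set()) & members) >= per_group_quorum
        for name, members in groups.items()
    )

groups = {
    "family":  {"alice", "bob", "carol"},
    "lawyers": {"dan", "erin", "frank"},
    "friends": {"grace", "heidi", "ivan"},
}
approvals = {
    "family":  {"alice", "bob"},
    "lawyers": {"dan", "erin"},
    "friends": {"grace", "heidi", "ivan"},
}
assert access_granted(approvals, groups)          # 2-of-3 in every group: granted
assert not access_granted({"family": {"alice"}}, groups)  # one group short: denied
```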
Here you can see how I'm using Shamir Secret Sharing; I gave clear instructions on how to use the shares and in what circumstances.
Based on their dynamics, I'm feeling pretty good. I know I have some people there that are tech savvy, plus some that will take good care of their shares and know when they should send them to whom.
I was wondering, how much do you trust these tools? Cryptography can be extremely tricky to implement. For example, has the tool been checked for side-channel attacks? Has it had any other audits? (On the GitHub page of the library, it says it's no longer maintained)
I trust it enough to solve my DR scenario. If I were targeted by the NSA, I wouldn't expect this to keep me safe.
I don't think SecretSharingDotNet has had any audits, and I'm pretty sure it hasn't been checked for side-channel attacks. I couldn't find anything on GitHub [1] saying it's no longer maintained, though.
I'm pretty sure a well-funded attacker would be able to hack me, but I think it's orders of magnitude more likely that I'll forget my master password, get robbed, have my apartment catch fire, or just die. Those are the scenarios I'm preparing for.
All of my passwords are in the "pass" command line utility, where they're encrypted with gpg. I added my brother's gpg key as an encryption target, and his ssh key onto the server where the git repo is stored, locked down to the git shell command. In the event of my untimely demise, my wife tells him the url of the git repo.
I personally wouldn't go to the extent of using CLI tools, as my next of kin and family members aren't at all technical. A printout of my 1Password emergency kit in a safe deposit box is probably doable, but then what - 598 passwords to projects on an old git repo on an ancient Synology NAS, or a throwaway account for some random website?
There is probably a lot to be said to curate your accounts to assist those sifting through your estate.
The ability to pass on your information legacy is important, and complicated. The trope of your mother going through her mother's papers and finding a long-lost love letter - or an unfinished manuscript - is equally plausible today. What secrets lurk in your DMs, Messenger and Signal history? Does your draft blog post actually contain some amazingly insightful observation?
Maybe your family’s memory of you could be enriched with this information? …maybe not?
At the end of (your) day(s), you might take those secrets to your grave, and it’s unlikely that your tombstone will include your GUID, or the Glacier storage URI where your online self will remain until the TOS states otherwise.
REST In Blob
EDIT: RAM-mento Moar-i (sorry, got carried away... couldn't help myself :)
>What secrets lurk in your DMs, Messenger and Signal history? Does your draft blog post actually contain some amazingly insightful observation?
Things that need to stay secret. That's why they are secrets. If my passing means that these things are no longer accessible to anyone ever again? Perfect. Works as intended.
> Short of brain damage, I don't think that would ever happen.
I once forgot my phone's unlock pattern. The very same one that I had used daily for years. I'm not someone who normally has memory problems, but I guess a few synapses just refused to do their job for some reason. I actually had an ex tell it to me, otherwise the phone would be bricked (I tried recalling it for basically 2-3 days). Now I have it and the master password for my password database written down and given to a person that I trust.
Kind of a silly and worrying situation, so it helps to have contingencies for even cases like that. One might worry about Alzheimer's and whatnot after a situation like that, but even healthy "HDDs" occasionally get "bad sectors". Of course, there have also been cases where I forget something that was almost a subconscious memory (e.g. muscle memory) just to remember it a while later.
For context: am in the 20-30 age bracket, no other memory problems or a history of memory problems in my family tree.
> Short of brain damage, I don't think that would ever happen.
Funny you say that. A year ago I went outside to break up a domestic violence situation. I woke up later face down with a brick next to my head. Due to the concussion I forgot my phone's password and that of my ATM card. It took me six months to remember them, although by then I had replaced both.
When I am on a longer vacation and I do not use a certain password, I struggle to remember it.
If I were chucked into a prison and let out after several years with no computer use in between, I would likely forget all my passwords in the meantime. No brain damage needed, just disuse.
Yes, but LastPass contains your password to something like your bank account. The poster's point is that you don't need the password for your bank account if you have a death certificate.
Can you offer some examples, besides financial accounts, of things that a person would prevent others from accessing while alive and grant access upon death? In sifting through the list of things in my password manager, none of them (besides finances) seem to have this quality. Seems like anything that should be seen by family after death could be seen by them before death as well.
Social media accounts with private DMs are the first to pop to mind. Cloud storage like Dropbox/iCloud/Drive/etc. Lots of things, really, if you think on it for just a minute or so.
If someone DM'd me, they were probably expecting me to not share what they said. If I have stuff in cloud storage that could be useful to others, I'll share it now.
I'm sure there are use cases, but it's actually very hard for me to think of them, let alone in just a minute or so.
I forgot mine twice, after having used it for years. And no brain damage yet (at least I think so). Fortunately, in both cases it came back after a few days. The problem with a piece of paper: in case of brain damage, I will forget where that piece of paper is hidden ...
The lack of per-application isolation on desktops is one of those ugly truths people try to sweep under the rug.
I foresee two potential solutions to this.
1) Run everything in a VM like Qubes (which essentially nerfs certain capabilities like 3D acceleration, barring major R&D)
2) Utilize some container runtime to provide isolation for legacy applications and stub out features such as filesystem calls so they don't need to be aware of its existence.
Microsoft tried to produce a crippled application runtime for Windows (UWP) with more security, but considering its lack of backwards compatibility and lesser feature-set it is not that surprising that adoption has been an uphill battle.
Apple will likely launch Armv9 CPUs (iDevice A16 and MacBook M2) this year. If they don't enable CCA and memory tagging, then we have to wait for Armv9 support in QEMU and a future Qualcomm SoC, https://www.anandtech.com/show/16584/arm-announces-armv9-arc...
> CCA introduces a new concept of dynamically created “realms”, which can be viewed as secured containerised execution environments that are completely opaque to the OS or hypervisor. The hypervisor would still exist, but be solely responsible for scheduling and resource allocation. The realms instead, would be managed by a new entity called the “realm manager”, which is supposed to be a new piece of code roughly 1/10th the size of a hypervisor.
> Applications within a realm would be able to “attest” a realm manager in order to determine that it can be trusted, which isn’t possible with say a traditional hypervisor. Arm didn’t go into more depth of what exactly creates this separation between the realms and the non-secure world of the OS and hypervisors, but it did sound like hardware backed address spaces which cannot interact with each other.
That's great! The processor's hypervisor-like firmware should handle task switching, page table manipulation, etc, and the OS kernel should use upcalls to the firmware instead of needing to have various special-case paths for various minor hardware variants. Had the x86 BIOS been a bit better designed (and a bit more performant), we likely would have seen OS kernels leaning much harder on firmware that shipped with the processor instead of having to make as many assumptions about the hardware and special-case checks.
Besides allowing for more easily isolated security domains, this allows things like (if properly designed) not needing to wait for kernel improvements to take advantage of more/wider vector registers or other changes that change the amount of processor state to serialize/deserialize when task switching.
The DEC Alpha AXP worked somewhat like this with its PALCode firmware. The Tru64 UNIX (and Linux, *BSD, etc.) and VMS kernels actually were unable to execute the privileged CPU instructions. The OS kernel needed to make upcalls to the PALCode, which then could use privileged instructions and could see model-specific registers, etc. The PALCode version used for Tru64 emulated two protection rings, and the PALCode version used with VMS emulated more (I think 4) rings of protection by just keeping an extra integer around for each task, and using that to determine which tasks could currently make which upcalls. One could (and probably should) extend this ring emulation to a bit vector of per-task revokable capabilities that could be passed to child tasks/processes/threads.
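A toy model of that last idea - per-task revocable capability sets gating firmware upcalls. Purely illustrative; real PALcode looks nothing like this:

```python
class Task:
    def __init__(self, name: str, caps: frozenset[str]):
        self.name = name
        self.caps = caps

    def spawn(self, name: str, caps: set[str]) -> "Task":
        # A child may only receive a subset of its parent's capabilities.
        assert caps <= self.caps, "cannot grant capabilities you don't hold"
        return Task(name, frozenset(caps))

def upcall(task: Task, operation: str) -> None:
    """Firmware entry point: refuse privileged operations the task can't name."""
    if operation not in task.caps:
        raise PermissionError(f"{task.name} lacks capability {operation!r}")
    print(f"{task.name}: {operation} permitted")

kernel = Task("kernel", frozenset({"map_pages", "switch_task", "set_clock"}))
driver = kernel.spawn("net_driver", {"map_pages"})
upcall(driver, "map_pages")          # permitted
try:
    upcall(driver, "switch_task")    # denied: capability was never granted
except PermissionError as e:
    print(e)
```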
Hopefully we see something like this for RISC-V, using seL4 for the "realm manager". This would probably require an extra userspace driver process running to intermediate realm setup and manipulation, but wouldn't be in the critical path for system calls or other userspace drivers.
We're already running hypervisors in so many places that it makes sense to run a formally verified separation kernel everywhere, and run hypervisors and OS kernels as userspace daemons. This avoids the hypervisor needing to emulate hardware as an ad-hoc upcall mechanism and instead simplifies both the hypervisor and the OS kernel. The overhead of modern microkernels is so low that your cell phone's baseband processor is likely running an L4 microkernel. It's called paravirtualization when the OS kernel is modified to use upcalls to the hypervisor instead of trying to perform privileged operations that will be trapped (and then emulated) by the hypervisor. Paravirtualization improves VM performance and potentially sidesteps hypervisor emulation bugs, but it would simplify the kernel (and potentially make it easier to optimize) if OS kernels ran paravirtualized even when there is one guest OS per physical computer.
Edit: Of course, there's a small performance hit in the single guest OS case, but if that's the common code path, presumably both hardware and the kernels could be better optimized. Also, if you're supporting OS-opaque realms, you're already paying this hypervisor cost all the time anyway.
Apple actually ships a proprietary ARM extension for lateral exception levels to help enforce kernel integrity, which includes gating access to code that fiddles with page tables.
That only works if you are running a trusted OS atop that hardware, and you simply cannot trust proprietary software. Apple do not permit iOS users to run any software they desire, and attempted to run spyware on all iOS devices; I think one cannot trust them not to have a backdoor in any 'secure' execution environment.
Now, if Linux or OpenBSD released support for that hardware, you might be able to trust it.
I think that the browser is going to eat the desktop/OS and that most apps will eventually be browser-based.
PWAs are the initial movement in that direction. As browser APIs expand and support more use-cases through WebAssembly, WebGPU, native filesystem APIs, etc. more and more apps that were primarily or only available as native can be supported in the browser.
I know that many people hate web apps because they're often slow, clunky, bloated, etc. but a lot of that is changing as the frontend ecosystem embraces new and more efficient frameworks and technologies. The browser provides everything one needs to build fast and responsive applications - It's an issue with incentives and culture more than anything to do with the fundamental tech.
Will JavaScript-based web apps ever run as smoothly as FamiTracker (and to some degree Telegram Desktop) on a Core 2 Duo machine, or take up negligible memory (so you won't have to close some apps to start others) on today's 4-8GB machines?
But then again, modern native apps are dog slow on old hardware; Visual Studio 2022 would hang for over 10 seconds at a time, though that's arguably excusable since it doesn't support running on Windows 7, which is what I was doing.
I don't think it's so much "ugly truths people try to sweep under the rug" as "we do not yet appear to have a practical way to actually do anything about it without vastly reducing the usefulness of the system". There are ways to improve things a bit with your choice of sandboxing tech, but those are frequently either ineffective (oh good, an attacker who gets in can only reach my bank account, but not my SSH keys), high-friction (flatpak portals are cool so long as you don't mind manually approving all file access), or both.
> flatpak portals are cool so long as you don't mind manually approving all file access
And by “manually approving all file access” you mean “opening the file in the file picker like normal”, right? There are some apps where using a file picker at all is awkward, but I’d argue in most applications it’s basically what you’d do anyway. Certainly most applications that non-developers would use.
The bigger problem is that lots of Flatpak applications still don’t use portals.
I remember my first time using a photo application in a flatpak, I had no idea how to get images it saved to a place I could then upload with my browser. It was rather frustrating.
> 2) Utilize some container runtime to provide isolation for legacy applications and stub out features such as filesystem calls so they don't need to be aware of its existence.
I've been thinking about doing this on my laptop at work. With a bit of thought, it shouldn't be too hard to run, for example, software compilation - or most CLI/TUI tools, in fact - in a minimal mount and network namespace using systemd or a container runtime.
Practically, this would allow me to put, e.g., the ever-beloved npm into a mount namespace where ~/.ssh or even the .git of the repo it is in just doesn't exist, and a network namespace in which the company VPN doesn't exist (or which only has a route to the npm repository host). This can also be used to label the process using SELinux or AppArmor as a second line of defense during, and possibly after, an escape of something bad.
However, time hasn't been available for this so far. And no, it wouldn't be end-user-friendly.
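For what it's worth, a sketch of what that kind of wrapper could look like with bubblewrap (bwrap), driven from Python. Untested, and the paths/flags are illustrative rather than a recommendation:

```python
import os
import subprocess

home = os.path.expanduser("~")
repo = os.getcwd()

cmd = [
    "bwrap",
    "--ro-bind", "/", "/",                    # read-only view of the whole filesystem
    "--dev", "/dev",
    "--proc", "/proc",
    "--bind", repo, repo,                     # the project itself stays writable
    "--tmpfs", os.path.join(home, ".ssh"),    # empty tmpfs hides SSH keys
    "--tmpfs", os.path.join(repo, ".git"),    # ...and the repo's git metadata
    "--unshare-net",                          # drop networking entirely; omit this
                                              # if npm needs the registry (or proxy it)
    "npm", "ci",
]
subprocess.run(cmd, check=True)
```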
Of course you won't have that layer of isolation if something becomes compromised, but it should make it harder for malicious code to persist on your system without you knowing.
On Linux I'm sure some AppArmor or flatpak whatever will be the norm one day, once all the kinks are worked out... but for now it seems to work surprisingly well to just not install stuff that isn't popular and trusted.
It's not perfect, but it's still pretty good. Plus, if you update manually every few days and read tech news all the time, most malware will probably be discovered before you get it.
At the same time, what is my risk model? Are my NSFW activities THAT interesting? What about my personal notes that contain health details?
I keep an inventory of stuff in my home. Is that ok to keep in Dropbox? Sure the government can access it.. but even if a remote attacker does, is that useful to them?
And of course, as things get more secure they become less accessible. My "very secure" documents archive almost never gets updated.. cuz it's a pain to update it. My daily notes are just chucked in dropbox and get updated all day long...
The way I see it, if you're not a hot target then it's less about the individual and more about sweeping up millions of individuals data for bulk selling.
What the buyers can actually do with a million dropbox contents I'm not sure. But it's obviously better not to let that happen.
Best defense is the same as securing your home: Don't be the easy target on your block. Even just the bare minimum on all your sites (2FA, good password system, anti-virus on your computer) will stop you from being low hanging fruit.
I'd also love to hear anyone knowledgeable in this area to chime in!
So, the question is why current systems are architected to make an unnecessary tradeoff between privacy/security and convenience, and then how to make something that's competitive with current systems, and doesn't make that tradeoff.
It's not really an unnecessary tradeoff but rather a very natural one. Convenient means easy to access, and easy to access means insecure. Of course, what people really want from convenience is ease of access for only you, and creating this notion of "you" seems to be the hard part.
Sandboxing without any overhead is pretty accessible on any Linux distro, so I don't see why QubesOS should be the go-to choice. For example, I use firejail for all internet facing applications that I use (also for Wine and any proprietary software): https://wiki.archlinux.org/title/firejail#Using_Firejail_by_... The setup takes just a couple of minutes.
I routinely prune /home of sensitive info, and often move secrets into a Cryptomator or VeraCrypt vault. I also compartmentalize my workflow: one for NSFW stuff, another for work, another for playing games, and the list goes on. I do this because a compromise of one compartment does not mean a compromise of my entire system. Virtual machines are great for this, alongside Chrome/Firefox profiles for different things. How you slice and dice up your own system(s) is entirely up to you.
I think there's varying degrees depending on what your goal is. Besides VMs you could have a different user login for each compartment which means things like browser profiles, shell history and other things will get their own settings.
I record a lot of videos and wrote a little script[0] to help backup and restore my shell history to avoid auto-complete and CTRL + r searches from showing sensitive info (client work, etc.) while recording. I only use one browser for recording which has its own history too.
For my use case that's enough separation; for others it might not be. For example, I still need to be careful about running commands like `docker image ls` on video because it has the potential to show client work. I just remember to black out sensitive info during editing if it happens to come up.
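Not the script referenced above, but the rough idea fits in a few lines of Python (assumes bash and a plain ~/.bash_history; you'd start a fresh shell after stashing, since running shells keep history in memory):

```python
#!/usr/bin/env python3
"""Stash shell history before recording, restore it afterwards."""
import shutil
import sys
from pathlib import Path

HISTORY = Path.home() / ".bash_history"
STASH = Path.home() / ".bash_history.stash"

def backup() -> None:
    shutil.copy2(HISTORY, STASH)   # keep the real history safe
    HISTORY.write_text("")         # empty history = empty Ctrl+R results

def restore() -> None:
    shutil.copy2(STASH, HISTORY)
    STASH.unlink()

if __name__ == "__main__":
    {"backup": backup, "restore": restore}[sys.argv[1]]()
```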
While true on a theoretical level, this is largely impractical. To quote House:
> Cuddy: "How is it that you always assume you're right?"
> House: "I don't, I just find it hard to operate on the opposite assumption."
If you're on a personal desktop at home you've got to place some level of trust in it.
Same with local LAN.
Once you get to more sophisticated server microservices, you can start thinking of the various components as mutually untrusted (until proven otherwise).
Why? I agree that you have to trust something in order to function, but I would think you could distrust the LAN pretty easily, at least for certain levels of internal service. That is, it might be a struggle to distrust the LAN if you need, say, NFS or HTTP without an internal domain name (to get certs), or maybe some games. But if all you need is internet access, you could fully block internal connections; if you need some access, you can probably rely purely on SSH; and failing all else, you could run WireGuard or such and force everything over that.
> If you’re not a cyber criminal or don’t have a lot of crypto to steal, this will probably never happen to you…
This is a misunderstanding of the threat in two ways.
First, malware is not purely, or even primarily, a targeted threat. It's actually a shockingly easy attack to scale, and by far most victims are not any kind of high-profile target. They are either unsophisticated or careless computer users who installed something they shouldn't have. And the thing is, from most malware authors' perspective it doesn't matter much whom they compromise. All victims can be monetised to some extent, and there is an elaborate ecosystem to make sure that monetisation happens in practice, not just in theory.
Second, the list of high value targets is definitely not limited to criminals and cryptocurrency owners. They might be the only people for whom the risk model is specifically the theft of a key file from the local disk.
But you know what else is a file on the local disk? The browser cookie jar, full of bearer tokens granting access to all your online services. Have a short Instagram name? An established but not particularly popular YouTube channel? Do your banking online? Have an account on Steam with some bought games? All of that is worth money to an attacker, and them realising that value will hurt you.
As for what to do about it? Hardware crypto is the technical answer, but it will take ages to move the ecosystem there. Until then, segregate the things whose compromise would be really harmful onto devices separate from the day-to-day, ideally ones that are actively supported and have a good security model (e.g. an iPad or Chromebook).
I contemplated building an airgapped secrets machine that could only communicate data with outside machines via QR codes and a webcam.
The main reason to do this isn't that the airgapped computer can't be compromised, but that even if it is, I could monitor all data moving in and out of it.
Even a USB drive passed back and forth could secretly transfer data I don't know about. Secret data is so small compared to the size of modern storage that data could easily hide in too many places.
Is that system a little paranoid? Maybe, but I haven't fully trusted any computer since Heartbleed.
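The outbound half of that is easy to prototype with the Python `qrcode` library (the chunk size and frame header format here are made up; the receiving side would be a webcam plus any QR reader):

```python
import hashlib
import qrcode  # pip install qrcode[pil]

def to_qr_frames(data: bytes, chunk_size: int = 512) -> None:
    """Render data as a numbered series of QR images you can eyeball."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for n, chunk in enumerate(chunks):
        # Header (index/total/hash) lets the receiver reassemble and verify.
        digest = hashlib.sha256(chunk).hexdigest()[:16]
        payload = f"{n}/{len(chunks)}/{digest}/{chunk.hex()}"
        qrcode.make(payload).save(f"frame_{n:04d}.png")

to_qr_frames(b"secret notes to move across the gap")
```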
The problem with that is usability. If I need to sign a transaction, that security is lost the moment the key is entered into the signing computer.
Proximity seems to be key to most of these attacks, so maybe physically excluding any possible eavesdroppers, and adding noise sources would create a shell equivalent to guarding that piece of paper.
I also anticipate gathering old/very limited electronics that can be visually inspected, or that don't have the extra capacity to run malicious code, to allow auditing the mechanisms of computation.
If only there were a method used daily in industry to move data back and forth from secure systems in a write-once fashion. Hear me out - you could construct some sort of polycarbonate disc that would contain a substrate. You could then permanently encode your data onto this substrate - so it couldn't be changed - with a "laser" perhaps. Then said disc could be read on another machine without worry of sneaky things hiding in your USB. On second thought that sounds way too complicated and I'd probably stick with the QR code-camera thing.
Ok, the key advantage is being able to visually see the amount of data being transmitted. With a CD, how do you know an extra kilobyte of data didn't hitch a ride on your disc?
You do realize that viruses existed before networks right? Your "method used daily in industry" can very easily carry an unwanted payload.
I'm trying to explore the intersection of high security and utility.
Air-gapping really does seem like the only option for true security. It also makes it quite difficult to do anything of use with the machine. Since you can't control the supply chain, you should assume that the air-gapped machine is malicious/compromised and the only protection you have is the air-gap.
Thus any USB used to transfer data/software to the air-gapped machine should be destroyed immediately afterwards and you should probably use something like pen & paper as your only allowed output method.
Air gapping doesn't really work these days. A compromised machine could make noises (even with its capacitors) to transmit data, or cause voltage fluctuations that something else could read.
I can imagine bootstrapping a system with trusted hardware (assuming you could get it) by typing in a bootloader + SHA implementation by hand, then using a narrow hardware interface to copy a trustworthy, audited operating system kernel (assuming that also existed) from some other host. The bootloader could check the SHA of that, and then bootstrap the system.
You can't trust the compiler, so you would have to type it in as assembly, and even that is questionable since huge amounts of the hardware's microcode are now reprogrammable.
I still think air-gapping works; it's just that you need a pretty large airgap. Turn on the shower, fire up the microwave, move around, and hit some incorrect keys with lots of deleting when entering passwords.
Bought a desktop recently. Thought about setting up verified boot and disk encryption, but...
[M]y biggest takeaway was that all of this was quite complicated and did not really have anything to do with what I bought this system for. So I decided to throw in the towel and flip SecureBoot off.
SecureBoot was as much an attempt by Microsoft to put in place a method to lock out other OSes as it was for local physical security. Most consumer desktop and laptop computer users don't password protect their BIOS/EFI, so an attacker with physical access can simply turn off SecureBoot themselves and then boot whatever toolkit they want to take over the system.
If this is the kind of protection you want, you should be running everything in separate VMs and containers. Preferably you would run the hypervisor (perhaps a hardened bare-metal hypervisor) on your server and remotely connect to the instances with your client. The client is solely used for connecting to those instances.
The hypervisor itself will need to be well protected, and you do not want it accessible from your client or the VMs and containers - use a separate NIC or VLAN. This is the reason why you want a separate server - the only things the client will see are the shared containers. Let's assume here that VMs and containers are secure - if they aren't, you can replicate this with separate physical machines.
On the server you can set firewall rules to control access between the different containers. Network storage etc. can also be set up for the containers that need it, with different permissions depending on the situation.
Depending on the stuff you are running, you may want to go the VDI or SSH route. There are also other options like Xpra, etc., depending on your requirements. The more segmentation you do (i.e. one VM for the dev environment of a specific app, another for chat and email, etc.), the more your security increases at the cost of usability.
I personally do this in a limited fashion (I have secure workstations and VDIs for handling private/financial information), but do not go the full route of separating everything out for day-to-day computing.
Identities and financial information are worth stealing.
People, before you go nuts securing your computers, routers and phones, talk to your doctor's office about adding additional security to your medical records, and freeze or add fraud alerts to your credit reports, including NCTUE - which I had never heard of until someone walked into Verizon and walked out with 4 unlocked iPhones after opening a new account in my name.
If it's worth stealing, then it's unlikely that they'd talk about it on the open Internet just to justify their security scheme to some skeptic commenter on HN.
A compromise of a device used to maintain a web site or a software package or product is a big step towards compromising the site, package or product itself.
I thought this article was going to be about what people do when they assume their devices are compromised and yet realize it is too difficult to rebuild everything (which would just get compromised again, or the devices are compromised by design), so they alter their behavior to deal with the knowledge that the devices are compromised.
That would mean not keeping your most important secrets on any device and altering your behavior on all devices: assuming that communications are being viewed by someone, assuming location tracking is happening when your phone is on, and so on, which leads to some rational adjustments to your behavior. I think we're already there for anyone who has been paying attention (do you take your fitness tracker off when having sex? Do you try to manage when location services are turned on?).
It's been called "the chilling effect" when applied to free speech. Maybe we need a new term for this kind of effect applied to behavior on devices? I'd guess that one effect is reduced productivity in all digital aspects of life, because people can't take full advantage of their digitally enhanced lives. Another would be an increased level of chronic stress due to worrying about being tracked, which everyone knows is ubiquitous. Maybe that should be the new name for IoT: "Ubiquitous Tracking". It all started with the "mother of all demos".
While it's quite likely that a device I own will be compromised at some point, it's less likely for everything to be compromised at once. My phone can access some backends, my laptop can access some others. Full backups are accessible from either. 2-factor authentication makes compromise of all accounts less likely.
It should be possible, for someone who wants a very low chance of losing all their data, to remember 2 or 3 passphrases and compartmentalize access to servers and backups such that most backups are pull instead of push (or have restricted permission ala 'zfs allow') and compromising everything requires attacking multiple platforms all at once.
Make sure it's possible to access everything starting from fresh installs on fresh hardware; once it's clear that one device has been compromised it's best policy to begin fresh on all devices as soon as possible and then start restoring from backups. Have some offline backups.
To be fair, convenience trumps some of these guidelines. Security is hard and only organizations can achieve a high level of resilience since brain backups don't exist yet.
For workloads which don't require persistence (e.g. web browsing), you can boot a PC from an external storage device with write-blocking firmware. Kanguru sells flash, SATA and NVME drives with a physical write-protect switch.
Are there good tools for anomaly/intrusion detection on Linux? Even something as simple as comparing current resource usage with a baseline record of disk/network/CPU utilization.
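I don't know of a standard tool for exactly that (AIDE and Tripwire cover file integrity, not resource usage), but the baseline-comparison part is easy to sketch with `psutil` - the thresholds and field choices here are arbitrary:

```python
import json
from pathlib import Path

import psutil  # pip install psutil

BASELINE = Path.home() / ".simple-ids-baseline.json"
INTERVAL = 10  # seconds to sample over

def sample() -> dict:
    """Measure CPU, network, and disk write rates over INTERVAL seconds."""
    n0, d0 = psutil.net_io_counters(), psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=INTERVAL)  # blocks for INTERVAL seconds
    n1, d1 = psutil.net_io_counters(), psutil.disk_io_counters()
    return {
        "cpu_percent": cpu,
        "net_sent_Bps": (n1.bytes_sent - n0.bytes_sent) / INTERVAL,
        "disk_write_Bps": (d1.write_bytes - d0.write_bytes) / INTERVAL,
    }

def check(factor: float = 3.0) -> None:
    current = sample()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current))  # first run records "normal"
        return
    base = json.loads(BASELINE.read_text())
    for key, value in current.items():
        if value > max(base[key], 1.0) * factor:
            print(f"anomaly: {key} = {value:.0f} (baseline {base[key]:.0f})")

if __name__ == "__main__":
    check()
```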
Or to stop putting so much trust in them, go back to "using them for communication with other people directly," and assume they're being evil, because if they're not at the OS level, then some app on them is.
I know they're convenient - I've had a smartphone for years - and I have in the past 6 months or so gone back to a flip phone, on which the most interesting thing is a halfway-complete contacts list and some regularly pruned text message threads (regularly pruned because whatever KaiOS uses for an SMS database gets slow if you don't do that).
Add network isolation to your defense in depth strategy. Close all link listeners and inbound firewall ports. Open authorized-only, outbound-only, ephemeral sessions.
DNS and HTTPS are wide-open ports. Would be nice to have a subscription service that maps popular web services to known-good destination IP address ranges for firewall rules.
Is Suricata a good option for network intrusion detection?
You can somewhat mitigate that with something like Little Snitch (macOS) / OpenSnitch (Linux). I'm quite sure I've heard of something similar on Windows, but I can't recall what it was. But I think that the included Windows Firewall is able to do per-application filtering, although I don't know if it's dynamic.
The main advantage over a classic "filter firewall" is that it's able to work at the app level.
Forbidding outgoing HTTPS traffic is going to be painful if you regularly use a browser. But that doesn't mean random_local_only.app should be able to reach anything on the internet.
I like that Suricata is open source, but I haven't used it. You can close all your inbound ports and link listeners, and allow e.g. locally resolved DNS and outbound-only HTTPS only for authorized sessions.
I trust it more than browser's internal isolation solutions, like tab containers, which I like to use for other things, like testing web apps using different login sessions at once.
I'd hate a remote solution. ;) Though this is not for privacy but more for protection and better low effort isolation.
How would a browser proxy do anything aside from hiding from tracking, and maybe a tiny bit of protection from sites that use browser exploits (not that it matters much, since the only relevant ones are 0-days that might not be in blocklists)?
It doesn't seem like it would do much against a device compromise.
I wonder if they could use firecracker or something to reboot the browser (and nuke persistent/ephemeral storage) on each page load. They could even bounce each user across all their hosts by default.
For trusted sites (ones the user logged into), this could be disabled.
As for repository access, you could have one qube with GitHub access. Your dev qube would not have GitHub access, but it would have access to a private repository. Your GitHub qube can pull from the private repo and push to GitHub without ever running or installing any of the code.
Tangentially related - how secure are password-protected RARs compressed with, say, 7-Zip? Assuming that the password is long, secure, etc.? In the back of my head, password-protected RARs don't seem all that secure.
I see a lot of trust in VMs here, but jailbreaks exist, and in the future I anticipate more Spectre/Meltdown-type vulns, or even physics-based attacks like Rowhammer. Assuming my threat model were infinite, could I really trust VMs?
The good news is that I think no one human is at risk of such an attack. The way I’d frame it: if you assume the NSA can break AES-256 in reasonable timeframes, or the CIA runs every single Tor node, that knowledge gets compartmentalized to such a degree that next to nobody knows, and attacks would only be used at the highest levels against the most significant state level threats. Hell, I wager they wouldn’t even risk parallel construction for fear that it’d tip their hand.
I have MACsec (L2) separation for my dev hosts and out-of-band keying with Nitrokeys, so lateral movement in my LAN cannot reach my dev station at the IP level. I don't trust my router's (Zyxel) Chinese firmware.
My home drive is constantly churning and I have no idea what is going on. And there is a lot of software in there; every apt update is a risk that some new supply chain attack is on the way.
It runs counter to popular belief. The author, however, makes a good case for it going the way of "deleting cookies / using VPNs ... anonymizes you" soon. You get a while of "this is only theoretical" till one day it's common knowledge.
I blame complexity, btw. Burn it all down and we might be able to start over in a rather acceptable digital stone age.