Setting Up Your Tech on the Assumption You’ll Be Hacked (nytimes.com)
64 points by rbanffy on Oct 7, 2018 | 9 comments


My setup is to basically have two systems: a work laptop, on which I try to keep everything professional so that if and when it is hacked the damage will be minimal, and then a second laptop, which I use for anything personal and never connect to my work system.

Sounds like a perfect use case for Qubes OS.


Partitioning into two computers seems a better strategy. It may make sense to use Qubes on the work one, but simply isolating risks physically and partitioning your life is safer.


>>My setup is to basically have two systems: a work laptop, on which I try to keep everything professional so that if and when it is hacked the damage will be minimal, and then a second laptop, which I use for anything personal and never connect to my work system.

>Sounds like a perfect use case for Qubes OS.

There's the "pre-owned" aspect of a work PC though - your employer could install software to monitor everything you do.

Qubes is cool but it's not practical to carry 2 laptops. If your threat model is just an employer snooping and not a nation state, a smartphone with a 4G connection should be sufficient.

(Out of an abundance of caution I also run my 4G traffic through a VPN)


Things like the Intel ME vulnerabilities and some of the fancy audio-based or RF-based attacks really make that assumption important. Maybe a safer assumption to make: any computer can be hacked at any time.


IMHO security is a primary concern only for the founders/lead engineers in the early days. Once you reach hundreds of thousands of users, there is little you can do about security. There will be several managers and dozens of developers and testers. It's pretty easy to hire someone who doesn't really care about your company and product, and who ruins a lot of your hard work. I might be wrong, but many breaches have occurred because someone working on a small module made a laughable commit, and it slipped past a lousy tester.


Strong code review, enforcing good security practice (like sanitizing user input), unit testing, fuzz testing, auditing, logging, layered security and more are things you can implement to mitigate risk. Obviously there's a trade-off, and almost anything that matters gets hacked, but it will show if you did your homework: you will know before your customers do, and hopefully you will at least have limited the damage.
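To make the sanitizing + fuzz-testing point concrete, here's a rough Python sketch (the function names and the escaping choice are mine, just for illustration):

```python
import html
import random
import string

def render_comment(user_input):
    # Escape HTML metacharacters so user input can't inject markup.
    return "<p>" + html.escape(user_input) + "</p>"

def fuzz_render(iterations=1000):
    # Crude fuzz test: random inputs must never survive unescaped.
    alphabet = string.printable
    for _ in range(iterations):
        s = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
        body = render_comment(s)[len("<p>"):-len("</p>")]
        assert "<" not in body and ">" not in body

fuzz_render()
```

A real setup would use a proper property-testing or fuzzing tool, but even something this crude catches a sanitizer that forgets a metacharacter.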

A disturbing number of serious breaches come from simply running versions of software known to be insecure.


Not sure this is really relevant to the article, but:

Protective monitoring - Take that auditing and logging and alert people when things happen. E.g.: is it normal for your production app to try and call out to the internet? What's the usual frequency of a user's actions? Is a privileged account being used? Rulesets will be app-specific, but people should actually try to write them.
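A rough sketch of the "frequency of actions" rule in Python (the class name and thresholds are made up; real rulesets would live in your monitoring stack):

```python
import time
from collections import deque

class RateAlert:
    """Flag a user who performs more than `limit` actions within `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.events = {}  # user -> deque of action timestamps

    def record(self, user, now=None):
        # Returns True when this action should raise an alert.
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(user, deque())
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()  # drop actions that fell outside the window
        return len(q) > self.limit

alerts = RateAlert(limit=3, window=60.0)
assert not any(alerts.record("alice", now=t) for t in (0, 1, 2))
assert alerts.record("alice", now=3)  # 4th action inside a minute -> alert
```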

Rotation of secrets (or better yet, avoid static secrets) - Leavers shouldn't still have your root DB creds just because they used them once to set up the system. If someone hijacks a system, it's better that they get a temporary token rather than a pile of long-lived credentials.
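The temporary-token idea can be sketched with nothing but the Python stdlib (the secret, TTL and token format here are invented for illustration; in practice you'd use something like OAuth2 access tokens or Vault-issued leases):

```python
import base64
import hashlib
import hmac
import time

SECRET = b"rotate-me-regularly"  # illustrative only, never hardcode this

def issue_token(user, ttl=900, now=None):
    # The token carries its own expiry, so a leaked token is only useful briefly.
    now = int(time.time()) if now is None else now
    payload = f"{user}:{now + ttl}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token, now=None):
    now = int(time.time()) if now is None else now
    try:
        b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64.encode())
    except (ValueError, TypeError):
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    user, expiry = payload.decode().rsplit(":", 1)
    if now >= int(expiry):
        return None  # expired: "rotation" happens automatically
    return user

tok = issue_token("deploy-bot", ttl=900, now=1000)
assert verify_token(tok, now=1100) == "deploy-bot"
assert verify_token(tok, now=2000) is None
```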

Zero-trust network - Run every system as if it's connected to the internet. They should all have auth, auditing, monitoring and layered security. Just because something is on your internal network doesn't make it secure.

Patch - Systems are like children, but many product teams are like deadbeat dads who leave once the kid is born. Actually have a support policy that keeps systems up to date; this should be formalised in larger orgs.

Scan - CVE feeds are available for every popular language and OS for free. You should probably have SAST, DAST and dependency scanning. Have them in your pipeline, and run them nightly against prod.
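Dependency scanning is the easiest of those to start with; here's a toy Python version of what real scanners do by matching pinned versions against CVE feeds (the package/version list here is invented):

```python
# Hypothetical known-vulnerable pins; a real scanner pulls these from CVE feeds.
KNOWN_VULNERABLE = {
    ("examplepkg", "2.5.3"),
    ("otherpkg", "1.8.0"),
}

def scan(dependencies):
    # dependencies: dict of name -> pinned version from your lockfile.
    # Returns human-readable findings for anything with a known CVE.
    return [
        f"{name}=={version} has a known vulnerability"
        for name, version in sorted(dependencies.items())
        if (name, version) in KNOWN_VULNERABLE
    ]

findings = scan({"examplepkg": "2.5.3", "safepkg": "1.1.2"})
assert findings == ["examplepkg==2.5.3 has a known vulnerability"]
```

Wire something like this (or rather, the real tool for your ecosystem) into CI so a vulnerable pin fails the build instead of shipping.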

Use a framework - If it's on the web, you should probably be using a framework like Django, Rails or Symfony, or whatever seems sensible for the language you're using. Even your API responses should use your templating layer.

Basically, if you assume you're going to get hacked, which is a fair assumption, you want to:

a) Know that you've been hacked. Think about what a hacker is going to do once they breach your front door, or come in through a CI pipeline or a bad dependency.

b) Make recovery as easy as possible (just patch the issue).

c) Limit what can be done with the hacked access. If your leaked access tokens only grant access to the hijacked app, deny and alert when someone tries to use them against something else.
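Point (c) is basically an audience check on tokens; a minimal Python sketch (claim names borrowed loosely from JWT conventions, and the "alert" is just a print here):

```python
def check_audience(token_claims, service):
    # Deny (and in a real system, alert) when a token minted for one
    # service is replayed against another.
    allowed = token_claims.get("aud") == service
    if not allowed:
        print(f"ALERT: token for {token_claims.get('aud')!r} "
              f"used against {service!r}")
    return allowed

claims = {"sub": "app-frontend", "aud": "billing-api"}
assert check_audience(claims, "billing-api")
assert not check_audience(claims, "admin-api")  # denied and alerted
```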


Static type systems (like Haskell's, not Java's) and CI tests that perform QuickCheck-style property tests or program verification can help here too. You can create monads that grant consumers only the least privilege they need.
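A loose Python analogue of that least-privilege idea (the monadic version would enforce this in the type system at compile time; here the wrapper simply doesn't expose a write method, and all names are made up):

```python
class ReadOnlyStore:
    # Capability wrapper: consumers of this object get read access only.
    def __init__(self, store):
        self._store = store

    def get(self, key):
        return self._store.get(key)

def report(store):
    # This consumer cannot write: the wrapper exposes no setter at all.
    return f"balance={store.get('balance')}"

db = {"balance": 42}
assert report(ReadOnlyStore(db)) == "balance=42"
```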


And there is a further problem: you can't just create bureaucracy to fix it. I don't want the company to suffer a breach, but as an employee I will be more interested in showing that I complied with the rules than in actually preventing the breach.



