What can we do to try and reduce "alert fatigue"? I've lost track of the number of super-scary-looking regex DoS "high" vulnerabilities I've had to review for an app that only uses client-side JS and is incredibly unlikely to be exploitable in practice (or particularly where the vulnerable dependencies are build-time only).
One of the problems I've also had with Snyk is low-quality duplicative entries (for example, cataloguing each deserialisation blacklist bypass in Jackson as a separate "new" vulnerability because "yay CVE numbers to put on CVs") which then wastes the time of folks triaging vulnerabilities who may have already concluded there's no exploitation risk (due to e.g. not deserialising user input, or not using polymorphic deserialisation anywhere) and have to review issues again.
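For what it's worth, that triage usually comes down to one configuration question. Here's a minimal sketch (jackson-databind assumed; the Invoice class and JSON are made up) of the difference between binding to a concrete type, which the blacklist-bypass CVEs don't touch, and enabling default/polymorphic typing, which is what they actually target:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class DeserializationTriageSketch {
    // Plain bean bound to a concrete type: Jackson never consults the
    // polymorphic gadget blacklist here, so each "new" bypass CVE is noise
    // for code that only does this.
    public static class Invoice {
        public String id;
        public long amountCents;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        Invoice invoice = mapper.readValue(
                "{\"id\":\"INV-1\",\"amountCents\":4200}", Invoice.class);
        System.out.println(invoice.id + " " + invoice.amountCents);

        // The risky configuration: default typing lets the *payload* choose
        // which class gets instantiated, which is exactly what the blacklist
        // tries (and periodically fails) to constrain. Left commented out on
        // purpose; only code like this, fed untrusted JSON, is exposed.
        //
        // mapper.activateDefaultTyping(
        //         mapper.getPolymorphicTypeValidator(),
        //         ObjectMapper.DefaultTyping.NON_FINAL);
        // Object anything = mapper.readValue(untrustedJson, Object.class);
    }
}
```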
A lot. Honestly, GitHub dropped the ball for a while here. (The inside story is that we bought a SAST company, shifted a lot of focus into making that acquisition successful, and didn't give enough attention to our open source security offerings for a couple of years.)
On the alerting side, we have a couple of things coming. Neither are magic bullets, but both will help.
- Better handling of vulnerabilities in dev dependencies. Some vulnerabilities matter if they're in a dev dependency - anything that exfiltrates your local filesystem, for example. Others don't - DoS vulnerabilities, for example. At the moment, GitHub doesn't even tell you whether the dependency a vulnerability affects is a runtime or development dependency. We can and will get better there.
- Analysis of whether the vulnerable code in a dependency is called. You almost certainly want to react faster to vulnerabilities your application is actually exposed to than to ones that it may be exposed to in future; there's a sketch of the distinction after this list. (You probably want to respond to the unreachable ones, too, especially if you can get an auto-generated PR to do so, but there's much less urgency.) We have this in private beta for Python right now, and expect to have it in public beta in the next few months.
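To make "the vulnerable code is called" a bit more concrete, here's a hedged sketch; the library, method, and advisory below are hypothetical, not taken from any real analysis:

```java
public class ReachabilitySketch {
    // Stand-in for a third-party dependency that an advisory was filed against.
    static class ImagingLib {
        // Hypothetical advisory: decodeTiff() mishandles crafted input.
        static byte[] decodeTiff(byte[] input) { return input; }

        // Unaffected code path shipped in the same artifact.
        static byte[] decodePng(byte[] input) { return input; }
    }

    public static void main(String[] args) {
        // This application only ever calls decodePng(), so a call-graph
        // analysis can flag the advisory as "vulnerable function not reached"
        // and the alert can be deprioritised rather than treated as urgent
        // (not ignored: a later refactor or transitive caller could change it).
        byte[] pixels = ImagingLib.decodePng(new byte[] {1, 2, 3});
        System.out.println(pixels.length + " bytes decoded");
    }
}
```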
Beyond alerting, the other big thing is that GitHub's incentives for this database and the experiences it triggers are fundamentally different from other vendors'. We aren't selling its contents, so we don't have an incentive to inflate it. Open source maintainers are at the heart of our platform, and we really don't want low-quality advisories going out about their software. And developers are our core customers, and we want to deliver experiences they love above all else. That difference in incentives will likely manifest in lots of little differences, but at a high level, we're aligned on wanting to reduce the alert fatigue.
Sorry we dropped the ball on this for the last couple of years. You're going to see steady improvements from here on.
Thank you, this is awesome to hear. Sadly (to my own detriment) I've gotten slow to investigate the alerts because 90% of them are false positives.
That said, this offering is amazing, and IMHO a huge value add of using GitHub, so even if you left it exactly as it is, it would still be appreciated. I especially appreciate that you support many different languages (on that note, I'd love to see Erlang and Elixir added). An app server that runs an older PHP service got exploited and was mining cryptocurrency. The investigation went way, way faster because I happened to notice the security warning on GitHub. I was able to get it patched pretty quickly thanks to that. Even though updating deps is one of the first things I do, I may never have actually figured out where the vulnerability was without GitHub, so thank you so much!
That’s awesome to hear. And I hear you on Elixir/Erlang. I have personal skin in the game on that one - in my Dependabot days I created the open source Elixir Advisory Database and very much want to transition that to the GitHub Advisory Database (and get alerts working).
Personally, I'd stop vendoring dependencies, stop checking lock files into git, and use version ranges instead. That way people always get the latest CVE fixes when they use the software. Then have good automated testing so that if one of the dependencies breaks something, it gets flagged quickly.
For example, if there is a reported problem in production, with lock files I can check out the same commit and be able to reproduce it (if the provided steps are correct).
Without lock files one or more dependency versions might be higher on my machine than production and then I don’t know if failure to reproduce is because of the steps I’m trying or because the problem doesn’t exist in the updated dependencies.
And then, because not all package maintainers are good about following semantic versioning, the build on the CI server can sometimes break on its own due to dependency updates that aren't backwards compatible.
Version range dependencies seem like a nice solution, but in practice I’ve found them to be a nightmare.
>What can we do to try and reduce "alert fatigue"?
The more you do something the easier it is to do. There is nothing wrong with it no longer feeling like an alert. Patching security vulnerabilities is just a normal part of software development and the easier and more comfortable people are with it the better.
The more you do something the easier it is to do. There is nothing wrong with it no longer feeling like an alert.
That is almost the definition of alert fatigue. The problem is tools presenting minor issues as major ones because they might be a major issue in certain circumstances. Then supposedly major alerts start to feel normal, and when there is an actually major alert nobody has a sense of urgency about it.
I've never used GitHub's version of this, but I've used others, and as someone who only develops internal tools I wish there was a setting for "I mostly trust my authenticated users", which I think would downgrade "possible DoS from a specially crafted regex from an authenticated user."
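For anyone who hasn't looked closely at what these regex DoS findings actually are, here's a small self-contained illustration of catastrophic backtracking (in Java, not tied to any particular advisory). The point is that severity hinges entirely on whether attacker-controlled input ever reaches a pattern like this, which is exactly the context the scanners don't have:

```java
import java.util.regex.Pattern;

public class RedosSketch {
    public static void main(String[] args) {
        // Nested quantifiers: the engine tries exponentially many ways to
        // split the run of 'a's before concluding there is no match
        // (the trailing '!' guarantees failure).
        Pattern evil = Pattern.compile("(a+)+$");

        for (int n = 20; n <= 26; n++) {
            String input = "a".repeat(n) + "!";
            long start = System.nanoTime();
            boolean matched = evil.matcher(input).matches();
            long ms = (System.nanoTime() - start) / 1_000_000;
            // The time taken roughly doubles with each extra character.
            System.out.printf("n=%d matched=%b took %d ms%n", n, matched, ms);
        }
    }
}
```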
>Then supposedly major alerts start to feel normal
Major alerts should feel normal. I should have said that you shouldn't feel alarmed, rather than suggesting it shouldn't be treated as an event. Maybe that doesn't quite capture what I mean, but you get the picture. You should be prepared to handle them. Unfortunately, security defects are to be expected, and it shouldn't be a surprise that they might exist in your system.
>and when there is an actually major alert nobody has a sense of urgency about it.
Why? You should treat all security issues with urgency. You shouldn't have people putting off security updates because they're minor.
Sure, but it's like the boy who cried wolf. If the tool keeps saying things are a bigger issue than they are, then people will stop believing the tool.
See also almost every oil refinery catastrophe. "It's normal for that alarm to go off/to not go off, or for that minor leak to flare up from time to time," and then one day the alert that got ignored or missed is the one that could have prevented deaths.