Many commenters here seem to be completely misunderstanding the situation.
Browser extensions are really dangerous; if you need to keep your machine secure, you shouldn't use any, IMHO. By definition, browser extensions need to be able to access things such as page content. What would stop someone from writing an extension that captures your bank credentials? Nothing.
Obviously no security-conscious user is going to install a bank credential stealing extension. But what about bugs in extensions? If a buggy extension can be made to execute arbitrary code, it is as dangerous as a malicious extension (if the arbitrary code execution works in the same circumstances).
Angular 1.x basically runs eval on DOM content. That's how it works, it's not a vulnerability in normal use. You make a web page using Angular, and possibly the user has a way to eval arbitrary JS code through Angular, but then they have the developer console so they can run arbitrary code anyway.
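To illustrate what "runs eval on DOM content" means in practice, here is a toy sketch (this is not Angular's actual expression parser, just the morally equivalent pattern): text found in markup is compiled into a function and evaluated against a scope object.

```javascript
// Toy illustration only -- AngularJS 1.x has its own expression parser,
// but the effect is similar: text found in the DOM becomes executable code.
function evaluateExpression(expr, scope) {
  // Compile the expression text into a function evaluated against `scope`.
  // Anything an attacker can get into `expr` runs as code.
  return new Function('scope', `with (scope) { return (${expr}); }`)(scope);
}

// Normal use: the page author wrote the template, so this is fine.
const result = evaluateExpression('user.name + "!"', { user: { name: 'Ada' } });
// → "Ada!"
```

The point is that this is by design, and harmless as long as the expression text comes from the page's own author.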
With browser extensions it's different. The extension is from one source and runs with one set of privileges, and the page comes from someone else and has fewer privileges. Now if anything from the page can be eval'd in the extension, that's privilege escalation: someone creating a site can run malicious content as a browser extension.
It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
Angular wasn't designed for browser extensions.
WRT the security researcher and Mozilla not disclosing other known sandbox vulnerabilities, that's missing the point (but an interesting discussion in itself).
> It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
AngularJS runs expressions that are in your page's HTML when it initializes (or when you explicitly call angular.bootstrap on an element), but only on the page where it is loaded. If an extension uses Angular within the extension, that's perfectly fine, security-wise. Unless the developer explicitly requests it, pages visited cannot move code into an extension's context (which would be dangerous in any case, Angular or not).
Even when the developer explicitly moves data from a web page into the extension context, this does not cause a security issue, even when the data ends up in the DOM. Once loaded, Angular does not revisit the DOM to run expressions in it (otherwise all Angular pages would be compromised). Care has to be taken when using things like ng-bind-html, but the security profile for an extension is the same as a regular web page here. With Angular's $sce service and automatic escaping/sanitizing, it's actually reasonably easy to write safe web applications that properly escape user input.
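The kind of contextual escaping that interpolation applies by default can be sketched like this (a simplified stand-in for illustration, not the actual $sce/$sanitize code):

```javascript
// Simplified stand-in for the default escaping applied when interpolating
// untrusted values into markup -- NOT Angular's actual implementation.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Untrusted input ends up inert in the DOM:
escapeHtml('<img src=x onerror=alert(1)>');
// → "&lt;img src=x onerror=alert(1)&gt;"
```

Escaping by default is what makes accidental injection the exception rather than the rule; ng-bind-html is the explicit opt-out that needs care.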
All of this is unrelated to the expression sandbox. The sandbox was never intended to be a security feature, but rather a feature to keep developers from shooting themselves in the foot (e.g. by creating global variables). It was considered a defense-in-depth mechanism for a while, but it turns out it is at best misleading for users who believe it protects them. That is why we removed it.
Thanks for the reply. I mistook the sandbox to be related to $sce. It's been a while since I last coded Angular.
So you're saying that Mozilla is mistaken in their decision, and the only way for page content to be eval'd with extension privileges is if the developer was careless with ng-bind-html or $compile?
Yes, Angular itself is fine, and there's no problem with escaping or eval'ing per se.
However, there is a corner case in which Angular being present in an extension might weaken some security measures. It requires multiple issues to happen together, including the victim page being vulnerable in the first place. I'm actually not sure if that is the issue Mozilla was thinking about, but it is a problem. We will put some defense in depth into Angular to mitigate this, but I believe it's a general issue with how extensions are handled, not limited to Angular.
Sorry for being a bit vague, but no patch has been released yet.
> The sandbox was never intended to be a security feature, but rather a feature to keep developers from shooting themselves in the foot (e.g. by creating global variables). It was considered a defense-in-depth mechanism for a while, but it turns out it is at best misleading for users who believe it protects them. That is why we removed it.
For apps using Angular 1.5, do any changes need to be made to move up to Angular 1.6 without the sandbox? Does Angular lose any features with the removal of the sandbox?
> By definition, browser extensions need to be able to access things such as page content. What would stop someone from writing an extension that captures your bank credentials? Nothing.
Completely agreed. This is why it's so frustrating that all of the browser vendors have moved to this "gut every minor option/feature possible, people can just get an extension" attitude.
For Firefox:
* removing the option to not maintain download history
* removing the option for the compact drop-down menu from the URL bar
* forcing tabs on top
* forcing refresh button to the right-hand side of the interface
For Chrome:
* removing backspace navigation (you may dislike it, but others don't)
* disabling middle-click to scroll on Linux
* removing the option to set your new tab page (eg to about:blank)
* not letting you prevent HTML5 video autoplay
* not letting you disable WebRTC
Just the backspace extension alone requires basically carte blanche access to everything just to be able to insert a tiny JavaScript function to catch the keypress.
I'm not asking for us to go back to the Mozilla suite with integrated mail client, news reader, etc. Just ... it's okay to have an "advanced options" section that lets us control some of this really simple, really basic stuff. Not only okay, but a major security benefit. With all the focus on web security, you'd think they'd take this stuff more seriously.
They could at least make it possible to choose not to upgrade a very simple extension which we may have personally reviewed.
I want a fixed (read: no updates) extension that does this: on key press, check whether the key is backspace; if it is, check whether any form element is focused; if not, go back. One line of JS, I guess three with nice formatting. If I create this, I need to submit it to the Chrome Web Store so Chrome won't complain about untrusted extensions.
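For concreteness, the behaviour described above can be sketched roughly like this (a hypothetical content script, not any particular published extension; the decision logic is split into a plain helper so it's visible):

```javascript
// Decide whether a Backspace press should trigger back-navigation:
// only when no editable element has focus.
function shouldGoBack(key, focusedTag, isContentEditable) {
  if (key !== 'Backspace') return false;
  const editable = ['INPUT', 'TEXTAREA', 'SELECT'];
  return !editable.includes(focusedTag) && !isContentEditable;
}

// Wiring (only runs in a browser/content-script context):
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', (e) => {
    const t = e.target;
    if (shouldGoBack(e.key, t.tagName, t.isContentEditable)) {
      e.preventDefault();
      history.back();
    }
  });
}
```

Close to one line of logic, as claimed, yet it still needs to listen to keystrokes on every page.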
Of course they also removed the ability to tell it that I know what I'm doing.
I agree but unfortunately it's not a very popular design pattern these days.
Because of this I basically have no applications installed on my Android smartphone, since even trivial applications often end up requiring ridiculous amounts of privileges (often for relatively minor features), and of course there's no way to fine-tune what you allow and what you don't.
Honestly, I think that teaches your users a terrible habit: just ignore the privilege list, since there's nothing you can do about it, and click "sure, whatever".
Devs should have to justify why the app needs the feature and I should be able to disable it if it's not critical for the application to work correctly. It would make it a bit harder to write and test those apps but it's not like it's rocket science either...
Sidenote: that is pretty much exactly what Google's (alas, auto-updated) "Go Back With Backspace" extension [0] does: [1] (you forgot shift+backspace; Google did as well in past versions).
> It's probably possible to sanitize all external inputs used in the browser extension such that privilege escalation isn't possible, but the Angular team has tried hard with their sandbox solution with no success. Extension developers will hardly do much better, so it makes sense for Mozilla to ban the whole library.
Sandboxing in JS should be possible these days. Spin up an iframe, add the sandbox attribute, load JavaScript into it, postMessage the code you want to execute to it, await the return value. Voilà, you've executed untrusted code in an isolated origin context.
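A minimal sketch of that pattern (assumes a browser context; message validation and error handling are omitted, and the helper names are mine):

```javascript
// Build the document for a sandboxed iframe that evaluates code it is
// sent and posts the result back. Plain string construction, no DOM yet.
function buildSandboxSrcdoc() {
  return `<script>
    window.addEventListener('message', (e) => {
      let result;
      try { result = Function('"use strict"; return (' + e.data.code + ')')(); }
      catch (err) { result = String(err); }
      e.source.postMessage({ id: e.data.id, result }, '*');
    });
  <\/script>`;
}

// Run untrusted code in an origin-isolated iframe (browser only).
function runSandboxed(code) {
  return new Promise((resolve) => {
    const iframe = document.createElement('iframe');
    iframe.sandbox = 'allow-scripts'; // no allow-same-origin: opaque origin
    iframe.srcdoc = buildSandboxSrcdoc();
    iframe.style.display = 'none';
    window.addEventListener('message', function handler(e) {
      if (e.source === iframe.contentWindow) {
        window.removeEventListener('message', handler);
        iframe.remove();
        resolve(e.data.result);
      }
    });
    document.body.appendChild(iframe);
    iframe.onload = () => iframe.contentWindow.postMessage({ id: 1, code }, '*');
  });
}
```

Because the iframe has `allow-scripts` but not `allow-same-origin`, the code runs with an opaque origin and can only talk back through postMessage.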
You can't run code that depends on variables in the page context though. If all the input values are serializable, then you can postMessage them into the iframe too, but you can't serialize objects with arbitrary methods, etc. The code you run in the sandbox can't return back a rich object with arbitrary methods because that has to get serialized back out. You can't use getters and setters to transparently proxy all accesses because postMessage is asynchronous. Even if you restrict everything to only dealing with objects with promise-returning functions, I'm not entirely sure if you can get this all to play nicely with garbage collection (say you have an object outside of the iframe which is only referenced -- through the proxy system -- by an object inside the iframe which is only referenced by that object outside of the iframe)...
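The serialization boundary is easy to demonstrate with structuredClone (available in modern browsers and Node 17+), which applies the same structured-clone rules postMessage does:

```javascript
// postMessage serializes via the structured clone algorithm;
// structuredClone() (browsers, Node >= 17) exposes the same rules directly.
const plainData = { n: 1, list: [2, 3] };
const plainCopy = structuredClone(plainData);  // plain data round-trips fine

let cloneFailed = false;
try {
  // Objects carrying functions/methods cannot cross the boundary:
  structuredClone({ method() { return 42; } });
} catch (e) {
  cloneFailed = true;  // DataCloneError
}
```

So anything with behaviour attached has to be re-expressed as messages, which is exactly the retrofitting cost described above.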
Iframe sandboxing is far from a drop-in solution for retrofitting this type of problem into an existing codebase.
Access to arbitrary methods would go against the idea of sandboxing anyway. If something references the global object for example you suddenly would have access to a privileged fetch API or other extension APIs.
Maybe if you use a WeakMap with a transferable object, and assuming that if you postMessage a transferable object and then get it back later, the WeakMap would still recognize it as a key. I'm not very sure that last part would work. I don't think GC links reach through iframes in a way that would enable that.
WeakMaps are more limited than most people seem to expect. They're powerful tools that enable many new powerful patterns, but they don't expose the workings of the garbage collector. They're very different from Java's WeakHashMap for example. In Javascript, the only way you can tell whether the browser is using a garbage collector or not is to try to run out of memory. Without crashing, there's no way for some Javascript code to observe the runtime's GC behavior at all by design.
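WeakMap's deliberately small surface makes the point concrete:

```javascript
// A WeakMap holds its keys weakly but deliberately exposes nothing
// about the garbage collector: no size, no iteration, no key listing.
const wm = new WeakMap();
const key = { id: 1 };
wm.set(key, 'value');

const hasSize = 'size' in WeakMap.prototype;                   // false
const isIterable = typeof wm[Symbol.iterator] === 'function';  // false
// The only operations are get/set/has/delete on a key you already hold,
// so collection of an unreferenced key is never observable from script.
```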
Object identity is not preserved with Transferring or regular structured cloning. Transferring is really about the underlying resource held by the object being transferred.
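This is easy to check: a structured clone is a brand-new object, so a WeakMap keyed on the original will not recognize what comes out the other side (structuredClone, in browsers and Node 17+, uses the same algorithm as postMessage):

```javascript
// Structured cloning (what postMessage does) creates a new object;
// identity is not preserved, so a WeakMap keyed on the original
// will not recognize the clone.
const original = { token: 'abc' };
const registry = new WeakMap();
registry.set(original, 'privileged');

const clone = structuredClone(original);   // same data, different identity
const sameIdentity = clone === original;   // false
const recognized = registry.has(clone);    // false
```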
If object identity (for the purpose of WeakMap) was preserved, your idea would provide a way to observe GC behavior. Which wouldn't automatically be a complete disaster, though I am personally dead set against it except maybe in some sort of privileged context. Weak references keep getting proposed, and there are some pretty compelling use cases, but there is also some very major risk that we could never back out of once weakrefs were available, and they could permanently hold back future GC performance. (You could easily break deployed web applications by improving GC behavior.)
(Source: I implemented Transferring in general and Transferring ArrayBuffers in particular in Spidermonkey, and I work on the GC engine. And it's nice to see a comment on HN like the parent that gets it right for once!)
But the issue of preventing arbitrary JavaScript code from running based on user input isn't limited to Angular; it's been a problem since the beginning of time!
What about Angular is so special that it needs to be blacklisted? It will likely still be safer than ad-hoc client-side templating that people will do instead.
> By definition, browser extensions need to be able to access things such as page content.
On Firefox/XUL maybe. Web Extensions (like in Chrome) work much like Android apps: You need to acknowledge their desired permissions up front. They can’t request more later.
Of course you may need these permissions to create your extension.
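For illustration, a hypothetical minimal Chrome-style (WebExtensions) manifest: permissions are declared up front and shown to the user at install time, and the extension cannot silently broaden them later.

```json
{
  "name": "Hypothetical Example Extension",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "activeTab",
    "https://example.com/*"
  ]
}
```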
The vulnerability described by the OP applies to Chrome just the same: once the addon has the `pageCapture` permission, an Angular 1 exploit would still work.
Many extensions have full access to all webpages in Chrome as well; there's no sandbox or anything similar safely separating extension from page. That's not a bug or missing feature: it's by design! Many extensions fiddle with all kinds of page aspects as part of their core functionality.
An ad blocker that can't inspect the page DOM would likely not work very well.
BROWSERS are really dangerous; if you need to keep your machine secure, you shouldn't use any IMHO. By definition, browsers need to be able to access things such as page content. What would stop someone from writing a browser that captures your bank credentials? Nothing.
Obviously no security-conscious user is going to install a bank credential stealing browser. But what about bugs in browsers? If a buggy browser can be made to execute arbitrary code, it is as dangerous as a malicious browser...
At the end, it's a matter of trust in your browser or your extensions.
I see what you're getting at, but with only a handful of browsers* maintained by large organisations eager to protect their reputations vs. a plethora of extensions out there, your argument doesn't hold so well.
* I'm assuming usage of Chrome/IE/Firefox/Safari here.
The quoted paragraph is buildup to the fact that AngularJS evals content on purpose, and does not really even try to be secure against maliciously-crafted DOM. Browsers, on the other hand, are designed to resist attacks.
But yes, certainly you need to trust the browser more than an extension.