I'm glad that someone demonstrated Microsoft's FUD and their double-standards for security with a simple bit of code.
Even if Microsoft fixes this particular issue, either they can't fix every possible variation, or the same techniques that make it fixable in Silverlight can be used to fix it in WebGL.
As long as you can send an arbitrary shader (which is essentially a small piece of executable code that the GPU executes) to a GPU, you can overload the GPU. This capability is available in WebGL, Silverlight 5 and Flash.
You don't want to remove this capability, because it's the core reason GPUs can do some things so much faster than a CPU.
You can try to sanitize shaders as much as possible (and I'm sure both the Silverlight and WebGL implementations will do their best), but it's not even theoretically possible to decide the time complexity of arbitrary code; that's the halting problem.
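As a hypothetical illustration (the GLSL and the uniform name here are made up for the sketch), a validator can verify that a shader like this is well-formed, yet it cannot statically bound its runtime, because the effective loop count only arrives at draw time:

```javascript
// A syntactically valid GLSL fragment shader, embedded as a string the
// way a page would pass it to WebGL. A static sanitizer can check its
// structure, but the loop's effective trip count comes from a uniform
// supplied at draw time, so worst-case runtime is undecidable up front.
const fragmentShaderSource = `
  precision mediump float;
  uniform int u_iterations;  // hypothetical uniform, set by the page
  void main() {
    float acc = 0.0;
    for (int i = 0; i < 1000000; i++) {
      if (i >= u_iterations) break;  // bound known only at runtime
      acc += sin(float(i));
    }
    gl_FragColor = vec4(acc, 0.0, 0.0, 1.0);
  }`;

// Nothing in the source distinguishes a cheap shader from one that
// stalls the GPU for an entire frame: only the runtime value decides.
console.log(fragmentShaderSource.includes("u_iterations"));
```

WebGL's GLSL profile does restrict loop forms, but as the sketch shows, syntactic restrictions can only narrow the problem, not decide the cost of every admissible shader.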
Writing code is about features-vs.-risk trade-offs. Microsoft has clearly decided that the risks are acceptable for Silverlight, but somehow the same category of risks is not acceptable for WebGL.
And if you ask me, I agree with WebGL folks and the Silverlight part of Microsoft. The worst that can happen is slowing down your computer. This is an annoyance but also an issue we encounter daily: both my Mac and Windows software, including the OSes, crash and misbehave rather regularly. I accept that because the annoyance that it causes is much less important than my desire to use the software.
It's also an annoyance that can be generated with technologies that are already part of the browser. A web page can exhaust any computer's memory (causing swapping) by sending arbitrarily large images. It's easy to write JavaScript that will eat CPU cycles. It's easy to DoS a browser by creating an arbitrarily complex DOM and updating it frequently enough. All those issues can be mitigated by browser implementors, but none of them can really be fixed. And yet we happily use the internet, and we'll happily keep using it after WebGL is implemented.
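For instance, a few lines of ordinary JavaScript already pin a CPU core; the deadline parameter below exists only to keep this sketch from running forever, while a hostile page would simply omit it:

```javascript
// Busy-loop that eats CPU until a deadline. A hostile page would just
// loop unconditionally; the time limit is here so the sketch terminates.
// Returns the number of iterations performed.
function burnCpu(milliseconds) {
  const deadline = Date.now() + milliseconds;
  let iterations = 0;
  while (Date.now() < deadline) {
    iterations++;  // pure spin: no yields, no awaits
  }
  return iterations;
}

console.log(burnCpu(50) > 0);  // spins for roughly 50 ms on one core
```

Browsers mitigate exactly this with slow-script dialogs and per-tab process kills, which is the same "mitigate, can't fix" posture being discussed for GPU workloads.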
From a security perspective, a shader-based DoS is annoying, but not a huge issue. Arbitrary code execution, potentially in the kernel, is. There are just too many moving pieces, too many un/poorly-tested components, and too little standing between a webpage and the kernel for my liking. While I want WebGL to become huge (I really dig it), from a security standpoint I simply can't support it in its current incarnation, and won't for a long while most likely. That said, these are not inherent design flaws, just realities of the stack it's on.
"Arbitrary code execution, potentially in the kernel".
Luckily, the kernel component of graphics drivers (at least the NVIDIA one) is comparatively small these days, and mainly concerns memory/buffer allocation and resource scheduling. Apart from that, it provides a memory-mapped command queue directly to the GPU. Memory protection is enforced in hardware.
The user-space part of the driver, on the other hand, is the complex beast that handles all the GL rendering commands. It's much more likely that an exploit would happen there (still not good, but heh it sounds less scary and can be controlled with user-space security restrictions).
shaders are not arbitrary code nor are they executed in the kernel. the same goes with API calls. while opening APIs to new parts of a system always exposes new risks, specious reasoning is not sufficient to determine if the risks are manageable and mitigable.
notice that khronos and the webgl implementers have not responded with content-less dismissals of these concerns. they have outlined the exact steps they've taken to secure their API from the limits of the GPU model and their work to prevent buggy GPUs/drivers from executing webgl content at all. If someone can actually demonstrate (or even speculate on a mechanism of action of) an actual attack that isn't addressed by their current systems or (in the case of DoS attacks) their ongoing work with OS and GPU vendors, then we can talk about fundamentally flawed. Until then it's a matter of risk assessment.
I never said that shaders are arbitrary code or that they're executed in the kernel. WebGL in general opens up a huge attack surface, however, much of which is in the kernel. As soon as WebGL hit Webkit, I started testing it for two reasons: 1) I do 3d art, and 2) I work in security. I understand the systems involved quite well, and I'm still very concerned about it.
just to be clear, I certainly did not mean "specious" here as a personal attack. I meant it in the sense that it is easy to talk about buggy drivers -- every one of us has encountered some kind of instability or crash from them -- but security assessments rely on specifics.
If a group is claiming that this kind of system interaction is securable (for some given definition of "securable"), we can evaluate both that claim and if that level of security is sufficient. For instance, we can enumerate the ways that WebGL can interact with buffers; given those, what attacks are possible at those points and how are they being defended against?
Obviously a web forum isn't the best place for this, but it's important to keep in mind.
They closed my bug as "fixed" but didn't give any details. I'll have a look at it again when the next beta / final build of Silverlight 5 comes out. In the WebGL WG we are very confident that this can't be fixed without working with GPU vendors on new robustness features in the drivers.
This seems like an odd way to go about doing an open standard. Don't you usually want to let vendors work on getting things working and then come back and standardize based on best practices?
It would seem like the WG should work with MS and Adobe to get Silverlight and Flash working really well. Get it out in people's hands for a year or two. And then come back and say, "OK, we generally get 3D on the web now. Now let's standardize it."
It really feels like this WG is in a race to hurry to get something out the door as fast as possible.
It seems to me the guy who reported the issue to Microsoft is doing the only thing that he can effectively do to help them.
WebGL is a standard developed in an open way. If someone wants to contribute, including Microsoft, they can send an e-mail to a mailing list and that will reach other people working on WebGL and hopefully a productive conversation would ensue.
Microsoft chose not to do that. Instead they issued a blanket statement to the whole world that WebGL is insecure. They made no effort to improve the security of WebGL and didn't leave any opening for discussion. They just communicated a decision.
Silverlight or IE engineers are not easily reachable and cannot be engaged in an open, technical dialogue the way WebGL folks can.
The only venue that Microsoft provides to give feedback and bug reports is their connect website. This is the venue that Benoit used because it was the only venue available to him, as an individual.
No one, including WebGL engineers, has special powers to engage Microsoft in discussion about their products. No one needs special powers to engage WebGL folks in discussion about WebGL. So you really have your power structure backwards.
Notice also how this bug report is constructive: it shows a specific problem that Microsoft can fix.
Notice how Microsoft's FUD wasn't constructive: they just labeled WebGL insecure using non-specific (therefore non-fixable) arguments.
> The only venue that Microsoft provides to give feedback and bug reports is their connect website. This is the venue that Benoit used because it was the only venue available to him, as an individual.
This makes sense when you think about it. They likely get the crash dumps from Mozilla's beta testers too. They definitely have a better handle on determining their exploitability.
Nobody has more experience with video driver bugs than Microsoft.
Nitpicking. Either way, using a private channel to report what Microsoft claims is a security bug would usually be the more stand-up way to do it, but in this specific case, given Microsoft's FUD designed to kill WebGL (with the consequence of Flash and Silverlight having the upper hand wrt. 3D graphics in the browser), doing it publicly was the right call. It nicely shows Microsoft's double standards, which is important to the overall discussion.
Google and Mozilla would not consider it "nitpicking" if I posted a security flaw to a public bug tracking site, then claimed "that's the only place to send them". All three of those vendors went out of their way to establish and communicate the method needed to safely publish security flaws in their products.
The appearance of this report is that the reporter ignored that process out of spite. Is that the message the Web GL people are intending to send? I doubt it.
Moreover, who's being punished here? Microsoft? Actions like this score PR points for Microsoft. It's users who pay the price of casual disclosure. I know that because that's what Google effectively says with their disclosure policy, and what Mozilla says with theirs.
I don't get it; this software (Silverlight 5) is still in development. It's not meant to be used in production. If there is a flaw it should be reported. I too would have thought Microsoft Connect was the right place to report it. I don't see why it has to be hidden. ContextIS released their Firefox image-stealing bug in public and it was quickly fixed by Mozilla within a week. I think this worked pretty well as far as security releases go. And this was for software already released to the public.
You too would have been wrong. Here's what Mozilla says about the same issue:
IMPORTANT: Anyone who believes they have found a Mozilla-related security vulnerability can and should report it by sending email to the address security@mozilla.org. For more information read the rest of this document.
Here's what Google says about it:
If you believe you have discovered a vulnerability in a Google product or have a security incident to report, email security@google.com.
Here's what Apple says about it:
To report security issues that affect Apple products, please contact: product-security@apple.com
Here's what Cisco says (they even provide a toll-free phone number!):
Individuals or organizations that are experiencing a product security issue are strongly encouraged to contact the Cisco PSIRT. Cisco welcomes reports from independent researchers, industry organizations, other vendors, customers, and any other sources concerned with product or network security. Please contact the Cisco PSIRT directly using one of the following methods
You can Google for virtually any major vendor, in the form [report XXX security vulnerability], and get instructions on how to report flaws to them.
Posting flaws to public bug tracking systems is just about the worst conceivable way to do it. Public bug trackers are not always (or even usually) monitored by product security teams. As a result, for many vendors, you can find vulnerabilities in their bug trackers that they don't even know about, and nobody else does either, because they were reported to a black hole. You're actually better off writing an angry blog post than putting it in the public bug tracker.
I've reported a bug or multiple bugs to all of those guys, and for the record, Microsoft has been by far the most proactive and aggressive at wanting to get the details to their researchers the fastest.
OK, I'll grant you that for security issues, trying to contact them privately is probably their preference. But in this case we're talking about a bug that causes a machine crash. I don't know, does that even constitute a security flaw? This is more a major software flaw than a security bug. If all bugs that make browsers crash were sent to that security-report email, they would be overwhelmed quickly. If it could be used to exploit the machine I'd be with you and would suggest that the bug be reported through the vendor's security channel.
The proof-of-concept is really just painting 10,000 rectangles the size of the window. It's that stupid. It has nothing to do with shaders or anything fancy. As long as you allow painting many large triangles as a single GPU command, you have the vulnerability. If you don't allow that, then you're not fast.
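Assuming the PoC works roughly as described (the function name here is made up), the geometry side really is that small: two triangles covering the viewport, repeated 10,000 times in one vertex buffer, submitted as a single draw call:

```javascript
// Build one vertex buffer holding `count` full-screen quads (two
// triangles each, in clip-space coordinates, 2 floats per vertex),
// ready to submit to the GPU in a single draw call. That is the whole
// trick: one command, enormous fill-rate cost.
function buildFullscreenQuads(count) {
  const quad = [
    -1, -1,   1, -1,   1,  1,   // first triangle
    -1, -1,   1,  1,  -1,  1,   // second triangle
  ];
  const vertices = new Float32Array(quad.length * count);
  for (let i = 0; i < count; i++) {
    vertices.set(quad, i * quad.length);
  }
  return vertices;
}

const vertices = buildFullscreenQuads(10000);
console.log(vertices.length); // 10000 quads * 6 vertices * 2 coords = 120000
```

Because the entire workload is one command, the API-level sanitization that validates buffers and draw parameters would pass it without complaint; the cost only materializes on the GPU.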
Everybody has known forever about that vulnerability in 3D APIs. So there was not much of a point using a private reporting mechanism. However, Microsoft took this well-known universal vulnerability and presented it as something specific to WebGL. There was no point in replying privately to that.
Sure. Plenty of people also deliberately don't use "official" channels. Look at the Metasploit people, who I respect a lot. I'm not saying it has to be done that way. I'm saying that the guy who says "it got posted to the public bug tracker because there's no other place for an individual to send security flaws" is wrong. Doesn't know what he's talking about.
There's also nothing wrong with that. Why should everyone need to know the ins and outs of vulnerability research? But probably he should dial back the stridency.
The repro was a crash, which by itself is not an exploit, and it was reported against beta software that is not deployed widely. There is no end-user-installable Silverlight 5 plugin, and in fact a developer would be breaking the Silverlight 5 license if he put a Silverlight 5 app on a publicly accessible web page (http://drc.ideablade.com/xwiki/bin/view/Documentation/code-s..., see "go live" terms).
At the risk of generalizing a bit, what bothers me about your comments specifically, and security people in general, is that once something gets the "security issue" label it apparently becomes a black-and-white issue (and I apologize to security people who don't do that).
WebGL shouldn't be implemented. Reporting a non-exploit crash in software that is not available except to developers, who don't run other people's code, gets put (implicitly) into the same bucket as a 0-day exploit for widely deployed software.
The severity of the problem exposed is nowhere close to what security disclosure protocols are designed for, i.e. it's not an exploit.
But you're content with labeling it a "security flaw" and not doing any further analysis of severity or impact, and your condemnation of that particular bug report is based on this binary mislabeling.
You wrote "The only venue that Microsoft provides to give feedback and bug reports is their connect website." That was simply, overtly, directly, incontrovertibly false.
Now, because you are a message board geek, instead of saying "oh, interesting, I didn't know that, thanks for letting me know", you've given me 6 grafs of random stuff about security people, black-and-white, you-didn't-even-read-the-report, 0-day-not-crash-whatever. I don't care. You were wrong, that wasn't the way to report a security issue. It's either a security issue --- which your original argument depends on it being --- or it's not. If it's a security issue, posting it on a public bug tracking server was the wrong call.
Glad to clear that up for you. Feel free to take the last word.
> It seems to me the guy who reported the issue to Microsoft is doing the only thing that he can effectively do to help them.
My complaint isn't with the bug report per se. I think filing bugs is a very valid mechanism for handling defects.
> WebGL is a standard developed in an open way. If someone wants to contribute, including Microsoft, they can send an e-mail to a mailing list and that will reach other people working on WebGL and hopefully a productive conversation would ensue.
My point is that open standards shouldn't be blazing new trails with somewhat unknown security surfaces (etc). Open standards should take what works and say, "This is now the accepted way to do it, we have good evidence that this way works best in practice."
> They made no effort to improve the security of WebGL and didn't leave any opening for the discussion. They just communicated a decision.
They just communicated their decision. Just as Apple said they wouldn't support Flash. In fact MS went into a fair bit more detail about why they wouldn't. Honestly, I think that was a mistake. They should simply have said they wouldn't support WebGL, and left it at that.
But my broader point is that I don't like standards bodies trying to bully organizations into supporting a nascent standard that AFAICT hasn't really been vetted or sufficiently thought through yet.
The C++ standards committee ran into this last time with "export". They pushed in a feature no one had used in practice and that hadn't really been vetted by thousands of hours of testing. The end result: a dead feature, and lots of wasted person-hours.
Take the time, do it right, let the vendors push forward first, and learn.
> My point is that open standards shouldn't be blazing new trails with somewhat unknown security surfaces (etc). Open standards should take what works and say, "This is now the accepted way to do it, we have good evidence that this way works best in practice."
I was going to rebut this with JavaScript and the "canvas" element as examples, but I did some research and that's about right.
Perhaps there's a push for early standardization because non-IE browsers have a lot to gain from being cross compatible?
A bunch of programmers involved in designing WebGL bullying Microsoft. Really?
There are 2 issues that Microsoft raised:
1. Sending arbitrary GPU shaders can slow down GPUs.
2. Using standard, public, official graphics APIs provided by Windows might crash buggy drivers.
Re 1: Adobe does that in Flash, Microsoft is planning to do that in Silverlight 5. WebGL isn't proposing anything that Microsoft isn't doing already. If you think WebGL is not thought out properly in that regard, neither is stuff that Microsoft is doing.
Re 2: I'm really dumbfounded how Microsoft can even make this argument with a straight face. They've spent the last 15 years designing and improving DirectX but they don't really want you to use those APIs? WebGL folks can't fix the drivers, Microsoft and Intel and ATI and Nvidia can and should, regardless of WebGL.
Finally, how do you propose that we vet the standard? What criteria do you propose to decide that WebGL standard is good enough?
Your position (that WebGL is not good enough) cannot be disproved because you don't provide any objective criteria to decide what is "good enough". It's non-constructive hand-waving.
On the other hand, WebGL is based on OpenGL, an API that has been vetted in the past 20 years, has proven to work and as far as I can tell, WebGL folks are highly competent people, both in 3D and security space.
> A bunch of programmers involved in designing WebGL bullying Microsoft. Really?
It's not just a bunch of programmers. It is a working group intended to represent the recommendation for an important standard. Or at least I thought so. Maybe not?
> Your position (that WebGL is not good enough)
I never said that.
> Finally, how do you propose that we vet the standard? What criteria do you propose to decide that WebGL standard is good enough?
Standards should be based on techniques and technologies that are fairly well known. While there will always be debates about what goes in a standard, the arguments should almost always be able to point to existing implementations of a given technology. Especially in areas where there might be controversy or concern.
What I do NOT like is a standard body criticizing an organization for not picking up a standard, if the standard hasn't really been vetted or tested yet.
If you want to push a non-vetted technology, go ahead, and let Mozilla and Chrome pick it up, but don't criticize those who don't at the outset. In three years, once the kinks have been ironed out, then I think you can make the claim that the standard is solid -- everyone on board -- but at this point I think it's not fair to push anyone to implement it.
why would you wait for microsoft or adobe to implement it? what you describe is exactly what has been happening for two years now, just with multiple browser makers working on independent experimental implementations.
It just renders 10,000 rectangles the size of the browser window, which by itself is enough to DoS; to make it a bit worse, it also uses a large texture and addresses it with very little respect for memory locality ;-)
Hehe... made me smile... but isn't this just Mozilla people winding up Microsoft? It makes a pretty big point of "just like WebGL". I doubt this will make Microsoft say "oh look, they were right about WebGL, we'd better implement it now". Everyone knows Microsoft wants to promote DirectX over OpenGL, and the legitimate-but-not-unsolvable security issues in WebGL serve as a nice excuse.
This is more than just a wind-up: it shows that Microsoft can't attack WebGL on one hand and push Silverlight w/shaders on the other. This puts pressure on them to come up with a consistent message.
Did any of the Microsoft-bashers here even read the article? Microsoft responds in the comments and says the issue is already fixed in internal builds and will ship with Silverlight 5 RTM.