What would they gain by open-sourcing it? They have no incentive to get it to run on other platforms, as it implements an open standard already in widespread use. In fact, they have an incentive to provide a better browser on their platform only (not saying they are managing to do it, but the incentive is there).
As for why they aren't using Blink/WebKit: I'm guessing they have a huge amount of existing work based on IE and related tech, and integrating LGPL code into Windows is probably a big no-no.
The fact that they're actively developing it and trying to best the open-source browsers in performance helps competition, which is a good thing. Microsoft has actually continued to improve IE a lot since 9 in areas like web standards support, which helps drive the web forward, particularly for technology-noobish people who don't install 3rd-party browsers.
StatCounter's stats are very similar to Wikimedia's though. Net Applications is the outlier.
I've only seen IE in the #1 position in the stats of corporate extranet applications.
For all the regular websites it's generally Chrome and IE is either in 2nd or 3rd place. Firefox is kinda popular here, you see.
"Unreliable" is a pretty odd word choice, by the way. Global stats are always different from your own stats. By catering to a particular demographic you're inevitably introducing a massive bias.
Naturally, the only thing that matters is your own stats for a particular website. If it's 95% IE8, then that's how things are. IE8 has to work perfectly. There is no way around it.
Yes, Intel has been doing amazing work here, both on the spec and on writing code for multiple browsers. Really helping to push the entire industry forward on SIMD for the web.
Why are SIMD instructions fixed-number-of-lanes? It seems to me that something like a semi-generic MAP_START(start, number, stride) <bunch of instructions> MAP_END (where it's explicitly executing the instructions in an undefined order between lanes) would be easier to deal with, and easier to upgrade down the line too.
For the same reason that CPUs don't operate on bignums. ALUs are fixed width, and you don't want to microcode every last instruction.
That said, the original vector processors from the '70s and '80s were kind of like what you describe. Not coincidentally, they died off with the RISC movement.
In part because if <bunch of instructions> is arbitrary JS, it may or may not be able to run in SIMD mode, so you get surprising performance cliffs because your code sometimes runs in scalar mode.
And in part because there's more to "short SIMD" programming than just charging through large arrays. Sometimes you need to swizzle elements around, for example, and this is easier if you can think about multiple lanes at a time.
Not arbitrary JS. Semi-arbitrary machine code. The first few CPUs would restrict it to a couple instructions, with later generations potentially adding instructions.
And as for your second point: don't think of it as a standard map. I should have elaborated. Think of it more as a for loop with an index variable and a temporary variable per iteration. You can read and write things outside of that index too; the only restriction is that the iterations of the loop will execute in an arbitrary order, including simultaneously.
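A scalar sketch of those semantics (the name mapRegion is hypothetical, not part of any spec): the body runs once per index, and the only guarantee the proposal would give is that every iteration runs, in no particular order.

```javascript
// Hypothetical sketch of the proposed MAP_START/MAP_END semantics:
// run `body` once per lane index. The proposal would let the hardware
// execute these iterations in any order, possibly simultaneously;
// this sketch just happens to run them sequentially.
function mapRegion(start, number, stride, body) {
  for (let i = 0; i < number; i++) {
    body(start + i * stride); // order unspecified in the proposal
  }
}

// Example: element-wise addition of two arrays via the construct.
const xs = [1, 2, 3, 4], ys = [5, 6, 7, 8], out = [];
mapRegion(0, 4, 1, (i) => { out[i] = xs[i] + ys[i]; });
```

Because each iteration only touches its own index, reordering them (or running them in parallel) cannot change the result, which is exactly the property the construct relies on.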
The JS SIMD effort is aimed at finer-grained, hand-written but highly reliable and predictable SIMD, as compared to the large-scale, convenient but kinda magic approach of RiverTrail.
I think you missed the start of my first comment: namely, I was not talking about JS. I was talking about SIMD at the instruction-set level in general.
Adding fuel to your fire: hardware will often be implemented with multiple SIMD compute units (transistors are cheap!), and pipeline multiple SIMD calls. With instruction sets like AVX, the length is increasing.
SIMD is non-controversial: it's just like another ALU instruction but maps the inputs to the outputs differently. Doing something more exotic, like VLIW, is an uphill battle. However, as seen with GPUs, that can be fruitful.
That has been tried multiple times by multiple companies in the past. Most recently (for CPUs, to my knowledge) by ARM in the armv6-era VFP ISA, which was deprecated and replaced by the "normal SIMD" (fixed width) NEON instructions in armv7.
It's a good enough idea that people keep trying to make it work. But it's never been really successful, either.
The major advantage of the "almost fixed"(1) width is the ability to tightly control operations "in both dimensions" (that is, do it directly in the registers), which actually matters in real-life algorithms.
1) It's not strictly fixed, as processors actually get wider vector instruction sets over time: SSE is 128 bits, but AVX is 256 bits.
Still, having a fixed width lets you optimize existing code better than an abstraction without a fixed width would. See the shuffle operations and the like, which operate on parts of the wide registers and which are actually needed in real-life optimizations to avoid expensive memory accesses and operate on registers alone.
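For instance, a four-lane swizzle reorders elements without touching memory; here is a scalar sketch (swizzle4 is a hypothetical helper mirroring the shuffle/swizzle operations in the SIMD.js proposal):

```javascript
// Scalar sketch of a 4-lane swizzle: pick lanes of `v` by index.
// On hardware this is a single in-register instruction (e.g. SSE's
// PSHUFD), with no memory round-trip; that is only possible because
// the lane count is fixed and the lane indices are compile-time data.
function swizzle4(v, i0, i1, i2, i3) {
  return [v[i0], v[i1], v[i2], v[i3]];
}

const reversed = swizzle4([10, 20, 30, 40], 3, 2, 1, 0); // [40, 30, 20, 10]
```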
I think you're asking about the distinction between SIMT (supported by GPUs/compute coprocessors), and SIMD. In SIMT, you want to issue a "bunch of the same work on a crapload of pseudo-threads"—in the most hardware-efficient manner possible (vectorized).
SIMD is different—it is all about the interactions between lanes of data.
As a bit of a tautology: the vector unit in x86 parts is SIMD, because it was designed for SIMD workloads.
Angular 2 exists in 3 forms: TypeScript, Dart and ES5. And a lot of ideas came to ng2 from AngularDart.
> What is the business value to use Dart instead of any other language that transpiles into JavaScript?
It's a question of preference. Dart gives great tooling (IDE, debugging, testing) and a more advanced language, but the Dart universe is much smaller, so there are pros and cons, as always. There is no one shoe size that fits everyone.
My understanding is that Dart is being used directly, just not in any mainstream browsers. I believe Google Fiber uses Dart for their set-top boxes, but there is also use internally at the company (i.e. intranet).
I expect that every browser implicitly uses SIMD by way of the compiler identifying and inserting the correct instructions (MMX, AVX2, ...). What exactly is Intel's contribution here? Did they manually vectorize some tight loop?
They list Intel's contribution extensively in the post. I'm not sure why you'd assume they are trying to deceive anyone.
> Intel has been contributing to Chakra, the JavaScript engine for Microsoft Edge
> Some examples of Intel’s direct contributions to Chakra’s JIT compiler include better instruction selection and scheduling.
> [they] also helped us identify opportunities of missed redundant instruction elimination, and inline overhead reduction.
> Intel engineers are collaborating with us closely to implement Single Instruction Multiple Data (SIMD) [1], a future ECMAScript proposal, in Chakra
> Intel recently contributed an optimization to improve navigation (load) time for pages containing several inline elements, optimizations to reduce DOM parse times for text-area elements, and participated in investigations and root cause analysis to improve page load/response times
The "implicit" use that you believe exists isn't actually SIMD. The JITs in browsers can implicitly use the SSE instruction set as such, but not in the efficient SIMD way (specifically under the constraints of JIT-ing reasonably fast and not necessarily knowing the types). What is being developed here is support for the explicit use of SIMD instructions by the JavaScript programmer:
var a = SIMD.float32x4(1, 2, 3, 4);
var b = SIMD.float32x4(5, 6, 7, 8);
var c = SIMD.float32x4.add(a, b);
The benefit of such operations is that they are fully typed and map directly to machine instructions. And of course, you don't have to write such code unless you're actually programming something computation-heavy.
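To make the semantics concrete, here is a scalar polyfill sketch of the three lines above (an illustration only, not the spec's implementation; a real engine compiles the add down to a single SSE ADDPS or NEON instruction operating on all four lanes at once):

```javascript
// Minimal scalar polyfill sketch of the SIMD.js float32x4 API used above.
const SIMD = {
  float32x4: Object.assign(
    // Construct four float32 lanes.
    (x, y, z, w) => Float32Array.of(x, y, z, w),
    // Lane-wise addition; one hardware instruction in a real engine.
    { add: (a, b) => Float32Array.of(
        a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]) }
  )
};

var a = SIMD.float32x4(1, 2, 3, 4);
var b = SIMD.float32x4(5, 6, 7, 8);
var c = SIMD.float32x4.add(a, b); // lanes 6, 8, 10, 12
```

Note the fixed types: every lane is a float32, which is what lets the JIT skip the type checks and deoptimization guards that ordinary JS arithmetic needs.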
It is a Google/Mozilla/Intel collaboration, started in 2013: the proposal is already signed by people from Google, Intel and Mozilla, and Intel obviously works actively with the browser vendors.
"This paper introduces an explicit SIMD programming model for Dart and JavaScript, we show that it can be compiled to efficient x86/SSE or ARM/Neon code by both Dart and JavaScript virtual machines achieving a 300%-600% speed increase across a variety of benchmarks. The result of this work is that more sophisticated and performant applications can be built to run in web browsers. The ideas introduced in this paper can also be used in other dynamically typed scripting languages to provide a similarly performant interface to SIMD co-processors."
What I don't know is whether Intel keeps some patents or some sources. That could explain why Intel must be involved. But maybe they simply like to be involved.
This is for Chakra, IE's JS JIT. Of course the browser's C++ code is likely compiled with SIMD support, but that doesn't help the codegen in the JIT compiler for JS.
This could be a breath of fresh air into the browser game. Since Opera has effectively stopped innovating, there's nothing Chrome and Mozilla can rip off.
The asm.js feature described in the article, a precondition for SIMD support to matter, was actually invented at Mozilla. As far as I know they were also the first with SIMD in the browser, helped by Intel.
All of these interesting achievements are made somewhat ho-hum by the fact that there's no way to see what happened.