
Correct, the overhead is minimal - it basically just makes the float->int conversion use a fixed set of rounding and clamping modes, irrespective of what the current mode flags are set to.

The problem is JS's double->int conversion was effectively defined as "what Wintel does by default", so on ARM, PPC, etc. you need a follow-on branch that checks for the clamping requirements and corrects the result value to what x86 does.

Honestly it would not surprise me if the perf gains are due to removing the branch rather than the instruction itself.
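To make the fixup concrete, here's a rough sketch in JS of the pattern (names hypothetical; real engines emit this as machine code): a saturating hardware-style conversion on the fast path, with a follow-on branch into a slow path that recomputes the spec's modulo-2^32 result.

    // Simulates a saturating double->int32 conversion, roughly what a
    // plain ARM conversion instruction produces (clamps out-of-range values).
    function hardwareConvert(x) {
      if (Number.isNaN(x)) return 0;
      const t = Math.trunc(x);
      if (t > 2147483647) return 2147483647;    // clamp to INT32_MAX
      if (t < -2147483648) return -2147483648;  // clamp to INT32_MIN
      return t;
    }

    function toInt32WithFixup(x) {
      const fast = hardwareConvert(x);
      // Follow-on branch: if the result may have been clamped,
      // recompute the JS-specified modulo-2^32 value on a slow path.
      if (fast === 2147483647 || fast === -2147483648) {
        if (!Number.isFinite(x)) return 0;
        return Number(BigInt.asIntN(32, BigInt(Math.trunc(x))));
      }
      return fast;
    }

A dedicated instruction that produces the modulo-2^32 result directly makes both the check and the slow path unnecessary.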



Not quite. JS is round towards zero, i.e. the same as C. If you look at the x86 instruction set, until SSE2 (when Intel specifically added an instruction for this, CVTTSD2SI) truncation was extremely awkward to achieve; x86's default was always round-to-nearest.

The use of INT_MIN as the overflow value is an x86-ism, however; in C the result of an out-of-range conversion is undefined.
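You can see the truncation direction directly in JS:

    1.9 | 0;   // 1
    -1.9 | 0;  // -1 (truncation toward zero; round-to-nearest would give -2)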


> The problem is JS's double->int conversion was effectively defined as "what Wintel does by default",

No, JavaScript's "double to int conversion" (which only happens implicitly, in bitwise operations such as |, &, etc.) is not like any hardware instruction at all. It is defined as selecting the low-order 32 bits of a floating point number as if it were expanded to its full-width integer representation, dropping any fractional part.
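A rough sketch of that definition in JS itself (illustrative only; the spec operates on exact mathematical values, and this assumes BigInt is available):

    function toInt32(x) {
      if (!Number.isFinite(x)) return 0;            // NaN, +/-Infinity -> 0
      const truncated = BigInt(Math.trunc(x));      // drop the fractional part
      return Number(BigInt.asIntN(32, truncated));  // low 32 bits, as signed
    }

    toInt32(2 ** 32 + 3);  // 3  (wraps modulo 2^32)
    toInt32(2 ** 31);      // -2147483648
    toInt32(-1.9);         // -1
    (2 ** 32 + 3) | 0;     // 3  (same result via a bitwise op)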


Interesting. So it's as much an x86 legacy issue as a JS one, and presumably JS followed x86 because it was more efficient to do so (or maybe just by default).

It also sounds like the performance gains will depend on how often the branch is taken, which seems highly dependent on the values being converted?


> Interesting. So it's as much an x86 legacy issue as a JS one, and presumably JS followed x86 because it was more efficient to do so (or maybe just by default).

Most languages don't start with a spec, so the semantics of a lot of these things get specced later as "uhhhhh, whatever the C compiler did by default on the systems we initially built this on".


Seeing as JavaScript was designed and implemented in two weeks, I'm betting this is the answer.


Today's JavaScript is so divorced, so radically different from the original implementation that it could be considered a different language, though.


Isn’t modern JS backward compatible with 1.1?


Mostly. See https://tc39.es/ecma262/#sec-additions-and-changes-that-intr... for a comprehensive list of backward-incompatible changes in the spec.

Using that list to answer your question is a bit tricky, since it also includes backward-compatibility breaks with newer features. But, e.g.,

> In ECMAScript 2015, ToNumber applied to a String value now recognizes and converts BinaryIntegerLiteral and OctalIntegerLiteral numeric strings. In previous editions such strings were converted to NaN.

and

> In ECMAScript 2015, the Date prototype object is not a Date instance. In previous editions it was a Date instance whose TimeValue was NaN.

sound like backward-incompatible changes to a JS 1.1 behavior.
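The first of those is easy to check in a modern engine:

    Number("0b101");  // 5 in ES2015+; NaN in earlier editions
    Number("0o17");   // 15 in ES2015+; NaN in earlier editions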

Another notable example is the formalization of function-in-block semantics, which broke compatibility with various implementations in order to find a least-bad compromise everyone could interop on. I'm not sure if JS 1.1 even had blocks though, much less functions in blocks...
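To illustrate the kind of code that behaved differently across pre-ES2015 engines (the compromise semantics now live in Annex B.3.3 of the spec):

    function demo() {
      if (true) {
        function f() { return 1; }  // a function declaration in a block
      }
      // Sloppy mode: Annex B hoists a var-like binding, so this is
      // "function" in modern engines. Strict mode keeps f block-scoped,
      // so it would be "undefined". Pre-ES2015 engines disagreed here.
      return typeof f;
    }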


>Another notable example is the formalization of function-in-block semantics, which broke compatibility with various implementations in order to find a least-bad compromise everyone could interop on. I'm not sure if JS 1.1 even had blocks though, much less functions in blocks...

Can you explain what you mean? Did early JS implementations not have functions inheriting block scope?


If they’re referring to lexical block scope then indeed, JS historically did not have block scoping; it was function- and global-scoped.

There are also differences in how eval works, etc.
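A small sketch of the scoping difference (let/const block scoping arrived in ES2015):

    function scopes() {
      if (true) {
        var a = 1;  // var is function-scoped: visible outside the block
        let b = 2;  // let is block-scoped: gone once the block ends
      }
      console.log(a);         // 1
      console.log(typeof b);  // "undefined" -- no binding in this scope
    }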


No. Modern JavaScript engines are compatible with 1.1, but the syntax of modern JavaScript is not.


Modern JS really isn't even compatible with JavaScript. When people talk about "modern JS" it usually includes TypeScript and a bunch of Node.js-isms, none of which are part of the TC39 spec. It's only by proximity (and perversity) that it even gets to be considered JS.


Rounding mode was defined then and hasn't changed since, though.


Your comment made me realize this is true now. Through the '80s it was the other way around: you counted as having a language if there was a spec, even if there was no implementation, but an implementation without a spec was a "toy".


> Through the '80s it was the other way around: you counted as having a language if there was a spec, even if there was no implementation, but an implementation without a spec was a "toy"

Not to my recollection. I don’t recall anyone at uni discussing a C standard until 1989, and even by 2000 few compilers were fully compliant with that C89 spec.

There were so many incompatible dialects of FORTRAN 77 that most code had to be modified at least a bit for a new compiler or hardware platform.

All of the BASIC and Pascal variants were incompatible with each other. They were defined by “what this implementation does” and not a formal specification.


C was specified by K&R in 1978. Pascal had a specification for the core language, and BASIC was largely seen as a toy.


> C was specified by K&R in 1978.

No it wasn’t. K&R was far removed from many C implementations of the time, wasn’t ever written as a formal spec, and had gaping holes of undefined behavior.

An educational textbook isn’t a formal language specification.


C was the K&R book; Pascal was the User Manual and Report. These were the platonic ideals of the languages.

The specification was the language; the fact that there was an implementation was a bonus. I never once in my comments above said "formal", so perhaps we mean two very different things by "specification." No version of C's specification since K&R has done away with undefined or implementation-defined behavior.


> C was the K&R book

How could this be when zero implementations behaved in accordance with K&R?

The authors and fans may have claimed it as a “spec”, but it never was a functional “spec” in the real world.


That's the point. The language spec was the platonic ideal, and implementations were just imperfect realizations. There were actual languages that still have no implementations.


It was 100% due to the default Wintel behavior. All other architectures have to produce the same value for that conversion.


The branch always has to be taken if executing JavaScript, because otherwise how would you tell if the value was correct or not? You'd have to calculate it using this method regardless and then compare!


If you always take a branch, it's not a branch.

Or did you mean by "taken" that the branch instruction has to be executed regardless of whether the branch is taken or not?


JavaScript JITs always emit this instruction when ToInt32 is required, since checking would be more expensive in user code. And the instruction always uses the JS rounding method, since that's cheaper in silicon. I used "branch" since the parent used "branch".


What are you defining as correct?


"correct" in this case being consistent with the JavaScript specification.



