
Personally I find the advantage of using ES2015 over ES5 marginal for most use cases, so I actually went back to writing "traditional" ES5 JS with require.js imports and only use a JSX->JS transpiler for the React part of my code. This helps me stay "closer" to my code and reduces the complexity of my build chain.

Many things that ES2015 provides are nice of course and the code looks a bit cleaner, but apart from a few real innovations most changes seem to be syntactic sugar.

Also, I found that each step in my build chain made it more complicated to build and maintain my code, especially for other developers. I eventually even abandoned Gulp (which in my opinion tries to reinvent Unix pipes but does it all wrong) in favor of a simple Makefile that chains a few build commands and uses inotifywait to watch the filesystem for changes in order to automatically rebuild the code during development.

Another thing I do which may shock many JS people is to actually check in the build directory of my setup into version control, because this makes deployment much easier and ensures that I will always have a working version of the code in the repository, even if some external dependencies should change in the future. This also eliminates the need to install extensive tooling on my production servers, which is itself a large burden and creates many security issues (for a simple setup consisting of Babel, React, require.js and a few support libraries, node.js downloads about 350 MB of source files onto the machine).



I agree, and I think ES5 even has a benefit as a language: it's extremely simple. The trajectory of ES2015 seems to be, basically, to add as much cool stuff as possible, thus increasing the syntactic surface area and the amount of stuff to learn.

With a slightly more clever and more concise definition of `createElement`, I also have no need for JSX.

    tag("div.widget.fancy", {}, [
      tag("h1#title", {}, "Hello, world!")
    ])
I'm fine with that. My editor understands the indentation.
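The helper itself is mostly just selector parsing. A rough sketch (simplified and illustrative, not my actual code; it returns plain descriptor objects here, but you'd hand the parsed pieces to `document.createElement` or `React.createElement`):

```javascript
// Parses a selector like "div.widget.fancy" or "h1#title" into a tag
// name, classes, and id, and returns a plain descriptor object.
function tag(selector, attrs, children) {
  // Split before each '.' or '#': "div.widget.fancy" -> ["div", ".widget", ".fancy"]
  var parts = selector.split(/(?=[.#])/);
  var props = {};
  for (var key in attrs) { props[key] = attrs[key]; }
  parts.slice(1).forEach(function (part) {
    if (part.charAt(0) === ".") {
      props.className = props.className
        ? props.className + " " + part.slice(1)
        : part.slice(1);
    } else {
      props.id = part.slice(1);
    }
  });
  return { name: parts[0] || "div", props: props, children: children || [] };
}
```

So `tag("h1#title", {}, "Hello, world!")` comes out as `{ name: "h1", props: { id: "title" }, children: "Hello, world!" }`.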

And... such an enormous benefit... there is no compilation step.

Right now at work, our code base takes 20 seconds to compile with Babel. Enough said.

The feeling, after being used to all this transpilation business, of writing code that's just already ready to serve, is very nice. I can even work on it with nothing but a text editor and a web browser.

Unfortunately everyone thinks I am crazy for preferring this.


That looks a lot like the element builder function that Fastmail put together back in 2012.

https://blog.fastmail.com/2012/02/20/building-the-new-ajax-m...

I too am fond of that approach.


Yeah, same idea.

The dream that designers, being comfortable with HTML, would be able to mess around with JSX is, as far as I can tell, unrealistic anyway, because it's always full of React-specific weird stuff that the designer doesn't understand and doesn't want to mess with.


You should separate your Presentational and Container components [0]. Presentational components can be pure and stateless, containing little to no React-specific weird stuff; that should all go into the containers.

[0] https://medium.com/@dan_abramov/smart-and-dumb-components-7c...
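To illustrate the split (a sketch with made-up names, using plain functions and descriptor objects instead of real React components):

```javascript
// Presentational: a pure function of its props. No state, no lifecycle,
// nothing framework-specific; a designer can read it as a template.
function UserCard(props) {
  return {
    name: "div",
    props: { className: "user-card" },
    children: [props.displayName, props.email]
  };
}

// Container: owns the "weird stuff" (state, data access, handlers) and
// translates it into plain props for the presentational component.
function UserCardContainer(state) {
  return UserCard({
    displayName: state.user.firstName + " " + state.user.lastName,
    email: state.user.email
  });
}
```

The designer only ever touches `UserCard`; the container decides *what* to render, the presentational component decides *how* it looks.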


I've started going back to straight ES5 for my side projects. It removes the annoying "compile" step, which I find pretty dumb for an interpreted language. For deployment, of course, I concat and minify. But the speed with which I can iterate using vanilla Javascript is refreshing in this day and age.


I would be interested in a compilation step if it gives me something like PureScript, that is, real, serious benefits from the compiler: algebraic data types, type checking, type classes, and so on.

But ES2015 is mostly just syntax sugar. Nice syntax sugar, but still. Okay, async/await is significantly useful... but you still have to understand how promises work under the hood... and I can live with raw promise coding.
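For what it's worth, the sugar really is thin: these two (illustrative) functions behave identically, which is why you can't skip learning promises anyway.

```javascript
// The async/await version...
async function getLengthAwait(fetchText) {
  var text = await fetchText();
  return text.length;
}

// ...desugars to roughly this promise chain.
function getLengthThen(fetchText) {
  return fetchText().then(function (text) {
    return text.length;
  });
}
```

Both return a Promise immediately; the result only exists inside `.then` or after `await`, and error handling is still promise rejection underneath.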


This is basically exactly how my function "cre" works (except you can omit the empty attribute object): https://github.com/stuartpb/cre


Nope, you're not crazy. I'm doing the same thing at work (but have looked into setting up TSX in the future).


I've been avoiding the compile step too just for simplicity.

But have you found sensible ways to test without them?


What's the problem with testing, you mean? Sure, I've done plenty of testing of ES5 code without transpilers.


During development, doesn't React with add-ons allow you to use JSX without pulling in any tooling or additional dependencies?


There is a strong case for transpiling if you have a node backend. Presently we use ES2015 features on the backend that won't work in the front-end code without a transpiler. It's not really inconvenient, but it would be nice to use many of the ES2015 features in both places. Shared libraries in particular can be built using ES2015 features instead of settling for the lowest common denominator. We also use make in our build tooling.


Yes, I am a big fan of checking in the build directory. We work on lots of little one-off apps that need to last for 5-8 years. Having the build dir in version control makes it so much easier to fix things and update content years down the road. It insures us against things like npm (insert your package manager here) going away, which I know sounds ridiculous, but try to re-download some essential Flash/ActionScript library from 2009 in 2016.


Just wondering how you deal with all the extra noise in the diff. What happens when multiple people are working on the project, are you not constantly having to deal with merge conflicts? I feel like checking in the build directory makes the diffs pretty much useless.


> Just wondering how you deal with all the extra noise in the diff. What happens when multiple people are working on the project, are you not constantly having to deal with merge conflicts? I feel like checking in the build directory makes the diffs pretty much useless.

Why would you ever bother merging a build artifact? Those files should always be rebuilt fresh before committing. Just quickly do whatever to clear it from the conflict queue, merge the source as necessary, then rebuild. Commit the rebuilt version.


I honestly assumed this was common practice and everyone did it this way. It also makes it easy for someone to clone the repo and run the app without having to download a bunch of npm stuff and figure out how to get the build tool working.


To be fair, if you do it right then npm should be the only dependency, and then installing all other dependencies (locally) and running the build should be handled completely by npm and its scripts. If this is done well, installing and building can be as simple as running one or two npm commands. There's not really much of a 'figuring out' stage, if you know npm and its common patterns.

If the solution to the perceived complexity of the build is to just commit the built artefacts to source control, it's possible that's a warning sign that the build process itself needs to be better organised and simplified, and make better use of the build tools so they help rather than hinder a developer coming to the project for the first time. That's what they're there for.
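For example, a manifest along these lines (paths and tool choices are just illustrative) means a newcomer only needs `npm install` and `npm run build`:

```json
{
  "scripts": {
    "build": "browserify src/app.js -o build/app.js",
    "watch": "watchify src/app.js -o build/app.js -v"
  }
}
```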


Unless any of the libraries are natively built with node-gyp.


It's kind of surprising to read about the same experience and even reaching exactly the same conclusions.

Anyway, I've recently experimented with checking in the build directory only in production/staging branches, to avoid constant merge conflicts and repository bloat. My approach is to have those build-excluding lines present in the .gitignore file in development branches and to comment them out in production/staging branches. So far it works well, but it's sometimes quite confusing to other people on the team.


I'm just wondering why not use npm scripts instead of Makefiles? They work in Windows as well and they're easier to understand (at least imo).


As long as they're single commands. Running multiple commands using `&&`, etc. is unfortunately not portable.


What platform does `&&` not work on? It works fine in Windows `cmd`, which is the typical shell to worry about portability with.


cmd's `&&` is the equivalent of bash's `;`. bash's `&&` has no equivalent in cmd.

Edit: Never mind, I'm apparently wrong.


On Windows 7; note the lack of version information from Java:

    > rm test.txt && java -version
    rm: cannot unlink entry "test.txt": The system cannot find the file specified.


That's not correct. I use npm scripts with `&&` on Windows many times per day. Other features may not be portable, but that one definitely is.



I haven't looked at those yet, sounds interesting though!


Careful checking in your build folder. If there are any native modules, and you are not on the same architecture, everything will explode.

This can be avoided by adding "npm rebuild" to your deployment.


I guess you mean the node_modules folder, where in fact this makes a lot of sense. What OP means is the build folder to which the frontend part of project is compiled.


I've been contemplating just checking in the build folder.

It does feel like a code smell, but it'd make deployment easier.

Our frontend is Javascript, wherein there's no worry about native modules.

Our backend is Python running Pypy, so there's no native code there either.


I dunno about checking in the build directory. Having a working version of the build in the repo isn't necessary when your dependency versions are locked in npm-shrinkwrap.json, because then your build is completely reproducible. As for eliminating extensive tooling on your prod server, I believe a common practice is to build on your dev laptop and rsync the build directory to the server.

Do you ever run into merge conflicts in the build directory?


> Do you ever run into merge conflicts in the build directory?

Those files should always be rebuilt fresh before committing. Just quickly do whatever to clear it from the conflict queue -- probably some version of "take mine" or "take theirs". Merge the actual source files as necessary, then rebuild. Commit and just blindly clobber whatever existed in the build directory, because you just rebuilt it and you know it's the correct version.


> Another thing I do which may shock many JS people is to actually check in the build directory of my setup into version control, because this makes deployment much easier and ensures that I will always have a working version of the code in the repository, even if some external dependencies should change in the future. This also eliminates the need to install extensive tooling on my production servers, which is itself a large burden and creates many security issues (for a simple setup consisting of Babel, React, require.js and a few support libraries, node.js downloads about 350 MB of source files onto the machine).

Kudos. I can't count how many 'open source/Free' git repos I've cloned that couldn't compile or had problems compiling. Personally I think it is NOT open source unless the user can compile it and get an exact copy of the software that is in the app store.


I don't understand why so many people use Gulp and file-system watchers during development. You have a web server, why not use it?

In our apps, when you request /app.js or /app.css or whatever, it goes through the tools (Browserify and SASS in our case) and delivers the transpiled versions on the fly.

We use a little NPM package called Staticr [1] to declare the pipelines so that the on-the-fly version used during development is identical to the production-time one. In production, we simply run Staticr against the pipelines, which produces the necessary minified files.

[1] https://github.com/bjoerge/staticr


1000x yes. Gulp is a nightmare. I too have replaced an extremely complex Gulpfile with a few shell scripts.


I agree so much with this. ES2015 is a YAGNI, not a crucial feature. Just because you can doesn't mean you should! The simplest and most flexible setup I have found is Webpack without any of the fancy stuff: a really small configuration file, supporting only CommonJS modules and CSS imports. Everything else has proven to be unnecessary sugar in my experience.

I do use ES2015 in CLI apps though. Just make sure to lock the Node.js engine at version 4 in the manifest file.
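That lock is just the `engines` field in the package.json (npm only warns by default; setting `engine-strict=true` in .npmrc turns the warning into a hard error):

```json
{
  "engines": {
    "node": "4.x"
  }
}
```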


I personally have NPM install packages local to the project and check them in. This allows for managing the dependencies and reproducible builds without the hassle of files that change every commit.

Example:

http://ttrscorer.codeplex.com/SourceControl/latest


After wrestling with creating a nice gulpfile for the past two days (essentially wasting two days), I think this is the right route. My biggest gripe with gulp is that creating reusable pipelines even with lazypipe is unnecessarily difficult and can make debugging build scripts (!) hard.

Can you post an example of one of your makefiles?


Sure!

https://github.com/adewes/gitboard/blob/master/Makefile

This one is rather simple but includes steps for building and optimizing JS and CSS. It also has support for different build environments (Gitboard has a Chrome version and a web version).


I used to go with Makefiles maybe two years ago, but there's really no point in using anything other than npm scripts if you already have a Node.js set of tools (and therefore an npm manifest).

One great thing about npm scripts is that every time you run one, npm adds the local node_modules/.bin directory to your $PATH, so you don't have to spell out absolute paths to the tools by hand.



