Show HN: The static, static site generator (github.com/xeoncross)
187 points by Xeoncross on May 19, 2014 | 96 comments


What a great idea, I'm really impressed with the cleverness and execution.

A few thoughts and questions for you (you may have already thought of/know about):

- Have you thought about "well-formedness" for the HTML? I realize that adding anything besides the script tag would sort of ruin your point, and it would be nice if browsers accepted that, but also I feel there are sometimes hidden benefits to serving a well-formed document under the content type the server actually declares (or the file extension implies).

- This makes your site awesome to read with a text browser or curl.

- About 1/2 or 1/3 of the times I click "back" or "next", I can't scroll the page after it renders. (Chrome/MacOS)

- The footer support is cool, have you thought about how you would do more extensive template support? (Maybe there's no reason anything extra couldn't be placed in the footer -- analytics, even a header -- although I think a favicon might need to be in a `<head>` tag). Integrating this directly with Markdown, too, could be really cool.

- P.S. Your footer link says "Chanpter" :)

Great work, I love when I find something like this -- really clean, simple, and challenges the norms in a clever way. The result even appears quite polished.


Thanks for the feedback. That is really what this project was about - thinking outside the box to solve problems.

Like most projects, I would expect elegant solutions to some of these problems to appear as more people think about the concept. I must admit, it works pretty nicely for such an abuse of technology.


> - This makes your site awesome to read with a text browser or curl.

Unfortunately, Pocket (getpocket.com) slams everything into a single paragraph, with a # at the beginning.

(Which, granted, is better than the silent failure you get with Blogspot on Pocket, and the garbage you get with it via curl.)


Another neat demonstration of letting the client deal with the rendering.

From the example[1]:

"You see, there is really no need for the server to generate anything for simple article-based sites like this. If the user wants to read your blog they can spend a few processor cyles[sic] to render the page themselves."

This paradigm is beyond me and it seems to be growing for whatever reason. Client-side processors rendering bytes through a separate engine, other than the HTML one, only to have it eventually run through the HTML engine so the client can actually read it.

The assumption that everyone has a decent machine to do the processing is, in my experience, still a false one. It was false 5-10 years ago, and I haven't seen evidence that it isn't still false today.

Your server probably has many, many cores. Probably SSD architecture. It is barely blinking to answer a request and deliver text. Why on earth would you leave the simple job of wrapping markup around text to a machine that you know nothing about?

The average Internet user isn't terribly savvy. Their browser is possibly cluttered with add-ons. Their computer is more than likely running 100s of background processes they know nothing about.

Depending on the stats you look at, in 2014, we're looking at a lot of folks with dual-core machines and a couple GB of memory, if they are lucky.

I can't understand why you would want to delay or hinder the experience of getting to your content.

While it may load really fast on your MacBook Pro Retina or ThinkPad X1, with a few text editors open and an up-to-date browser, the experience won't be the same for everyone.

When did it become trendy for developers to put burdens on their clients with all of this front-end first thought? Just because it makes it easier for you to write and deploy doesn't give you an excuse to put the burden of rendering your stuff on the user.

How many more times do we need to read about companies being forced into dropping this ill-conceived paradigm because they realized it made the experience for the client worse, and in many cases, made the development worse as well?

In this example, we end up with an HTML file that is all of 6,084 bytes.

To get there in this example, we used jr.js, a 5,616 byte file, to load showdown.js, a 14,859 byte file, and render 651 bytes of text. Sometimes, the loading and rendering is so slow, that the code itself has handling for it:

    // Empty the content in case it takes a while to parse the markdown (leaves a blank screen)
    jr.body.innerHTML = '<div class="spinner"></div>';
21,126 bytes to generate 6,084 bytes of text which has to now be rendered (one more time) by the browser.

Wouldn't it be great if there was some standard about the bytes that were delivered over the wire that everyone could use and build upon? Folks should get together and build a really good processor of bytes coming over the wire that's delivered in a particular format to be rendered on the screen. It would be great for the Internet! You could browse the entire thing!

[1] - http://xeoncross.github.io/jr/


I share the distaste for terribly suboptimal solutions, but I'd also like to add another thing - I don't understand this complete lack of concern for wasting power. Bits aren't free; if you make a million computers redo the same computation that could be done on a single computer once, you're literally making humanity burn a million times more coal for it than needed. I know that this feels irrelevant for any single page, but when scaled globally, this kind of thinking scares and saddens me.


The amount of energy we spend on rendering web pages each day pales in comparison to the amount we spend each day on transportation or even Bitcoin mining. A million times a relatively tiny amount is still relatively tiny.

It's an understandable argument, but I don't think it applies to client-side rendering.


Let me point out that html, css, pngs and jpegs are not bitmap data that can be blitted directly to the screen.

It's not obvious that some simple js transforms are going to be much more (measurably more) resource intensive than laying out a rich html page, even if it is "static" ("just" css1,2,3 -- some of which may be animations).

Hypertext is hypermedia, hypermedia means an object oriented system -- an object oriented system means (some form of) programmability. I don't like the general trend towards "single page apps" for things that are just hypertext applications (such systems can be built with html and css and some javascript for enhancement) -- but this is sort of going the other way: If you just have "text" content, send text -- and then fix the broken useragent with some js so that said text presents nicely. If your useragent already handles text nicely (hello w3m) -- do nothing.

This solution has a certain elegance; with some careful (system) design, it allows for graceful degradation/progressive enhancement. It reads nicely in w3m, and it's quite amenable to scraping (one could argue that semantic html is better, and it probably is in isolation, but, as Google seem to have concluded -- semantic markup simply doesn't scale when a large number of sites get it wrong (or just "different")).

[edit: Let me take some of that back -- *this* system doesn't render well in w3m, but the rest of my points stand: the approach should be viable -- but maybe there needs to be something along the lines of:

    <!-- at a url along the lines of: example.com/some/post/ -->
    <html>
      <noscript>To read without javascript go directly
      to: <a ...>example.com/some/post/post.rst</a></noscript>
      <script src="magick.js"></script>
    </html>
Where magick.js "knows" where to find post.rst, parses it, and displays it... ]


> When did it become trendy for developers to put burdens on their clients with all of this front-end first thought?

There's a certain sense to it from a scalability perspective -- sure, you've got the issue of whether you can really rely on the client to have the power to do the work you are offloading (and whether, especially in the mobile case, there are undesirable impacts to your users of placing more client processing demand even if they have the resources to do it), but the more is offloaded to the client, the less scaling demand is placed on the server -- as you get more users, the additional resource needs are met by their clients rather than by your server.


> the more is offloaded to the client, the less scaling demand is placed on the server

If you're using an ordinary static site generator, your server scales the same: it's just sending static pages to clients either way. In fact, if you're like me, you don't even run the site generator on the server; you run it on your own machine, and upload the generated HTML files to the server, where they act just like any other static web pages.


> but the more is offloaded to the client, the less scaling demand is placed on the server -- as you get more users, the additional resource needs are met by their clients rather than by your server.

I'm sorry, but this is the kind of thinking that is perpetuating this awful paradigm.

As you get more users your data set grows.

Are your clients going to be responsible for the memory necessary for the larger indexes? What about your cache, should we just introduce HTTP into the caching mechanism and hope that one of your clients has the data you wanted to deliver to someone else?

Are your clients going to be responsible for ensuring you don't max out your server with the amount of processes your httpd can handle?

Are your clients going to be responsible for load-balancing your requests between servers because your pipes are simply not big enough to be handled by a single web-server?

When was the last time you saw a person lugging around 16+ cores with 64GB of memory, SSD architecture, and fibre networking tied into multiple switches?

> the more is offloaded to the client, the less scaling demand is placed on the server -- as you get more users, the additional resource needs are met by their clients rather than by your server.

Clients are not replacements for servers at all.

Why do people keep thinking that they're somehow being clever "offloading" work to the client? Do they know how much work has been put into networking stacks to ensure this type of offloading never happens?

How is it that you scale your clients? Do you ship them more memory? Pay more for their Internet connection? Send them a new phone?

Decades of networking work by extremely talented individuals, people much, much smarter than us, to make sure the client has to do very little lifting at the layers closer to the metal.

Ignorantly nullified by a developer's misguided desires.


I'd like to reply because I use this paradigm quite often (shipping work to the client).

What I used to do was this: write an API, write a web layer that talks to the API and serves up HTML, and write a bunch of javascript that runs in the browser to give the app a bit more juice.

What ended up happening as more and more browsers started to not completely suck is that I moved 99% of the web tier that sits in between the API and the browser into javascript. When clients hit the app, they download the entire app: javascript, HTML templates (not fully rendered views), etc. The javascript ties everything together and uses pushState to replicate what happens when a user is browsing a normal HTML page. Any data that is needed is grabbed directly from the API in the background.
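
Roughly, the client-side routing looks like this (a minimal sketch, not my actual app; renderTemplate() and the data-internal attribute are placeholders for whatever your app uses):

    // Intercept internal link clicks, update the URL with pushState,
    // fetch JSON from the API, and render it in the browser.
    document.addEventListener('click', function (e) {
      var link = e.target.closest('a[data-internal]');
      if (!link) return;
      e.preventDefault();

      history.pushState({}, '', link.href);           // URL changes, no full page load
      fetch('/api' + link.pathname)                   // data comes straight from the API
        .then(function (res) { return res.json(); })
        .then(renderTemplate);                        // views are generated client-side
    });

    // Keep back/forward buttons working like normal browsing.
    window.addEventListener('popstate', function () {
      fetch('/api' + location.pathname)
        .then(function (res) { return res.json(); })
        .then(renderTemplate);
    });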

This offers significant advantages: not only are you farming off the entire generation of your app's views to the user's browser (taking work that you would otherwise have to do yourself and distributing it), you are reducing the complexity of the app. There are now two pieces to maintain: an API and a set of javascript...no web layer in between.

This makes things a lot snappier for users. Pages switch instantly because everything they need to run the app is already there and ready to go.

So, no, I'm not shipping 800GB of data to each of my users and expecting them to crunch SQL queries, but I do like to send any work that I would have to do to their browsers, assuming they are capable.


I battle this daily. IE9 can barely render its own chrome, let alone content on a webpage.

A couple months back, a certain project I was working on was taking over 45 seconds to render. 45. seconds.


Have you looked into the (re)rendering on the client? We had a similar issue with an extjs application rendering dead slow on ie9. By suspending the rendering while the app was loading/transitioning (using batchLayouts in the ext case), IE loading time improved significantly.


> What about your cache, should we just introduce HTTP into the caching mechanism and hope that one of your clients has the data you wanted to deliver to someone else?

HTTP caching is a standard built explicitly for this. And it's gotten better at precisely these jobs over the years (substitute for server memory and indexes). So if that "someone" is the user who has the cache, yes.
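
As a concrete sketch of the conditional-GET flow (plain Node here; the values are purely illustrative):

    // Serve a document with an ETag and Cache-Control so clients and proxies can reuse it.
    var http = require('http');
    var crypto = require('crypto');

    http.createServer(function (req, res) {
      var body = '# Hello\n\nA post served as plain markdown.';
      var etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

      // If the client already holds this exact version, skip the body entirely.
      if (req.headers['if-none-match'] === etag) {
        res.writeHead(304);
        return res.end();
      }

      res.writeHead(200, {
        'Content-Type': 'text/plain; charset=utf-8',
        'ETag': etag,
        'Cache-Control': 'public, max-age=3600' // reusable for an hour
      });
      res.end(body);
    }).listen(8080);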

> Are your clients going to be responsible for ensuring you don't max out your server with the amount of processes your httpd can handle?

HTTP 307 Temporary Redirect. 420 Enhance Your Calm. Keep-alive. Etc. There are many standards in place to let the client participate in not overloading the server. The server must always be willing to deal with a new actor, but that doesn't mean that there aren't systems in place to help clients be nice to servers that are overloaded without breaking other parts of the spec (caching).

> Decades of networking work by extremely talented individuals, people much, much smarter than us, to make sure the client has to do very little lifting at the layers closer to the metal.

I don't see any evidence that all that networking work is to make sure that clients don't need to do work. Instead, I see many clever systems on both client and server being created to push the boundaries of resource usage past previous limits.

On the server, it sounds like you are familiar with these techniques. Load-balancing, edge-caching, distributed applications, etc.

On the client, it must come about through standards and behavior specifications. If you're imagining a world where servers are placing requirements on clients before they are ready to handle them, I agree, that would be awful. Servers should always have ultimate responsibility for being a good host to any potential connection.

However, once a client behavior standard has been well agreed upon for a significant amount of time, then servers should absolutely take advantage of the behavior. Should programmers of web applications skip any of the behaviors above (HTTP caching and REST), or shun JSON APIs or pushState or any of those things?

No way! They should take full advantage, as long as they know what they are doing. And yes, you do have to still be a responsible citizen (for your own good as much as anyone's).

For example, if you use pushState, you still must keep in mind how search engines may observe your pages[1]. Using cache expirations aggressively but correctly still takes some careful planning. And so on.

But the principle of "offloading", as you put it, brings the possibility of optimization by a factor of N, where N is your audience size, instead of some constant K, as many server-side optimizations are limited to.

This is not "nullifying" anyone's networking work. Clients, under carefully grown and tested behavior standards, _should_ be able to do some lifting.

If you want to talk about user experience, that's a whole other issue. And yes, there is significant evidence (responding to your earlier comment) that things have gotten better -- just a couple years ago, web fonts were almost unusable because of the flicker that happened on load. I am sure some clients still see it, but for my own personal use, the flicker is virtually 100% gone.

If the client can't display the page in a reasonable replacement time for the server, then it's a no-go. But if it can, it's a huge win and IMHO doesn't subtract anything from the (admittedly genius) networking stacks underneath.

Finally, if the technology in these networking layers could so easily be nullified by so-called "ignorant" individuals, wouldn't it be not-so-brilliant to begin with?

[1]: http://stackoverflow.com/a/6194427/143295


I can only assume that, while you do actually have some appreciation of concepts like separation of concerns and network-based computing, you're ignoring them completely for the purpose of advancing an argument which, to those of us with extensive professional experience in such matters, is every bit as ignorant as it's suffused with utterly unwarranted condescension.

Either that, or you're assuming that, because this thread revolves around a toy project which pushes literally all the work off to the client, any project of any sort which does likewise must also do exactly likewise. Granted that I'm not sure how such an assumption could possibly survive even a moment's consideration, but it seems a plausible origin for your misapprehension here, and so I feel constrained to mention it all the same.

Either way, my experience has been that clients vastly prefer to see their CPU activity bump a little above idle every now and again, in exchange for a responsive and reliable user experience, rather than find themselves in a situation where, thanks to a developer who assumes today's web browsers must be on the same level of capability as the IBM-3270-with-pretty-pictures style of fifteen years ago, the server's so wound around the axle, trying to do work its clients could do faster and better, that it's unable to respond to HTTP requests at all.

"But," you might sneer, "that's a problem on the backend! Do it right the first time and you don't have this problem!" Welcome to real life, where legacy code exists, and where a rewrite from scratch often isn't an option -- especially when the budget for it would have to come from a contract client who's already on a shoestring, and whose operations are funded by registrations completed through their website.

In one example from a past professional engagement, the proposition I might've made would've reduced to approximately this: "I know you're facing bankruptcy because your athletes have grown frustrated with poor response and taken their registration fees to better-managed events," I might say, "but how about paying us fifteen grand to build you a completely new website from the ground up? It'll work great!"

Perhaps you can conceive of a world in which such a suggestion would go over well. If so, I applaud the floridity and scope of your imagination, which must be truly unparalleled.

Here in reality, we must cope as best we can with the situations in which we find ourselves. Oftentimes, that means finding some way in which the client can do part of the work -- specifically, as in the case I describe, that part which is intimately bound up with the particular user's unique experience, and whose result therefore doesn't generalize no matter where it's handled.

Lest we lose track of the topic, the "static site generator" under discussion is, I grant, a very poor example of my thesis, because the work it pushes off to the client could very well be done on the server, or indeed could be done once as a compilation step and the result served statically to all comers -- it's a cute toy, but nothing I'd ever expect to see anywhere near production. But, as I said before, to attempt to generalize, from this toy project, to every case in which some work is offloaded to the client, is most strongly redolent of arrogance, ignorance, and foolishness.

I mean, I can appreciate that, when your major professional concern at the moment is apparently a website dedicated to giving stoners an idea of which strain of cannabis provides the best high, you probably don't have to worry much about scaling and offloading, and that's fantastic! But some of us find ourselves forced by circumstance to operate in a less utterly trivial sphere.


A server should serve. I too think too much front-end magic will slow down a site.

However, I actually created this on that dual-core laptop you are talking about. So not to take away from your point, but let me also take this from the other perspective - bandwidth.

The 6kB + 15kB Javascript files only seem like a waste at first. Once you account for the bandwidth you save transferring all the additional pages in plain markdown and using the (now) cached JS to build each page, it actually results in much faster loading times even if the browser might have to spend a few hundred milliseconds rendering.


> A server should serve. I too think too much front-end magic will slow down a site.

> However, I actually created this on that dual-core laptop you are talking about.

Right, which is why you have stuff in your code dealing with the fact that sometimes rendering the bytes coming down the wire is painfully slow. You commented out the spinner GIF, which has replaced the Java applet and Shockwave loading screens we knew so well in the 90s and 00s.

> So not to take away from your point, but let me also take this from the other perspective - bandwidth.

It would take visiting 3.4 posts of the size in the example before you would even use as much bandwidth as the very first post does here.

Blog posts, by and large, get traffic for the single post someone is visiting and very, very rarely get hit up for a 2nd, and even more rarely a 3rd, story on the same site.

Taking that into consideration, who is wasting more bandwidth?

> even if the browser might have to spend a few hundred milliseconds rendering

Add up 3 or 4 waits of a few hundred milliseconds and now you're waiting multiple seconds. Which is more than enough time for folks to hit the back button.

I'm not dogging your efforts or your library. I hope you don't see it that way. I am simply saying that the amount of work going into client-heavy development these days, and the number of folks hopping on the "Wow, that's such a great idea!" bandwagon, should be limited (educated).

A server should serve clients. There are human beings attached to the requests. We, as developers, should hold ourselves to a higher standard of doing whatever we can to remove burden from the clients.

I don't know where the idea that developers should be unburdened and servers shouldn't work hard came from, but it stinks.

Pay $0.00000001 to ask the 16 CPUs to wrap text in markup. Sheesh.


I'm sure there exist edge cases where the average user's browsing experience is slowed more by the additional bandwidth used downloading gzipped HTML tags on the fourth and fifth pages they visit than by the rendering script download on the first page and by their browser running a script to generate the static layout on every page.

But even then I'd wonder if the answer to save all that wasted time and bandwidth wasn't "maybe we could do even better if we compressed those images?"


I think the "edge case" here is "many if not most mobile connections" - not everyone has LTE and even among those who do, it's a highly variable experience.

In addition you are hitting those clients with a double-whammy: slow load over a slow connection, and slow rendering on a [relatively] slow CPU.


The edge case is one where the extra bytes in a set of plain old HTML files actually add more overhead than the JS/markdown alternative, which has a higher page weight for the first visit anyway, as well as making more demands on client-side renderers. (In retrospect I could have worded the first post more clearly.) Mobile is hardly likely to be this edge case since, as you point out yourself, mobile browsers will have a more perceptible delay when it comes to generating a page on the client side in javascript (and also can't display anything until the script is downloaded, which is possibly a big first-page performance hit, and aren't necessarily effective at caching the script for repeat visits).


Who suggested that having the client bear some of the load unburdens developers? I've done that kind of work, and it was some of the hardest work I've ever done.

And "servers shouldn't work hard" is a similar oversimplification, perhaps even a misapprehension. It's not so much that the server's cores should ideally sit idle, as that in almost any non-trivial application where scalability is even a concern, the server is probably already working hard, and having the client bear some load where it makes sense to do so can offer vast improvements in how quickly and well you're able to scale the application to meet demand.


I think your static, static blog generator is a cool idea and a nice gimmick. It lets you pause for a second and really think about how the responsibilities of serving a site and rendering its pages should be distributed.

But you can't really think that rendering a page with javascript is faster than serving a static html page with all the possible optimizations like gzip, etag and byte-range, etc.

Even if a gzipped markdown file is a tiny bit smaller than the same content as a gzipped HTML file, the bandwidth difference is negligible, and the browser can use the HTML file at once, whereas with jr it still needs to compile the markdown file into HTML and finally render it.


> But you can't really think that rendering a page with javascript is faster

I don't really. In my comment I just wanted to mention more aspects of the problem. This is mainly a catalyst to show people there are more tools at our disposal than we sometimes think about.

And some of our tools need more research...


Most blogs have insanely high bounce rates after the first page, so all that extra script downloading rarely pays off.


More important, that's not only an example of shifting where things render, but of replacing documents with programs.


Indeed. I referred to that same thing in another thread a couple of months ago:

> This is just the result of transitioning HTML to an applications framework. Now the application you use to view articles is not your browser, but Blogger (or WP etc).

https://news.ycombinator.com/item?id=7558912

(The context was a rant about webapps that "hijacked" keyboard shortcuts unnecessarily)


http://peter-the-tea-drinker.com/pages/longjing-exmple.html

Summary - if you don't want to break the web you should do this:

1. If you must use Javascript templating, inline the JSON into the document.

2. If possible, actually template the document too.

Yes, single page apps are faster (because of the joke called javascript loading ... yes there's async, but it's broken, and you can load a script to load other scripts, but really ...). Except for when they first load. Then there's a trip to your server, a trip to the CDN for the Javascript, then another trip to your server to get the document you wanted.

Inlining the JSON saves 1 trip. Having the document already prepared saves another (since most browsers will render before the Javascript at the bottom).
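
Something like this is all the client side needs (a sketch; the id and renderTemplate() are just placeholders):

    // The server inlines the page's data into the document, e.g.:
    //   <script type="application/json" id="page-data">{"title":"Longjing","body":"..."}</script>
    // so the client-side template never makes a second round trip for it.
    var data = JSON.parse(document.getElementById('page-data').textContent);
    renderTemplate(data); // stand-in for whatever templating you use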

You won't notice running on localhost. It probably works ok on a good connection, if you're located in the same timezone. But with 200ms latency (on a mobile) and two round trips (plus your initial get) it makes a difference. And people using lynx or no-script won't tell you they can't read your site.

For the ~80% of people who only visit your site once, this is close to optimal.

You could trim it down a bit by querying your document, and building your data model from the static content, but that's just over engineering.


Moreover, on any relatively popular blog a post would be rendered by each visitor, so many thousands of times if the post is viewed a lot, rather than being rendered just once by the server. As an exercise in wastefulness it's way up there.


My laptop has "many cores" and 16gb ram and SSD architecture.

My phone has quad cores, 2gb ram, 32G ssd.


My laptop has a single core and 768MB RAM with a spinning-rust architecture.

"Know your user base" is probably the answer here.


Interesting, I wonder: do search engines crawl pages like this?

You'd need to add at least a `doctype` and `title` for this to be valid HTML5 (not that it necessarily matters for a search engine crawler).

Edit: Also if you added the script to the top of the doc then you could `display: none` the doc and wait for the css to load before making it `display: block`. This would overcome the FOUC effect.
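
Something along these lines (a sketch of the idea, not code from jr):

    // Run as early as possible (e.g. from a script at the top of the document):
    // hide everything until the markdown is converted and the CSS has loaded.
    document.documentElement.style.display = 'none';

    // ...then, once rendering is done, reveal the page:
    document.documentElement.style.display = '';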


As I understand it, wouldn't Google render this page while crawling? Perhaps they'd punish it for doing so, but I think Google would have no issue with the content itself.

I also wonder how much FOUC you could incite by increasing the size of the markdown document.


True, but don't forget that the JS and CSS is cached so after the first page load - every other page is instantly ready to be rendered.


That's not true in this case - the JS and CSS are cached, but the output of the JS (the rendered HTML) is dependent on the execution of the JS. If that meets any delay (through parsing 1,000 nodes in a document, for example), the page will look unformatted until the parsing is complete.


FOUC is easily enough fixed with some css rules. Can even give it a snazzy transition effect after it's rendered.


I did something similar years ago with xslt rendering xml in browser. It used an xsl stylesheet loaded in the xml itself, similar to this approach. It was a pain to debug, and I'd imagine this approach is easier because tooling has caught up with this sort of thing.

I use httpsb so this comes through as a pile of text until I allow the js to do its thing. I'm ok with this, since a browser plugin that does markdown would work here too.

Sometimes I miss things like Archie that had very small network footprints due to technical requirements of the past. They really were able to focus on the content, like this solution.


I loved the idea of using XSLT rendering to take a well-formatted XML document and process it client side in the browser, but it came with more problems than it solved.

The tooling to accomplish it was terrible, and different browsers reacted slightly differently to the XSLT: some had a flash of unstyled XML followed by the page rendering via XSLT, JavaScript didn't work right since the page had to be served as XML rather than HTML, and AdSense, my ad network at the time, didn't work with it either.

XSLT had potential, but it never really caught on, and now we just have JavaScript frameworks that do all the rendering client side using JavaScript instead.


I noticed a similar flash with this project, although the end result is very cool, and to think that, except for links, this is more or less all Lynx compatible.


The obvious solution would be to create XSLT processor in JavaScript to override the browsers processing.

(no, don't do that, i was just joking)


Blogging shit is too complicated in all kinds of ways.

Think for a second about why we're writing here of all places.

All kinds of people write insightful stuff here for no reason other than that they're bored at work or because someone is wrong.

Because it's utterly simple and text-based.

It's a medium of communication, not a flashy thing to create your individual brand and promote yourself as an author and blah-blah-blah.

I wish people would always think about why certain places on the web become popular in such a way that they integrate themselves into peoples' lives and let them easily express their natural spontaneous creativity.

HN, reddit, LiveJournal, Craigslist, mailing lists, Usenet, forums, GeoCities, IRC, BBS, etc.

They all obviously focus on "content." More importantly, they get out of the way and let human beings communicate without hindrance.


Now we just need a static site generator generator.


Have the server inject the javascript into the page?


If only I had finished my framework framework.



You may have fun reading about the three Futamura projections :-).


Dang it, I was hoping for a 'pick your features & download your customized version of jekyll'! I guess that would be a static site generator generator...


Static Site Generators seem like the Twitter client of today (which itself has been the "Hello, World!" of Web 2.0)

Here is a site that compares 270 of them: http://staticsitegenerators.net/


Most (all?) of those 270 static site generators generate HTML files that sit on disk on the server. This tool takes markdown files with a single script tag and serves that up to the client. While I'm pretty sure I've seen this idea before, it is at least different from the typical static site generator.


Many static site generators use (or allow you to use) markdown to write your pages, and then generate static html from it. And I can only see benefits to the no-javascript approach.


If there's javascript in the output, it's not really static is it?


Well, you can deploy it with a simple web server. No server side processing.

Theoretically this is the best scaling solution.

Practically it makes the site slower for every client.


>Theoretically this is the best scaling solution.

Html markup on your pages is probably minuscule after compression (you're using zopfli with 5000 cycles and minifying your html, right?), and amortizing the upfront cost of that extra js over the average number of page views is definitely worse than plain ol static html for your blog 99.9% of the time.

But let's get real; theoretically the best? I doubt markdown is even near the optimum in terms of bits on the wire. Don't even talk to me unless you're writing your own binary markdown serialization format.


Well, with 1 client, you have one machine which renders the page and with 1000 clients, you have 1000 machines which render the page.

The processing capacity depends on the amount of clients.

With a static site generator, the processing capacity depends on your own machines and is independent of the clients.

But yes, for a static site, this just doesn't help much, since every client gets the same data, so why should each one process it on its own?


While you can deploy it with a simple web server, it may not always be the best scaling solution. Consider a website built with true "built" static sites, which do not load any javascript. They make only one request to the web server, whereas jr makes two on the initial load (and again on each page, though cache helps significantly here).


That's the whole point of every static site generator. It's pretty absurd that people see this as "web scale" versus other static site generators.

Theoretically and practically, this isn't the best scaling solution. One shouldn't even talk about scaling in that sense for mere blog posts and such.

Another issue is the fact that your SEO will be greatly impacted. Search engines don't execute JavaScript (last I heard). Links and such will go to waste.

While I'm a big proponent of client-side SPAs, it's only when they are appropriate -- can I emphasize this enough? Blogs or static sites are not appropriate for this.


So am I right in thinking that this is like a template, but the template gets added dynamically by javascript?


You can confirm this by inspecting the document - the markdown is parsed into nodes through regexp and then a full HTML document is constructed and injected into the DOM (or rather, creating the DOM and then injecting it).
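
In outline it amounts to something like this (a sketch of the general approach, not jr's actual source; the converter shown is the classic Showdown API mentioned elsewhere in the thread):

    var markdown = document.body.textContent;                     // 1. raw markdown sent by the server
    var html = new Showdown.converter().makeHtml(markdown);       // 2. markdown -> HTML, in the browser
    document.body.innerHTML = '<article>' + html + '</article>';  // 3. inject the generated markup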

Cool, fun, but probably not something for which I can see a practical use.


I guess the only downside is that if your client has NoScript, they just see the raw Markdown. With HTML, if the client has a text-based browser, things like links, images, etc. will still work. I still use Lynx over SSH if I need to grab paywalled content from my work machine or check something on the local intranet when I'm out of the office.

The text-based browser point is splitting hairs, but NoScript isn't. Unless there are browsers that will natively render Markdown if served/detected (i.e. not via a plugin)?


You could see that as an upside - Markdown is designed to be human-readable and is pretty successful in that design goal. So a NoScript user will see a rather nice plaintext.


uhhh, Markdown is "nicely" formatted for geeks, and is great for easing the burden of content creation.

However, my mom (and I imagine any non-geek) would have trouble reading the hyperlink format, and be completely confused by the strong vs italics, code, or block quote sections of Markdown.


Maybe this is just geek privilege talking here, but I think that emphasis and block quotes in markdown are perfectly legible.

The link format is weird (and I never remember how to type it), but I feel like it's not too cryptic.


Only geeks really use NoScript. Or people who know geeks that put NoScript on their computer. NoScript breaks many non-geek websites as it is, and they have to have geek skill to fix those breakages.


Remember that we're talking about markdown being shown to people with noscript, something I highly doubt non-geeks are using.


Wouldn't it also be shown to people who merely have Javascript disabled?


> I guess the only downside is that if your client has NoScript, they just see the raw Markdown.

Correct, which is pretty awful.

But, curiously, 100% better than any other 'static site' solution I've seen here.


Using Google Chrome Version 34.0.1847.132 on Linux, if I open pages (e.g. http://xeoncross.github.io/jr/, http://xeoncross.github.io/jr/john1.html) in a new tab that is not immediately focused, they never visibly render. I see that frequently with client-side-rendered pages.


While not what I'd do for every site, it's a pretty neat tool for some use cases.

In particular, Heroku has moved to a similar setup with Boomerang [1] - a JS include that puts the nice Heroku branding at the top of your add-on configuration pages.

It neatly sidesteps the need to make a component/template for every single framework and backend in use by their different partners.

I could also see it being useful as an easy "drop in" way of tying the branding+nav together on a number of different sites within an organization (so your auto generated docs, your tutorials, etc all live on different systems but easily look the same).

1 - https://github.com/heroku/boomerang


What about accessibility?


This comes down to reimplementing an HTML rendering engine in Javascript. Accessibility is the least of their problems.


And now i want to generate a TOC ;)


Great point. Client-side can't do any meta analysis about the data, unless it fetches all the data. Which is a big waste of bandwidth and cpu.


Furthermore, unless you do some hackery in your .htaccess or something like that, there is not even a way to discover all existing pages.


At first I balked because I positively hate frivolous use of javascript and in fact browse with noscript and only a few sites allowed, but then I realised something.

This is actually really good for people like me. If I visit with javascript disabled, I get a nice, readable markdown. If I visit with lynx, I get markdown. I can actually read your blog with curl, if I want to. This is pretty much the holy grail of graceful degradation right here.


This is an awesome idea, and looks like it is very well executed. I have two suggestions:

- Add some sparse html tags (e.g. a basic doctype/body), which can help with search engine parsing.

- You'll notice that for a few milliseconds on page load, the text is shown before the JS rendering takes over. This can probably be solved via Javascript -- just find a way to cache or pre-load the pages.


I was hoping this was going to be a generator that generates static site generators, since they seem to be the hot new project.


Neat idea. Is the "Download Jr" part required, or could it just be included from GH pages of the original repo?

Could make for a very quick & simple way to put up some content online while keeping it looking decent, and users could contribute new themes etc.


Neat idea for very small, quick and dirty sites.

Pros:

* No build process (yet), source == build & no need for a dev server
* Easy to integrate client-specific code (e.g. browser compatibility)

Cons:

* How to transpile to CSS/JS?
* Apples-to-apples, slower than static sites with a "build" process
* SEO?

TBD:

* Client processing speed


That demo looks amazing w/o JavaScript: http://i.imgur.com/sgC4sdN.png (</sarcasm>). Adding JavaScript as a SPOF cannot be a good approach -.-


I would wrap the markdown in a gigantic <pre> so that when noscript is enabled, you end up seeing at least markdown, and not a wall of letters.
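
Something like this on the rendering side would do it (a sketch; the id and the Showdown call are just illustrative, not jr's code):

    // The page ships the markdown wrapped in a <pre>, e.g.:
    //   <pre id="source"># My post ...</pre>
    // so noscript users see readable plain text. The renderer just unwraps it:
    var source = document.getElementById('source');
    document.body.innerHTML = new Showdown.converter().makeHtml(source.textContent);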

As for advantages of this, it seems like it would be better for a more open web. If you put this on the web, it doesn't matter where you serve it from, I can send you really good, well-formed patches. One thing the web currently lacks is the ability to participate at web scale. If I see something that I can improve and can get to the source, I'll send a patch or pull request. These days 'view source' means 'view generated code that nobody has seen'. This gets back to the roots of the web.


Do we really need this level of extreme optimizing? Today's modern Web browsers already do most of the layout and rendering with CSS, plus CDNs and client-side caching.


Broken? "curl -i http://xeoncross.github.io/jr/" Content-Type: text/html.


I have had a static, static site generator for many years. It is called gedit. I've heard Notepad++ is pretty good too.


I think the idea is pretty neat, but the flash of unstyled content before the js kicks in really ruins it for me.


> $then = "email" + "@" + "davidpennington.me"

Is this supposed to be PHP or Javascript?


So here is my dilemma. I wanted to write it in Javascript, but the "then" looked lonely without some kind of starting context. I was going to write it in Go, but "var then..." kind of messed up the sentence. So I wrote it in PHP since everyone knows what that horrible $ is all about.


'.' is the proper string concatenation operator in PHP.
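
That is, as written with '+' the footer line reads as JavaScript; the PHP form would concatenate with '.':

    // JavaScript (what the '+' version is):
    var then = "email" + "@" + "davidpennington.me";
    // The PHP equivalent would be:
    //   $then = "email" . "@" . "davidpennington.me";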


I made a less-tricky, less-cool thing like this[0] and I still use it on my site today. It's really great to not have to recompile markdown or do anything other than a git-push on text files. The fact that view-source works on yours is super cool, though; nicely done!

[0]: https://github.com/haldean/docstore


This is cool. I value pageload speed though, so I wouldn't use this myself.


All we need is browsers that support and render Markdown.

Good work!!


This is a fantastic little toy project! Thanks for showing me. I'm thinking about building this into something for hosting on servers with extremely limited CPU, such as an RPi.


I'm fine with octopress but if I had to change to another static site generator I'd probably go with 'Go' due to speed improvements.


Honest question: why would you care about the speed of a static site generator? As long as it generates the site in a reasonable time I'm fine with it.


Hugo is a fully featured SSG written in Go. It's considerably faster than other SSGs and has a very easy installation.

http://hugo.spf13.com


I know spf13 :-), that's what I had in mind.


This is a great idea that might be useful on free hosts that only allow static pages but I don't think that I would use it otherwise.


I would be curious to see what other uses it has.



