The site lists a pre-paid US plan at $103/GiB ($0.14 / (1.39 MiB / 1024 MiB/GiB)). That doesn't seem like "plans change" levels of error; it's more than twice what I've paid for all service (SMS, voice, data) since I first got a plan that included (allegedly unlimited, but actually not) data a decade ago…
The site also says "Prices were collected from the operator with the largest marketshare in the country" but follows it with "Because these numbers are based on the least expensive plan, they are best case scenarios," which doesn't logically follow. But going with that: the largest carrier in the US is apparently AT&T, and the first plan listed on their site offers 4GB for $50, or ≈$12.50/GiB, which is an order of magnitude lower than the figure on the site?
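To make the gap concrete, here's the arithmetic as a quick sketch (the prices are the ones quoted above; the site's figure is converted at 1024 MiB per GiB, and the AT&T plan is treated as a flat $50 for 4 GB):

```typescript
// Per-GiB price implied by the site's US figure: $0.14 for 1.39 MiB.
const sitePerGiB = 0.14 / (1.39 / 1024);   // ≈ 103.14

// Per-GiB price for the AT&T plan mentioned above: $50 for 4 GB
// (treating the marketed 4 GB as roughly 4 GiB for comparison).
const attPerGiB = 50 / 4;                  // 12.50

console.log(sitePerGiB.toFixed(2));                     // "103.14"
console.log(attPerGiB.toFixed(2));                      // "12.50"
console.log((sitePerGiB / attPerGiB).toFixed(1) + "x"); // "8.3x" gap
```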
But the conclusions are wrong by orders and orders of magnitude.
For example, your calculation is that 1.39 MB costs Canadians $0.17; that would mean 1 GB costs Canadians about 125 US dollars. There isn't a single plan in Canada that charges $125 for 1 GB of data.
This isn't just a minor inaccuracy of ±10%; it's off by a factor closer to 15-20x.
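For reference, the same conversion applied to the Canadian figure, plus what a 15-20x overestimate would imply about real per-GB prices (a quick sketch using only the numbers in this thread):

```typescript
// Per-GB price implied by $0.17 for 1.39 MB (same 1024 conversion as above).
const canadaPerGB = 0.17 / (1.39 / 1024);  // ≈ 125.24

// If that figure is 15-20x too high, the implied real-world price per GB:
const impliedHigh = canadaPerGB / 15;      // ≈ 8.35
const impliedLow = canadaPerGB / 20;       // ≈ 6.26

console.log(canadaPerGB.toFixed(0));                             // "125"
console.log(impliedLow.toFixed(2), "-", impliedHigh.toFixed(2)); // "6.26 - 8.35"
```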
Using jQuery is definitely a different style of development than using something like Angular/Vue/React. I tried to clarify in the post (under "The Big Picture") that the data shouldn't be construed as me saying "React is slower than Vue or jQuery" but rather that there are characteristics of the way we build when we use React (a combo of ecosystem, documentation, technical approach, etc.) that lead to a lot more work happening on the device. The question is whether that's something we're OK with.
Hopefully that came across in the post! I certainly don't want anyone thinking this is like a benchmark or anything that is pitting jQuery execution versus React execution or similar.
Hopefully that's not the way it comes off! The entire last section of the article is my attempt to make it clear that this _doesn't_ happen because we're bad people.
> So clearly, folks who have built a heavy site are bad, unethical people, right?
> Here’s the thing. I have never in my career met a single person who set out to make a site perform poorly. Not once.
> People want to do good work. But a lot of folks are in situations where that’s very difficult.
> The business models that support much of the content on the web don’t favor better performance. Nor does the culture of many organizations who end up prioritizing the next feature over improving things like performance or accessibility or security.
First off, you're right: any metric can be gamed. Even so, Google already factors performance into its ranking algorithm, so some level of reasonable accuracy must already have been agreed upon.
Ensuring a certain level of performance is not a very easy problem to solve. But it's doable.
A few others and I sat down with the AMP team a while back and chatted about what that would look like, and there were definitely a few different layers needed to get it to work.
The first is already in progress. There's a spec for something called "Feature Policy" (https://wicg.github.io/feature-policy/). Feature Policy would let you tell the browser you don't need certain features, which would in turn let browsers take shortcuts (so to speak).
For example, you could declare that you will never use document.write. Enough declared features could serve as a way for Google (and others) to say "OK, this is going to be reasonably fast."
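As a rough sketch of what that declaration could look like, here's a server setting a Feature-Policy header (the Express setup and the exact feature tokens like `document-write` and `sync-xhr` are illustrative; check the spec above for what browsers actually support):

```typescript
import express from "express";

const app = express();

// Promise up front that this site never calls document.write or uses
// synchronous XHR, so a browser (or a crawler) can rely on that and
// take shortcuts. The feature names here are illustrative, not exhaustive.
app.use((_req, res, next) => {
  res.setHeader("Feature-Policy", "document-write 'none'; sync-xhr 'none'");
  next();
});

app.get("/", (_req, res) => {
  res.send("<h1>Reasonably fast, and it can prove it</h1>");
});

app.listen(3000);
```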
There's more needed of course: we had discussions about a standard related to declaring a limit on asset sizes, etc. But it's a start. And while it will always be a little fuzzy, the same is true of AMP. It's completely possible to build a slow AMP page.
The best Google can do is say "if they're doing X, then the odds are good that it's performant".
> Ensuring a certain level of performance is not a very easy problem to solve.
What I'm getting at is that maybe AMP is Google's way of making the problem easy. Sometimes, when a general problem is difficult, you're better off adding restrictions to make it tractable.
> It's completely possible to build a slow AMP page.
Can you elaborate? How easy would it be to test AMP pages for that?
> What I'm getting at is that maybe AMP is Google's way of making the problem easy. Sometimes, when a general problem is difficult, you're better off adding restrictions to make it tractable.
It absolutely is! (Sorry if I made it sound like it wasn't.) That's the goal of Feature Policy and other standards too: to add restrictions that make the problem easier to solve.
> Can you elaborate? How easy would it be to test AMP pages for that?
I keep meaning to make an example, but real work gets in the way. :)
The basic gist, though, is that AMP is, again, only a fuzzy approximation of good performance. You can still mess it up.
As for testing, the best thing to do is probably test from the origin server. That way you eliminate the Google cache layer and just focus on what the format itself actually accomplishes.
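A minimal sketch of that kind of test, assuming Node 18+ for the global fetch; both URLs are placeholders, and a real test would lean on something like WebPageTest rather than a single timed request:

```typescript
// Time how long it takes to download a page, start to finish.
async function timeFetch(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.text(); // make sure the whole body is pulled down
  return performance.now() - start;
}

async function main() {
  // Placeholder URLs: the AMP page on the publisher's own origin vs.
  // the same document served from the Google AMP cache.
  const originUrl = "https://example.com/article.amp.html";
  const cacheUrl =
    "https://example-com.cdn.ampproject.org/c/s/example.com/article.amp.html";

  console.log("origin:", (await timeFetch(originUrl)).toFixed(0), "ms");
  console.log("cache: ", (await timeFetch(cacheUrl)).toFixed(0), "ms");
}

main().catch(console.error);
```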
The conclusion, for anyone who doesn't want to click through, is that the vast majority of the performance benefits don't come from AMP itself. They come from A) the Google cache layer and B) the fact that Google preloads the AMP page in the background before you click on it.
> What we know about AMP is that the technical standard in itself enforces good performance. For example, CSS is limited to 50KB and only inline, custom JS is not allowed (other than via an iframe method), and all images are guaranteed to be lazy loading.
Hence, AMP pages load instantly compared to a "normal" web page; it's enforced by the standard. Right?
Wrong.
If you're including several MBs of CSS and JS in a render-blocking manner, and even more MBs of images that don't lazy load (which really isn't unusual), you're going to have a significantly slower page. There's a lot of hate for AMP, but those restrictions make a lot of sense.
FWIW, they did let performance factor, slightly, into ranking, but there was never a "fast" icon (one was rumored but never shipped). An icon like that would've easily motivated folks, with no need for AMP to be involved.
That being said, there are technical reasons why this hasn't happened yet. The good news is that some web standards (ex: https://wicg.github.io/feature-policy/) are being worked on that would (hopefully) allow Google to verify performance of sites without relying on AMP.
This would give Google no excuse to give AMP content special treatment, and would hopefully relegate AMP to what it should've been since day one: a framework for performance, but one that wasn't required or bolstered by any Google-colored carrots.
Not by itself, but it does provide a portion of what is needed. Specifically:
> The developer may want to use the policy to assert a promise to a client or an embedder about the use—or lack of thereof—of certain features and APIs. For example, to enable certain types of "fast path" optimizations in the browser, or to assert a promise about conformance with some requirements set by other embedders - e.g. various social networks, search engines, and so on.
Automated tooling is a must, yes. The riskiest part about relying on ONLY GH's solution (IMO) is the NVD/CVE limitation.
I agree, CVEs would be _awesome_ in theory. In reality, very few people file CVEs, so the coverage is iffy (~11% of npm package vulns and about ~67% of RubyGems vulns: https://snyk.io/stateofossecurity/).
But it goes beyond that. There was a great paper earlier this year (https://arxiv.org/abs/1705.05347) that highlighted many other issues: the lag between CVE and NVD (which is where all the useful info comes from), mismatched CPEs, nonexistent CPEs, etc.
I would love to see us get to a point where the CVE/NVD was enough, but we're far from it right now.
Completely agree! The post actually alludes to that a bit towards the end.
> Single Page Apps increase the amount of client side logic and user input processing. This makes them more likely to be vulnerable to DOM-based XSS, which, as previously mentioned, is very difficult for website owners to detect.
The more significant the work we do on the client, the more interesting it becomes as an attack vector.
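As a hypothetical sketch of what that looks like in practice, here's the classic DOM-based XSS shape: untrusted input from the URL fragment goes straight into the DOM without ever touching the server, which is exactly why it's so hard for site owners to detect (the `greeting` element and function names are made up for the example):

```typescript
// Vulnerable: attacker-controllable input (the URL fragment) is injected
// into the page as HTML. No request ever carries the payload to the server.
function renderGreetingUnsafe(): void {
  const name = decodeURIComponent(location.hash.slice(1)); // e.g. "#<img src=x onerror=alert(1)>"
  document.getElementById("greeting")!.innerHTML = `Hello, ${name}`; // DOM-based XSS
}

// Safer: treat the input as text, not markup.
function renderGreetingSafe(): void {
  const name = decodeURIComponent(location.hash.slice(1));
  document.getElementById("greeting")!.textContent = `Hello, ${name}`;
}
```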
Two things:
1. The data is in the process of being updated. Prices change quickly, and based on the trend of past changes, I fully expect these prices to drop as soon as I make the change.
2. While the ITU (and other data sources) try their best to look at all sources, I have to take them at their word. I've heard several times that they seem to have overlooked plan A or plan B, so it wouldn't surprise me if there are other plans lurking out there.
Long story short, it's absolutely best to consider this a gauge/approximation rather than a scientifically exact number.