Yeah, the interior looks rather unfinished... it would be unsurprising to me if the production dashboard had a different layout. On the other hand, minimizing the number of parts that need to be made twice for left- vs. right-hand-drive models would presumably keep costs down.
This is really amazing, man. It's honestly the first 3D printing application I've come across that could quickly improve thousands of lives. Just to think of all the people who can't afford this procedure right now, but soon will be able to... it's just really wonderful.
I was having foot pain a few years ago and decided to give those a try before shelling out to see a podiatrist and get custom orthotics. My foot pain was gone within a week and only came back on one occasion: when I bought new shoes and forgot to put the inserts in the new pair.
Still, I've wondered if I'm missing out by not getting custom inserts. It sounds like (at least in your case) there was no major difference?
Wiiv[0] was recently crowdfunded. They make custom 3D-printed orthotic inserts, created from photos taken with your smartphone. Sadly it won't be generally available for quite some time yet, but I have my hopes up.
There's also Superfeet[1]. They're not custom, but there are many options.
He's saying that the older someone is, the more their ideology defines them, which is itself probably inaccurate. And he says that that "could be a problem."
If I recall correctly, there was a noticeable difference in the HN front page on Christmas day. More unusual articles were upvoted than what usually gets posted. It could be due to the difference in visitors, but it might also be just that fewer people were voting, so there weren't enough upvotes to keep the normally popular posts visible, which let less generic posts stay up.
For loading two-factor authentication secrets. Or anything else you can think of that has too much information to convey with a short link. If you are directing people to a web site using QR codes then you're doing it wrong. Well, wrong now that everyone has realized just how limited QR codes are.
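For instance, the otpauth:// provisioning URI that authenticator apps scan is far too long to type by hand but trivial to encode. A quick sketch with the qrencode CLI (the account name and secret below are made-up examples):

# encode a TOTP provisioning URI into a scannable PNG
qrencode -o totp.png 'otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example'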
I do for any printed media. It makes it easier for people to look up the corresponding electronic version of a document. QR codes have their uses, but they tend to fail when misused. A QR code is best when interacting with it is strictly optional.
I think that www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant, compared to the thousands of other decisions that go into a product, that it's ridiculous we're even having this conversation. This is looking for optimization in the wrong places at its finest.
Ever since the first web browsers, way back in the early 1990s, it has been commonplace to leave out the port number. The web browser adds it automatically.
Similar logic would lead us to leave off the "http". And similar logic would lead us to leave off the "www". The trend has been to simplify the URL as much as possible.
Nobody is suggesting the user should be forced to type in the protocol, the subdomain, or the port number. If the user types in:
apple.com
It should lead to where the user wants to go.
However, there are good reasons for using the www subdomain as the canonical URL, and it is also worth noting that some users will habitually type in www anyway.
If you don't want to include the subdomain in marketing material, then there's nothing stopping you from leaving it out, just as there's nothing stopping you from leaving out the protocol.
Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly.
So I don't think you're advocating anything they don't.
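For what it's worth, the redirect itself is tiny. A minimal sketch, assuming nginx and www.example.com as the canonical host:

server {
    listen 80;
    server_name example.com;
    # send apex traffic to the canonical www host, preserving the path
    return 301 http://www.example.com$request_uri;
}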
Annoyingly, this can be difficult to do if the A record for your domain doesn't point to a webserver. There are other legitimate things it could point to, like an authentication domain controller or a session border controller. Do you really want to be running a webserver there, even if it's only redirecting?
I'm not familiar with session border controllers, but DCs can be, and usually are, advertised at the apex of the domain via SRV records.
If you really have trouble, you can tell your firewall to route ports 80 and 443 to another machine and everything else to the device you need. (I've run a hackier version of this that used netcat inside inetd: we didn't want a web server on the machine that owned the domain name, but there was another nearby web server cluster that we added a virtual host to. And we were fine running inetd and netcat on the machine.)
I've done something similar but using iptables and dnsmasq to route DNS differently depending on where the requests were coming from and what domain they were asking about.
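Something like this handles the port-forwarding half (a rough sketch, assuming iptables, with 192.0.2.10 standing in for the dedicated web host; it also needs net.ipv4.ip_forward=1):

# send web traffic hitting this box to the dedicated web server
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.0.2.10:443
# masquerade so the web server's replies route back through this box
iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.10 -j MASQUERADE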
Your argument that eliding the www is better for users is logical, but you failed to make any argument whatsoever that it is significant. Given that you were responding to a statement that the difference isn't significant, you're basically attacking a straw man.
> www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant
I don't know, I've always been fascinated by the premium of domains that are just one character shorter, and all of the startups that exclude a vowel to get a compressed (or maybe just available) name. That could reflect actual user preferences.
UX theorists convinced me over the last decade that user behavior is shaped by tiny moments and irritations that seem insignificant at first glance. An extra few hundred milliseconds of delay may seem barely perceptible, but it can kill a site. It's not implausible that a few extra keystrokes could do the same.[0]
On the other hand, redirects seem like a happy medium, so long as they're fast enough. nasa.gov uses a redirect, and that seems fine. Note that they were driven to it (from 'www'-only) after confused fans kept writing in to complain that "http://nasa.gov" was a dead end and that they didn't "even know the basics of running a website."[1]
[1] https://blogs.nasa.gov/nasadotgov/2011/05/31/post_1306860816...
NASA's case provides a real example of something Jakob Nielsen pointed out in the first link: usability is a slave to expectations. So if enough popular sites are using naked domains, and your naked domain just 404s, some users will dismiss your site as unreliable.
I was going to write the exact same thing as you. But then I thought about Chrome removing "http://" to avoid displaying useless and dense information to the user. What about www?
Well, if the org hosting the content is being nice and following convention, sure.
But, there's nothing stopping me from running a webserver listening on port 80 (so, accessed in the browser at http://example.com) that serves a picture of a baseball.
I can also run a webserver listening on port 443 only (with SSL/TLS set up, so accessed in a browser at https://example.com), on the same machine, that serves a picture of a dog instead.
This sort of breaks the rules/conventions, though, because you expect the resource to be the same by nature of the URL you're using to access it. But nobody has to follow that rule.
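A contrived sketch of what that could look like, assuming nginx and two hypothetical document roots:

# plain HTTP on port 80 serves one thing...
server {
    listen 80;
    server_name example.com;
    root /srv/baseball;
}

# ...while HTTPS on port 443 serves something else entirely
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
    root /srv/dog;
}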
That's one way to set it up, but there's no guarantee of that. Servers may switch anything based on TLS status or really any other property of a client request.
I agree with this. The people saying it's technically possible to get a different webpage at that resource address are missing the point - it's trivial to do A/B with the same site, same port, same protocol, based on user IP, time of day or a RNG.
The point of a URL is to identify a single address owned by one guy. Removing the www subdomain means you have two addresses, possibly owned by two guys.
The protocol name is part of the URL, so it is possible, and technically valid, for the https version to lead to different content than the http version. Some things are necessarily different: for example, any ads and other linked resources on the https page all need to go through https, while on the http page there is no such requirement.
Technically I guess you could serve something different on the www subdomain, but no one would actually do that on a production site; in my 14 years I have never seen it done, not to mention that major search engine bots will look up both. More often than not, the general user will simply type in the domain name without the www.
`www` used to literally be a different host on the network (and in some cases, I'm sure, still is), specifically designated for WWW traffic. Think of universities in the 90s: they had their existing infrastructure and an Internet-facing host on their primary domain, and they wanted to add a web server. They may have had a firewall, and probably no load balancers, so routing port 80 around their primary host was much more complicated than just throwing up a new host and a DNS entry.
I regularly come across sites that only work with "www.". It's common with university sites that use the subdomain hierarchy a lot. For extra fun, the behavior is sometimes reversed depending on whether you are inside or outside their network.
Same with my uni. And then you have sites that only work with, and others that only work without, the www.
But it makes sense: the first subdomain before the uni domain specifies the faculty, many of which have their own datacenters. Many of those in turn have their own servers on their network, and www. is often one that was added later on.
Oh yeah. And then someone enables HSTS while a subdomain doesn't support HTTPS, so now you have to keep around a browser that is never, ever allowed to contact the parent site...
A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But you should always redirect www. if you are not using www. Many people instinctively type that into the address bar when you tell them a domain name.
$ time (curl -L www.reddit.com > /dev/null 2>&1)
real 0m0.410s
user 0m0.040s
sys 0m0.008s
$ time (curl -L reddit.com > /dev/null 2>&1)
real 0m0.389s
user 0m0.036s
sys 0m0.012s
So for Reddit, that puts the cost of the redirect at about 21 milliseconds.
I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between when I hit enter (assuming that's 0ms) and the time when the request for the www.* address went out. I was planning to subtract off DNS time if necessary but in all cases it hit the cache and contributed 0ms.
I didn't have wireshark open so I don't really know what happened with google. It surprised me too. Maybe something had to be re-transmitted? Now it seems to take 90-100ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few us".
So first off, what this does is expand to the expression:
'-o /dev/null' '-o /dev/null '
Even if we remove the trailing space by using `{,}` instead of `{,\ }`, curl still returns error code 23 for me -- CURLE_WRITE_ERROR.
curl seems to interpret `'-o /wtf'` as a command to write to the file ` /wtf`, so this only makes sense if you have a directory called ` ` in the folder you're running from.
You can therefore do this correctly with:
-o/dev/null{,}
and that correctly writes the contents to /dev/null without issuing a curl write error.
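You can see the expansion directly with echo:

$ echo -o/dev/null{,}
-o/dev/null -o/dev/null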
Thanks, it sure looks less ugly with -o/dev/null{,}
I couldn't find any other way to get curl to stay silent and still output redirect times. Hence the crude hack.
(Obviously my bash and curl versions had no problem with the spaces or I wouldn't have posted it)
You're calculating load time the wrong way: you shouldn't take the measurement outside the process, which is what calling 'time' from the shell does. Consider using curl's profiling options next time.
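For example, curl's -w format variables report its internal timers directly (see the --write-out section of man curl):

$ curl -s -o /dev/null -w 'redirect: %{time_redirect}s  total: %{time_total}s\n' -L reddit.com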
I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time would vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger here (regarding benchmarking) is that you're also benchmarking curl itself. If curl (or whatever web client) handled 30x redirects very inefficiently, these results could lead to wrong assumptions.
There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.
HTTP/301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests for the apex zone by the user should cause the browser to skip the first request entirely.
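A quick way to check what a given site actually sends is a HEAD request, keeping just the status line, redirect target, and caching headers (example.com is a placeholder):

$ curl -sI http://example.com | grep -i -E '^(HTTP|location|cache-control|expires)'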
Microseconds? That's definitely not the case. It takes tens of milliseconds just to leave your internet router on a busy home wifi network. A 301 redirect is an extra network roundtrip for no gain and much more (perceived) latency.
That is actually false. The majority of people no longer type in www.
For example, over the last 10 years branding has completely removed the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority leave off the www when searching. With Chrome now the dominant browser, and with browsers allowing search from the address bar, lookup without www is pretty much standard, again assuming your users are under 35.
I left the NSA a few months ago, after working for them for four years, for that very reason. I realized nothing I did was going to affect what they were doing; while I personally couldn't effect any change, I could at least leave and not contribute to it. So I did.
Did you consider IAD or one of the defense contractors focused on highly-secure technology? Hell, even groups doing R&D funded by NSF or DARPA on secure systems work. There's a ton of stuff going on out there that either (a) benefits security of government/military or (b) could benefit everyone at some point.
I can understand if (a) didn't interest you, but being ex-NSA might help with positions doing (b). Not sure how INFOSEC people would react given the climate, though. Wise ones should see getting the job as a reference for skill, and quitting it as a reference for character.