forgotpasswd3x's comments

It appears that the entire dashboard has been relocated to the center panel area (see @1:58). Seems to be a strange choice.


Yeah, the interior looks rather unfinished... I wouldn't be surprised if the production dashboard had a different layout. On the other hand, minimizing the number of parts that need to be made twice for a left- vs. right-side driver would presumably keep costs down.


I don't know you, but you get my respect for that statement.


This is really amazing, man. It's honestly the first 3D printing application I've seen that I can see quickly improving thousands of lives. Just to think of all the people who right now can't afford this procedure, that soon will be able to... it's just really wonderful.


Next I need someone to do orthopedic shoe inserts. Mine cost $600 and only fit my running shoes.


I'm a physician, and I used to get custom made ones, until I found these for around $45 in the US: https://www.drscholls.com/productsandbrands/customfitorthoti...


I was having foot pain a few years ago and decided to give those a try before shelling out to see a podiatrist and get custom orthotics. My foot pain was gone within a week and only came back on one occasion: when I bought new shoes and forgot to move the inserts to the new pair.

Still, I've wondered if I'm missing out by not getting custom inserts. It sounds like (at least in your case) there was no major difference?


Wiivv[0] was recently crowdfunded. They make custom 3D-printed orthotic inserts from photos taken with your smartphone. Sadly it won't be generally available for quite some time yet, but I have my hopes up.

There's also Superfeet[1]. They're not custom, but there are many options.

[0] https://wiivv.com/

[1] https://www.superfeet.com/


Have you ever tried cheap generic inserts?


Yes, I have McDonald's size arches and off-the-shelf ones aren't sufficient. Going to look into some of these other suggestions though.


If you re-read the post, that's actually not his argument.


He's saying that the older someone is, the more their ideology defines them, which is itself probably inaccurate. And he says that that "could be a problem."


He's arguing about ideological stagnation. That's bad regardless of what the ideology is and whether you agree with its particulars.

To quote Max Planck, "Science progresses one funeral at a time." That's as true of other fields as it is of science.


Right, but that's different than saying he doesn't like old people because he disagrees with them.


Regarding B), can OS updates even be pushed to a locked phone?


Yes, via DFU mode.


If I recall correctly, there was a noticeable difference in the HN front page on Christmas Day: more unusual articles were upvoted than what usually gets posted. It could be due to a different mix of visitors, but it might also be that fewer people were voting, so posts that would normally be popular didn't get enough upvotes, and less generic posts stayed visible.


People still use QR codes?


For loading two-factor authentication secrets. Or anything else you can think of that has too much information to convey with a short link. If you are directing people to a web site using QR codes then you're doing it wrong. Well, wrong now that everyone has realized just how limited QR codes are.
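
To illustrate, here's a minimal sketch of the kind of payload an authenticator app expects, rendered as a QR code in the terminal. The secret, issuer, and account name are all made up, and it assumes the common qrencode CLI is installed:

    # Hypothetical TOTP provisioning URI; scanning this imports the secret.
    qrencode -t ANSIUTF8 \
      'otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example'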


I do for any printed media. It makes it easier for people to look up the corresponding electronic version of a document. QR codes have their uses, but they tend to fail when misused. A QR code is best when interacting with it is strictly optional.


They ever did?


Well I think people did, once, but then immediately discovered it wasn't worth the effort.


They're great for managing parts in a manufacturer's storeroom. A QR code is more information-dense than a barcode: a large QR code can hold a few thousand characters, versus a few dozen for a typical 1D barcode.


I use them at bus stops to pull up live tracking.


I think that www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant, compared to the thousands of other decisions that go into a product, that it's ridiculous we're even having this conversation. This is looking for optimization in the wrong places at its finest.


Do you think a user should have to write:

http://www.apple.com:80/

or:

http://www.apple.com/

Ever since the first web browsers, way back in the early 1990s, it has been commonplace to leave out the port number. The web browser adds it automatically.

Similar logic would lead us to leave off the "http". And similar logic would lead us to leave off the "www". The trend has been to simplify the URL as much as possible.


Nobody is suggesting the user should be forced to type in the protocol, the subdomain, or the port number. If the user types in:

apple.com

It should lead to where the user wants to go.

However, there are good reasons for using the www subdomain as the canonical URL, and it is also worth noting that some users will habitually type in www anyway.

If you don't want to include the subdomain in marketing material, then there's nothing stopping you from leaving it out, just as there's nothing stopping you from leaving out the protocol.


And even just typing:

apple

Should lead to where they want to go.


Which leads to 127.0.53.53 on Iceweasel 38.5.0 https://www.icann.org/namecollision


From the article:

Should I redirect no-www to www?

Yes.

Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly.

So I don't think you're advocating anything they don't.
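
If you're curious what any particular site actually does, a quick way to check from the shell (example.com is just a stand-in here; the real example.com doesn't redirect to www):

    # -s silent, -I headers only, -L follow redirects; show status lines
    # and Location headers. A canonical www setup answers the bare domain
    # with a 301 pointing at the www host.
    curl -sIL http://example.com/ | grep -iE '^(HTTP|Location)'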


Annoyingly, this can be difficult to do if the A record for your domain doesn't point to a webserver. There are other legitimate things it could point to, like an authentication domain controller or a session border controller. Do you really want to be running a webserver there, even if it's only redirecting?


I'm not familiar with session border controllers, but DCs can be and usually are placed somewhere at the apex of the domain, via SRV records.

If you really have trouble, you can tell your firewall to route ports 80 and 443 to another machine and everything else to the device you need. (I've run a hackier version of this that used netcat inside inetd: we didn't want a web server on the machine that owned the domain name, but there was another nearby web server cluster that we added a virtual host to. And we were fine running inetd and netcat on the machine.)
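
For completeness, a sketch of the firewall approach with modern Linux tooling; the addresses are hypothetical (10.0.0.1 is the machine that owns the domain, 10.0.0.2 the web server):

    # Rewrite web traffic aimed at the domain's A record to the web box;
    # all other ports still reach 10.0.0.1. Addresses are made up.
    iptables -t nat -A PREROUTING -d 10.0.0.1 -p tcp --dport 80  -j DNAT --to-destination 10.0.0.2:80
    iptables -t nat -A PREROUTING -d 10.0.0.1 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
    sysctl -w net.ipv4.ip_forward=1   # needed if the firewall forwards between interfaces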


I've done something similar but using iptables and dnsmasq to route DNS differently depending on where the requests were coming from and what domain they were asking about.


Your argument that eliding the www is better for users is logical, but you failed to make any argument whatsoever that it is significant. Given that you were responding to a statement that the difference isn't significant, you're basically attacking a straw man.


> www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant

I don't know; I've always been fascinated by the premium on domains that are just one character shorter, and by all the startups that drop a vowel to get a compressed (or maybe just available) name. That could reflect actual user preferences.

UX theorists convinced me over the last decade that user behavior is shaped by tiny moments and irritations that seem insignificant at first glance. An extra few hundred milliseconds of delay may be barely perceptible, but it can kill a site. It's not implausible that a few extra keystrokes could do the same.[0]

On the other hand, redirects seem like a happy medium, so long as they're fast enough. nasa.gov uses a redirect, that seems fine. Note that they were driven to that (from 'www'-only) after confused fans kept writing in to complain that "http://nasa.gov" was a dead end and that they didn't "even know the basics of running a website."[1]

[0] https://www.nngroup.com/articles/response-times-3-important-... N.B.: In that link, Jakob Nielsen recommended making "www" optional through redirects. It's been a while, so not sure his current thoughts, but the same reasons would apply today. https://www.nngroup.com/articles/compound-domain-names/

[1] https://blogs.nasa.gov/nasadotgov/2011/05/31/post_1306860816... NASA's case provides a real example of something Jakob Nielsen pointed out in the first link: usability is a slave to expectations. So if enough popular sites are using naked domains, and your naked domain just 404s, some users will dismiss your site as unreliable.


I was going to write the exact same thing as you. But then I thought about Chrome removing "http://" to avoid displaying useless and dense information to the user. What about www?


That would be misleading as example.com and www.example.com are not guaranteed to be the same site.


You could say the same for http:// and https://.


I don't think Chrome (or any browser) removes https:// from URL bars. They only remove http://.


Ah you're right! For some reason I had it in my head that Chrome was hiding that. I really just hate that it hides any part of the full URL.


Doesn't Safari mobile do that?


Safari desktop as well.


No, you couldn't; with http vs https you are accessing the exact same resource, just through a different protocol.


Not necessarily. I've written my share of rewrite rules that change behavior based on https vs http.

If that were true, you would never be redirected to an https version of a page.


Well, if the org hosting the content is being nice and following convention, sure.

But, there's nothing stopping me from running a webserver listening on port 80 (so, accessed in the browser at http://example.com) that serves a picture of a baseball. I can also run a webserver listening on port 443 only (with SSL/TLS set up, so accessed in a browser at https://example.com), on the same machine, that serves a picture of a dog instead.

This sort of breaks the rules/conventions, though, because you expect the resource to be the same by the nature of the URL you're using to access it. But nobody has to follow that rule.
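
If anyone wants to see that for themselves, a toy sketch (assumes python3 and openssl on the box, plus a self-signed cert.pem/key.pem for the TLS side; both servers need root for ports below 1024):

    mkdir -p http_root tls_root
    echo '<img src=baseball.jpg>' > http_root/index.html   # what http:// serves
    echo '<img src=dog.jpg>'      > tls_root/index.html    # what https:// serves
    (cd http_root && python3 -m http.server 80) &           # plain HTTP on :80
    (cd tls_root && openssl s_server -accept 443 -cert cert.pem -key key.pem -WWW) &  # TLS on :443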


That's one way to set it up, but there's no guarantee of that. Servers may switch anything based on TLS status or really any other property of a client request.


I agree with this. The people saying it's technically possible to get a different webpage at that resource address are missing the point - it's trivial to do A/B with the same site, same port, same protocol, based on user IP, time of day or a RNG.

The point of a URL is to identify a single address owned by one guy. Removing the www subdomain means you have two addresses, possibly owned by two guys.


The protocol name is part of the URL, so it is possible and technically valid for the https version to lead to different content than the http version. Some things are certainly different: for example, ads and other linked resources on an https page all need to be served over https, while an http page has no such requirement.


Technically I guess you could make it a subdomain, but no one would actually do that on a production site; in my 14 years I have never seen it done. Not to mention that major search engine bots will look up both. More often than not, the general user will simply type in the domain name without the www.


`www` used to literally be a different host on the network (and in some cases, I'm sure it still is), specifically designated for WWW traffic. Think of universities in the 90s, which had their existing infrastructure and an Internet-facing host on their primary domain, and then wanted to add a web server. They may have had a firewall, but probably no load balancers, so routing port 80 around their primary host was much more complicated than just throwing up a new host and a DNS entry.


I regularly come across sites that only work with "www.". It's common with university sites that use the subdomain hierarchy a lot. For extra fun, the behavior is sometimes reversed depending on whether you are inside or outside their network.


Same with my uni. And then you have sites that only work with the www, and others that only work without it.

But it makes sense: the first subdomain before the uni domain specifies the faculty, many of which have their own datacenters. Many of those in turn have their own servers on their network, and www. is often one that was added later on.


My uni makes this extra fun with different sites on http and https.


Oh yeah. And then someone enables HSTS (with includeSubDomains) while a subdomain doesn't support HTTPS, so now you have to keep a browser around that is never, ever allowed to contact the parent site...


Apple actually does this on the iPhone.


I know IE used to do this too.


A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But you should always redirect www. if you are not using www. Many people instinctively type that into the address bar when you tell them a domain name.


A few microseconds, huh? I decided to do some experiments. Here's how long some 301s on common sites took:

    google.com:    529ms
    apple.com:     261ms
    microsoft.com: 142ms
    reddit.com:     61ms
Which is not only 100,000 times longer than a few microseconds but more importantly well above the perception threshold.


Where in the world are you that google.com's 301 takes over half a second? It's under 100ms for me.


I don't get the parent's numbers either. I did:

    $ time (curl -L www.reddit.com > /dev/null 2>&1)
    real	0m0.410s
    user	0m0.040s
    sys	0m0.008s

    $ time (curl -L reddit.com > /dev/null 2>&1)
    real	0m0.389s
    user	0m0.036s
    sys	0m0.012s
So for Reddit, I'm going to put the cost of the redirect at 21 milliseconds.


I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between when I hit enter (assuming that's 0ms) and the time when the request for the www.* address went out. I was planning to subtract off DNS time if necessary but in all cases it hit the cache and contributed 0ms.

I didn't have wireshark open so I don't really know what happened with google. It surprised me too. Maybe something had to be re-transmitted? Now it seems to take 90-100ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few us".


This is because reddit.com and www.reddit.com get you the http versions, both of which are redirected to https.

Try this: curl -sL https://{www.,}reddit.com -o\ /dev/null{,\ } -w "%{time_redirect}\n"


Can I ask you to explain that "-o\ /dev/null{,\ }" magic?


It's not magic -- it's some form of crude error.

So first off, what this does is expand to the expression:

    '-o /dev/null' '-o /dev/null '
Even if we remove the latter space by just using `{,}` instead of `{,\ }`, curl still returns error code 23 for me -- CURLE_WRITE_ERROR.

curl seems to interpret `'-o /wtf'` as a command to write to the file ` /wtf`, so this only makes sense if you have a directory called ` ` in the folder you're running from.

You can therefore do this correctly with:

    -o/dev/null{,}
and that correctly writes the contents to /dev/null without issuing a curl write error.


Thanks, it sure looks less ugly with -o/dev/null{,}. I couldn't find any other way to get curl to stay silent and still output redirect times; hence the crude hack. (Obviously my bash and curl versions had no problem with the spaces or I wouldn't have posted it.)


Using the curl tips from up this branch of comments, I programmed a highly sophisticated script and dropped it on GitHub. :)

https://github.com/dougsimmons/301debate


Nice!

I would have gone for a simpler loop:

echo -e "sec\tmethod\turl";for url in {https,http}://{www.,}{en.wikipedia.org,{google,reddit,facebook,youtube,netflix,amazon,twitter,linkedin,msn}.com,google.co.in}; do curl -sL "$url" -w "%{time_redirect}\t${url%:}\t${url#//}\n" -o/dev/null; done|sort


Wow. Thanks for teaching me that. Yeah, yours is better; I'll put it up on GitHub with a thank-you and maybe "embrace and extend" it.

Or learn Python and port it; I could swing it that way too. Cheers.


You're calculating load time the wrong way: you shouldn't measure from outside the process, which is what calling 'time' on curl from the shell does. Consider using curl's profiling options next time.


With `time` you also measured the processing time curl needs to evaluate the 30x response and reissue the http request to www.


Which is valid, since this processing time will be included in whatever application the user is using to access the website.


I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time would vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger here (regarding benchmarking) is that you're effectively also benchmarking curl: if curl (or whatever web client) handled 30x redirects very inefficiently, these results could lead to wrong assumptions.


There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.


In most cases it should stay low too: your browser should retain a keep-alive to the web server, so you're not throwing away the connection.

Future requests will auto-resolve due to caching of the 301.


Mediocre Wifi can easily add seconds of latency.


HTTP 301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests for the apex zone by the user should cause the browser to skip the first request entirely.
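
You can check whether a particular 301 opts out of caching by looking at its headers (example.com as a stand-in):

    # 301s are cacheable by default; a Cache-Control: no-store/no-cache or
    # an already-past Expires header would be the exception.
    curl -sI http://example.com/ | grep -iE '^(HTTP|cache-control|expires)'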


Touché.


And this is EXACTLY why you should use www.


Microseconds? That's definitely not the case. It takes tens of milliseconds just to leave your internet router on a busy home wifi network. A 301 redirect is an extra network round trip for no gain and much more (perceived) latency.


That is actually false. The majority of people no longer type in www. Over the last 10 years, for example, branding has completely dropped the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority omit the www in a search. With Chrome now the dominant browser, and with browsers allowing search from the address bar, lookup without www is pretty much standard, again assuming your users are under 35.


"many people" != "the majority". Having the site not work with www. would be very stupid.


Your simple explanation still fails to prove that's why Etsy is losing so much value.


With everything revealed in the Snowden leaks, it doesn't seem like the good people are having much of an impact.


I left the NSA a few months ago, after working for them for four years, for that very reason. I realized nothing I did was going to affect what they were doing, and while I personally couldn't effect any change, I could at least leave and not contribute to it. So I did.


I just wanted to say that reading your comment was encouraging and made me happy.


Did you consider IAD or one of the defense contractors focused on highly secure technology? Hell, even groups doing NSF- or DARPA-funded R&D on secure systems. There's a ton of stuff going on out there that either (a) benefits the security of government/military or (b) could benefit everyone at some point.

I can understand if (a) didn't interest you, but being ex-NSA might help with positions doing (b). Not sure how INFOSEC people would react given the climate, though. Wise ones should see getting the job as evidence of skill and quitting it as a reference for character.


Thank you not only for posting this, but also for refusing to contribute.


Thanks for posting this.


I commend you for your decision.

