
Google has tracked office air quality for years, e.g. through the Aclima partnership.

That's basically because Larry Page really, really cares about it. He's kinda like your friend with a Kubrick obsession who can't stop bringing up facts:

https://twitter.com/elonmusk/status/727189428142235648

He was on to something! Jokes aside, I think he just has a heightened sense of smell and that's why he had air filters stronger than the law requires installed everywhere, at least in Mountain View.


> I think he just has a heightened sense of smell and that's why he had air filters stronger than the law requires installed everywhere

Quite possible. I have had a very strong sense of smell ever since I started hormone replacement therapy, and it drives me crazy how people can put up with some bad smells.

- I can pinpoint mold with surprising accuracy -- it's everywhere, you'd be scared.

- I can tell when a bathroom is not properly ventilated -- had the office test it out and they changed the entire air ventilation system of that part of the building because it was not up to code.

- I can even smell when the driver in front of me is smoking in their car with the windows closed, while I'm driving behind them with my own windows closed -- so I changed my air filters.

I'm absolutely certain that a lot of buildings are lacking in air quality, and if I were the owner of any industrial or commercial building I'd make sure to invest in top-quality air filters.


It was even crazier: when the original download server was written, local disk was faster, mainly because the network wasn't too fast (rack locality was a concern, way back when), but also because GFS chunk servers weren't, either. By the time of the rewrite, Firehose and co. were being deployed everywhere, D did a better job at serving bytes and, later, local disk use was placed in a lower QoS level. Unless you were one of the few teams that had a good rationale for dedicated machines, if you fought D for I/O time on a given disk, you invariably lost.


I bet there's a very long tail, i.e. the majority of images are rarely accessed, if ever.


Resizing images in Golang is not as trivial as you might imagine at first. You need to be able to handle all sorts of formats and color spaces -- you'd be surprised by the kind of weird garbage your users will upload. Then you need SIMD, not pure Golang solutions, or performance will suffer. So you end up adopting a wrapper around libvips or similar, at which point you start to wonder whether you should have stuck with C/C++ in the first place. (It all depends on how much of Go's feature set you use, or whether it's just a nicer, safer C for you.)
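
To make the "pure Golang" path concrete, here is a minimal sketch using only the standard library plus golang.org/x/image/draw. The file paths, fixed output width and JPEG-only output are illustrative assumptions; this is the slow, CPU-bound baseline the paragraph above warns about, not a libvips-backed solution.

    package main

    import (
        "image"
        _ "image/gif" // registered so image.Decode can sniff GIFs too
        "image/jpeg"
        _ "image/png" // likewise for PNGs
        "os"

        "golang.org/x/image/draw"
    )

    // resizeToWidth decodes inPath, scales it to the given width while
    // preserving the aspect ratio, and writes the result out as JPEG.
    func resizeToWidth(inPath, outPath string, width int) error {
        in, err := os.Open(inPath)
        if err != nil {
            return err
        }
        defer in.Close()

        src, _, err := image.Decode(in) // format is sniffed from the stream
        if err != nil {
            return err
        }

        b := src.Bounds()
        height := b.Dy() * width / b.Dx()
        dst := image.NewRGBA(image.Rect(0, 0, width, height))

        // CatmullRom is decent quality but CPU-heavy in pure Go; this is
        // the step SIMD-backed libraries such as libvips do much faster.
        draw.CatmullRom.Scale(dst, dst.Bounds(), src, b, draw.Over, nil)

        out, err := os.Create(outPath)
        if err != nil {
            return err
        }
        defer out.Close()
        return jpeg.Encode(out, dst, &jpeg.Options{Quality: 85})
    }

In practice, a libvips wrapper (bimg, govips and friends) replaces the decode/scale/encode steps above with SIMD-accelerated native code, which is exactly where the "should I have stuck with C/C++" question comes from.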


But Yahoo search itself has been, for the past four years, a mishmash of Bing, Google and Yahoo results. I wonder which mix of the three Yahoo will pass along when using its API, versus using its search page.


Google retrains models all the time. They gave a presentation about ML and production last year:

https://www.usenix.org/conference/srecon18asia/presentation/...

You can see there's a section on privacy and deleted data as well.

Each team has its own policies, because each product is different: at a bare minimum they might be using different storage systems, but it's very likely that their data pipelines are quite different, too. In any case, each team's targets are at least as strict as any published ones, of course.


Additionally, it's probably useful for external folks to know that each product team has a designated product counsel and [usually] a designated/dedicated POC on several different security & privacy teams, all of whom provide strict guidance on what is and is not allowed, and why.


Also, it is likely related to the timeframes they are giving for auto-deletion. Three months is good enough for ML; if it were only a day, the data would not be worth anything to G.


I'm an ex-employee. There's actually a whole team whose sole job is making sure that all other teams have policies, measurements and alerting in place for deleting data. They'll chase you if any of the above doesn't hold. It's non-trivial work and it slows down your development if you believe in releasing early and fast. For everybody else, it makes total sense.

I bet it's not free to run, but it's cheaper and easier than elsewhere, because Google's infrastructure is built in-house and mostly integrated. I don't envy other companies that want to do the same.


Maybe they're the same service behind the scenes. Would you store the same music twice, if you had to run both?


They don't seem to be, at least not in terms of content: YouTube Music has songs that Google Play doesn't, and vice versa.

The YouTube Music experience is kind of odd. It seems that you can only listen to full albums if someone has uploaded them to YouTube as separate songs. That makes sense, but it means that a lot of the music I listen to is currently nowhere to be found on YouTube Music.



Is this on desktop or mobile? It could match the theory in the grandparent post that they preferred sticking to one backend. That also allows handling conflicts in one place, with one protocol. E.g., what would the behaviour be if you edited the home address to a new ZIP code on your phone while offline? What if you then made the same change from your laptop, but set a ZIP+4 code? And what happens when your phone comes back online?
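
To make that conflict question concrete, here is a hypothetical last-write-wins rule sketched in Go. The field name, the timestamps and the rule itself are assumptions for illustration only, not how Google Contacts actually reconciles offline edits.

    package main

    import (
        "fmt"
        "time"
    )

    // FieldEdit is one offline edit to a single contact field, stamped
    // with the device's clock at the moment of the edit.
    type FieldEdit struct {
        Field    string
        Value    string
        EditedAt time.Time
    }

    // merge keeps whichever edit carries the later timestamp; on a tie it
    // keeps the first argument. Plain last-write-wins, nothing fancier.
    func merge(a, b FieldEdit) FieldEdit {
        if b.EditedAt.After(a.EditedAt) {
            return b
        }
        return a
    }

    func main() {
        phone := FieldEdit{"home_zip", "94043",
            time.Date(2019, 6, 1, 10, 0, 0, 0, time.UTC)} // edited offline
        laptop := FieldEdit{"home_zip", "94043-1351",
            time.Date(2019, 6, 1, 10, 5, 0, 0, time.UTC)} // edited later

        // When the phone syncs again, the server has to pick one value.
        fmt.Println(merge(phone, laptop).Value) // "94043-1351" under this rule
    }

Keeping a single backend means this decision is made once, server-side, instead of being re-implemented (and possibly diverging) in every client.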


It's on mobile. I haven't tested it much, but I'm really not interested in doing so. I only keep Google Maps because my wife and I like to share locations, which I'm hoping to do eventually via some direct method between our phones anyway.


Oh, God. I forgot about the empty src bug. YouTube wasn't the only substantial Google service affected by it from time to time. But I remember it differently.

Yes, it triggered a GET for /. But that generated HTML (usually the service's homepage, as it was in our case), which the browser would attempt to parse as an image, obviously failing. It would not trigger recursive fetching of all the resources on the page. Even without recursion, it already inflicted major damage, because our service's homepage was dynamic, while the resources linked from it were mostly static (and thus a lot cheaper, as well as cacheable). I think I would have noticed if it had multiplied other traffic, not just the homepage.

This was the bane of my existence for many months. Every few weeks I would have to fire up Dremel and try to figure out what was causing the spurious page loads. I hated and still hate SQL, so that was no fun. I knew it was time to investigate thanks to our human monitoring system: our PMs would get excited or puzzled by a sudden jump in the page view dashboards. (They lived by those graphs...)

Thank you, Chris and co., for your contributions to killing the browser version from hell.


The particular change Chris is referring to was implementing delayed loading for the related video thumbnails. As you may imagine, the /watch page gets most of the traffic and is hyper-optimized. Now imagine every load of /watch hitting the much more dynamic and personalized, and much less optimized, homepage 10-15 times :)

