Ask your friend to write a post about what happened. I can imagine the heat in the online conference: it's one thing for every node of a single service to go down; it's another for all the servers to go down at the same time.
Clearly they're using the same infrastructure for everything which IMHO is a huge mistake.
Let's not forget that most Google services have exceptional reliability and things like today's outage are incredibly rare. I'd bet that failures due to interoperability between disparate services would cause a lot more downtime than Google suffers right now if teams ran different infrastructures.
If they had multiple instances of infrastructure, then yes, the outage wouldn't have been prevented, but at least it would have been contained: only a few services would be down instead of the whole thing.
It's easy to demonstrate: when Google was down, how many other services on the internet went down with it, apart from those that depend on it directly? None. If Google were truly distributed internally the way the internet is, it wouldn't have this problem. This is clearly a symptom of a poorly thought-out infrastructure. I know folks who work there, and if you think Google's services architecture is great, you'd better think twice. There are many smaller businesses with much better availability than Google's.
Be aware that Search wasn't down, so it clearly doesn't share whatever the other services have in common.
-l (ell) lists the filenames of matching files only (overrides -n)
-A, -B, and -C specify how many lines of context after, before, or around the match to display.
"$@" adds any additional command line arguments passed to the functions.
* selects all the files and directories in the current directory.
So, without other arguments, the first four functions list matches with line numbers for all files (ignoring binary files) in the current directory and below, with 0 lines context (match only), 5 lines context after, 5 lines context before, and 5 lines context around the match respectively. The final function lists the filenames only and provides no context or line numbers.
-r searches directories recursively instead of single files; -n displays line numbers; -I ignores binary files. -A, -B, -C, and -l set how much context is shown: 5 lines of content after the match, before the match, or both; or just the filename and no content.
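Assuming the functions being described are thin grep wrappers, they might look something like this (the names g, ga, gb, gc, and gl are illustrative guesses, not the originals):

```shell
# Hypothetical shell functions matching the description above.
# The function names are assumptions; the grep flags are as described.
g()  { grep -rnI   "$@" *; }   # matches with line numbers, no context
ga() { grep -rnIA5 "$@" *; }   # plus 5 lines of context after each match
gb() { grep -rnIB5 "$@" *; }   # plus 5 lines of context before each match
gc() { grep -rnIC5 "$@" *; }   # plus 5 lines of context around each match
gl() { grep -rIl   "$@" *; }   # filenames of matching files only
```

Because "$@" forwards all arguments, extra grep options still work, e.g. `g -i pattern` for a case-insensitive search.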
Yeah, we're still facing issues with, erm, GitHub issues.
Also, while they haven't updated this blog post for a while, their status page has been very up-to-date and informative: https://status.github.com/messages
Consider this a lesson in serverlessness. (We have been similarly afflicted, but their Git backend seems to be up; and, even better, we have rediscovered what we'd stopped paying attention to: with Git, a centralized repo is just a convenience, not a requirement.)
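That last point is easy to see in practice. A minimal sketch (the names and paths are illustrative): any clone can serve as a remote for any other, with no hosted service involved.

```shell
# With Git, a central server is optional: clone directly from a peer.
cd "$(mktemp -d)"
git init -q alice
(cd alice && git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "first commit")
# Clone straight from the peer's directory -- no hosted service involved.
git clone -q "$PWD/alice" bob
git -C bob log --oneline   # shows "first commit", fetched peer-to-peer
```

The same works over SSH between two developers' machines, so work can continue (and be shared) while the central host is down.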