shashwat986's comments

A friend at Google's telling me it's partially back up. Anyone still facing issues?


Ask your friend to write a post about what happened. I can imagine the heat in the online postmortem: it's one thing for all nodes of one service to go down; it's another for all the servers to go down at the same time.

Clearly they're using the same infrastructure for everything which IMHO is a huge mistake


> Clearly they're using the same infrastructure for everything which IMHO is a huge mistake.

Let's not forget that most Google services have exceptional reliability and things like today's outage are incredibly rare. I'd bet that failures due to interoperability between disparate services would cause a lot more downtime than Google suffers right now if teams ran different infrastructures.


I'm not talking about running the same code or the same platform. I'm talking about not sharing infrastructure stuff at RUNTIME, including storage.

Google outages are not that rare, see https://en.wikipedia.org/wiki/2020_Google_services_outages

If they had multiple instances of infrastructure, then yes the outage wouldn't be prevented but at least it would be minimized as only a few services would be down instead of the whole thing.

It's easy to demonstrate: while Google was down, how many other services on the internet went down, apart from those that depend on it directly? None. If Google were truly distributed internally, the way the internet is, they wouldn't have this problem. This is clearly a symptom of a poorly thought-out infrastructure. I know folks who work there, and if you think Google's service architecture is great, you should think twice. Many smaller businesses have much better availability than Google does.

Note that Search wasn't down, so it clearly doesn't share whatever common component took all the other services down.


All fine for me


It's up again!


I've used these so much:

    function gc() { grep -rnI "$@" * ;}
    function gcA() { grep -rnI -A 5 "$@" * ;}
    function gcB() { grep -rnI -B 5 "$@" * ;}
    function gcC() { grep -rnI -C 5 "$@" * ;}
    function gcf() { grep -rnIl "$@" * ;}
Just helper functions built on top of grep.


And what do they do? It's not obvious.

edit: Thank you both!


-r enables recursion

-n enables line numbers

-I (eye) ignores matches on binary files

-l (ell) lists the filenames of matching files only (overrides -n)

-A, -B, and -C specify how many lines of context after, before, or around the match to display.

"$@" adds any additional command line arguments passed to the functions.

* selects all the files and directories in the current directory.

So, without other arguments, the first four functions list matches with line numbers for all files (ignoring binary files) in the current directory and below, with 0 lines context (match only), 5 lines context after, 5 lines context before, and 5 lines context around the match respectively. The final function lists the filenames only and provides no context or line numbers.
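A quick way to see what the two most common ones do; this is just a toy tree with made-up file names, not anything from the parent comment:

```shell
# The helpers from the comment above
function gc()  { grep -rnI "$@" * ; }
function gcf() { grep -rnIl "$@" * ; }

# Build a small directory tree to search
rm -rf /tmp/gcdemo && mkdir -p /tmp/gcdemo/sub
printf 'hello\nworld\n' > /tmp/gcdemo/a.txt
printf 'hello again\n' > /tmp/gcdemo/sub/b.txt
cd /tmp/gcdemo

gc hello     # matches with file:line prefixes, e.g. a.txt:1:hello
gcf hello    # filenames only: a.txt and sub/b.txt
```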


-r searches directories recursively instead of single files; -n displays line numbers; -I ignores binary files. -A shows 5 lines of context after the match, -B shows 5 lines before, and -C shows 5 lines both before and after; -l prints just the matching filenames and no content.


Yeah, we're still facing issues with, erm, Github issues.

Also, while they haven't updated this blog post for a while, their status page has been very up-to-date and informative: https://status.github.com/messages


Yeah, but their recovery estimates were completely off. Pulls, hooks, checks, and issues are still unusable.


Releases and GH Pages are still completely broken too.


> very up-to-date and informative

Is that satire? It said 2-hour ETA 5 hours ago and the last update was over two hours ago.


I see an update 7 minutes ago.

>12:56 British Summer Time

>The majority of restore processes have completed. We anticipate all data stores will be fully consistent within the next hour.


Every hour they promise it will be done by the next hour. I haven't been able to work all day so far.


Consider this a lesson on serverlessness. (We have been similarly afflicted, but their git backend seems to be up; and beyond that, we have rediscovered what we had stopped paying attention to: with Git, a centralized repo is just a convenience, not a requirement.)
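To illustrate the point: two clones can sync through any reachable repo, e.g. a plain bare directory on shared disk, with no hosted service in the loop. A sketch with made-up paths and names:

```shell
# Alice's repo with one commit
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init -q alice
git -C alice -c user.email=a@example.com -c user.name=Alice \
    commit -q --allow-empty -m 'initial'

# A bare repo standing in for the "central" remote
git clone -q --bare alice hub.git

# Bob clones from it and gets Alice's full history -- GitHub never involved
git clone -q hub.git bob
git -C bob log --oneline
```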


Yes, I agree, but here it’s not up to me to choose the infrastructure.

I wonder what the total cost of this ordeal must be. Surely in the tens of millions.


The latest message appears to be:

"We are validating the consistency of information across all data stores. Webhooks and Pages builds remain paused."

Which is a bit scary. Half my requests appear to hit some storage which is still many hours behind. They should be seeing that...


Something very similar happened to me. To this day, I don't run any python script without the `-i` flag
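For anyone unfamiliar with the flag: `python -i` runs the script and then drops into a REPL with the script's globals still alive, so a crash or a long computation doesn't cost you your state. A minimal sketch with a made-up throwaway script; piping a command stands in for typing it at the `>>>` prompt:

```shell
# A script whose state we want to poke at afterwards
cat > /tmp/calc.py <<'EOF'
result = 6 * 7
EOF

# -i keeps the interpreter alive after the script finishes
echo 'print(result)' | python3 -i /tmp/calc.py    # prints 42
```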


Yep, and the "End Session" button deletes the entry.


I'm so happy to see this! MDN has always been my go-to resource for any JS/CSS help


I'm hoping this translates into more budget for NASA, so it isn't just a pipe dream.


HN hug of death?


I like her description of the event: "Down on 1 knee. He said four words. And /r/isaidyes"

