nicc's comments | Hacker News

Someone wants to sell his supplements, huh?

Title is: "Multivariate genomic scan implicates novel loci and haem metabolism in human ageing".


Nice try.


Did I miss something..?



Seems like you're exaggerating and being overly bitter, for some reason.

I'll just move on; this is nothing.


I don't get it.

The premise is stupid and incorrect.


Wrong website for saying this.

You'll be cancelled and flagged (and so will I). COULD NOT AGREE MORE!

I also want to see what happens now that, in 2020, the whole world has stopped. Let's see how many degrees the temperature drops even with most factories and economic activity gone for months.


Atmospheric CO2 sticks around for a long time. Cutting emissions slows warming, but doesn’t decrease temperatures.

You might be thinking of methane, which does break down reasonably quickly.


People are advocating hosting their own Git repos, but wouldn't those go down, too, and wreck the day even more?

Or, are you guys all devops geniuses better than those who work at GitHub?


You get to spend time fixing it rather than waiting for it to be done :p


I used to self-host everything, and while I never had hiccups or problems, I got tired of the additional work of maintaining a Git server and moved my personal projects to GitHub just so I wouldn't have to deal with it. I suppose the comments are fueled by frustration, and by the fact that when you have something in-house you can go directly to the devops team and push them to fix it, whereas with GitHub you just sit and wait. I never understood why people think that poking someone with a stick will magically speed up the fixing process, but there we go...


Of course everything goes down one day or another; it's just that it's been a very common occurrence with GitHub recently.

When my own side project has more uptime than GitHub, there's something wrong somewhere.


Git is notionally decentralized. It should be neither as difficult nor as uncommon as it is to have multiple repositories, but we have collectively let convenience get in the way of reliability.
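
For example, nothing stops you from keeping a second copy somewhere else and pushing to both (a minimal sketch; the "backup" remote name and host are placeholders):

    # add a second remote alongside origin
    git remote add backup ssh://git@backup.example.com/srv/git/myproject.git

    # push the branch to both copies
    git push origin main
    git push backup main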


But those wouldn't take most of the world's public git repos down all at once just because of a single issue. Single points of failure have a bad reputation for a reason.


Absolutely not, but I'd guess that most internal repos see far less traffic than GitHub does. So it's not that difficult to have a "stable enough" configuration for most organizations. Sure, for a 5-10 dev shop it's overkill, but if you have a mid-sized team and already have someone taking care of internal tooling/systems, it's not that bad an idea. Unless you have the "hosted here" syndrome.


Well, if you host it yourself, and it goes down, then you have control over getting it back up, rather than waiting for someone else to fix it.

Though really, if it is that important for it to be up, you should mirror it to at least one other provider (e.g. self-hosted and GitHub).
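
One low-effort way to do that is to give origin a second push URL, so a single push updates both copies (a sketch; the URLs are placeholders):

    # keep the existing fetch URL, but add explicit push URLs;
    # the first --add --push replaces the implicit push URL,
    # so list both destinations
    git remote set-url --add --push origin ssh://git@myserver.example.com/srv/git/project.git
    git remote set-url --add --push origin git@github.com:user/project.git

    # one push now updates both mirrors
    git push origin main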


I did work somewhere once with GitLab self-hosted in a VM. The person in charge believed it was important to reboot the VM every so often.

One day, for whatever reason, he couldn't bring the VM back up. Self-hosted GitLab was out for the rest of the day. I found this pretty funny.


When I was running Subversion, we had no downtime during office hours in 6 years.

Really, Git is designed to be serverless and decentralised; this centralised GitHub malarkey is probably wrong.
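
It really is serverless at the core: you can move full history around as plain files, e.g. with git bundle (a sketch; the file and repo names are made up):

    # inside the repo: pack all refs and history into one file
    git bundle create project.bundle --all

    # elsewhere: clone straight from that file, no server needed
    git clone project.bundle project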


Umm, so say you're hosting your own Git repos and not fiddling with the configuration much once you've found the perfect fit.

Isn't the only reason it would go down a network issue or a power failure? What other possible cause of system failure could there be?

I have been running HA, Pi-hole, MariaDB and my own API instances on my Raspberry Pi, and in the 22 days of uptime so far there have been 0 failures.


Many reasons, including your hosting provider going down.

At that point you'd have to create and manage a cluster.

You have to update servers, etc., and if you count the hours, it's many times more than working locally and waiting a couple of hours until the technicians at GitHub fix the problem for you.


Self-hosting means something different.

Also, most people don't need to provide access to hundreds of thousands of users, so they won't ever need a cluster of Git servers.[1] (Some bloated UI like GitLab may need more computing power to host even for a moderate number of users, though.)

Self-hosting Git is easy. Besides power failures, there is not much reason it could go down if you don't actively help it along. If you don't touch it besides OS updates, it can run 24 hours a day for many years without any effort. (The biggest issue is actually backups, but you have to think about that anyway if you run something on your own.)
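
For the record, "hosting" can be as little as a bare repository on any box you can SSH into (a minimal sketch; the host and paths are placeholders):

    # on the server: create a bare repository
    ssh git@myserver.example.com 'git init --bare /srv/git/project.git'

    # on your machine: point a remote at it and push
    git remote add origin ssh://git@myserver.example.com/srv/git/project.git
    git push -u origin main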

[1] https://news.ycombinator.com/item?id=23804373


Not sure why you're being downvoted; I absolutely agree. For a smallish project, especially one made up of volunteers, free time spent maintaining infra is time not spent working on your project.

For larger projects where you have the resources to have dedicated infra people, I guess it depends.


> Or, are you guys all devops geniuses better than those who work at GitHub?

Do you call IBM to maintain your pi-hole?


People are mad/frustrated/busy; that's how it goes.


> I'm fairly confident I could reduce their server costs to below their revenue in a matter of days or weeks.

> I couldn't afford to put in the time for free, but surely there are others who can.

OK.


With all the actual problems in the world, it's amazing how much energy gets wasted on idiotic stuff like this.

There are actual concentration camps in China TODAY.

So sad.


What I find even more insane is how much energy people spend fighting it, writing paragraphs and paragraphs of argumentation.

"That sounds far-fetched to me but I can vaguely imagine that these terminologies aren't helpful in these times, lemme search-and-replace, done"


And even more insane is how people can completely disregard common sense and their own opinion, and blindly obey an idea that sounds far-fetched to them just to placate easily offended people, even if it's just copy/paste (which it isn't, if only because people will need to re-learn an API).


This seems to be grossly miscalculated, at least for Italy.

No idea how these prices are calculated, but when I was still in Italy, I paid $15/mo. and only had 2GB. In contrast, here in Poland I pay $6/mo. and I get 20GB.

Also, I've never seen an infographic as confusing as this one.

Extremely bad job all around.



It says:

1. Packages were recorded up to a maximum of 60 per country – records beyond this number have negligible impact on the average

2. Averages are calculated as the MEDIAN of all recorded package prices/data limits

But that doesn't mean most citizens will use the median plan; I believe everyone will tend to use the cheapest plan. Is the average number really meaningful? For example, with plans at $5, $10, and $30, the median is $10, but most people would just pick the $5 one.


Prices have gone down significantly in Italy. You can get 70GB for €6/month nowadays.


Uhhhh, no.


Thank God you're here to save the day!


