
Anyone using GitLab have any insight on how well their operations are running these days?

We originally left GitLab for GitHub after being bitten by a major outage that resulted in data loss. Our code was saved, but we lost everything else.

But that was almost 10 years ago at this point.



We use GitLab daily. Roughly 200 repos, with pushes to ~20 of them on any given day. There have been a few small, unpublished outages that we determined were server-side (we have a geo-distributed team), but as a platform it seems far more stable than it was 5-6 years ago.

My only real complaint at the moment is that the webhooks that are supposed to fire on repo activity have been a little flaky for us over the past 6-8 months. We have a pretty robust chatops system in play, so these things are highly noticeable to our team. Delivery is generally consistent, but on a few occasions hooks failed to post to our systems, which forced us to chase threads until we determined our ingestion service never even received the hooks.

That aside, we’re relatively happy customers.


FWIW, GitHub is also unreliable with webhooks. Many recent GH outages have affected webhooks.

They are pretty good, in my experience, at *eventually* delivering all updates. The outages take the form of a "pause" in delivery, every so often... maybe once every 5 weeks?

Usually the outages are pretty brief, but sometimes they can last up to a few hours. Basically I'm unaware of any provider whose webhooks are as reliable as their primary API. If you're obsessive about maintaining SLAs around timely state, you can't really get around maintaining some sort of fall-back poll.
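The fall-back poll described above boils down to reconciliation: webhooks are the fast path, and a periodic poll of the provider's list API catches anything they dropped. A minimal sketch, where `fetch_recent` and the event shape are hypothetical stand-ins for a real client (e.g. something wrapping a "list recent events" endpoint):

```python
from typing import Callable, Dict, List, Set

def find_missed(delivered_ids: Set[str],
                fetch_recent: Callable[[], List[Dict]]) -> List[Dict]:
    """Return events the webhook path never delivered.

    delivered_ids: IDs already ingested via webhooks.
    fetch_recent:  hypothetical poll of the provider's list API.
    """
    return [e for e in fetch_recent() if e["id"] not in delivered_ids]

# Usage with a fake poll result:
delivered = {"evt-1", "evt-2"}  # ingested via webhooks
polled = lambda: [{"id": "evt-1"}, {"id": "evt-2"}, {"id": "evt-3"}]
missed = find_missed(delivered, polled)  # the dropped event, "evt-3"
```

Run this on a timer; anything `find_missed` returns gets fed into the same ingestion path the webhooks would have used.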


> you can't really get around maintaining some sort of fall-back poll.

This has been my experience with GitHub Actions as well, which I imagine rely on the same underlying event system as webhooks.

Every so often, an Action simply isn't triggered, or the run otherwise disappears into the void. So for Actions that trigger on push, I usually just add a cron schedule to them as well.
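In workflow terms, that's just listing both triggers. A sketch, with an illustrative branch name and interval:

```yaml
# Runs on push, plus a cron schedule as a safety net in case the
# push event is dropped. The 30-minute interval is illustrative;
# pick whatever staleness you can tolerate.
on:
  push:
    branches: [main]
  schedule:
    - cron: "*/30 * * * *"
```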


Completely agree on all points. We've had dual remotes running on a few high traffic repos pushing to both GitLab and GitHub simultaneously as a debug mechanism and our experiences mirror yours.
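One common way to set up dual remotes like this is a single logical remote with two push URLs, so every `git push origin` updates both hosts in one step (remote name and URLs here are illustrative):

```shell
# Add both hosts as push URLs on one remote. Note: once any push URL
# is set, pushes go only to the listed push URLs, so add both
# explicitly, including the original. Fetches still use the fetch URL.
git remote set-url --add --push origin git@gitlab.com:org/repo.git
git remote set-url --add --push origin git@github.com:org/repo.git
```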


Not sure what specific operational services are of interest - but here's a link to their historical service status [0]

[0] https://status.gitlab.com/pages/history/5b36dc6502d06804c083...


We’re using GitLab; loads of issues and outages. We want to move to GitHub.


No issues on GitLab.

Haven't seen any outage from GitLab in like, ever.


That has definitely not been my experience. I like GitLab, but they've had regular incidents all along. If a git push failed, I wouldn't question it; it's almost never my network. I'd just open GitLab's own GitLab instance and find the currently active issue.

To GitLab's credit, their observability seems good, and they do a good job of communicating and resolving incidents quickly.

Some companies that shall not be named have status pages that always show green and might as well be a static picture. Some use phrases like "some customers may have experienced partial service degradation" to mean "complete downtime." GitLab also has incidents, but they're a lot more transparent about them: you can just open the issue tracker and there's the full incident, complete with diagnosis.


Hmmm.

You must be doing GitLab wrong.



Never had any problems really.

GitHub on the other hand has outages more frequently.


My org hosts it on-prem, and while I don't like the way pages are organized for projects, I only really interact with the PR page, and that is laid out well. Most of my interaction with git happens from my terminal anyway, so ¯\_(ツ)_/¯



