Deis v1.0 – Production Ready (deis.io)
188 points by ferrantim on Nov 11, 2014 | hide | past | favorite | 74 comments


Let me preface this by saying that I'm a huge fan of Deis and the team. I went through the process of setting up a cluster a few months ago and they were incredibly helpful and responsive in IRC. Highly recommend it, and I would love to work with the project again.

That being said, I'm a little confused as to how Deis can be considered production ready when the underlying infrastructure components (etcd and fleet from CoreOS) aren't. Both are making great progress, and I wouldn't worry too much about using them directly, but "production ready" abstractions on top of something maybe-not-so-production-ready make me cringe a tiny bit.

https://github.com/coreos/etcd/blob/master/Documentation/pro...


Your point is taken: some of the underlying technologies we use are still under active development. That said, we've worked with enough production Deis deployments to know that etcd and fleet (for example) run much better than some might think.

As early adopters of these technologies, we have strong relationships (and often formal L3 support) with the teams involved. We are often the first to report issues as well as the first to test fixes. You can view this announcement as a vote of confidence in those projects and our ability to roll out fixes as bugs surface.

Hope to see you back in the #deis channel soon!


From reading the release notes, it looks like the current 1.0.0 release doesn't work with the latest stable Docker 1.3.1 due to incompatibilities with the TLS layer?

  https://github.com/deis/deis/releases
  https://github.com/boot2docker/boot2docker/issues/571
  https://github.com/deis/deis/issues/2230
Why not wait until that is resolved before announcing production ready? :-(


Project creator here. It's been a fun 16 months getting to this point. Happy to answer questions.


Great work! I played around with deis a few months ago and it was the first PaaS I was able to get up and running and actually, you know, deploy an app to!

Couple questions:

1. What about storage? I know you use ceph for the Docker registry, etc. Is that exposed so that apps can somehow use it?

2. When I tried it, performance was kind of lacking (for things like pushing apps and making config changes). Was this just a limitation of my setup (I used 3x2GB DigitalOcean nodes) or something you guys are working out?

Thanks again!


Hi, I'm also a maintainer. :) To answer your questions:

1) We expose Ceph in the router, so your apps could use it. This will have to be managed by you, however, as we don't have a user story for plugging apps into backing services aside from `deis config:set`[1]... yet.

2) Performance is still something we're working on. When you deploy an application using a Heroku buildpack[2], your app is compiled into a container which sits on top of Heroku's cedar stack, which is around 800MB[3]. It's a beefy stack so fleet will take some time deploying that image. This should only occur the first time it's scheduled onto a new host as future images will just use the cache for that image, significantly speeding up deployment times. If you deploy an app using a Dockerfile and it's been optimized correctly (we have an app sitting under 5MB for test purposes)[4], deployment times should be very speedy.

Hope that answers your question!

[1]: https://github.com/deis/deis/issues/231

[2]: http://docs.deis.io/en/latest/using_deis/using-buildpacks/

[3]: https://github.com/progrium/cedarish

[4]: https://registry.hub.docker.com/u/deis/example-go/
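To make the Dockerfile-optimization point concrete: the usual trick for tiny images (and roughly what minimal examples like the example-go one above do) is to build a statically linked binary and ship it on an empty base image, so the only layer fleet has to pull is the binary itself. A minimal sketch, assuming a Go binary built beforehand on the host (the file names here are made up):

  # Assumes a statically linked binary built first, e.g.:
  #   CGO_ENABLED=0 go build -o app .
  FROM scratch          # empty base image: no OS layers at all
  COPY app /app         # the app binary is effectively the whole image
  EXPOSE 5000
  CMD ["/app"]

Compared with an 800MB cedar-style stack, an image like this is only as large as the binary, which is why first-time scheduling onto a new host is so much faster.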


I'm not a huge Drupal or Wordpress fan but I can see how people would want to deploy either of those CMS options. I would be interested in hearing your thoughts regarding this discussion:

https://github.com/deis/deis/issues/448


This is one of the general problems about PaaS. One of the tenets of 12 factor is that "12 factor processes are stateless and share nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database"[1] which is why we want to avoid persisting state in the container.

Content Management Systems like Drupal or Wordpress typically store file uploads or configuration files directly onto the filesystem. With Deis or any other PaaS, this filesystem is ephemeral unless there is a shared filesystem mounted inside the container, typically via SSHFS[2] or some other remote filesystem.

We want to tackle this problem with a service gateway[3] for apps to attach backing services like a shared filesystem, but for the moment there's always S3 :)

[1]: http://12factor.net/processes

[2]: http://fuse.sourceforge.net/sshfs.html

[3]: https://github.com/deis/deis/issues/231
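To illustrate the "there's always S3" workaround: the app reads its object-store credentials from environment variables set through the Deis CLI, and writes uploads to the bucket instead of the ephemeral filesystem. The variable names below are hypothetical; use whatever your S3 client library or CMS plugin expects:

  deis config:set AWS_ACCESS_KEY_ID=<key> \
                  AWS_SECRET_ACCESS_KEY=<secret> \
                  S3_BUCKET=my-app-uploads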


I'll skip abstract questions and jump right to our concrete situation. We have a bunch of services (Go app, RethinkDB, redis, haraka MTA, server for an Angular app) running in Docker containers on Hetzner servers. Is Deis compatible with our use case?


Since the router only does http right now (or so it seems) it probably isn't a good fit for you, or at least not for all services. I'd be really keen on trying Deis with RethinkDB if it got TCP support.


Thank you!


What does Deis do about app logs? This is a main problem I have with Dokku at the moment.


Application logs are aggregated and shipped to a logger component[1]. If you scale processes and run `deis logs`, you'll see the log output of every running process for that app.

[1]: https://github.com/deis/deis/tree/master/logger


After reading through it a bit, I am wondering what the limitations are. I am a bit "scared" that Deis talks about Heroku so much and about HTTP. From the documentation it seems that the containers can ONLY use one port and that must be HTTP. Is that right? Also, the external storage is mentioned everywhere, but not exactly what it is. I have read that Postgres is a database for the applications. What about plain files? Can my apps persist files on the external storage?

If this is only about pushing the next Go-based Todo web app to some servers, I'm disappointed... :(

Use case: is it possible to run mail servers on it?


Yes, the Deis router only supports HTTP at the moment, sadly no TCP. I hope/think it's something they are going to work on after v1 is stable. It also has no persistent storage; the database you are referring to is for the Deis application itself. You could use it, but it's not wise. Realistically you should use something like Amazon RDS or Rackspace Cloud Databases, as each Deis node is ephemeral.

It's still really early days. I have seen a few mentions on their GitHub page that outline the features you're wanting, and I can't wait for them myself.


Yes, TCP support in the router is something we'd like to work on, but it's not there yet for v1.0.


Good to hear! Any plans for supporting databases in the future? That's the main deal breaker for me at the moment. Keep up the awesome work.


I don't know about Deis, but I'm using CoreOS, fleet, etcd, and docker for HTTP, mail, and other network protocols. It's been an amazing combo.


Can somebody explain what the use-case for this is? The site says it is a "PaaS that makes it easy to deploy and manage applications on your own servers".

Most apps I've been making follow a similar recipe: run single ubuntu instance on digitalocean, use nginx for http to serve static content for single page app (ex. angular project), and nginx proxy to gunicorn to serve a python flask API that stores and retrieves data to/from mongodb.

Where in that picture would something like Deis fit in? Or is this not for me?


Pretty much the same as Heroku. As a developer, I want to deploy my application as quickly as possible without setting up nginx, gunicorn, etc. per service. I have a front-end service, three API services, and some daemon services. I'd like to be able to scale eventually.

With Deis, you first allocate the number of machines you want to be your cluster and install Deis on them. Think of the cluster as a large physical machine that can run many services. You might have a production cluster (7 machines), dev (3 machines), staging (3 machines), etc. Now, deploy your apps to that cluster via a git push for each.

Need to boot a new front-end server to handle load? Just run an extra container in your cluster. Same with API.

How would you do that with your current setup? I'm guessing provision an entire second machine (or VM, same thing) and put them behind a load balancer.

With Deis, each cluster is exposed behind a single load balancer and each app/service is exposed as a subdomain on that load balancer. Deis handles the internal load balancing.

Cluster getting full? Just add a new machine/vm to it.

So, if you like the idea of Heroku, you might like Deis, especially if you'd like to use your own hardware/VPC or want to use Docker locally and in production.
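To make the workflow above concrete, the day-to-day loop looks roughly like this once a cluster is up (the app name is made up; check the Deis docs for the exact commands in your version):

  deis create myapp            # register a new app with the cluster
  git push deis master         # build and deploy via git, Heroku-style
  deis scale web=3             # run three web containers behind the router
  deis config:set KEY=value    # manage per-app environment config

Scaling a front-end to handle load is then just bumping the number in `deis scale`, rather than provisioning a second machine.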


How long was the install process for you? Was there much involved?


In the beta, it took me a while to get the cluster up and running. I'd say on the order of a couple hours.


Deis is, in a sense, a deployment framework that allows you to deploy your stack (in my case, basically a Python app) in such a way that you can scale and configure it easily.

I deploy Python apps (running behind gunicorn) on Heroku and basically pay for the convenience of dealing with their client and ecosystem rather than working with VM boxes directly. Deis provides similar abstractions without the per-process price tag.

If you don't have a problem manually managing / self rolling your app deployments, you don't need it.


The use-case is a heroku-style system that you control.


"Deis must make it simple for ops folks to publish a set of reusable backing services (databases, queues, storage, etc.) and to allow developers to attach those services to applications."

What are some projects focused on implementing these *-as-a-service components in a cloud-agnostic manner? Rancher is one that came up earlier today on HN.

[0] http://www.ibuildthecloud.com/blog/2014/11/11/announcing-ran...


We're working on creating database appliances (think RDS/ElastiCache/Heroku Postgres) at Flynn[0] (I'm one of the creators).

[0] https://flynn.io


Rancher is a bit different. It's based on lower-level APIs which allow your containers to hook into certain cloud-provider features like Amazon's EBS, ELBs and VPCs. Deis uses the Docker engine to ship and run applications, providing an easy developer workflow. We're similar to other PaaS solutions like Dokku[1], Flynn[2], and Cloud Foundry[3].

[1]: https://github.com/progrium/dokku

[2]: https://flynn.io/

[3]: http://cloudfoundry.org/


I think you misunderstood my question.

I know that Deis is focused on running 12-factor apps and does not try to provide a platform on which to implement backing services. There are proprietary implementations of backing services that are tied into specific cloud providers, such as Amazon's RDS. There are open-source implementations of backing services that are tied into specific cloud implementations, such as OpenStack's Trove. What I am asking is this: what projects are attempting to implement open-source backing services in a cloud-agnostic manner?

Put another way, if I choose to run Deis right now, I have the benefits of a PaaS without lock-in. However, I still have to either accept a degree of lockin to a service like RDS or attempt to roll my own automated database management. What projects should I be watching or contributing to that are trying to change this for databases? For queues? Etc.

I mentioned Rancher because they included creating clones of EBS and RDS in their goals, but surely they're not the only project biting this off.


What are the main differences between Deis and Flynn?


Flynn co-creator here.

Short answer: a lot. At Flynn we're building solutions to what we see as the big problems that developers will face in the next few years. Deis seems to be solely focused on a Heroku-like feature set.

Flynn runs anything that runs on Linux, including stateful services like databases. In fact, we package major open source databases as "appliances" that run within the Flynn cluster for ease of use.

Deis has historically required users to use very specific technologies (initially Chef, and now CoreOS). Flynn does not rely on a specific Linux distribution or configuration management system. In fact, Flynn was designed to be modular to give users the greatest possible choice of components (Deis uses a few components we created for Flynn).

Flynn is designed to be a single toolkit that lets you run everything, not just twelve-factor stateless webapps.

We have created a lot of new technology where necessary but we also use off-the-shelf components like CoreOS's etcd, which Deis also uses. Neither we nor CoreOS consider etcd production-ready, and we don't consider any platforms (including Flynn) production-ready until etcd and all underlying components are stable (Deis also depends on the CoreOS alpha channel).

When you see a "1.0" release from Flynn it will come with our full confidence in the stability of all of its components. We also aim to go well beyond what's possible in Heroku today.


Thanks for the insights!


From the outside, not much. Both model the Heroku-style workflow with application deployment.

Aside from Deis being production-ready and tacking on a few extra features like `deis pull` and Dockerfile deployment workflows, we take a different approach to our components. We use best-of-breed components from other OSS projects and focus entirely on the application deployment workflow. Flynn is focused on building the platform from the ground up: they provide the networking and messaging primitives (layer 0, as they call it) as well as the platform itself (layer 1) completely from scratch. Deis uses common OSS tools such as nginx and Ceph to provide our layer 0 and part of our layer 1, and we focus entirely on using those components to serve our needs.


Thanks, very helpful!


We've been using Dokku for a month or so for deploys on a project, and aside from a few initial minor bumps it's made deploying as simple as Heroku for me and taken a ton of the pain out of devops.

Deis/Dokku are great tools for developers looking to abstract away a lot of the pain of getting an app into production.


Yeah, my dokku server has been up for over a year and has only gone down completely once, due to a third-party plugin. Very satisfied so far!

EDIT: I was thinking of switching it over to Deis, but it needs a minimum of 2GB of RAM, with a suggested 4GB... not a great fit for my 512MB DO droplet :(


This is exactly why we sponsor Dokku: http://deis.io/deis-sponsors-dokku/ Go Dokku! :)


Can I use Deis with only one CoreOS node?


No. As a PaaS designed to run enterprise workloads with high availability, Deis has a 3-node minimum.

If you're looking for a similar workflow but with a smaller footprint, Deis now sponsors the Dokku project: http://deis.io/deis-sponsors-dokku/


Congratulations, @gabrtv. And thanks for all the hard work. I've been following this work weekly since we spoke over a hangout in the latter OpDemand days. I've tried out a couple beta versions, but I can't wait to try out the real deal here at KnowledgeTree.


Correct me if I'm wrong.

Deis = heroku-like interface on top of your own private cloud, which must be running CoreOS and certain other things like the Ceph distributed filesystem, but can be set up to run on a variety of third-party commercial cloud providers' infrastructure.

So basically, if you only ever want to use these features, and you are happy with a heroku-like interface to your infrastructure, Deis is useful.

OTOH, if you might want to run a non-Linux node, a specific filesystem or node-local configuration for performance, security or other purposes, or have infrastructure that is not feasibly manageable with a manual process, Deis is not currently useful.

That said, neither are most of the latest-gen infrastructure solutions.


> Deis = heroku-like interface on top of your own private cloud

There are a few things like Dockerfile deployments and a `deis pull` workflow for importing your docker images from either DockerHub or an in-house registry as an app, but for the most part you have the right idea. Open-source Heroku has been the general elevator pitch but there are a few docker goodies in there.

> and certain other things like Ceph distributed filesystem

That's an implementation detail of the platform itself, not a hard requirement that you need to set up yourself. We spin up a containerized ceph cluster for database, docker-registry and logger HA.

> if you might want to run a non-Linux node, a specific filesystem or node-local configuration for performance, security or other purposes, or have infrastructure that is not feasibly manageable with a manual process, Deis is not currently useful.

We're quite receptive to suggestions and proposals around compliance, security, node-local configuration and the like. We're always happy to discuss changes in either the mailing list or in an issue in order to make the platform work for a range of environments, so it is definitely possible to bridge these two sides together :)


Why should I use this instead of Cloud Foundry?

Not trying to bash, just curious.


I worked on Cloud Foundry for a while. I eventually left the project and became a core maintainer of Deis, and I have never looked back. Why?

The community is absolutely amazing here. I have had pleasant conversations with everyone from day 1. The OpDemand folks were very welcoming to my suggestions and they really helped me get started in the early days. Everything is done in the open; not a single feature goes by without a fair trial or a discussion from the community.

Hacking on Deis, getting started and tuning it to your liking has always been there from the get-go. Cloud foundry... not so much.

Cloud Foundry provides the application deployment workflow, but they're missing a few core features, most notably application rollbacks and zero-downtime app migrations at the router level. There is also no scheduler tag support so you cannot control what node your apps will be scheduled to.

They use forks of Heroku buildpacks, making them Heroku-incompatible in some cases. Their Java buildpack is specific to Cloud Foundry. Hello, vendor lock-in.

In-place upgrades work beautifully. I've been dogfooding a running Deis cluster for about a month now, testing upgrade paths from version to version. Just stop the components, bump the platform version and boot them up again. Boom: v0.14.1 to v0.15.0. Cloud Foundry's upgrade path is basically to stand up a second production cluster, migrate your data, app slugs and cluster config, and cut over DNS records to the new cluster.

If you have any more questions on Cloud Foundry vs. Deis, Gabriel or myself would be more than happy to discuss with you over IRC or email. My email is matthewf@opdemand.com


CF user here.

While I agree tag support would be cool, complaining about CF buildpack incompatibility is a bit disingenuous, no? I mean, you generally can use a Heroku buildpack on CF, but if you have unique requirements, that's why there are CF-only buildpacks. The general reason is that some CF deploys don't have internet access and therefore need a completely offline cache of the binaries.

Secondly,

> they're missing a few core features, most notably application rollbacks and zero-downtime app migrations at the router level

I'm curious why you think this. For example:

http://blog.pivotal.io/cloud-foundry-pivotal/case-studies-2/...


Thank you.


Short answer: Deis is easier to install/operate and it's designed to leverage Docker from the ground up. You won't be promoting Docker images through a CI/CD pipeline with Cloud Foundry any time soon.

Email me for a longer answer: gabriel@opdemand.com. ;)



Same question; we're getting ready to 'go big' into CF, but this 'feels' more focused.


I'm interested in hearing pros/cons of Deis vs Kubernetes.

The primary disadvantage I can see for Deis is that it can only deploy single Docker containers (vs Kubernetes pods). Given the Heroku-style setup it makes sense, but it is limiting for more complex applications.

On the other hand, the simplicity of Deis is attractive. I kind of wish I could use it, but I feel like it just isn't feasible for a full suite of microservices + dependencies.

Regardless, congrats on 1.0!


Deis and Kubernetes are solving different problems. Deis is a PaaS that provides a developer self-service capability a la Heroku. Kubernetes is a declarative container manager that provides lower-level orchestration features. The two are complementary. For example, Red Hat is using Kubernetes to deliver their OpenShift v3 PaaS. We've explored a similar integration with Kubernetes but are currently more focused on integrating with Mesos as a next step.


We hear this conversation quite a bit. Kubernetes is not a competitor to Deis. Kubernetes is a container cluster manager; a scheduler, if you will. You give it a container and some desired state and it will schedule it out to a cluster of nodes. Deis utilizes schedulers to deploy apps. We support fleet at the moment, but we have attended a few Mesos hackathons to get a PoC working with Deis, as well as Kubernetes. If you have a use case, we'd be happy to hear it in the mailing list or in a GitHub issue as a proposal :)


I'm still not grasping the concept of Docker and such...

Can I use Deis (albeit at a much smaller scale) to quickly spin up lightweight VM-ish entities for prototyping new webapps/backends (I have an unused box at home), or am I coming at this from the wrong end? :) (A link to a _good_ explanation of Docker, preferably a video ;), would be gratefully appreciated.)


The way I think about it:

Docker - specify a config file and boom, you have a VM-like container with the system, the language, and the libraries preinstalled

Deis/Dokku - gives you tooling to deploy/stop/access apps on your VM with an easy cli


Ok.. But if I push 5 Docker containers based on, say, Ubuntu 14.04, will they take 5 times the size of an Ubuntu 14.04 base install, or will they share the base install?


A little Docker example I keep coming back to:

https://github.com/radekstepan/im-runnable

It creates a little service that lets you run code (like Python, Node) in a Docker-sandboxed environment.


This is more of a Docker than a Deis question, but: a Docker image is composed of layers, and each tagged image keeps an ancestry list of all of the layers it requires. In your case, if you run 5 Docker containers on the same host on top of ubuntu:14.04, Docker will initially pull the parent's layers (ubuntu:14.04) once, then pull each image's own layers. They all share the same base layers, which is why it's so powerful. You get the exact same execution environment every time.
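A quick way to see this layer sharing for yourself, assuming a host with Docker installed:

  docker pull ubuntu:14.04      # fetches the base layers once
  docker history ubuntu:14.04   # lists the layers that make up the image
  docker images                 # derived images report their full size,
                                # but shared layers are stored only once

The "virtual size" shown per image counts all ancestor layers, so five images built on ubuntu:14.04 do not take five times the disk space.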


They will share, and will spin up very fast, so size is not an issue.


They share much of the underlying OS.


Key here is to realize the difference between VMs and LXC (which is what Docker builds on).


Clarifying note: Docker uses libcontainer by default for its execution engine. LXC is an option, not enabled by default.


I already use CoreOS/etcd/fleet to manage a cluster of apps and services. What does Deis do that I can't already? Does it have solid automation for service discovery and the like? I want to be able to quickly add and remove nodes depending on demand. Does Deis make this easy?


I'm a big fan of Linode but I see Linode isn't amongst the supported cloud providers. Is there some fundamental issue here or is it just lack of time / resources to develop support?



That's ancient documentation -- v0.3.0!

That being said, we support any provider which runs CoreOS. Deis is just a platform which utilizes fleet, etcd and Docker. The best place to start would be to take a look at the Quick Start documentation: http://docs.deis.io/en/latest/installing_deis/quick-start/


I'm really struggling to understand what this is. I've looked at it a few times and never really get a good grip on what it is.

Also, how does it fit in with Ansible?


Are there any tutorials for deploying Ruby or Rails applications on Deis? I didn't see any, but for getting people off Heroku it would be great to have some so people can easily migrate.

On the minimum cluster of 3 nodes with 2GB each, roughly how many apps could be run?


We have a bunch of sample applications which we run nightly integration tests against. As for application tutorials themselves, we have the buildpack[1] and the Dockerfile[2] workflows.

[1]: http://docs.deis.io/en/latest/using_deis/using-buildpacks/

[2]: http://docs.deis.io/en/latest/using_deis/using-dockerfiles/

I can't give you an answer on how many apps you can run on a 3x2GB cluster. It really depends on how much memory and CPU your apps require.


How many small, 50MB Sinatra Ruby apps?


Does anyone have any feedback from using this? Would be interested in hearing how it went.


Spent a couple of days attempting to deploy using the Rackspace tutorial. I wasn't able to get it to work. I will be taking another look; hopefully deployments have been worked out.


Just a quick note: Rackspace has been notoriously slow at getting CoreOS's latest images up, so we've had to work around that by noting that you have to update your machines first before deploying Deis. They just updated to v494, though, so it should be good to go now :)


Awesome experience. It's simple to launch in any AWS environment with their CloudFormation template. We're using it to replace the way we deploy micro services. It's made my life as an ops guy much easier - I truly believe it's the future.


There's a great summary here: http://stackshare.io/deis


thank you!



