We recently switched (like 2 weeks ago) our project from deployment on ubuntu servers via 'git pull' managed with supervisord to docker/coreos/fleet, and it's been epic. While coreos is built for large clusters, we run a 3 host cluster in ec2, and couldn't be happier. We switched from multiple servers running 1 instance of each service to load balancing all instances on these 3 hosts. This increased uptime, made deployment and management easier, and gave us the benefits of docker as well (verifying things work locally).
There are only two real problems, both of them very minor:
* fleet managing state. We've had to manually kill containers sometimes, and destroy systemd services before we could start them again.
* all EC2 AMIs use EBS-backed instances. We haven't used a higher-IOPS EBS-backed instance because the only delay we see is in startup times (which doesn't matter, just longer rolling deploys). But an instance-store-backed AMI would be nice.
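For reference, the manual cleanup for a stuck unit looks something like this (the unit and container names are hypothetical; your service names will differ):

```shell
# Remove the stuck unit from fleet's scheduler (hypothetical unit name)
fleetctl destroy myapp@1.service

# On the host that ran it, kill and remove the orphaned container
docker kill myapp-1
docker rm myapp-1

# Clear systemd's failed state so the unit can be started cleanly
systemctl reset-failed myapp@1.service

# Resubmit and start the unit
fleetctl start myapp@1.service
```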
How do you handle persistent storage in a fault-tolerant way?
You could technically pin your MySQL container to one host, but that seems to defeat the point of fleet. I considered trying to mount an iSCSI target to run the database from, but CoreOS doesn't have a working iSCSI initiator.
I guess I could just run all the persistent stuff on a more traditional OS, but then why mess with CoreOS at all?
Our setup uses three units:
- one to attach the EBS volume (wrapping the above container)
- one to mount the device to a directory on the host
- one to run the DB container, with a bind mount to the host directory
If the CoreOS host terminates, the cluster will reschedule the units to another host.
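A sketch of that three-unit chain, as plain systemd/fleet unit files (all names, volume IDs, device paths, and images here are hypothetical; the attach step is assumed to wrap an AWS API call in a container):

```
# mysql-attach.service — attach the EBS volume (hypothetical volume ID)
[Unit]
Description=Attach EBS volume for MySQL

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --rm example/ebs-attach vol-12345 /dev/xvdf

# mysql-mount.service — mount the device onto the host
[Unit]
Requires=mysql-attach.service
After=mysql-attach.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mount /dev/xvdf /mnt/mysql

# mysql.service — run the DB container against the host directory
[Unit]
Requires=mysql-mount.service
After=mysql-mount.service

[Service]
ExecStart=/usr/bin/docker run --name mysql -v /mnt/mysql:/var/lib/mysql mysql

[X-Fleet]
MachineOf=mysql-mount.service
```

The `[X-Fleet]` `MachineOf` directive is what keeps the DB container on the same host as the mount, so when fleet reschedules after a host failure, all three units move together.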
CoreOS seems to do a decent job as the application layer for something operating in a larger cloud that provides database and file storage as services, like Amazon or OpenStack. However, I think Docker has the potential to eliminate the need for infrastructure-level "clouds" to be so complex. The basic unit can be the container instead of the instance, so you don't need fancy automated instance management and all that. It'd be nice if you could also obviate the need for the other external services, and just run everything in containers on metal.
I was really excited by CoreOS as a way to do this, but it seems to fight me at every turn. It seems much easier to just wire together the REST APIs of several Ubuntu/CentOS nodes running Docker.
That explains how they chose to minimize deadlocks.
Yes, if you use it incorrectly and/or for the wrong problem space, you will have issues. It isn't a magical solution to all problems. It is, however, an effective solution for a specific problem space.
I've been using it in our development environment. It locks like crazy.
This is after configuring HAProxy to have a primary node which all connections go to.
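A minimal sketch of that HAProxy setup (hostnames, IPs, and ports are assumptions): the `backup` keyword keeps all traffic on one Galera node unless it fails, which avoids cross-node write conflicts.

```
listen galera
    bind 0.0.0.0:3306
    mode tcp
    option tcpka
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check backup
    server node3 10.0.0.3:3306 check backup
```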
OpenStack seems to be OK with it though, it chugs along without fully breaking down. I'm sure I would see a pretty big performance delta if I ran a benchmark against it with and without Galera.
Using CoreOS as a "base image" under docker would serve little purpose because CoreOS isn't really designed to run anything other than its core services. People usually use Ubuntu or CentOS.
He's saying that there isn't a point of running CoreOS as the base image inside of a container, which is true for most people. There isn't a good reason to run CoreOS containers on a CoreOS host.
Oh! I had a different sense of "base ... under Docker" in mind.
My interpretation was that Docker was running on top of an OS, which you could call the "base", and say it was "under" Docker in the sense of "underlying".
Whereas he was using the opposite spatial metaphor: the Docker image, which is a "base" in the Docker sense because you derive other images from it, is "under" Docker in that it's within Docker, or lower in the process tree.
George Lakoff would be thrilled by this example of conflicting metaphors.
Have you written up this in a blog post anywhere? Would be an interesting read and very valuable for dev/test scenarios in addition to production ones.
Vultr.com allows you to launch a VPS with your own uploaded ISO / image, so if you made one for CoreOS it'd probably work. They also support FreeBSD. I've been using them in their Los Angeles location for a while and have had no issues at all. Their costs are about the same as DO... basically a DO clone. I think there's an older hosting company behind them so it's not a basement operation.
I personally think they will pull this one off, mainly because CoreOS will be providing support to the DO team and help however possible to make sure DO is added to the list of supported cloud vendors.
You've read more into that tweet than I ever would. Saying "we are a small startup with limited resources" doesn't really imply anything to me about whether they are or have been working on BSD support. Has DO made any promises about a delivery date here? Did they imply anything that could be construed as "overpromising"? I mean, any company or OS project has plenty of stuff they "have been working on" for many years; I wouldn't say they've overpromised on anything unless they made explicit commitments and failed to meet them.
When you officially declare that you started a feature over 15 months ago and then provide no update on schedule or release - and this is just one of many times they've done this - you begin to feel like these features they promise will never happen either.
> Alex Polvi stopped by our office a couple of weeks ago and we spoke about integrating CoreOS. He offered to provide a lot of additional support to get this rolled out including managing them images so that we can have official CoreOS images that are maintained by them when they are ready.
> Given how their systems are laid out and spun up we discussed the changes that would be needed and it looks like the metadata service that we have begun writing solves a lot of the challenges.
> We’re moving this to planned stage, as soon as we finish the metadata service we can begin testing it internally around getting CoreOS as a supported distro in DigitalOcean.
If anyone from Digital Ocean is reading this, please help if you can! A lot of developers really want to check out CoreOS, and having it supported on DO would help so much. Let us give you money!
We are reading this and we will be helping soon. We met with Alex Polvi when he stopped by our office a few weeks ago and we discussed integrating CoreOS.
Given how their file systems are laid out, a couple of other items on their side, and the fact that we are building out a metadata service that will be launching soon, we thought it would be best to wait on the integration until after the metadata service was ready.
This way we could use it to integrate with CoreOS and make the deployment process a lot simpler.
Alex has been super helpful offering a lot of support to help us get this done and we are very excited to move forward, but we thought it would be better to do it right with the metadata service instead of launching it without that, knowing that we would have to rebuild the way the image was being provisioned and every early adopter of CoreOS would then need to redeploy their droplets.
Because we have a lot of projects in flight at the same time, I don't have an ETA. We've certainly found it difficult to give product release estimates while we're also in high-growth mode, because there are always many interruptions and reprioritizations that happen on the fly every week, but I'm hopeful that this one will launch this quarter.
I'm a bit confused on the licensing here. CoreOS says it's Apache 2.0 licensed. But it also says it's Linux. If it includes the Linux kernel, as it appears to, then those bits are licensed under the GPL and cannot be re-licensed as Apache 2.0. So it's a bit disingenuous to claim the whole package is Apache 2.0, since it isn't.
How relevant is the issue in practice though? It doesn't seem too likely that people are going to want to provide commercial forks of the whole project including the Linux kernel. And it's pretty likely they could package their changes to the CoreOS code together with a stock kernel.
The claim on the site is that "Code produced by the CoreOS team is licensed under the Apache 2.0 license." - Which seems reasonable enough.
Where is the text you find confusing? What I found about licensing says "Code produced by the CoreOS team is licensed under the Apache 2.0 license." That's extremely clear about what the license applies to.
Yes, for people who understand licensing stuff. For the many people who don't, they could say that 3rd party components of CoreOS are under various open source licenses, but the code they produce is Apache 2.0.
It's like Android. Everything that wasn't already GPL and is produced by the project is released with an Apache license. Everything that is GPL is still GPL.
If you fork it, you still have to keep releasing the source for the kernel, and possibly part of the user land, but the rest of the user land is fair game.
I'm very excited about this release. CoreOS, Docker and etcd are a great fit for one another. I love the separation of concerns that is provided.
IMHO, the weakest part of CoreOS is fleet (https://github.com/coreos/fleet). Compared to the other components in the stack, it just feels very inelegant. The systemd configuration syntax is complex and ugly. I wonder if there will be work invested to upgrade fleet to something that is as elegant as e.g. etcd/Docker/CoreOS itself.
fleet definitely does not provide a nice integration with docker, and we're definitely trying to decide how we can provide a better integration story. And yes, systemd is complex, but with that complexity comes great power.
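For context, a fleet unit is just a systemd unit with an optional `[X-Fleet]` section for scheduling hints. A minimal sketch of one wrapping a Docker container (the service name, image, and ports are hypothetical):

```
[Unit]
Description=Web app container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container before starting (the leading "-" ignores failure)
ExecStartPre=-/usr/bin/docker rm -f web
ExecStart=/usr/bin/docker run --name web -p 8080:80 example/web
ExecStop=/usr/bin/docker stop web

[X-Fleet]
# Don't schedule two instances of this template on the same machine
Conflicts=web@*.service
```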
Glad to hear you're thinking about this. Understanding the complexity provides power, but the tradeoff in this instance doesn't feel quite right.
For example, etcd is a powerful primitive, and then more complex/sophisticated systems can be built on top of it.
I wonder if an 80/20 solution that is simpler than fleet/systemd for pushing work into a CoreOS cluster would be a win, and then more complex systems (e.g. Kubernetes-esque orchestration) could live on top of that.
I took one look at Kubernetes, and decided that for our purposes writing our own orchestration scripts from scratch would be simpler... So many of these systems are hopelessly overcomplicated.
I'm puzzled as to why it's called "stable," while at the same time it appears to require btrfs-on-root to be useful (i.e. for hosting Docker containers) but that part is "experimental."
I've seen a talk by someone from Suse. He had a two part sentence.
The first part was: btrfs is good and stable (paraphrasing)
The second part was: when used with a single device
So that means something like RAID1 on 2 disks is still a bad idea.
The way CoreOS ends up using btrfs is also on single devices I believe.
This is excellent news. Two weeks ago I looked into flynn and the various repos had very little activity compared to deis so I went with that for now.
Edit: I posted this just FYI. I thought it'd be useful to know. Perhaps this is a consequence of splitting the project across a large number of repos, where some of them don't get many commits.
Flynn itself costs $3,000/year at the most basic service level, before you even start covering infrastructure. Why shouldn't I just write what I need in a few hundred dollars worth of hours?
Flynn co-founder here. Flynn is free open source software and always will be.
Our managed service is for users who want to outsource the management of the cluster to us. There's no secret extra technology (it's just Flynn underneath), what you pay for is a professional ops team to manage your cluster. The service is also freemium (and free for the vast majority of users).
I've been using Vagrant/VirtualBox to run Ubuntu LTS for my JavaScript dev env. Does it make sense to switch to CoreOS? I generally run it as a headless server. No X Windows or GUI needed.
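If you do want to try it, CoreOS publishes an official coreos-vagrant repo; a minimal standalone Vagrantfile looks something like this (the box URL and IP are assumptions based on that project's defaults at the time):

```ruby
Vagrant.configure("2") do |config|
  # Official CoreOS stable-channel Vagrant box
  config.vm.box = "coreos-stable"
  config.vm.box_url = "http://stable.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"

  # Private network so the host can reach services in the VM
  config.vm.network "private_network", ip: "172.17.8.101"
end
```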
Stay on Ubuntu for now. Personally, I would be sceptical until the hype around a new Linux distro settles down.
Ubuntu has been proven over many, many years and stands on the shoulders of a giant named Debian. CoreOS is a shiny new product which still needs to prove itself.
I know Hyper-V isn't particularly sexy around these parts but it appears to work there as well. It doesn't support the Hyper-V integration services but that's par for the course for most Linux distributions.
Hyper-V support interests me, as I have a cluster of Windows hosts and would love to add CoreOS to the mix and deploy it to my private cloud when needed.
CoreOS is putting some effort into Hyper-V support. There's a way to get it working, though I haven't tried.