I almost never heard anything negative about vault until I did a presentation on all the troubles I had with it. Here are the slides about the issues I had:
At the end of the day, Vault hands you unseal keys to manage yourself, which makes automating the unsealing process painful if you want full automation.
Talking with Armon from Hashicorp, I learned they plan to work on much-improved docs for Vault, which should help with a lot of the usability issues, because quite frankly the current docs are very challenging to understand.
I unseal using ansible (with the unseal keys in ansible-vault) and automate the configuration fully through ansible. For example you can use the ansible expect module:
- name: unseal 1
  expect:
    command: '/usr/bin/vault unseal'
    responses:
      'Key \(will be hidden\): ': "{{ vault_seal_key_1 }}"
    echo: yes
  when: vault_sealed_result.rc == 2 and vault_seal_key_1 is defined
  tags:
    - unseal
I'm interested in your solution: are you using ansible-vault to store the Hashicorp Vault unseal key(s)? Isn't this just pushing the problem out another level, or am I missing something? Thanks.
I read this presentation a little while ago and I felt it was slightly disingenuous. Many of the "problems" are process-related and have nothing to do with Vault itself. Then your approach is to use Parameter Store, which provides a UI and is built on the AWS ecosystem proper, which works if you are sticking to AWS only. I guess it comes down to how you view "secrets", but I'd rather that be spelled out than have Vault brushed off as hard to automate and not easy to work with.
aws-vault and chamber both look fantastic. When was the talk given? Has the situation improved since? Would you still recommend both those tools over Hashicorp Vault?
Another option for Ansible is Ansible Vault (which is not related to Hashicorp Vault) -- you can use it to password protect secrets used for playbooks (you need to supply the password when you run the playbook).
Yes, but Hashicorp Vault has a greater scope in that it addresses the secure introduction problem, provides single-use/read-once tokens, tokens with TTLs and limited uses, audit capabilities, etc. Although there is some overlap in general secret management between Ansible Vault and Hashicorp Vault, the latter is much broader than just a means of secure storage for config management.
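To make a couple of those features concrete, here's roughly what they look like from the CLI. This is a hedged sketch: the flags shown are from newer Vault releases (older ones spelled some of these commands differently), and the log path is made up.

```shell
# mint a token that expires in an hour and can be used at most twice
vault token create -ttl=1h -use-limit=2

# turn on a file audit log of every request/response
vault audit enable file file_path=/var/log/vault_audit.log
```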
A lot of what Hashicorp Vault does is already provided for (in AWS at least) by KMS, CloudTrail, Parameter Store, and IAM (which can be used in concert with Ansible Vault).
I have very little experience with GCP and Azure, but it seems like Hashicorp is reinventing the wheel in AWS with Vault.
> you need to supply the password when you run the playbook
You can also specify vault-password-file in ansible.cfg [1]
It can be a shell script rather than plaintext, so you can use it to call the CLI password manager "pass"[2] for instance. This is handy for automation.
The parent's recommendation was to put a filename into the ansible.cfg, and that file could contain either the password, or a script that is then run which prints the password to stdout. For example, we have the script pull the password from a gpg-encrypted file.
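For example, the setup the parent describes might look like the script below. The file names and the "pass" entry are all hypothetical; the only contract Ansible cares about is that the script prints the vault password to stdout.

```shell
#!/bin/sh
# vault-pass.sh: Ansible runs this and reads the vault password from stdout.
# Point ansible.cfg at it with:
#   [defaults]
#   vault_password_file = ./vault-pass.sh
# (the script must be executable)
exec pass show ansible/vault
```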
Even if you do put a plaintext password into ansible.cfg, encrypting secrets in the playbook is still worthwhile so you don't commit them to your source code repository or accidentally share the secrets with the world when you publish your playbooks.
We also use Ansible Vaults extensively and they work great.
Oddly enough though, I recently tried AWX (the open sourced Ansible Tower), and it wouldn't decrypt our vaults when trying to get the inventory, even though I gave it vault credentials (there was nowhere to associate it with the inventory run though).
So, we are still using RunDeck for a web UI/scheduling/web triggers/Slack integration of our Ansible runs.
Developer of AWX here. We're working on this! Unvaulting is available during playbook runs, but we definitely need to make it available during inventory syncs as well. The features coming in Ansible 2.4 will enable us to do this.
Are there any pointers on how to work around this?
I think a large part of our problem is that we are using Ganeti for most of our VMs, rather than something supported natively by AWX like OpenStack/EC2/Azure. I have an inventory script we have been using, but I couldn't get it to run in AWX because the AWS credentials weren't being made available in a way that boto recognized (the inventory is pulled from both Ganeti and EC2).
Can't recommend Vault enough. By far the easiest and most capable solution to work with. The only downside I can point out is that multi-cluster/region HA requires expensive enterprise licensing, but that is something most use cases don't require.
Secret management is as complex as the system which relies upon it. Vault is not as easy as many other tools designed for simple systems. GPG by itself is often enough to manage secrets.
There are probably 40 or more secret management solutions out there, many tailored for specific uses. Most CMS's have their own secret management baked in. Most orchestration and infrastructure tools do too. Four different solutions are called 'Vault'. There are at least 5 solutions just for Amazon. Depending on your platform, something other than HashiVault may be easier to adopt.
Maybe that was poor wording on my part, but ease of use in combination with the features is what matters.
Take the database secret backend, for example. Getting that same feature out of other simple systems would be a lot of work. Audit trails are another low-effort, high-reward feature. When you start combining those features, the payoff is even greater. And when you start to take HA clusters into consideration...well, if you want to put that together on your own, have at it.
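As a sketch of what the database backend looks like day to day (newer CLI syntax; older releases used "vault mount", and the role below is assumed to be configured already against your database):

```shell
# enable the database secret backend
vault secrets enable database

# assuming a role named "readonly" has been configured, each read mints a
# brand-new username/password pair with its own lease and TTL, so credentials
# are short-lived and individually revocable
vault read database/creds/readonly
```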
Much of "easiest" has to do with familiarity as well. I've seen new users get Vault up in minutes who still don't have GPG set up "'cause it is hard". If you are already working with DynamoDB or Consul, you already know how to set up the storage.
Those are common skills. I'm sure there are folks on the other side of that who use GPG often, but that doesn't make either side objectively easier.
If you only need a simple system, then Vault may be overkill. I would say Vault is comparable to Kubernetes. Does everyone need it? No; simpler stuff can be done with config management or tools like Nomad. Does it have features that most people will eventually want to use? Absolutely.
Secret management is one of those things where simple solutions easily end up becoming more and more involved, just like container orchestration. Especially when you get into chicken-and-egg scenarios with some of the tools you mentioned.
Side note: I've tried at least 12 other tools for this purpose, and I would recommend Vault over all of them for most every scenario that is more involved than "use 1Password".
IIRC, it was on the order of $100k+ per cluster for the Premium offering. Has been several months so I would suggest contacting them to see what it currently is.
For some projects I've been on, that wouldn't be a big deal. For smaller projects it can easily be several times your opex though.
You can run an open source cluster in HA without it though. The big draw for Enterprise is the multi-cluster and HSM support. Anything HSM is typically quite expensive so their pricing really isn't out of line.
We use StackExchange's blackbox [1] to check in GPG encrypted secrets into our monorepo (contains both ansible deployment playbooks and the code itself). This way, we can introduce secrets and the code that uses it in a single commit!
There are a handful of other wrappers around AWS KMS to solve this problem - it seems like the best approach if you're already on AWS, no additional major infrastructure to manage. Thanks for sharing.
Vault is a pretty good thing. For some time we used chef attributes and environments, but by now we have migrated most stuff into Vault. The migration was easy, because chef attributes form a big JSON object, so you can translate that into a secret tree in Vault in a straightforward way.
The worst part about Vault is managing access in a secure way and granting access to the right parts, imo. We leverage the pki backend (or rather, three dozen pki backends) to map nodes to their respective policies, and that required quite a bit of tooling to make work. But now that it works, it's secure, and if the need arises, it should be easy to revoke secret access for a cluster.
Hesitant why? It's pretty darned good.
Kubernetes also has a Secret abstraction, but you probably don't want to start setting up Kubernetes just for secret storage. Vault is good at that.
Responding not so much with a solution but with a question. We currently encrypt files in place with gpg (secretKeys.json => secretKeys.json.encrypted) and have the source file (secretKeys.json) git ignored while the derivative file is just added to the repo.
This is admittedly a bit low-tech, but could someone more well-versed than me tell me what's wrong with this setup?
Then, as part of your code or deployment, your private key is on a server and some process decrypts it? It is low tech, but a good solution. The only concern would be how that private key is stored and whether it could be compromised.
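A minimal sketch of that flow, using symmetric encryption with a passphrase file purely for illustration (a real setup would more likely use --recipient with a public key; the file names mirror the parent's, and the contents here are made up):

```shell
# plaintext secrets file; this one is gitignored, never committed
printf '{"dbPassword": "hunter2"}' > secretKeys.json
echo "secretKeys.json" >> .gitignore
printf 'example-passphrase' > passphrase.txt

# encrypt in place; only the .encrypted derivative gets added to the repo
gpg --batch --yes --pinentry-mode loopback --passphrase-file passphrase.txt \
    --symmetric --output secretKeys.json.encrypted secretKeys.json

# at deploy time, some process holding the passphrase/key decrypts it back
gpg --batch --yes --pinentry-mode loopback --passphrase-file passphrase.txt \
    --decrypt --output secretKeys.decrypted.json secretKeys.json.encrypted
```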
I work at NuCypher (YC S16), and we're doing some pretty interesting stuff on the Ethereum blockchain to build a decentralized KMS using proxy re-encryption.
git-crypt[1] is quite handy in cases where the credentials have to live along the code. It's a nice improvement over having credentials in clear in a git repo and relatively easy to implement.
All of the major clouds already have good secrets management built in. We have a simple library that uses Google's Key Management Service in a standalone project to encrypt/decrypt files held in a private storage bucket. Access to keys and files are controlled by service account roles. Seamless, efficient, no-ops model with built-in auditing and fine-grained control that works everywhere.
This sounds way simpler and 10x better than what most organizations do (secrets in version control, secrets on the local file system, or environment variables). Do you mind doing a quick how-to? It could probably help 90% of organizations take a step towards better security.
Key Management Service creates and maintains private keys for you and provides an API to easily encrypt or decrypt some data. Basically call method KMS.Decrypt("name_of_key_to_use", <bytes>) and get back decrypted content. Secrets are simple text files encrypted with KMS and stored in a private storage bucket. For example we have something like "database-secrets.dev.json.encrypted".
We have a small library that took a day to write, used in all of our projects that does the following on startup: open private storage bucket, download encrypted file, call the KMS API, decrypt the file, and parse the raw contents as json. Now the app has the secrets in-memory to be used anywhere. No infrastructure required, nothing on disk and this is universally accessible whether inside Google cloud or on local machine. Takes under 1 second when running in the cloud.
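A minimal sketch of that startup routine, assuming the google-cloud-storage and google-cloud-kms client libraries; the bucket, blob, and key names are made up:

```python
import json


def load_secrets(bucket_name, blob_name, key_name):
    """Download an encrypted blob and decrypt it with Cloud KMS.

    Sketch only: assumes a service account with access to both the
    storage bucket and the KMS key.
    """
    # imports deferred so the JSON helper below works without the GCP libraries
    from google.cloud import kms, storage

    ciphertext = (
        storage.Client().bucket(bucket_name).blob(blob_name).download_as_bytes()
    )
    response = kms.KeyManagementServiceClient().decrypt(
        request={"name": key_name, "ciphertext": ciphertext}
    )
    return parse_secrets(response.plaintext)


def parse_secrets(raw):
    """Parse the decrypted bytes as JSON; secrets stay in memory only."""
    return json.loads(raw.decode("utf-8"))
```

Usage would be something like `load_secrets("my-secrets-bucket", "database-secrets.dev.json.encrypted", "projects/p/locations/global/keyRings/r/cryptoKeys/k")`, with all names being whatever your project actually uses.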
I don't think I can do better than the documentation, but let me know if you have any questions.
As stated, this works everywhere, whether on premise or inside a cloud provider. Key management and storage have public APIs, the only thing you need to access them is a service account key file which authorizes everything else.
Service accounts are necessary to run anything in GCP anyway, but they can also be used externally (like the gcloud CLI on your desktop); a similar setup exists for AWS or Azure if that's your primary provider.
Sorry, I misread what you were saying; yes, that is a nice basic setup and, as the other reply mentions, better than what most orgs start with. That said, I don't think you should be so quick to poo-poo Vault, as it provides a lot of very nice things in a fairly flexible package.
Didn't poo-poo Vault - just saying that there's a very good system already built in that involves wiring up just 2 API calls and integrates perfectly into the existing IAM security roles.
Anything secret involves a master key. If you don't trust AWS, then you need to supply your own master key, but for most setups, IMO, you should just let AWS handle key management and use IAM roles to decrypt. Rotation is a big deal, though. For servers, the SSH key can be encrypted with KMS, and we either completely replace the box or rotate one box at a time. For DB servers, it's important to choose a DB that can stream data to a new box with as little impact as possible (or allows replication). But these take time to develop (I couldn't use containers to host DBs or critical applications because of the network performance, at least a year ago).
BTW Mozilla's sops [1] is quite interesting. I've been testing this for a while now.
It's not really clear to me from the docs, but can Kubernetes secrets now be stored in Vault instead of etcd? Or is just the token-retrieval part fixed? The docs are a bit terse and don't say much about how you'd actually use it.
If I create a kubernetes secret will it be stored in vault if I set some magic switch? Or are we not there yet?
Not there yet. You can store secrets in Vault, and now a kubernetes pod can authenticate against Vault which will allow it to retrieve secrets. If you're running your app in k8s, your app will be able to use the configured token to get to vault.
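Concretely, the flow looks something like this from inside the pod. This is a sketch: the role name and secret path are made up, and it assumes the kubernetes auth method has already been enabled and configured on the Vault server.

```shell
# exchange the pod's service account JWT for a Vault client token
vault write auth/kubernetes/login \
    role=myapp \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"

# the returned client token can then be used to read the app's secrets
VAULT_TOKEN=<token from above> vault read secret/myapp/config
```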
I worry a lot about how these megacorps will treat "collaborators" vs "non collaborators" in the coming years. Obviously you can't just outright buy everyone, but they seem to be increasingly abusive towards technologies and teams that aren't on board with their interests and ideology.
Actually I'm more worried about how Facebook and Amazon treat non compliance, but Google sure seems to be getting shadier every day.
This combined with the W3C evolving into a corrupt entity just makes me want to get out of tech completely. Maybe if I could get some awesome dev job at the EFF?
From what I can tell, this is all opensource using their publicly documented API. That is, you could implement the same support for GCP in your own auth backend product, and you could implement the same support for your own cloud platform in Vault.
So… I don't really get what you're talking about in this context.
"We're working to enhance the integration between HashiCorp Vault and GCP, including Vault authentication backends for IAM and signed VM metadata."
There's not much detail in that. But, you could certainly read it in a way that using Hashicorp products might be lower friction than using other products on GCP.
This is a genuine collaboration effort by two of the players whose services many people are already using together, and they are making that experience better for their users.
The kind of dismissiveness and hyperbole in your comment is why we can't have nice things.
> How does Snowden have anything to do with a collaboration between Google and Hashicorp?
I don't think they have anything to do with each other. My point is that we know some weird things are happening and it's a put-down and a conversation stopper to call someone a tinfoil hat wearer.
(The best you can say is that the person you're insulting might have a mental illness.)
This is a pet peeve of mine because I knew about some of the hijinks that have gone down (and are going down) since before Snowden flushed his life down the toilet to get people to pay attention and I've been dismissed with that exact term. It's naive in a post-Snowden world to not postulate conspiracies.
The Hashicorp stack is pretty widely used in part for its open source cross-platform capabilities. This seems more along the lines of "Hey! You already use Terraform/Vault for provider X. Now GCP works even better with the tools you already use!"
You concern is probably worthwhile, but I think this is not an example of it.
I work for Pivotal and we donate engineering for a product (CredHub) which is comparable to Vault, though with a slightly different set of motivating problems.
We, like HashiCorp, cooperate with Google on a lot of things.
They don't pick winners. What works best for Google is to get your workload into GCP. It matters little whether the bits you run are Pivotal bits, HashiCorp bits, Docker bits, Red Hat bits, IBM bits, Microsoft bits or your own bits.
What matters is that they're being processed on GCP atoms.
The analogy I have used before is that Shell, BP and ExxonMobil don't care whether you burn their fuel in a Ford or a Toyota. They mostly care that you burn their fuel.
I don't mean to paint a cynical picture here. As a partner Google is excellent, responsive and respectful, our engineering cultures have good compatibility and there are deep common interests. But Google's goal is to make GCP the most attractive place to run your workload. That means that they are going to be ecumenical. They want to help us to win, but they want to help everyone to win, because that helps them to win.
> Obviously you can't just outright buy everyone, but they seem to be increasingly abusive towards technologies and teams that aren't on board with their interests and ideology.
I'm not sure where you're coming from or going with this. Do you have an example to illustrate? What are you considering collaborators and non-collaborators?
Any two corporations can work together to create partnerships and better integrations of their products and services. This is how most business is done.
I've heard a lot of good things about Hashicorp Vault (https://www.vaultproject.io) but have been hesitant to go with it.