
So why isn't the metadata server authenticated?

It would seem simple enough for Google's metadata server to have a valid HTTPS certificate and be hosted on a non-internal domain. Or use an internal domain, but have pre-built images ship a custom CA.

Or Google could make a 'trusted network device', rather like a VPN, which routes traffic for 169.254.169.254 (the metadata server's IP address) and adds metadata.google.internal to the hosts file as 169.254.169.254.



How do you get the certs to the machines? Ever had to rotate certs for all the machines in a datacenter?


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configur...

Not mTLS, but AWS's metadata service v2 (IMDSv2) has moved to an authenticated, session-based scheme. Of course, an attacker who can make arbitrary requests can still create tokens for limited sessions, but it's certainly an improvement.
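For context, the IMDSv2 flow is two steps: a PUT to fetch a session token, then that token on every metadata read. A sketch that only builds the requests without sending them (the endpoint and header names are from AWS's docs; nothing here touches the network):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT to the token endpoint. Typical SSRF bugs can only issue GETs,
# which is exactly what this step is meant to rule out.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # max TTL: 6h
)

# Step 2: present the returned token on every subsequent metadata read.
def metadata_request(path: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{IMDS}/latest/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

So the "authentication" is really just proof that the caller can issue a PUT and echo a header back, which is why it raises the bar for SSRF but not for an attacker with full request control.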


Google happens to have a widely trusted CA they could sign the metadata server cert with


My question is: if the CA cert needs to be rotated, how do you do that for all machines? It can be done, but it's not trivial.


Presumably the machines have a mechanism for managing their CAs (the trust store that ships with the OS). If machines aren't being updated frequently enough to pick up a new CA, they're badly outdated in other ways.


oh yeah? and how do you update the machines/CAs?


Using the package manager, like people have been doing for years. Keeping OS CAs updated is a long-solved problem.
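Concretely: on most Linux distros the CA bundle is just files owned by a package (e.g. ca-certificates on Debian-family systems, refreshed on upgrade), and TLS stacks pick it up from well-known paths. A quick way to see where that is from Python's stdlib:

```python
import ssl

# Where this build of OpenSSL/Python looks for the OS trust bundle; on a
# Debian-family system this typically points at files maintained by the
# ca-certificates package.
paths = ssl.get_default_verify_paths()
print(paths.cafile or paths.capath or paths.openssl_cafile)

# A client context built the default way uses that bundle automatically,
# so rotating a CA is a package upgrade plus a process restart.
ctx = ssl.create_default_context()
```

That's the commenter's point: as long as the metadata server's CA is in (or added to) that bundle, rotation rides on ordinary OS updates.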


To the metadata servers? They presumably hold keys to access all kinds of backend systems anyway. The certs don't require any additional trust. There must already be infrastructure in place for deploying said keys.


Yes and no. When doing stuff like this you will always have a chicken-and-egg problem.


You could also do a hybrid where each machine gets a volume containing an X.509 cert and key that only root can read, which can then be used for mTLS to a network service (which in turn manages the certs).

That'd be a hybrid of a cloud-init data volume and a network service.


You could. The problem with this approach is managing these volumes and the infrastructure around them: how do you get the keys onto that volume in the first place?

Usually, this trust is established when the machine is built for the first time and it gets an identity and a cert assigned to it. You have the same problems (of how you control the infra and ensure that you, and only you, can do this on the network).


The hypervisor or VM provisioning system can set it up. With something like certs you can just drop a <1 MB ISO on the host for each VM.

The cert only needs to prove the VM is who it says it is

> ensure that you and only you can do this on the network

You've already solved that with your VM provisioning system.

If you're talking about physical/hardware, you can take more liberties with the network since it can be isolated during the initial provisioning step
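The provisioning step itself amounts to writing two small files with tight permissions. A sketch where a plain directory stands in for the per-VM ISO, and the function name and layout are illustrative, not any cloud's actual mechanism:

```python
import os
from pathlib import Path

def drop_identity_volume(vm_dir, cert_pem: str, key_pem: str) -> Path:
    """Hypothetical provisioning step: write the per-VM identity material
    into a directory that stands in for the small ISO attached to the guest."""
    vol = Path(vm_dir) / "identity"
    vol.mkdir(parents=True, exist_ok=True)
    (vol / "cert.pem").write_text(cert_pem)     # public: proves who the VM is
    key_path = vol / "key.pem"
    key_path.write_text(key_pem)                # secret: root-only inside the guest
    os.chmod(key_path, 0o600)
    return vol
```

Since the hypervisor already controls the VM's disks and NICs, attaching this volume needs no trust the provisioning system doesn't already have.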



