LOAS Daemon #2209

Closed

erictune opened this issue Nov 6, 2014 · 9 comments
Labels: priority/awaiting-more-evidence, sig/auth, sig/service-catalog

Comments

@erictune
Member

erictune commented Nov 6, 2014

LOAS Daemon Proposal

A LOAS (Local Opaque Auth Service) daemon, loasd, runs on each node.
loasd stores the credentials needed by pods on a machine, but it never lets the pods see the credentials they are using. This is the Opaque part. It is useful in several ways:

  1. It allows giving a user the authority to run a pod that uses secrets they are not allowed to see (this requires additional infrastructure not described here).
  2. It protects against file-reading exploits in servers running in pods, which would otherwise be able to find secrets on the filesystem.
  3. In the case of an arbitrary-code-execution exploit of the process in the container:
    1. the secret stays safe and does not necessarily have to be changed after the attacker is repelled
    2. the attacker can exercise the powers granted by the secret only for a limited duration, and only through the proxy, which can be configured to do audit logging even when the remote service's own audit logging is inadequate

There are a few assumptions:

  • that the requests going through the LOASD proxy are relatively low-rate, so that excessive system resources are not consumed.
  • that the requests use HTTP or HTTPS, and use one of several standard flavors of auth.
  • that the client code can either be told to use plain HTTP between itself and the proxy instead of HTTPS, or, if it must use HTTPS, that it can be taught to trust the proxy's cert (see the sketch after this list).
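For the HTTPS case in the last assumption, "taught to trust the cert of the proxy" could look roughly like this on the client side. This is a sketch in Go; the cert path, env var name, and URL are hypothetical, not part of the proposal:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// newProxyTrustingClient builds an http.Client that trusts the LOASD
// proxy's certificate. The cert path is a hypothetical mount point.
func newProxyTrustingClient(certPath string) (*http.Client, error) {
	pem, err := os.ReadFile(certPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, fmt.Errorf("no certs parsed from %s", certPath)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	client, err := newProxyTrustingClient("/var/run/loasd/proxy-ca.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := client.Get("https://" + os.Getenv("LOASD_PROXY_IP_FOR_K8S_API"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println(resp.Status)
}
```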

Examples of services that might use this proxy (to be verified):

  • kubernetes api server
  • amazon s3 service
  • various GCP services
  • github
  • docker

Implementation sketch

How credentials get into the LOASD's memory is TBD. Probably it communicates securely with the apiserver, and perhaps also with a separate keystore. However it works, the mechanism is a detail hidden from the container API.

"Normal" pod traffic does not use the proxy. Only traffic that goes to certain IP addresses use the proxy. Iptables rules which are setup for each pod cause packets to go to LOASD. Pods learn what dest IP addresses they should use for given proxied service from an env var, e.g. LOASD_PROXY_IP_FOR_K8S_API=xxx.xxx.xxx.xxx
or
LOASD_PROXY_IP_FOR_AMAZON_S3=yyy.yyy.yyy.yyy
These IP addresses are allocated from a special address range. There will be considerable overlap between this code and the K8s Services and Portal code.
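
A minimal sketch of the client side under this scheme (the env var name is taken from the example above; the request path is illustrative only):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// The pod discovers the proxy address for a given service from an
	// env var injected at pod setup.
	proxyIP := os.Getenv("LOASD_PROXY_IP_FOR_K8S_API")
	if proxyIP == "" {
		fmt.Fprintln(os.Stderr, "no LOASD proxy configured")
		os.Exit(1)
	}
	// Plain HTTP to the proxy; loasd attaches credentials before
	// forwarding to the real apiserver, so none live in this pod.
	resp, err := http.Get("http://" + proxyIP + "/api/v1beta1/pods")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```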

The LOASD runs an HTTP server. It identifies which pod is talking to it from the source IP of the request, and the ultimate destination of the traffic from the dest IP. The dest IP is not the ultimate destination; it is one of the special IP addresses mentioned just above. LOASD then checks whether it has a "recipe" installed for that (sourceIP, destIP) pair. If it does, it uses that recipe to rewrite the HTTP request to add authentication. Typically this means injecting an Authorization header, replacing a dummy value in an existing Authorization header, or, in the case of Amazon, computing a signature and adding that to the request.
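
To make the recipe mechanism concrete, here is a hedged sketch, not loasd itself: a small Go reverse proxy that keys recipes off the (sourceIP, destIP) pair and injects a bearer token. All type and field names are assumptions.

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// recipeKey identifies one (pod, proxied service) pair: the source IP
// names the pod, the special dest IP names the ultimate service.
type recipeKey struct {
	sourceIP string
	destIP   string
}

// recipe says where traffic really goes and which credential to attach.
// A real implementation would also cover AWS-style request signing.
type recipe struct {
	upstream *url.URL
	token    string
}

// Populated out of band from the apiserver/keystore (TBD in the proposal).
var recipes = map[recipeKey]recipe{}

func handle(w http.ResponseWriter, r *http.Request) {
	srcIP, _, _ := net.SplitHostPort(r.RemoteAddr)
	dstIP := r.Host // the special VIP the pod was told to use
	if h, _, err := net.SplitHostPort(r.Host); err == nil {
		dstIP = h
	}
	rec, ok := recipes[recipeKey{srcIP, dstIP}]
	if !ok {
		http.Error(w, "no recipe for this (source, dest) pair", http.StatusForbidden)
		return
	}
	// The Opaque part: the credential is injected here, after the request
	// has left the pod, so the pod never sees it. Audit logging could
	// also hang off this point.
	r.Header.Set("Authorization", "Bearer "+rec.token)
	r.Host = rec.upstream.Host // point the Host header at the real service
	httputil.NewSingleHostReverseProxy(rec.upstream).ServeHTTP(w, r)
}

func main() {
	log.Fatal(http.ListenAndServe(":8000", http.HandlerFunc(handle)))
}
```

Building a reverse proxy per request is wasteful but keeps the sketch short; a real loasd would cache one per recipe.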

Scope of use

We might be able to make something generic enough that other cluster-infrastructure people would want to use it, with the community contributing "recipes" for other endpoints.

However, even if we only ever used this for authenticating Pod to APIserver traffic, I think it still has value.

@smarterclayton
Contributor

This has a lot of attack surface reduction advantages.

@jbeda
Contributor

jbeda commented Nov 6, 2014

I'm not a huge fan of doing this based on IP address. I'd love to find a better way to direct traffic here.

We should weigh a "token vending machine" that mints short-term tokens against an "auth stamping proxy" that transparently applies auth.

I'm a big fan of the former, and it is what we did with GCE. If creds do leak, they are short-term.
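
For contrast, a sketch of the "token vending machine" shape (the endpoint, port, and 15-minute lifetime are all assumptions): a node-local service mints a short-lived token, and only that token ever reaches the pod.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// vendToken hands the caller a fresh short-lived token, so a leak from
// a pod exposes only a credential that expires soon anyway. A real
// vending machine would mint this from a long-lived credential that
// never leaves the process; a random stand-in keeps the sketch
// self-contained.
func vendToken(w http.ResponseWriter, r *http.Request) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		http.Error(w, "entropy failure", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{
		"token":   hex.EncodeToString(buf),
		"expires": time.Now().Add(15 * time.Minute).UTC().Format(time.RFC3339),
	})
}

func main() {
	http.HandleFunc("/token", vendToken)
	log.Fatal(http.ListenAndServe("127.0.0.1:8001", nil))
}
```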

Another option is to support generic secret distribution with support for rotation.

This is obviously related to #2030.

@erictune
Member Author

erictune commented Nov 6, 2014

@thockin had an idea on how to move from IPs to something with DNS.

Maybe he can remind me what he said.

Token vending machine seems good for GCE, where the whole ecosystem uses tokens. Not sure how well it works for a broader ecosystem of services, not all of which use tokens (for example, could the vending machine work with Amazon's Authorization: AWS ... approach?)

@jbeda
Contributor

jbeda commented Nov 6, 2014

Yeah -- AWS has a way to mint short-lived tokens -- the problem is that each cloud/system will be slightly different. See: http://aws.amazon.com/code/7351543942956566

@smarterclayton
Contributor

Not only that, but the tokens will only work for tightly coupled systems in the cloud. So NFS servers, Git repositories behind SSH, Docker registries behind anything else, etc., would be left out.

@eparis
Contributor

eparis commented Nov 7, 2014

@smarterclayton But the proposal is specific to auth for HTTP requests. I don't initially see how this can expand to Kerberos for NFS or SSH public keys for Git access....

@thockin
Member

thockin commented Nov 7, 2014

My idea was to offer a new type of Service, "every node": rather than intercepting the VIP in kube-proxy, just redirect the VIP to the node's IP:port for loasd. Repeat for the local kubelet, ...

@bgrant0607 added area/security and priority/awaiting-more-evidence labels Dec 4, 2014
@erictune added sig/auth and removed area/security labels Apr 12, 2016
@bgrant0607 added the sig/service-catalog label Aug 31, 2016

@erictune
Member Author

erictune commented Jul 13, 2017

I consider the original request resolved by github.com/istio/istio

Thank you to all the Istio project members for an amazing project that works great with Kubernetes and closes this 2.5-year-old Kubernetes FR!
@frankbu @andraxylia @rshriram @sebastienvas @ldemailly @ayj @geeknoid @douglas-reid @esnible @myidpt @kyessenov @mandarjog @lookuptable @liamawhite @ZackButcher @smawson @GregHanson @costinm @louiscryan @wenchenglu @duglin @dcberg @odeke-em @elevran
