
Allow actors against the apiserver to be authenticated and actions to be authorized #443

Closed
smarterclayton opened this issue Jul 13, 2014 · 18 comments

@smarterclayton
Contributor

It should be possible to control access to the Kubernetes API via some mechanism which is easily integrated into large organizations, as well as simple to configure for single administrators. There are three related concepts - identity (who is acting), authentication (how does an API request get associated with an identity), and authorization (whether the action that is being taken with an identity in a context should be allowed).

A few thoughts:

  • The details of identity, authentication, and authorization can vary across environments. For Kubernetes, it would make sense to keep these concerns separated from the primary resource APIs so that different deployments can provide their own solutions (a sketch of one possible separation follows this list).
  • As a best practice, separating authentication from general API calls helps reduce the cost of integration. This may suggest something like the use of OAuth2 tokens for the API, with custom authentication solutions integrating into the token acquisition flow.
  • Allowing authorization to be restricted using scopes is important for machine integration and delegation of trust to third parties - a monitoring program does not automatically need the authority to delete pods.
  • Proper auditing (important in many business domains) depends heavily on ensuring that an action taken can be traced to a specific identity, time and place of authentication, and the authorizations the identity was operating under. While not necessarily a core resource, it should be easy for administrators to audit Kubernetes in the context of their larger structure.
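
A rough sketch of how these concerns might separate into pluggable pieces (all names here are illustrative, not a proposed API):

```go
package auth

import "net/http"

// UserInfo is the identity an authenticator resolves a request to.
type UserInfo struct {
	Name   string
	Groups []string
	Scopes []string // restricts what a token may do, e.g. read-only monitoring
}

// Authenticator maps an incoming API request to an identity, or rejects it.
// Deployments plug in their own: tokens, client certs, corporate SSO, ...
type Authenticator interface {
	AuthenticateRequest(req *http.Request) (user *UserInfo, ok bool, err error)
}

// Authorizer decides whether an identity may take an action in a context,
// independently of how that identity was established.
type Authorizer interface {
	Authorize(user *UserInfo, verb, resource, namespace string) (allowed bool, err error)
}
```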
@brendandburns
Contributor

There are a number of small steps here to start:

  1. Get the basic auth out of the nginx proxy, and into the apiserver (providing suitable abstractions to make auth pluggable)
  2. Implement OAuth2 (using https://github.com/RangelReale/osin? see the sketch below)

Those are good places to start while we figure out the details of scopes, ACLs, and auditing.
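
A rough sketch of an osin-based token endpoint, adapted from osin's README example (example.NewTestStorage is osin's in-memory demo storage; a real deployment would implement osin.Storage against etcd):

```go
package main

import (
	"log"
	"net/http"

	"github.com/RangelReale/osin"
	"github.com/RangelReale/osin/example"
)

func main() {
	server := osin.NewServer(osin.NewServerConfig(), example.NewTestStorage())

	// Token endpoint: exchanges a grant (e.g. client credentials) for an access token.
	http.HandleFunc("/token", func(w http.ResponseWriter, r *http.Request) {
		resp := server.NewResponse()
		defer resp.Close()
		if ar := server.HandleAccessRequest(resp, r); ar != nil {
			ar.Authorized = true // a real server would validate the grant here
			server.FinishAccessRequest(resp, r, ar)
		}
		osin.OutputJSON(resp, w, r)
	})

	log.Fatal(http.ListenAndServe(":14000", nil))
}
```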

@smarterclayton
Contributor Author

osin looks pretty good - it seems to be the most feature-complete and actively developed OAuth2 server implementation.

For pluggable auth, what makes the most sense? Normally I'd make this a middleware function in the handler chain and, depending on a config value, add it in so that a context object can be retrieved representing the authenticated identity. However, the implementations would today have to be statically compiled in, delegated to an external service (which may not be that bad), or implemented via something like otto (embeddable JS). The first seems like the simplest option right now.
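
A rough sketch of the middleware shape I have in mind, with a statically compiled-in authenticator (all names here are illustrative, and the identity is stashed on the request context purely for demonstration):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

type userKey struct{}

// authenticate is the pluggable piece: which implementation runs here
// would be chosen by apiserver configuration (token file, certs, ...).
func authenticate(r *http.Request) (user string, ok bool) {
	if r.Header.Get("Authorization") == "Bearer demo-token" {
		return "demo-user", true
	}
	return "", false
}

// withAuth resolves an identity before the resource handlers run and
// rejects unauthenticated requests outright.
func withAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, ok := authenticate(r)
		if !ok {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		ctx := context.WithValue(r.Context(), userKey{}, user)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello, %v\n", r.Context().Value(userKey{}))
	})
	http.ListenAndServe(":8080", withAuth(api))
}
```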

@erictune
Member

OAuth2 seems to offer lots of different usage models. Getting it right seems like it could take some time.

Is there a way we could unblock development on all the other things that rely on identity while authn work is underway? What do people think about using a list of user/passwords created at cluster startup plus HTTP basic auth to identify the principals that are making API calls? Obviously this is not a good final solution, but it could be useful in test scenarios, and it would unblock work on things that depend on authn.

@smarterclayton
Contributor Author

So in general we feel that no API going forward should ever be protected by anything other than client certs, token auth, or Kerberos ticket exchange. Basic auth opens the door to browser CSRF and also typically has undesirable performance characteristics. How about supporting simple bearer auth tokens from a file configured on the apiserver? That's as easy to implement as user/passwords, and is directly compatible with future OAuth.
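
A rough sketch of that token-file authenticator, assuming a hypothetical token,user CSV format handed to the apiserver at startup (the real file format would be decided during implementation):

```go
package tokenfile

import (
	"encoding/csv"
	"net/http"
	"os"
	"strings"
)

// TokenAuthenticator maps bearer tokens to usernames.
type TokenAuthenticator struct {
	tokens map[string]string // token -> user
}

// NewFromFile loads "token,user" lines from a file given to the apiserver.
func NewFromFile(path string) (*TokenAuthenticator, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	records, err := csv.NewReader(f).ReadAll()
	if err != nil {
		return nil, err
	}
	tokens := make(map[string]string, len(records))
	for _, rec := range records {
		if len(rec) < 2 {
			continue // skip malformed lines
		}
		tokens[rec[0]] = rec[1]
	}
	return &TokenAuthenticator{tokens: tokens}, nil
}

// AuthenticateRequest checks the Authorization: Bearer header, the same
// header a future OAuth2 access token would arrive in.
func (a *TokenAuthenticator) AuthenticateRequest(r *http.Request) (user string, ok bool) {
	const prefix = "Bearer "
	h := r.Header.Get("Authorization")
	if !strings.HasPrefix(h, prefix) {
		return "", false
	}
	user, ok = a.tokens[strings.TrimPrefix(h, prefix)]
	return user, ok
}
```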

@erictune
Member

Should the nginx server terminate SSL before forwarding to the apiserver? It seems like this puts the certs in a less-touched place, which is a good thing. For simple setups, nginx and apiserver trust each other because they are on the same machine?

@lavalamp
Member

“Should the nginx server terminate SSL before forwarding to the apiserver?”

I intuitively hate that we currently do this. I'd much rather we have some idea of identity/authentication/authorization. I can see outsourcing that, maybe, but definitely not to nginx. I'd also like all of our traffic to be SSL component-to-component...

(Am I missing some argument as to why what we're doing is not terrible? I think it was just a stop-gap measure, and not intended to stay this way...)

@smarterclayton
Contributor Author

The best use case for separating SSL from Go is when you have a legal / corporate / business requirement to use OpenSSL / FIPS mode. The best way to integrate with common authentication systems in big corporate deployments is to use apache+mod_auth as an authenticating proxy in front of your OAuth client-credentials flow, with a trusted connection assumed between apache and the auth server that exchanges the tokens (assuming OAuth, of course). Once you have an auth token, I'd say we should recommend direct component-to-component flows over SSL with certs (itself a PITA in common envs), except for the aforementioned SSL proxies.

I don't think nginx is special, although the point about keeping certs in an isolated place is not an uncommon request and should be considered.
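
For illustration, the apiserver side of that authenticating-proxy pattern might look like the sketch below (the X-Remote-User header name and the source-IP trust check are assumptions, not a design; a real deployment would use client certs on the proxy connection):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// proxyAuth trusts an upstream authenticating proxy (e.g. apache+mod_auth)
// to have verified the user and passed the identity along in a header.
func proxyAuth(trustedProxyIP string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, _ := net.SplitHostPort(r.RemoteAddr)
		user := r.Header.Get("X-Remote-User") // assumed header name
		if host != trustedProxyIP || user == "" {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello, %s\n", r.Header.Get("X-Remote-User"))
	})
	http.ListenAndServe("127.0.0.1:8080", proxyAuth("127.0.0.1", api))
}
```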

@erictune
Member

My (evolving) thoughts:

For the "client somewhere on the internet" --> "kubernetes frontend" segment, you definitely want private communication, both for transporting OAuth tokens, and for making data that is sent and returned private. SSL is the obvious choice for privacy, regardless of how auth handled.

For the "kubernetes frontend" --> "other k8s components", I agree that we should make https possible. I suspect we will want to support a variety of options:

  • no https, for ease of setup of a demo cluster
  • https for the common case
  • pluggable options for large-corp / hosted environments

This is just talking about privacy, not auth.

So, nginx seems like it is at a somewhat meaningful transition point. Whether that means it should terminate SSL, I'm still not sure.

@lavalamp
Member

I agree that we should support "SSL added and removed here" for the purpose of consumers connecting to a corporate gateway with an arbitrarily arcane configuration, although I'd argue that we should also require SSL between the gateway and our own apiserver.

I still think the k8s cluster itself should natively (in Go code) handle SSL. I don't think there's anything nginx can do that Go can't, and to me it feels simpler to have as much of the stack as possible in the same language (Go), as opposed to having part of it in nginx config files. There are also fewer attack vectors...
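
For what it's worth, terminating TLS natively in the Go apiserver is a one-liner with the standard library; a minimal sketch (the cert/key paths are placeholders that cluster setup would provision):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "apiserver speaking TLS directly, no nginx in front")
	})
	// Placeholder paths; cluster setup would generate or provision these.
	log.Fatal(http.ListenAndServeTLS(":6443", "apiserver.crt", "apiserver.key", nil))
}
```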

@erictune
Member

I'd like to rip out all of the basic auth code. However, I see from the nginx config that it is proxying /etcd as well as apiserver, with basic auth.

I don't see any evidence that any scripts are relying on this behavior, so I'd like to rip it out as part of implementing this Issue. But, it must be there for a reason. Who is using the /etcd/ proxying?

@erictune erictune self-assigned this Jul 28, 2014
@lavalamp
Member

Hm, maybe minions? I'd say take it out and see if e2e still passes...

@erictune
Member

Next couple of small steps for this issue:

  1. Stop proxying etcd (localhost:4001) on https://$MASTER_IP/etcd. #666
  2. Move struct AuthInfo out of pkg/client/client.go and into a separate pkg that apiserver can access too.
  3. Make the apiserver use AuthInfo and check Basic Auth. Stop giving htpasswd to nginx. Start giving a file of user:pass to the apiserver.
  4. Change struct AuthInfo { user, pass string } to struct AuthInfo { bearer_token string }. Change cluster setup to generate a single bearer token instead of a user:pass (see the sketch after this list).
  5. Beyond that:
    1. Figure out OAuth.
    2. Figure out multiple identities.
    3. Figure out fine-grained authorization of actions and delegation.
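
A rough sketch of the AuthInfo change in steps 2-4 (field names are guesses from the comment above; the real struct lives in pkg/client/client.go):

```go
package client

// Step 3: AuthInfo as shared basic-auth credentials, usable by both the
// client and the apiserver's checker (field names are illustrative).
type AuthInfo struct {
	User     string
	Password string
}

// Step 4 would collapse this to a single bearer token, e.g.:
//
//	type AuthInfo struct {
//		BearerToken string
//	}
//
// which lines up directly with a future OAuth2 access token.
```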

@lavalamp
Member

Those steps SGTM.

@smarterclayton
Contributor Author

SGTM as well

@erictune
Member

Status update:

Several proposals added:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/security.md
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/access.md

Another pending:
#1430

Not a lot of auth code committed yet.

Prototype underway by @smarterclayton on openshift:
openshift/origin#86

@smarterclayton
Contributor Author

The OpenShift prototype right now is:

  • An OAuth2 compatible server implementation (RangelReale/osin) that can store tokens in etcd
  • A simple user model that stores users in etcd and can be integrated into an OAuth2 flow easily
  • Some miscellaneous components (what's in pkg/authn) that can glue together disparate pieces
    • "login" and "implicit login" endpoints that can serve to do simple OAuth2 grants against the components above
    • Stubs of some filters and http handlers that will eventually be useful to enable simple bearer token -> user -> context flows

All of these things are standalone components that we can move into another registry / kube/plugins directory as needed. It's also not rebased on latest yet (see below).

I realized that I need something like the context work in order to make this truly carry through, so right now I'm focused on plumbing the Kube client to enable flexible transport-level auth. After that's available I'll probably rebase the user/authn/oauth branch on top.
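
A rough sketch of the transport-level plumbing I mean: wrap the client's http.RoundTripper so every request carries the bearer token (names here are illustrative, not the actual client code):

```go
package main

import "net/http"

// bearerRoundTripper injects an Authorization header into every request,
// so the rest of the Kube client stays unaware of how auth is configured.
type bearerRoundTripper struct {
	token string
	next  http.RoundTripper
}

func (rt *bearerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	req = req.Clone(req.Context())
	req.Header.Set("Authorization", "Bearer "+rt.token)
	return rt.next.RoundTrip(req)
}

func main() {
	client := &http.Client{
		Transport: &bearerRoundTripper{token: "demo-token", next: http.DefaultTransport},
	}
	// Every request now carries the token without callers doing anything special.
	resp, err := client.Get("https://master.example.com/api/v1beta1/pods")
	if err == nil {
		resp.Body.Close()
	}
}
```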

@erictune
Member

erictune commented Nov 6, 2014

Quite a bit of this is now done.
See docs/authorization.md and docs/authentication.md for an overview of what we have right now.

Filed new issues for remaining items:

Closing.

@erictune erictune closed this as completed Nov 6, 2014
@erictune
Member

erictune commented Nov 6, 2014

#2212
