Support multiple CLI configuration profiles #1755

Closed
thockin opened this issue Oct 13, 2014 · 15 comments
Assignees: smarterclayton
Labels: area/app-lifecycle, area/kubectl, area/usability, priority/backlog (higher priority than priority/awaiting-more-evidence)

Comments

@thockin (Member) commented Oct 13, 2014

I should be able to have multiple clusters that I am auth'ed into and flip between them with a symbolic name. Having a single auth for GCE is really painful when flipping between real and e2e clusters. Having to set KUBERNETES_PROVIDER sometimes and KUBERNETES_MASTER other times is something I easily forget.

Something like (obviously half-baked):

kubectl kube add timsgce https://1.2.3.4
kubectl kube add timse2e https://5.6.7.8
kubectl kube add vagrant

kubectl --kube=timsgce get pods
kubectl --kube=vagrant get pods

kubectl kube set timse2e
kubectl get pods

@ghodss for possible kubectl thinking.

@smarterclayton (Contributor) commented

@derekwaynecarr who also added some of this w.r.t. namespaces. We argued that the namespace choice should infer a master and auth as well. Since namespace is the smallest possible top-level grouping (server is broader than namespace), switching namespaces felt right. Also cc @jwforres

@ghodss (Contributor) commented Oct 22, 2014

I think https://github.com/spf13/viper can help us with storing and retrieving this config. It also seems like the right subcommand is kubectl config, which works generically across all config (similar to git config). So you could issue commands like this:

kubectl config colors on
kubectl config clusters.timsgce.host https://1.2.3.4
kubectl config clusters.timsgce.cert /path/to/cert
kubectl config clusters.timse2e.host https://5.6.7.8

And this stores everything in a YAML file in ~/.kubectl:

colors: on
clusters:
  timsgce:
    host: https://1.2.3.4
    cert: /path/to/cert
  timse2e:
    host: https://5.6.7.8

And you use the cluster config as such:

kubectl --cluster=timsgce get pods

This would replace our current ~/.kubernetes_auth and ~/.kubernetes_vagrant_auth files.
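For illustration, a minimal sketch of reading such a file with viper (assuming the ~/.kubectl layout above; the hard-coded cluster name stands in for a hypothetical --cluster flag):

package main

import (
    "fmt"
    "log"

    "github.com/spf13/viper"
)

func main() {
    // Look for an extensionless YAML file named .kubectl in $HOME,
    // matching the layout sketched above.
    viper.SetConfigName(".kubectl")
    viper.SetConfigType("yaml")
    viper.AddConfigPath("$HOME")
    if err := viper.ReadInConfig(); err != nil {
        log.Fatal(err)
    }

    // Nested keys are addressed with dots, mirroring the CLI syntax.
    cluster := "timsgce" // would come from a --cluster flag
    fmt.Println(viper.GetString("clusters." + cluster + ".host"))
    fmt.Println(viper.GetString("clusters." + cluster + ".cert"))
}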

@thockin (Member, Author) commented Oct 23, 2014

This sounds awesome


@bgrant0607 bgrant0607 changed the title CLIs should allow multiple auths & masters Support multiple CLI configuration profiles Oct 28, 2014
@bgrant0607 (Member) commented Oct 28, 2014

Moving the discussion here from #1941.

I agree that specification of lots of command-line flags and/or environment variables is tedious and error-prone, though environment variable initializations could be organized in scripts, for which we could provide templates:

. ./timsgce-profile.sh
kubectl ...
kubectl ...
kubectl ...

Requirements:

  • Reusable/reproducible configuration
  • Support multiple configuration profiles
  • kubectl should remember which profile is selected and/or get it from the environment so it doesn't have to be explicitly selected in every command
  • Possible to dump the current configuration
  • Cover all CLI parameters that enable connecting to a particular scope (namespace) within a particular apiserver
    • master name/addr
    • auth
    • namespace
  • Must be able to read the configuration without talking to apiserver

Nice to haves:

@smarterclayton (Contributor) commented

They can be organized in scripts, and that use case should always work (I can easily script kubectl in a way natural to the shell).

However, I feel that pushing the problem to end users to script creates a gap in the client experience. I believe a significant proportion of users will deal with multiple namespaces and kubernetes servers over the lifetime of interfacing with kubectl (20-40%). Transitioning between those namespaces or servers will form a large part of their kubectl interactions. Changing/scripting environment variables and switches rapidly wears on an end user much like reauthenticating via prompt or keyword every time. On the other hand, having disjoint settings (change namespace but not server, or server but not namespace) brings a significant risk of administrative users unintentionally performing destructive actions on the server. So the two extremes to me indicate a need to clearly and unambiguously manage the transition in a predictable fashion, and to couple namespace and server so users are not transitioning one without the other.


@bgrant0607 (Member) commented

@smarterclayton I wasn't actually suggesting that we should rely on scripts, but was using the example to motivate the requirements.

@smarterclayton (Contributor) commented

Sorry, didn't mean to imply that you were. Was trying to better articulate our thought process around the pattern.


@ghodss (Contributor) commented Dec 4, 2014

@derekwaynecarr, @jlowdermilk and I (#kubernetesunconference2014) sketched out a design for this. kubectl will look for a .kubeconfig file, first in the current directory and then in ~/.kube/.kubeconfig (it lives in a directory so there is an easy default place for certs and other config-related files). You can also point to a file with an env var (KUBECONFIG=/path/to/kubeconfig) or a command-line param (--kubeconfig=/path/to/kubeconfig), which take highest precedence. The format looks like this:

preferences:
  colors: on
clusters:
  tims-gce:
    host: https://1.2.3.4
  tims-e2e:
    host: https://5.6.7.8
users:
  tim-dev:
    cert: ~/.kubeauth/tim-dev-cert.key
  tim-prod:
    cert: ~/.kubeauth/prod-cert.key
contexts:
  gce:
    cluster: tims-gce
    user: tim-dev
    namespace: dev   # an undefined namespace means "default"
current-context: gce

It then unmarshals into the following Go struct:

package kubectl

type Config struct {
    Preferences Preferences
    Clusters map[string]Cluster
    Users map[string]authfile.KubeAuth // see #2302
    Contexts map[string]Context
    CurrentContext string
}

It's designed in a way that the maps can be merged across many files, so things like Clusters, Users and Contexts are all composable and additive across .kubeconfig files.

You can override anything at the command line.

  • kubectl --context=tims-gce get pods - Override the current context.
  • kubectl --cluster=tims-gce get pods - If found, use the current context, but override the cluster.
  • kubectl --host=1.2.3.4 --namespace=prod get pods - If found, use the current context, but override the specific parameters in host and namespace.
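A sketch of how that override chain could resolve, assuming a Context type with cluster/user/namespace fields and a Cluster type with a Host field (none of these field names are from the proposal):

type Cluster struct{ Host string }
type Context struct{ Cluster, User, Namespace string }

// resolve applies the precedence above: --context picks the context,
// --cluster swaps the cluster within it, and --host/--namespace
// override individual parameters last.
func resolve(cfg *Config, flagContext, flagCluster, flagHost, flagNamespace string) (host, namespace string) {
    name := cfg.CurrentContext
    if flagContext != "" {
        name = flagContext
    }
    ctx := cfg.Contexts[name]
    clusterName := ctx.Cluster
    if flagCluster != "" {
        clusterName = flagCluster
    }
    host = cfg.Clusters[clusterName].Host
    if flagHost != "" {
        host = flagHost
    }
    namespace = ctx.Namespace
    if namespace == "" {
        namespace = "default" // undefined namespace means "default"
    }
    if flagNamespace != "" {
        namespace = flagNamespace
    }
    return host, namespace
}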

You can create and manage this file by hand, using tooling, or with the kubectl interface like so:

  • kubectl config colors on - Turn on syntax highlighting in preferences.
  • kubectl config create-context <name-of-context> <cluster> <user> - Easy setup for a new context.
  • kubectl config list-context - List an expanded context taking into account all overrides.
  • kubectl config use-context <context> - Change the current-context in the highest-precedence ~/.kubeconfig.
  • There are other commands as well, but how the initial bootstrapping of the ~/.kubeconfig file gets populated is out of scope and will be filed and tracked as a separate issue after kubectl has been migrated to be powered by kubeconfig.
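And the file lookup described at the top of this comment could be as simple as (a sketch only; error handling elided):

import (
    "errors"
    "os"
    "path/filepath"
)

// findKubeconfig returns the first existing file in precedence order:
// --kubeconfig flag, then $KUBECONFIG, then ./.kubeconfig, then
// ~/.kube/.kubeconfig.
func findKubeconfig(flagPath string) (string, error) {
    candidates := []string{
        flagPath,
        os.Getenv("KUBECONFIG"),
        ".kubeconfig",
        filepath.Join(os.Getenv("HOME"), ".kube", ".kubeconfig"),
    }
    for _, p := range candidates {
        if p == "" {
            continue
        }
        if _, err := os.Stat(p); err == nil {
            return p, nil
        }
    }
    return "", errors.New("no kubeconfig found")
}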

@thockin (Member, Author) commented Dec 4, 2014

Your create-context args do not include namespace.

Should there be a way to simply change aspects of a context, or is a context immutable? E.g. should 'kubectl config context namespace foo' change just the namespace field?

Otherwise looks awesome.


@bgrant0607 bgrant0607 added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Dec 4, 2014
@smarterclayton (Contributor) commented

@deads2k will tackle this

@smarterclayton smarterclayton self-assigned this Dec 8, 2014
@ghodss (Contributor) commented Dec 8, 2014

Cool. https://github.com/imdario/mergo might be a good solution for merging and overriding config structs from different sources.
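A quick sketch of what that could look like with the Config struct from the earlier comment (by default mergo only fills fields that are still empty in the destination, which matches the precedence semantics here; the function name is illustrative):

import "github.com/imdario/mergo"

// mergeKubeconfigs folds lower-precedence configs into the
// highest-precedence one; values already set in merged win.
func mergeKubeconfigs(highest Config, lower ...Config) (Config, error) {
    merged := highest
    for _, c := range lower {
        if err := mergo.Merge(&merged, c); err != nil {
            return Config{}, err
        }
    }
    return merged, nil
}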

@zmerlynn (Member) commented Dec 8, 2014

It's not necessarily something that should block progress on this issue, but a nice-to-have along the way might be the ability to use more than one cluster in the e2e tests. (This issue may be necessary-but-not-sufficient to get there.) It would be nice if (a) we could shard tests across multiple clusters, and/or (b) run tests in the background and still work on other clusters. (a) is particularly useful if you have the resources and are trying to track down a flaky test.

@roberthbailey (Contributor) commented

@jlowdermilk might be interested in this

@j3ffml (Contributor) commented Dec 13, 2014

I think this is a good starting point. We probably also want a command like

  • kubectl config unset <config-key>

to remove config entries, but that's obvious, apart from the choice of name. I'd also add that I think we should limit the number of places we read a .kubeconfig file from to no more than two: the current directory and a known location (the proposed ~/.kube/.kubeconfig sgtm). An alternative is to also search each parent directory of the current one for a .kubeconfig file, but I can't think of use cases that would need that approach.

Another thing to consider: the current proposal implies storing cert/auth files in arbitrary locations (config just needs a path). We could also store them in a defined directory hierarchy and have kubectl look for the top level directory. That is, kubectl would look for config first in ./.kube/.kubeconfig, then in ~/.kube/.kubeconfig, and would derive auth file and cert file paths from the cluster and user respectively. An example .kubeconfig file and .kube directory could look like

# .kubeconfig
preferences:
  colors: on
clusters:
  - tims-gce
  - tims-e2e
users:
  - tim-dev
  - tim-prod
contexts:
  gce:
    cluster: tims-gce
    user: tim-dev
    namespace: dev   # an undefined namespace means "default"
current-context: gce

# .kube directory
.kube/
    .kubeconfig
    clusters/
        tims-gce/
            .kubernetes_auth
        tims-e2e/
            .kubernetes_auth
    users/
        tim-dev/
            cert.key
        tim-prod/
            cert.key

(Note that the above assumes a "host": "https://..." entry is added to the .kubernetes_auth file.) An advantage of this approach is that the .kubeconfig file doesn't have to change if we want to alter the set of files associated with a cluster/user/etc., since filenames are uniform and paths to them can be derived from the current context's cluster/user/namespace.
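Under that layout, the derivation itself is trivial (a sketch; the kubeDir argument and function names are illustrative, not part of the proposal):

import "path/filepath"

// Paths are derived from the fixed .kube hierarchy above instead of
// being stored in .kubeconfig.
func authFilePath(kubeDir, cluster string) string {
    return filepath.Join(kubeDir, "clusters", cluster, ".kubernetes_auth")
}

func certPath(kubeDir, user string) string {
    return filepath.Join(kubeDir, "users", user, "cert.key")
}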

@bgrant0607 (Member) commented

Done
