Higher level image and deployment concepts in Kubernetes #503

Closed
smarterclayton opened this issue Jul 17, 2014 · 16 comments
Labels
area/app-lifecycle area/usability kind/design Categorizes issue or PR as related to design. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@smarterclayton
Contributor

In Kubernetes, the reference from a container manifest to an image is a "name" - that name is arbitrary, and it is up to the user to specify how that name interacts with their docker build and docker registry scenarios. That includes ensuring that the name and label the user uses to refer to their image is not changed accidentally (so that new images aren't introduced outside of a controlled deployment process) and that the registry DNS that hosts the images is continuously available as long as that image may be needed (see the docker image discussions in moby/moby#6805 for how this might change).

That loose coupling is valuable for flexibility, but the lack of a concrete process leaves room for error and requires thought and control. In addition, the resolution of those names is tightly bound to the execution of the container in the Kubelet.

We think there is value in Kubernetes providing a set of higher level concepts above pods/replication controllers that can be used to create deployable units of containers. Two concepts we see as valuable are "builds" and "deployments" - the former can be used to compose new images (by leveraging the Kubernetes cluster for build slaves with resource control) and the latter can manage the process of transitioning between one set of podTemplates to another (and can be triggered by builds).

First, is this something that should be in Kubernetes? Should it be on top of Kubernetes as a separate server? Or is it something that could be optionally enabled by those who wish to work on it? We've got some ideas of how we could make this flow work really cleanly with Docker and images, but we'd want to get feedback on those ideas.

@thockin
Member

thockin commented Jul 19, 2014

I think there's value in those abstractions, but to my naive ears they sound like something built atop the core k8s primitives. We might still want to endorse and embrace them, but they are, by principle, a layer above. I think.


@smarterclayton
Contributor Author

Agreed - I don't think pods or replication controllers should know anything about builds or deployments. In fact, the layering is reversed: a type of build should be able to use a run-once pod to accomplish its goal, while a type of deployment may depend on a particular sequence of calls to replication controllers.

@ncdc
Member

ncdc commented Jul 25, 2014

We think a comprehensive platform should include deployment capabilities and a means to build images without requiring external infrastructure. Building images requires hosting infrastructure; at scale, we'd prefer to use the cluster's resources where possible and schedule builds just like any other task (i.e. a pod). Modeling this problem requires a notion of a pod that runs only once. We wanted to get a sense of what this integration might look like.

To that end, we've been working on a prototype that adds the ability to build images in Kubernetes. We feel there should be something fundamental between a build and a pod, so we've also added a simple job framework and a POC implementation of run-once semantics for pods.

A job contains a pod template, status, success flag, and a reference to the resulting pod. We expect that there will be different types of jobs in the future - for example, running a process inside an existing container or running a pod with multiple containers that all have to complete. We also expect to add dependency information such as predecessors and successors to jobs.
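To make that shape concrete, here is a minimal Go sketch of a Job object along those lines. This is an illustration of the description above, not the prototype's actual API; every field name is an assumption.

```go
package api

// Pod stands in for the existing pod API object; elided here.
type Pod struct{}

// Job wraps a pod template with run-once semantics. All field names are
// illustrative assumptions, not the prototype's actual definitions.
type Job struct {
	ID          string `json:"id"`
	PodTemplate Pod    `json:"podTemplate"` // template for the run-once pod
	Status      string `json:"status"`      // e.g. NOTSTARTED, RUNNING, COMPLETE
	Success     bool   `json:"success"`     // meaningful once Status is COMPLETE
	PodID       string `json:"podID"`       // reference to the resulting pod
}
```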

A new job controller (similar to the replication controller) looks for new jobs in storage and acts on them. It creates a run-once pod from the job's pod template and monitors the pod's status to completion.

The job controller will support different job types through delegation in the future.
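For illustration, here is a rough sketch of what such a poll-based control loop could look like in Go, in the style of the replication controller of the time. The registry and pod-runner interfaces are invented for this sketch and are not part of the prototype.

```go
package controller

import (
	"log"
	"time"
)

// Pod and Job are minimal stubs mirroring the sketch above.
type Pod struct{}

type Job struct {
	ID       string
	Template Pod
	Status   string // NOTSTARTED, RUNNING, COMPLETE
	Success  bool
	PodID    string
}

// JobRegistry abstracts job storage (presumably etcd in the prototype).
type JobRegistry interface {
	ListJobs(status string) ([]Job, error)
	UpdateJob(Job) error
}

// PodRunner abstracts launching a run-once pod and polling its status.
type PodRunner interface {
	RunOnce(Pod) (podID string, err error)
	Status(podID string) (done, success bool, err error)
}

// JobController drives each job from NOTSTARTED through RUNNING to COMPLETE.
type JobController struct {
	Jobs JobRegistry
	Pods PodRunner
}

// Run polls for work on a fixed interval until stop is closed.
func (c *JobController) Run(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			c.syncOnce()
		}
	}
}

func (c *JobController) syncOnce() {
	// Launch a run-once pod for each new job.
	newJobs, err := c.Jobs.ListJobs("NOTSTARTED")
	if err != nil {
		log.Printf("listing new jobs: %v", err)
		return
	}
	for _, j := range newJobs {
		podID, err := c.Pods.RunOnce(j.Template)
		if err != nil {
			log.Printf("starting job %s: %v", j.ID, err)
			continue
		}
		j.PodID, j.Status = podID, "RUNNING"
		c.Jobs.UpdateJob(j)
	}
	// Record completion for running jobs whose pod has finished.
	running, err := c.Jobs.ListJobs("RUNNING")
	if err != nil {
		log.Printf("listing running jobs: %v", err)
		return
	}
	for _, j := range running {
		done, ok, err := c.Pods.Status(j.PodID)
		if err != nil || !done {
			continue
		}
		j.Status, j.Success = "COMPLETE", ok
		c.Jobs.UpdateJob(j)
	}
}
```

The same loop shape also delegates naturally: a future version could dispatch to a per-type handler instead of hardcoding the run-once pod behavior.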

A build is a user's request to create a new Docker image from one or more inputs (such as a Dockerfile or Docker context). In our POC we implement Dockerfile builds - we expect to support multiple build types such as STI (source to images), packer, Dockerfile2, etc. We are especially interested in feedback about how this problem should be modeled to facilitate other build extensions.
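As one sketch of how the Build resource might be shaped to leave room for those other build types (all field names here are guesses for discussion, not the prototype's API):

```go
package api

// BuildType selects the strategy used to produce the image.
type BuildType string

const (
	DockerfileBuild BuildType = "dockerfile" // implemented in the POC
	STIBuild        BuildType = "sti"        // source-to-images, anticipated
)

// Build is a request to produce a new Docker image from some input.
// Field names are illustrative guesses only.
type Build struct {
	ID        string    `json:"id"`
	Type      BuildType `json:"type"`
	SourceURI string    `json:"sourceURI"` // Dockerfile / Docker context location
	ImageTag  string    `json:"imageTag"`  // name:tag to apply on success
	JobID     string    `json:"jobID"`     // job executing this build
	Status    string    `json:"status"`    // e.g. new, running, complete, failed
}
```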

A new build controller (similar to the replication controller) looks for new builds in storage and acts on them. It creates a job for the build, executes the job, and monitors its status to completion (success or failure).

The build controller can support different build implementations. The initial prototype defines a container that runs its own Docker daemon (Docker-in-Docker) and then executes docker build using the Docker context specified as a parameter to the Kubernetes build.
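Concretely, the translation from a build to a job might look something like the following sketch. The builder image name, the privileged flag, and the command-line flags are all hypothetical; only the overall shape (a privileged Docker-in-Docker pod running docker build against the supplied context) comes from the description above.

```go
package controller

// Build and Job here are minimal stubs mirroring the sketches above.
type Build struct {
	ID         string
	ContextURI string // location of the Docker context to build
	ImageTag   string // image name:tag to produce
}

type PodSpec struct {
	Image      string   // builder image carrying its own Docker daemon
	Privileged bool     // required for Docker-in-Docker
	Command    []string // what the builder container runs
}

type Job struct {
	ID  string
	Pod PodSpec
}

// jobForBuild translates a build request into a run-once job whose pod
// runs a Docker-in-Docker builder and executes `docker build` against
// the supplied context.
func jobForBuild(b Build) Job {
	return Job{
		ID: "build-" + b.ID,
		Pod: PodSpec{
			Image:      "example/docker-builder", // hypothetical builder image
			Privileged: true,
			Command: []string{
				"build",
				"--context=" + b.ContextURI,
				"--tag=" + b.ImageTag,
			},
		},
	}
}
```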

Implementation Notes:

We had to prototype/provide a couple of new capabilities to implement this proof of concept:

  • Run-once containers
  • Launching privileged containers (for Docker-in-Docker)

Link to our prototype: https://github.com/ironcladlou/kubernetes/tree/build-poc
@ironcladlou, @pmorie

We'll have a screencast demonstrating our prototype shortly! We appreciate all feedback - thanks!

@pmorie
Member

pmorie commented Jul 25, 2014

@smarterclayton
Contributor Author

On a Venn diagram, a Job and a ReplicationController definitely overlap. To me, there was value in a Job object that could be driven by an external state machine, with the job status serving as the state register (with the special states NOTSTARTED, RUNNING, and COMPLETE). I'd be interested in how others would model a reusable state machine on top of pod execution, or whether you would instead implement independent resources that depend only on pods. That pushed me toward separating Job from Build, but I could equally see it without that shared concept.
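One possible reading of "job status as a state register" is a pure transition function that an external driver calls after observing the pod, recording the result back into the job. A minimal, purely illustrative sketch:

```go
package job

// State is the job's coarse lifecycle register.
type State string

const (
	NotStarted State = "NOTSTARTED"
	Running    State = "RUNNING"
	Complete   State = "COMPLETE"
)

// Next computes the state an external driver should record after observing
// the underlying pod. podStarted and podDone would come from whatever
// machinery actually executes the pod.
func Next(cur State, podStarted, podDone bool) State {
	switch cur {
	case NotStarted:
		if podStarted {
			return Running
		}
	case Running:
		if podDone {
			return Complete
		}
	}
	return cur // COMPLETE is terminal; otherwise hold the current state
}
```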

@bgrant0607
Member

I definitely think this is a topic worth discussing, and we could host solutions in the kubernetes repo, either in the main tree or in subrepos, even if the APIs don't necessarily land in the main apiserver.

We definitely need to do something (or multiple somethings) to make deployment simpler. Some deployment mechanisms have been discussed already, such as declarative configuration (#113), pod templates and configuration generation (#170), and rolling updates (several issues).

So, we'd love to hear your ideas.

@smarterclayton
Contributor Author

@ncdc let's move build to a WIP pull so we can have a focused discussion on that.

For deployment, I'll create a separate issue on Monday covering the various features an admin-focused or developer-focused deployment service might want. I've created #635 to discuss API policy.

@shykes

shykes commented Jul 28, 2014

Whatever comes out of this, we'll consider it for merging into upstream Docker.

@ncdc
Member

ncdc commented Jul 28, 2014

@shykes which part(s) specifically?

@bgrant0607 bgrant0607 added kind/design Categorizes issue or PR as related to design. area/app-lifecycle labels Oct 4, 2014
@bgrant0607 bgrant0607 added priority/backlog Higher priority than priority/awaiting-more-evidence. kind/documentation Categorizes issue or PR as related to documentation. labels Dec 3, 2014
@bgrant0607
Member

We need a deployment solution and should document the recommended approach(es).

@davidopp davidopp added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Feb 17, 2015
@bgrant0607 bgrant0607 added area/usability priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Feb 28, 2015
@bgrant0607
Member

Not urgent, but how hard would it be to package most of this functionality as independent plugins (once we have a plugin mechanism)?

@smarterclayton
Contributor Author

Very little. Builds are decoupled except for the ability to watch for upstream images that change, and the same applies to deployments.


@davidopp
Member


@goltermann goltermann added this to the v1.0-candidate milestone Jun 24, 2015
@bgrant0607 bgrant0607 removed the kind/documentation Categorizes issue or PR as related to documentation. label Jun 25, 2015
@bgrant0607 bgrant0607 removed this from the v1.0-candidate milestone Jun 25, 2015
@ghodss
Contributor

ghodss commented Sep 1, 2015

Is this now a dupe of #1743? Or do we still want to keep this open for the idea of builds? If the latter, it may help to close this issue and fork off a new one, since this issue as-is covers a lot.

@ghodss
Contributor

ghodss commented Sep 1, 2015

The issue for a job controller is #1624, with the proposal for it at #11746.

@bgrant0607
Member

I'm closing this now. Jobs and Deployments are underway. If builds appear in Kubernetes, it will be as some kind of extension. We might need image metadata at some point, but that's not really discussed here in any detail.

b3atlesfan pushed a commit to b3atlesfan/kubernetes that referenced this issue Feb 5, 2021
linxiulei pushed a commit to linxiulei/kubernetes that referenced this issue Jan 18, 2024
changing the label names as per the standards