From ca934c7386a1ac9f5021b0c875ccff153d9f3745 Mon Sep 17 00:00:00 2001
From: Karl Isenberg
Date: Tue, 14 Jul 2015 13:03:16 -0700
Subject: [PATCH] Walkthrough example cleanup

- Add kubectl command examples
- Add tables of contents
- Skip 3rd header tier to make sections more clear
- Reference cmd-exec example for curling pod & service IPs
- Make section layout, text patterns, changes & links more consistent
- Canonical yaml formatting
---
 docs/user-guide/walkthrough/README.md | 180 +++++++++++----
 docs/user-guide/walkthrough/k8s201.md | 209 ++++++++++++++----
 .../walkthrough/pod-nginx-with-label.yaml | 12 +
 docs/user-guide/walkthrough/pod-nginx.yaml | 10 +
 docs/user-guide/walkthrough/pod-redis.yaml | 14 ++
 docs/user-guide/walkthrough/pod1.yaml | 8 -
 docs/user-guide/walkthrough/pod2.yaml | 16 --
 docs/user-guide/walkthrough/service.yaml | 2 +-
 examples/examples_test.go | 5 +-
 9 files changed, 341 insertions(+), 115 deletions(-)
 create mode 100644 docs/user-guide/walkthrough/pod-nginx-with-label.yaml
 create mode 100644 docs/user-guide/walkthrough/pod-nginx.yaml
 create mode 100644 docs/user-guide/walkthrough/pod-redis.yaml
 delete mode 100644 docs/user-guide/walkthrough/pod1.yaml
 delete mode 100644 docs/user-guide/walkthrough/pod2.yaml

diff --git a/docs/user-guide/walkthrough/README.md b/docs/user-guide/walkthrough/README.md
index 12aa415a44a35..2f47c6b354cd6 100644
--- a/docs/user-guide/walkthrough/README.md
+++ b/docs/user-guide/walkthrough/README.md
@@ -20,77 +20,176 @@ certainly want the docs that go with that version.

-# Kubernetes 101 - Walkthrough
+# Kubernetes 101 - Kubectl CLI & Pods
+
+For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers.
+
+In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/GoogleCloudPlatform/kubernetes/releases) or [the source](https://github.com/GoogleCloudPlatform/kubernetes).
+
+Table of Contents
+- [Kubectl CLI](#kubectl-cli)
+  - [Install Kubectl](#install-kubectl)
+  - [Configure Kubectl](#configure-kubectl)
+- [Pods](#pods)
+  - [Pod Definition](#pod-definition)
+  - [Pod Management](#pod-management)
+  - [Volumes](#volumes)
+  - [Multiple Containers](#multiple-containers)
+- [What's Next?](#whats-next)
+
+
+## Kubectl CLI
+
+The easiest way to interact with Kubernetes is via the command-line interface.
+
+If you downloaded a pre-compiled release, kubectl should be under `platforms/<os>/<arch>/`.
+
+If you built from source, kubectl should be either under `_output/local/bin/<os>/<arch>/` or `_output/dockerized/bin/<os>/<arch>/`.
+
+For more info about kubectl, including its usage, commands, and parameters, see the [kubectl CLI reference](../kubectl/kubectl.md).
+
+#### Install Kubectl
+
+The kubectl binary doesn't have to be installed to be executable, but the rest of the walkthrough will assume that it's in your PATH.
+
+The simplest way to install is to copy or move kubectl into a directory already in your PATH (like `/usr/local/bin`).
+
+An alternate method, useful if you're building from source and want to rebuild without re-installing, is to use `./cluster/kubectl.sh` instead of kubectl. That script will auto-detect the location of kubectl and proxy commands to it (ex: `./cluster/kubectl.sh cluster-info`).
+
+#### Configure Kubectl
+
+If you used `./cluster/kube-up.sh` to deploy your Kubernetes cluster, kubectl should already be locally configured.
+
+By default, kubectl configuration lives at `~/.kube/config`.
+
+If your cluster was deployed by other means (e.g. a [getting started guide](../../getting-started-guides/README.md)), you may want to configure the path to the Kubernetes apiserver in your shell environment:
+
+```sh
+export KUBERNETES_MASTER=http://<ip>:<port>/api
+```
+
+Check that kubectl is properly configured by getting the cluster state:
+
+```sh
+kubectl cluster-info
+```
+

 ## Pods

-The first atom of Kubernetes is a _pod_. A pod is a collection of containers that are symbiotically grouped.
+In Kubernetes, a group of one or more containers is called a _pod_. Containers in a pod are deployed together, and are started, stopped, and replicated as a group.

 See [pods](../../../docs/user-guide/pods.md) for more details.

-### Intro
-Trivially, a single container might be a pod. For example, you can express a simple web server as a pod:
+#### Pod Definition
+
+The simplest pod definition describes the deployment of a single container. For example, an nginx web server pod might be defined as follows:

 ```yaml
 apiVersion: v1
 kind: Pod
 metadata:
-  name: www
+  name: nginx
 spec:
   containers:
-  - name: nginx
-    image: nginx
+    - name: nginx
+      image: nginx
+      ports:
+        - containerPort: 80
 ```

-A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until you delete the Pod.
+A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. program failure, ...), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.

 See the [design document](../../../DESIGN.md) for more details.

-### Volumes
-Now that's great for a static web server, but what about persistent storage? We know that the container file system only lives as long as the container does, so we need more persistent storage. To do this, you also declare a ```volume``` as part of your pod, and mount it into a container:
+#### Pod Management
+
+Create a pod containing an nginx server ([pod-nginx.yaml](pod-nginx.yaml)):
+
+```sh
+kubectl create -f docs/user-guide/walkthrough/pod-nginx.yaml
+```
+
+List all pods:
+
+```sh
+kubectl get pods
+```
+
+On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
+
+Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
+
+```sh
+curl http://$(kubectl get pod nginx -o=template -t={{.status.podIP}})
+```
+
+Delete the pod by name:
+
+```sh
+kubectl delete pod nginx
+```
+
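Throughout this create/get/delete cycle, you can also watch Kubernetes drive the pod toward its declared desired state. A quick sketch (assuming the `-w`/`--watch` flag of `kubectl get` is available in your kubectl version):

```sh
# watch the nginx pod move through its lifecycle (e.g. Pending -> Running)
kubectl get pods -w
```

Press Ctrl-C to stop watching once the pod reaches the state you expect.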
+
+#### Volumes
+
+That's great for a simple static web server, but what about persistent storage?
+
+The container file system only lives as long as the container does. So if your app's state needs to survive relocation, reboots, and crashes, you'll need to configure some persistent storage.
+
+For this example, we'll create a Redis pod with a named volume and a volume mount that specifies where the volume appears inside the container.
+
+1. Define a volume:
+
+    ```yaml
+    volumes:
+      - name: redis-persistent-storage
+        emptyDir: {}
+    ```
+
+1. Define a volume mount within a container definition:
+
+    ```yaml
+    volumeMounts:
+      # name must match the volume name below
+      - name: redis-persistent-storage
+        # mount path within the container
+        mountPath: /data/redis
+    ```
+
+Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](pod-redis.yaml)):
+
 ```yaml
 apiVersion: v1
 kind: Pod
 metadata:
-  name: storage
+  name: redis
 spec:
   containers:
-  - name: redis
-    image: redis
-    volumeMounts:
-    # name must match the volume name below
-    - name: redis-persistent-storage
-      # mount path within the container
-      mountPath: /data/redis
-  volumes:
+    - name: redis
+      image: redis
+      volumeMounts:
+        - name: redis-persistent-storage
-      emptyDir: {}
-```
-
-Ok, so what did we do? We added a volume to our pod:
-```
+          mountPath: /data/redis
   volumes:
-  - name: redis-persistent-storage
-    emptyDir: {}
+    - name: redis-persistent-storage
+      emptyDir: {}
 ```

-And we added a reference to that volume to our container:
-```
-    volumeMounts:
-    # name must match the volume name below
-    - name: redis-persistent-storage
-      # mount path within the container
-      mountPath: /data/redis
-```
+Notes:
+- The volume mount name must match the name of a volume defined in the pod's `volumes` list.
+- The volume mount path is where the volume appears inside the container's file system.

-In Kubernetes, ```emptyDir``` Volumes live for the lifespan of the Pod, which is longer than the lifespan of any one container, so if the container fails and is restarted, our persistent storage will live on.
+##### Volume Types

-If you want to mount a directory that already exists in the file system (e.g. ```/var/logs```) you can use the ```hostPath``` directive.
+- **EmptyDir**: Creates a new directory that will persist across container failures and restarts.
+- **HostPath**: Mounts an existing directory on the minion's file system (e.g. `/var/logs`); a minimal declaration is sketched at the end of this page.

 See [volumes](../../../docs/user-guide/volumes.md) for more details.

-### Multiple Containers
+
+#### Multiple Containers

 _Note: The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples._

@@ -124,12 +223,13 @@ spec:
       emptyDir: {}
 ```

-Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked ```readOnly``` in the web server's case, since it doesn't need to write to the directory.
+Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked `readOnly` in the web server's case, since it doesn't need to write to the directory.
+
+Finally, we have also introduced an environment variable to the `git-monitor` container, which allows us to parameterize that container with the particular git repository that we want to track.

-Finally, we have also introduced an environment variable to the ```git-monitor``` container, which allows us to parameterize that container with the particular git repository that we want to track.
+## What's Next?

-### What's next?
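Picking up the **HostPath** type from the Volume Types list above: a `hostPath` volume is declared much like `emptyDir`, with the host directory given by a `path` field. A minimal sketch (the volume name and the `/var/logs` path are illustrative only):

```yaml
  volumes:
    - name: log-storage
      hostPath:
        path: /var/logs
```

Because `hostPath` exposes the host's file system to the pod, it is best reserved for system agents and debugging rather than for portable applications.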
Continue on to [Kubernetes 201](k8s201.md) or for a complete application see the [guestbook example](../../../examples/guestbook/README.md) diff --git a/docs/user-guide/walkthrough/k8s201.md b/docs/user-guide/walkthrough/k8s201.md index 89b67e33bc2d9..382e4d37e59cf 100644 --- a/docs/user-guide/walkthrough/k8s201.md +++ b/docs/user-guide/walkthrough/k8s201.md @@ -20,29 +20,78 @@ certainly want the docs that go with that version. -# Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking +# Kubernetes 201 - Labels, Replication Controllers, Services & Health Checking -### Overview -When we had just left off in the [previous episode](README.md) we had learned about pods, multiple containers and volumes. -We'll now cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and +If you went through [Kubernetes 101](README.md), you learned about kubectl, pods, volumes, and multiple containers. +For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to application productionization, deployment and scaling. -### Labels -Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful ```list``` request to the apiserver to retrieve a list of objects which match that label selector. For example: +In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/GoogleCloudPlatform/kubernetes/releases) or [the source](https://github.com/GoogleCloudPlatform/kubernetes). + +Table of Contents +- [Labels](#labels) +- [Replication Controllers](#replication-controllers) + - [Replication Controller Management](#replication-controller-management) +- [Services](#services) + - [Service Management](#service-management) +- [Health Checking](#health-checking) + - [Process Health Checking](#process-health-checking) + - [Application Health Checking](#application-health-checking) +- [What's Next?](#whats-next) + + +## Labels + +Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector. 
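To make the "label selector on a `list` request" idea concrete, here is a sketch of the equivalent raw REST call (assuming `KUBERNETES_MASTER` is set as in Kubernetes 101, the `default` namespace, and no authentication in between):

```sh
# list pods whose labels match app=nginx via the v1 REST API
curl "${KUBERNETES_MASTER}/v1/namespaces/default/pods?labelSelector=app%3Dnginx"
```

In practice, kubectl builds exactly this kind of query for you, as shown below.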
+
+To add a label, add a `labels` section under `metadata` in the pod definition:
+
+```yaml
+  labels:
+    app: nginx
+```
+
+For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  containers:
+    - name: nginx
+      image: nginx
+      ports:
+        - containerPort: 80
+```
+
+Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
+
+```sh
+kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
+```
+
+List all pods with the label `app=nginx`:

 ```sh
-kubectl get pods -l name=nginx
+kubectl get pods -l app=nginx
 ```

-Lists all pods who name label matches 'nginx'. Labels are discussed in detail [elsewhere](../../../docs/user-guide/labels.md), but they are a core concept for two additional building blocks for Kubernetes, Replication Controllers and Services
+For more information, see [Labels](../labels.md).
+Labels are a core concept used by two additional Kubernetes building blocks: Replication Controllers and Services.

-### Replication Controllers
-OK, now you have an awesome, multi-container, labelled pod and you want to use it to build an application, you might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down and how will you ensure that all pods are homogenous?
+## Replication Controllers

-Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas, into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods. The design of replication controllers is discussed in detail [elsewhere](../../../docs/user-guide/replication-controller.md).
+OK, now you know how to make awesome, multi-container, labeled pods, and you want to use them to build an application. You might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down, and how will you ensure that all pods are homogeneous?
+
+Replication controllers are the objects to answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
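Because the controller constantly reconciles the actual replica count against the desired one, scaling is just a matter of declaring a new desired size. A sketch, using the `nginx-controller` created below (the `scale` subcommand is assumed here; some earlier kubectl releases called it `resize`):

```sh
# declare a new desired size; the controller creates or deletes pods to match
kubectl scale rc nginx-controller --replicas=3
```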
+
+For example, here is a replication controller that instantiates two nginx pods ([replication-controller.yaml](replication-controller.yaml)):
+

-An example replication controller that instantiates two pods running nginx looks like:
 ```yaml
 apiVersion: v1
 kind: ReplicationController
@@ -53,7 +102,7 @@ spec:
   # selector identifies the set of Pods that this
   # replication controller is responsible for managing
   selector:
-    name: nginx
+    app: nginx
   # podTemplate defines the 'cookie cutter' used for creating
   # new pods when necessary
   template:
@@ -61,41 +110,100 @@ spec:
     labels:
       # Important: these labels need to match the selector above
       # The api server enforces this constraint.
-      name: nginx
+      app: nginx
     spec:
       containers:
-      - name: nginx
-        image: nginx
-        ports:
-        - containerPort: 80
+        - name: nginx
+          image: nginx
+          ports:
+            - containerPort: 80
+```
+
+#### Replication Controller Management
+
+Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):
+
+```sh
+kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
+```
+
+List all replication controllers:
+
+```sh
+kubectl get rc
 ```

-### Services
-Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the Service object achieves these goals. A Service basically combines an IP address and a label selector together to form a simple, static rallying point for connecting to a micro-service in your application.
+Delete the replication controller by name:
+
+```sh
+kubectl delete rc nginx-controller
+```
+
+For more information, see [Replication Controllers](../replication-controller.md).
+
+
+## Services
+
+Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you can't be required to re-configure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
+
+For example, here is a service that balances across the pods created in the previous nginx replication controller example ([service.yaml](service.yaml)):

-For example, here is a service that balances across the pods created in the previous nginx replication controller example:
 ```yaml
 apiVersion: v1
 kind: Service
 metadata:
-  name: nginx-example
+  name: nginx-service
 spec:
   ports:
-    - port: 8000 # the port that this service should serve on
-      # the container on each pod to connect to, can be a name
-      # (e.g. 'www') or a number (e.g. 80)
-      targetPort: 80
-      protocol: TCP
+    - port: 8000 # the port that this service should serve on
+      # the container on each pod to connect to, can be a name
+      # (e.g. 'www') or a number (e.g. 80)
+      targetPort: 80
+      protocol: TCP
   # just like the selector in the replication controller,
   # but this time it identifies the set of pods to load balance
   # traffic to.
   selector:
-    name: nginx
+    app: nginx
+```
+
+#### Service Management
+
+Create an nginx service ([service.yaml](service.yaml)):
+
+```sh
+kubectl create -f docs/user-guide/walkthrough/service.yaml
+```
+
+List all services:
+
+```sh
+kubectl get services
 ```

-When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service. Services are described in detail [elsewhere](../../../docs/user-guide/services.md).
+On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
+
+Provided the service IP is accessible, you should be able to access its http endpoint with curl on port 80:
+
+```sh
+SERVICE_IP=$(kubectl get service nginx-service -o=template -t={{.spec.clusterIP}})
+SERVICE_PORT=$(kubectl get service nginx-service -o=template '-t={{(index .spec.ports 0).port}}')
+curl http://${SERVICE_IP}:${SERVICE_PORT}
+```
+
+Delete the service by name:
+
+```sh
+kubectl delete service nginx-service
+```
+
+When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service.
+
+For more information, see [Services](../services.md).
+
+
+## Health Checking

-### Health Checking
 When I write code it never crashes, right? Sadly the [kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues)
 indicates otherwise...

 Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
 and repair of your application. That way, a system, external to your application itself, is responsible for monitoring the
 application and taking action to fix it. It's important that the system be outside of the application itself, since if
 your application fails, and the health checking agent is part of your application, it may fail as well, and you'll never know.
 In Kubernetes, the health check monitor is the Kubelet agent.

-#### Low level process health-checking
+#### Process Health Checking

 The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
 if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
 you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
 Kubernetes.

-#### Application health-checking
+#### Application Health Checking

 However, in many cases, this low-level health checking is insufficient. Consider for example, the following code:

@@ -145,7 +253,8 @@ In all cases, if the Kubelet discovers a failure, the container is restarted.
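One note before the configuration details: HTTP GET is not the only probe type. The v1 API also supports exec probes, which run a command inside the container and treat a non-zero exit status as a failure. A minimal sketch (the `/tmp/health` marker file is just an illustrative convention):

```yaml
    livenessProbe:
      exec:
        command:
          - cat
          - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
```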
The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.

-Here is an example config for a pod with an HTTP health check:
+Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):
+
 ```yaml
 apiVersion: v1
 kind: Pod
@@ -153,23 +262,27 @@ metadata:
   name: pod-with-healthcheck
 spec:
   containers:
-  - name: nginx
-    image: nginx
-    # defines the health checking
-    livenessProbe:
-      # an http probe
-      httpGet:
-        path: /_status/healthz
-        port: 80
-      # length of time to wait for a pod to initialize
-      # after pod startup, before applying health checking
-      initialDelaySeconds: 30
-      timeoutSeconds: 1
-    ports:
-    - containerPort: 80
+    - name: nginx
+      image: nginx
+      # defines the health checking
+      livenessProbe:
+        # an http probe
+        httpGet:
+          path: /_status/healthz
+          port: 80
+        # length of time to wait for a pod to initialize
+        # after pod startup, before applying health checking
+        initialDelaySeconds: 30
+        timeoutSeconds: 1
+      ports:
+        - containerPort: 80
 ```

-### What's next?
+For more information about health checking, see [Container Probes](../pod-states.md#container-probes).
+
+
+## What's Next?
+
 For a complete application see the [guestbook example](../../../examples/guestbook/).

diff --git a/docs/user-guide/walkthrough/pod-nginx-with-label.yaml b/docs/user-guide/walkthrough/pod-nginx-with-label.yaml
new file mode 100644
index 0000000000000..7053af0be4b34
--- /dev/null
+++ b/docs/user-guide/walkthrough/pod-nginx-with-label.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  containers:
+    - name: nginx
+      image: nginx
+      ports:
+        - containerPort: 80
diff --git a/docs/user-guide/walkthrough/pod-nginx.yaml b/docs/user-guide/walkthrough/pod-nginx.yaml
new file mode 100644
index 0000000000000..e65ec6f5b01bf
--- /dev/null
+++ b/docs/user-guide/walkthrough/pod-nginx.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+spec:
+  containers:
+    - name: nginx
+      image: nginx
+      ports:
+        - containerPort: 80
diff --git a/docs/user-guide/walkthrough/pod-redis.yaml b/docs/user-guide/walkthrough/pod-redis.yaml
new file mode 100644
index 0000000000000..4b8613347baa1
--- /dev/null
+++ b/docs/user-guide/walkthrough/pod-redis.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: redis
+spec:
+  containers:
+    - name: redis
+      image: redis
+      volumeMounts:
+        - name: redis-persistent-storage
+          mountPath: /data/redis
+  volumes:
+    - name: redis-persistent-storage
+      emptyDir: {}
diff --git a/docs/user-guide/walkthrough/pod1.yaml b/docs/user-guide/walkthrough/pod1.yaml
deleted file mode 100644
index 9aa81ad8272ec..0000000000000
--- a/docs/user-guide/walkthrough/pod1.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: www
-spec:
-  containers:
-  - name: nginx
-    image: nginx
diff --git a/docs/user-guide/walkthrough/pod2.yaml b/docs/user-guide/walkthrough/pod2.yaml
deleted file mode 100644
index 5dd7a8210d988..0000000000000
--- a/docs/user-guide/walkthrough/pod2.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  name: storage
-spec:
-  containers:
-  - name: redis
-    image: redis
-    volumeMounts:
-    # name must match the volume name below
-    - name: redis-persistent-storage
-      # mount path within the container
-      mountPath: /data/redis
-  volumes:
-    - name: redis-persistent-storage
-      emptyDir: {}
diff --git a/docs/user-guide/walkthrough/service.yaml b/docs/user-guide/walkthrough/service.yaml
index a8ddbc351722b..e3335367afac5 100644
--- a/docs/user-guide/walkthrough/service.yaml
+++ b/docs/user-guide/walkthrough/service.yaml
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: Service
 metadata:
-  name: nginx-example
+  name: nginx-service
 spec:
   ports:
     - port: 8000 # the port that this service should serve on
diff --git a/examples/examples_test.go b/examples/examples_test.go
index 0e17933c16018..e860ada4cbecc 100644
--- a/examples/examples_test.go
+++ b/examples/examples_test.go
@@ -161,8 +161,9 @@ func TestExampleObjectSchemas(t *testing.T) {
 			"redis-slave-service": &api.Service{},
 		},
 		"../docs/user-guide/walkthrough": {
-			"pod1": &api.Pod{},
-			"pod2": &api.Pod{},
+			"pod-nginx": &api.Pod{},
+			"pod-nginx-with-label": &api.Pod{},
+			"pod-redis": &api.Pod{},
 			"pod-with-http-healthcheck": &api.Pod{},
 			"service": &api.Service{},
 			"replication-controller": &api.ReplicationController{},
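To confirm that the renamed walkthrough files still validate against the API schemas, the test above can be run on its own. A sketch (assuming a configured Go toolchain, run from the repository root; the exact invocation may differ under the repo's hack/test scripts):

```sh
# run only the example schema validation test
go test ./examples -run TestExampleObjectSchemas
```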