Replace ``` with ` when emphasizing something inline in docs/
a-robinson committed Jul 19, 2015
1 parent b80c0e5 commit 68d6e3a
Showing 40 changed files with 218 additions and 218 deletions.
30 changes: 15 additions & 15 deletions docs/admin/admission-controllers.md
@@ -79,7 +79,7 @@ support all the features you expect.

## How do I turn on an admission control plug-in?

The Kubernetes API server supports a flag, ```admission_control``` that takes a comma-delimited,
The Kubernetes API server supports a flag, `admission_control`, that takes a comma-delimited,
ordered list of admission control choices to invoke prior to modifying objects in the cluster.
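
For example, a minimal invocation might look like this (the plug-in list and ordering below are illustrative only, not a recommendation):

```
kube-apiserver -admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```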

## What does each plug-in do?
@@ -102,16 +102,16 @@ commands in those containers, we strongly encourage enabling this plug-in.
### ServiceAccount

This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts.md).
We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects.
We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.

### SecurityContextDeny

This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.md) that defines options that were not available on the ```Container```.
This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.md) that defines options that were not available on the `Container`.

### ResourceQuota

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more details.
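
As a sketch (the namespace name and limits below are placeholders, using the v1 `ResourceQuota` fields), a quota could be created with:

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: demo
spec:
  hard:
    pods: "10"
    cpu: "4"
    memory: 4Gi
EOF
```
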
@@ -122,35 +122,35 @@ so that quota is not prematurely incremented only for the request to be rejected
### LimitRanger

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the ```default``` namespace.
applies a 0.1 CPU requirement to all Pods in the `default` namespace.

See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/) for more details.
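
As a sketch (names and values are placeholders, using the v1 `LimitRange` fields), a limit range that both caps and defaults container CPU could be created with:

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
  namespace: demo
spec:
  limits:
  - type: Container
    default:
      cpu: 100m
    max:
      cpu: "2"
      memory: 1Gi
EOF
```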

### NamespaceExists

This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace```
and reject the request if the ```Namespace``` was not previously created. We strongly recommend running
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and reject the request if the `Namespace` was not previously created. We strongly recommend running
this plug-in to ensure integrity of your data.

### NamespaceAutoProvision (deprecated)

This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace```
and create a new ```Namespace``` if one did not already exist previously.
This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and create a new `Namespace` if one did not already exist previously.

We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```.
We strongly recommend `NamespaceExists` over `NamespaceAutoProvision`.

### NamespaceLifecycle

This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new objects created in it.
This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it.

A ```Namespace``` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.

Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will
be merged into a single plug-in that enforces the life-cycle of a ```Namespace``` in Kubernetes.
Once `NamespaceAutoProvision` is deprecated, we anticipate `NamespaceLifecycle` and `NamespaceExists` will
be merged into a single plug-in that enforces the life-cycle of a `Namespace` in Kubernetes.

## Is there a recommended set of plug-ins to use?

6 changes: 3 additions & 3 deletions docs/admin/cluster-components.md
@@ -135,15 +135,15 @@ network rules on the host and performing connection forwarding.

### docker

```docker``` is of course used for actually running containers.
`docker` is of course used for actually running containers.

### rkt

```rkt``` is supported experimentally as an alternative to docker.
`rkt` is supported experimentally as an alternative to docker.

### monit

```monit``` is a lightweight process babysitting system for keeping kubelet and docker
`monit` is a lightweight process babysitting system for keeping kubelet and docker
running.


4 changes: 2 additions & 2 deletions docs/admin/cluster-troubleshooting.md
@@ -48,12 +48,12 @@ Run
```
kubectl get nodes
```

And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state.
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.

## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)
of the relevant log files. (note that on systemd-based systems, you may need to use `journalctl` instead)
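
For instance, on a systemd-based node you might run something along these lines (unit and file names vary by setup):

```
# systemd-based nodes:
journalctl -u kubelet
# other nodes typically log to files under /var/log, e.g.:
tail -n 100 /var/log/kubelet.log
```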

### Master

46 changes: 23 additions & 23 deletions docs/admin/high-availability.md
@@ -99,21 +99,21 @@ describe easy installation for single-master clusters on a variety of platforms.

On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
the ```kubelet``` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.

If you are extending from a standard Kubernetes installation, the ```kubelet``` binary should already be present on your system. You can run
```which kubelet``` to determine if the binary is in fact installed. If it is not installed,
If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](../../cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.

If you are using monit, you should also install the monit daemon (```apt-get install monit```) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
[high-availability/monit-docker](high-availability/monit-docker) configs.

On systemd systems you ```systemctl enable kubelet``` and ```systemctl enable docker```.
On systemd systems you `systemctl enable kubelet` and `systemctl enable docker`.
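
In other words, the process-watcher setup amounts to roughly the following on each master:

```
# Debian-style masters using monit:
apt-get install monit
# systemd-based masters (e.g. RHEL, CentOS):
systemctl enable kubelet
systemctl enable docker
```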


## Establishing a redundant, reliable data storage layer
@@ -140,14 +140,14 @@ First, hit the etcd discovery service to create a new token:
```
curl https://discovery.etcd.io/new?size=3
```

On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into ```/etc/kubernetes/manifests/etcd.yaml```
On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`

The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the ```etcd```
server from the definition of the pod specified in ```etcd.yaml```.
The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
server from the definition of the pod specified in `etcd.yaml`.

Note that in ```etcd.yaml``` you should substitute the token URL you got above for ```${DISCOVERY_TOKEN}``` on all three machines,
and you should substitute a different name (e.g. ```node-1```) for ${NODE_NAME} and the correct IP address
for ```${NODE_IP}``` on each machine.
Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
and you should substitute a different name (e.g. `node-1`) for ${NODE_NAME} and the correct IP address
for `${NODE_IP}` on each machine.
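
One rough way to perform those substitutions (the token URL, node name, and IP below are placeholders; use the values for your cluster):

```
sed -i \
  -e "s|\${DISCOVERY_TOKEN}|https://discovery.etcd.io/<token-from-the-curl-call-above>|" \
  -e "s|\${NODE_NAME}|node-1|" \
  -e "s|\${NODE_IP}|10.240.0.1|" \
  /etc/kubernetes/manifests/etcd.yaml
```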


#### Validating your cluster
@@ -164,7 +164,7 @@ and
```
etcdctl cluster-health
```

You can also validate that this is working with ```etcdctl set foo bar``` on one node, and ```etcd get foo```
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
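
That is, something along these lines:

```
# on one master:
etcdctl set foo bar
# on a different master; this should print "bar":
etcdctl get foo
```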

### Even more reliable storage
@@ -181,7 +181,7 @@ Alternatively, you can run a clustered file system like Gluster or Ceph. Finall

Regardless of how you choose to implement it, if you chose to use one of these options, you should make sure that your storage is mounted
to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in ```/var/etcd/data```
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`


## Replicated API Servers
@@ -196,7 +196,7 @@ First you need to create the initial log file, so that Docker mounts a file inst
```
touch /var/log/kube-apiserver.log
```

Next, you need to create a ```/srv/kubernetes/``` directory on each node. This directory includes:
Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
@@ -209,9 +209,9 @@ The easiest way to create this directory may be to copy it from the master node
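
For example (the host name below is hypothetical):

```
scp -r user@existing-master:/srv/kubernetes/ /srv/
```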

### Starting the API Server

Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node.
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.

The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified
The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
in the file.
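
For example (the source path depends on where you downloaded the manifest):

```
cp kube-apiserver.yaml /etc/kubernetes/manifests/
# after a short delay, the kubelet should have started the container:
docker ps | grep kube-apiserver
```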

### Load balancing
@@ -224,17 +224,17 @@ Platform can be found [here](https://cloud.google.com/compute/docs/load-balancin
Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
in addition to the IP addresses of the individual nodes.

For pods that you deploy into the cluster, the ```kubernetes``` service/dns name should provide a load balanced endpoint for the master automatically.
For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.

For external users of the API (e.g. the ```kubectl``` command line interface, continuous build pipelines, or other clients) you will want to configure
For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
them to talk to the external load balancer's IP address.
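
A sketch of that client-side configuration (the address is a placeholder, credential setup is omitted, and exact flags depend on your kubectl version and auth setup):

```
kubectl config set-cluster ha-cluster --server=https://<load-balancer-ip> \
  --certificate-authority=/srv/kubernetes/ca.crt
kubectl config set-context ha --cluster=ha-cluster
kubectl config use-context ha
```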

## Master elected components

So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
master election. On each of the three apiserver nodes, we run a small utility application named ```podmaster```. It's job is to implement a master
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. Its job is to implement a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
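
The shape of that election, sketched here with plain etcdctl rather than the actual podmaster code (the key path and TTL are hypothetical), is an atomic create-if-absent with a lease:

```
# succeeds only if no other node currently holds the lock;
# the TTL lets the lock expire if the holder dies:
etcdctl mk /podmaster/scheduler "$(hostname)" --ttl 30
```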

@@ -250,14 +250,14 @@ touch /var/log/kube-controller-manager.log

Next, set up the descriptions of the scheduler and controller manager pods on each node,
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the ```/srv/kubernetes/```
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
directory.

### Running the podmaster

Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/```
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`

As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```.
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.
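
Concretely, the two copy steps above look roughly like this (source paths depend on where you fetched the files):

```
cp kube-scheduler.yaml kube-controller-manager.yaml /srv/kubernetes/
cp podmaster.yaml /etc/kubernetes/manifests/
```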

Now you will have one instance of the scheduler process running on a single master node, and likewise one
controller-manager process running on a single (possibly different) master node. If either of these processes fail,
@@ -272,7 +272,7 @@ If you have an existing cluster, this is as simple as reconfiguring your kubelet
restarting the kubelets on each node.

If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the ```--apiserver``` flag to your replicated endpoint.
set the `--apiserver` flag to your replicated endpoint.
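
For instance (the address is a placeholder, other flags are omitted, and exact flag names vary between releases):

```
kubelet --apiserver=https://<load-balancer-ip>
kube-proxy --master=https://<load-balancer-ip>
```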

## Vagrant up!

2 changes: 1 addition & 1 deletion docs/design/admission_control_limit_range.md
@@ -136,7 +136,7 @@ $ kube-apiserver -admission_control=LimitRanger

kubectl is modified to support the **LimitRange** resource.

```kubectl describe``` provides a human-readable output of limits.
`kubectl describe` provides a human-readable output of limits.

For example,

2 changes: 1 addition & 1 deletion docs/design/admission_control_resource_quota.md
@@ -163,7 +163,7 @@ this being the resource most closely running at the prescribed quota limits.

kubectl is modified to support the **ResourceQuota** resource.

```kubectl describe``` provides a human-readable output of quota.
`kubectl describe` provides a human-readable output of quota.

For example,
