Run gendocs
thockin committed Jul 17, 2015
1 parent aacc4c8 commit 33f1862
Showing 210 changed files with 599 additions and 27 deletions.
1 change: 1 addition & 0 deletions docs/README.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Documentation: releases.k8s.io/HEAD

* The [User's guide](user-guide/README.md) is for anyone who wants to run programs and
2 changes: 2 additions & 0 deletions docs/admin/README.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Cluster Admin Guide

The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
@@ -72,6 +73,7 @@ If you are modifying an existing guide which uses Salt, this document explains [
project.](salt.md).

## Upgrading a cluster

[Upgrading a cluster](cluster-management.md).

## Managing nodes
3 changes: 3 additions & 0 deletions docs/admin/accessing-the-api.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Configuring APIserver ports

This document describes what ports the kubernetes apiserver
@@ -42,6 +43,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).


## Ports and IPs Served On

The Kubernetes API is served by the Kubernetes APIServer process. Typically,
there is one of these running on a single kubernetes-master node.

@@ -93,6 +95,7 @@ variety of uses cases:
setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth.

## Expected changes

- Policy will limit the actions kubelets can do via the authed port.
- Scheduler and Controller-manager will use the Secure Port too. They
will then be able to run on different machines than the apiserver.
1 change: 1 addition & 0 deletions docs/admin/admission-controllers.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Admission Controllers

**Table of Contents**
1 change: 1 addition & 0 deletions docs/admin/authentication.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Authentication Plugins

Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.
3 changes: 3 additions & 0 deletions docs/admin/authorization.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Authorization Plugins


@@ -53,6 +54,7 @@ The following implementations are available, and are selected by flag:
`ABAC` allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.

## ABAC Mode

### Request Attributes

A request has 4 attributes that can be considered for authorization:
@@ -105,6 +107,7 @@ To permit any user to do something, write a policy with the user property unset.
To permit an action Policy with an unset namespace applies regardless of namespace.

### Examples

1. Alice can do anything: `{"user":"alice"}`
2. Kubelet can read any pods: `{"user":"kubelet", "resource": "pods", "readonly": true}`
3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
2 changes: 2 additions & 0 deletions docs/admin/cluster-components.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Cluster Admin Guide: Cluster Components

This document outlines the various binary components that need to run to
@@ -92,6 +93,7 @@ These controllers include:
selects a node for them to run on.

### addons

Addons are pods and services that implement cluster features. They don't run on
the master VM, but currently the default setup scripts that make the API calls
to create these pods and services does run on the master VM. See:
3 changes: 3 additions & 0 deletions docs/admin/cluster-large.md
@@ -30,9 +30,11 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes Large Cluster

## Support

At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 container per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).

## Setup
@@ -59,6 +61,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.

### Addon Resources

To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).

For example:
1 change: 1 addition & 0 deletions docs/admin/cluster-management.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Management

This doc is in progress.
6 changes: 6 additions & 0 deletions docs/admin/cluster-troubleshooting.md
@@ -30,13 +30,16 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Cluster Troubleshooting

This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit [troubleshooting document](../troubleshooting.md) for more information.

## Listing your cluster

The first thing to debug in your cluster is if your nodes are all registered correctly.

Run
@@ -48,15 +51,18 @@ kubectl get nodes
And verify that all of the nodes you expect to see are present and that they are all in the ```Ready``` state.

## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)

### Master

* /var/log/kube-apiserver.log - API Server, responsible for serving the API
* /var/log/kube-scheduler.log - Scheduler, responsible for making scheduling decisions
* /var/log/kube-controller-manager.log - Controller that manages replication controllers

### Worker Nodes

* /var/log/kubelet.log - Kubelet, responsible for running containers on the node
* /var/log/kube-proxy.log - Kube Proxy, responsible for service load balancing

1 change: 1 addition & 0 deletions docs/admin/dns.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# DNS Integration with Kubernetes

As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
18 changes: 17 additions & 1 deletion docs/admin/high-availability.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# High Availability Kubernetes Clusters

**Table of Contents**
@@ -43,6 +44,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: GENERATED_TOC -->

## Introduction

This document describes how to build a high-availability (HA) Kubernetes cluster. This is a fairly advanced topic.
Users who merely want to experiment with Kubernetes are encouraged to use configurations that are simpler to set up such as
the simple [Docker based single node cluster instructions](../../docs/getting-started-guides/docker.md),
@@ -52,6 +54,7 @@ Also, at this time high availability support for Kubernetes is not continuously
be working to add this continuous testing, but for now the single-node master installations are more heavily tested.

## Overview

Setting up a truly reliable, highly available distributed system requires a number of steps, it is akin to
wearing underwear, pants, a belt, suspenders, another pair of underwear, and another pair of pants. We go into each
of these steps in detail, but a summary is given here to help guide and orient the user.
@@ -68,6 +71,7 @@ Here's what the system should look like when it's finished:
Ready? Let's get started.

## Initial set-up

The remainder of this guide assumes that you are setting up a 3-node clustered master, where each machine is running some flavor of Linux.
Examples in the guide are given for Debian distributions, but they should be easily adaptable to other distributions.
Likewise, this set up should work whether you are running in a public or private cloud provider, or if you are running
@@ -78,6 +82,7 @@ instructions at [https://get.k8s.io](https://get.k8s.io)
describe easy installation for single-master clusters on a variety of platforms.

## Reliable nodes

On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
the ```kubelet``` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
@@ -98,6 +103,7 @@ On systemd systems you ```systemctl enable kubelet``` and ```systemctl enable do


## Establishing a redundant, reliable data storage layer

The central foundation of a highly available solution is a redundant, reliable storage layer. The number one rule of high-availability is
to protect the data. Whatever else happens, whatever catches on fire, if you have the data, you can rebuild. If you lose the data, you're
done.
@@ -109,6 +115,7 @@ size of the cluster from three to five nodes. If that is still insufficient, yo
[even more redundancy to your storage layer](#even-more-reliable-storage).

### Clustering etcd

The full details of clustering etcd are beyond the scope of this document, lots of details are given on the
[etcd clustering page](https://github.com/coreos/etcd/blob/master/Documentation/clustering.md). This example walks through
a simple cluster set up, using etcd's built in discovery to build our cluster.
@@ -130,6 +137,7 @@ for ```${NODE_IP}``` on each machine.


#### Validating your cluster

Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with

```
@@ -146,6 +154,7 @@ You can also validate that this is working with ```etcdctl set foo bar``` on one
on a different node.

### Even more reliable storage

Of course, if you are interested in increased data reliability, there are further options which makes the place where etcd
installs it's data even more reliable than regular disks (belts *and* suspenders, ftw!).

@@ -162,9 +171,11 @@ for each node. Throughout these instructions, we assume that this storage is mo


## Replicated API Servers

Once you have replicated etcd set up correctly, we will also install the apiserver using the kubelet.

### Installing configuration files

First you need to create the initial log file, so that Docker mounts a file instead of a directory:

```
@@ -183,12 +194,14 @@ Next, you need to create a ```/srv/kubernetes/``` directory on each node. This
The easiest way to create this directory, may be to copy it from the master node of a working cluster, or you can manually generate these files yourself.

### Starting the API Server

Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node.

The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified
in the file.

### Load balancing

At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
up a load balancer will depend on the specifics of your platform, for example instructions for the Google Cloud
@@ -203,6 +216,7 @@ For external users of the API (e.g. the ```kubectl``` command line interface, co
them to talk to the external load balancer's IP address.

## Master elected components

So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
@@ -226,6 +240,7 @@ by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kub
directory.

### Running the podmaster

Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/```

As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```.
@@ -236,6 +251,7 @@ the kubelet will restart them. If any of these nodes fail, the process will mov
node.

## Conclusion

At this point, you are done (yeah!) with the master components, but you still need to add worker nodes (boo!).

If you have an existing cluster, this is as simple as reconfiguring your kubelets to talk to the load-balanced endpoint, and
@@ -244,7 +260,7 @@ restarting the kubelets on each node.
If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the ```--apiserver``` flag to your replicated endpoint.

##Vagrant up!
## Vagrant up!

We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).

1 change: 1 addition & 0 deletions docs/admin/kube-apiserver.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-apiserver


1 change: 1 addition & 0 deletions docs/admin/kube-controller-manager.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-controller-manager


1 change: 1 addition & 0 deletions docs/admin/kube-proxy.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-proxy


1 change: 1 addition & 0 deletions docs/admin/kube-scheduler.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kube-scheduler


1 change: 1 addition & 0 deletions docs/admin/kubelet.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## kubelet


2 changes: 2 additions & 0 deletions docs/admin/multi-cluster.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Considerations for running multiple Kubernetes clusters

You may want to set up multiple kubernetes clusters, both to
@@ -65,6 +66,7 @@ Reasons to have multiple clusters include:
- test clusters to canary new Kubernetes releases or other cluster software.

## Selecting the right number of clusters

The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
load and growth.
1 change: 1 addition & 0 deletions docs/admin/networking.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Networking in Kubernetes

**Table of Contents**
1 change: 1 addition & 0 deletions docs/admin/node.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Node

**Table of Contents**
1 change: 1 addition & 0 deletions docs/admin/ovs-networking.md
@@ -30,6 +30,7 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

# Kubernetes OpenVSwitch GRE/VxLAN networking

This document describes how OpenVSwitch is used to setup networking between pods across nodes.