docs: move older design docs into the git repo
Signed-off-by: Tiago Castro <[email protected]>
tiagolobocastro committed Jan 24, 2025

1 parent 032e144 commit 01d75c2
Showing 10 changed files with 1,003 additions and 28 deletions.
46 changes: 21 additions & 25 deletions doc/csi.md
@@ -7,27 +7,25 @@ document.
Basic workflow starting from registration is as follows:

1. csi-node-driver-registrar retrieves information about csi plugin (mayastor) using csi identity service.
1. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
1. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
1. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
1. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.
2. csi-node-driver-registrar registers csi plugin with kubelet passing plugin's csi endpoint as parameter.
3. kubelet uses csi identity and node services to retrieve information about the plugin (including plugin's ID string).
4. kubelet creates a custom resource (CR) "csi node info" for the CSI plugin.
5. kubelet issues requests to publish/unpublish and stage/unstage volume to the CSI plugin when mounting the volume.

The registration of the storage nodes (i/o engines) with the control plane is handled
by a gRPC service which is independent from the CSI plugin.
by a gRPC service which is independent of the CSI plugin.

<br>

```mermaid
graph LR;
PublicApi["Public
API"]
CO["Container
Orchestrator"]
graph LR
;
PublicApi{"Public<br>API"}
CO[["Container<br>Orchestrator"]]
subgraph "Mayastor Control-Plane"
Rest["Rest"]
InternalApi["Internal
API"]
InternalApi["Internal<br>API"]
InternalServices["Agents"]
end
@@ -36,20 +34,18 @@ graph LR;
end
subgraph "Mayastor CSI"
Controller["Controller
Plugin"]
Node_1["Node
Plugin"]
Controller["Controller<br>Plugin"]
Node_1["Node<br>Plugin"]
end
%% Connections
CO --> Node_1
CO --> Controller
Controller --> |REST/http| PublicApi
PublicApi --> Rest
Rest --> |gRPC| InternalApi
InternalApi --> |gRPC| InternalServices
%% Connections
CO -.-> Node_1
CO -.-> Controller
Controller -->|REST/http| PublicApi
PublicApi -.-> Rest
Rest -->|gRPC| InternalApi
InternalApi -.->|gRPC| InternalServices
Node_1 <--> PublicApi
Node_1 --> |NVMeOF| IO_Node_1
IO_Node_1 <--> |gRPC| InternalServices
Node_1 -.->|NVMeOF| IO_Node_1
IO_Node_1 <-->|gRPC| InternalServices
```
172 changes: 172 additions & 0 deletions doc/design/control-plane-behaviour.md
@@ -0,0 +1,172 @@
# Control Plane Behaviour

This document describes the types of behaviour that the control plane will exhibit in various situations. By
providing a high-level view, it is hoped that the reader will be able to more easily reason about the control plane. \
<br>

## REST API Idempotency

Idempotency is a term that gets used a lot but is often misconstrued. The following definition is taken from
the [Mozilla Glossary](https://developer.mozilla.org/en-US/docs/Glossary/Idempotent):

> An [HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP) method is **idempotent** if an identical request can be
> made once or several times in a row with the same effect while leaving the server in the same state. In other words,
> an idempotent method should not have any side-effects (except for keeping statistics). Implemented correctly, the
> `GET`, `HEAD`, `PUT`, and `DELETE` methods are idempotent, but not the `POST` method.
> All [safe](https://developer.mozilla.org/en-US/docs/Glossary/Safe) methods are also ***idempotent***.

OK, so making multiple identical requests should produce the same result ***without side effects***. Great, so does the
return value for each request have to be the same? The article goes on to say:

> To be idempotent, only the actual back-end state of the server is considered, the status code returned by each request
> may differ: the first call of a `DELETE` will likely return a `200`, while successive ones will likely return a `404`.

The control plane will behave exactly as described above. If, for example, multiple “create volume” calls are made for
the same volume, the first will return success (`HTTP 200` code) while subsequent calls will return a failure status
code (`HTTP 409` code) indicating that the resource already exists. \
<br>
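
A minimal sketch of this behaviour (not the actual control-plane code; the names are illustrative): the back-end state
changes at most once no matter how many identical create calls are made, while the returned status differs per call.

```rust
use std::collections::HashSet;

/// Stand-in for the control plane's registry of existing volumes.
struct Registry {
    volumes: HashSet<String>,
}

/// Maps to `HTTP 200` on the first call and `HTTP 409` on repeats.
enum CreateOutcome {
    Created,
    AlreadyExists,
}

fn create_volume(registry: &mut Registry, id: &str) -> CreateOutcome {
    if registry.volumes.contains(id) {
        // Identical repeat request: back-end state is left untouched.
        CreateOutcome::AlreadyExists
    } else {
        registry.volumes.insert(id.to_string());
        CreateOutcome::Created
    }
}
```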

## Handling Failures

There are various ways in which the control plane could fail to satisfy a `REST` request:

- Control plane dies in the middle of an operation.
- Control plane fails to update the persistent store.
- A gRPC request to Mayastor fails to complete successfully. \
<br>

Regardless of the type of failure, the control plane has to decide what to do:

1. Fail the operation back to the caller but leave any created resources alone.

2. Fail the operation back to the caller but destroy any created resources.

3. Act like Kubernetes and keep retrying in the hope that it will eventually succeed. \
<br>

Approach 3 is discounted. If we never responded to the caller it would eventually time out and probably retry itself.
This would likely present even more issues/complexity in the control plane.

So the decision becomes, should we destroy resources that have already been created as part of the operation? \
<br>

### Keep Created Resources

Preventing the control plane from having to unwind operations is convenient as it keeps the implementation simple. A
separate asynchronous process could then periodically scan for unused resources and destroy them.

There is a potential issue with the approach described above. If an operation fails, it would be reasonable to assume
that the user would retry it. Is it possible for this subsequent request to fail as a result of the existing unused
resources lingering (i.e. because they have not yet been destroyed)? If so, this would hamper any retry logic
implemented in the upper layers.

### Destroy Created Resources

This is the optimal approach. For any given operation, failure results in newly created resources being destroyed. The
responsibility lies with the control plane to track which resources have been created and to destroy them in the event
of a failure.

However, what happens if destruction of a resource fails? It is possible for the control plane to retry the operation
but at some point it will have to give up. In effect the control plane will do its best, but it cannot provide any
guarantees. So does this mean that these resources are permanently leaked? Not necessarily. Like in
the [Keep Created Resources](#keep-created-resources) section, there could be a separate process which destroys unused
resources. \
<br>
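
The following is a hedged sketch of this strategy, assuming illustrative names (`create_replica`, `destroy_replica`)
rather than the real control-plane API: every resource created so far is unwound when a later step fails, with any
destroy failures left to a separate clean-up process.

```rust
struct Replica(u64);

#[derive(Debug)]
struct Error;

// Stubs standing in for the real gRPC calls to the i/o engine.
fn create_replica() -> Result<Replica, Error> {
    Ok(Replica(0))
}
fn destroy_replica(_replica: &Replica) -> Result<(), Error> {
    Ok(())
}

fn create_replicas(count: usize) -> Result<Vec<Replica>, Error> {
    let mut created = Vec::new();
    for _ in 0..count {
        match create_replica() {
            Ok(replica) => created.push(replica),
            Err(error) => {
                // Best-effort unwind: anything that fails to destroy here is
                // left for a background process to destroy later.
                for replica in &created {
                    let _ = destroy_replica(replica);
                }
                return Err(error);
            }
        }
    }
    Ok(created)
}
```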

## Use of the Persistent Store

For a control plane to be effective it must maintain information about the system it is interacting with and make
decisions accordingly. An in-memory registry is used to store such information.

Because the registry is stored in memory, it is volatile, meaning all information is lost if the service is restarted.
As a consequence, critical information must be backed up to a highly available persistent store (for more detailed
information see [persistent-store.md](./persistent-store.md)).

The types of data that need persisting broadly fall into three categories:

1. Desired state

2. Actual state

3. Control plane specific information \
<br>

### Desired State

This is the declarative specification of a resource provided by the user. As an example, the user may request a new
volume with the following requirements:

- Replica count of 3

- Size

- Preferred nodes

- Number of nexuses

Once the user has provided these constraints, the expectation is that the control plane should create a resource that
meets the specification. How the control plane achieves this is of no concern to the user.

So what happens if the control plane is unable to meet these requirements? The operation is failed. This prevents any
ambiguity. If an operation succeeds, the requirements have been met and the user has exactly what they asked for. If the
operation fails, the requirements couldn’t be met. In this case the control plane should provide an appropriate means of
diagnosing the issue (e.g. a log message).

What happens to resources created before the operation failed? This will be dependent on the chosen failure strategy
outlined in [Handling Failures](#handling-failures).

### Actual State

This is the runtime state of the system as provided by Mayastor. Whenever this changes, the control plane must reconcile
this state against the desired state to ensure that we are still meeting the user's requirements. If not, the control
plane will take action to try to rectify this.

Whenever a user makes a request for state information, it will be this state that is returned (note: if necessary, an
API may also be provided which returns the desired state). \
<br>

## Control Plane Information

This information is required to aid the control plane across restarts. It will be used to store the state of a resource
independent of the desired or actual state.

The following sequence will be followed when creating a resource:

1. Add resource specification to the store with a state of “creating”

2. Create the resource

3. Mark the state of the resource as “complete”

If the control plane then crashes mid-operation, on restart it can query the state of each resource. Any resource not in
the “complete” state can then be destroyed, as it will be a remnant of a failed operation. The expectation here is that
the user will reissue the operation if they wish to. A minimal sketch of this pattern follows.
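
This is an illustrative sketch only, with a stubbed in-memory store standing in for the real persistent store:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
enum SpecState {
    Creating,
    Complete,
    Deleting,
}

struct Store {
    specs: HashMap<String, SpecState>, // stand-in for the persistent store
}

fn create_resource(store: &mut Store, id: &str) -> Result<(), ()> {
    store.specs.insert(id.into(), SpecState::Creating); // 1. record intent first
    do_create(id)?;                                     // 2. create the resource
    store.specs.insert(id.into(), SpecState::Complete); // 3. only now mark it usable
    Ok(())
}

// On restart, anything still marked "creating" is a remnant of a failed
// operation and can be destroyed.
fn recover(store: &Store) -> Vec<&String> {
    store
        .specs
        .iter()
        .filter(|(_, state)| **state == SpecState::Creating)
        .map(|(id, _)| id)
        .collect()
}

fn do_create(_id: &str) -> Result<(), ()> {
    Ok(()) // stub for the actual resource creation
}
```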

Likewise, deleting a resource will look like:

1. Mark resources as “deleting” in the store

2. Delete the resource

3. Remove the resource from the store.

For complex operations like creating a volume, all resources that make up the volume will be marked as “creating”. Only
when all resources have been successfully created will their corresponding states be changed to “complete”. This will
look something like:

1. Add volume specification to the store with a state of “creating”

2. Add nexus specifications to the store with a state of “creating”

3. Add replica specifications to the store with a state of “creating”

4. Create replicas

5. Create nexus

6. Mark replica states as “complete”

7. Mark nexus states as “complete”

8. Mark volume state as “complete”
46 changes: 46 additions & 0 deletions doc/design/k8s/diskpool-cr.md
@@ -0,0 +1,46 @@
# DiskPool Custom Resource for K8s

The DiskPool operator is a [K8s]-specific component which manages pools in a K8s environment. \
Simplistically, it drives pools through the various states listed below.

In [K8s], mayastor pools are represented as [Custom Resources][k8s-cr], which is an extension on top of the existing [K8s API][k8s-api]. \
This allows users to declaratively create [diskpools][diskpool], and mayastor will not only eventually create the corresponding mayastor pool but will
also ensure that it gets re-imported after pod restarts, node restarts, crashes, etc.

> **NOTE**: mayastor pool (msp) has been renamed to diskpool (dsp)

## DiskPool States

> *NOTE*
> Non-exhaustive enums could have additional variants added in the future. Therefore, when matching against variants of
> non-exhaustive enums, an extra wildcard arm must be added to account for future variants (see the sketch after this
> list).

- Creating \
The pool is a new OR missing resource, and it has not been created or imported yet. The pool spec ***MAY*** be present but ***DOES NOT*** have a status field.

- Created \
The pool has been created in the designated i/o engine node by the control-plane.

- Terminating \
A deletion request has been issued by the user. The pool will eventually be deleted by the control-plane, and the DiskPool Custom Resource will then also be removed from the K8s API.

- Error (*Deprecated*) \
Trying to converge to the next state has exceeded the maximum retry counts. The retry counts are implemented using an exponential back-off, which by default is set to 10. Once the error state is entered, reconciliation stops. Only external events (a new resource version) will trigger a new attempt. \
> NOTE: this state has been deprecated since API version **v1beta1**
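
As a sketch of the note above (using an illustrative enum rather than the real CRD types), matching a non-exhaustive
state enum requires a wildcard arm:

```rust
/// Illustrative pool states; the real CRD status types differ.
#[non_exhaustive]
enum DspState {
    Creating,
    Created,
    Terminating,
    Error, // deprecated since v1beta1
}

fn describe(state: &DspState) -> &'static str {
    match state {
        DspState::Creating => "not yet created or imported",
        DspState::Created => "created on the designated i/o engine node",
        DspState::Terminating => "deletion requested; awaiting control-plane",
        DspState::Error => "deprecated terminal error state",
        // Required when matching a non-exhaustive enum from another crate:
        // a future variant must not break the match.
        _ => "unknown future state",
    }
}
```
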
## Reconciler actions

The operator responds to two types of events (a sketch of the reconciler follows this list):

- Scheduled \
When, for example, we try to submit a new PUT request for a pool. On failure (e.g., a network error) we will reschedule the operation after 5 seconds.

- CRD updates \
When the CRD is changed, the resource version is changed. This will trigger a new reconcile loop. This process is typically known as “watching.”

Additionally, during state transitions the operator emits events to K8s, which can be obtained via kubectl. This gives observability into the states and their transitions.
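
A hand-rolled sketch of the reconcile decision described above (the real operator is built on a controller runtime;
the names here are illustrative):

```rust
use std::time::Duration;

#[derive(Debug)]
struct NetworkError;

/// What the reconciler decides to do next.
enum Action {
    /// Try again after the given delay (scheduled event).
    Requeue(Duration),
    /// Nothing to do until the resource version changes (CRD update event).
    AwaitChange,
}

// Stub for the PUT request which drives the pool towards its next state.
fn submit_pool_put() -> Result<(), NetworkError> {
    Ok(())
}

fn reconcile() -> Action {
    match submit_pool_put() {
        // Converged: wait for the next resource-version change ("watching").
        Ok(()) => Action::AwaitChange,
        // Failure (e.g. network): reschedule the operation after 5 seconds.
        Err(error) => {
            eprintln!("reconcile failed ({error:?}), requeueing in 5s");
            Action::Requeue(Duration::from_secs(5))
        }
    }
}
```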

[K8s]: https://kubernetes.io/
[k8s-cr]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[k8s-api]: https://kubernetes.io/docs/concepts/overview/kubernetes-api/
[diskpool]: https://openebs.io/docs/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration
187 changes: 187 additions & 0 deletions doc/design/k8s/kubectl-plugin.md
@@ -0,0 +1,187 @@
# Kubectl Plugin

## Overview

The kubectl-mayastor plugin follows the instructions outlined in
the [K8s] [official documentation](https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/).

The name of the plugin binary dictates how it is used. From the documentation:
> For example, a plugin named `kubectl-foo` provides a command `kubectl foo`.

In our case the name of the binary is specified in the Cargo.toml file as `kubectl-mayastor`, therefore the command is
`kubectl mayastor`.

This document captures the workflows and interactions of the plugin with the mayastor control-plane and [K8s]. It
provides a high-level view of how the plugin operates in general, which features it currently supports, and how those
features map to the APIs.

This is the general flow of the request to generate an output from the plugin:

1. The flow starts with the CLI command, entered at the console.

2. The respective command hits the specific API endpoint dedicated to that purpose.

3. The API request is then forwarded to the Core Agent of the Control Plane.

4. The Core Agent is responsible for the further propagation of the request based on its HTTP method and purpose.

5. A GET request does not bring about any change in spec or state; it fetches the needed information from the registry
   and returns it as a response to the request.

6. A PUT request brings about a change in the spec, and thus a synchronous action is performed by mayastor
   (communicated via gRPC calls), changing both state and spec; the updated spec and state are returned as a response
   after the necessary updates to the registry and persistent store.

> ***NOTE***: A command might have targets other than the Core Agent, and it might not even be sent to the
> control-plane; for example, it could be sent to a K8s endpoint.

For a list of commands you can refer to the
docs [here](https://github.com/openebs/mayastor-extensions/blob/HEAD/k8s/plugin/README.md#usage).

## Command Line Interface

Some goals for the kubectl-mayastor plugin are:

- Provide an intuitive and user-friendly CLI for Mayastor.
- Function in similar ways to existing Kubernetes CLI tools.
- Support common Mayastor operations.

> **NOTE**: There are many principles for a good CLI. An interesting set of guidelines can be
> seen [here](https://clig.dev/) for example.

All the plugin commands are verb based, providing the user with a similar experience to
the official [kubectl](https://kubernetes.io/docs/reference/kubectl/#operations).

All the plugin commands and their arguments are defined using a very powerful CLI library: [clap]. Among many others, it
allows us to:

- define every command and its arguments in a type-safe way
- add default values for any argument
- add custom long and short (single-letter) argument names
- parse any argument with a powerful value parser
- add custom or well-defined possible values for an argument
- define conflicts between arguments
- define requirements between arguments
- flatten arguments for code encapsulation
- many more!

Each command can output in either tabled, JSON or YAML format.
The tabled format is mainly useful for human usage, while the others allow for integration with tools (e.g., jq, yq)
which can capture, parse and filter; a minimal sketch of such command definitions is shown below.
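
This sketch uses [clap]'s derive API and is heavily simplified; the commands and arguments shown are illustrative, not
the plugin's real set:

```rust
use clap::{Parser, Subcommand, ValueEnum};

#[derive(Clone, ValueEnum)]
enum OutputFormat {
    Tabled,
    Json,
    Yaml,
}

#[derive(Parser)]
#[command(name = "kubectl-mayastor")]
struct Cli {
    /// Output format (tabled for humans, json/yaml for tools).
    #[arg(long, short = 'o', value_enum, default_value = "tabled")]
    output: OutputFormat,
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Get mayastor resources, e.g. `kubectl mayastor get volumes`.
    Get { resource: String },
    /// Scale a volume to a new replica count.
    Scale { volume: String, replicas: u8 },
}

fn main() {
    let cli = Cli::parse();
    // Dispatch on the verb; rendering honours `cli.output`.
    match cli.command {
        Command::Get { resource } => println!("get {resource}"),
        Command::Scale { volume, replicas } => println!("scale {volume} to {replicas}"),
    }
}
```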

Each command (and sub-commands) accepts the `--help | -h` argument, which documents the operation and the supported
arguments.

> **NOTE**: Not all commands and their arguments are as well documented as we'd wish, and any help improving this would
> be very welcome! \
> We can also consider auto-generating the CLI documentation as markdown.

## Connection to the K8s Cluster

Exactly like the K8s kubectl, the kubectl-mayastor plugin runs on the user's system, whereas mayastor runs in the K8s cluster.
A mechanism is therefore required to bridge this gap and allow the plugin to talk to the mayastor services running in the cluster.

The plugin currently supports two distinct modes:

1. Kube ApiServer Proxy
2. Port Forwarding

### Kube ApiServer Proxy

It's built into the K8s apiserver and allows a user outside of the cluster to connect via the apiserver to a ClusterIP which would otherwise
be unreachable.
It proxies using HTTPS and is capable of load balancing across service endpoints.

```mermaid
graph LR
subgraph Control Plane
APIServer["Api Server"]
end
subgraph Worker Nodes
Pod_1["pod"]
Pod_2["pod"]
Pod_3["pod"]
SLB["Service<br>LB"]
end
subgraph Internet
InternetIco(<img src='https://icons.terrastruct.com/azure%2FCompute%20Service%20Color%2FCloud%20Services%20%28Classic%29.svg' />)
end
subgraph Users
User(<img src='https://icons.terrastruct.com/essentials/005-programmer.svg' width='32' />)
end
User ==> |"kubectl"| APIServer
User -.- |proxied| Pod_1
APIServer -.-> |"kubectl"| Pod_1
Internet --> SLB
SLB --> Pod_1
SLB --> Pod_2
SLB --> Pod_3
```

Above we highlight the difference between this approach and a load-balancer service which exposes the IP externally.
You can try this out yourself with the [kubectl plugin][kubectl-proxy].

### Port Forwarding

K8s provides [Port Forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access
applications in a cluster.
This works by forwarding local ports to the cluster.
You can try this out yourself with the [kubectl plugin][kubectl-port-forward].

> *NOTE*: kubectl port-forward is currently implemented for TCP ports only.

<br>

## Distribution

We distribute the plugin in similar ways to what's recommended by the kubectl plugin docs:

1. [Krew] \
[Krew] offers a cross-platform way to package and distribute your plugins. This way, you use a single packaging format
for all target platforms (Linux, Windows, macOS, etc.) and deliver updates to your users. \
Krew also maintains a plugin index so that other people can discover your plugin and install it.
2. "Naked" binary packaged in a tarball \
This is available as a [GitHub] release asset for the specific version: \
`vX.Y.Z: https://github.com/openebs/mayastor/releases/download/v$X.$Y.$Z/kubectl-mayastor-$platform.tar.gz` \
For example, the x86_64 plugin for v2.7.3 can be
retrieved [here](https://github.com/openebs/mayastor/releases/download/v2.7.3/kubectl-mayastor-x86_64-linux-musl.tar.gz).
3. Source code \
You can download the source code for the released version and build it yourself. \
You can check the build docs for reference [here](../../build-all.md#building).

## Supported Platforms

Although the mayastor installation is only officially supported for Linux x86_64 at the time of writing, the plugin
actually supports a wider range of platforms. \
This is because although most production K8s clusters run Linux x86_64, users and admins may interact with the
clusters from a wider range of platforms.

- [x] Linux
- [x] x86_64
- [x] aarch64
- [x] macOS
- [x] x86_64
- [x] aarch64
- [ ] Windows
- [x] x86_64
- [ ] aarch64

[K8s]: https://kubernetes.io/

[clap]: https://docs.rs/clap/latest/clap/

[GitHub]: https://github.com/openebs/mayastor

[Krew]: https://krew.sigs.k8s.io/

[kube-proxy]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/#synopsis

[kubectl-proxy]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy

[kubectl-port-forward]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
6 changes: 3 additions & 3 deletions doc/lvm.md → doc/design/lvm.md
@@ -98,9 +98,9 @@ graph TD;
end
subgraph Physical Volumes
PV_1 --> VG_1["Volume Group - VG 1"]
PV_2 --> VG_1
PV_3 --> VG_2["Volume Group - VG 2"]
PV_1["PV 1"] --> VG_1["Vol Group 1"]
PV_2["PV 2"] --> VG_1
PV_3["PV 3"] --> VG_2["Vol Group 2"]
end
subgraph Node1
366 changes: 366 additions & 0 deletions doc/design/mayastor.md

Large diffs are not rendered by default.

63 changes: 63 additions & 0 deletions doc/design/persistent-store.md
@@ -0,0 +1,63 @@
# Persistent Configuration Storage for the control-plane

The Mayastor Control Plane requires a persistent store for storing information about things like nodes, pools, volumes, etc.

A key-value store has been selected as the appropriate type of store. More specifically, [etcd] will be used.

<br>

## etcd

<br>

etcd is widely used and is a fundamental component of Kubernetes itself. As such, it has been “battle hardened” in production, making it a reasonable first choice for storing configuration.

<br>

> NOTE from their own documentation:
>
> etcd is designed to reliably store infrequently updated data…

<br>

This limitation is acceptable for the control plane as, by design, we shouldn’t be storing information at anywhere near the limits of etcd, given we
want to use this store for configuration and not the volume data itself.

Given all of the above, if there is a justifiable reason for moving away from etcd, the implementation should make this switch simple.
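
One way to keep that switch simple is to hide the store behind a small trait; this is a sketch under that assumption,
not the control-plane's actual interface:

```rust
/// Abstraction over the key-value store, so that etcd can be swapped out
/// without touching the callers.
pub trait Store {
    fn put(&mut self, key: &str, value: Vec<u8>) -> Result<(), StoreError>;
    fn get(&self, key: &str) -> Result<Option<Vec<u8>>, StoreError>;
    fn delete(&mut self, key: &str) -> Result<(), StoreError>;
}

#[derive(Debug)]
pub struct StoreError;

/// An in-memory implementation can stand in for etcd during tests.
pub struct InMemoryStore(std::collections::HashMap<String, Vec<u8>>);

impl Store for InMemoryStore {
    fn put(&mut self, key: &str, value: Vec<u8>) -> Result<(), StoreError> {
        self.0.insert(key.to_string(), value);
        Ok(())
    }
    fn get(&self, key: &str) -> Result<Option<Vec<u8>>, StoreError> {
        Ok(self.0.get(key).cloned())
    }
    fn delete(&mut self, key: &str) -> Result<(), StoreError> {
        self.0.remove(key);
        Ok(())
    }
}
```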

<br>

## Persistent Information

<br>

There are two categories of information that the control plane wishes to store:

1. System state
- Volume states
- Node states
- Pool states

2. Per volume policies
- Replica replacement policy
- Nexus replacement policy

<br>

### System State

The control plane requires visibility of the state of the system in order to make autonomous decisions. \
For example, should a volume transition from a healthy state to a degraded state, the control plane could inspect the state of its children and
optionally (based on the policy) replace any that are unhealthy.

Additionally, this state information would be useful for implementing an early warning system. If any resource (volume, node, pool) changed state,
any etcd watchers would be notified. \
We could then potentially have a service which watches for state changes and notifies the upper layers (i.e. operators) that an error has occurred.
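
A hedged sketch of such a watcher using the `etcd-client` crate; the endpoint and key prefix are assumptions, not the
control-plane's actual key space:

```rust
use etcd_client::{Client, WatchOptions};

#[tokio::main]
async fn main() -> Result<(), etcd_client::Error> {
    let mut client = Client::connect(["localhost:2379"], None).await?;
    // Watch everything under an illustrative state prefix.
    let options = WatchOptions::new().with_prefix();
    let (_watcher, mut stream) = client.watch("/mayastor/state/", Some(options)).await?;
    while let Some(response) = stream.message().await? {
        for event in response.events() {
            // Here the upper layers (e.g. operators) would be notified.
            println!("state change: {:?}", event.event_type());
        }
    }
    Ok(())
}
```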

### Per Volume Policies

When creating a volume, the REST API allows a set of nodes to be supplied which denotes the placement of nexuses/replicas. This information is placed in the persistent store and is used as the basis for the replacement policy.

Should a volume become degraded, the control plane can look up the unhealthy replica, the nodes that replicas are allowed to be placed on (the policy) and can replace the unhealthy replica with a new one.

[etcd]: https://etcd.io/docs
30 changes: 30 additions & 0 deletions doc/design/public-api.md
@@ -0,0 +1,30 @@
# Mayastor Public API

Mayastor exposes a public API from its [REST] service.
This is a [RESTful][REST] API which can be leveraged by clients external to mayastor (e.g., users or 3rd-party tools) as
well as by mayastor components which are part of the control-plane.

## OpenAPI

The mayastor public API is defined using the [OpenAPI] spec, which has many benefits:

1. Standardized: OpenAPI allows us to define an API in a standard way that is widely used in the industry.

2. Integration: As a standard, it's easy to integrate with other systems, tools, and platforms (anyone can write a
plugin for it!).

3. Automation: Auto-generate the server and client libraries, reducing manual effort and the potential for errors.

4. Documentation: Each method and type is documented which makes it easier to understand.

5. Tooling: There's an abundance of tools and libraries which support the OpenAPI spec, making it easier to develop,
test, and deploy.

The spec is
available [here](https://raw.githubusercontent.com/openebs/mayastor-control-plane/HEAD/control-plane/rest/openapi-specs/v0_api_spec.yaml),
and you can interact with it using one of the many ready-made
tools [here](https://editor.swagger.io/?url=https://raw.githubusercontent.com/openebs/mayastor-control-plane/HEAD/control-plane/rest/openapi-specs/v0_api_spec.yaml).
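
As a quick illustration, the API can also be called directly over HTTP. This is a hedged sketch; the host, port and
path are assumptions based on the v0 spec linked above:

```rust
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // List volumes via the public REST API (endpoint assumed, not verified).
    let volumes: serde_json::Value = reqwest::Client::new()
        .get("http://localhost:8081/v0/volumes")
        .send()
        .await?
        .json()
        .await?;
    println!("{volumes:#}");
    Ok(())
}
```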

[OpenAPI]: https://www.openapis.org/what-is-openapi

[REST]: https://en.wikipedia.org/wiki/REST
115 changes: 115 additions & 0 deletions doc/design/rest-authentication.md
@@ -0,0 +1,115 @@
# REST Authentication

## References

- https://auth0.com/blog/build-an-api-in-rust-with-jwt-authentication-using-actix-web/
- https://jwt.io/
- https://russelldavies.github.io/jwk-creator/
- https://blog.logrocket.com/how-to-secure-a-rest-api-using-jwt-7efd83e71432/
- https://blog.logrocket.com/jwt-authentication-in-rust/

## Overview

The [REST API][REST] provides a means of controlling Mayastor. It allows the consumer of the API to perform operations
such as creation and deletion of pools, replicas, nexuses and volumes.

It is important to secure the [REST] API to prevent access by unauthorised personnel. This is achieved through the use
of [JSON Web Tokens (JWT)][JWT], which are sent with every [REST] request.

Upon receipt of a request the [REST] server extracts the [JWT] and verifies its authenticity. If authentic, the request
is allowed to proceed; otherwise, the request is failed with an [HTTP] `401` Unauthorized error.

## JSON Web Token (JWT)

Definition taken from here:

> JSON Web Token ([JWT]) is an open standard ([RFC 7519][JWT]) that defines a compact and self-contained way for
> securely transmitting information between parties as a JSON object. \
> This information can be verified and trusted because it is digitally signed. \
> [JWT]s can be signed using a secret (with the [HMAC] algorithm) or a public/private key pair using [RSA] or
> [ECDSA].

The [REST] server expects the [JWT] to be signed with a private key and for the public key to be accessible as
a [JSON Web Key (JWK)][JWK].

The JWK is used to authenticate the [JWT] by checking that it was indeed signed by the corresponding private key.

The [JWT] comprises three parts, each separated by a full stop:

`<header>.<payload>.<signature>`

Each of the above parts is a [Base64-URL]-encoded string.

## JSON Web Key (JWK)

Definition taken from here:

> A [JSON] Web Key ([JWK]) is a JavaScript Object Notation ([JSON - RFC 7159][JSON]) data structure that represents a
> cryptographic key.

An example of the [JWK] structure is shown below:

```json
{
"kty": "RSA",
"n": "tTtUE2YgN2te7Hd29BZxeGjmagg0Ch9zvDIlHRjl7Y6Y9Gankign24dOXFC0t_3XzylySG0w56YkAgZPbu-7NRUbjE8ev5gFEBVfHgXmPvFKwPSkCtZG94Kx-lK_BZ4oOieLSoqSSsCdm6Mr5q57odkWghnXXohmRgKVgrg2OS1fUcw5l2AYljierf2vsFDGU6DU1PqeKiDrflsu8CFxDBAkVdUJCZH5BJcUMhjK41FCyYImtEb13eXRIr46rwxOGjwj6Szthd-sZIDDP_VVBJ3bGNk80buaWYQnojtllseNBg9pGCTBtYHB-kd-NNm2rwPWQLjmcY1ym9LtJmrQCXvA4EUgsG7qBNj1dl2NHcG03eEoJBejQ5xwTNgQZ6311lXuKByP5gkiLctCtwn1wGTJpjbLKo8xReNdKgFqrIOT1mC76oZpT3AsWlVH60H4aVTthuYEBCJgBQh5Bh6y44ANGcybj-q7sOOtuWi96sXNOCLczEbqKYpeuckYp1LP",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}
```

The meanings of these keys (as defined in [RFC 7517][JWK]) are:

| Key Name | Meaning            | Purpose                                                                                                                                                                                   |
|:---------|:------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| kty      | Key Type           | Denotes the cryptographic algorithm family used                                                                                                                                           |
| n        | Modulus            | The modulus used by the public key                                                                                                                                                        |
| e        | Exponent           | The exponent used by the public key                                                                                                                                                       |
| alg      | The algorithm used | This corresponds to the algorithm used to sign/encrypt the [JWT]                                                                                                                          |
| use      | Public Key Use     | Can take one of two values: `sig` or `enc`. `sig` indicates the public key should be used only for signature verification, whereas `enc` denotes that it is used for encrypting the data |

<br>

## REST Server Authentication

### Prerequisites

1. The [JWT] is included in the [HTTP] Authorization Request Header
2. The [JWK], used for signature verification, is accessible

### Process

The [REST] server makes use of the [jsonwebtoken] crate to perform [JWT] authentication.

Upon receipt of a [REST] request the [JWT] is extracted from the header and split into two parts:

1. message (comprising the header and payload)
2. signature

These are passed to the [jsonwebtoken] crate along with the decoding key and algorithm extracted from the [JWK].

If authentication succeeds the [REST] request is permitted to continue. If authentication fails, the [REST] request is
rejected with an [HTTP] `401` Unauthorized error.
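
A hedged sketch of that verification step with the [jsonwebtoken] crate; the claim names and inputs are illustrative:

```rust
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

fn authenticate(
    token: &str,
    jwk_modulus: &str,  // the "n" field of the JWK
    jwk_exponent: &str, // the "e" field of the JWK
) -> Result<Claims, jsonwebtoken::errors::Error> {
    // Build the decoding key from the JWK's RSA components.
    let key = DecodingKey::from_rsa_components(jwk_modulus, jwk_exponent)?;
    // RS256 matches the "alg" field of the example JWK above.
    let data = decode::<Claims>(token, &key, &Validation::new(Algorithm::RS256))?;
    Ok(data.claims)
}
```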

[REST]: https://en.wikipedia.org/wiki/REST

[JWT]: https://datatracker.ietf.org/doc/html/rfc7519

[JWK]: https://datatracker.ietf.org/doc/html/rfc7517

[HTTP]: https://developer.mozilla.org/en-US/docs/Web/HTTP

[Base64-URL]: https://base64.guru/standards/base64url

[HMAC]: https://datatracker.ietf.org/doc/html/rfc2104

[RSA]: https://en.wikipedia.org/wiki/RSA_(cryptosystem)

[ECDSA]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm

[JSON]: https://datatracker.ietf.org/doc/html/rfc7159

[jsonwebtoken]: https://github.com/Keats/jsonwebtoken
Binary file added doc/img/4kVS2m-tlb-misses.png
