diff --git a/docs/charon/charon-cli-reference.md b/docs/charon/charon-cli-reference.md
index 89497a3aae..4407874e32 100644
--- a/docs/charon/charon-cli-reference.md
+++ b/docs/charon/charon-cli-reference.md
@@ -11,7 +11,7 @@ The `charon` client is under heavy development, interfaces are subject to change
:::
-The following is a reference for charon version [`v0.17.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.17.1). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+The following is a reference for charon version [`v0.18.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.18.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
The following are the top-level commands available to use.
diff --git a/docs/int/quickstart/activate-dv.md b/docs/int/quickstart/activate-dv.md
index e4b778f254..e7015532af 100644
--- a/docs/int/quickstart/activate-dv.md
+++ b/docs/int/quickstart/activate-dv.md
@@ -8,7 +8,7 @@ import TabItem from '@theme/TabItem';
# Activate a DV
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
diff --git a/docs/int/quickstart/advanced/quickstart-builder-api.md b/docs/int/quickstart/advanced/quickstart-builder-api.md
index 565b5cda43..5f6656fc09 100644
--- a/docs/int/quickstart/advanced/quickstart-builder-api.md
+++ b/docs/int/quickstart/advanced/quickstart-builder-api.md
@@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem';
# Run a cluster with MEV enabled
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
diff --git a/docs/int/quickstart/advanced/quickstart-combine.md b/docs/int/quickstart/advanced/quickstart-combine.md
index 83a5a1f99c..5dd6395288 100644
--- a/docs/int/quickstart/advanced/quickstart-combine.md
+++ b/docs/int/quickstart/advanced/quickstart-combine.md
@@ -82,7 +82,7 @@ Run the following command:
```sh
# Combine a cluster's private keys
-docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 combine --cluster-dir /opt/charon/validators-to-be-combined
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 combine --cluster-dir /opt/charon/validators-to-be-combined
```
This command will create one subdirectory for each validator private key that has been combined, named after its public key.
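The resulting layout can be sketched as follows (a minimal sketch; the public-key directory name is illustrative):

```text
.
├── validators-to-be-combined/   # input: each node's key share directories
└── 0x87f...a6b/                 # output: one directory per combined validator,
                                 # named after its public key (illustrative)
```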
diff --git a/docs/int/quickstart/advanced/quickstart-sdk.md b/docs/int/quickstart/advanced/quickstart-sdk.md
index 08442261e0..a658d9937b 100644
--- a/docs/int/quickstart/advanced/quickstart-sdk.md
+++ b/docs/int/quickstart/advanced/quickstart-sdk.md
@@ -9,8 +9,7 @@ import TabItem from '@theme/TabItem';
# Create a DV using the SDK
:::caution
-
-The Obol-SDK is in an alpha state and should be used with caution., particularly on mainnet.
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
:::
This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../../../dvl/intro.md).
diff --git a/docs/int/quickstart/advanced/quickstart-split.md b/docs/int/quickstart/advanced/quickstart-split.md
index e469c978ce..44638ec22f 100644
--- a/docs/int/quickstart/advanced/quickstart-split.md
+++ b/docs/int/quickstart/advanced/quickstart-split.md
@@ -6,7 +6,7 @@ description: Split existing validator keys
# Split existing validator private keys
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
This process should only be used if you want to split an *existing validator private key* into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
@@ -56,7 +56,7 @@ At the end of this process, you should have a tree like this:
Run the following docker command to split the keys:
```shell
-CHARON_VERSION= # E.g. v0.17.1
+CHARON_VERSION= # E.g. v0.18.0
CLUSTER_NAME= # The name of the cluster you want to create.
WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
diff --git a/docs/int/quickstart/alone/create-keys.md b/docs/int/quickstart/alone/create-keys.md
index ee5cf664ae..aee5550cc6 100644
--- a/docs/int/quickstart/alone/create-keys.md
+++ b/docs/int/quickstart/alone/create-keys.md
@@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem';
# Create the private key shares
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
:::info
@@ -42,7 +42,7 @@ Alternatively, the private key shares can be created in a lower-trust manner wit
Then, run this command to create all the key shares and cluster artifacts locally:
diff --git a/docs/int/quickstart/alone/test-locally.md b/docs/int/quickstart/alone/test-locally.md
index 08b1ba5af0..f25eebecaa 100644
--- a/docs/int/quickstart/alone/test-locally.md
+++ b/docs/int/quickstart/alone/test-locally.md
@@ -60,7 +60,7 @@ The default cluster consists of:
FEE_RECIPIENT_ADDR=
# Create a distributed validator cluster
- docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
```
These commands will create six folders within `.charon/cluster`, one for each node created. You will need to rename `node*` to `.charon` for each folder to be found by the default `charon run` command, or you can use `charon run --private-key-file=".charon/cluster/node0/charon-enr-private-key" --lock-file=".charon/cluster/node0/cluster-lock.json"` for each instance of charon you start.
diff --git a/docs/int/quickstart/group/index.md b/docs/int/quickstart/group/index.md
index 96ee831c51..6eafcd77d7 100644
--- a/docs/int/quickstart/group/index.md
+++ b/docs/int/quickstart/group/index.md
@@ -1,7 +1,7 @@
# Run a cluster as a group
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
:::info
diff --git a/docs/int/quickstart/group/quickstart-cli.md b/docs/int/quickstart/group/quickstart-cli.md
index b329f09287..06e70e6ec1 100644
--- a/docs/int/quickstart/group/quickstart-cli.md
+++ b/docs/int/quickstart/group/quickstart-cli.md
@@ -6,7 +6,7 @@ description: Run one node in a multi-operator distributed validator cluster usin
# Using the CLI
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster via the CLI.
@@ -32,7 +32,7 @@ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
cd charon-distributed-validator-node
# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
-docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
```
You should expect to see a console output like
@@ -59,7 +59,7 @@ Finally, share your ENR with the leader or creator so that he/she can proceed to
3. Run the `charon create dkg` command that generates DKG cluster-definition.json file.
```
- docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.17.1 create dkg
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.18.0 create dkg
```
This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in a cluster.
@@ -72,7 +72,7 @@ Every cluster member then participates in the DKG ceremony. For Charon v1, this
```
# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
-docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 dkg
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 dkg
```
>This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ&ab_channel=ObolNetwork).
diff --git a/docs/int/quickstart/group/quickstart-group-leader-creator.md b/docs/int/quickstart/group/quickstart-group-leader-creator.md
index d39af8c2ba..0909b44f75 100644
--- a/docs/int/quickstart/group/quickstart-group-leader-creator.md
+++ b/docs/int/quickstart/group/quickstart-group-leader-creator.md
@@ -8,7 +8,7 @@ import TabItem from '@theme/TabItem';
# Creator & Leader Journey
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
The following instructions aim to assist with the preparation of a distributed validator key generation ceremony. Select the *Leader* tab if you **will** be an operator participating in the cluster, and select the *Creator* tab if you **will NOT** be an operator in the cluster.
@@ -52,7 +52,7 @@ Before starting the cluster creation, you will need to collect one Ethereum addr
cd charon-distributed-validator-node
# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
- docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
```
You should expect to see a console output like
diff --git a/docs/int/quickstart/group/quickstart-group-operator.md b/docs/int/quickstart/group/quickstart-group-operator.md
index bf0a135df0..771dbdb035 100644
--- a/docs/int/quickstart/group/quickstart-group-operator.md
+++ b/docs/int/quickstart/group/quickstart-group-operator.md
@@ -6,7 +6,7 @@ description: A node operator joins a DV cluster
# Operator Journey
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster after receiving a cluster invite link from a leader or creator.
@@ -35,7 +35,7 @@ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
cd charon-distributed-validator-node
# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
-docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
```
You should expect to see a console output like
diff --git a/docs/int/quickstart/index.md b/docs/int/quickstart/index.md
index 5d4cb15900..dc6e712533 100644
--- a/docs/int/quickstart/index.md
+++ b/docs/int/quickstart/index.md
@@ -1,7 +1,7 @@
# Quickstart Guides
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
There are two ways to set up a distributed validator and each comes with its own quickstart
diff --git a/docs/int/quickstart/quickstart-exit.md b/docs/int/quickstart/quickstart-exit.md
index 875b844b8f..704b992e3c 100644
--- a/docs/int/quickstart/quickstart-exit.md
+++ b/docs/int/quickstart/quickstart-exit.md
@@ -9,7 +9,7 @@ import TabItem from '@theme/TabItem';
# Exit a DV
:::caution
-Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
:::
Users looking to exit staking entirely and withdraw their full balance back must also sign and broadcast a "voluntary exit" message with validator keys which will start the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
diff --git a/docs/int/quickstart/quickstart-mainnet.md b/docs/int/quickstart/quickstart-mainnet.md
index 3afd3632bd..fa23b47591 100644
--- a/docs/int/quickstart/quickstart-mainnet.md
+++ b/docs/int/quickstart/quickstart-mainnet.md
@@ -6,7 +6,7 @@ description: Run a cluster on mainnet
# Run a DV on mainnet
:::caution
-Charon is in an alpha state, and you should proceed only if you accept the risk, the [terms of use](https://obol.tech/terms.pdf), and have tested running a Distributed Validator on a testnet first.
+Charon is in a beta state, and you should proceed only if you accept the risk, the [terms of use](https://obol.tech/terms.pdf), and have tested running a Distributed Validator on a testnet first.
Distributed Validators created for goerli cannot be used on mainnet and vice versa, please take caution when creating, backing up, and activating mainnet validators.
:::
diff --git a/src/components/HomepageFeatures.tsx b/src/components/HomepageFeatures.tsx
index d9f082d817..76f6c5905a 100644
--- a/src/components/HomepageFeatures.tsx
+++ b/src/components/HomepageFeatures.tsx
@@ -49,11 +49,11 @@ const FeatureList: FeatureItem[] = [
alt: "Image courtesy of the Noun Project",
description: (
<>
- Obol Managers are
- smart contracts for the coordination of Distributed Validators.
+ Obol Splits are
+ smart contracts for the distribution of rewards from Distributed Validators.
</>
),
- link: "/docs/sc/introducing-obol-managers",
+ link: "/docs/sc/introducing-obol-splits",
},
{
title: "Join the Upcoming Testnets",
diff --git a/versioned_docs/version-v0.18.0/cg/_category_.json b/versioned_docs/version-v0.18.0/cg/_category_.json
new file mode 100644
index 0000000000..6658367ce5
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/cg/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Contribution & Feedback",
+ "position": 10,
+ "collapsed": true
+}
diff --git a/versioned_docs/version-v0.18.0/cg/bug-report.md b/versioned_docs/version-v0.18.0/cg/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. To make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing, to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been reported.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensible language, while also providing all relevant information at hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh: _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R. Tolkien books:
+ 1. The Hobbit.
+ 2. The Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```markdown
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/versioned_docs/version-v0.18.0/cg/feedback.md b/versioned_docs/version-v0.18.0/cg/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/cg/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, whether you succeeded or failed at running a distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/charon/_category_.json b/versioned_docs/version-v0.18.0/charon/_category_.json
new file mode 100644
index 0000000000..5ed247b0e5
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Charon",
+ "position": 3,
+ "collapsed": false
+}
diff --git a/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md b/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..4407874e32
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md
@@ -0,0 +1,382 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::caution
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.18.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.18.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
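+
+Like the quickstarts, these commands can be run via the charon docker image; for example (a sketch following the docker pattern used elsewhere in these docs):
+
+```shell
+# Print the charon version and exit, using the official docker image
+docker run --rm obolnetwork/charon:v0.18.0 version
+```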
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
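+
+For example, using the docker invocation from the quickstart guides:
+
+```shell
+# Create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```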
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
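+
+As a concrete example, the solo-operator quickstart invokes this command via docker as follows (the address variables are placeholders you must set):
+
+```shell
+WITHDRAWAL_ADDR=    # Ethereum address to receive the returned stake
+FEE_RECIPIENT_ADDR= # Ethereum address to receive fee payments
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+```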
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file used by the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
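+
+For example, once all operator ENRs are collected, the leader or creator might run the following (the ENR values are placeholders; supply one per operator):
+
+```shell
+# Write a cluster-definition.json to the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create dkg --name="mycluster" --network=goerli --num-validators=1 --operator-enrs="enr:...,enr:...,enr:...,enr:..."
+```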
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
+
+## The `combine` subcommand
+
+### Combine distributed validator keyshares into a single Validator key
+
+The `combine` command combines many validator keyshares into a single Ethereum validator key.
+
+To run this command, one needs all the node operator's `.charon` directories, which need to be organized in the following way:
+
+```shell
+validators-to-be-combined/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be placed in a shared parent directory and renamed (here to `node0`, `node1`, and so on).
+
+Note that all validator keys are required for the successful execution of this command.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The name chosen for each node directory doesn't matter, as long as it's different from `.charon`.
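As a quick sanity check of this layout, a small script (illustrative only, not part of charon) can verify that every expected keystore file is present before running `combine`:

```python
import os
import tempfile

def check_validator_keys(path: str, num_validators: int) -> None:
    """Raise if the validator_keys dir at `path` is missing any
    keystore-<i>.json/.txt for indices 0..num_validators-1
    (illustrative helper, not part of charon)."""
    for i in range(num_validators):
        for ext in ("json", "txt"):
            f = os.path.join(path, f"keystore-{i}.{ext}")
            if not os.path.isfile(f):
                raise FileNotFoundError(f"missing {f}")

# Example: a lock file with 2 validators needs exactly 4 files per node.
with tempfile.TemporaryDirectory() as d:
    for i in range(2):
        for ext in ("json", "txt"):
            open(os.path.join(d, f"keystore-{i}.{ext}"), "w").close()
    check_validator_keys(d, 2)  # all 4 files present, no exception
```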
+
+At the end of the process `combine` will create a new set of directories containing one validator key each, named after its public key:
+
+```shell
+validators-to-be-combined/
+├── 0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── 0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106 # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows charon clusters to perform peer discovery, and enables charon clients behind NAT gateways to be reached. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
diff --git a/versioned_docs/version-v0.18.0/charon/cluster-configuration.md b/versioned_docs/version-v0.18.0/charon/cluster-configuration.md
new file mode 100644
index 0000000000..aab4104033
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::caution
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
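As a rough illustration of how a `config_hash` pins down the static fields of a definition, the sketch below hashes a canonical JSON encoding of a few fields. Note this is **not** charon's actual scheme (charon hashes SSZ-style serialised fields); the field list and encoding here are assumptions for demonstration only:

```python
import hashlib
import json

STATIC_FIELDS = ("name", "uuid", "version", "timestamp",
                 "num_validators", "threshold", "fork_version")

def illustrative_config_hash(definition: dict) -> str:
    """Hash the static fields of a definition (demo only; charon's real
    config_hash uses a different, SSZ-based serialisation)."""
    static = {k: definition[k] for k in STATIC_FIELDS if k in definition}
    blob = json.dumps(static, sort_keys=True, separators=(",", ":")).encode()
    return "0x" + hashlib.sha256(blob).hexdigest()

a = {"name": "best cluster", "num_validators": 2, "threshold": 3}
b = {"name": "best cluster", "num_validators": 2, "threshold": 3}
assert illustrative_config_hash(a) == illustrative_config_hash(b)   # deterministic
assert illustrative_config_hash({**a, "threshold": 4}) != illustrative_config_hash(a)
```

The point is only that any change to a static field changes the hash, so peers comparing hashes cannot silently diverge on configuration.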
+
+### Using the DV Launchpad
+
+- A [`leader/creator`](docs/int/quickstart/group/index.md) who wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster"
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+  "cluster_definition": {...}, // Cluster definition JSON, identical schema to above
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
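A consumer of a lock file can cheaply sanity-check its structural invariants (lengths matching the definition) before any cryptographic verification. A minimal sketch, assuming the schema above:

```python
def check_lock_shape(lock: dict) -> None:
    """Assert structural invariants of a cluster lock (illustrative only;
    charon's --no-verify flag controls its real, cryptographic verification)."""
    definition = lock["cluster_definition"]
    dvs = lock["distributed_validators"]
    # One entry per validator defined in the cluster definition.
    assert len(dvs) == definition["num_validators"]
    for dv in dvs:
        # One public key share per operator.
        assert len(dv["public_shares"]) == len(definition["operators"])

check_lock_shape({
    "cluster_definition": {"num_validators": 1, "operators": [{}, {}, {}, {}]},
    "distributed_validators": [{"public_shares": ["a", "b", "c", "d"]}],
})
```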
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster: its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, increased cluster size implies higher operational costs and potential network latency, which may negatively affect performance.
+
+Optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
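These formulas can be checked with a short, self-contained script; it simply reproduces the Threshold, BFT # and CFT # columns of the table above:

```python
import math

def quorum(n: int) -> int:
    """Threshold: minimum nodes required to reach QBFT quorum for cluster size n."""
    return math.ceil(2 * n / 3)

def bft(n: int) -> int:
    """Maximum byzantine nodes tolerated for cluster size n."""
    return (n - 1) // 3

def cft(n: int) -> int:
    """Maximum crashed nodes tolerated for cluster size n."""
    return n - quorum(n)

# Reproduce the Cluster Size | Threshold | BFT # | CFT # columns.
for n in range(1, 23):
    print(f"{n:>12} | {quorum(n):>9} | {bft(n):>5} | {cft(n):>5}")
```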
diff --git a/versioned_docs/version-v0.18.0/charon/dkg.md b/versioned_docs/version-v0.18.0/charon/dkg.md
new file mode 100644
index 0000000000..86b0e28d2d
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/dkg.md
@@ -0,0 +1,73 @@
+---
+description: Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+sidebar_position: 2
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](docs/int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Due to the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could have each key share chosen by its operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (four randomly chosen points on a graph don't all necessarily sit on the same degree-three curve). Doing this securely, with no single party trusted to distribute the keys, requires what is known as a [**distributed key generation ceremony**](docs/int/key-concepts.md#distributed-validator-key-generation-ceremony).
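The curve intuition can be made concrete with a toy Shamir-style sketch over integers modulo a prime. This is not real BLS key generation, just the underlying polynomial idea: shares only combine usefully because they all come from the same curve.

```python
import random

P = 2**127 - 1  # a Mersenne prime; real BLS uses a different field

def make_shares(secret: int, threshold: int, n: int):
    """Evaluate a random degree (threshold-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret from any
    `threshold` shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, threshold=3, n=4)
assert reconstruct(shares[:3]) == secret  # any 3 of the 4 shares suffice
```

Independently chosen random shares would (almost surely) not lie on any single degree-two polynomial, which is why the shares must be generated together.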
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](../charon/cluster-configuration).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](./cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](./cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
+
+Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to deployed relays to find the other ENRs on the network. (Fresh ENRs contain just a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs the record with the client's private key. If a charon client sees an ENR with a higher nonce, it updates the IP address for that ENR in its address book.)
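The address-book rule in parentheses — prefer the record with the higher nonce — can be sketched as follows (field names are illustrative, not charon's internal representation):

```python
def update_address_book(book: dict, record: dict) -> bool:
    """Keep only the freshest record per peer, where `record` is a simplified
    ENR: {"pubkey", "nonce", "ip"} (illustrative fields). Returns True if the
    book was updated."""
    known = book.get(record["pubkey"])
    if known is None or record["nonce"] > known["nonce"]:
        book[record["pubkey"]] = record
        return True
    return False

book = {}
update_address_book(book, {"pubkey": "0xabc", "nonce": 1, "ip": "0.0.0.0"})
update_address_book(book, {"pubkey": "0xabc", "nonce": 2, "ip": "198.51.100.7"})
update_address_book(book, {"pubkey": "0xabc", "nonce": 1, "ip": "203.0.113.9"})  # stale, ignored
assert book["0xabc"]["ip"] == "198.51.100.7"
```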
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../charon/cluster-configuration).
diff --git a/versioned_docs/version-v0.18.0/charon/intro.md b/versioned_docs/version-v0.18.0/charon/intro.md
new file mode 100644
index 0000000000..767a6a3ded
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/intro.md
@@ -0,0 +1,82 @@
+---
+description: Charon - The Distributed Validator Client
+sidebar_position: 1
+---
+
+# Introduction
+
+This section introduces and outlines the Charon *[kharon]* middleware, Obol's implementation of DVT. Please see the [key concepts](/docs/int/key-concepts) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+![Charon Cluster](/img/DVCluster.png)
+
+## Charon Architecture
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+![Charon Workflow](/img/workflow.jpg)
+
+### Determine **when** duties need to be performed
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch): deterministic, fixed-size chunks of time.
+The first step is to determine when (in which slot/epoch) duties need to be performed. This is done by the `scheduler` component.
+It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for
+the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
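As a sketch of the timing arithmetic involved (using mainnet parameters — 12-second slots, 32 slots per epoch — as assumptions for illustration):

```python
SECONDS_PER_SLOT = 12  # mainnet parameter
SLOTS_PER_EPOCH = 32   # mainnet parameter

def slot_at(now: int, genesis_time: int) -> int:
    """Slot number containing unix timestamp `now`."""
    return (now - genesis_time) // SECONDS_PER_SLOT

def epoch_of(slot: int) -> int:
    """Epoch containing the given slot."""
    return slot // SLOTS_PER_EPOCH

genesis = 1606824023          # mainnet beacon chain genesis (2020-12-01 12:00:23 UTC)
now = genesis + 385 * SECONDS_PER_SLOT  # 385 slots after genesis
slot = slot_at(now, genesis)
assert (slot, epoch_of(slot)) == (385, 12)
```

A scheduler built on this mapping can fire an event at the start of each slot and look up which duties fall in it.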
+
+### Fetch and come to consensus on **what** data to sign
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator.
+The key shares are imported into the validator clients which produce partial signatures.
+Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain.
+*But to threshold aggregate partial signatures, each validator client must sign the same data.*
+The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.
+For attestations, this is the unsigned attestation, for block proposals, this is the unsigned block.
+
+The `Consensus` component listens to events from the `Fetcher` and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other
+Charon nodes in the cluster for that specific duty and slot.
+When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the
+validator private key shares and cannot sign anything on demand.
+Instead, operators import the key shares into industry-standard validator clients (VC)
+that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and
+intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it
+back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC.
+But it also stores all the partial signatures submitted by the VCs of other peers in the cluster.
+This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster.
+All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`.
+It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
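The trigger condition itself is just an M-of-N count. A minimal sketch for a hypothetical 3-of-4 cluster follows (the BLS aggregation math is omitted; only the gate is shown):

```shell
# Sketch of the aggregation trigger for a hypothetical 3-of-4 cluster.
THRESHOLD=3   # M: partial signatures required
TOTAL=4       # N: key shares issued
received=3    # partial signatures currently stored for this duty
if [ "$received" -ge "$THRESHOLD" ]; then
  echo "aggregating $received of $TOTAL partial signatures"
fi
```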
+
+### **Broadcast** final signature
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+- **:3600** - The validator REST API. This port serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/); validator clients should talk to it instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+
+- **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+
+- **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
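As a hedged illustration, a docker-compose port mapping consistent with the guidance above might look like this (the service name and the decision to publish each port are deployment-specific assumptions, not requirements):

```yaml
services:
  charon:
    ports:
      - "3610:3610"   # p2p TCP: must be reachable from the public internet
      - "3620:3620"   # monitoring: restrict to private/trusted networks
    # 3600 (validator API) is intentionally not published here; the validator
    # client reaches it over the private container network instead.
```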
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](docs/int/quickstart/index.md).
diff --git a/versioned_docs/version-v0.18.0/charon/networking.md b/versioned_docs/version-v0.18.0/charon/networking.md
new file mode 100644
index 0000000000..5b56a09dcc
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/charon/networking.md
@@ -0,0 +1,99 @@
+---
+description: Networking
+sidebar_position: 4
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [*internal validator stack*](#internal-validator-stack) and the [*external p2p network*](#external-p2p-network).
+
+## Internal Validator Stack
+
+
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it.
+Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between
+the nodes should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+- `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+- `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
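An illustrative invocation follows; only the flag names come from the CLI reference, while the endpoint and address values are assumptions for a typical single-machine setup:

```shell
charon run \
  --beacon-node-endpoints="http://localhost:5052" \
  --validator-api-address="127.0.0.1:3600"
```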
+
+## External P2P Network
+
+![External P2P Network](/img/ExternalP2PNetwork.png)
+The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are
+discovered via an external "relay" server. The p2p connections are over the public internet so the charon p2p port must be publicly accessible. Charon leverages
+the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](docs/charon/charon-cli-reference.md):
+- `--p2p-tcp-addresses`: Addresses for Charon to listen on and serve p2p requests.
+- `--p2p-relays`: Connect charon to one or more relay servers.
+- `--private-key-file`: Private key identifying the charon client.
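For example (flag names are from the CLI reference; the listen address and key path values are typical defaults, shown as assumptions):

```shell
charon run \
  --p2p-tcp-addresses="0.0.0.0:3610" \
  --p2p-relays="https://0.relay.obol.tech" \
  --private-key-file=".charon/charon-enr-private-key"
```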
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#cluster-lock-file) to identify the nodes in the cluster.
+For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778),
+not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::caution
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple, publicly accessible libp2p servers that support the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol.
+Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay,
+nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify)
+protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon
+client without the need to explicitly configure it. If this fails, however, the following two configuration flags can be used to explicitly set the publicly advertised
+address:
+- `--p2p-external-ip`: Explicitly sets the external IP address.
+- `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::caution
+If a pair of charon clients are not publicly accessible, for example due to being behind NATs, they will not be able to upgrade their relay connections to a direct connection.
+Operating over relay connections is supported but not recommended: relay connections introduce additional latency and reduced throughput, which result in decreased validator effectiveness
+and possibly missed block proposals and attestations.
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted: even though relay servers accept connections from nodes in multiple different clusters, relays are merely
+routing opaque connections. And since Charon only accepts incoming connections from other peers in its cluster, the use of a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay itself:
+- [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relay e2e encrypted connections between two peers in a cluster.
+- [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+- [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address).
+But a single multiaddr can only point to a single relay server, which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr
+via an HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use
+[consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+E.g., https://0.relay.obol.tech is actually a load-balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr,
+which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs as well as HTTP URLs.
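Both forms side by side, sketched with a hypothetical direct multiaddr (the IP address and truncated peer ID below are placeholders, not a real relay):

```shell
# HTTP URL form: charon resolves the relay's multiaddr via an HTTP GET.
charon run --p2p-relays="https://0.relay.obol.tech"
# Direct multiaddr form (placeholder IP and truncated peer ID):
charon run --p2p-relays="/ip4/203.0.113.10/tcp/3640/p2p/16Uiu2HAm..."
```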
diff --git a/versioned_docs/version-v0.18.0/dvl/_category_.json b/versioned_docs/version-v0.18.0/dvl/_category_.json
new file mode 100644
index 0000000000..b7a2cf3a69
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/dvl/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "DV Launchpad",
+ "position": 4,
+ "collapsed": true
+}
diff --git a/versioned_docs/version-v0.18.0/dvl/intro.md b/versioned_docs/version-v0.18.0/dvl/intro.md
new file mode 100644
index 0000000000..f227f32130
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/dvl/intro.md
@@ -0,0 +1,18 @@
+---
+description: A dapp to securely create Distributed Validators alone or with a group.
+sidebar_position: 1
+---
+
+# Introduction
+
+![DV Launchpad Promo Image](/img/DistributeYourValidators.svg)
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://goerli.launchpad.obol.tech/).
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](docs/int/quickstart/index.md).
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/fr/_category_.json b/versioned_docs/version-v0.18.0/fr/_category_.json
new file mode 100644
index 0000000000..72d8434c21
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/fr/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Further reading",
+ "position": 11,
+ "collapsed": true
+}
diff --git a/versioned_docs/version-v0.18.0/fr/eth.md b/versioned_docs/version-v0.18.0/fr/eth.md
new file mode 100644
index 0000000000..159e4ec1b1
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/fr/eth.md
@@ -0,0 +1,44 @@
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/)
+The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+### Distributed Validator Technology
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." (ethereum.org, 2023)
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+***Vitalik's Ethereum Roadmap***
+
+### Deep Dive Into DVT and Charon’s Architecture
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+- [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+- [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+- [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+
+#### References
+- ethereum.org. (2023). Distributed Validator Technology. [online] Available at: https://ethereum.org/en/staking/dvt/ [Accessed 25 Sep. 2023].
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/fr/golang.md b/versioned_docs/version-v0.18.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/versioned_docs/version-v0.18.0/int/Overview.md b/versioned_docs/version-v0.18.0/int/Overview.md
new file mode 100644
index 0000000000..6be6a6061a
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/Overview.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+## The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+- The [Distributed Validator Launchpad](../dvl/intro), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
+- [Charon](../charon/intro), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+- [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators
+- [Obol Testnets](../testnet.md), a set of ongoing public incentivized testnets that enable operators of any size to test their deployment before serving the mainnet Obol Network
+
+### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+## The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+### V1 - Trusted Distributed Validators
+
+![Multi Operator DV Cluster](/img/MultiOperator7.png)
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust within the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+### V2 - Trustless Distributed Validators
+
+V1 of charon serves a small-by-count, large-by-stake-weight group of operators. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators with a sufficient level of trust to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/versioned_docs/version-v0.18.0/int/_category_.json b/versioned_docs/version-v0.18.0/int/_category_.json
new file mode 100644
index 0000000000..f23f9bc2d0
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Getting started",
+ "position": 2,
+ "collapsed": false
+}
diff --git a/versioned_docs/version-v0.18.0/int/faq/_category_.json b/versioned_docs/version-v0.18.0/int/faq/_category_.json
new file mode 100644
index 0000000000..c5279cb839
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/faq/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "FAQ",
+ "position": 10,
+ "collapsed": true
+}
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md b/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md
new file mode 100644
index 0000000000..49d84ef980
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md
@@ -0,0 +1,82 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit for an Ethereum distributed validator that not every operator is able to participate in, or that cannot even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+this means your Charon instance is waiting for all the other cluster peers to start their DKG process: at this stage, peers can disconnect and reconnect at will, and the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory -- which is either `.charon` or what was specified with the `--data-dir` CLI parameter -- this is the error that will be shown:
+
+```
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process one must delete the following entities, if present:
+
+ - `validator_keys` directory
+ - `cluster-lock.json` file
+ - `deposit-data.json` file
+:::caution
+The `charon-enr-private-key` file **must be preserved**, failure to do so requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
+
+If you're doing a DKG with a custom cluster definition - for example, created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same file.
+
+Once this process has been completed, the cluster operators can retry a DKG.
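The cleanup can be sketched as follows, assuming the default `.charon` data directory (substitute your `--data-dir` path if you changed it):

```shell
# Remove only the artifacts a failed DKG leaves behind.
# The charon-enr-private-key file is deliberately left untouched.
rm -rf .charon/validator_keys
rm -f .charon/cluster-lock.json .charon/deposit-data.json
```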
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
+
+In order for the Obol team to debug your issue as quickly and precisely as possible please provide full logs in textual form, not through screenshots or display photos.
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/versioned_docs/version-v0.18.0/int/faq/errors.mdx b/versioned_docs/version-v0.18.0/int/faq/errors.mdx
new file mode 100644
index 0000000000..d8b29e07d0
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/faq/errors.mdx
@@ -0,0 +1,413 @@
+---
+sidebar_position: 2
+description: Errors & Resolutions
+---
+
+# Errors & Resolutions
+
+All operators should try to restart their nodes and should check if they are on the latest stable version before attempting any other configuration change, as we are still in beta and frequently releasing fixes. You can restart and update with the following commands:
+
+```
+docker compose down
+git pull
+docker compose up
+```
+
+You can check your logs using
+
+```
+docker compose logs
+```
+
+
+
ENRs & Keys
+
+
+
+
What is an ENR?
+
+ An ENR is shorthand for an Ethereum Node Record.
+ It is a way to represent a node on a public network, with a reliable mechanism to update its information.
+
+ At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+ ENRs have private keys they use to sign updates to the data contained in their ENR.
+ This private key is by default found at .charon/charon-enr-private-key, and should be kept secure, and not checked into version control.
+
How do I get my ENR if I want to generate it again?
+
+
+
+ cd to the directory where your private keys are located (ex: cd /path/to/charon/enr/private/key)
+
+
Run docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:latest enr. This prints the ENR on your screen.
+
Please note that this ENR is not the same as the one generated when you created it for the first time. This is because the process of generating ENRs includes the current timestamp.
+
+
+
+
+
What do I do if I lose my charon-enr-private-key?
+
+
+
For now, ENR rotation/replacement is not supported; it will be added in a future release.
+
Therefore, it's advised to always keep a backup of your private key in a secure location (ex: cloud storage, USB flash drive, etc.)
+
+
+
+
+
I can't find the keys anywhere
+
+
+
The charon-enr-private-key is generated inside a hidden folder .charon.
+
To view it, run ls -al in your terminal.
+
You can then copy the key to your ~/Downloads folder for easy access by running cp .charon/charon-enr-private-key ~/Downloads. This step may be a bit different on Windows.
+
Alternatively, if you are on macOS, press Cmd + Shift + . to view the .charon folder in the Finder application.
+
+
+
+
+
+
Lighthouse
+
+
+
+
Downloading historical blocks
+ This means that Lighthouse is still syncing which will throw a lot of errors down the line. Wait for the sync before moving further.
+
+
+
+
+ Failed to request attester duties error
+
+ Indicates there is something wrong with your lighthouse beacon node. This might be because the request buffer is full as your node is never starting consensus since it never gets the duties.
+
+
+
+
+ Not enough time for a discovery search error
+
+ This could be linked to an internet connection that is too slow, or to relying on a slow third-party service such as Infura.
+
+
+
+
+
Beacon Node
+
+
+
+
+ Error communicating with Beacon Node API & Error while connecting to beacon node event stream
+
+ This is likely because Lighthouse has not finished syncing; wait and try again once synced. It can also be linked to the Teku keystore issue described below.
+
+
+
+
Clock sync issues
Either your server's clock is off, or you are talking to a remote beacon node that is very slow (this is why we advise against using services like Infura).
+
+
+
+
My beacon node API is flaky with lots of errors and timeouts
+
+ A good quality beacon node API is critical to validator performance.
+ It is always advised to run your own beacon node to ensure low latencies to boost validator performance.
+
+ Using 3rd party services like Infura's beacon node API has significant disadvantages since the quality is often low.
+ Requests often return 500s or timeout (Charon times out after 2s).
+ This results in lots of warnings and errors and failed duties.
+ We are working on mitigating this, but running a local beacon node is still always preferred. We are not yet considering increasing the 2s timeout since that can have knock-on effects.
+
+
+
+
+
Charon
+
+
+
+
+ Attester failed in consensus component error
+
+ The required number of operators defined in your cluster-lock file is probably not online to sign successfully. Make sure all operators are running the latest version of charon. To check if some peers are not online: docker logs charon-distributed-validator-node-charon-1 2>&1 | grep 'absent'
+
+
+
+
+ Load private key error
+
+ Make sure you have successfully run a DKG before running the node. The key should be created and placed in the right directory during the ceremony. Also, make sure you are working in the right directory: charon-distributed-validator-node
+
+
+
+
+ Failed to confirm node connection error
+
+ Wait for Teku & Lighthouse sync to be complete.
+
+
+
+
+
+ RESERVATION_REFUSED is returned by the libp2p relay when some maximum limit has been reached.
+ This is most often due to "maximum reservations per IP/peer".
+ This usually happens when your charon node is restarting or stuck in an error loop, constantly attempting new relay reservations until it reaches the maximum.
+
+ To fix this error, stop your charon node for 30mins before restarting it.
+ This should allow the relay enough time to reset your ip/peer limits and should then allow new reservations.
+ This could also be due to the relay being overloaded in general, so reaching a server wide "maximum connections" limit.
+ This is an issue with relay scalability and we are working on a long-term fix for this.
+
+
+
+
+
+ Error opening relay circuit NO_RESERVATION (204) indicates the peer isn't connected to the relay, so the charon
+ client cannot connect to the peer via the relay. That might be because the peer is offline or the
+ peer is configured to connect to a different relay.
+
+ To fix this error, ensure the peer is online and configured with the exact same `--p2p-relays` flag.
+
+
+
+
+ Couldn't fetch duty data from the beacon node error
+
+
+ msgFetcher indicates a duty failed in the fetcher component when it failed to fetch the required data from the beacon node API. This indicates a problem with the upstream beacon node.
+
+
+
+
+ Couldn't aggregate attestation due to failed attester duty error
+
+
+ msgFetcherAggregatorNoAttData indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite attestation data. This indicates the associated attestation duty failed to obtain a cluster agreed upon value.
+
+
+
+
+ Couldn't aggregate attestation due to insufficient partial v2 committee subscriptions error
+
+
+ msgFetcherAggregatorZeroPrepares indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated v2 committee subscription. This indicates the associated prepare aggregation duty failed due to no partial v2 committee subscription submitted by the cluster validator clients.
+
+
+
+
+ Couldn't aggregate attestation due to failed prepare aggregator duty error
+
+
+ msgFetcherAggregatorFailedPrepare indicates an attestation aggregation duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated v2 committee subscription. This indicates the associated prepare aggregation duty failed.
+
+
+
+
+ Couldn't propose block due to insufficient partial randao signatures error
+
+
+ msgFetcherProposerFewRandaos indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed due to insufficient partial randao signatures submitted by the cluster validator clients.
+
+
+
+
+ Couldn't propose block due to zero partial randao signatures error
+
+
+ msgFetcherProposerZeroRandaos indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed due to no partial randao signatures submitted by the cluster validator clients.
+
+
+
+
+ Couldn't propose block due to failed randao duty error
+
+
+ msgFetcherProposerFailedRandao indicates a block proposer duty failed in the fetcher component since it couldn't fetch the prerequisite aggregated RANDAO. This indicates the associated randao duty failed.
+
+
+
+
+ Consensus algorithm didn't complete error
+
+
+ msgConsensus indicates a duty failed in consensus component. This could indicate that insufficient honest peers participated in consensus or p2p network connection problems.
+
+
+
+
+ Signed duty not submitted by local validator client error
+
+
+ msgValidatorAPI indicates that partial signatures were never submitted by the local validator client. This could indicate that the local validator client is offline, or has connection problems with charon, or has some other problem. See the validator client logs for more details.
+
+
+
+
+
+ msgParSigDBInternal indicates a bug in the partial signature database as it is unexpected.
+
+
+
+
+ No partial signatures received from peers error
+
+
+ msgParSigEx indicates that no partial signature for the duty was received from any peer. This indicates that all peers are offline or that there are p2p network connection problems.
+
+
+
+
+
+ msgParSigDBThreshold indicates that insufficient partial signatures for the duty were received from peers. This indicates problems with peers or with the p2p network connection.
+
+
+
+
+ Bug: threshold aggregation of partial signatures failed due to inconsistent signed data error
+
+
+ msgSigAgg indicates that BLS threshold aggregation of sufficient partial signatures failed. This indicates inconsistent signed data. This indicates a bug in charon as it is unexpected.
+
+
+
+
+ Existing private key lock file found, another charon instance may be running on your machine error
+
+
+ When you turn on the --private-key-file-lock option in Charon, it checks for a special file called the private key lock file. This file has the same name as the ENR private key file but with a .lock extension.
+ If the private key lock file exists and is not older than 5 seconds, Charon won't run. It doesn't allow running multiple Charon instances with the same ENR private key.
+ If the private key lock file has a timestamp older than 5 seconds, Charon will replace it and continue with its work.
+ If you're sure that no other Charon instances are running, you can delete the private key lock file.
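As a sketch, assuming the default `.charon` directory layout, you can check for and remove a stale lock file like this (only do so once you are certain no other charon instance is running):

```shell
# The lock file sits next to the ENR private key, with a .lock extension
ls -l .charon/charon-enr-private-key.lock
# Remove it so charon can start again
rm .charon/charon-enr-private-key.lock
```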
+
+
+
+
+
+ The issue revolves around an invalid setup or deployment in which key shares are not tied to the ENR private key: a mix-up during deployment leads to a mismatching validator client key share index.
+
For example:
+
Imagine node N is Alice and node M is Bob; the error would read: mismatching validator client key share index, Bob's key share submitted to Alice's charon node.
+
Bob's private key share(s) are imported to a VC that is connected to Alice's charon node. This is an invalid setup/deployment. Alice's charon node should only be connected to Alice's VC.
+ Check the key share of each node inside cluster-lock.json and verify that it matches the key inside node(num)/validator_keys/keystore-0.json
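A hedged sketch of that check using `jq` — the field names (`distributed_validators`, `public_shares`) and the file paths are assumptions based on a typical `cluster-lock.json` layout, so adjust them to match your files:

```shell
# Public key share recorded for node 0, validator 0, in the cluster lock
lock_share=$(jq -r '.distributed_validators[0].public_shares[0]' cluster-lock.json)
# Pubkey of the first keystore imported into node 0's validator client
# (EIP-2335 keystores store the pubkey without a 0x prefix)
vc_key="0x$(jq -r '.pubkey' node0/validator_keys/keystore-0.json)"
[ "$lock_share" = "$vc_key" ] && echo "key share matches" || echo "MISMATCH"
```

Repeat for each validator index and each node; every VC must only hold the key shares listed for its own node.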
+
+
+
+
+
Teku
+
+
+
+
Teku keystore file error
+ Teku sometimes logs an error which looks like Keystore file /opt/charon/validator_keys/keystore-0.json.lock already in use. This can be solved by deleting the file(s) ending with .lock in the folder .charon/validator_keys. It is caused by an unsafe shutdown of Teku (usually by double pressing `Ctrl+C` to shut down containers faster).
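As a sketch, the stale lock files can be removed in one command from the node directory (make sure no container is still using the keys first):

```shell
# Find and delete any leftover .lock files next to the validator keystores
find .charon/validator_keys -name '*.lock' -delete
```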
+
+
+
+
+
Grafana
+
+
+
+
How to fix the grafana dashboard?
+ Sometimes, the Grafana dashboard doesn't load any data the first time around. You can solve this by following the steps below:
+
Click the Wheel Icon > Datasources
+
Click prometheus
+
Change the "Access" field from Server (default) to Browser. Press "Save & Test". It should fail.
+
Change the "Access" field back to Server (default) and press "Save & Test". You should be presented with a green success icon saying "Data source is working" and you can return to the dashboard page.
+ You can ignore this error unless you have been contacted by the Obol Team with monitoring credentials. In that case, follow Getting Started Monitoring your Node in our advanced guides. It does not affect cluster performance or prevent the cluster from running.
+
+
+
+
+
Docker
+
+
+
+
How to fix permission denied errors?
+ Permission denied errors can come up in a variety of manners, particularly on Linux and WSL for Windows systems. In the interest of security, the charon docker image runs as a non-root user, and this user often does not have the permissions to write in the directory you have checked out the code to. This can generally be fixed with some of the following:
Changing the permissions of the .charon folder with the commands:
+
+
+ mkdir .charon (if it doesn't already exist)
+
+
+ sudo chmod -R a+rwX .charon (the capital X keeps the execute bit on directories, which is required to traverse them; a blanket 666 would make the folder untraversable)
+
+
+
+
+
+
+
I see a lot of errors after running docker compose up
+
+ This is because both Geth and Lighthouse are still syncing, so there are connectivity issues among the containers. Simply let the containers run for a while; the frequent errors will stop once Geth finishes syncing. You can also add a second beacon node endpoint for something like Infura by appending a comma-separated API URL to CHARON_BEACON_NODE_ENDPOINTS in the docker-compose(./docker-compose.yml#84).
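For example, a sketch of the relevant environment variable with a second, third-party endpoint appended — the service name, port, and Infura-style URL below are illustrative placeholders, not real endpoints:

```shell
# .env / docker-compose environment fragment (illustrative values)
CHARON_BEACON_NODE_ENDPOINTS=http://lighthouse:5052,https://your-beacon-endpoint.example.com
```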
+
+
+
+
How do I fix the plugin "loki" not found error?
+
+ If you get the following error when calling `docker compose up`:
+ Error response from daemon: error looking up logging plugin loki: plugin "loki" not found. Then it probably means that the Loki docker driver isn't installed. In that case, run the following command to install loki:
+ docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
+
+
+
+
+
Relay
+
+
+
+
+ Resolve IP of p2p external host flag: lookup replace.with.public.ip.or.hostname: no such host error
+
+ Replace replace.with.public.ip.or.hostname in the relay/docker-compose.yml with your real public IP or DNS hostname.
+
+
+
+
+ Lodestar logs these warnings because charon is not able to return proper dependent_root value in getAttesterDuties API response whenever lodestar calls this API. This is because charon uses go-eth2-client for all the beacon API calls and it doesn't provide dependent_root value in responses. We have reported this to them here.
+
+
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/faq/general.md b/versioned_docs/version-v0.18.0/int/faq/general.md
new file mode 100644
index 0000000000..8022256a30
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/faq/general.md
@@ -0,0 +1,64 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+## General
+
+### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) [kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+### What are the hardware requirements for running a Charon node?
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+At minimum:
+- A CPU with 2+ physical cores (or 4 vCPUs)
+- 8GB RAM
+- 1.5TB+ free SSD disk space (for mainnet)
+- 10 Mb/s internet bandwidth
+
+Recommended specifications:
+- A CPU with 4+ physical cores
+- 16GB+ RAM
+- 2TB+ free disk on a high performance SSD (e.g. NVMe)
+- 25 Mb/s internet bandwidth
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](../quickstart/group), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+### What is the difference between a node, a validator and a cluster?
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder with its contents to your new machine. Make sure the EL and CL on the new machine are synced before proceeding with the move to minimize downtime.
+
+## Distributed Key Generation
+
+### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum number of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula: `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting up a 4-out-of-4 cluster, for example, would make your validator more vulnerable to going offline, not less. You can check the recommended threshold values for a cluster [here](../key-concepts.md#distributed-validator-threshold).
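As a quick sanity check of the formula, here is a small shell sketch evaluating `n-(ceil(n/3)-1)` for the recommended `3f+1` cluster sizes:

```shell
# ceil(n/3) == (n + 2) / 3 in integer arithmetic
for n in 4 7 10; do
  threshold=$(( n - ( (n + 2) / 3 - 1 ) ))
  echo "cluster size $n -> threshold $threshold"
done
# prints thresholds 3, 5 and 7 respectively
```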
+
+## Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
+
+Diagnose some common errors and view their resolutions [here](./errors.mdx).
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/faq/risks.md b/versioned_docs/version-v0.18.0/int/faq/risks.md
new file mode 100644
index 0000000000..6931cd253f
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/faq/risks.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relays](docs/charon/networking.md) infrastructure allowing peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network. Ensure that all nodes in the cluster use the same relays, or they will not be able to find each other if they are connected to different relays.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../../charon/charon-cli-reference.md#the-run-subcommand)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com:3640/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](docs/charon/charon-cli-reference.md) or [`create dkg`](docs/charon/charon-cli-reference.md) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](docs/dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../quickstart/advanced/quickstart-combine.md).
diff --git a/versioned_docs/version-v0.18.0/int/key-concepts.md b/versioned_docs/version-v0.18.0/int/key-concepts.md
new file mode 100644
index 0000000000..c6014ba322
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+![A Distributed Validator](/img/32Eth.png)
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+
+## Distributed Validator Node
+
+![A Distributed Validator Node](/img/DVNode.png)
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+![A Geth Client](/img/POWNodeV2.png)
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+- [Go-Ethereum](https://geth.ethereum.org/)
+- [Nethermind](https://docs.nethermind.io/nethermind/)
+- [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+![A Consensus Client](/img/POSClient.png)
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+- [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+- [Teku](https://docs.teku.consensys.net/en/stable/)
+- [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+- [Nimbus](https://nimbus.guide/)
+- [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+![A Charon Client](/img/CharonBrick.png)
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+- Coming to consensus on a candidate duty for all validators to sign
+- Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../charon/intro).
+
+### Validator Client
+
+![A Lighthouse Client](/img/ValidatorBrick.png)
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+- [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+- [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+- [Teku](https://docs.teku.consensys.net/en/stable/)
+- [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+![A Distributed Validator Cluster](/img/DVCluster.png)
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+![A Distributed Validator Key](/img/ThresholdSigning.png)
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof of stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that must be online and honest for its distributed validators to remain operational is outlined in the following table.
+
+| Cluster Size | Threshold | Note |
+|:------------:|:---------:|:------------------|
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes|
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
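The threshold column coincides with `ceil(2n/3)`; a short shell loop to reproduce the table (an observation about these values, not a formula stated elsewhere in these docs):

```shell
for n in 4 5 6 7 8 9 10; do
  threshold=$(( (2 * n + 2) / 3 ))  # integer ceil(2n/3)
  echo "$threshold/$n"
done
```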
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](../charon/dkg).
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/_category_.json b/versioned_docs/version-v0.18.0/int/quickstart/_category_.json
new file mode 100644
index 0000000000..3ef52ab186
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Quickstart Guides",
+ "position": 3,
+ "collapsed": false
+}
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md b/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md
new file mode 100644
index 0000000000..e7015532af
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md
@@ -0,0 +1,51 @@
+---
+sidebar_position: 4
+description: Activate the Distributed Validator using the deposit contract
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together, synced all of your ethereum nodes such that the monitoring indicates that they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+
+
+
From a SAFE Multisig (Repeat these steps for every validator to deposit in your cluster)
+
+
From the SAFE UI, click on New Transaction then Transaction Builder to create a new custom transaction
+
Enter the beacon chain contract for Deposit on mainnet - you can find it here
+
Fill the transaction information
+
+
Set amount to 32 in ETH
+
Use your deposit-data.json to fill the required data: pubkey, withdrawal_credentials, signature, deposit_data_root. Make sure to prefix each input with 0x to format them as bytes
+
+
Click on Add transaction
+
Click on Create Batch
+
Click on Send Batch; you can click on Simulate first to check whether the transaction will execute successfully
+
Get the minimum threshold of signatures from the other addresses and execute the custom transaction
+
+
+
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/_category_.json b/versioned_docs/version-v0.18.0/int/quickstart/advanced/_category_.json
new file mode 100644
index 0000000000..c606c5656f
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Advanced Guides",
+ "position": 10,
+ "collapsed": true
+ }
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use the "Multiple Compose File" feature which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files since that results in conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along-with the default `docker-compose.yml` file that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both the features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into your Prometheus configuration file under the `rules` section. Make sure you understand each rule before adding it to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
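+
+As an illustrative sketch, a host-level rule file (adapted from the "Awesome Prometheus alerts" collection; the 80% threshold and durations are assumptions you should tune) could look like:
+
+```yaml
+groups:
+  - name: host-alerts
+    rules:
+      - alert: HostHighCpuLoad
+        # Fire when average CPU usage across all cores stays above 80% for 10 minutes
+        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          summary: "High CPU load on {{ $labels.instance }}"
+```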
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available in the [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules) repository.
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of `app_monitoring_readyz` equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster, i.e. if the increase in failed duties tagged with "sync_message" over the last hour, divided by the increase in total duties tagged with "sync_message" over the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
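+
+As a sketch, the `ClusterHighPctFailedSyncMsgDuty` condition above corresponds to a PromQL expression along these lines (the metric names here are assumptions inferred from the rule description; check the actual rule in the monitoring repository):
+
+```
+sum(increase(core_tracker_failed_duties_total{duty="sync_message"}[1h]))
+  /
+sum(increase(core_tracker_duty_total{duty="sync_message"}[1h])) > 0.1
+```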
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
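+
+Illustrative PromQL expressions for some of these metrics (the metric names are standard node-exporter ones; any thresholds you alert on are up to you):
+
+```
+# CPU usage (%) per instance
+100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
+
+# Memory usage (%)
+(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100
+
+# Root filesystem usage (%)
+(1 - node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}) * 100
+```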
+
+It is also important to check:
+
+- NTP clock skew
+- Process restarts and failures (e.g. through `node_systemd`)
+- High error and panic log counts (alert on these)
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..8d9e0ceca1
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added to `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`. They will look like:
+```
+obol20!tnt8U!C...
+```
+
+The updated `prometheus/prometheus.yml` file should look like:
+```
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20!tnt8U!C...
+
+scrape_configs:
+ - job_name: 'charon'
+ static_configs:
+ - targets: ['charon:3620']
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+ - job_name: 'node-exporter'
+ static_configs:
+ - targets: ['node-exporter:9100']
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..5f6656fc09
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md
@@ -0,0 +1,163 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run a cluster with MEV enabled
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or a later release, Obol seamlessly accommodates all validator client implementations within an MEV-enabled distributed validator cluster.
+
+For clusters with a cluster-lock version `1.6.0` and below, charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, charon client, and validator client, to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To use the builder API, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+ You should also consider adding `--local-block-value-boost 3` as a flag, to favour locally built blocks if they are within 3% in value of the relay block, to improve the chances of a successful proposal.
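+
+ As a sketch, the Teku flags involved look along these lines (verify the exact flag names against your Teku version; the relay URL is an example):
+
+ ```
+ --validators-builder-registration-default-enabled=true
+ --builder-endpoint="https://boost-relay.flashbots.net"
+ ```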
+
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, check that charon logs something like the following each epoch:
+```
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your charon node is successfully registering with the relay for a blinded block when the time comes.
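+
+One way to check for these lines in a docker compose setup (assuming the `charon` service name used in the example repositories):
+
+```
+docker compose logs charon | grep builder_registration
+```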
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
+
+You should check that your validator client's logs look healthy, and ensure that you haven't added a `fee-recipient` address that conflicts with what has been selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder, and was not a locally constructed block.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..5dd6395288
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md
@@ -0,0 +1,153 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory distinctly to avoid folder name conflicts.
+
+We suggest naming them clearly and distinctly, to avoid confusion.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./validators-to-be-combined
+
+validators-to-be-combined/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+:::caution
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a cluster's private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 combine --cluster-dir /opt/charon/validators-to-be-combined
+```
+
+This command will create one subdirectory for each validator private key that has been combined, named after its public key.
+
+```shell
+$ tree ./validators-to-be-combined
+
+validators-to-be-combined/
+├── 0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── 0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+We can verify that the directory names are correct by looking at the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key validators-to-be-combined/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..a658d9937b
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::caution
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../../../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../../../dvl/intro.md) to complete the process
+
+1. Operators navigate to `https://.launchpad.obol.tech/dv?configHash=` and complete the [run a DV with others](../group/quickstart-group-operator.md) flow.
+1. Once the DKG is complete, and operators are using the `--publish` flag, the created cluster details will be posted to the Obol API
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
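+
+If you want to wait for the DKG to finish programmatically, a polling sketch (the retry count and interval below are illustrative assumptions) could look like:
+
+```ts
+// Poll the Obol API until the cluster lock exists.
+async function waitForClusterLock(configHash: string, attempts = 30): Promise<any> {
+  for (let i = 0; i < attempts; i++) {
+    try {
+      return await obol.getClusterLock(configHash); // rejects until the DKG completes
+    } catch {
+      await new Promise((resolve) => setTimeout(resolve, 60_000)); // wait a minute between polls
+    }
+  }
+  throw new Error("cluster lock not available yet");
+}
+```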
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md
new file mode 100644
index 0000000000..44638ec22f
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md
@@ -0,0 +1,91 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an *existing validator private key* into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../index.md) instead.
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](../../key-concepts#distributed-validator-cluster).
+
+
+## Pre-requisites
+
+- Ensure you have the existing validator keystores (the ones to split) and passwords.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon.git
+
+ # Change directory
+ cd charon/
+
+ # Create a folder within this checked out repo
+ mkdir split_keys
+ ```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside them, with a matching filename but ending with `.txt`, should be the password to the keystore, e.g. `keystore-0.json` and `keystore-0.txt`.
+
+At the end of this process, you should have a tree like this:
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.18.0
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
+FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network goerli
+```
+
+The above command will create `validator_keys` along with `cluster-lock.json` in `./.charon/cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+.charon/cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md b/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md
new file mode 100644
index 0000000000..ae157214b7
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/alone/_category_.json b/versioned_docs/version-v0.18.0/int/quickstart/alone/_category_.json
new file mode 100644
index 0000000000..9f98f73841
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/alone/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Create a DV alone",
+ "position": 1,
+ "collapsed": true
+}
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md b/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md
new file mode 100644
index 0000000000..aee5550cc6
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 2
+description: Run all nodes in a distributed validator cluster
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create the private key shares
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator alone means that a single operator manages all of the nodes of the DV. Depending on the operator's security preferences, the private key shares can be created centrally and distributed securely to each node. This is the focus of the below guide.
+
+Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../../key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](./../group/index.md) instead for this latter case.
+:::
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+
+
+ Create the artifacts needed to run a DV cluster by running the following command to set up the inputs for the DV.
+ Check the Charon CLI reference for additional optional flags to set.
+
+
+
+ WITHDRAWAL_ADDR=[ENTER YOUR WITHDRAWAL ADDRESS HERE]
+
+ FEE_RECIPIENT_ADDR=[ENTER YOUR FEE RECIPIENT ADDRESS HERE]
+
+ NB_NODES=[ENTER AMOUNT OF DESIRED NODES]
+
+
+ Then, run this command to create all the key shares and cluster artifacts locally:
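+
+ A sketch of that command (mirroring the `create cluster` invocation used in the key-splitting guide, without the split flags; the cluster name is illustrative, and you should adjust the charon version and network to your needs):
+
+ ```sh
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster \
+   --name="mycluster" \
+   --withdrawal-addresses="${WITHDRAWAL_ADDR}" \
+   --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" \
+   --nodes "${NB_NODES}" \
+   --network goerli
+ ```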
+
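A sketch of the invocation, mirroring the `charon create cluster` command used in the local-test guide for this version. The flag set is an assumption to verify against the Charon CLI reference for v0.18.0, and the variables are repeated here with placeholder values so the snippet is self-contained; substitute your own addresses and node count.

```shell
# Placeholder values; substitute your own addresses and node count.
WITHDRAWAL_ADDR="0x0000000000000000000000000000000000000000"
FEE_RECIPIENT_ADDR="0x0000000000000000000000000000000000000000"
NB_NODES=4

# Assumed flags, mirroring the local-test guide; verify with
# `charon create cluster --help` for your charon version.
docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 \
  create cluster --name="mycluster" --cluster-dir=".charon/cluster/" \
  --withdrawal-addresses="${WITHDRAWAL_ADDR}" \
  --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" \
  --nodes "${NB_NODES}" --network goerli --num-validators=1
```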
+
+ Go to the Obol Launchpad and select Create a distributed validator alone. Follow the steps to configure your DV cluster.
+
+
+
+After successful completion, a subdirectory `.charon/cluster` should be created. In it are as many folders as nodes of the cluster. Each folder contains partial private keys that together make up the distributed validator described in `.charon/cluster/cluster-lock.json`.
+
+Once ready, you can move to [deploying this cluster physically](./deploy.md).
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md b/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md
new file mode 100644
index 0000000000..bdd5d3af8c
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md
@@ -0,0 +1,26 @@
+---
+sidebar_position: 3
+description: Move the private key shares to the nodes and run the cluster
+---
+
+# Deploy the cluster
+To distribute your cluster physically and start the DV, each node needs a directory called `.charon` with one (or several) private key shares within it as per the structure below.
+
+```
+├── .charon
+│   ├── charon-enr-private-key
+│   ├── cluster-lock.json
+│   ├── deposit-data.json
+│   └── validator_keys
+│       ├── keystore-0.json
+│       ├── keystore-0.txt
+│       ├── ...
+│       ├── keystore-N.json
+│       └── keystore-N.txt
+```
+
+:point_right: Use the single-node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the Kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [Helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.
+
+:::warning
+Right now, the `charon-distributed-validator-cluster` repo [used earlier to create the private keys](./create-keys) outputs a folder structure like `.charon/cluster/node0/validator_keys`. Make sure to grab the `./node0/*` folder, rename it to `.charon`, and then move it into one of the single-node repos above to get a working cluster with the folder structure shown above.
+:::
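Concretely, the rename described in the warning above amounts to the following, sketched here with placeholder files standing in for the generated artifacts and with assumed local directory names (`cluster-repo` for the keys checkout, `node-repo` for the single-node checkout):

```shell
# Simulate the layout produced by key creation (placeholder files only).
mkdir -p cluster-repo/.charon/cluster/node0/validator_keys
touch cluster-repo/.charon/cluster/node0/charon-enr-private-key
touch cluster-repo/.charon/cluster/node0/validator_keys/keystore-0.json

# Copy node0's artifacts into a single-node repo checkout, renamed to `.charon`.
mkdir -p node-repo
cp -r cluster-repo/.charon/cluster/node0 node-repo/.charon

ls node-repo/.charon   # charon-enr-private-key  validator_keys
```

Repeat for each node folder, one per machine or repo checkout.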
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md b/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md
new file mode 100644
index 0000000000..f25eebecaa
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md
@@ -0,0 +1,81 @@
+---
+sidebar_position: 1
+description: Test the solo cluster locally
+---
+
+# Run a test cluster locally
+:::warning
+This is a demo repo to understand how Distributed Validators work and is not suitable for a production deployment.
+
+This guide only runs one Execution Client, one Consensus Client, and six Distributed Validator Charon Client + Validator Client pairs on a single docker instance. As a consequence, if this machine fails, there is no fault tolerance.
+
+Follow these two guides sequentially instead for production deployment: [create keys centrally](./create-keys.md) and [how to deploy them](./deploy.md).
+:::
+
+The [`charon-distributed-validator-cluster`](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo contains six charon clients in separate docker containers along with an execution client and consensus client, simulating a Distributed Validator cluster running.
+
+The default cluster consists of:
+- [Nethermind](https://github.com/NethermindEth/nethermind), an execution layer client
+- [Lighthouse](https://github.com/sigp/lighthouse), a consensus layer client
+- Six [charon](https://github.com/ObolNetwork/charon) nodes
+- A mixture of validator clients:
+ - vc0: [Lighthouse](https://github.com/sigp/lighthouse)
+ - vc1: [Teku](https://github.com/ConsenSys/teku)
+ - vc2: [Nimbus](https://github.com/status-im/nimbus-eth2)
+ - vc3: [Lighthouse](https://github.com/sigp/lighthouse)
+ - vc4: [Teku](https://github.com/ConsenSys/teku)
+ - vc5: [Nimbus](https://github.com/status-im/nimbus-eth2)
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+ `.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+3. Create the artifacts needed to run a DV cluster by running the following command:
+
+ ```sh
+ # Enter required validator addresses
+ WITHDRAWAL_ADDR=
+ FEE_RECIPIENT_ADDR=
+
+ # Create a distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+ ```
+
+This command will create six folders within `.charon/cluster`, one for each node. You will need to rename each `node*` folder to `.charon` for it to be found by the default `charon run` command; alternatively, you can pass `charon run --private-key-file=".charon/cluster/node0/charon-enr-private-key" --lock-file=".charon/cluster/node0/cluster-lock.json"` (adjusting the node number) for each instance of charon you start.
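The per-node flag approach can be sketched with a small loop. This is illustrative only: it prints the six invocations rather than starting them, so you can inspect the paths before running each one.

```shell
# Build the `charon run` invocation for each of the six node directories.
CMDS=$(for i in 0 1 2 3 4 5; do
  echo "charon run --private-key-file=.charon/cluster/node${i}/charon-enr-private-key --lock-file=.charon/cluster/node${i}/cluster-lock.json"
done)

# Show the generated commands.
printf '%s\n' "$CMDS"
```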
+
+## Start the cluster
+
+Run this command to start your cluster containers:
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build
+```
+Check the monitoring dashboard and see if things look all right:
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/group/_category_.json b/versioned_docs/version-v0.18.0/int/quickstart/group/_category_.json
new file mode 100644
index 0000000000..e4c14eb202
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/group/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Create a DV as a group",
+ "position": 2,
+ "collapsed": true
+}
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/group/index.md b/versioned_docs/version-v0.18.0/int/quickstart/group/index.md
new file mode 100644
index 0000000000..6eafcd77d7
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/group/index.md
@@ -0,0 +1,18 @@
+# Run a cluster as a group
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator with others typically means that several operators run the various nodes of the cluster. In such a case, the key shares should be created with a [distributed key generation process](../../key-concepts.md#distributed-validator-key-generation-ceremony), avoiding the private key being stored in full, anywhere.
+:::
+
+There are two sequential user journeys when setting up a DV cluster with others. Each comes with its own quickstart:
+
+1. The [**Creator** (**Leader**) Journey](./quickstart-group-leader-creator), which outlines the steps to propose a Distributed Validator Cluster.
+ - In the **Creator** case, the person creating the cluster *will NOT* be a node operator in the cluster.
+ - In the **Leader** case, the person creating the cluster *will* be a node operator in the cluster.
+
+
+2. The [**Operator** Journey](./quickstart-group-operator), which outlines the steps to join a Distributed Validator Cluster proposed by a leader or creator using the above process.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md
new file mode 100644
index 0000000000..06e70e6ec1
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md
@@ -0,0 +1,129 @@
+---
+sidebar_position: 3
+description: Run one node in a multi-operator distributed validator cluster using the CLI
+---
+
+# Using the CLI
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster via the CLI.
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+- Decide who the Leader or Creator of your cluster will be. Only they need to perform [step 2](#step-2-leader-or-creator-creates-the-dkg-configuration-file-and-distributes-it-to-cluster-operators) and [step 5](#step-5-activate-the-deposit-data) in this quickstart. They do not get any special privileges.
+ - In the **Leader** case, the operator creating the cluster will also operate a node in the cluster.
+ - In the **Creator** case, the cluster is created by an external party to the cluster.
+
+## Step 1. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, all operators (including the leader but NOT a creator) need to create an [ENR](../../faq/errors.mdx) for their charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```
+
+You should expect to see a console output like
+
+ Created ENR private key: .charon/charon-enr-private-key
+ enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+
+:::caution
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+Finally, share your ENR with the leader or creator so that they can proceed to Step 2.
+
+## Step 2. Leader or Creator creates the DKG configuration file and distributes it to cluster operators
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+
+ ```
+ # Prepare an environment variable file
+ cp .env.create_dkg.sample .env.create_dkg
+ ```
+2. Populate the `.env.create_dkg` file with the cluster name, the fee recipient and withdrawal Ethereum addresses, and the ENRs of all the operators participating in the cluster.
+   - The file is hidden by default. To view it, run `ls -al` in your terminal. On `macOS`, press `Cmd + Shift + .` to view hidden files in the Finder application.
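   As an illustration, a populated file might look like the following. The variable names are an assumption based on charon's `CHARON_`-prefixed environment variable convention, the addresses are placeholders, and the ENR is the sample one from step 1; confirm the exact keys against `.env.create_dkg.sample`.

   ```
   # .env.create_dkg (illustrative values only; multiple ENRs are comma-separated)
   CHARON_NAME=mycluster
   CHARON_NUM_VALIDATORS=1
   CHARON_FEE_RECIPIENT_ADDRESSES=0x0000000000000000000000000000000000000000
   CHARON_WITHDRAWAL_ADDRESSES=0x0000000000000000000000000000000000000000
   CHARON_OPERATOR_ENRS=enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
   ```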
+
+3. Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file.
+ ```
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.18.0 create dkg
+ ```
+
+ This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in a cluster.
+
+## Step 3. Run the DKG
+
+After receiving the `cluster-definition.json` file created by the leader, cluster operators should ideally save it in the `.charon/` folder that was created during step 1; alternatively, the `--definition-file` flag can override the default expected location for this file.
+
+Every cluster member then participates in the DKG ceremony. For Charon v1, this needs to happen relatively synchronously between participants at an agreed time.
+
+```
+# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 dkg
+```
+
+>This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ&ab_channel=ObolNetwork).
+
+Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+
+- A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+- A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+- A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::caution
+Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.**
+:::
+
+:::info
+The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost.
+:::
+
+## Step 4. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely, or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every home internet setup and can be complicated by dynamic public IP addresses. We are working on making this as easy as possible, but for the time being, a distributed validator cluster will not work very resiliently if the charon nodes cannot talk directly to one another and instead need an intermediary node forwarding traffic to them.
+
+**Caution**: If you manually update `docker-compose.yml` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to, as `lighthouse` checkpoint-syncs and so syncing does not take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the Grafana dashboard to infer whether your cluster is healthy. In particular, you should check:
+
+- That your charon client can connect to the configured beacon client.
+- That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually ~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md
new file mode 100644
index 0000000000..0909b44f75
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md
@@ -0,0 +1,194 @@
+---
+sidebar_position: 1
+description: A leader/creator creates a cluster configuration to be shared with operators
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Creator & Leader Journey
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist with the preparation of a distributed validator key generation ceremony. Select the *Leader* tab if you **will** be an operator participating in the cluster, and select the *Creator* tab if you **will NOT** be an operator in the cluster.
+
+These roles hold no position of privilege in the cluster; they only set the initial terms of the cluster that the other operators agree to.
+
+
+
+ The person creating the cluster will be a node operator in the cluster.
+ Make sure docker is running before executing the commands below.
+
+
+
+ The person creating the cluster will not be a node operator in the cluster.
+
+
+
+## Overview Video
+
+
+
+## Step 1. Collect Ethereum addresses of the cluster operators
+Before starting the cluster creation, you will need to collect one Ethereum address per operator in the cluster. Each operator will need to be able to sign messages with this address through MetaMask. Broader wallet support will be added in the future.
+
+## Step 2. Create and back up a private key for charon
+
+
+
+
+ In order to prepare for a distributed key generation ceremony, you need to create an [ENR](docs/int/faq/errors.mdx#enrs-keys) for your charon client. Operators in your cluster will also need to do this step, as per their [quickstart](./quickstart-group-operator#step-2-create-and-back-up-a-private-key-for-charon). This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+ ```sh
+ # Clone this repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node
+
+ # Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+ ```
+
+You should expect to see a console output like
+
+ Created ENR private key: .charon/charon-enr-private-key
+ enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](/docs/int/faq/errors#docker-permission-denied-error) to allow the command to run successfully.
+
+:::caution
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+
+
+
+ This step is not needed and you can move on to [Step 3](#step-3-create-the-dkg-configuration-file-and-distribute-it-to-cluster-operators).
+
+
+
+
+## Step 3. Create the DKG configuration file and distribute it to cluster operators
+
+You will prepare the configuration file for the distributed key generation ceremony using the launchpad.
+
+1. Go to the [DV Launchpad](https://goerli.launchpad.obol.tech)
+2. Connect your wallet
+
+ ![Connect your Wallet](/img/Guide01.png)
+
+3. Select `Create a Cluster with a group` then `Get Started`.
+
+ ![Get Started](/img/Guide02.png)
+
+4. Follow the flow and accept the advisories.
+5. Configure the Cluster
+ - Input the `Cluster Name` & `Cluster Size` (i.e. number of operators in the cluster). The threshold for the cluster to operate successfully will update automatically.
+
+
+
+
+
+ ⚠️ Leave the `Non-Operator` toggle OFF.
+
+
+
+
+
+
+ ⚠️ Turn the `Non-Operator` toggle ON.
+
+
+
+
+
+ - Input the Ethereum addresses for each operator collected during [step 1](#step-1-collect-ethereum-addresses-of-the-cluster-operators).
+ - Select the desired number of validators (32 ETH each) the cluster will run.
+ - Paste your `ENR` generated at [Step 2](#step-2-create-and-back-up-a-private-key-for-charon).
+ - Select the `Withdrawal Addresses` method. Use `Single address` to receive the principal and fees to a single address or `Splitter Contracts` to share them among operators.
+
+
+
+
+
+ Enter the Withdrawal Address that will receive the validator effective balance at exit and when balance skimming occurs.
+
+ Enter the Fee Recipient Address to receive MEV rewards (if enabled), and block proposal priority fees.
+
+ You can set them to be the same as your connected wallet address in one click.
+
+
+ ![Create Group](/img/Guide03.png)
+
+
+
+
+
+ Enter the Ethereum address to claim the validator principal (32 ether) at exit.
+
+ Enter the Ethereum addresses and their percentage split of the validator's rewards. Validator rewards include consensus rewards, MEV rewards and proposal priority fees.
+ 6. Deploy the Obol Splits contracts by signing the transaction with your wallet.
+
+
+
+
+
+
+ 7. You will be asked to confirm your configuration and to sign:
+
+
+
+
+ The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+
+ The `operator_config_hash`. This is your acceptance of the terms as a participating node operator.
+
+ Your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.
+
+
+
+
+
+ 7. You will be asked to confirm your configuration and to sign:
+
+
+
+
+ The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+
+
+
+
+
+8. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+
+ ![Invite Operators](/img/Guide04.png)
+
+
+
+
+ 👉 Once every participating operator has signed their approval to the terms, you will continue the [**Operator** journey](./quickstart-group-operator#step-3-run-the-dkg) by completing the distributed key generation step.
+
+
+
+
+ Your journey ends here. You can use the invite link to monitor whether the operators confirm their agreement to the cluster by signing their approval. Future versions of the launchpad will allow a creator to track a distributed validator's lifecycle in its entirety.
+
+
+
+
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md
new file mode 100644
index 0000000000..771dbdb035
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md
@@ -0,0 +1,144 @@
+---
+sidebar_position: 2
+description: A node operator joins a DV cluster
+---
+
+# Operator Journey
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster after receiving a cluster invite link from a leader or creator.
+
+## Overview Video
+
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Share an Ethereum address with your Leader or Creator
+Before starting the cluster creation, make sure you have shared an Ethereum address with your cluster **Leader** or **Creator**. If you haven't chosen someone as a Leader or Creator yet, please go back to the [Quickstart intro](./index.md) and define one person to go through the [Leader & Creator Journey](./quickstart-group-leader-creator) before moving forward.
+
+## Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](docs/int/faq/errors.mdx#enrs-keys) for your charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```
+
+You should expect to see a console output like
+
+ Created ENR private key: .charon/charon-enr-private-key
+ enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](/docs/int/faq/errors#docker-permission-denied-error) to allow the command to run successfully.
+
+:::caution
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+## Step 3. Join and sign the cluster configuration
+
+After receiving the invite link created by the **Leader** or **Creator**, you will be able to join and sign the cluster configuration created.
+
+1. Go to the DV launchpad link provided by the leader or creator.
+2. Connect your wallet using the Ethereum address provided to the leader in [step 1](#step-1-share-an-ethereum-address-with-your-leader-or-creator).
+
+ ![Connect Wallet](/img/Guide05.png)
+
+3. Review the operators' addresses submitted and click `Get Started` to continue.
+
+ ![Get Started](/img/Guide06.png)
+
+4. Review and accept the advisories.
+5. Review the configuration created by the leader or creator and add your `ENR` generated in [step 2](#step-2-create-and-back-up-a-private-key-for-charon).
+
+ ![Review Config](/img/Guide07.png)
+
+6. Sign the following with your wallet:
+ - The config hash. This is a hashed representation of all of the details for this cluster.
+ - Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+
+7. Wait for all the other operators in your cluster to do the same.
+
+## Step 4. Run the DKG
+:::info
+For the [DKG](docs/charon/dkg.md) to complete, all operators need to be running the command simultaneously, so it helps to agree on a time amongst operators at which to run it.
+:::
+
+### Overview
+
+
+1. Once all operators have successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. If you closed the tab, go back to the invite link shared by the leader and connect your wallet.
+
+ ![Config Signing Success](/img/Guide08.png)
+
+2. You have two options to perform the DKG.
+ 1. **Option 1** (the default) is to copy the `docker` command shown on screen and run it in your terminal. It will retrieve the remote cluster details and begin the DKG process.
+
+ 2. **Option 2** (manual DKG) is to download the `cluster-definition` file manually and move it into the hidden `.charon` folder. Then every cluster member participates in the DKG ceremony by running the command displayed.
+
+ ![Run the DKG](/img/Guide10.png)
+
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+
+ - A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ - A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ - A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::caution
+Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.**
+:::
+
+:::info
+The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost.
+:::
+
+## Step 5. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely, or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every home internet setup and can be complicated by dynamic public IP addresses. We are working on making this as easy as possible, but for the time being, a distributed validator cluster will not work very resiliently if the charon nodes cannot talk directly to one another and instead need an intermediary node forwarding traffic to them.
+
+**Caution**: If you manually update `docker-compose.yml` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to, as `lighthouse` checkpoint-syncs and so syncing does not take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the Grafana dashboard to infer whether your cluster is healthy. In particular, you should check:
+
+- That your charon client can connect to the configured beacon client.
+- That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually ~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/index.md b/versioned_docs/version-v0.18.0/int/quickstart/index.md
new file mode 100644
index 0000000000..dc6e712533
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/index.md
@@ -0,0 +1,11 @@
+# Quickstart Guides
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+There are two ways to set up a distributed validator, and each comes with its own quickstart:
+1. [Run a DV cluster as a **group**](./group/index.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the private keys ever being stored in full in any one place.
+This approach can also be used by single operators looking to manage all nodes of a cluster while creating the key shares in a trust-minimised fashion.
+
+2. [Run a DV cluster **alone**](./quickstart/alone/create-keys), where a single operator runs all the nodes of the DV. Depending on trust assumptions, the key shares do not necessarily need to be created via a DKG process. Instead, they can be created in a centralised manner and distributed securely to the nodes.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md b/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md
new file mode 100644
index 0000000000..704b992e3c
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md
@@ -0,0 +1,128 @@
+---
+sidebar_position: 5
+description: Exit a validator
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Exit a DV
+
+:::caution
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+Users looking to exit staking entirely and withdraw their full balance must sign and broadcast a "voluntary exit" message with their validator keys, which starts the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+- A threshold of operators needs to run the exit command for the exit to succeed.
+- If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and all partial exits are lost before the threshold is reached, operators will have to rebroadcast their partial exit messages.
+:::
+
+## Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+This must be the validator client connected to your charon client taking part in the DV: you are signing only a partial exit message, with a partial private key share, which your charon client will combine with the partial exit messages from the other operators.
+
+:::info
+- All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch of `162304` included in the below commands should be sufficient.
+- Partial exits can be broadcast by any validator client, as long as the sum reaches the threshold for the cluster.
+:::
+
+
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension (the filename is the validator public key).
+ - Appends {String.raw`--validator=`} to the `command` variable.
+
+ It then executes `nimbus_beacon_node` with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated command string from the loop.
+ - `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon host:port.
+ - `--data-dir=/home/user/charon/`: Specifies the keystore path, which contains the `secrets` and `validators` folders.
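The per-file loop described above can be sketched in plain shell. This is an illustration only, with hypothetical paths and filenames; the real command runs inside the Nimbus VC container against your actual keystore directory:

```shell
# Illustration only: build one --validator flag per keystore file,
# where each filename (minus its extension) is a validator public key.
mkdir -p /tmp/wd/secrets
touch /tmp/wd/secrets/0xaaa.json /tmp/wd/secrets/0xbbb.json

command=""
for f in /tmp/wd/secrets/*; do
  name="$(basename "$f")"                  # e.g. 0xaaa.json
  command="$command --validator=${name%.*}"
done
echo "$command"
```

The resulting `$command` string is then passed to `nimbus_beacon_node` alongside the other flags.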
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes
+ `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon host:port.
+ - `--data-dir=/opt/data`: Specifies the folder where the keystores were imported.
+ - `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+
+
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+## Exit epoch and withdrawable epoch
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based on the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal, respectively.
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules.
+ :::caution
+ Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached.
+ :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep.
+This occurs 256 epochs after the exit epoch, which takes ~27.3 hours.
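As a sanity check on the ~27.3 hour figure, assuming mainnet timing (32 slots per epoch, 12 seconds per slot):

```shell
# 256 epochs * 32 slots/epoch * 12 seconds/slot
seconds=$((256 * 32 * 12))
hours_x10=$((seconds * 10 / 3600))        # hours, to one decimal place
echo "$seconds seconds = $((hours_x10 / 10)).$((hours_x10 % 10)) hours"
```

This yields 98304 seconds, i.e. roughly 27.3 hours from exit epoch to withdrawable epoch.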
+
+## How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. This example is a cluster of 4 nodes with 2 validators, where a threshold of 3 nodes broadcasting exits is required.
+
+1. Operator 1 broadcasts an exit on validator client 1.
+ ![Verify in Grafana Exit panel](/img/ExitPromQuery-01.png)
+ ![Verify in Grafana Exit panel](/img/DutyExit-01.png)
+2. Operator 2 broadcasts an exit on validator client 2.
+ ![Verify in Grafana Exit panel](/img/ExitPromQuery-02.png)
+ ![Verify in Grafana Exit panel](/img/DutyExit-02.png)
+3. Operator 3 broadcasts an exit on validator client 3.
+ ![Verify in Grafana Exit panel](/img/ExitPromQuery-03.png)
+ ![Verify in Grafana Exit panel](/img/DutyExit-03.png)
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. The logs will show the following:
+ ![Verify in Grafana Exit panel](/img/ExitLogs.png)
+
+:::tip
+Once a validator has broadcast an exit message, it must continue to validate for at least 27 hours, potentially longer depending on the exit queue. Do not shut off your distributed validator nodes until your validator has fully exited.
+:::
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md b/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md
new file mode 100644
index 0000000000..fa23b47591
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md
@@ -0,0 +1,95 @@
+---
+sidebar_position: 7
+description: Run a cluster on mainnet
+---
+
+# Run a DV on mainnet
+
+:::caution
+Charon is in a beta state, and you should proceed only if you accept the risk, the [terms of use](https://obol.tech/terms.pdf), and have tested running a Distributed Validator on a testnet first.
+
+Distributed Validators created for Goerli cannot be used on mainnet, and vice versa. Take care when creating, backing up, and activating mainnet validators.
+:::
+
+This section is intended for users who wish to run their Distributed Validator on Ethereum mainnet.
+
+### Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+### Steps
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) repo and `cd` into the directory.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+```
+
+2. If you have already cloned the repo, make sure that it is [up-to-date](./update).
+
+3. Copy the `.env.sample.mainnet` file to `.env`
+```
+cp -n .env.sample.mainnet .env
+```
+
+Your DV stack is now mainnet ready 🎉
+
+#### Remote mainnet beacon node
+
+:::caution
+Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly.
+:::
+
+If you already have a mainnet beacon node running somewhere and you want to use that instead of running EL (`geth`) & CL (`lighthouse`) as part of the repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+2. Uncomment the `profiles: [disable]` section for both `geth` and `lighthouse`. The override file should now look like this:
+```
+services:
+ geth:
+ # Disable geth
+ profiles: [disable]
+ # Bind geth internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your mainnet beacon node's URL:
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
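For instance, with two beacon nodes (the hostnames below are placeholders, not real endpoints), the entry would look like:

```
CHARON_BEACON_NODE_ENDPOINTS=http://beacon1.example.com:5052,http://beacon2.example.com:5052
```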
+
+#### Exit a mainnet distributed validator
+
+If you want to exit your mainnet validator, you need to uncomment and set the `EXIT_EPOCH` variable in the `.env` file
+
+```
+...
+# Cluster wide consistent exit epoch. Set to latest for fork version, see `curl $BEACON_NODE/eth/v1/config/fork_schedule`
+# Currently, the latest fork is capella (epoch: 194048)
+EXIT_EPOCH=194048
+...
+```
+Note that `EXIT_EPOCH` should be `194048` after the [Shapella fork](https://blog.ethereum.org/2023/03/28/shapella-mainnet-announcement).
diff --git a/versioned_docs/version-v0.18.0/int/quickstart/update.md b/versioned_docs/version-v0.18.0/int/quickstart/update.md
new file mode 100644
index 0000000000..3187bbc0bf
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/int/quickstart/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 6
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node remains secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+
+
+
+
+ cd charon-distributed-validator-node
+
+
+
+
+
+
+
+ cd charon-distributed-validator-cluster
+
+
+
+
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV cluster, take care when upgrading them: avoid upgrading all nodes simultaneously, particularly if you are updating or changing the validator client used, or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This usually means you have made local changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+ docker compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, your DV stack will be recreated using the updated files.
+ Either approach lets you handle Git conflicts when pulling the latest changes to your repository: preserving your local changes, or discarding them entirely, as your setup requires.
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/intro.md b/versioned_docs/version-v0.18.0/intro.md
new file mode 100644
index 0000000000..10a81b9143
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 20 members spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/versioned_docs/version-v0.18.0/sc/_category_.json b/versioned_docs/version-v0.18.0/sc/_category_.json
new file mode 100644
index 0000000000..c740383c52
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sc/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Smart contracts",
+ "position": 5,
+ "collapsed": true
+}
diff --git a/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md b/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..fb642befa5
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities. Developed by [Splits.org](https://splits.org).
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether, such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider, rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition, neither of these requirements is easily satisfied.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue, the consensus layer rewards and the execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimming from a validator with >32 ether in an ongoing manner, and receiving the principal of the validator upon exit.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is skimmed reward from an ongoing validator** (or number of validators), unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite the validator having been slashed in this example).
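A rough sketch of this accounting rule (a hypothetical simplification for illustration, not the contract's actual Solidity):

```shell
# Hypothetical sketch: inflows of 16 ether or less are treated as
# claimable reward; anything larger is treated as returned principal.
classify() {
  if [ "$1" -gt 16 ]; then echo "principal"; else echo "reward"; fi
}
classify 1     # balance skimming, prints "reward"
classify 31    # exited (slashed) validator, prints "principal"
```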
+
+:::warning
+
+Worst-case mass slashing penalties can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important that you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which does not allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
+
+This contract satisfies both design goals and can be used with thousands of validators. If you deploy an Optimistic Withdrawal Recipient with a higher principal than you actually end up using, nothing goes wrong. If you activate more validators than you specified at deployment, however, the contract will record too much ether as reward and will overpay the reward address with ether that was principal, not earned rewards. Current iterations of this contract are not designed to allow editing the principal amount after deployment.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
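As a toy model of the core idea (not the splits.org implementation, which has its own share representation), a split maps recipient addresses to percentages that must sum to 100:

```shell
# Toy model: each recipient address gets a percentage allocation;
# a valid configuration must allocate exactly 100% in total.
percents="50 30 20"   # e.g. for three recipient addresses
total=0
for p in $percents; do total=$((total + p)); done
[ "$total" -eq 100 ] && echo "valid split"
```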
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address; however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
\ No newline at end of file
diff --git a/versioned_docs/version-v0.18.0/sec/_category_.json b/versioned_docs/version-v0.18.0/sec/_category_.json
new file mode 100644
index 0000000000..2c9d0b38b7
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Security",
+ "position": 7,
+ "collapsed": true
+}
diff --git a/versioned_docs/version-v0.18.0/sec/bug-bounty.md b/versioned_docs/version-v0.18.0/sec/bug-bounty.md
new file mode 100644
index 0000000000..48c52d89b4
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/bug-bounty.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty
+
+## Overview
+
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+
+To participate in the Bug Bounty Program you must:
+
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, immediate family member of an Obol employee, Obol contractor, or Obol service provider.
+
+## Scope
+
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. A working proof of concept is required with each submission
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on validator availability, integrity, and/or loss of funds.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+- Attacker can successfully conduct eclipse attacks on cluster nodes whose peer IDs have 4 leading zero bytes.
+
+### High: up to $4,000
+
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+- Attacker can successfully partition the cluster and keep the cluster offline.
+
+### Critical: up to $10,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+
+- High impact, high likelihood
+
+Examples:
+- Attacker can successfully conduct remote code execution in the charon client to exfiltrate BLS private key material.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+
+Please email security@obol.tech
+
+Your report should include the following information:
+
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to closed source software and services for testing purposes.
diff --git a/versioned_docs/version-v0.18.0/sec/contact.md b/versioned_docs/version-v0.18.0/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/versioned_docs/version-v0.18.0/sec/ev-assessment.md b/versioned_docs/version-v0.18.0/sec/ev-assessment.md
new file mode 100644
index 0000000000..429c3f1ffa
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/ev-assessment.md
@@ -0,0 +1,284 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the team's software development practices are some of the most critical areas to secure. Many hacks and compromises in the space to date have resulted from these attack vectors, rather than from exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below presents the resulting report. Some sensitive information has been redacted, and responses to the recommendations detail the actions the Obol team has taken to mitigate what was highlighted.
+
+# Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)**
+**Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+- Software development processes
+- Vulnerability disclosure and escalation procedures
+- Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+## Contents:
+
+- Background Info
+- Analysis - Cluster Setup and DKG
+ - Key Risks
+ - Potential Attack Scenarios
+- Recommendations
+ - R1: Users should deploy cluster contracts through a known on-chain entry point
+ - R2: Users should deposit to the beacon chain through a pool contract
+ - R3: Raise the barrier to entry to push an update to the Launchpad
+- Additional Notes
+ - Vulnerability Disclosure
+ - Key Personnel Risk
+
+
+## Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+### What is Obol’s goal?
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+- [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+- [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+- [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+## Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+- Config file is well-formed and is using the expected version
+- Signatures and ENRs from other operators are valid
+- Cluster config hash is correct
+- DKG succeeds in producing valid signatures
+- Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+- the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+- the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+- Is my information correct? (address and ENR)
+- Does the information I received from the group match the cluster definition?
+- Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+- Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+## Key Risks
+
+### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+- How does the group creator know the Launchpad deployed the correct contracts?
+- How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+- the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+- most users are ill-equipped to make this determination themselves
+- we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+## Potential Attack Scenarios
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+- There are two overall github organization administrators, and a number of people have administrative control over individual repositories.
+- In order to merge PRs, the submitter needs:
+ - CI/CD checks to pass
+ - Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - repo admins could theoretically merge PRs without checks passing and without review/approval, and organization admins control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+- Reward: High
+- Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+- Reward: Medium
+- Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+- Reward: Low
+- Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator's consensus and validator clients. As such, it shouldn't have direct access to a validator's withdrawal or signing keys.
+
+If existing validators update to a malicious version of Charon, the worst thing an attacker could theoretically do is get the validator slashed. However, assuming Charon has no access to any private keys, even this would require one or more validator clients connected to Charon to also fail to prevent the signing of a slashable message. In practice, a compromised Charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+## Recommendations
+
+### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Rather than using an NPM library to supply a factory address or JSON artifacts, this has the benefit of being both:
+
+- **Harder to compromise:** as long as the user knows launchpad.obol.eth, it’s pretty difficult to trick them into deploying the wrong contracts.
+- **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+![Etherscan Contract View Screenshot](/img/EtherscanContractView.png)
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist:”
+
+- Did I send a transaction to `launchpad.obol.eth`?
+- Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+- If I input my address, does etherscan report the configuration I was expecting?
+ - withdrawal address matches
+ - fee recipient address matches
+ - reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
+
+#### Obol’s response:
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+ - Accept ETH from any of the group's operators
+ - Stop accepting ETH when the contract's balance hits (32 ETH * number of validators)
+ - Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+ - Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+ - Allow operators to fund the validator without needing to trust any single party
+ - Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
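+As a rough illustration, the recommended pool behavior can be modeled as a simple state machine. This is a hypothetical Python sketch of the logic described above, not Obol's actual contract; all names and the `DepositPool` structure are illustrative assumptions:

```python
DEPOSIT_SIZE = 32  # ETH required per validator

class DepositPool:
    """Illustrative model of the recommended pool contract's rules."""

    def __init__(self, num_validators: int):
        self.target = DEPOSIT_SIZE * num_validators
        self.balances: dict[str, float] = {}  # operator -> contributed ETH
        self.deposited = False

    def total(self) -> float:
        return sum(self.balances.values())

    def contribute(self, operator: str, amount: float) -> None:
        # accept ETH from any operator, but never past the target balance
        assert not self.deposited, "already deposited"
        assert self.total() + amount <= self.target, "would exceed target"
        self.balances[operator] = self.balances.get(operator, 0.0) + amount

    def bail(self, operator: str) -> float:
        # any operator may exit with their contribution before the deposit fires
        assert not self.deposited, "already deposited"
        return self.balances.pop(operator, 0.0)

    def deposit_to_beacon_chain(self) -> None:
        # in a real contract this would call the known beacon deposit contract
        assert self.total() == self.target, "target balance not reached"
        self.deposited = True
```

The key property is that no single operator ever custodies the pooled funds: contributions are refundable until the moment the pool itself forwards them to the beacon deposit contract.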
+
+#### Obol’s response:
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+ - Pooling from multiple operators.
+ - Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+ - Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+ - Providing a 'bail-out' option for operators to withdraw their contributions before the group's deposit to the beacon chain is initiated.
+
+### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+#### Obol’s response:
+We removed individual’s ability to merge changes without review, enforced MFA, signed commits, and employed Bulldozer bot to make sure a PR gets merged automatically when all checks pass.
+
+## Additional Notes
+### Vulnerability Disclosure
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+ - I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+ - When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ - Security policy
+ - More Information
+ - Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+ - The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon!
+
+#### Obol’s response:
+We addressed all of the concerns in the obol-security repository:
+ 1. The security policy link has been fixed.
+ 2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope.
+ 3. We list two GPG public keys with which reporters can encrypt vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+### Key Personnel Risk
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were GitHub organization ownership and domain name control.
+
+#### Obol’s response:
+These risks have been mitigated by adding an extra admin to the GitHub organization and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/versioned_docs/version-v0.18.0/sec/overview.md b/versioned_docs/version-v0.18.0/sec/overview.md
new file mode 100644
index 0000000000..17a81c9cc5
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/overview.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](#list-of-security-audits-and-assessments)
+1. [Security Focused Documents](#security-focused-documents)
+1. [Bug Bounty Details](./bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audit reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+- A review of Obol Labs [development processes](./ev-assessment) by Ethereal Ventures
+
+- A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/).
+
+- A [solidity audit](./smart_contract_audit) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+
+- A second audit of Charon is planned for Q4 2023.
+
+## Security Focused Documents
+
+- A [threat model](./threat_model) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](./bug-bounty.md).
diff --git a/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md b/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..657c70701d
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md
@@ -0,0 +1,476 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+
+The following contracts were in scope:
+- src/controllers/ImmutableSplitController.sol
+- src/controllers/ImmutableSplitControllerFactory.sol
+- src/lido/LidoSplit.sol
+- src/lido/LidoSplitFactory.sol
+- src/owr/OptimisticWithdrawalReceiver.sol
+- src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :------: | ---------------------------- | :-------------: | :-----: |
+| [M-01](#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### [M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### [M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+In the process, `recipientsSize` is unsafely downcast to a `uint8`, which has a maximum value of `255`. As a result, any value of 256 or more overflows, and the lower value `recipients.length % 256` is stored as `recipientsSize`.
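+The wrap-around is easy to reproduce outside Solidity. A hypothetical Python sketch of the truncation behavior (Solidity's `uint8(n)` keeps only the low 8 bits):

```python
def uint8(n: int) -> int:
    # equivalent to n % 256 for non-negative n, silently wrapping 256 and above
    return n & 0xFF

print(uint8(255))  # 255 -- the largest value that survives intact
print(uint8(256))  # 0
print(uint8(400))  # 144 -- a 400-operator split is recorded as 144 recipients
```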
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the sum of the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### [M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+- take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+- multiply this percentage by 3 (capped at 100%)
+- the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
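The threshold arithmetic above can be sanity-checked with a short calculation (plain Python for illustration; the 32 ether starting balance matches the assumption stated above):

```python
FULL_STAKE = 32  # ether, assuming a full, un-bled validator balance

# Full slashing penalty: 1/32 of stake up front, plus the correlation
# percentage applied to the remaining 31/32. To push the withdrawal
# below 16 ether we need: 1 + 31 * pct >= 16, i.e. pct >= 15/31.
pct_needed = 15 / 31                  # ~48.4% of the remaining stake
slashed_validators = pct_needed / 3   # correlation multiplier is 3x, ~16.1%

penalty = FULL_STAKE / 32 + pct_needed * (FULL_STAKE * 31 / 32)
withdrawal = FULL_STAKE - penalty     # exactly 16 ether at the threshold
```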
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### [L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, making this after-the-fact fee quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### [L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that this token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+- `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+- rebasing causes the balance to decrease slightly
+- `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+- since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+- we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+- `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+- the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
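The underflow in the third step above can be sketched in plain Python by modeling uint256 wraparound explicitly (an illustrative model, not the contract's actual code; the rebase amount is arbitrary):

```python
U256 = 2**256
WEI = 10**18

funds_pending_withdrawal = 32 * WEI          # set by the first distribute(PULL)
balance = funds_pending_withdrawal - 10**15  # negative rebase shaves ~0.001 ETH

# unchecked { fundsToBeDistributed = balance - fundsPendingWithdrawal; }
funds_to_be_distributed = (balance - funds_pending_withdrawal) % U256

assert funds_to_be_distributed > 16 * WEI         # treated as a "withdrawal"
assert funds_to_be_distributed == U256 - 10**15   # i.e. near type(uint256).max
```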
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1) Do not allow the OWR to be used with rebasing tokens.
+
+2) Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### [L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### [L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
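As a back-of-the-envelope check (assuming each recipient contributes roughly 24 bytes of immutable arguments: a 20-byte address plus a 4-byte uint32 allocation), the array size used in the PoC below pushes the encoded argument length well past what fits in 2 bytes:

```python
RECIPIENTS = 28_672
BYTES_PER_RECIPIENT = 20 + 4  # address + uint32 allocation (assumed packing)

args_length = RECIPIENTS * BYTES_PER_RECIPIENT
assert args_length == 688_128
assert args_length > 0xFFFF  # exceeds the 2-byte length LibClone assumes
```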
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### [G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
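The savings estimate follows directly from calldata pricing (a rough sketch; it assumes every saved byte is nonzero, which holds for token addresses):

```python
BYTES_SAVED = 2 * 20                 # two 20-byte addresses no longer forwarded
GAS_PER_NONZERO_CALLDATA_BYTE = 16   # post EIP-2028 calldata pricing

gas_saved = BYTES_SAVED * GAS_PER_NONZERO_CALLDATA_BYTE
assert gas_saved == 640
```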
+
+#### Recommendation
+
+1) Add the following to `LidoSplit.sol`:
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2) Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+
+3) Remove `stETH` and `wstETH` from `LidoSplitFactory.sol`: as storage values, as constructor arguments, and as arguments to `clone()`.
+
+4) Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### [G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+- distributedFunds: total amount of the token distributed via push or pull
+- fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+- claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+- pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
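That `distributedFunds` cancels out can be checked symbolically: `(d - p + b) - d == b - p`. A quick property check in Python (illustrative; the single-letter names are shorthand for the contract's storage values):

```python
import random

# d = distributedFunds, p = fundsPendingWithdrawal, b = currentBalance
for _ in range(1_000):
    b = random.randrange(10**20)
    p = random.randrange(b + 1)        # pending never exceeds the balance
    d = p + random.randrange(10**20)   # distributedFunds >= pending

    ending = d - p + b                 # endingDistributedFunds
    to_distribute = ending - d         # fundsToBeDistributed

    assert to_distribute == b - p      # simplified form; d cancels out
```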
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### [I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1) Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+
+2) Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on the metadata including quantity of fees and the address they'll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT's 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+
+3) While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### [I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1) Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+
+2) Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+
+3) `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+
+4) The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/versioned_docs/version-v0.18.0/sec/threat_model.md b/versioned_docs/version-v0.18.0/sec/threat_model.md
new file mode 100644
index 0000000000..fbca3c7ce8
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+To the Beacon Chain, a distributed validator looks much the same as a regular validator, and thus retains some of the same security considerations. Charon’s threat model, however, differs from a validator client’s threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, the design of Charon allows its node operators to rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is unable to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. At the time of writing, Charon cannot independently reward or penalize operators within a cluster.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. Each DV cluster holds a set of Ethereum validator private keys on whose behalf it validates.
+
+Alongside those, each node (henceforth ‘operator’) holds a secp256k1 identity private key, from which its ENR (Ethereum Node Record) is derived, identifying the node to the other cluster operators’ nodes.
+
+Exfiltration of this private key could allow an outside attacker to impersonate the node, potentially leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent in the context of BFT consensus to a misbehaving node, hence the cluster can survive and continue with its duties up to what’s specified by the cluster’s BFT protocol’s parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
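The `fault tolerance + 1` figure follows from standard BFT sizing, where a cluster of `n` nodes tolerates `f = floor((n - 1) / 3)` faulty nodes (a sketch; the cluster sizes are illustrative examples):

```python
def fault_tolerance(n: int) -> int:
    """Byzantine fault tolerance of an n-node BFT cluster (n = 3f + 1)."""
    return (n - 1) // 3

# For common cluster sizes, an attacker must steal f + 1 identity keys
# to push the cluster past its liveness threshold.
for n, f in [(4, 1), (7, 2), (10, 3), (13, 4)]:
    assert fault_tolerance(n) == f
    keys_needed_to_halt = f + 1
```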
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptographic primitives employed in Charon allow a threshold of the node operators’ private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operation, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with generating the other operators’ identity keys and key shares.
+
+It is clear that in this case the entirety of the sensitive material set is as secure as K’s environment; if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario’s likelihood low, as it would mean node operators willfully slashing the very stake they are rewarded for operating.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operators’ security almost at the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run, or in case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected by means of either relay nodes, or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various internal state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down processing or, in the case of a sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: The attacker doesn’t know where each cluster node is located and so can’t force fault tolerance +1 nodes offline if it can’t find them.
+2. The attacker possesses knowledge of the topology of the network (or part of it): the OA can mount DDoS attacks or try breaking into nodes’ servers - at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but we deem forging signatures, or otherwise finding a vulnerability in the SECP256K1+ECDSA or BLS12-381+BLS cryptosystems, a low-likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from performing its validation duties, and could plausibly increase the likelihood of slashing by serving Charon illegitimate information.
+
+If the number of nodes configured with the malicious BN reaches the byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; and once most of the nodes are byzantine, the system can reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium, depending on the trust model of the BN deployment (cloud, self-hosted, SaaS product): node operators should always host, or at least trust, their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interfering with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption leading to frustration and potential abandonment by node operators: could lead to the usage of a standard Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes using the malicious relay for communication reaches the byzantine node threshold defined in the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../int/quickstart/advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file used to address the operators’ nodes and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file used to define the operator’s addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less-than-ideal security practices, an OA could rewire the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or RNO might also manage to social engineer their way into convincing other operators into running their malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to both use a different set of files, and to run its cluster with `--no-verify`.
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/versioned_docs/version-v0.18.0/testnet.md b/versioned_docs/version-v0.18.0/testnet.md
new file mode 100644
index 0000000000..f430d00a0b
--- /dev/null
+++ b/versioned_docs/version-v0.18.0/testnet.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 6
+description: Obol testnets roadmap
+---
+
+# Testnets
+
+Obol Labs has coordinated, and will continue to coordinate, a number of progressively larger testnets over the coming quarters to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be completed by each testnet, and their target start date and duration.
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (Lighthouse, Teku, Lodestar, and Vouch), each on their own machine at each operator's home or a place of their choosing, running at least a Kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or a place of their choosing, running at least a Kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used; this DKG is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output of this testnet was a significant number of public clusters running and the public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to apply the learnings from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users are setting up DVs using the DV Launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running and building up the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Target Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/versioned_sidebars/version-v0.18.0-sidebars.json b/versioned_sidebars/version-v0.18.0-sidebars.json
new file mode 100644
index 0000000000..caea0c03ba
--- /dev/null
+++ b/versioned_sidebars/version-v0.18.0-sidebars.json
@@ -0,0 +1,8 @@
+{
+ "tutorialSidebar": [
+ {
+ "type": "autogenerated",
+ "dirName": "."
+ }
+ ]
+}
diff --git a/versions.json b/versions.json
index 675b4161bd..f25c8d2eed 100644
--- a/versions.json
+++ b/versions.json
@@ -1,4 +1,5 @@
[
+ "v0.18.0",
"v0.17.1",
"v0.17.0",
"v0.16.0",