
Commit

Wiki reorg: Introduce High-level Tech Terminology in Architecture (#6369)

* edit pipelining

* msg box

- moved two banners in there

* multi-threading

* parallel computing

* reorg architecture sidebar

* parallel computing to architecture

* Radha's feedback

- added scheduling
- modified text multi-threading
filippoweb3 authored Nov 11, 2024
1 parent 95aa650 commit 5b4a766
Showing 6 changed files with 171 additions and 166 deletions.
45 changes: 31 additions & 14 deletions docs/learn/learn-agile-coretime.md
@@ -1,15 +1,39 @@
---
id: learn-agile-coretime
title: Introduction to Agile Coretime
sidebar_label: Agile Coretime Intro
description: Introduction to Agile Coretime and its terminology
keywords: [coretime, blockspace, parachain, on-demand, cores]
title: Scheduling
sidebar_label: Scheduling
description: How the Polkadot Cloud achieves multi-threading to improve efficiency.
keywords: [coretime, blockspace, parachain, on-demand, cores, multi-threading, scheduling]
slug: ../learn-agile-coretime
---

Agile Coretime enables efficient utilization of Polkadot network resources and provides economic
flexibility for builders, generalizing Polkadot beyond what was initially proposed and envisioned in
its [whitepaper](https://polkadot.network/whitepaper/).
import DocCardList from '@theme/DocCardList';

[Scheduling](<https://en.wikipedia.org/wiki/Scheduling_(computing)>) is the process of assigning
tasks or jobs to resources (like CPU cores) at specific times or under certain conditions. Effective
scheduling ensures that resources are used efficiently and that tasks are completed in a timely
manner.
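
As a toy illustration of the idea (illustrative only; not Polkadot runtime code, and the task names and core count are invented), a scheduler can be as simple as assigning queued tasks to cores in round-robin order:

```python
# Toy round-robin scheduler: assign queued tasks to a fixed set of cores.
# Illustrative only; not part of the Polkadot runtime.
from collections import deque

def schedule(tasks, num_cores):
    """Return a mapping {core_id: [tasks assigned to that core]}."""
    assignment = {core: [] for core in range(num_cores)}
    queue, core = deque(tasks), 0
    while queue:
        assignment[core].append(queue.popleft())
        core = (core + 1) % num_cores  # rotate to the next core
    return assignment

print(schedule(["task-A", "task-B", "task-C"], num_cores=2))
# {0: ['task-A', 'task-C'], 1: ['task-B']}
```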

Polkadot introduces scheduling with **Agile Coretime**, enabling efficient utilization of Polkadot
network resources and providing economic flexibility for builders, generalizing Polkadot beyond what
was initially proposed and envisioned in its
[whitepaper](https://polkadot.com/papers/Polkadot-whitepaper.pdf). The introduction of coretime
enables multi-threading.

[Multi-threading](<https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)>) is a
programming model where multiple threads (smaller sequences of programmed instructions) are created
within a single process to perform multiple tasks at once. Multi-threading is commonly used to
improve the performance of applications by executing different parts of a program concurrently.
[Concurrency](<https://en.wikipedia.org/wiki/Concurrency_(computer_science)>) does not imply
parallel execution; rather, it enables a system to manage multiple processes by quickly switching
among them.

Polkadot achieves multi-threading by [splitting and interlacing](#splitting-and-interlacing)
Coretime.
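
A minimal sketch of what interlacing means here (illustrative only; the slot granularity and task names are invented): two tasks take turns on the consecutive scheduling slots of a single core.

```python
# Illustrative only: two tasks sharing one core by alternating slots
# (concurrency on one core, not parallel execution).
def interlace(slots, task_a, task_b):
    return [(slot, task_a if i % 2 == 0 else task_b)
            for i, slot in enumerate(slots)]

print(interlace(range(4), "parachain-A", "parachain-B"))
# [(0, 'parachain-A'), (1, 'parachain-B'), (2, 'parachain-A'), (3, 'parachain-B')]
```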

<DocCardList />

## Introduction to Agile Coretime

In Polkadot 1.0, the only way for a parachain to be secured by Polkadot was to rent a lease through
an [auction](./archive/learn-auction.md), which guaranteed parachain block validation for up to two
@@ -39,13 +63,6 @@ enables the authoring of a parachain block on-demand.

![core-usage-agile-rangeSplit](../assets/core-usage-agile-rangeSplit.png)

:::info Agile Coretime is under active development

The progress of Agile Coretime development can be tracked
[here.](https://github.com/orgs/paritytech/projects/119/views/20)

:::

## Agile Coretime Terminology

### Core
79 changes: 35 additions & 44 deletions docs/learn/learn-async-backing.md
@@ -1,25 +1,33 @@
---
id: learn-async-backing
title: Asynchronous Backing
sidebar_label: Asynchronous Backing
description: A brief overview of asynchronous backing, and how it affects Polkadot's scalability.
keywords: [parachains, backing, parablock, perspective parachains, unincluded segments]
title: Pipelining
sidebar_label: Pipelining
description: How the Polkadot Cloud achieves pipelining to improve scalability.
keywords: [parachains, backing, parablock, perspective parachains, unincluded segments, pipelining]
slug: ../learn-async-backing
---

:::tip Asynchronous Backing Guide for Parachains
import MessageBox from "../../components/MessageBox"; import "../../components/MessageBox.css";

For upgrading a parachain for Asynchronous Backing compatibility, follow the instructions on
[this Wiki document.](../maintain/maintain-guides-async-backing.md)
<MessageBox message="To fully follow the material on this page, it is recommended to be familiar with the primary stages
of the [Parachain Protocol](./learn-parachains-protocol). <br><br> For upgrading a parachain for Asynchronous Backing compatibility, follow the instructions on
[this Wiki document.](./maintain-guides-async-backing)" />

:::

:::info Learn about Parachain Consensus
[Pipelining](<https://en.wikipedia.org/wiki/Pipeline_(computing)>) is a technique for processing
multiple stages of a task simultaneously by breaking it into smaller steps. This allows the next
step to start before the previous one is completely finished. This is often used in processors and
computer architectures to increase throughput.
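
A back-of-the-envelope sketch of why pipelining raises throughput (illustrative arithmetic only; one time unit per stage is an assumption):

```python
# Illustrative only: time to process `n` blocks through `s` equal stages.
def sequential_time(n, s):
    return n * s              # finish each block before starting the next

def pipelined_time(n, s):
    return s + (n - 1)        # stages overlap across consecutive blocks

print(sequential_time(10, 2), pipelined_time(10, 2))  # 20 vs 11 time units
```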

To fully follow the material on this page, it is recommended to be familiar with the primary stages
of the [Parachain Protocol](./learn-parachains-protocol.md).
Polkadot introduces pipelining to the parachain block
[generation, backing, and inclusion](./learn-parachains-protocol.md) via **asynchronous backing**.
It is analogous to the logical pipelining of processor instructions in traditional architectures,
where some instructions may be executed before others are complete.

:::
Bundles of state transitions represented as blocks may be processed similarly. In the context of
Polkadot, pipelining aims to increase the throughput of the entire network by completing the
**backing** and **inclusion** steps for different blocks simultaneously. Asynchronous backing does
not just allow for pipelining within a single pipe (or core). It lays the foundation for a large
number of pipes (or cores) to run for the same parachain at the same time.
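
A sketch of that overlap within a single core (illustrative only): each relay chain block can carry the backing of one parablock and the inclusion of its predecessor.

```python
# Illustrative only: backing of parablock N+1 and inclusion of parablock N
# landing in the same relay chain block.
for relay_block in range(1, 5):
    backed   = f"P{relay_block}"
    included = f"P{relay_block - 1}" if relay_block > 1 else "-"
    print(f"relay block {relay_block}: backs {backed}, includes {included}")
```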

In Polkadot, parablocks are generated by [collators](./learn-collator.md) on the parachain side and
sent to [validators](./learn-validator.md) on the relay chain side for backing.
@@ -55,11 +63,11 @@ relay chain's progression:

Because of (1) parablocks can be generated every other relay chain block (i.e., every 12 seconds).
Because of (2) generation of parablock `P` can only start when `P - 1` is included (there is no
[pipelining](#pipelining)). Because of (3) execution time can take maximum 0.5 seconds as parablock
`P` is rushing to be backed in the next 5.5 seconds (2 seconds needed for backing and the rest for
gossiping). Every parablock is backed in 6 seconds (one relay chain block) and included in the next
6 seconds (next relay chain block). The time from generation to inclusion is 12 seconds. This limits
the amount of data a collator can add to each parablock.
pipelining). Because of (3) execution time can take maximum 0.5 seconds as parablock `P` is rushing
to be backed in the next 5.5 seconds (2 seconds needed for backing and the rest for gossiping).
Every parablock is backed in 6 seconds (one relay chain block) and included in the next 6 seconds
(next relay chain block). The time from generation to inclusion is 12 seconds. This limits the
amount of data a collator can add to each parablock.
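
Restating those numbers as arithmetic (illustrative only, using the figures quoted above):

```python
# Illustrative only: synchronous-backing timing from the paragraph above.
relay_block_time = 6       # seconds per relay chain block
execution        = 0.5     # max parablock execution time
backing          = 2.0     # time spent on backing itself
gossip           = relay_block_time - execution - backing
print(gossip)                          # 3.5 s left for gossiping
print(2 * relay_block_time)            # 12 s from generation to inclusion
```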

Parablock generation will choose the most recently received relay block as a relay parent, although
with an imperfect network that may differ from the true most recent relay block. So, in general, if
@@ -122,9 +130,8 @@ In synchronous backing, collators generate parablocks using context entirely pul
chain. While in asynchronous backing, collators use additional context from the
[unincluded segment](#unincluded-segments). Parablocks are included every 6 seconds because backing
of parablock `N + 1` and inclusion of parablock `N` can happen on the same relay chain block
([pipelining](#pipelining)). However, as for synchronous backing, a parablock takes 12 seconds to
get backed and included, and from inclusion to finality there is an additional 30-second time
window.
(pipelining). However, as for synchronous backing, a parablock takes 12 seconds to get backed and
included, and from inclusion to finality there is an additional 30-second time window.

Because the throughput is increased by 2x and parachains have 4x more execution time, asynchronous
backing is expected to deliver 8x more blockspace to parachains.
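
The 8x figure follows directly from those two gains (illustrative arithmetic only):

```python
# Illustrative only: the 8x blockspace estimate quoted above.
throughput_gain = 2   # a parablock included every 6 s instead of every 12 s
execution_gain  = 4   # 2 s of execution per parablock instead of 0.5 s
print(throughput_gain * execution_gain)  # 8x more blockspace
```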
@@ -166,13 +173,13 @@ for execution. And so on, P3 can be generated while backing groups check P2, and
while P3 is undergoing backing. In 24 seconds, P1 to P3 are included in the relay chain.

Note how there are always three unincluded parablocks at all times, i.e. compared to synchronous
backing there can be multiple unincluded parablocks (i.e. [pipelining](#pipelining)). For example,
when P1 is undergoing inclusion, P2 and P3 are undergoing backing. Collators were able to generate
multiple unincluded parablocks because on their end they have the
[unincluded segment](#unincluded-segments), a local storage of not-included parablock ancestors that
they can use to fetch information to build new parablocks. On the relay chain side,
[perspective parachains](#prospective-parachains) repeats the work each unincluded segment does in
tracking candidates (as validators cannot trust the record kept on parachains).
backing there can be multiple unincluded parablocks (i.e. pipelining). For example, when P1 is
undergoing inclusion, P2 and P3 are undergoing backing. Collators were able to generate multiple
unincluded parablocks because on their end they have the [unincluded segment](#unincluded-segments),
a local storage of not-included parablock ancestors that they can use to fetch information to build
new parablocks. On the relay chain side, [prospective parachains](#prospective-parachains) repeats
the work each unincluded segment does in tracking candidates (as validators cannot trust the record
kept on parachains).
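
A minimal sketch of an unincluded segment on the collator side (illustrative only; the field names and pruning rule are invented for the example): a local list of built-but-not-yet-included ancestors that new parablocks build on, pruned as inclusion catches up.

```python
# Illustrative only: an unincluded segment as a local list of parablock
# ancestors that are built but not yet included on the relay chain.
unincluded_segment = []

def build_parablock(number):
    parent = unincluded_segment[-1]["number"] if unincluded_segment else "last included"
    unincluded_segment.append({"number": number, "parent": parent})

def on_included(number):
    # prune everything up to and including the newly included parablock
    while unincluded_segment and unincluded_segment[0]["number"] <= number:
        unincluded_segment.pop(0)

for n in (1, 2, 3):
    build_parablock(n)
print(len(unincluded_segment))  # 3 parablocks unincluded at once
on_included(1)
print(len(unincluded_segment))  # 2 remaining after P1 is included
```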

The 6-second relay chain block delay includes a backing execution timeout (2 seconds) and some time
for network latency (the time it takes to gossip messages across the entire network). The limit
@@ -201,22 +208,6 @@ state roots, and ID info is placed on the parent block on the relay chain. The r
access the entire state of a parachain but only the values that changed during that block and the
merkelized hashes of the unchanged values.

### Pipelining

Asynchronous backing is a feature that introduces
[pipelining](https://www.techtarget.com/whatis/definition/pipelining) to the parachain block
[generation, backing and inclusion](./learn-parachains-protocol.md). It is analogous to the logical
pipelining of processor instruction in "traditional" architectures, where some instructions may be
executed before others are complete. Instructions may also be executed in parallel, enabling
multiple processor parts to work on potentially different instructions simultaneously.

Bundles of state transitions represented as blocks may be processed similarly. In the context of
Polkadot, pipelining aims to increase the throughput of the entire network by completing the backing
and inclusion steps for different blocks at the same time. Asynchronous backing does not just allow
for pipelining within a single pipe (or core). It lays the foundation for a large number of pipes
(or cores) to run for the same parachain at the same time. In that way, we have two distinct new
forms of parallel computation.

### Unincluded Segments

Unincluded segments are chains of candidate parablocks that have yet to be included in the relay
38 changes: 19 additions & 19 deletions docs/learn/learn-elastic-scaling.md
@@ -1,27 +1,27 @@
---
id: learn-elastic-scaling
title: Polkadot's Elastic Scaling
sidebar_label: Elastic Scaling
description: Enabling parachains to scale on-demand through instantaneous coretime.
keywords: [elastic scaling, parachains, coretime, blockspace]
title: Parallel Computing
sidebar_label: Parallel Computing
description: How the Polkadot Cloud achieves parallel computation to boost throughput.
keywords: [elastic scaling, parachains, coretime, blockspace, parallel computing]
slug: ../learn-elastic-scaling
---

The path of parablocks from their creation to their inclusion into the relay chain (discussed in the
[Parachain Protocol Page](./learn-parachains-protocol.md)) spans two domains: the parachain's and
relay chain's. Scaling the Polkadot protocol involves consideration of how parablocks are produced
by the parachain and then validated, processed, secured, made available for additional checks, and
finally included on the relay chain.

[Asynchronous backing](./learn-async-backing.md) is the optimization implemented on the relay chain
that allows parachains to produce blocks faster and allows the relay chain to process them seamlessly.
Asynchronous backing also improves the parachain side with unincluded segments and augmented info
that allows collators to produce multiple parablocks even if the previous blocks are not yet
included. This upgrade allows parachains to utilize up to 2 seconds execution time per parablock,
and the relay chain will be able to include a parablock every 6 seconds.

With elastic scaling, parachains can use multiple cores to include multiple parablocks within the
same relay chain block.
import MessageBox from "../../components/MessageBox"; import "../../components/MessageBox.css";

<MessageBox message="To fully follow the material on this page, it is recommended to be familiar with the primary stages
of the [Parachain Protocol](./learn-parachains-protocol)." />

[Parallel computing](https://en.wikipedia.org/wiki/Parallel_computing) involves performing many
calculations or processes simultaneously by dividing tasks into sub-tasks that run on multiple
processors or cores. This is essential for high-performance computing tasks, where many operations
are executed in parallel to speed up processing.

Polkadot uses [pipelining](./learn-async-backing.md) and
[multi-threading](./learn-agile-coretime.md) to increase throughput and achieve concurrency,
respectively. Polkadot also provides a throughput boost via parallel computation for a single task
with **elastic scaling**: parachains can use multiple cores to include multiple parablocks within
the same relay chain block.
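
A sketch of the rate implication (illustrative only; it assumes one parablock per assigned core per relay chain block):

```python
# Illustrative only: parablock throughput when a parachain holds several cores.
relay_block_time = 6  # seconds
for cores in (1, 2, 3):
    rate = cores / relay_block_time  # parablocks per second
    print(f"{cores} core(s): {rate:.2f} parablocks/s")
```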

The relay chain receives a sequence of parachain blocks on multiple cores, which are validated and
checked if all their state roots line up during their inclusion, but assume they’re unrelated
4 changes: 2 additions & 2 deletions docs/learn/learn-parachains-protocol.md
@@ -1,7 +1,7 @@
---
id: learn-parachains-protocol
title: Parachains' Protocol Overview
sidebar_label: Protocol Overview
title: Security Protocol Overview
sidebar_label: Security Protocol
description: Actors and Protocols involved in Polkadot and its Parachains' Block Finality.
keywords:
[
4 changes: 2 additions & 2 deletions docs/learn/learn-system-chains.md
@@ -1,7 +1,7 @@
---
id: learn-system-chains
title: System Parachains
sidebar_label: System Parachains
title: System Chains
sidebar_label: System Chains
description: System Parachains currently deployed on Polkadot.
keywords: [common good, system, parachains, system level, public utility]
slug: ../learn-system-chains
