Commit

save work

WanYixian committed Nov 25, 2024
1 parent 4437b4a commit 0107005
Showing 67 changed files with 146 additions and 140 deletions.
66 changes: 33 additions & 33 deletions changelog/product-lifecycle.mdx
@@ -22,38 +22,38 @@ Below is a list of all features in the public preview phase:

| Feature name | Start version |
| :-- | :-- |
| [Shared source](/sql/commands/sql-create-source/#shared-source) | 2.1 |
| [ASOF join](/docs/current/query-syntax-join-clause/#asof-joins) | 2.1 |
| [Partitioned Postgres CDC table](/docs/current/ingest-from-postgres-cdc/) | 2.1 |
| [Map type](/docs/current/data-type-map/) | 2.0 |
| [Azure Blob sink](/docs/current/sink-to-azure-blob/) | 2.0 |
| [Approx percentile](/docs/current/sql-function-aggregate/#approx_percentile) | 2.0 |
| [Auto schema change in MySQL CDC](/docs/current/ingest-from-mysql-cdc/#automatically-change-schema) | 2.0 |
| [SQL Server CDC source](/docs/current/ingest-from-sqlserver-cdc/) | 2.0 |
| [Sink data in parquet format](/docs/current/data-delivery/#sink-data-in-parquet-format) | 2.0 |
| [Time travel queries](/docs/current/time-travel-queries/) | 2.0 |
| [Manage secrets](/docs/current/manage-secrets/) | 2.0 |
| [Amazon DynamoDB sink](../integrations/destinations/amazon-dynamodb) | 1.10 |
| Auto-map upstream table schema in [MySQL](/docs/current/ingest-from-mysql-cdc/#automatically-map-upstream-table-schema) and [PostgreSQL](/docs/current/ingest-from-postgres-cdc/#automatically-map-upstream-table-schema) | 1.10 |
| [Version column](/docs/current/sql-create-table/) | 1.9 |
| [Snowflake sink](/docs/current/sink-to-snowflake/) | 1.9 |
| [Subscription](/docs/current/subscription/) | 1.9 |
| [RisingWave as PostgreSQL FDW](/docs/current/risingwave-as-postgres-fdw/) | 1.9 |
| [Iceberg source](/docs/current/ingest-from-iceberg/) | 1.8 |
| [Google BigQuery sink](/docs/current/sink-to-bigquery/) | 1.4 |
| [SET BACKGROUND\_DDL command](/docs/current/sql-set-background-ddl/) | 1.3 |
| [Decouple sinks](/docs/current/data-delivery/#sink-decoupling) | 1.3 |
| [Pulsar sink](/docs/current/sink-to-pulsar/) | 1.3 |
| [Cassandra sink](/docs/current/sink-to-cassandra/) | 1.2 |
| [Elasticsearch sink](/docs/current/sink-to-elasticsearch/) | 1.2 |
| [NATS sink](/docs/current/sink-to-nats/) | 1.2 |
| [NATS source](/docs/current/ingest-from-nats/) | 1.2 |
| [Append-only tables](/docs/current/sql-create-table/) | 1.1 |
| [Emit on window close](/docs/current/emit-on-window-close/) | 1.1 |
| [Read-only transactions](/docs/current/sql-start-transaction/) | 1.1 |
| [AWS Kinesis sink](/docs/current/sink-to-aws-kinesis/) | 1.0 |
| [CDC Citus source](/docs/current/ingest-from-citus-cdc/) | 0.19 |
| [Iceberg sink](/docs/current/sink-to-iceberg/) | 0.18 |
| [Pulsar source](/docs/current/ingest-from-pulsar/) | 0.1 |
| [Shared source](/sql/commands/sql-create-source#shared-source) | 2.1 |
| [ASOF join](/processing/sql/joins#asof-joins) | 2.1 |
| [Partitioned Postgres CDC table](/integrations/sources/postgresql-cdc#ingest-data-from-a-partitioned-table) | 2.1 |
| [Map type](/sql/data-types/map-type) | 2.0 |
| [Azure Blob sink](/integrations/destinations/azure-blob) | 2.0 |
| [Approx percentile](/sql/functions/aggregate#approx-percentile) | 2.0 |
| [Auto schema change in MySQL CDC](/integrations/sources/mysql-cdc#automatically-change-schema) | 2.0 |
| [SQL Server CDC source](/integrations/sources/sql-server-cdc) | 2.0 |
| [Sink data in parquet encode](/delivery/overview#sink-data-in-parquet-or-json-encode) | 2.0 |
| [Time travel queries](/processing/time-travel-queries) | 2.0 |
| [Manage secrets](/operate/manage-secrets) | 2.0 |
| [Amazon DynamoDB sink](/integrations/destinations/amazon-dynamodb) | 1.10 |
| Auto-map upstream table schema in [MySQL](/integrations/sources/mysql-cdc#automatically-map-upstream-table-schema) and [PostgreSQL](/integrations/sources/postgresql-cdc#automatically-map-upstream-table-schema) | 1.10 |
| [Version column](/sql/commands/sql-create-table#pk-conflict-behavior) | 1.9 |
| [Snowflake sink](/integrations/destinations/snowflake) | 1.9 |
| [Subscription](/delivery/subscription) | 1.9 |
| [RisingWave as PostgreSQL FDW](/delivery/risingwave-as-postgres-fdw) | 1.9 |
| [Iceberg source](/integrations/sources/apache-iceberg) | 1.8 |
| [Google BigQuery sink](/integrations/destinations/bigquery) | 1.4 |
| [SET BACKGROUND\_DDL command](/sql/commands/sql-set-background-ddl) | 1.3 |
| [Decouple sinks](/delivery/overview#sink-decoupling) | 1.3 |
| [Pulsar sink](/integrations/destinations/apache-pulsar) | 1.3 |
| [Cassandra sink](/integrations/destinations/cassandra-or-scylladb) | 1.2 |
| [Elasticsearch sink](/integrations/destinations/elasticsearch) | 1.2 |
| [NATS sink](/integrations/destinations/nats-and-nats-jetstream) | 1.2 |
| [NATS source](/integrations/sources/nats-jetstream) | 1.2 |
| [Append-only tables](/sql/commands/sql-create-table#parameters) | 1.1 |
| [Emit on window close](/processing/emit-on-window-close) | 1.1 |
| [Read-only transactions](/sql/commands/sql-start-transaction) | 1.1 |
| [AWS Kinesis sink](/integrations/destinations/aws-kinesis) | 1.0 |
| [CDC Citus source](/integrations/sources/citus-cdc) | 0.19 |
| [Iceberg sink](/integrations/destinations/apache-iceberg) | 0.18 |
| [Pulsar source](/integrations/sources/pulsar) | 0.1 |

This table will be updated regularly to reflect the latest status of features as they progress through the release stages.
8 changes: 4 additions & 4 deletions changelog/release-notes.mdx
@@ -897,7 +897,7 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c

## Installation

* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/docs/current/get-started/#install-and-start-risingwave).
* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/get-started/quickstart#install-and-start-risingwave).

## Administration

@@ -1054,9 +1054,9 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c

## Connectors

* Adds a new parameter `match_pattern` to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/docs/current/ingest-from-s3/). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565)
* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/docs/current/ingest-from-mysql-cdc/). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
* Adds a new parameter `match_pattern` to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/integrations/sources/s3). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565)
* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/integrations/sources/postgresql-cdc). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/integrations/sources/mysql-cdc). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
* Adds the JDBC sink connector, with which users can sink data to MySQL, PostgreSQL, or other databases that are compliant with JDBC. [#6493](https://github.com/risingwavelabs/risingwave/pull/6493)
* Add new parameters to the Kafka sink connector.
* `force_append_only` : Specifies whether to force a sink to be append-only. [#7922](https://github.com/risingwavelabs/risingwave/pull/7922)
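To make the `match_pattern` parameter described above concrete, here is a minimal sketch of an S3 source that only picks up files matching a glob pattern, issued through the `pg` driver used in the client-library guides below. The connection settings, column list, and connector option names are assumptions for illustration; the S3 source page linked above has the authoritative syntax.

```js
const { Pool } = require('pg')

// Assumed local RisingWave defaults; adjust for your deployment.
const pool = new Pool({
  host: '127.0.0.1',
  port: 4566,
  database: 'dev',
  user: 'root',
  password: '',
})

const createS3Source = async () => {
  // Only CSV files under events/ are ingested, thanks to match_pattern.
  // Option names and the CSV encode parameters are assumptions; a public
  // bucket is assumed here, so no credentials are passed.
  await pool.query(`
    CREATE SOURCE IF NOT EXISTS s3_events (id INT, payload VARCHAR)
    WITH (
      connector = 's3',
      s3.region_name = 'us-east-1',
      s3.bucket_name = 'example-bucket',
      match_pattern = 'events/*.csv'
    ) FORMAT PLAIN ENCODE CSV (without_header = 'false', delimiter = ',');
  `)
  await pool.end()
}

createS3Source().catch(console.error)
```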
2 changes: 1 addition & 1 deletion client-libraries/go.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [`pgx` driver](https://github.com/jackc/pgx) to connec

## Run RisingWave

To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install the `pgx` driver

2 changes: 1 addition & 1 deletion client-libraries/java.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [PostgreSQL JDBC](https://jdbc.postgresql.org/) driver

## Run RisingWave

To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).
> You do not need to connect to RisingWave at this stage.
## Download the PostgreSQL JDBC driver
12 changes: 6 additions & 6 deletions client-libraries/nodejs.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [Node.js pg driver](https://www.npmjs.com/package/pg)

## Run RisingWave

To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install npm

@@ -19,11 +19,11 @@ npm install pg

## Connect to RisingWave

:::note
<Note>

You can use either a client or a connection pool to connect to RisingWave. If you are working on a web application that makes frequent queries, we recommend that you use a connection pool. The code examples in this topic use connection pools.

:::
</Note>

Connecting to RisingWave and running a query is normally done together. Therefore, we include a basic query in the code. Replace it with the query that you want to run.

@@ -51,7 +51,7 @@ start().catch(console.error);
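As a rough sketch of the kind of connection-pool code this guide uses (not the file's exact contents), connecting through a `pg` pool and running a placeholder query looks like this; the host, port, database, and user are the usual local-deployment defaults and may differ in your setup.

```js
const { Pool } = require('pg')

// Assumed defaults for a local RisingWave instance.
const pool = new Pool({
  host: '127.0.0.1',
  port: 4566,
  database: 'dev',
  user: 'root',
  password: '',
})

const start = async () => {
  // Placeholder query; replace it with the query you want to run.
  const res = await pool.query('SELECT version();')
  console.log(res.rows[0])
  await pool.end()
}

start().catch(console.error)
```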

## Create a source

The code below creates a source `walk` with the [`datagen`](/ingest/ingest-from-datagen.md) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches.
The code below creates a source `walk` with the [`datagen`](/ingestion/generate-test-data) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches.

```js
const { Pool } = require('pg')
@@ -85,11 +85,11 @@ const start = async () => {
start().catch(console.error);
```
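As a hedged sketch of the step above (not the file's actual contents), creating the `walk` source with the `datagen` connector could look like the following. The field-generator options (`fields.distance.kind`, the min/max values, rows per second) are assumptions drawn from the generate-test-data guide.

```js
const { Pool } = require('pg')

const pool = new Pool({
  host: '127.0.0.1',
  port: 4566,
  database: 'dev',
  user: 'root',
  password: '',
})

const createWalkSource = async () => {
  // Two mock columns, distance and duration of a walk, generated randomly.
  await pool.query(`
    CREATE SOURCE IF NOT EXISTS walk (distance INT, duration INT)
    WITH (
      connector = 'datagen',
      fields.distance.kind = 'random',
      fields.distance.min = '1',
      fields.distance.max = '60',
      fields.duration.kind = 'random',
      fields.duration.min = '1',
      fields.duration.max = '30',
      datagen.rows.per.second = '10'
    ) FORMAT PLAIN ENCODE JSON;
  `)
  await pool.end()
}

createWalkSource().catch(console.error)
```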

:::note
<Note>

All the code examples in this guide include a section for connecting to RisingWave. If you run multiple queries within one connection session, you do not need to repeat the connection code.

:::
</Note>

## Create a materialized view

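As an illustrative sketch for the materialized-view step (the view name and query are assumptions, not the file's contents), a materialized view over `walk` that keeps a running average speed could be created like this:

```js
const { Pool } = require('pg')

const pool = new Pool({
  host: '127.0.0.1',
  port: 4566,
  database: 'dev',
  user: 'root',
  password: '',
})

const createMv = async () => {
  // Continuously maintained aggregate over the walk source.
  await pool.query(`
    CREATE MATERIALIZED VIEW IF NOT EXISTS avg_speed AS
    SELECT SUM(distance) AS total_distance,
           SUM(duration) AS total_duration,
           SUM(distance)::FLOAT / SUM(duration) AS avg_speed
    FROM walk;
  `)
  await pool.end()
}

createMv().catch(console.error)
```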
2 changes: 1 addition & 1 deletion client-libraries/python.mdx
@@ -13,7 +13,7 @@ In this section, we use the [`psycopg2`](https://pypi.org/project/psycopg2/) dri

### Run RisingWave

To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).


### Install the `psycopg2` driver
2 changes: 1 addition & 1 deletion client-libraries/ruby.mdx
@@ -8,7 +8,7 @@ In this guide, we use the [`ruby-pg`](https://github.com/ged/ruby-pg) driver to

## Run RisingWave

To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install the `ruby-pg` driver

4 changes: 2 additions & 2 deletions cloud/choose-a-project-plan.mdx
@@ -79,7 +79,7 @@ You can choose the availability region closest to you to minimize latency.
Name of the project. Assigning a descriptive name to each project can be helpful when managing multiple projects.
* **Node configuration**
Configure the instance resources and the number of nodes for each node type according to your actual needs.
To learn more about the nodes, see the [architecture of RisingWave](/docs/current/architecture/).
To learn more about the nodes, see the [architecture of RisingWave](/reference/architecture).

## Understanding nodes in RisingWave

@@ -91,7 +91,7 @@ RisingWave projects consist of three types of nodes, each serving a distinct rol
4. **Meta node**: Manages the metadata of compute and compactor nodes and orchestrates operations across the system.
5. **ETCD**: A distributed key-value store that provides a reliable way to store data across a cluster of machines. This node cannot be scaled manually after the project is created.

For the architecture of RisingWave, see [RisingWave architecture](/docs/current/architecture/).
For the architecture of RisingWave, see [RisingWave architecture](/reference/architecture).

## Pricing

4 changes: 2 additions & 2 deletions cloud/develop-overview.mdx
@@ -50,7 +50,7 @@ Select the version of the corresponding docs when using the RisingWave user docs
<CardGroup cols={3}>
<Card title="Integrations" icon="puzzle-piece" href="/docs/current/rw-integration-summary/" iconType="solid"> See how RisingWave can integrate with your existing data stack. Vote for your favorite data tools and streaming services to help us prioritize the integration development. </Card>
<Card title="Sources" icon="database"href="/docs/current/data-ingestion/" iconType="solid"> Connect to and ingest data from external sources such as databases and message brokers. See supported data sources. </Card>
<Card title="Sinks" icon="arrow-right-from-bracket"href="/docs/current/data-delivery/" iconType="solid"> Stream processed data out of RisingWave to message brokers and databases. See supported data destinations. </Card>
<Card title="Sinks" icon="arrow-right-from-bracket"href="/delivery/overview" iconType="solid"> Stream processed data out of RisingWave to message brokers and databases. See supported data destinations. </Card>
</CardGroup>

### Process data with RisingWave
@@ -119,7 +119,7 @@ Continue to learn about RisingWave.
</p>

<p>
<a href="/docs/current/architecture/" style={{ textDecoration: "underline", textDecorationColor: "#005eec" }}>Architecture</a> <Icon icon="arrow-up-right-from-square" iconType="solid" size={10} />
<a href="/reference/architecture" style={{ textDecoration: "underline", textDecorationColor: "#005eec" }}>Architecture</a> <Icon icon="arrow-up-right-from-square" iconType="solid" size={10} />
</p>
<p>
<a href="/docs/current/risingwave-flink-comparison/" style={{ textDecoration: "underline", textDecorationColor: "#005eec" }}>RisingWave vs. Apache Flink</a> <Icon icon="arrow-up-right-from-square" iconType="solid" size={10} />
2 changes: 1 addition & 1 deletion cloud/manage-sinks.mdx
@@ -3,7 +3,7 @@ title: "Manage sinks"
description: "To stream data out of RisingWave, you must create a sink. A sink refers to an external target that you can send data to. You can deliver data to downstream systems via our sink connectors."
---

For the complete list of supported sink connectors and data formats, see [Data delivery](/docs/current/data-delivery/) in the RisingWave documentation.
For the complete list of supported sink connectors and data formats, see [Data delivery](/delivery/overview) in the RisingWave documentation.

## Create a sink

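For orientation, sinks in RisingWave are defined with `CREATE SINK`. A minimal, hedged sketch follows, issued through the `pg` driver used in the client-library guides; the table name, broker address, topic, and the assumption that `orders` is append-only are all illustrative, and the Data delivery page linked above lists the supported connectors and their actual options.

```js
const { Pool } = require('pg')

const pool = new Pool({
  host: '127.0.0.1',
  port: 4566,
  database: 'dev',
  user: 'root',
  password: '',
})

const createSink = async () => {
  // Hypothetical Kafka sink from an assumed append-only table `orders`.
  // Broker address and topic are placeholders.
  await pool.query(`
    CREATE SINK IF NOT EXISTS orders_kafka_sink FROM orders
    WITH (
      connector = 'kafka',
      properties.bootstrap.server = 'broker1:9092',
      topic = 'orders'
    ) FORMAT PLAIN ENCODE JSON;
  `)
  await pool.end()
}

createSink().catch(console.error)
```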
2 changes: 1 addition & 1 deletion cloud/project-byoc.mdx
@@ -21,7 +21,7 @@ Follow the steps below to create your own cloud environment.
<Tip>
When you run the command `rwc byoc apply --name xxx`, it will deploy some resources in your AWS/GCP/Azure environment, such as AWS S3/Google Cloud Storage/Azure Blob Storage and EKS/GKE/AKS clusters. Please do not modify the configuration of these resources. If you encounter any issues during this process, please contact our [support team](mailto:[email protected]).
</Tip>
5. Click **Next** to continue the configuration of cluster size and nodes. To learn more about the nodes, see the [architecture of RisingWave](/docs/current/architecture/).
5. Click **Next** to continue the configuration of cluster size and nodes. To learn more about the nodes, see the [architecture of RisingWave](/reference/architecture).
6. Click **Next**, name your cluster, and execute the command that pops up to establish a BYOC cluster in your environment.

Once the cluster is successfully created, you can manage it through the portal just like hosted clusters.
