Merge branch 'main' into follow-up-of-#2762
WanYixian authored Nov 27, 2024
2 parents 1b66488 + e93f2c5 commit d06f969
Showing 225 changed files with 2,017 additions and 800 deletions.
25 changes: 25 additions & 0 deletions .github/workflows/brokenlinks-check.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
name: Broken Links Check

on:
  pull_request:

jobs:
  broken-links-check:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20.x"
          path: ~/.npm-global
          key: ${{ runner.os }}-build-${{ env.cache-name }}

      - name: Install Mintlify
        run: npm i -g mintlify

      - name: Run Broken Links Check
        run: mintlify broken-links
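A note on the workflow above: `path` and `key` are not inputs of `actions/setup-node`; they read like parameters of an `actions/cache` step that was folded into the Node.js setup step. If caching the global npm prefix was the intent, a separate step along these lines might be what was meant (a sketch, not part of this commit; it assumes a `cache-name` variable defined in the workflow's `env`):

```yaml
- name: Cache npm packages
  uses: actions/cache@v4
  with:
    path: ~/.npm-global
    key: ${{ runner.os }}-build-${{ env.cache-name }}
```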
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@
.DS_Store
*.py
rename_files.py.save
+node_modules/
66 changes: 33 additions & 33 deletions changelog/product-lifecycle.mdx
@@ -22,38 +22,38 @@ Below is a list of all features in the public preview phase:

| Feature name | Start version |
| :-- | :-- |
-| [Shared source](/sql/commands/sql-create-source/#shared-source) | 2.1 |
-| [ASOF join](/docs/current/query-syntax-join-clause/#asof-joins) | 2.1 |
-| [Partitioned Postgres CDC table](/docs/current/ingest-from-postgres-cdc/) | 2.1 |
-| [Map type](/docs/current/data-type-map/) | 2.0 |
-| [Azure Blob sink](/docs/current/sink-to-azure-blob/) | 2.0 |
-| [Approx percentile](/docs/current/sql-function-aggregate/#approx_percentile) | 2.0 |
-| [Auto schema change in MySQL CDC](/docs/current/ingest-from-mysql-cdc/#automatically-change-schema) | 2.0 |
-| [SQL Server CDC source](/docs/current/ingest-from-sqlserver-cdc/) | 2.0 |
-| [Sink data in parquet format](/docs/current/data-delivery/#sink-data-in-parquet-format) | 2.0 |
-| [Time travel queries](/docs/current/time-travel-queries/) | 2.0 |
-| [Manage secrets](/docs/current/manage-secrets/) | 2.0 |
-| [Amazon DynamoDB sink](../integrations/destinations/amazon-dynamodb) | 1.10 |
-| Auto-map upstream table schema in [MySQL](/docs/current/ingest-from-mysql-cdc/#automatically-map-upstream-table-schema) and [PostgreSQL](/docs/current/ingest-from-postgres-cdc/#automatically-map-upstream-table-schema) | 1.10 |
-| [Version column](/docs/current/sql-create-table/) | 1.9 |
-| [Snowflake sink](/docs/current/sink-to-snowflake/) | 1.9 |
-| [Subscription](/docs/current/subscription/) | 1.9 |
-| [RisingWave as PostgreSQL FDW](/docs/current/risingwave-as-postgres-fdw/) | 1.9 |
-| [Iceberg source](/docs/current/ingest-from-iceberg/) | 1.8 |
-| [Google BigQuery sink](/docs/current/sink-to-bigquery/) | 1.4 |
-| [SET BACKGROUND\_DDL command](/docs/current/sql-set-background-ddl/) | 1.3 |
-| [Decouple sinks](/docs/current/data-delivery/#sink-decoupling) | 1.3 |
-| [Pulsar sink](/docs/current/sink-to-pulsar/) | 1.3 |
-| [Cassandra sink](/docs/current/sink-to-cassandra/) | 1.2 |
-| [Elasticsearch sink](/docs/current/sink-to-elasticsearch/) | 1.2 |
-| [NATS sink](/docs/current/sink-to-nats/) | 1.2 |
-| [NATS source](/docs/current/ingest-from-nats/) | 1.2 |
-| [Append-only tables](/docs/current/sql-create-table/) | 1.1 |
-| [Emit on window close](/docs/current/emit-on-window-close/) | 1.1 |
-| [Read-only transactions](/docs/current/sql-start-transaction/) | 1.1 |
-| [AWS Kinesis sink](/docs/current/sink-to-aws-kinesis/) | 1.0 |
-| [CDC Citus source](/docs/current/ingest-from-citus-cdc/) | 0.19 |
-| [Iceberg sink](/docs/current/sink-to-iceberg/) | 0.18 |
-| [Pulsar source](/docs/current/ingest-from-pulsar/) | 0.1 |
+| [Shared source](/sql/commands/sql-create-source#shared-source) | 2.1 |
+| [ASOF join](/processing/sql/joins#asof-joins) | 2.1 |
+| [Partitioned Postgres CDC table](/integrations/sources/postgresql-cdc#ingest-data-from-a-partitioned-table) | 2.1 |
+| [Map type](/sql/data-types/map-type) | 2.0 |
+| [Azure Blob sink](/integrations/destinations/azure-blob) | 2.0 |
+| [Approx percentile](/sql/functions/aggregate#approx-percentile) | 2.0 |
+| [Auto schema change in MySQL CDC](/integrations/sources/mysql-cdc#automatically-change-schema) | 2.0 |
+| [SQL Server CDC source](/integrations/sources/sql-server-cdc) | 2.0 |
+| [Sink data in parquet encode](/delivery/overview#sink-data-in-parquet-or-json-encode) | 2.0 |
+| [Time travel queries](/processing/time-travel-queries) | 2.0 |
+| [Manage secrets](/operate/manage-secrets) | 2.0 |
+| [Amazon DynamoDB sink](/integrations/destinations/amazon-dynamodb) | 1.10 |
+| Auto-map upstream table schema in [MySQL](/integrations/sources/mysql-cdc#automatically-map-upstream-table-schema) and [PostgreSQL](/integrations/sources/postgresql-cdc#automatically-map-upstream-table-schema) | 1.10 |
+| [Version column](/sql/commands/sql-create-table#pk-conflict-behavior) | 1.9 |
+| [Snowflake sink](/integrations/destinations/snowflake) | 1.9 |
+| [Subscription](/delivery/subscription) | 1.9 |
+| [RisingWave as PostgreSQL FDW](/delivery/risingwave-as-postgres-fdw) | 1.9 |
+| [Iceberg source](/integrations/sources/apache-iceberg) | 1.8 |
+| [Google BigQuery sink](/integrations/destinations/bigquery) | 1.4 |
+| [SET BACKGROUND\_DDL command](/sql/commands/sql-set-background-ddl) | 1.3 |
+| [Decouple sinks](/delivery/overview#sink-decoupling) | 1.3 |
+| [Pulsar sink](/integrations/destinations/apache-pulsar) | 1.3 |
+| [Cassandra sink](/integrations/destinations/cassandra-or-scylladb) | 1.2 |
+| [Elasticsearch sink](/integrations/destinations/elasticsearch) | 1.2 |
+| [NATS sink](/integrations/destinations/nats-and-nats-jetstream) | 1.2 |
+| [NATS source](/integrations/sources/nats-jetstream) | 1.2 |
+| [Append-only tables](/sql/commands/sql-create-table#parameters) | 1.1 |
+| [Emit on window close](/processing/emit-on-window-close) | 1.1 |
+| [Read-only transactions](/sql/commands/sql-start-transaction) | 1.1 |
+| [AWS Kinesis sink](/integrations/destinations/aws-kinesis) | 1.0 |
+| [CDC Citus source](/integrations/sources/citus-cdc) | 0.19 |
+| [Iceberg sink](/integrations/destinations/apache-iceberg) | 0.18 |
+| [Pulsar source](/integrations/sources/pulsar) | 0.1 |

This table will be updated regularly to reflect the latest status of features as they progress through the release stages.
8 changes: 4 additions & 4 deletions changelog/release-notes.mdx
@@ -897,7 +897,7 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c

## Installation

-* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/docs/current/get-started/#install-and-start-risingwave).
+* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/get-started/quickstart#install-and-start-risingwave).

## Administration

@@ -1054,9 +1054,9 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c

## Connectors

-* Adds a new parameter `match_pattern` to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/docs/current/ingest-from-s3/). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565)
-* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
-* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/docs/current/ingest-from-mysql-cdc/). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
+* Adds a new parameter `match_pattern` to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/integrations/sources/s3). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565)
+* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/integrations/sources/postgresql-cdc). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
+* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/integrations/sources/mysql-cdc). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133)
* Adds the JDBC sink connector, with which users can sink data to MySQL, PostgreSQL, or other databases that are compliant with JDBC. [#6493](https://github.com/risingwavelabs/risingwave/pull/6493)
* Add new parameters to the Kafka sink connector.
* `force_append_only` : Specifies whether to force a sink to be append-only. [#7922](https://github.com/risingwavelabs/risingwave/pull/7922)
2 changes: 1 addition & 1 deletion client-libraries/go.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [`pgx` driver](https://github.com/jackc/pgx) to connec

## Run RisingWave

-To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
+To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install the `pgx` driver

2 changes: 1 addition & 1 deletion client-libraries/java.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [PostgreSQL JDBC](https://jdbc.postgresql.org/) driver

## Run RisingWave

-To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
+To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).
> You do not need to connect to RisingWave at this stage.
## Download the PostgreSQL JDBC driver
12 changes: 6 additions & 6 deletions client-libraries/nodejs.mdx
@@ -9,7 +9,7 @@ In this guide, we use the [Node.js pg driver](https://www.npmjs.com/package/pg)

## Run RisingWave

-To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
+To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install npm

@@ -19,11 +19,11 @@ npm install pg

## Connect to RisingWave

-:::note
+<Note>

You can use either a client or a connection pool to connect to RisingWave. If you are working on a web application that makes frequent queries, we recommend that you use a connection pool. The code examples in this topic use connection pools.

-:::
+</Note>

Connecting to RisingWave and running a query is normally done together. Therefore, we include a basic query in the code. Replace it with the query that you want to run.
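As an illustration of what those connection details typically look like, here is a minimal sketch. The host, port, user, and database values are RisingWave's common local defaults (port `4566`, user `root`, database `dev`), not values taken from this diff; adjust them to your deployment.

```javascript
// Typical local RisingWave connection parameters (assumed defaults,
// not part of this commit -- adjust to your setup).
const config = {
  host: '127.0.0.1',
  port: 4566,
  user: 'root',
  password: '',
  database: 'dev',
};

// Both `Client` and `Pool` from the `pg` package accept this object,
// or an equivalent libpq-style connection string:
function toConnectionString({ host, port, user, database }) {
  return `postgresql://${user}@${host}:${port}/${database}`;
}

console.log(toConnectionString(config));
// → postgresql://root@127.0.0.1:4566/dev
```

You would pass `config` directly to `new Pool(config)` (or `new Client(config)`) from the `pg` package, as in the examples that follow.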

@@ -51,7 +51,7 @@ start().catch(console.error);

## Create a source

-The code below creates a source `walk` with the [`datagen`](/ingest/ingest-from-datagen.md) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches.
+The code below creates a source `walk` with the [`datagen`](/ingestion/generate-test-data) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches.

```js
const { Pool } = require('pg')
@@ -85,11 +85,11 @@ const start = async () => {
start().catch(console.error);
```

-:::note
+<Note>

All the code examples in this guide include a section for connecting to RisingWave. If you run multiple queries within one connection session, you do not need to repeat the connection code.

-:::
+</Note>

## Create a materialized view

2 changes: 1 addition & 1 deletion client-libraries/python.mdx
@@ -13,7 +13,7 @@ In this section, we use the [`psycopg2`](https://pypi.org/project/psycopg2/) dri

### Run RisingWave

-To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
+To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).


### Install the `psycopg2` driver
2 changes: 1 addition & 1 deletion client-libraries/ruby.mdx
@@ -8,7 +8,7 @@ In this guide, we use the [`ruby-pg`](https://github.com/ged/ruby-pg) driver to

## Run RisingWave

-To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx).
+To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart).

## Install the `ruby-pg` driver

4 changes: 2 additions & 2 deletions cloud/choose-a-project-plan.mdx
@@ -79,7 +79,7 @@ You can choose the availability region closest to you to minimize latency.
Name of the project. Assigning a descriptive name to each project can be helpful when managing multiple projects.
* **Node configuration**
Configure each node's instance resources and numbers according to your actual needs.
-To learn more about the nodes, see the [architecture of RisingWave](/docs/current/architecture/).
+To learn more about the nodes, see the [architecture of RisingWave](/reference/architecture).

## Understanding nodes in RisingWave

@@ -91,7 +91,7 @@ RisingWave projects consist of three types of nodes, each serving a distinct rol
4. **Meta node**: Takes charge of managing the metadata of compute and compact nodes and orchestrating operations across the system.
5. **ETCD**: A distributed key-value store that provides a reliable way to store data across a project of machines. This node cannot be scaled manually after the project is created.

-For the architecture of RisingWave, see [RisingWave architecture](/docs/current/architecture/).
+For the architecture of RisingWave, see [RisingWave architecture](/reference/architecture).

## Pricing

2 changes: 1 addition & 1 deletion cloud/connect-to-a-project.mdx
@@ -26,7 +26,7 @@ To connect with any local clients, follow the steps below:
* Alternatively, you can create a new user. RisingWave Cloud offers `psql`, `Connection String`, `Parameters Only`, `Java`, `Node.js`, `Python`, and `Golang` as connection options.

<Note>
-To connect via `psql`, you need to [Install psql](/docs/current/install-psql-without-postgresql/) in your environment. `psql` is a command-line interface for interacting with PostgreSQL databases, including RisingWave.
+To connect via `psql`, you need to [Install psql](/deploy/install-psql-without-postgresql) in your environment. `psql` is a command-line interface for interacting with PostgreSQL databases, including RisingWave.
</Note>

3. You may need to set up a CA certificate to enable SSL connections. See the instructions displayed on the portal for more details.
2 changes: 1 addition & 1 deletion cloud/create-a-connection.mdx
@@ -63,4 +63,4 @@ We aim to automate this process in the future to make it even easier.

Now, you can create a source or sink with the PrivateLink connection using SQL.

-For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/docs/current/ingest-from-kafka/#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/docs/current/create-sink-kafka/#create-sink-with-privatelink-connection).
+For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/integrations/sources/kafka#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/integrations/destinations/apache-kafka#create-sink-with-privatelink-connection).
2 changes: 1 addition & 1 deletion cloud/create-a-database-user.mdx
@@ -7,5 +7,5 @@ sidebarTitle: Create a user

* You can create a database user when [connecting to a project](/cloud/connect-to-a-project/).
* You can click **Create user** in the **Users** tab on the [project details page](/cloud/check-status-and-metrics/#check-project-details) to create a new user.
-* You can run the [CREATE USER](/docs/current/sql-create-user/) command to create a new user after [connecting to a project](/cloud/connect-to-a-project/) using the console or terminal.
+* You can run the [CREATE USER](/sql/commands/sql-create-user) command to create a new user after [connecting to a project](/cloud/connect-to-a-project/) using the console or terminal.
Ensure that you have logged in to the project with a user that has the `CREATEUSER` privilege. A super user has all privileges, including `CREATEUSER`.