From 0107005d0ba464939c936548ea86348715d274c9 Mon Sep 17 00:00:00 2001 From: WanYixian Date: Mon, 25 Nov 2024 16:11:34 +0800 Subject: [PATCH] save work --- changelog/product-lifecycle.mdx | 66 +++++++++---------- changelog/release-notes.mdx | 8 +-- client-libraries/go.mdx | 2 +- client-libraries/java.mdx | 2 +- client-libraries/nodejs.mdx | 12 ++-- client-libraries/python.mdx | 2 +- client-libraries/ruby.mdx | 2 +- cloud/choose-a-project-plan.mdx | 4 +- cloud/develop-overview.mdx | 4 +- cloud/manage-sinks.mdx | 2 +- cloud/project-byoc.mdx | 2 +- delivery/overview.mdx | 22 ++++--- deploy/risingwave-kubernetes.mdx | 2 +- faq/faq-using-risingwave.mdx | 2 +- get-started/intro.mdx | 2 +- get-started/rw-premium-edition-intro.mdx | 8 +-- .../change-data-capture-with-risingwave.mdx | 2 +- ingestion/overview.mdx | 4 +- ingestion/supported-sources-and-formats.mdx | 10 +-- integrations/destinations/apache-iceberg.mdx | 2 +- integrations/destinations/apache-kafka.mdx | 2 +- integrations/destinations/apache-pulsar.mdx | 2 +- integrations/destinations/aws-kinesis.mdx | 2 +- integrations/destinations/aws-s3.mdx | 2 +- integrations/destinations/azure-blob.mdx | 2 +- integrations/destinations/clickhouse.mdx | 4 +- integrations/destinations/delta-lake.mdx | 2 +- .../destinations/google-cloud-storage.mdx | 2 +- integrations/destinations/mysql.mdx | 4 +- integrations/destinations/postgresql.mdx | 4 +- integrations/destinations/webhdfs.mdx | 2 +- integrations/sources/amazon-msk.mdx | 2 +- integrations/sources/apache-iceberg.mdx | 2 +- integrations/sources/confluent-cloud.mdx | 2 +- integrations/sources/instaclustr-kafka.mdx | 2 +- integrations/sources/mysql-cdc.mdx | 2 +- integrations/sources/overview.mdx | 2 +- integrations/sources/postgresql-cdc.mdx | 2 +- integrations/sources/supabase-cdc.mdx | 2 +- .../visualization/beekeeper-studio.mdx | 2 +- integrations/visualization/dbeaver.mdx | 2 +- integrations/visualization/grafana.mdx | 2 +- integrations/visualization/looker.mdx | 2 +- integrations/visualization/metabase.mdx | 2 +- integrations/visualization/superset.mdx | 2 +- operate/alter-streaming.mdx | 2 +- operate/meta-backup.mdx | 2 +- performance/performance-best-practices.mdx | 2 +- .../maintain-wide-table-with-table-sinks.mdx | 4 +- processing/sql/temporal-filters.mdx | 2 +- reference/key-concepts.mdx | 2 +- sql/commands/overview.mdx | 6 +- sql/commands/sql-as-changelog.mdx | 2 +- sql/commands/sql-begin.mdx | 2 +- sql/commands/sql-commit.mdx | 2 +- sql/commands/sql-create-mv.mdx | 2 +- sql/commands/sql-create-secret.mdx | 2 +- sql/commands/sql-create-sink.mdx | 18 ++--- sql/commands/sql-create-source.mdx | 4 +- sql/commands/sql-drop-secret.mdx | 2 +- sql/data-types/overview.mdx | 2 +- sql/functions/window-functions.mdx | 2 +- sql/query-syntax/generated-columns.mdx | 2 +- sql/query-syntax/group-by-clause.mdx | 2 +- sql/query-syntax/value-exp.mdx | 4 +- sql/system-catalogs/rw-catalog.mdx | 2 +- sql/udfs/use-udfs-in-python.mdx | 2 +- 67 files changed, 146 insertions(+), 140 deletions(-) diff --git a/changelog/product-lifecycle.mdx b/changelog/product-lifecycle.mdx index 68168e13..e70447d6 100644 --- a/changelog/product-lifecycle.mdx +++ b/changelog/product-lifecycle.mdx @@ -22,38 +22,38 @@ Below is a list of all features in the public preview phase: | Feature name | Start version | | :-- | :-- | -| [Shared source](/sql/commands/sql-create-source/#shared-source) | 2.1 | -| [ASOF join](/docs/current/query-syntax-join-clause/#asof-joins) | 2.1 | -| [Partitioned Postgres CDC 
table](/docs/current/ingest-from-postgres-cdc/) | 2.1 | -| [Map type](/docs/current/data-type-map/) | 2.0 | -| [Azure Blob sink](/docs/current/sink-to-azure-blob/) | 2.0 | -| [Approx percentile](/docs/current/sql-function-aggregate/#approx_percentile) | 2.0 | -| [Auto schema change in MySQL CDC](/docs/current/ingest-from-mysql-cdc/#automatically-change-schema) | 2.0 | -| [SQL Server CDC source](/docs/current/ingest-from-sqlserver-cdc/) | 2.0 | -| [Sink data in parquet format](/docs/current/data-delivery/#sink-data-in-parquet-format) | 2.0 | -| [Time travel queries](/docs/current/time-travel-queries/) | 2.0 | -| [Manage secrets](/docs/current/manage-secrets/) | 2.0 | -| [Amazon DynamoDB sink](../integrations/destinations/amazon-dynamodb) | 1.10 | -| Auto-map upstream table schema in [MySQL](/docs/current/ingest-from-mysql-cdc/#automatically-map-upstream-table-schema) and [PostgreSQL](/docs/current/ingest-from-postgres-cdc/#automatically-map-upstream-table-schema) | 1.10 | -| [Version column](/docs/current/sql-create-table/) | 1.9 | -| [Snowflake sink](/docs/current/sink-to-snowflake/) | 1.9 | -| [Subscription](/docs/current/subscription/) | 1.9 | -| [RisingWave as PostgreSQL FDW](/docs/current/risingwave-as-postgres-fdw/) | 1.9 | -| [Iceberg source](/docs/current/ingest-from-iceberg/) | 1.8 | -| [Google BigQuery sink](/docs/current/sink-to-bigquery/) | 1.4 | -| [SET BACKGROUND\_DDL command](/docs/current/sql-set-background-ddl/) | 1.3 | -| [Decouple sinks](/docs/current/data-delivery/#sink-decoupling) | 1.3 | -| [Pulsar sink](/docs/current/sink-to-pulsar/) | 1.3 | -| [Cassandra sink](/docs/current/sink-to-cassandra/) | 1.2 | -| [Elasticsearch sink](/docs/current/sink-to-elasticsearch/) | 1.2 | -| [NATS sink](/docs/current/sink-to-nats/) | 1.2 | -| [NATS source](/docs/current/ingest-from-nats/) | 1.2 | -| [Append-only tables](/docs/current/sql-create-table/) | 1.1 | -| [Emit on window close](/docs/current/emit-on-window-close/) | 1.1 | -| [Read-only transactions](/docs/current/sql-start-transaction/) | 1.1 | -| [AWS Kinesis sink](/docs/current/sink-to-aws-kinesis/) | 1.0 | -| [CDC Citus source](/docs/current/ingest-from-citus-cdc/) | 0.19 | -| [Iceberg sink](/docs/current/sink-to-iceberg/) | 0.18 | -| [Pulsar source](/docs/current/ingest-from-pulsar/) | 0.1 | +| [Shared source](/sql/commands/sql-create-source#shared-source) | 2.1 | +| [ASOF join](/processing/sql/joins#asof-joins) | 2.1 | +| [Partitioned Postgres CDC table](/integrations/sources/postgresql-cdc#ingest-data-from-a-partitioned-table) | 2.1 | +| [Map type](/sql/data-types/map-type) | 2.0 | +| [Azure Blob sink](/integrations/destinations/azure-blob) | 2.0 | +| [Approx percentile](/sql/functions/aggregate#approx-percentile) | 2.0 | +| [Auto schema change in MySQL CDC](/integrations/sources/mysql-cdc#automatically-change-schema) | 2.0 | +| [SQL Server CDC source](/integrations/sources/sql-server-cdc) | 2.0 | +| [Sink data in parquet encode](/delivery/overview#sink-data-in-parquet-or-json-encode) | 2.0 | +| [Time travel queries](/processing/time-travel-queries) | 2.0 | +| [Manage secrets](/operate/manage-secrets) | 2.0 | +| [Amazon DynamoDB sink](/integrations/destinations/amazon-dynamodb) | 1.10 | +| Auto-map upstream table schema in [MySQL](/integrations/sources/mysql-cdc#automatically-map-upstream-table-schema) and [PostgreSQL](/integrations/sources/postgresql-cdc#automatically-map-upstream-table-schema) | 1.10 | +| [Version column](/sql/commands/sql-create-table#pk-conflict-behavior) | 1.9 | +| [Snowflake 
sink](/integrations/destinations/snowflake) | 1.9 | +| [Subscription](/delivery/subscription) | 1.9 | +| [RisingWave as PostgreSQL FDW](/delivery/risingwave-as-postgres-fdw) | 1.9 | +| [Iceberg source](/integrations/sources/apache-iceberg) | 1.8 | +| [Google BigQuery sink](/integrations/destinations/bigquery) | 1.4 | +| [SET BACKGROUND\_DDL command](/sql/commands/sql-set-background-ddl) | 1.3 | +| [Decouple sinks](/delivery/overview#sink-decoupling) | 1.3 | +| [Pulsar sink](/integrations/destinations/apache-pulsar) | 1.3 | +| [Cassandra sink](/integrations/destinations/cassandra-or-scylladb) | 1.2 | +| [Elasticsearch sink](/integrations/destinations/elasticsearch) | 1.2 | +| [NATS sink](/integrations/destinations/nats-and-nats-jetstream) | 1.2 | +| [NATS source](/integrations/sources/nats-jetstream) | 1.2 | +| [Append-only tables](/sql/commands/sql-create-table#parameters) | 1.1 | +| [Emit on window close](/processing/emit-on-window-close) | 1.1 | +| [Read-only transactions](/sql/commands/sql-start-transaction) | 1.1 | +| [AWS Kinesis sink](/integrations/destinations/aws-kinesis) | 1.0 | +| [CDC Citus source](/integrations/sources/citus-cdc) | 0.19 | +| [Iceberg sink](/integrations/destinations/apache-iceberg) | 0.18 | +| [Pulsar source](/integrations/sources/pulsar) | 0.1 | This table will be updated regularly to reflect the latest status of features as they progress through the release stages. diff --git a/changelog/release-notes.mdx b/changelog/release-notes.mdx index a3806a61..36e9f7c3 100644 --- a/changelog/release-notes.mdx +++ b/changelog/release-notes.mdx @@ -897,7 +897,7 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c ## Installation -* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/docs/current/get-started/#install-and-start-risingwave). +* Now, you can easily install RisingWave on your local machine with Homebrew by running `brew install risingwave`. See [Run RisingWave](/get-started/quickstart#install-and-start-risingwave). ## Administration @@ -1054,9 +1054,9 @@ See the **Full Changelog** [here](https://github.com/risingwavelabs/risingwave/c ## Connectors -* Adds a new parameter `match_pattern` to the S3 connector. With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/docs/current/ingest-from-s3/). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565) -* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133) -* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/docs/current/ingest-from-mysql-cdc/). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133) +* Adds a new parameter `match_pattern` to the S3 connector. 
With the new parameter, users can specify the pattern to filter files that they want to ingest from S3 buckets. For documentation updates, see [Ingest data from S3 buckets](/integrations/sources/s3). [#7565](https://github.com/risingwavelabs/risingwave/pull/7565) +* Adds the PostgreSQL CDC connector. Users can use this connector to ingest data and CDC events from PostgreSQL directly. For documentation updates, see [Ingest data from PostgreSQL CDC](/integrations/sources/postgresql-cdc). [#6869](https://github.com/risingwavelabs/risingwave/pull/6869), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133) +* Adds the MySQL CDC connector. Users can use this connector to ingest data and CDC events from MySQL directly. For documentation updates, see [Ingest data from MySQL CDC](/integrations/sources/mysql-cdc). [#6689](https://github.com/risingwavelabs/risingwave/pull/6689), [#6345](https://github.com/risingwavelabs/risingwave/pull/6345), [#6481](https://github.com/risingwavelabs/risingwave/pull/6481), [#7133](https://github.com/risingwavelabs/risingwave/pull/7133) * Adds the JDBC sink connector, with which users can sink data to MySQL, PostgreSQL, or other databases that are compliant with JDBC. [#6493](https://github.com/risingwavelabs/risingwave/pull/6493) * Add new parameters to the Kafka sink connector. * `force_append_only` : Specifies whether to force a sink to be append-only. [#7922](https://github.com/risingwavelabs/risingwave/pull/7922) diff --git a/client-libraries/go.mdx b/client-libraries/go.mdx index 23b09c7a..391d1373 100644 --- a/client-libraries/go.mdx +++ b/client-libraries/go.mdx @@ -9,7 +9,7 @@ In this guide, we use the [`pgx` driver](https://github.com/jackc/pgx) to connec ## Run RisingWave -To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx). +To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart). ## Install the `pgx` driver diff --git a/client-libraries/java.mdx b/client-libraries/java.mdx index 0d10b413..5608276b 100644 --- a/client-libraries/java.mdx +++ b/client-libraries/java.mdx @@ -9,7 +9,7 @@ In this guide, we use the [PostgreSQL JDBC](https://jdbc.postgresql.org/) driver ## Run RisingWave -To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx). +To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart). > You do not need to connect to RisingWave at this stage. ## Download the PostgreSQL JDBC driver diff --git a/client-libraries/nodejs.mdx b/client-libraries/nodejs.mdx index 25840828..e99df00f 100644 --- a/client-libraries/nodejs.mdx +++ b/client-libraries/nodejs.mdx @@ -9,7 +9,7 @@ In this guide, we use the [Node.js pg driver](https://www.npmjs.com/package/pg) ## Run RisingWave -To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx). +To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart). ## Install npm @@ -19,11 +19,11 @@ npm install pg ## Connect to RisingWave -:::note + You can use either a client or a connection pool to connect to RisingWave. If you are working on a web application that makes frequent queries, we recommend that you use a connection pool. The code examples in this topic use connection pools. -::: + Connecting to RisingWave and running a query is normally done together. Therefore, we include a basic query in the code. Replace it with the query that you want to run. 
@@ -51,7 +51,7 @@ start().catch(console.error); ## Create a source -The code below creates a source `walk` with the [`datagen`](/ingest/ingest-from-datagen.md) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches. +The code below creates a source `walk` with the [`datagen`](/ingestion/generate-test-data) connector. The `datagen` connector is used to generate mock data. The `walk` source consists of two columns, `distance` and `duration`, which respectively represent the distance and the duration of a walk. The source is a simplified version of the data that is tracked by smart watches. ```js const { Pool } = require('pg') @@ -85,11 +85,11 @@ const start = async () => { start().catch(console.error); ``` -:::note + All the code examples in this guide include a section for connecting to RisingWave. If you run multiple queries within one connection session, you do not need to repeat the connection code. -::: + ## Create a materialized view diff --git a/client-libraries/python.mdx b/client-libraries/python.mdx index f634b6bb..3f5dab87 100644 --- a/client-libraries/python.mdx +++ b/client-libraries/python.mdx @@ -13,7 +13,7 @@ In this section, we use the [`psycopg2`](https://pypi.org/project/psycopg2/) dri ### Run RisingWave -To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx). +To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart). ### Install the `psgcopg2` driver diff --git a/client-libraries/ruby.mdx b/client-libraries/ruby.mdx index 5504f1fe..165cbe5d 100644 --- a/client-libraries/ruby.mdx +++ b/client-libraries/ruby.mdx @@ -8,7 +8,7 @@ In this guide, we use the [`ruby-pg`](https://github.com/ged/ruby-pg) driver to ## Run RisingWave -To learn about how to run RisingWave, see [Run RisingWave](../get-started/quickstart.mdx). +To learn about how to run RisingWave, see [Run RisingWave](/get-started/quickstart). ## Install the `ruby-pg` driver diff --git a/cloud/choose-a-project-plan.mdx b/cloud/choose-a-project-plan.mdx index d0ad11ef..4a4625dc 100644 --- a/cloud/choose-a-project-plan.mdx +++ b/cloud/choose-a-project-plan.mdx @@ -79,7 +79,7 @@ You can choose the availability region closest to you to minimize latency. Name of the project. Assigning a descriptive name to each project can be helpful when managing multiple projects. * **Node configuration** Configure each node's instance resources and numbers according to your actual needs. -To learn more about the nodes, see the [architecture of RisingWave](/docs/current/architecture/). +To learn more about the nodes, see the [architecture of RisingWave](/reference/architecture). ## Understanding nodes in RisingWave @@ -91,7 +91,7 @@ RisingWave projects consist of three types of nodes, each serving a distinct rol 4. **Meta node**: Takes charge of managing the metadata of compute and compact nodes and orchestrating operations across the system. 5. **ETCD**: A distributed key-value store that provides a reliable way to store data across a project of machines. This node cannot be scaled manually after the project is created. -For the architecture of RisingWave, see [RisingWave architecture](/docs/current/architecture/). +For the architecture of RisingWave, see [RisingWave architecture](/reference/architecture). 
## Pricing diff --git a/cloud/develop-overview.mdx b/cloud/develop-overview.mdx index e06c3eff..9efd8e46 100644 --- a/cloud/develop-overview.mdx +++ b/cloud/develop-overview.mdx @@ -50,7 +50,7 @@ Select the version of the corresponding docs when using the RisingWave user docs See how RisingWave can integrate with your existing data stack. Vote for your favorite data tools and streaming services to help us prioritize the integration development. Connect to and ingest data from external sources such as databases and message brokers. See supported data sources. - Stream processed data out of RisingWave to message brokers and databases. See supported data destinations. + Stream processed data out of RisingWave to message brokers and databases. See supported data destinations. ### Process data with RisingWave @@ -119,7 +119,7 @@ Continue to learn about RisingWave.

-Architecture +Architecture

RisingWave vs. Apache Flink diff --git a/cloud/manage-sinks.mdx b/cloud/manage-sinks.mdx index 0c60cee9..d902d906 100644 --- a/cloud/manage-sinks.mdx +++ b/cloud/manage-sinks.mdx @@ -3,7 +3,7 @@ title: "Manage sinks" description: "To stream data out of RisingWave, you must create a sink. A sink refers to an external target that you can send data to. You can deliver data to downstream systems via our sink connectors." --- -For the complete list of supported sink connectors and data formats, see [Data delivery](/docs/current/data-delivery/) in the RisingWave documentation. +For the complete list of supported sink connectors and data formats, see [Data delivery](/delivery/overview) in the RisingWave documentation. ## Create a sink diff --git a/cloud/project-byoc.mdx b/cloud/project-byoc.mdx index 4e5ab6a8..a0a23d82 100644 --- a/cloud/project-byoc.mdx +++ b/cloud/project-byoc.mdx @@ -21,7 +21,7 @@ Follow the steps below to create your own cloud environment. When you run the command `rwc byoc apply --name xxx`, it will deploy some resources in your AWS/GCP/Azure environment, such as AWS S3/Google Cloud Storage/Azure Blob Storage and EKS/GKE/AKS clusters. Please do not modify the configuration of these resources. If you encounter any issues during this process, please contact our [support team](mailto:cloud-support@risingwave-labs.com). -5. Click **Next** to continue the configuration of cluster size and nodes. To learn more about the nodes, see the [architecture of RisingWave](/docs/current/architecture/). +5. Click **Next** to continue the configuration of cluster size and nodes. To learn more about the nodes, see the [architecture of RisingWave](/reference/architecture). 6. Click **Next**, name your cluster, and execute the command that pops up to establish a BYOC cluster in your environment. Once the cluster is successfully created, you can manage it through the portal just like hosted clusters. diff --git a/delivery/overview.mdx b/delivery/overview.mdx index bab9f6a9..2e8ee9cd 100644 --- a/delivery/overview.mdx +++ b/delivery/overview.mdx @@ -13,11 +13,11 @@ Currently, RisingWave supports the following sink connectors: * Apache Doris sink connector (`connector = 'doris'`) With this connector, you can sink data from RisingWave to Apache Doris. For details about the syntax and parameters, see [Sink data to Apache Doris](/docs/current/sink-to-doris/). * Apache Iceberg sink connector (`connector = 'iceberg'`) -With this connector, you can sink data from RisingWave to Apache Iceberg. For details about the syntax and parameters, see [Sink data to Apache Iceberg](/docs/current/sink-to-iceberg/). +With this connector, you can sink data from RisingWave to Apache Iceberg. For details about the syntax and parameters, see [Sink data to Apache Iceberg](/integrations/destinations/apache-iceberg). * AWS Kinesis sink connector (`connector = 'kinesis'`) -With this connector, you can sink data from RisingWave to AWS Kinesis. For details about the syntax and parameters, see [Sink data to AWS Kinesis](/docs/current/sink-to-aws-kinesis/). +With this connector, you can sink data from RisingWave to AWS Kinesis. For details about the syntax and parameters, see [Sink data to AWS Kinesis](/integrations/destinations/aws-kinesis). * Cassandra and ScyllaDB sink connector (`connector = 'cassandra'`) -With this connector, you can sink data from RisingWave to Cassandra or ScyllaDB. For details about the syntax and parameters, see [Sink data to Cassandra or ScyllaDB](/docs/current/sink-to-cassandra/). 
+With this connector, you can sink data from RisingWave to Cassandra or ScyllaDB. For details about the syntax and parameters, see [Sink data to Cassandra or ScyllaDB](/integrations/destinations/cassandra-or-scylladb). * ClickHouse sink connector (`connector = 'clickhouse'`) With this connector, you can sink data from RisingWave to ClickHouse. For details about the syntax and parameters, see [Sink data to ClickHouse](/docs/current/sink-to-clickhouse/). * CockroachDB sink connector (`connector = 'jdbc'`) @@ -25,9 +25,9 @@ With this connector, you can sink data from RisingWave to CockroachDB. For detai * Delta Lake sink connector (`connector = 'deltalake'`) With this connector, you can sink data from RisingWave to Delta Lake. For details about the syntax and parameters, see [Sink data to Delta Lake](/docs/current/sink-to-delta-lake/). * Elasticsearch sink connector (`connector = 'elasticsearch'`) -With this connector, you can sink data from RisingWave to Elasticsearch. For details about the syntax and parameters, see [Sink data to Elasticsearch](/docs/current/sink-to-elasticsearch/). +With this connector, you can sink data from RisingWave to Elasticsearch. For details about the syntax and parameters, see [Sink data to Elasticsearch](/integrations/destinations/elasticsearch). * Google BigQuery sink connector (`connector = 'bigquery'`) -With this connector, you can sink data from RisingWave to Google BigQuery. For details about the syntax and parameters, see [Sink data to Google BigQuery](/docs/current/sink-to-bigquery/). +With this connector, you can sink data from RisingWave to Google BigQuery. For details about the syntax and parameters, see [Sink data to Google BigQuery](/integrations/destinations/bigquery). * Google Pub/Sub sink connector (`connector = 'google_pubsub'`) With this connector, you can sink data from RisingWave to Google Pub/Sub. For details about the syntax and parameters, see [Sink data to Google Pub/Sub](/docs/current/sink-to-google-pubsub/). * JDBC sink connector for MySQL, PostgreSQL, or TiDB (`connector = 'jdbc'`) @@ -37,13 +37,13 @@ With this connector, you can sink data from RisingWave to Kafka topics. For deta * MQTT sink connector (`connector = 'mqtt'`) With this connector, you can sink data from RisingWave to MQTT topics. For details about the syntax and parameters, see [Sink data to MQTT](/docs/current/sink-to-mqtt/). * NATS sink connector (`connector = 'nats'`) -With this connector, you can sink data from RisingWave to NATS. For details about the syntax and parameters, see [Sink data to NATS](/docs/current/sink-to-nats/). +With this connector, you can sink data from RisingWave to NATS. For details about the syntax and parameters, see [Sink data to NATS](/integrations/destinations/nats-and-nats-jetstream). * Pulsar sink connector (`connector = 'pulsar'`) -With this connector, you can sink data from RisingWave to Pulsar. For details about the syntax and parameters, see [Sink data to Pulsar](/docs/current/sink-to-pulsar/). +With this connector, you can sink data from RisingWave to Pulsar. For details about the syntax and parameters, see [Sink data to Pulsar](/integrations/destinations/apache-pulsar). * Redis sink connector (`connector = 'redis'`) With this connector, you can sink data from RisingWave to Redis. For details about the syntax and parameters, see [Sink data to Redis](/docs/current/sink-to-redis/). * Snowflake sink connector (`connector = 'snowflake'`) -With this connector, you can sink data from RisingWave to Snowflake. 
For details about the syntax and parameters, see [Sink data to Snowflake](/docs/current/sink-to-snowflake/). +With this connector, you can sink data from RisingWave to Snowflake. For details about the syntax and parameters, see [Sink data to Snowflake](/integrations/destinations/snowflake). * StarRocks sink connector (`connector = 'starrocks'`) With this connector, you can sink data from RisingWave to StarRocks. For details about the syntax and parameters, see [Sink data to StarRocks](/docs/current/sink-to-starrocks/). * Microsoft SQL Server sink connector(`connector = 'sqlserver'`) @@ -55,6 +55,12 @@ Typically, sinks in RisingWave operate in a blocking manner. This means that if Sink decoupling introduces a buffering queue between a RisingWave sink and the downstream system. This buffering mechanism helps maintain the stability and performance of the RisingWave instance, even when the downstream system is temporarily slow or unavailable. + +**PUBLIC PREVIEW** + +This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/product-lifecycle/#features-in-the-public-preview-stage). + + The `sink_decouple` session variable can be specified to enable or disable sink decoupling. The default value for the session variable is `default`. To enable sink decoupling for all sinks created in the sessions, set `sink_decouple` as `true` or `enable`. diff --git a/deploy/risingwave-kubernetes.mdx b/deploy/risingwave-kubernetes.mdx index be451032..2d7855b5 100644 --- a/deploy/risingwave-kubernetes.mdx +++ b/deploy/risingwave-kubernetes.mdx @@ -555,4 +555,4 @@ psql -h ${RISINGWAVE_HOST} -p ${RISINGWAVE_PORT} -d dev -U root -Now you can ingest and transform streaming data. See [Quick start](/docs/current/get-started/) for details. +Now you can ingest and transform streaming data. See [Quick start](/get-started/quickstart) for details. diff --git a/faq/faq-using-risingwave.mdx b/faq/faq-using-risingwave.mdx index afc96857..44d0156d 100644 --- a/faq/faq-using-risingwave.mdx +++ b/faq/faq-using-risingwave.mdx @@ -47,7 +47,7 @@ By continuously improving the reserved memory feature, we strive to offer a more The execution time for the `CREATE MATERIALIZED VIEW` statement can vary based on several factors. Here are two common reasons: -1. **Backfilling of historical data**: RisingWave ensures consistent snapshots across materialized views (MVs). So when a new MV is created, it backfills all historical data from the upstream MV or tables and calculate them, which takes some time. And the created DDL statement will only end when the backfill ends. You can run `SHOW JOBS;` in SQL to check the DDL progress. If you want the create statement to not wait for the process to finish and not block the session, you can execute `SET BACKGROUND_DDL=true;` before running the `CREATE MATERIALIZED VIEW` statement. See details in [SET BACKGROUND\_DDL](/docs/current/sql-set-background-ddl/). But please notice that the newly created MV is still invisible in the catalog until the end of backfill when `BACKGROUND_DDL=true`. +1. **Backfilling of historical data**: RisingWave ensures consistent snapshots across materialized views (MVs). 
So when a new MV is created, it backfills and computes all historical data from the upstream MVs or tables, which takes some time, and the `CREATE` statement does not return until the backfill finishes. You can run `SHOW JOBS;` in SQL to check the DDL progress. If you do not want the statement to block the session while the backfill is running, execute `SET BACKGROUND_DDL=true;` before running the `CREATE MATERIALIZED VIEW` statement. See details in [SET BACKGROUND\_DDL](/sql/commands/sql-set-background-ddl). Note that with `BACKGROUND_DDL=true`, the newly created MV remains invisible in the catalog until the backfill ends.
2. **High cluster latency**: If the cluster experiences high latency, it may take longer to apply changes to the streaming graph. If the `Progress` in the `SHOW JOBS;` result stays at 0.0%, high latency could be the cause. See details in [Troubleshoot high latency](/docs/current/troubleshoot-high-latency/)

diff --git a/get-started/intro.mdx b/get-started/intro.mdx
index 2b0045ab..ddbb74d4 100644
--- a/get-started/intro.mdx
+++ b/get-started/intro.mdx
@@ -74,7 +74,7 @@ RisingWave aims to help simplify event-driven architecture. You can think of Ris
- + diff --git a/get-started/rw-premium-edition-intro.mdx b/get-started/rw-premium-edition-intro.mdx index 3dd36074..69fa5f69 100644 --- a/get-started/rw-premium-edition-intro.mdx +++ b/get-started/rw-premium-edition-intro.mdx @@ -18,17 +18,17 @@ RisingWave Premium 1.0 is the first major release of this new edition with sever ### SQL and security - + ### Schema management -* Automatic schema mapping to the source tables for [PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/#automatically-map-upstream-table-schema) and [MySQL CDC](/docs/current/ingest-from-mysql-cdc/#automatically-map-upstream-table-schema) -* [Automatic schema change for MySQL CDC](/docs/current/ingest-from-mysql-cdc/#automatically-change-schema) +* Automatic schema mapping to the source tables for [PostgreSQL CDC](/integrations/sources/postgresql-cdc#automatically-map-upstream-table-schema) and [MySQL CDC](/integrations/sources/mysql-cdc#automatically-map-upstream-table-schema) +* [Automatic schema change for MySQL CDC](/integrations/sources/mysql-cdc#automatically-change-schema) * [AWS Glue Schema Registry](/docs/current/ingest-from-kafka/#read-schemas-from-aws-glue-schema-registry) ### Connectors - + For users who are already using these features in 1.9.x or earlier versions, rest assured that the functionality of these features will be intact if you stay on the version. If you choose to upgrade to v2.0 or later versions, an error will show up to indicate you need a license to use the features. diff --git a/ingestion/change-data-capture-with-risingwave.mdx b/ingestion/change-data-capture-with-risingwave.mdx index 1cc4e9c3..83f5bb22 100644 --- a/ingestion/change-data-capture-with-risingwave.mdx +++ b/ingestion/change-data-capture-with-risingwave.mdx @@ -7,6 +7,6 @@ mode: wide You can use event streaming systems like Apache Kafka, Pulsar, or Kinesis to stream changes from MySQL, PostgreSQL, and TiDB to RisingWave. In this case, you will need an additional CDC tool to stream the changes from the database and specify the corresponding formats when ingesting the streams into RisingWave. -RisingWave also provides native MySQL and PostgreSQL CDC connectors. With these CDC connectors, you can ingest CDC data from these databases directly, without setting up additional services like Kafka. For complete step-to-step guides about using the native CDC connector to ingest MySQL and PostgreSQL data, see [Ingest data from MySQL](/docs/current/ingest-from-mysql-cdc/) and [Ingest data from PostgreSQL](/docs/current/ingest-from-postgres-cdc/). This topic only describes the configurations for using RisingWave to ingest CDC data from an event streaming system. +RisingWave also provides native MySQL and PostgreSQL CDC connectors. With these CDC connectors, you can ingest CDC data from these databases directly, without setting up additional services like Kafka. For complete step-to-step guides about using the native CDC connector to ingest MySQL and PostgreSQL data, see [Ingest data from MySQL](/integrations/sources/mysql-cdc) and [Ingest data from PostgreSQL](/integrations/sources/postgresql-cdc). This topic only describes the configurations for using RisingWave to ingest CDC data from an event streaming system. For the supported sources and corresponding formats, see [Supported sources and formats](/docs/current/supported-sources-and-formats/). 
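To make the event-streaming CDC path above concrete, here is a minimal, hypothetical sketch of a table that ingests Debezium-formatted change events from a Kafka topic. The table name, topic, broker address, and columns are illustrative placeholders, not values from this changeset; adjust them to your setup.

```sql
-- Hypothetical example: ingest Debezium CDC events for an "orders" table via Kafka.
-- Topic, broker address, and columns are placeholders; adjust them to your deployment.
CREATE TABLE orders_cdc (
  order_id INT PRIMARY KEY,
  customer_id INT,
  order_status VARCHAR
) WITH (
  connector = 'kafka',
  topic = 'mysql.mydb.orders',
  properties.bootstrap.server = 'message_queue:29092'
) FORMAT DEBEZIUM ENCODE JSON;
```

Because Debezium messages carry update and delete events, the target must be a table with a primary key rather than a source.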
diff --git a/ingestion/overview.mdx b/ingestion/overview.mdx index 50cfc66a..0a5e29cf 100644 --- a/ingestion/overview.mdx +++ b/ingestion/overview.mdx @@ -69,7 +69,7 @@ WITH ( The statement will create a streaming job that continuously ingests data from the Kafka topic to the table and the data will be stored in RisingWave's internal storage, which brings the following benefits: 1. **Improved ad-hoc query performance:** When users execute queries such as `SELECT * FROM table_on_kafka`, the query engine will directly access the data from RisingWave's internal storage, eliminating unnecessary network overhead and avoiding read pressure on upstream systems. Additionally, users can create [indexes](/docs/current/indexes/) on the table to accelerate queries. -2. **Allow defining primary keys:** With the help of its internal storage, RisingWave can efficiently maintain primary key constraints. Users can define a primary key on a specific column of the table and define different behaviors for primary key conflicts with [ON CONFLICT clause](/docs/current/sql-create-table/#pk-conflict-behavior). +2. **Allow defining primary keys:** With the help of its internal storage, RisingWave can efficiently maintain primary key constraints. Users can define a primary key on a specific column of the table and define different behaviors for primary key conflicts with [ON CONFLICT clause](/sql/commands/sql-create-table#pk-conflict-behavior). 3. **Ability to handle delete/update changes**: Based on the definition of primary keys, RisingWave can efficiently process upstream synchronized delete and update operations. For systems that synchronize delete/update operations from external systems, such as database's CDC and UPSERT format messages from message queues, we **do not** allow creating a source on it but require a table with connectors. 4. **Stronger consistency guarantee**: When using a table with connectors, all downstream jobs will be guaranteed to have a consistent view of the data persisted in the table; while for source, different jobs may see inconsistent results due to different ingestion speed or data retention in the external system. 5. **Greater flexibility**: Like regular tables, you can use DML statements like [INSERT](/docs/current/sql-insert/), [UPDATE](/docs/current/sql-update/) and [DELETE](/docs/current/sql-delete/) to insert or modify data in tables with connectors, and use [CREATE SINK INTO TABLE](/docs/current/sql-create-sink-into/) to merge other data streams into the table. @@ -78,7 +78,7 @@ The statement will create a streaming job that continuously ingests data from th ### Insert data into tables -You can load data in batch to RisingWave by creating a table ([CREATE TABLE](/docs/current/sql-create-table/)) and then inserting data into it ([INSERT](/docs/current/sql-insert/)). For example, the statement below creates a table `website_visits` and inserts 5 rows of data. +You can load data in batch to RisingWave by creating a table ([CREATE TABLE](/sql/commands/sql-create-table)) and then inserting data into it ([INSERT](/docs/current/sql-insert/)). For example, the statement below creates a table `website_visits` and inserts 5 rows of data. 
```sql CREATE TABLE website_visits ( diff --git a/ingestion/supported-sources-and-formats.mdx b/ingestion/supported-sources-and-formats.mdx index f51e89ea..a6a644e0 100644 --- a/ingestion/supported-sources-and-formats.mdx +++ b/ingestion/supported-sources-and-formats.mdx @@ -14,12 +14,12 @@ To ingest data in formats marked with "T", you need to create tables (with conne | :------------ | :------------ | :------------------- | | [Kafka](/docs/current/ingest-from-kafka/) | 3.1.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Debezium AVRO](#debezium-avro) (T), [DEBEZIUM\_MONGO\_JSON](#debezium-mongo-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T), [Upsert JSON](#upsert-json) (T), [Upsert AVRO](#upsert-avro) (T), [Bytes](#bytes) | | [Redpanda](/docs/current/ingest-from-redpanda/) | Latest | [Avro](#avro), [JSON](#json), [protobuf](#protobuf) | -| [Pulsar](/docs/current/ingest-from-pulsar/) | 2.8.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | +| [Pulsar](/integrations/sources/pulsar) | 2.8.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | | [Kinesis](/docs/current/ingest-from-kinesis/) | Latest | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | -| [PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/) | 10, 11, 12, 13, 14 | [Debezium JSON](#debezium-json) (T) | -| [MySQL CDC](/docs/current/ingest-from-mysql-cdc/) | 5.7, 8.0 | [Debezium JSON](#debezium-json) (T) | +| [PostgreSQL CDC](/integrations/sources/postgresql-cdc) | 10, 11, 12, 13, 14 | [Debezium JSON](#debezium-json) (T) | +| [MySQL CDC](/integrations/sources/mysql-cdc) | 5.7, 8.0 | [Debezium JSON](#debezium-json) (T) | | [CDC via Kafka](/docs/current/ingest-from-cdc/) | [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | | -| [Amazon S3](/docs/current/ingest-from-s3/) | Latest | [JSON](#json), CSV | +| [Amazon S3](/integrations/sources/s3) | Latest | [JSON](#json), CSV | | [Load generator](/docs/current/ingest-from-datagen/) | Built-in | [JSON](#json) | | [Google Pub/Sub](/docs/current/ingest-from-google-pubsub/) | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | | | [Google Cloud Storage](/docs/current/ingest-from-gcs/) | [JSON](#json) | | @@ -53,7 +53,7 @@ ENCODE AVRO ( ) ``` -You can ingest Avro map type into RisingWave [map type](/docs/current/data-type-map/) or jsonb: +You can ingest Avro map type into RisingWave [map type](/sql/data-types/map-type) or jsonb: ```sql FORMAT [ DEBEZIUM | UPSERT | PLAIN ] ENCODE AVRO ( diff --git a/integrations/destinations/apache-iceberg.mdx b/integrations/destinations/apache-iceberg.mdx index 81839525..02e7ee5e 100644 --- a/integrations/destinations/apache-iceberg.mdx +++ b/integrations/destinations/apache-iceberg.mdx @@ -251,7 +251,7 @@ WITH ( ) FORMAT PLAIN ENCODE JSON; ``` -Another option is to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/docs/current/sql-create-table/) . 
+Another option is to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/sql/commands/sql-create-table) . ```sql CREATE TABLE s1_table ( diff --git a/integrations/destinations/apache-kafka.mdx b/integrations/destinations/apache-kafka.mdx index 6057407c..b299beae 100644 --- a/integrations/destinations/apache-kafka.mdx +++ b/integrations/destinations/apache-kafka.mdx @@ -78,7 +78,7 @@ These options should be set in `FORMAT data_format ENCODE data_encode (key = 'va | Field | Notes | | :------------------------ | :-------------------------- | -| data\_format | Data format. Allowed formats:

To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/docs/current/data-delivery/). | +| data\_format | Data format. Allowed formats: To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/delivery/overview). | | data\_encode | Data encode. Allowed encodes: For `UPSERT PROTOBUF` sinks, you must specify `key encode text`, while it remains optional for other format/encode combinations. | | force\_append\_only | If true, forces the sink to be `PLAIN` (also known as append-only), even if it cannot be. | | timestamptz.handling.mode | Controls the timestamptz output format. This parameter specifically applies to append-only or upsert sinks using JSON encoding. | diff --git a/integrations/destinations/apache-pulsar.mdx b/integrations/destinations/apache-pulsar.mdx index 9c58ec36..47031d8f 100644 --- a/integrations/destinations/apache-pulsar.mdx +++ b/integrations/destinations/apache-pulsar.mdx @@ -59,7 +59,7 @@ These options should be set in `FORMAT data_format ENCODE data_encode (key = 'va | Field | Notes | | :------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| data\_format | Data format. Allowed formats:To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/docs/current/data-delivery/). | +| data\_format | Data format. Allowed formats:To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/delivery/overview). | | data\_encode | Data encode. Supported encode: JSON. | | force\_append\_only | If true, forces the sink to be PLAIN (also known as append-only), even if it cannot be. | | timestamptz.handling.mode | Controls the timestamptz output format. This parameter specifically applies to append-only or upsert sinks using JSON encoding. | diff --git a/integrations/destinations/aws-kinesis.mdx b/integrations/destinations/aws-kinesis.mdx index 6d5eacb1..96e87eb5 100644 --- a/integrations/destinations/aws-kinesis.mdx +++ b/integrations/destinations/aws-kinesis.mdx @@ -51,7 +51,7 @@ These options should be set in `FORMAT data_format ENCODE data_encode (key = 'va | Field | Notes | | :---------------------------------- | :--------------------- | -| data\_format | Data format. Allowed formats: To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/docs/current/data-delivery/). | +| data\_format | Data format. Allowed formats: To learn about when to define the primary key if creating an UPSERT sink, see the [Overview](/delivery/overview). | | data\_encode | Data encode. Supported encode: `JSON`. | | force\_append\_only | If `true`, forces the sink to be `PLAIN` (also known as `append-only`), even if it cannot be. | | timestamptz.handling.mode | Controls the timestamptz output format. This parameter specifically applies to append-only or upsert sinks using JSON encoding. 
| diff --git a/integrations/destinations/aws-s3.mdx b/integrations/destinations/aws-s3.mdx index 6d9fabea..28111b7b 100644 --- a/integrations/destinations/aws-s3.mdx +++ b/integrations/destinations/aws-s3.mdx @@ -48,4 +48,4 @@ WITH ( )FORMAT PLAIN ENCODE PARQUET(force_append_only=true); ``` -For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/docs/current/data-delivery/). \ No newline at end of file +For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/delivery/overview). \ No newline at end of file diff --git a/integrations/destinations/azure-blob.mdx b/integrations/destinations/azure-blob.mdx index 440620f9..d681410c 100644 --- a/integrations/destinations/azure-blob.mdx +++ b/integrations/destinations/azure-blob.mdx @@ -52,4 +52,4 @@ WITH ( )FORMAT PLAIN ENCODE PARQUET(force_append_only=true); ``` -For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/docs/current/data-delivery/). +For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/delivery/overview). diff --git a/integrations/destinations/clickhouse.mdx b/integrations/destinations/clickhouse.mdx index d72fa9d4..7dee26f5 100644 --- a/integrations/destinations/clickhouse.mdx +++ b/integrations/destinations/clickhouse.mdx @@ -30,7 +30,7 @@ WITH ( | Parameter Names | Description | | :--------------------------- | :------------------- | -| type | Required. Specify if the sink should be upsert or append-only. If creating an upsert sink, see the [Overview](/docs/current/data-delivery/) on when to define the primary key and [Upsert sinks](#upsert-sinks) on limitations. | +| type | Required. Specify if the sink should be upsert or append-only. If creating an upsert sink, see the [Overview](/delivery/overview) on when to define the primary key and [Upsert sinks](#upsert-sinks) on limitations. | | primary\_key | Optional. A string of a list of column names, separated by commas, that specifies the primary key of the ClickHouse sink. | | clickhouse.url | Required. Address of the ClickHouse server that you want to sink data to. Format: `http://ip:port`. The default port is 8123. | | clickhouse.user | Required. User name for accessing the ClickHouse server. | @@ -92,7 +92,7 @@ WITH ( ) FORMAT PLAIN ENCODE JSON; ``` -Another option is to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/docs/current/sql-create-table/) . +Another option is to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/sql/commands/sql-create-table) . ```sql CREATE TABLE s1_table ( diff --git a/integrations/destinations/delta-lake.mdx b/integrations/destinations/delta-lake.mdx index e2242816..61539350 100644 --- a/integrations/destinations/delta-lake.mdx +++ b/integrations/destinations/delta-lake.mdx @@ -78,7 +78,7 @@ WITH ( ) FORMAT PLAIN ENCODE JSON; ``` -You can also choose to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/docs/current/sql-create-table/). +You can also choose to create an upsert table, which supports in-place updates. For more details on creating a table, see [CREATE TABLE](/sql/commands/sql-create-table). 
```sql CREATE TABLE s1_table (id int, name varchar) diff --git a/integrations/destinations/google-cloud-storage.mdx b/integrations/destinations/google-cloud-storage.mdx index cba12043..e159cbea 100644 --- a/integrations/destinations/google-cloud-storage.mdx +++ b/integrations/destinations/google-cloud-storage.mdx @@ -43,4 +43,4 @@ WITH ( )FORMAT PLAIN ENCODE PARQUET(force_append_only=true); ``` -For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/docs/current/data-delivery/). \ No newline at end of file +For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/delivery/overview). \ No newline at end of file diff --git a/integrations/destinations/mysql.mdx b/integrations/destinations/mysql.mdx index a15d9aa6..d94c049b 100644 --- a/integrations/destinations/mysql.mdx +++ b/integrations/destinations/mysql.mdx @@ -95,7 +95,7 @@ CREATE TABLE personnel ( ### Install and launch RisingWave -To install and start RisingWave locally, see the [Get started](/docs/current/get-started/) guide. We recommend running RisingWave locally for testing purposes. +To install and start RisingWave locally, see the [Get started](/get-started/quickstart) guide. We recommend running RisingWave locally for testing purposes. ### Notes about running RisingWave from binaries If you are running RisingWave locally from binaries and intend to use the native CDC source connectors or the JDBC sink connector, make sure you have [JDK 11](https://openjdk.org/projects/jdk/11/) or later versions installed in your environment. @@ -180,7 +180,7 @@ SELECT * FROM personnel; ## Data type mapping -For the MySQL data type mapping table, see the [Data type mapping table](/docs/current/ingest-from-mysql-cdc/#data-type-mapping) under the Ingest data from MySQL CDC topic. +For the MySQL data type mapping table, see the [Data type mapping table](/integrations/sources/mysql-cdc#data-type-mapping) under the Ingest data from MySQL CDC topic. Additional notes regarding sinking data to MySQL: diff --git a/integrations/destinations/postgresql.mdx b/integrations/destinations/postgresql.mdx index fd655579..ec5cc673 100644 --- a/integrations/destinations/postgresql.mdx +++ b/integrations/destinations/postgresql.mdx @@ -69,7 +69,7 @@ CREATE TABLE target_count ( ### Install and launch RisingWave -To install and start RisingWave locally, see the [Get started](/docs/current/get-started/) guide. We recommend running RisingWave locally for testing purposes. +To install and start RisingWave locally, see the [Get started](/get-started/quickstart) guide. We recommend running RisingWave locally for testing purposes. ### Notes about running RisingWave from binaries @@ -168,7 +168,7 @@ LIMIT 10; ## Data type mapping -For the PostgreSQL data type mapping table, see the [Data type mapping table](/docs/current/ingest-from-postgres-cdc/#data-type-mapping) under the Ingest data from PostgreSQL CDC topic. +For the PostgreSQL data type mapping table, see the [Data type mapping table](/integrations/sources/postgresql-cdc#data-type-mapping) under the Ingest data from PostgreSQL CDC topic. 
Additional notes regarding sinking data to PostgreSQL: diff --git a/integrations/destinations/webhdfs.mdx b/integrations/destinations/webhdfs.mdx index c3468368..6e133cbf 100644 --- a/integrations/destinations/webhdfs.mdx +++ b/integrations/destinations/webhdfs.mdx @@ -39,4 +39,4 @@ WITH ( )FORMAT PLAIN ENCODE PARQUET(force_append_only=true); ``` -For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/docs/current/data-delivery/). \ No newline at end of file +For more information about encode `Parquet` or `JSON`, see [Sink data in parquet or json encode](/delivery/overview). \ No newline at end of file diff --git a/integrations/sources/amazon-msk.mdx b/integrations/sources/amazon-msk.mdx index 12f5aaf3..1e005647 100644 --- a/integrations/sources/amazon-msk.mdx +++ b/integrations/sources/amazon-msk.mdx @@ -153,7 +153,7 @@ After entering messages, you can close the console window or press Ctrl + C to e ### Install and launch RisingWave -See [Quick start](/docs/current/get-started/) for options on how you can run RisingWave. +See [Quick start](/get-started/quickstart) for options on how you can run RisingWave. ### Connect the cluster[](#connect-the-cluster "Direct link to Connect the cluster") diff --git a/integrations/sources/apache-iceberg.mdx b/integrations/sources/apache-iceberg.mdx index 7be6ffa8..0d362d20 100644 --- a/integrations/sources/apache-iceberg.mdx +++ b/integrations/sources/apache-iceberg.mdx @@ -225,7 +225,7 @@ SELECT * FROM t FOR SYSTEM_TIME AS OF '2024-04-03 08:54:22.488+00:00'; ## Examples[](#examples "Direct link to Examples") -Firstly, create an append-only Iceberg table, see [Append-only sink from upsert source](/docs/current/sink-to-iceberg/#append-only-sink-from-upsert-source) for details. +Firstly, create an append-only Iceberg table, see [Append-only sink from upsert source](/integrations/destinations/apache-iceberg#append-only-sink-from-upsert-source) for details. ```sql Secondly, create an Iceberg source: CREATE SOURCE iceberg_source diff --git a/integrations/sources/confluent-cloud.mdx b/integrations/sources/confluent-cloud.mdx index aead8f6e..05ca26d3 100644 --- a/integrations/sources/confluent-cloud.mdx +++ b/integrations/sources/confluent-cloud.mdx @@ -46,7 +46,7 @@ Note that you will need the API key when creating a Kafka source in RisingWave. ### Run RisingWave -To start RisingWave, see the [Get started](/docs/current/get-started/) guide. +To start RisingWave, see the [Get started](/get-started/quickstart) guide. ### Connect to the data stream diff --git a/integrations/sources/instaclustr-kafka.mdx b/integrations/sources/instaclustr-kafka.mdx index 898b3b0f..034d9ce6 100644 --- a/integrations/sources/instaclustr-kafka.mdx +++ b/integrations/sources/instaclustr-kafka.mdx @@ -43,7 +43,7 @@ After these steps, you are on your way to build stream processing applications a ### Create a RisingWave project -You can create a RisingWave project and connect to it by following the steps in the [Quick Start](/docs/current/get-started/) in the RisingWave documentation. +You can create a RisingWave project and connect to it by following the steps in the [Quick Start](/get-started/quickstart) in the RisingWave documentation. 
### Create a source diff --git a/integrations/sources/mysql-cdc.mdx b/integrations/sources/mysql-cdc.mdx index f1ea39f2..4aa58ba4 100644 --- a/integrations/sources/mysql-cdc.mdx +++ b/integrations/sources/mysql-cdc.mdx @@ -100,7 +100,7 @@ If you are running RisingWave locally from binaries and intend to use the native ## Create a table using the native CDC connector in RisingWave -To ensure all data changes are captured, you must create a table and specify primary keys. See the [CREATE TABLE](/docs/current/sql-create-table/) command for more details. +To ensure all data changes are captured, you must create a table and specify primary keys. See the [CREATE TABLE](/sql/commands/sql-create-table) command for more details. ### Syntax diff --git a/integrations/sources/overview.mdx b/integrations/sources/overview.mdx index 71ee1841..7fd667fc 100644 --- a/integrations/sources/overview.mdx +++ b/integrations/sources/overview.mdx @@ -5,4 +5,4 @@ mode: wide sidebarTitle: Overview --- - 6 items 5 items 1 item 3 items 3 item + 6 items 5 items 1 item 3 items 3 item diff --git a/integrations/sources/postgresql-cdc.mdx b/integrations/sources/postgresql-cdc.mdx index 86365be7..40502ada 100644 --- a/integrations/sources/postgresql-cdc.mdx +++ b/integrations/sources/postgresql-cdc.mdx @@ -116,7 +116,7 @@ If you are running RisingWave locally from binaries and intend to use the native ## Create a table using the native CDC connector -To ensure all data changes are captured, you must create a table or source and specify primary keys. See the [CREATE TABLE](/docs/current/sql-create-table/) command for more details. +To ensure all data changes are captured, you must create a table or source and specify primary keys. See the [CREATE TABLE](/sql/commands/sql-create-table) command for more details. ### Syntax diff --git a/integrations/sources/supabase-cdc.mdx b/integrations/sources/supabase-cdc.mdx index 1c0e9c15..cc2308e7 100644 --- a/integrations/sources/supabase-cdc.mdx +++ b/integrations/sources/supabase-cdc.mdx @@ -12,7 +12,7 @@ Create a Supabase project and a source table. Enable real-time when creating the ## Ingest CDC data into RisingWave -Since every Supabase project is a dedicated PostgreSQL database, use the PostgreSQL source connector to ingest CDC data from RisingWave. For the syntax, parameters, and examples, see [Ingest data from PostgreSQL CDC](/docs/current/ingest-from-postgres-cdc/#create-a-table-using-the-native-cdc-connector). +Since every Supabase project is a dedicated PostgreSQL database, use the PostgreSQL source connector to ingest CDC data from RisingWave. For the syntax, parameters, and examples, see [Ingest data from PostgreSQL CDC](/integrations/sources/postgresql-cdc#create-a-table-using-the-native-cdc-connector). To start ingesting data from Supabase, a connection with the database must be established first by using the `CREATE SOURCE` command. diff --git a/integrations/visualization/beekeeper-studio.mdx b/integrations/visualization/beekeeper-studio.mdx index eaf1b97c..a072a61a 100644 --- a/integrations/visualization/beekeeper-studio.mdx +++ b/integrations/visualization/beekeeper-studio.mdx @@ -10,7 +10,7 @@ RisingWave only supports connecting the Beekeeper Studio Community edition. The ## Prerequisites * Ensure that Beekeeper Studio Community Edition is installed. To download Beekeeper Studio, see the [Beekeeper releases page](https://github.com/beekeeper-studio/beekeeper-studio/releases/). -* Install and start RisingWave. 
For instructions on how to get started, see the [Quick start guide](/docs/current/get-started/). +* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/get-started/quickstart). ## Establish the connection 1. In the Beekeeper Studio interface, under **New connection**, select **Postgres** as the **Connection type**. diff --git a/integrations/visualization/dbeaver.mdx b/integrations/visualization/dbeaver.mdx index 4dd681b2..ad55c67f 100644 --- a/integrations/visualization/dbeaver.mdx +++ b/integrations/visualization/dbeaver.mdx @@ -9,7 +9,7 @@ This guide will go over how to connect DBeaver to RisingWave so you can seamless ## Prerequisites * Ensure that DBeaver is installed. To download DBeaver, see the [DBeaver download page](https://dbeaver.io/download/). Please make sure that your DBeaver version is at least [v23.3.4](https://dbeaver.io/2024/02/04/dbeaver-23-3-4/). -* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/docs/current/get-started/). +* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/get-started/quickstart). ## Establish the connection diff --git a/integrations/visualization/grafana.mdx b/integrations/visualization/grafana.mdx index 310d30cb..57ba9323 100644 --- a/integrations/visualization/grafana.mdx +++ b/integrations/visualization/grafana.mdx @@ -11,7 +11,7 @@ This guide will go over how to add RisingWave as a data source in Grafana. ### Install and launch RisingWave -To install and start RisingWave locally, see the [Get started](/docs/current/get-started/) guide. We recommend running RisingWave locally for testing purposes. +To install and start RisingWave locally, see the [Get started](/get-started/quickstart) guide. We recommend running RisingWave locally for testing purposes. Connect to streaming sources. For details on connecting to a streaming source and what connectors are supported with RisingWave, see [CREATE SOURCE](/docs/current/sql-create-source/). diff --git a/integrations/visualization/looker.mdx b/integrations/visualization/looker.mdx index 739c5573..f3909cdb 100644 --- a/integrations/visualization/looker.mdx +++ b/integrations/visualization/looker.mdx @@ -9,7 +9,7 @@ Since RisingWave is compatible with PostgreSQL, you can easily connect Looker to ## Prerequisites * Ensure that [Looker](https://cloud.google.com/looker) is installed and accessible from the RisingWave cluster. -* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/docs/current/get-started/). +* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/get-started/quickstart). ## Establish the connection diff --git a/integrations/visualization/metabase.mdx b/integrations/visualization/metabase.mdx index d12171c7..36909597 100644 --- a/integrations/visualization/metabase.mdx +++ b/integrations/visualization/metabase.mdx @@ -9,7 +9,7 @@ Since RisingWave is compatible with PostgreSQL, you can connect Metabase to Risi ## Prerequisites * Metabase installed and running. -* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/docs/current/get-started/). +* Install and start RisingWave. For instructions on how to get started, see the [Quick start guide](/get-started/quickstart). 
## Establish the connection diff --git a/integrations/visualization/superset.mdx b/integrations/visualization/superset.mdx index 065c0f78..4fffb3e6 100644 --- a/integrations/visualization/superset.mdx +++ b/integrations/visualization/superset.mdx @@ -13,7 +13,7 @@ This guide will go over how to: ### Install and start RisingWave -To install and start RisingWave locally, see the [Get started](/docs/current/get-started/) guide. We recommend running RisingWave locally for demo purposes. +To install and start RisingWave locally, see the [Get started](/get-started/quickstart) guide. We recommend running RisingWave locally for demo purposes. Connect to a streaming source. For details on connecting to streaming sources and what sources are supported with RisingWave, see [CREATE SOURCE](/docs/current/sql-create-source/). diff --git a/operate/alter-streaming.mdx b/operate/alter-streaming.mdx index 72f0d795..b556d19d 100644 --- a/operate/alter-streaming.mdx +++ b/operate/alter-streaming.mdx @@ -137,4 +137,4 @@ CREATE MATERIALIZED VIEW adult_users AS It was discovered later that the legal definition for adulthood should be set at ≥16\. Initially, one might consider modifying the filter condition from `age >= 18` to `age >= 16` as a straightforward solution. However, this is not feasible in stream processing since records with ages between 16 and 18 have already been filtered out. Therefore, the only option to restore the missing data is to recompute the entire stream from the beginning. -Therefore, we recommend persistently storing the source data in a long-term storage solution, such as [a RisingWave table](/docs/current/sql-create-table/). This allows for the recomputation of the materialized view when altering the logic becomes necessary. +Therefore, we recommend persistently storing the source data in a long-term storage solution, such as [a RisingWave table](/sql/commands/sql-create-table). This allows for the recomputation of the materialized view when altering the logic becomes necessary. diff --git a/operate/meta-backup.mdx b/operate/meta-backup.mdx index 81dac66e..8c087959 100644 --- a/operate/meta-backup.mdx +++ b/operate/meta-backup.mdx @@ -25,7 +25,7 @@ Here's an example of how to create a new meta snapshot with `risectl`: risectl meta backup-meta ``` -`risectl` is included in the pre-built RisingWave binary. For details, see [Quick start](/docs/current/get-started/#binaries). +`risectl` is included in the pre-built RisingWave binary. For details, see [Quick start](/get-started/quickstart#binaries). ## View existing meta snapshots diff --git a/performance/performance-best-practices.mdx b/performance/performance-best-practices.mdx index 46111e71..b4548453 100644 --- a/performance/performance-best-practices.mdx +++ b/performance/performance-best-practices.mdx @@ -82,6 +82,6 @@ This is an advanced feature that is still in the [public preview stage](/product ## How to monitor the progress of direct CDC -To effectively monitor the progress of direct Change Data Capture (CDC), you can employ two key methods tailored to historical and real-time data for PostgreSQL and MySQL databases. For more details, see [Use Direct CDC for PostgreSQL](/docs/current/ingest-from-postgres-cdc/#monitor-the-progress-of-direct-cdc) and [Use Direct CDC for MySQL](/docs/current/ingest-from-mysql-cdc/#monitor-the-progress-of-direct-cdc). +To effectively monitor the progress of direct Change Data Capture (CDC), you can employ two key methods tailored to historical and real-time data for PostgreSQL and MySQL databases. 
For more details, see [Use Direct CDC for PostgreSQL](/integrations/sources/postgresql-cdc#monitor-the-progress-of-direct-cdc) and [Use Direct CDC for MySQL](/integrations/sources/mysql-cdc#monitor-the-progress-of-direct-cdc). For any other questions or tips regarding performance tuning, feel free to join our [Slack community](https://www.risingwave.com/slack) and become part of our growing network of users. Engage in discussions, seek assistance, and share your experiences with fellow users and our engineers who are eager to provide insights and solutions. diff --git a/processing/maintain-wide-table-with-table-sinks.mdx b/processing/maintain-wide-table-with-table-sinks.mdx index dba01d0e..7909afc5 100644 --- a/processing/maintain-wide-table-with-table-sinks.mdx +++ b/processing/maintain-wide-table-with-table-sinks.mdx @@ -3,7 +3,7 @@ title: "Maintain wide table with table sinks" description: "This guide introduces how to maintain a wide table whose columns come from different sources. Traditional data warehouses or ETL use a join query for this purpose. However, streaming join brings issues such as low efficiency and high memory consumption." --- -In some cases with limitation, use the [CREATE SINK INTO TABLE](/docs/current/sql-create-sink-into/) and [ON CONFLICT clause](/docs/current/sql-create-table/#pk-conflict-behavior) can save the resources and achieve high efficiency. +In some cases with limitations, using the [CREATE SINK INTO TABLE](/docs/current/sql-create-sink-into/) command together with the [ON CONFLICT clause](/sql/commands/sql-create-table#pk-conflict-behavior) can save resources and achieve high efficiency. ## Merge multiple sinks with the same primary key @@ -97,4 +97,4 @@ But maintaining wide table with table sinks can save the resources and achieve h -Furthermore, for the large dimension table, we can use [Temporal Join](/docs/current/query-syntax-join-clause/) as the partial join to reduce the streaming state and improve performance. +Furthermore, for large dimension tables, we can use a [Temporal Join](/processing/sql/joins) as a partial join to reduce streaming state and improve performance. diff --git a/processing/sql/temporal-filters.mdx b/processing/sql/temporal-filters.mdx index 797bb1ec..e79be010 100644 --- a/processing/sql/temporal-filters.mdx +++ b/processing/sql/temporal-filters.mdx @@ -82,7 +82,7 @@ The temporal filter in this query is in the `WHERE` clause. It checks whether th ## Usage 2: Delay table changes -When the time expression with `NOW()` is the upper bound condition of the base relation such as `ts + interval '1 hour' < now()`, it can "delay" the table's changes of the input relation. It could be useful when used with the [Temporal Join](/docs/current/query-syntax-join-clause/). +When a time expression with `NOW()` is the upper-bound condition on the base relation, such as `ts + interval '1 hour' < now()`, it can "delay" the changes of the input relation. This can be useful when combined with a [Temporal Join](/processing/sql/joins). Here is a typical example of the temporal join used to widen a fact table. diff --git a/reference/key-concepts.mdx b/reference/key-concepts.mdx index 35c1fac7..a132a3a0 100644 --- a/reference/key-concepts.mdx +++ b/reference/key-concepts.mdx @@ -53,7 +53,7 @@ A sink is an external target to which you can send data. RisingWave now supports A source is a resource that RisingWave can read data from. Common sources include message brokers such as Apache Kafka and Apache Pulsar and databases such as MySQL and PostgreSQL.
You can create a source in RisingWave using the [CREATE SOURCE](/docs/current/sql-create-source/) command. -If you want to persist the data from the source, you should use the [CREATE TABLE](/docs/current/sql-create-table/) command with connector settings. +If you want to persist the data from the source, you should use the [CREATE TABLE](/sql/commands/sql-create-table) command with connector settings. Regardless of whether the data is persisted in RisingWave, you can create materialized views to perform data transformations. diff --git a/sql/commands/overview.mdx b/sql/commands/overview.mdx index f78a51bf..ad57faa9 100644 --- a/sql/commands/overview.mdx +++ b/sql/commands/overview.mdx @@ -55,10 +55,10 @@ sidebarTitle: Overview > Modify the properties of a schema. - Modify the properties of a sink. Modify the properties of a source. Modify a server configuration parameter. Modify the properties of a table. Modify the properties of a user. Modify the properties of a view. Convert stream into an append-only changelog. Start a transaction. Cancel specific streaming jobs. Add comments on tables or columns. Commit the current transaction. Create a user-defined aggregate function. Create a connection between VPCs. Create a new database. Create a user-defined function. Create an index on a column of a table or a materialized view to speed up data retrieval. Create a materialized view. Create a new schema. Create a secret to store credentials. Create a sink into RisingWave's table. Create a sink. Supported data sources and how to connect RisingWave to the sources. Create a table. Create a new user account. Create a non-materialized view. - Remove rows from a table. Get information about the columns in a table, source, sink, view, or materialized view. Discard session state. Drop a user-defined aggregate function. Remove a connection. Remove a database. Drop a user-defined function. Remove an index. Remove a materialized view. Remove a schema. Drop a secret. Remove a sink. Remove a source. Remove a table. Remove a user. Drop a view. Show the execution plan of a statement. Commit pending data changes and persists updated data to storage. Grant a user privileges. Insert new rows of data into a table. Trigger recovery manually. Revoke privileges from a user. Retrieve data from a table or a materialized view. Run Data Definition Language (DDL) operations in the background. Enable or disable implicit flushes after batch operations. Set time zone. Change a run-time parameter. + Modify the properties of a sink. Modify the properties of a source. Modify a server configuration parameter. Modify the properties of a table. Modify the properties of a user. Modify the properties of a view. Convert stream into an append-only changelog. Start a transaction. Cancel specific streaming jobs. Add comments on tables or columns. Commit the current transaction. Create a user-defined aggregate function. Create a connection between VPCs. Create a new database. Create a user-defined function. Create an index on a column of a table or a materialized view to speed up data retrieval. Create a materialized view. Create a new schema. Create a secret to store credentials. Create a sink into RisingWave's table. Create a sink. Supported data sources and how to connect RisingWave to the sources. Create a table. Create a new user account. Create a non-materialized view. + Remove rows from a table. Get information about the columns in a table, source, sink, view, or materialized view. Discard session state. 
Drop a user-defined aggregate function. Remove a connection. Remove a database. Drop a user-defined function. Remove an index. Remove a materialized view. Remove a schema. Drop a secret. Remove a sink. Remove a source. Remove a table. Remove a user. Drop a view. Show the execution plan of a statement. Commit pending data changes and persists updated data to storage. Grant a user privileges. Insert new rows of data into a table. Trigger recovery manually. Revoke privileges from a user. Retrieve data from a table or a materialized view. Run Data Definition Language (DDL) operations in the background. Enable or disable implicit flushes after batch operations. Set time zone. Change a run-time parameter. Show the details of your RisingWave cluster. Show columns in a table, source, sink, view or materialized view. Show existing connections. Show the query used to create the specified index. Show the query used to create the specified materialized view. Show the query used to create the specified sink. Show the query used to create the specified source. Show the query used to create the specified table. Show the query used to create the specified view. Show all cursors in the current session. Show existing databases. Show all user-defined functions. Show existing indexes from a particular table. Show internal tables to learn about the existing internal states. Show all streaming jobs. Show existing materialized views. Show the details of the system parameters. - Display system current workload. Show existing schemas. Shows all sinks. Show existing sources. Show all subscription cursors in the current session. Show existing tables. Show existing views. Start a transaction. Modify existing rows in a table. + Display system current workload. Show existing schemas. Shows all sinks. Show existing sources. Show all subscription cursors in the current session. Show existing tables. Show existing views. Start a transaction. Modify existing rows in a table. diff --git a/sql/commands/sql-as-changelog.mdx b/sql/commands/sql-as-changelog.mdx index 373a0cf0..5c5eacb3 100644 --- a/sql/commands/sql-as-changelog.mdx +++ b/sql/commands/sql-as-changelog.mdx @@ -3,7 +3,7 @@ title: "AS CHANGELOG" description: "Use the `AS CHANGELOG` clause to convert a changelog operation in a stream into a column." --- -This can be used to create materialized views and sinks. See the practice in [Sink data with upsert in Snowflake](/docs/current/sink-to-snowflake/#sink-data-with-upsert). +This can be used to create materialized views and sinks. See the practice in [Sink data with upsert in Snowflake](/integrations/destinations/snowflake#sink-data-with-upsert). 
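As a rough sketch of how the clause is typically used (based on the upsert pattern referenced above; the relation and column names other than `changelog_op` are illustrative, and the exact grammar follows in the Syntax section below):

```sql
-- A minimal sketch, assuming a relation `user_behaviors` already exists.
-- `sub` exposes every change to `user_behaviors` as a row, with the
-- operation type of the change available in the `changelog_op` column.
CREATE MATERIALIZED VIEW user_behaviors_changelog AS
WITH sub AS changelog FROM user_behaviors
SELECT
    user_id,
    target_id,
    changelog_op AS op
FROM sub;
```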
## Syntax diff --git a/sql/commands/sql-begin.mdx b/sql/commands/sql-begin.mdx index 04266814..43e71e70 100644 --- a/sql/commands/sql-begin.mdx +++ b/sql/commands/sql-begin.mdx @@ -25,6 +25,6 @@ BEGIN - + diff --git a/sql/commands/sql-commit.mdx b/sql/commands/sql-commit.mdx index 00012fec..83fc1067 100644 --- a/sql/commands/sql-commit.mdx +++ b/sql/commands/sql-commit.mdx @@ -42,7 +42,7 @@ COMMIT title="START TRANSACTION" icon="play" iconType="solid" - href="/docs/current/sql-start-transaction/" + href="/sql/commands/sql-start-transaction" horizontal /> diff --git a/sql/commands/sql-create-mv.mdx b/sql/commands/sql-create-mv.mdx index 37323b4f..4c392590 100644 --- a/sql/commands/sql-create-mv.mdx +++ b/sql/commands/sql-create-mv.mdx @@ -12,7 +12,7 @@ CREATE MATERIALIZED VIEW [IF NOT EXISTS] mv_name AS select_query; `CREATE MATERIALIZED VIEW` will first **backfill** historical data from the referenced relations, and completion time varies based on the volume of data to be backfilled. -To perform the operations in the background, you can execute `SET BACKGROUND_DDL=true;` before running the `CREATE MATERIALIZED VIEW` statement. See details in [SET BACKGROUND\_DDL](/docs/current/sql-set-background-ddl/). +To perform the operations in the background, you can execute `SET BACKGROUND_DDL=true;` before running the `CREATE MATERIALIZED VIEW` statement. See details in [SET BACKGROUND\_DDL](/sql/commands/sql-set-background-ddl). ## Parameters diff --git a/sql/commands/sql-create-secret.mdx b/sql/commands/sql-create-secret.mdx index 8849c3d6..69504a94 100644 --- a/sql/commands/sql-create-secret.mdx +++ b/sql/commands/sql-create-secret.mdx @@ -52,7 +52,7 @@ SHOW CREATE SOURCE mysql_source; title="Manage secrets" icon="key" icontype="solid" - href="/docs/current/manage-secrets/" + href="/operate/manage-secrets" > A comprehensive guide for secret management operations diff --git a/sql/commands/sql-create-sink.mdx b/sql/commands/sql-create-sink.mdx index 97aef14d..f9761e33 100644 --- a/sql/commands/sql-create-sink.mdx +++ b/sql/commands/sql-create-sink.mdx @@ -3,7 +3,7 @@ title: "CREATE SINK" description: "Use the `CREATE SINK` command to create a sink. A sink is an external target where you can send data processed in RisingWave. You can create a sink from a table or a materialized view." --- -If your goal is to create an append-only sink, you can use the emit-on-window-close policy when creating the materialized view that you want to sink data from. For details about the policy, see [Emit on window close](/docs/current/emit-on-window-close/). +If your goal is to create an append-only sink, you can use the emit-on-window-close policy when creating the materialized view that you want to sink data from. For details about the policy, see [Emit on window close](/processing/emit-on-window-close). ## Syntax @@ -41,19 +41,19 @@ Please distinguish between the parameters set in the FORMAT and ENCODE options a Click a sink name to see the SQL syntax, options, and sample statement of sinking data from RisingWave to the sink. 
* [Apache Doris](/docs/current/sink-to-doris/) -* [Apache Iceberg](/docs/current/sink-to-iceberg/) -* [AWS Kinesis](/docs/current/sink-to-aws-kinesis/) -* [Cassandra or ScyllaDB](/docs/current/sink-to-cassandra/) +* [Apache Iceberg](/integrations/destinations/apache-iceberg) +* [AWS Kinesis](/integrations/destinations/aws-kinesis) +* [Cassandra or ScyllaDB](/integrations/destinations/cassandra-or-scylladb) * [ClickHouse](/docs/current/sink-to-clickhouse/) * [CockroachDB](/docs/current/sink-to-cockroach/) * [Delta Lake](/docs/current/sink-to-delta-lake/) -* [Elasticsearch](/docs/current/sink-to-elasticsearch/) -* [Google BigQuery](/docs/current/sink-to-bigquery/) +* [Elasticsearch](/integrations/destinations/elasticsearch) +* [Google BigQuery](/integrations/destinations/bigquery) * [Kafka](/docs/current/create-sink-kafka/) (Supports versions 3.1.0 or later) * [MySQL](/docs/current/sink-to-mysql-with-jdbc/) (Supports versions 5.7 and 8.0.x) -* [NATS](/docs/current/sink-to-nats/) +* [NATS](/integrations/destinations/nats-and-nats-jetstream) * [PostgreSQL](/docs/current/sink-to-postgres/) -* [Pulsar](/docs/current/sink-to-pulsar/) +* [Pulsar](/integrations/destinations/apache-pulsar) * [Redis](/docs/current/sink-to-redis/) * [StarRocks](/docs/current/sink-to-starrocks/) * [TiDB](/docs/current/sink-to-tidb/) @@ -65,7 +65,7 @@ Click a sink name to see the SQL syntax, options, and sample statement of sinkin title="Overview of data delivery" icon="truck" icontype="solid" - href="/docs/current/data-delivery/" + href="/delivery/overview" /> A comprehensive guide for secret management operations, including creation, usage, and deletion. diff --git a/sql/data-types/overview.mdx b/sql/data-types/overview.mdx index 2a01050d..5c3ea00a 100644 --- a/sql/data-types/overview.mdx +++ b/sql/data-types/overview.mdx @@ -23,7 +23,7 @@ sidebarTitle: Overview | interval | | Time span.
Input in string format. Units include: second/seconds/s, minute/minutes/min/m, hour/hours/hr/h, day/days/d, month/months/mon, and year/years/yr/y. Units smaller than second can only be specified in a numerical format. | Examples: `interval '4 hour'` → `04:00:00`
`interval '3 day'` → `3 days 00:00:00`
`interval '04:00:00.1234'` → `04:00:00.1234` | | struct | | A struct is a column that contains nested data. | For syntax and examples, see [Struct](/docs/current/data-type-struct/). | | array | | An array is an ordered list of zero or more elements that share the same data type. | For syntax and examples, see [Array](/docs/current/data-type-array/). | -| map | | A map contains key-value pairs. | For syntax and examples, see [Map](/docs/current/data-type-map/). | +| map | | A map contains key-value pairs. | For syntax and examples, see [Map](/sql/data-types/map-type). | | JSONB | | A (binary) JSON value that ignores semantically-insignificant whitespaces or order of object keys. | For syntax and examples, see [JSONB](/docs/current/data-type-jsonb/). | diff --git a/sql/functions/window-functions.mdx b/sql/functions/window-functions.mdx index 5b3b386e..2a4392b6 100644 --- a/sql/functions/window-functions.mdx +++ b/sql/functions/window-functions.mdx @@ -83,4 +83,4 @@ last_value ( value anyelement ) → anyelement All aggregate functions, including builtin ones such as `sum()` and `min()`, user-defined ones and `AGGREGATE:`-prefixed scalar functions, can be used as window functions. -For the complete list of builtin aggregate functions and their usage, see [Aggregate functions](/docs/current/sql-function-aggregate/). \ No newline at end of file +For the complete list of builtin aggregate functions and their usage, see [Aggregate functions](/sql/functions/aggregate). \ No newline at end of file diff --git a/sql/query-syntax/generated-columns.mdx b/sql/query-syntax/generated-columns.mdx index 11e38c55..56c56119 100644 --- a/sql/query-syntax/generated-columns.mdx +++ b/sql/query-syntax/generated-columns.mdx @@ -4,7 +4,7 @@ description: "A generated column is a special column that is always computed fro --- -To create a generated column, use the `AS ` clause in [CREATE TABLE](/docs/current/sql-create-table/) or [CREATE SOURCE](/docs/current/sql-create-source/) statements, for example: +To create a generated column, use the `AS ` clause in [CREATE TABLE](/sql/commands/sql-create-table) or [CREATE SOURCE](/docs/current/sql-create-source/) statements, for example: ```sql CREATE TABLE t1 (v1 int AS v2-1, v2 int, v3 int AS v2+1); diff --git a/sql/query-syntax/group-by-clause.mdx b/sql/query-syntax/group-by-clause.mdx index ae877dd4..5d120ed6 100644 --- a/sql/query-syntax/group-by-clause.mdx +++ b/sql/query-syntax/group-by-clause.mdx @@ -17,7 +17,7 @@ GROUP BY column1, column2....columnN ORDER BY column1, column2....columnN ``` -If your goal is to generate windowed calculation results strictly as append-only output, you can utilize the emit-on-window-close policy. This approach helps avoid unnecessary computations. For more information on the emit-on-window-close policy, please refer to [Emit on window close](/docs/current/emit-on-window-close/). +If your goal is to generate windowed calculation results strictly as append-only output, you can utilize the emit-on-window-close policy. This approach helps avoid unnecessary computations. For more information on the emit-on-window-close policy, please refer to [Emit on window close](/processing/emit-on-window-close). You can use more than one column in the `GROUP BY` clause. 
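As a brief illustration of grouping by more than one column (the `orders` table and its columns here are hypothetical):

```sql
-- Count orders per customer per day; both grouping columns appear in GROUP BY.
SELECT
    customer_id,
    order_date,
    COUNT(*) AS order_count
FROM orders
GROUP BY customer_id, order_date;
```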
diff --git a/sql/query-syntax/value-exp.mdx b/sql/query-syntax/value-exp.mdx index baa4079d..e1bdfcf2 100644 --- a/sql/query-syntax/value-exp.mdx +++ b/sql/query-syntax/value-exp.mdx @@ -18,7 +18,7 @@ aggregate_name ( * ) [ FILTER ( WHERE filter_clause ) ] aggregate_name ( [ expression [ , ... ] ] ) WITHIN GROUP ( order_by_clause ) [ FILTER ( WHERE filter_clause ) ] ``` -`aggregate_name` is one of the aggregation functions listed on [Aggregate functions](/docs/current/sql-function-aggregate/), and `expression` is a value expression that does not contain an aggregate expression or a window function call. +`aggregate_name` is one of the aggregation functions listed on [Aggregate functions](/sql/functions/aggregate), and `expression` is a value expression that does not contain an aggregate expression or a window function call. The `DISTINCT` keyword, which is only available in the second form, cannot be used together with an `ORDER BY` or `WITHIN GROUP` clause. Additionally, it's important to note that the `order_by_clause` is positioned differently in the first and fourth forms. @@ -55,7 +55,7 @@ Currently, the `PARTITION BY` clause is required. If you do not want to partitio For ranking window functions like `row_number`, `rank` and `dense_rank`, `ORDER BY` clause is required. -When operating in the [Emit on window close](/docs/current/emit-on-window-close/) mode for a streaming query, `ORDER BY` clause is required for all window functions. Please ensure that you specify exactly one column in the `ORDER BY` clause. This column, generally a timestamp column, must have a watermark defined for it. It's important to note that when using the timestamp column from this streaming query in another streaming query, the watermark information associated with the column is not retained. +When operating in the [Emit on window close](/processing/emit-on-window-close) mode for a streaming query, `ORDER BY` clause is required for all window functions. Please ensure that you specify exactly one column in the `ORDER BY` clause. This column, generally a timestamp column, must have a watermark defined for it. It's important to note that when using the timestamp column from this streaming query in another streaming query, the watermark information associated with the column is not retained. `window_function_name` is one of the window functions listed on [Window functions](/docs/current/sql-function-window-functions/). diff --git a/sql/system-catalogs/rw-catalog.mdx b/sql/system-catalogs/rw-catalog.mdx index 574e40b3..6487dd68 100644 --- a/sql/system-catalogs/rw-catalog.mdx +++ b/sql/system-catalogs/rw-catalog.mdx @@ -109,7 +109,7 @@ SELECT name, initialized_at, created_at FROM rw_sources; | rw\_relation\_info | Contains low-level relation information about tables, sources, materialized views, and indexes that are available in the database. | | rw\_relations | Contains information about relations in the database, including their unique IDs, names, types, schema IDs, and owners. | | rw\_schemas | Contains information about schemas that are available in the database, including their names, unique IDs, owner IDs, and more. | -| rw\_secrets | Contains information about the ID, name, owner, and access control of secret objects. For more details about secrets, see [Manage secrets](/docs/current/manage-secrets/). | +| rw\_secrets | Contains information about the ID, name, owner, and access control of secret objects. For more details about secrets, see [Manage secrets](/operate/manage-secrets). 
| | rw\_sinks | Contains information about sinks that are available in the database, including their unique IDs, names, schema IDs, owner IDs, connector types, sink types, connection IDs, definitions, and more. | | rw\_sources | Contains information about sources that are available in the database, including their unique IDs, names, schema IDs, owner IDs, connector types, column definitions, row formats, append-only flags, connection IDs, and more. | | rw\_streaming\_parallelism | Contains information about the streaming parallelism configuration for streaming jobs, including their IDs, names, relation types, and parallelism. | diff --git a/sql/udfs/use-udfs-in-python.mdx b/sql/udfs/use-udfs-in-python.mdx index 0621f010..0af44948 100644 --- a/sql/udfs/use-udfs-in-python.mdx +++ b/sql/udfs/use-udfs-in-python.mdx @@ -7,7 +7,7 @@ sidebarTitle: Python ## Prerequisites * Ensure that you have [Python](https://www.python.org/downloads/) (3.8 or later) installed on your computer. -* Ensure that you have [started and connected to RisingWave](/docs/current/get-started/#run-risingwave). +* Ensure that you have [started and connected to RisingWave](/get-started/quickstart#run-risingwave). ## 1\. Install the RisingWave UDF API for Python