Commit
Merge branch 'ClickHouse:main' into laeg/power-bi
laeg authored Nov 14, 2024
2 parents 70f1660 + 8b218c8 commit 971b067
Showing 4 changed files with 19 additions and 22 deletions.
6 changes: 1 addition & 5 deletions docs/en/cloud/reference/supported-regions.md
@@ -5,11 +5,7 @@ keywords: [aws, gcp, google cloud, azure, cloud, regions]
description: Supported regions for ClickHouse Cloud
---
# Supported Cloud Regions
-## HEADING 2
-
-### HEADING 3
-
-#### HEADING 4
+## AWS Regions

- ap-northeast-1 (Tokyo)
- ap-south-1 (Mumbai)
2 changes: 1 addition & 1 deletion docs/en/cloud/security/aws-privatelink.md
@@ -19,7 +19,7 @@ Please complete the following steps to enable AWS Private Link:
1. Add Endpoint ID to service(s) allow list.


-Find complete Terraform example for AWS Private Link [here](https://github.com/ClickHouse/terraform-provider-clickhouse/tree/main/examples/PrivateLink).
+Find a complete Terraform example for AWS Private Link [here](https://github.com/ClickHouse/terraform-provider-clickhouse/blob/main/examples/resources/clickhouse_private_endpoint_registration/resource.tf).
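For a rough sense of what the linked example covers, a minimal sketch follows. The resource type is taken from the linked path; the attribute names, values, and provider setup are assumptions for illustration only — treat the linked `resource.tf` as the authoritative version.

```hcl
# Hypothetical sketch only: attribute names and values are assumptions,
# not the authoritative example from the ClickHouse Terraform provider repo.
resource "clickhouse_private_endpoint_registration" "example" {
  cloud_provider = "aws"                     # assumed attribute
  id             = "vpce-0123456789abcdef0"  # placeholder VPC endpoint ID
  region         = "us-east-1"               # placeholder region
  description    = "PrivateLink endpoint for ClickHouse Cloud"
}
```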

## Prerequisites

24 changes: 10 additions & 14 deletions docs/en/integrations/data-ingestion/clickpipes/object-storage.md
@@ -1,5 +1,5 @@
---
-sidebar_label: ClickPipes for Object Storages
+sidebar_label: ClickPipes for Object Storage
description: Seamlessly connect your object storage to ClickHouse Cloud.
slug: /en/integrations/clickpipes/object-storage
---
@@ -12,51 +12,47 @@ You have familiarized yourself with the [ClickPipes intro](./index.md).

## Creating your first ClickPipe

-1. Access the SQL Console for your ClickHouse Cloud Service.
-
-![ClickPipes service](./images/cp_service.png)
-
-2. Select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe"
+1. In the cloud console, select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe".

![Select imports](./images/cp_step0.png)

-3. Select your data source.
+2. Select your data source.

![Select data source type](./images/cp_step1.png)

-4. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and bucket URL. You can specify multiple files using bash-like wildcards. For more information, [see the documentation on using wildcards in path](#limitations).
+3. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and a bucket URL. You can specify multiple files using bash-like wildcards (see the example path patterns after the screenshot below). For more information, [see the documentation on using wildcards in path](#limitations).

![Fill out connection details](./images/cp_step2_object_storage.png)
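As an illustration of the kind of wildcard patterns that can go in the bucket URL — the bucket name and layout below are made up, and the exact wildcard syntax that is supported is covered in the limitations section linked above:

```
https://my-bucket.s3.us-east-1.amazonaws.com/logs/2024-01-01/*.json.gz
https://my-bucket.s3.us-east-1.amazonaws.com/logs/2024-*-01/data_*.json.gz
```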

-5. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and if you want to enable continuous ingestion [More details below](#continuous-ingest).
+4. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and whether you want to enable continuous ingestion ([more details below](#continuous-ingest)).

![Set data format and topic](./images/cp_step3_object_storage.png)

-6. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions in the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
+5. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.

![Set table, schema, and settings](./images/cp_step4a.png)

You can also customize the advanced settings using the controls provided.

![Set advanced controls](./images/cp_step4a3.png)

-7. Alternatively, you can decide to ingest your data in an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.
+6. Alternatively, you can decide to ingest your data into an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

![Use an existing table](./images/cp_step4b.png)

:::info
You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), like `_path` or `_size`, to fields.
:::
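For instance, a destination table could include dedicated columns to receive those values — a sketch only; the table and column names here are hypothetical, and the actual mapping is configured in the ClickPipes UI:

```sql
-- Hypothetical destination table: `file_path` and `file_size` are ordinary columns
-- that the `_path` and `_size` virtual columns could be mapped to in the UI.
CREATE TABLE events
(
    `timestamp` DateTime,
    `message`   String,
    `file_path` String,
    `file_size` UInt64
)
ENGINE = MergeTree
ORDER BY timestamp;
```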

-8. Finally, you can configure permissions for the internal clickpipes user.
+7. Finally, you can configure permissions for the internal ClickPipes user.

**Permissions:** ClickPipes will create a dedicated user for writing data into the destination table. You can select a role for this internal user, using either a custom role or one of the predefined roles:
- `Full access`: has full access to the cluster. Required if you use a materialized view or dictionary with the destination table.
- `Only destination table`: has `INSERT` permissions on the destination table only.

![permissions](./images/cp_step5.png)
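To make the scope of the second role concrete, it corresponds roughly to an `INSERT`-only grant like the following — illustrative only; ClickPipes manages this grant itself, and the database, table, and user names here are hypothetical:

```sql
-- Illustrative only: ClickPipes creates and manages its internal user automatically.
GRANT INSERT ON my_database.my_destination_table TO clickpipes_internal_user;
```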

-9. By clicking on "Complete Setup", the system will register you ClickPipe, and you'll be able to see it listed in the summary table.
+8. When you click "Complete Setup", the system registers your ClickPipe, and you'll see it listed in the summary table.

![Success notice](./images/cp_success.png)

Expand All @@ -70,7 +66,7 @@ You can also map [virtual columns](../../sql-reference/table-functions/s3#virtua

![View overview](./images/cp_overview.png)

-10. **Congratulations!** you have successfully set up your first ClickPipe. If this is a streaming ClickPipe it will be continuously running, ingesting data in real-time from your remote data source. Otherwise it will ingest the batch and complete.
+9. **Congratulations!** You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source. Otherwise, it will ingest the batch and complete.

## Supported Data Sources

9 changes: 7 additions & 2 deletions knowledgebase/profiling-clickhouse-with-llvm-xray.md
@@ -23,9 +23,9 @@ statistical profiler.

### Instrument the code

-Image the following souce code:
+Imagine the following source code:

-```c++
+```cpp
#include <chrono>
#include <cstdio>
#include <thread>
@@ -110,10 +110,14 @@ We can use web-based UIs like [speedscope.app](https://www.speedscope.app/) or
While Perfetto makes visualizing multiple threads and querying the data easier, speedscope is better at generating a flamegraph and a sandwich view of your data.

#### Time Order

![time-order](./images/profiling-clickhouse-with-llvm-xray/time-order.png)

#### Left Heavy
![left-heavy](./images/profiling-clickhouse-with-llvm-xray/left-heavy.png)

#### Sandwich
![sandwich](./images/profiling-clickhouse-with-llvm-xray/sandwich.png)

## Profiling ClickHouse
@@ -128,6 +132,7 @@ generating a flamegraph and a sandwich view of your data.
4. Visualize the trace in [speedscope.app](https://www.speedscope.app/) or
[Perfetto](https://ui.perfetto.dev).
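As a rough sketch of how such a trace can be produced from an XRay log — the binary path and log file name below are placeholders for your own build and run:

```bash
# Convert the XRay log into Chrome trace_event JSON that speedscope/Perfetto can load.
# "./clickhouse" and "xray-log.clickhouse.abc123" are placeholders.
llvm-xray convert --symbolize --instr_map=./clickhouse \
    --output-format=trace_event --output=clickhouse-trace.json \
    xray-log.clickhouse.abc123
```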


![clickhouse-time-order](./images/profiling-clickhouse-with-llvm-xray/clickhouse-time-order.png)

Notice that this is the visualization of only one thread. You can select the other `tid`s on the