Updated Strapi docs! #640 — merged (2 commits), Sep 20, 2023
File changed: docs/website/docs/dlt-ecosystem/verified-sources/strapi.md (135 additions, 57 deletions)

# Strapi

:::info Need help deploying these sources, or figuring out how to run them in your data stack?

[Join our Slack community](https://dlthub-community.slack.com/join/shared_invite/zt-1slox199h-HAE7EQoXmstkP_bTqal65g)
or [book a call](https://calendar.app.google/kiLhuMsWKpZUpfho6) with our support engineer Adrian.
:::

Strapi is a headless CMS (Content Management System) that allows developers to create powerful
API-driven content management systems without having to write a lot of custom code.

Since Strapi's available endpoints vary based on your setup, make sure you know which ones
you'll ingest to transfer data to your warehouse.
This Strapi `dlt` verified source and
[pipeline example](https://github.com/dlt-hub/verified-sources/blob/master/sources/strapi_pipeline.py)
loads data using the Strapi API to the destination of your choice.

Sources and resources that can be loaded using this verified source are:

| Name | Description |
| ------------- | -------------------------- |
| strapi_source | Retrieves data from Strapi |

## Setup Guide

### Grab API token

1. Log in to Strapi.
1. Click ⚙️ in the sidebar.
1. Go to API tokens under global settings.
1. Create a new API token.
1. Fill in Name, Description, and Duration.
1. Choose token type: Read Only, Full Access, or custom (with find and findOne selected).
1. Save to view your API token.
1. Copy it for `dlt` secrets setup.
### Initialize the verified source

To get started with your data pipeline, follow these steps:

1. Enter the following command:

   ```bash
   dlt init strapi duckdb
   ```

   [This command](../../reference/command-line-interface) will initialize
   [the pipeline example](https://github.com/dlt-hub/verified-sources/blob/master/sources/strapi_pipeline.py)
   with Strapi as the [source](../../general-usage/source) and [duckdb](../destinations/duckdb.md)
   as the [destination](../destinations).

1. If you'd like to use a different destination, simply replace `duckdb` with the name of your
preferred [destination](../destinations).

1. After running this command, a new directory will be created with the necessary files and
configuration settings to get started.

For more information, read the
[Walkthrough: Add a verified source.](../../walkthroughs/add-a-verified-source)

### Add credentials

1. In the `.dlt` folder, there's a file called `secrets.toml`. It's where you store sensitive
   information securely, like access tokens. Keep this file safe. Here's its format:

   ```toml
   # put your secret values and credentials here. do not share this file and do not push it to github
   [sources.strapi]
   api_secret_key = "api_secret_key" # please set me up!
   domain = "domain" # please set me up!
   ```

1. Replace `api_secret_key` with [the API token you copied above](strapi.md#grab-api-token).

1. Strapi auto-generates the domain.

1. The domain is the URL opened in a new tab when you run Strapi, e.g.,
   `my-strapi.up.your_app.app`.

1. Finally, enter credentials for your chosen destination as per the [docs](../destinations/).
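As an alternative to `secrets.toml`, `dlt` can also pick credentials up from environment variables. The sketch below is an illustration of that route; the variable names follow dlt's documented section/key naming convention for `[sources.strapi]`, and both values are placeholders, not working credentials:

```python
import os

# dlt resolves secrets from environment variables as well as secrets.toml.
# Keys map to upper-cased, double-underscore-separated names; the values
# here are placeholders for illustration only.
os.environ["SOURCES__STRAPI__API_SECRET_KEY"] = "api_secret_key"
os.environ["SOURCES__STRAPI__DOMAIN"] = "my-strapi.up.your_app.app"

# A pipeline started in this process would now resolve these values.
print(os.environ["SOURCES__STRAPI__DOMAIN"])
```

This is convenient for CI or container deployments, where checking a `secrets.toml` into the environment is not an option.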

## Run the pipeline

1. Before running the pipeline, ensure that you have installed all the necessary dependencies by
running the command:

   ```bash
   pip install -r requirements.txt
   ```

1. You're now ready to run the pipeline! To get started, run the following command:

   ```bash
   python3 strapi_pipeline.py
   ```

> In the provided script, we've included a list with one endpoint, "athletes." Simply add any
> other endpoints from your Strapi setup to this list in order to load them. Then, execute this
> file to initiate the data loading process.

1. Once the pipeline has finished running, you can verify that everything loaded correctly by using
the following command:

   ```bash
   dlt pipeline <pipeline_name> show
   ```

   For example, the `pipeline_name` for the above pipeline example is `strapi`; you may also use
   any custom name instead.

For more information, read the [Walkthrough: Run a pipeline](../../walkthroughs/run-a-pipeline).

## Sources and resources

`dlt` works on the principle of [sources](../../general-usage/source) and
[resources](../../general-usage/resource).

### Source `strapi_source`

This function retrieves data from Strapi.

```python
@dlt.source
def strapi_source(
    endpoints: List[str],
    api_secret_key: str = dlt.secrets.value,
    domain: str = dlt.secrets.value,
) -> Iterable[DltResource]:
```

`endpoints`: Collections to fetch data from.

`api_secret_key`: API secret key for authentication; defaults to dlt secrets.

`domain`: Strapi API domain name; defaults to dlt secrets.
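To illustrate the shape of data such a source yields, here is a minimal, self-contained sketch. The `fake_fetch` helper is hypothetical and stands in for paginated requests to the Strapi REST API; it is not the verified source's actual implementation:

```python
from typing import Dict, Iterable, List

def fake_fetch(endpoint: str) -> Iterable[dict]:
    # Hypothetical stand-in for paginated GET requests to /api/<endpoint>;
    # a real source would call the Strapi REST API using the secret key.
    sample = {"athletes": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]}
    yield from sample.get(endpoint, [])

def collect(endpoints: List[str]) -> Dict[str, list]:
    # One record stream per requested endpoint, keyed by endpoint name,
    # mirroring how the source exposes one resource per collection.
    return {endpoint: list(fake_fetch(endpoint)) for endpoint in endpoints}

print(collect(["athletes"]))
```

Each endpoint becomes its own table at the destination, so adding an endpoint to the list adds a table to the loaded dataset.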

### Create your own pipeline

If you wish to create your own pipelines, you can leverage source and resource methods from this
verified source.

1. Configure the pipeline by specifying the pipeline name, destination, and dataset as follows:
   ```python
   pipeline = dlt.pipeline(
       pipeline_name="strapi",  # Use a custom name if desired
       destination="duckdb",  # Choose the appropriate destination (e.g., duckdb, redshift, post)
       dataset_name="strapi_data"  # Use a custom name if desired
   )
   ```

1. To load the specified endpoints:

   ```python
   endpoints = ["athletes"]

   load_data = strapi_source(endpoints=endpoints)

   load_info = pipeline.run(load_data)
   # pretty print the information on data that was loaded
   print(load_info)
   ```

> We loaded the "athletes" endpoint above, which can be customized to suit our specific
> requirements.
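If you wrap the run in a helper that takes an optional endpoint list (as the pipeline example script does), the fallback needs to go on the right-hand side of `or`. The sketch below uses a hypothetical `resolve_endpoints` helper to show the safe idiom:

```python
from typing import List, Optional

def resolve_endpoints(endpoints: Optional[List[str]] = None) -> List[str]:
    # Fall back to a default endpoint list only when the caller passes
    # nothing. Note that the reversed expression, ["athletes"] or endpoints,
    # would always evaluate to ["athletes"] and silently ignore the caller's
    # list, since a non-empty list is truthy.
    return endpoints or ["athletes"]

print(resolve_endpoints())           # falls back to the default
print(resolve_endpoints(["teams"]))  # uses the caller's list
```

This keeps the script runnable with no arguments while still letting callers load a custom set of collections.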