feat: lychee link checker from built website version #855

Merged · 39 commits · Mar 8, 2024

Commits
c692d83
docs: add lychee github action for test purposes
TC-MO Jan 17, 2024
f8a7759
bump checkout action & add .lycheeignore file
TC-MO Jan 17, 2024
aa3ce3e
fix broken regex
TC-MO Jan 17, 2024
56599f6
fix typo in file name
TC-MO Jan 17, 2024
87c8ba4
add yt to ignored links
TC-MO Jan 17, 2024
4352529
add base URL flag
TC-MO Jan 17, 2024
3b7986f
add regex to ignore image links
TC-MO Jan 17, 2024
da2a79a
add webp format to ignored
TC-MO Jan 17, 2024
5d5dde3
add svg format to ignored
TC-MO Jan 17, 2024
f800ecf
set sources as start point for lychee - test
TC-MO Jan 17, 2024
7b9f40d
remove trailing slash from base arg
TC-MO Jan 17, 2024
c6d3107
Merge branch 'master' into lychee-test
TC-MO Jan 23, 2024
757dd5d
Merge branch 'master' into lychee-test
TC-MO Feb 5, 2024
926a67e
fix: use the build website version for link checking
barjin Feb 12, 2024
e03be6a
add edit in github links to ignored by lychee
TC-MO Feb 20, 2024
2b90b9f
add new ignore
TC-MO Feb 20, 2024
a4448c0
add new ignore
TC-MO Feb 22, 2024
04c9609
fix ignore
TC-MO Feb 22, 2024
22a88b2
Merge branch 'master' into feat/lychee-link-checker
TC-MO Feb 22, 2024
4af0ea2
fix broken links
TC-MO Feb 22, 2024
d865680
fix broken links * add new ignores
TC-MO Feb 26, 2024
4e0b297
fix broken links
TC-MO Feb 26, 2024
2f88c43
fix broken links
TC-MO Feb 26, 2024
3216550
test broken link
TC-MO Feb 26, 2024
78acbc2
add new ignore & new arguments
TC-MO Feb 29, 2024
991c904
Merge branch 'master' into feat/lychee-link-checker
TC-MO Feb 29, 2024
d9a30b8
fix exclude & dead links
TC-MO Feb 29, 2024
8b703fd
fix chrome web store links
TC-MO Feb 29, 2024
84c8afe
add new ignore
TC-MO Feb 29, 2024
f0b8104
add og-image to ignores
TC-MO Feb 29, 2024
d2e4d5c
fix: update the remaining links (trailing slash, random md loader qui…
barjin Mar 6, 2024
5291407
chore: don't link check node_modules
barjin Mar 6, 2024
0deb1cf
change max retries value to 6
TC-MO Mar 7, 2024
d9831c3
comment out restricted google spreadsheet link
TC-MO Mar 8, 2024
5d202b1
Merge branch 'master' into feat/lychee-link-checker
TC-MO Mar 8, 2024
70bc6d8
fix vale issues
TC-MO Mar 8, 2024
7784732
chore: fix spelling
barjin Mar 8, 2024
b55b8b6
chore: add 429 to accepted HTTP statuses
barjin Mar 8, 2024
9725cee
fix: revert the default accepted status codes
barjin Mar 8, 2024
4 changes: 2 additions & 2 deletions .github/styles/Apify/Capitalization.yml
@@ -3,8 +3,8 @@ message: "The word '%s' should always be capitalized."
ignorecase: false
level: error
tokens:
- '\bactor\b'
- '\bactors\b'
- '(?<!\W)\bactor\b'
- '(?<!\W)\bactors\b'
- '(?<!@)\bapify\b(?!-\w+)'
- '(?<!\()\bhttps?://[^\s]*\bapify\b[^\s]*\b(?!\))|(?<!\[)\bhttps?://[^\s]*\bapify\b[^\s]*\b(?!\])'

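For illustration (a hypothetical check, not part of this PR), the added `(?<!\W)` lookbehind stops the Vale rule from firing when `actor` directly follows a non-word character, such as in URL paths or handles:

```js
// Sketch: compare the old and new Capitalization.yml token patterns.
const oldRule = /\bactor\b/;
const newRule = /(?<!\W)\bactor\b/;

console.log(oldRule.test('/actor')); // true  -> old pattern flags it
console.log(newRule.test('/actor')); // false -> lookbehind now skips it
console.log(newRule.test('actor')); // true  -> a plain word is still flagged
```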
34 changes: 34 additions & 0 deletions .github/workflows/lychee.yml
@@ -0,0 +1,34 @@
name: Lychee Link Checker

on: [pull_request]

jobs:
link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Use Node.js 18
uses: actions/setup-node@v4
with:
node-version: 18
cache: 'npm'
cache-dependency-path: 'package-lock.json'
always-auth: 'true'
registry-url: 'https://npm.pkg.github.com/'
scope: '@apify-packages'

- name: Build docs
run: |
npm ci --force
npm run build
env:
APIFY_SIGNING_TOKEN: ${{ secrets.APIFY_SIGNING_TOKEN }}
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

- uses: lycheeverse/[email protected]
env:
GITHUB_TOKEN: ${{ secrets.APIFY_SERVICE_ACCOUNT_GITHUB_TOKEN }}
with:
fail: true
args: --base https://docs.apify.com --exclude-path 'build/versions.html' --max-retries 6 --verbose --no-progress --accept '100..=103,200..=299,403..=403, 429' './build/**/*.html'
9 changes: 9 additions & 0 deletions .lycheeignore
@@ -0,0 +1,9 @@
http:\/\/localhost:3000.*
https:\/\/www\.youtube.*
\.(jpg|jpeg|png|gif|bmp|webp|svg)$
https:\/\/github\.com\/apify\/apify-docs\/edit\/[^ ]*a
https:\/\/docs\.apify\.com\/assets\/[^ ]*
file:\/\/\/.*
https://chrome\.google\.com/webstore/.*
https?:\/\/(www\.)?npmjs\.com\/.*
^https://apify\.com/api/og-image.*
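Lychee treats each line of `.lycheeignore` as a regular expression. As a quick sanity check (a hypothetical snippet, not part of this PR), you can test a pattern in Node.js:

```js
// Sketch: confirm the image-extension pattern skips assets but not pages.
const imagePattern = /\.(jpg|jpeg|png|gif|bmp|webp|svg)$/;

console.log(imagePattern.test('https://example.com/logo.webp')); // true  -> ignored
console.log(imagePattern.test('https://docs.apify.com/academy')); // false -> still checked
```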
4 changes: 2 additions & 2 deletions apify-docs-theme/src/config.js
@@ -47,7 +47,7 @@ const themeConfig = ({
items: [
{
label: 'Reference',
href: `${absoluteUrl}/api/v2/`,
href: `${absoluteUrl}/api/v2`,
target: '_self',
rel: 'dofollow',
},
@@ -170,7 +170,7 @@
items: [
{
label: 'Reference',
href: `${absoluteUrl}/api/v2/`,
href: `${absoluteUrl}/api/v2`,
target: '_self',
rel: 'dofollow',
},
2 changes: 1 addition & 1 deletion sources/academy/glossary/concepts/http_cookies.md
@@ -19,4 +19,4 @@ HTTP cookies are small pieces of data sent by the server to the user's web brows
2. To make the website show location-specific data (works for websites where you could set a zip code or country directly on the page, but unfortunately doesn't work for some location-based ads).
3. To make the website less suspicious of the crawler and let the crawler's traffic blend in with regular user traffic.

For local testing, we recommend using the [**EditThisCookie**](https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg?hl=en) Chrome extension.
For local testing, we recommend using the [**EditThisCookie**](https://chrome.google.com/webstore/detail/fngmhnnpilhplaeedifhccceomclgfbg) Chrome extension.
2 changes: 1 addition & 1 deletion sources/academy/glossary/tools/modheader.md
@@ -13,7 +13,7 @@

If you read about [Postman](./postman.md), you might remember that you can use it to modify request headers before sending a request. This is great, but the main problem is that Postman can only make static requests - meaning, it is unable to load JavaScript or any [dynamic content](../concepts/dynamic_pages.md).

[ModHeader](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj?hl=en) is a Chrome extension which can be used to modify the HTTP headers of the requests you make with your browser. This means that, for example, if your scraper using a headless browser Puppeteer is being blocked due to an improper **User-Agent** header, you can use ModHeader to test the target website and quickly solve the issue.
[ModHeader](https://chrome.google.com/webstore/detail/idgpnmonknjnojddfkpgkljpfnnfcklj) is a Chrome extension which can be used to modify the HTTP headers of the requests you make with your browser. This means that, for example, if your scraper using a headless browser Puppeteer is being blocked due to an improper **User-Agent** header, you can use ModHeader to test the target website and quickly solve the issue.

Check warnings (GitHub Actions / vale) on line 16 in sources/academy/glossary/tools/modheader.md:

- [write-good.Passive] 'be used' may be passive voice. Use active voice if you can.
- [write-good.TooWordy] 'modify' is too wordy.
- [write-good.Passive] 'being blocked' may be passive voice. Use active voice if you can.
- [Microsoft.Terms] Prefer 'Personal digital assistant' over 'Agent'.
- [write-good.Weasel] 'quickly' is a weasel word!
- [Microsoft.Adverbs] Consider removing 'quickly'.

## The ModHeader interface {#interface}

2 changes: 1 addition & 1 deletion sources/academy/glossary/tools/switchyomega.md
@@ -11,7 +11,7 @@

---

SwitchyOmega is a Chrome extension for managing and switching between proxies which can be added in the [Chrome Webstore](https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif).
SwitchyOmega is a Chrome extension for managing and switching between proxies which can be added in the [Chrome Webstore](https://chrome.google.com/webstore/detail/padekgcemlokbadohgkifijomclgjgif).

Check warning (GitHub Actions / vale) on line 14 in sources/academy/glossary/tools/switchyomega.md: [write-good.Passive] 'be added' may be passive voice. Use active voice if you can.

After adding it to Chrome, you can see the SwitchyOmega icon somewhere amongst all your other Chrome extension icons. Clicking on it will display a menu, where you can select various different connection profiles, as well as open the extension's options.

@@ -22,7 +22,7 @@ Before moving on, give these valuable resources a quick lookover:
- Refamiliarize with the various available data on the [Request object](https://crawlee.dev/api/core/class/Request).
- Learn about the [`failedRequestHandler` function](https://crawlee.dev/api/browser-crawler/interface/BrowserCrawlerOptions#failedRequestHandler).
- Understand how to use the [`errorHandler`](https://crawlee.dev/api/browser-crawler/interface/BrowserCrawlerOptions#errorHandler) function to handle request failures.
- Ensure you are comfortable using [key-value stores](/sdk/js/docs/guides/data-storage#key-value-store) and [datasets](/sdk/js/docs/api/dataset#__docusaurus), and understand the differences between the two storage types.
- Ensure you are comfortable using [key-value stores](/sdk/js/docs/guides/result-storage#key-value-store) and [datasets](/sdk/js/docs/guides/result-storage#dataset), and understand the differences between the two storage types.

## Knowledge check 📝 {#quiz}

10 changes: 5 additions & 5 deletions sources/academy/platform/get_most_of_actors/actor_readme.md
@@ -26,11 +26,11 @@

## What should you add to your Actor README?

Aim for sections 1-6 below and try to include at least 300 words. You can move the sections around to some extent if it makes sense, e.g. 3 might come after 6. Consider using emojis as bullet points or otherwise trying to break up the text.
Aim for sections 1–6 below and try to include at least 300 words. You can move the sections around to some extent if it makes sense, e.g. 3 might come after 6. Consider using emojis as bullet points or otherwise trying to break up the text.

1. **What does (Actor name) do?**

- in 1-2 sentences describe what the Actor does and what it does not do
- in 1–2 sentences describe what the Actor does and what it does not do
- consider adding keywords like API, e.g. Instagram API
- always have a link to the target website in this section

@@ -43,12 +43,12 @@
3. **How much will it cost to scrape (target site)?**

- Simple text explaining what type of proxies are needed and how many platform credits (calculated mainly from consumption units) are needed for 1000 results.
- This is calculated from carrying out several runs (or from runs saved in the DB). @Zuzka can help if needed. [Information in this table](https://docs.google.com/spreadsheets/d/1NOkob1eYqTsRPTVQdltYiLUsIipvSFXswRcWQPtCW9M/edit#gid=1761542436), tab "cost of usage".
- This is calculated from carrying out several runs (or from runs saved in the DB).<!-- @Zuzka can help if needed. [Information in this table](https://docs.google.com/spreadsheets/d/1NOkob1eYqTsRPTVQdltYiLUsIipvSFXswRcWQPtCW9M/edit#gid=1761542436), tab "cost of usage". -->
- Here’s an example for this section:

> ## How much will it cost me to scrape Google Maps reviews?
>
> <br/> Apify provides you with $5 free usage credits to use every month on the Apify Free plan and you can get up to 100,000 reviews from this Google Maps Reviews Scraper for those credits. So 100k results will be completely free!
> <br/> Apify provides you with $5 free usage credits to use every month on the Apify Free plan and you can get up to 100,000 reviews from this Google Maps Reviews Scraper for those credits. This means 100k results will be completely free!

Check warning (GitHub Actions / vale) on line 51 in sources/academy/platform/get_most_of_actors/actor_readme.md: [write-good.Weasel] 'completely' is a weasel word!
> <br/> But if you need to get more data or to get your data regularly you should grab an Apify subscription. We recommend our $49/month Starter plan - you can get up to 1 million Google Maps reviews every month with the $49 monthly plan! Or 10 million with the $499 Scale plan - wow!

4. **How to scrape (target site)**
@@ -94,4 +94,4 @@

## Next up {#next}

If you followed all the tips described above, your Actor README is almost good to go! In the [next lesson](./guidelines_for_writing.md) we will give you a few instructions on how you can create a tutorial for your Actor.
If you followed all the tips described above, your Actor README is almost good to go! In the [next lesson](./guidelines_for_writing.md) we will give you a few instructions on how you can create a tutorial for your Actor.

Check warning (GitHub Actions / vale) on line 97 in sources/academy/platform/get_most_of_actors/actor_readme.md: [write-good.Weasel] 'few' is a weasel word!
@@ -9,11 +9,12 @@

---

The most popular way of [integrating](https://help.apify.com/en/collections/1669767-integrating-with-apify) the Apify platform with an external project/application is by programmatically running an [Actor](/platform/actors) or [task](/platform/actors/running/tasks), waiting for it to complete its run, then collecting its data and using it within the project. Though this process sounds somewhat complicated, it's actually quite easy to do; however, due to the plethora of features offered on the Apify platform, new users may not be sure how exactly to implement this type of integration. So, let's dive in and see how you can do it.
The most popular way of [integrating](https://help.apify.com/en/collections/1669769-integrations) the Apify platform with an external project/application is by programmatically running an [Actor](/platform/actors) or [task](/platform/actors/running/tasks), waiting for it to complete its run, then collecting its data and using it within the project. Though this process sounds somewhat complicated, it's actually quite easy to do; however, due to the plethora of features offered on the Apify platform, new users may not be sure how exactly to implement this type of integration. Let's dive in and see how you can do it.

Check warnings (GitHub Actions / vale) on line 12 in sources/academy/tutorials/api/run_actor_and_retrieve_data_via_api.md:

- [write-good.TooWordy] 'however' is too wordy.
- [write-good.Weasel] 'exactly' is a weasel word!
- [write-good.TooWordy] 'implement' is too wordy.
- [write-good.TooWordy] 'type of' is too wordy.

> Remember to check out our [API documentation](/api/v2) with examples in different languages and a live API console. We also recommend testing the API with a nice desktop client like [Postman](https://www.getpostman.com/) or [Insomnia](https://insomnia.rest).

There are 2 main ways of using the Apify API:

Apify API offers two ways of interacting with it:

- [Synchronously](#synchronous-flow)
- [Asynchronously](#asynchronous-flow)
@@ -36,7 +37,7 @@

- Some other optional settings if you'd like to change the default values (such as allocated memory or the build).

The URL for a [POST request](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST) to run an actor looks like this:
The URL of a [POST request](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST) to run an actor looks like this:

Check warning (GitHub Actions / vale) on line 40 in sources/academy/tutorials/api/run_actor_and_retrieve_data_via_api.md: [Microsoft.GeneralURL] For a general audience, use 'address' rather than 'URL'.

```cURL
https://api.apify.com/v2/acts/ACTOR_NAME_OR_ID/runs?token=YOUR_TOKEN
```
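As a rough sketch (assuming Node.js 18+ with the global `fetch`; the Actor ID, token variable, and input below are placeholders, not values from this PR), the same call from JavaScript might look like:

```js
// Hypothetical sketch: start an Actor run through the Apify API v2.
// ACTOR_NAME_OR_ID and the APIFY_TOKEN env variable are placeholders.
const token = process.env.APIFY_TOKEN;

const response = await fetch(
    `https://api.apify.com/v2/acts/ACTOR_NAME_OR_ID/runs?token=${token}`,
    {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        // The body is the Actor's input; its shape depends on the Actor.
        body: JSON.stringify({ startUrls: [{ url: 'https://example.com' }] }),
    },
);

const { data } = await response.json(); // the API wraps payloads in `data`
console.log(data.id); // ID of the newly started run
```

@@ -261,9 +262,9 @@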

By default, it will return the data in JSON format with some metadata. The actual data are in the `items` array.

There are plenty of additional parameters that you can use. You can learn about them in the [documentation](/api/v2#/reference/datasets/item-collection/get-items). We will only mention that you can pass a `format` parameter that transforms the response into popular formats like CSV, XML, Excel, RSS, etc.
You can use plenty of additional parameters; to learn more about them, visit our API reference [documentation](/api/v2#/reference/datasets/item-collection/get-items). We will only mention that you can pass a `format` parameter that transforms the response into popular formats like CSV, XML, Excel, RSS, etc.

Check warnings (GitHub Actions / vale) on line 265 in sources/academy/tutorials/api/run_actor_and_retrieve_data_via_api.md:

- [write-good.TooWordy] 'additional' is too wordy.
- [write-good.Weasel] 'only' is a weasel word!

The items are paginated, which means you can ask only for a subset of the data. Specify this using the `limit` and `offset` parameters. There is actually an overall limit of 250,000 items that the endpoint can return per request. To retrieve more, you will need to send more requests incrementing the `offset` parameter.
The items are paginated, which means you can ask only for a subset of the data. Specify this using the `limit` and `offset` parameters. This endpoint has a limit of 250,000 items that it can return per request. To retrieve more, you will need to send more requests incrementing the `offset` parameter.

Check warnings (GitHub Actions / vale) on line 267 in sources/academy/tutorials/api/run_actor_and_retrieve_data_via_api.md:

- [write-good.Passive] 'are paginated' may be passive voice. Use active voice if you can.
- [write-good.Weasel] 'only' is a weasel word!

```cURL
https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv&offset=250000
```
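As a loose illustration (a hypothetical snippet, again assuming Node.js 18+ and a placeholder `DATASET_ID`), paging through a large dataset could look like:

```js
// Sketch: fetch all dataset items in pages of 250,000 (the per-request cap).
const token = process.env.APIFY_TOKEN;
const limit = 250000;
const items = [];

for (let offset = 0; ; offset += limit) {
    const url = `https://api.apify.com/v2/datasets/DATASET_ID/items`
        + `?token=${token}&offset=${offset}&limit=${limit}`;
    const body = await (await fetch(url)).json();
    const page = body.items ?? body; // some formats return a bare array
    items.push(...page);
    if (page.length < limit) break; // last page reached
}

console.log(`Fetched ${items.length} items`);
```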
6 changes: 3 additions & 3 deletions sources/academy/tutorials/node_js/debugging_web_scraper.md
@@ -23,15 +23,15 @@

```js
document.getElementsByTagName('head')[0].appendChild(jq);
```

If that doesn't work because of CORS violation, you can install [this extension](https://chrome.google.com/webstore/detail/jquery-inject/iibfbhlfimdnkinkcenncoeejnmpemof) that injects jQuery on a button click.
If that doesn't work because of CORS violation, you can install [this extension](https://chrome.google.com/webstore/detail/ekkjohcjbjcjjifokpingdbdlfekjcgi) that injects jQuery on a button click.

There are 2 main ways how to test a pageFunction code in your console:
You can test `pageFunction` code in two ways in your console:

## Pasting and running a small code snippet

Usually, you don't need to paste in the whole pageFunction as you can simply isolate the critical part of the code you are trying to debug. You will need to remove any references to the `context` object and its properties like `request` and the final return statement but otherwise, the code should work 1:1.

I will also usually remove `const` declarations on the top level variables. This helps you to run the same code many times over without needing to restart the console (you cannot declare constants more than once). So my declaration will change from:
I will also usually remove `const` declarations on top-level variables. This helps you to run the same code many times over without needing to restart the console (you cannot declare constants more than once). My declaration will change from:

Check warnings (GitHub Actions / vale) on line 34 in sources/academy/tutorials/node_js/debugging_web_scraper.md:

- [Microsoft.FirstPerson] Use first person (such as 'I ') sparingly.
- [write-good.Weasel] 'usually' is a weasel word!
- [write-good.Weasel] 'many' is a weasel word!
- [Microsoft.FirstPerson] Use first person (such as 'My') sparingly.

```js
const results = [];
```
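For instance (an illustrative sketch, not the continuation elided above), the same declaration rewritten for repeated console runs:

```js
// Pasting `const results = [];` twice throws a redeclaration error,
// so in the console a bare assignment is used instead:
results = []; // creates/overwrites a global; safe to re-run
console.log(results); // []
```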
8 changes: 4 additions & 4 deletions sources/academy/tutorials/node_js/optimizing_scrapers.md
@@ -13,9 +13,9 @@

Especially if you are running your scrapers on [Apify](https://apify.com), performance is directly related to your wallet (or rather bank account). The slower and heavier your program is, the more proxy bandwidth, storage, [compute units](https://help.apify.com/en/articles/3490384-what-is-a-compute-unit) and higher [subscription plan](https://apify.com/pricing) you'll need.

The goal of optimization is simple: Make the code run as fast possible and use the least resources possible. On Apify, the resources are memory and CPU usage (don't forget that the more memory you allocate to a run, the bigger share of CPU you get - proportionally). Memory alone should never be a bottleneck though. If it is, that means either a bug (memory leak) or bad architecture of the program (you need to split the computation to smaller parts). So in the rest of this article, we will focus only on optimizing CPU usage. You allocate more memory only to get more power from the CPU.
The goal of optimization is simple: Make the code run as fast as possible and use the least resources possible. On Apify, the resources are memory and CPU usage (don't forget that the more memory you allocate to a run, the bigger share of CPU you get - proportionally). Memory alone should never be a bottleneck though. If it is, that means either a bug (memory leak) or bad architecture of the program (you need to split the computation to smaller parts). The rest of this article will focus only on optimizing CPU usage. You allocate more memory only to get more power from the CPU.

Check warnings (GitHub Actions / vale) on line 16 in sources/academy/tutorials/node_js/optimizing_scrapers.md:

- [write-good.TooWordy] 'allocate' is too wordy.
- [write-good.TooWordy] 'it is' is too wordy.
- [write-good.Weasel] 'only' is a weasel word!
- [write-good.TooWordy] 'allocate' is too wordy.
- [write-good.Weasel] 'only' is a weasel word!

There is one more thing. Optimization has its own cost: development time. You should always think about how much time you're able to spend on it and if it's worth it.
One more thing to remember. Optimization has its own cost: development time. You should always think about how much time you're able to spend on it and if it's worth it.

Before we dive into the practical side of things, let's diverge with an analogy to help us think about the performance of scrapers.

@@ -29,13 +29,13 @@

## Back to scrapers {#back-to-scrapers}

What are the engines of the scraping world? A [browser](https://github.com/puppeteer/puppeteer/blob/master/docs/api.md), an [HTTP library](https://www.npmjs.com/package/@apify/http-request), an [HTML parser](https://github.com/cheeriojs/cheerio), and a [JSON parser](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse). The CPU spends more than 99% of its workload in these libraries. As with engines, you are not likely gonna write these from scratch - instead you'll use something like [Crawlee](https://crawlee.dev) that handles a lot of the overheads for you.
What are the engines of the scraping world? A [browser](https://github.com/puppeteer/puppeteer?tab=readme-ov-file#puppeteer), an [HTTP library](https://www.npmjs.com/package/@apify/http-request), an [HTML parser](https://github.com/cheeriojs/cheerio), and a [JSON parser](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse). The CPU spends more than 99% of its workload in these libraries. As with engines, you are not likely gonna write these from scratch - instead you'll use something like [Crawlee](https://crawlee.dev) that handles a lot of the overheads for you.

It is about how you use these tools. The small amount of code you write in your [`requestHandler`](https://crawlee.dev/api/http-crawler/interface/HttpCrawlerOptions#requestHandler) is absolutely insignificant compared to what is running inside these tools. In other words, it doesn't matter how many functions you call or how many variables you extract. If you want to optimize your scrapers, you need to choose the lightweight option from the tools and use it as little as possible. A crawler scraping only JSON API can be as much as 200 times faster/cheaper than a browser based solution.

**Ranking of the tools from the most efficient to the least:**

1. **JSON API** (HTTP call + JSON parse) - Scraping an API (public or internal) is the best option. The response is usually smaller than the HTML page and the data are already structured and cheap to parse. Usable for about 30% of websites.
2. **Pure HTML** (HTTP call + HTML parse) - All data is on the main single HTML page. Often the HTML contains script and JSON data that are rich and nicely structured. Some pages can be quite big and the parsing is slower than for JSON. But it is still 10-20 times faster than a browser. Usable for about 90% of websites.
2. **Pure HTML** (HTTP call + HTML parse) - All data is on the main single HTML page. Often the HTML contains script and JSON data that are rich and nicely structured. Some pages can be quite big and the parsing is slower than for JSON. But it is still 10–20 times faster than a browser. Usable for about 90% of websites.

Check warnings (GitHub Actions / vale) on line 39 in sources/academy/tutorials/node_js/optimizing_scrapers.md:

- [Microsoft.Adverbs] Consider removing 'nicely'.
- [write-good.TooWordy] 'it is' is too wordy.
3. **Browser** (hundreds of HTTP calls, script execution, rendering) - Browsers are huge beasts. They do so much work to allow for smooth human interaction which makes them really inefficient for scraping. Use a browser only if it helps you bypass anti-scraping protection or you need to interact with the page.
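
As a loose sketch of option 2 (hypothetical code, not from this article; it assumes Node.js 18+ and the `cheerio` package), a plain HTTP call plus HTML parse avoids the browser entirely:

```js
// Option 2 sketch: HTTP call + HTML parse, no browser involved.
import { load } from 'cheerio';

const html = await (await fetch('https://example.com')).text();
const $ = load(html);
console.log($('title').text()); // extract the page title without a browser
```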
