Commit
Fix links and skip everything on community (#385)
jsignell authored Jul 19, 2022
1 parent 5a258bb commit 70aa04b
Showing 5 changed files with 7 additions and 4 deletions.
3 changes: 3 additions & 0 deletions .ci/check-links.sh
Original file line number Diff line number Diff line change
@@ -31,9 +31,12 @@ URLS=$(
| grep -v '{' \
| grep -v -E '\:[0-9]+$' \
| grep -v 'auth.docker.io' \
| grep -v "community.saturnenterprise.io/" \
| grep -v 'demo.saturnenterprise.io' \
| grep -v "https://github.com/saturncloud/docs/" \
| grep -v "https://github.com/saturncloud/website/" \
| grep -v "https://fonts." \
| grep -v "https://www.kaggle.com/code/fahd09/eda-of-crime-in-chicago-2005-2016/notebook" \
| grep -v "https://AA99999.us-east-2.aws.snowflakecomputing.com/console/login" \
| grep -v -E "(http|https)://[0-9]+" \
| grep -v 'localhost.' \
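The hunk above extends a pipeline of `grep -v` filters that drops URLs the link checker should skip. A minimal sketch of the same pattern, using a made-up list of URLs rather than the real output of the script:

```shell
# Hypothetical input; in check-links.sh the URLs come from scanning the docs.
URLS=$(printf '%s\n' \
  "https://example.com/docs" \
  "https://community.saturnenterprise.io/thread/1" \
  "https://github.com/saturncloud/docs/blob/main/README.md" \
  | grep -v "community.saturnenterprise.io/" \
  | grep -v "https://github.com/saturncloud/docs/")
# Only the URL matching none of the exclusion patterns survives.
echo "${URLS}"
```

Each `grep -v` pass removes lines matching its pattern, so chaining them builds up an allowlist by subtraction.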
2 changes: 1 addition & 1 deletion examples/bodo/bodo-eda-chicago-crimes.ipynb
@@ -7,7 +7,7 @@
"# Chicago Crimes - Single-Node Bodo - Jupyter Notebook\n",
"\n",
"This example shows an exploratory data analysis (EDA) of crimes in Chicago using the HPC-like platform Bodo using a notebook on a single node. Chicago crime data is extracted from Bodo's public repository, cleaned and processed. Then some analysis are done to extract insight. All are **parallelized across multiple cores using Bodo**. This can be a straightforward way to\n",
"make Python code run faster than it would otherwise without requiring much change to the code. Original example can be found [here](https://medium.com/@ahsanzafar222/chicago-crime-data-cleaning-and-eda-a744c687a291) and [here](https://www.kaggle.com/fahd09/eda-of-crime-in-chicago-2005-2016).\n",
"make Python code run faster than it would otherwise without requiring much change to the code. Original example can be found [here](https://medium.com/@ahsanzafar222/chicago-crime-data-cleaning-and-eda-a744c687a291) and [here](https://www.kaggle.com/code/fahd09/eda-of-crime-in-chicago-2005-2016/notebook).\n",
"\n",
"The Bodo framework knows when to parallelize code based on the `%%px` at the start of cells and `@bodo.jit` function decorators. Removing those and restarting the kernel will run the code without Bodo.\n",
"\n",
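As the notebook text notes, the Bodo-specific pieces are the `%%px` cell magic and the `@bodo.jit` decorators; remove those and the code runs as ordinary pandas. A toy sketch of that plain-pandas form (the frame and column values here are invented for illustration; in the real notebook such a function would carry `@bodo.jit`):

```python
import pandas as pd

# @bodo.jit  # <- in the Bodo notebook this decorator parallelizes the function
def top_crime_types(df, n=2):
    # Count incidents per primary type and keep the n most frequent
    return df["primary_type"].value_counts().head(n)

# Made-up toy data standing in for the Chicago crimes dataset
crimes = pd.DataFrame(
    {"primary_type": ["THEFT", "BATTERY", "THEFT", "ASSAULT", "THEFT", "BATTERY"]}
)
print(top_crime_types(crimes))
```

With Bodo installed, uncommenting the decorator (and restarting the kernel) is all that changes; the pandas code itself stays the same.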
2 changes: 1 addition & 1 deletion examples/julia-api/README.md
@@ -4,7 +4,7 @@

## Overview

[Genie](https://genieframework.com/) is a full-stack web framework for Julia. Genie allows you to create complex web apps, but here we are using it to make a simple API. Check out the [Genie documentation](https://genieframework.com/docs/tutorials/Overview.html) for more information.
[Genie](https://genieframework.com/) is a full-stack web framework for Julia. Genie allows you to create complex web apps, but here we are using it to make a simple API. Check out the [Genie documentation](https://genieframework.com/docs/genie/tutorials/Overview.html) for more information.

An API is a way for programs to communicate with each other. APIs work similarly to websites, but instead of a human typing in a URL and getting an HTML page back, a program sends a similar request to a URL and gets different types of data back.

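To make that request/response idea concrete, here is a toy round trip in plain Python (not Genie; the `/api/hello` endpoint and its payload are invented for illustration). A tiny local server plays the role of the API, and `urllib` is the client program:

```python
import http.server
import json
import threading
import urllib.request

class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond with JSON data instead of an HTML page
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/api/hello") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)  # {'message': 'hello'}
```

The client never sees HTML: it requests a URL and gets structured data back, which is exactly the pattern the Genie example exposes.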
2 changes: 1 addition & 1 deletion examples/rapids/02-rapids-gpu-cluster.ipynb
@@ -48,7 +48,7 @@
"Compared to the first excercise, this exercise uses a few new packages.\n",
"\n",
"* [`dask_saturn`](https://github.com/saturncloud/dask-saturn) and [`dask_distributed`](http://distributed.dask.org/en/stable/): Set up and run the Dask cluster in Saturn Cloud.\n",
"* [`dask-cudf`](https://docs.rapids.ai/api/cudf/stable/basics/dask-cudf.html): Create distributed `cudf` dataframes using Dask."
"* [`dask-cudf`](https://docs.rapids.ai/api/cudf/stable/user_guide/dask-cudf.html): Create distributed `cudf` dataframes using Dask."
]
},
{
2 changes: 1 addition & 1 deletion examples/snowflake/advanced/rf-rapids-dask.ipynb
@@ -280,7 +280,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Dask performs computations in a [lazy manner](https://tutorial.dask.org/01x_lazy.html), so we persist the dataframe to perform data loading and feature processing and load into GPU memory."
"Dask computations are lazy - nothing is computed until we explicitly call `.compute()`. Sometimes that means that intermediary results don't get stored and instead get recomputed. We can keep track of certain data and eagerly start computation by _persisting_ the dataframe. Calling `.persist()` tells Dask to perform data loading and feature processing and load the data into GPU memory."
]
},
{
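The reworded cell above distinguishes lazy evaluation from persisting. A rough plain-Python analogy (no Dask involved; `expensive_load` is a made-up stand-in for the notebook's data loading and feature processing):

```python
calls = {"n": 0}

def expensive_load():
    calls["n"] += 1              # stands in for loading + feature processing
    return [x * 2 for x in range(5)]

# "Lazy": holding a recipe, so the work reruns on every use
lazy = expensive_load
lazy(); lazy()
recomputed = calls["n"]          # the work ran twice

# "Persist": evaluate once up front, then reuse the stored result
calls["n"] = 0
persisted = expensive_load()
_ = persisted; _ = persisted     # no recomputation on reuse

print(recomputed, calls["n"])    # prints: 2 1
```

Dask's `.persist()` plays the second role: it triggers the computation immediately and keeps the result (here, in GPU memory) so downstream steps reuse it instead of rebuilding it.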
