From e44d4aadf531b6366ebeed6d6e0e642a54c3fc1d Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Thu, 7 Nov 2024 14:41:37 +0000
Subject: [PATCH] site deploy

Auto-generated via `{sandpaper}`
Source : 1e20ed488dc98158b024171cc3c6bf1161ae9533
Branch : md-outputs
Author : GitHub Actions
Time : 2024-11-07 14:41:23 +0000
Message : markdown source builds

Auto-generated via `{sandpaper}`
Source : a5a9c9aeca97ec7cc150ab73735d7b5d96b5fa70
Branch : main
Author : Chris Endemann
Time : 2024-11-07 14:40:37 +0000
Message : Update Training-models-in-SageMaker-notebooks.md
---
 Training-models-in-SageMaker-notebooks.html             | 12 ++++--------
 aio.html                                                | 12 ++++--------
 .../Training-models-in-SageMaker-notebooks.html         | 12 ++++--------
 instructor/aio.html                                     | 12 ++++--------
 md5sum.txt                                              |  2 +-
 pkgdown.yml                                             |  2 +-
 6 files changed, 18 insertions(+), 34 deletions(-)

diff --git a/Training-models-in-SageMaker-notebooks.html b/Training-models-in-SageMaker-notebooks.html
index 503f770..c99bd52 100644
--- a/Training-models-in-SageMaker-notebooks.html
+++ b/Training-models-in-SageMaker-notebooks.html
@@ -1452,8 +1452,7 @@

Cost of distributed computing

Key steps in distributed training with XGBoost

-1. Data partitioning -
-
+1. Data partitioning

  • The dataset is divided among multiple instances. For example, with two instances, each instance may receive half of the dataset.
  • In SageMaker, data partitioning across instances is handled automatically, reducing manual setup (see the estimator sketch after this list of steps).

@@ -1461,8 +1460,7 @@
-2. Parallel gradient boosting -
-
+2. Parallel gradient boosting

  • XGBoost performs gradient boosting by constructing trees iteratively based on calculated gradients.
  • Each instance calculates gradients (first-order derivatives) and Hessians (second-order derivatives) of the loss on its own partition of the data (see the worked example after this list of steps).

@@ -1473,8 +1471,7 @@
-3. Communication between instances -
-
+3. Communication between instances

  • After computing gradients and Hessians locally, instances synchronize to share and combine these values.
  • Synchronization keeps the model parameters consistent across multiple instances (see the synchronization sketch after this list of steps).

@@ -1485,8 +1482,7 @@
-4. Final model aggregation -
-
+4. Final model aggregation

  • Once training completes, XGBoost aggregates the trained trees from each instance into a single final model.
  • This aggregation enables the final model to perform as though it had been trained on the full dataset.
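
The following is a minimal sketch of how step 1 (and the two-instance setup generally) can look with SageMaker's built-in XGBoost container. It is illustrative only: the bucket, data prefix, IAM role ARN, instance type, container version, and hyperparameter values are placeholders rather than values taken from this lesson, and the lesson's own notebook may configure the estimator differently (for example, via script mode).

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Built-in XGBoost container image for this region (version is a placeholder).
xgb_image = sagemaker.image_uris.retrieve("xgboost", region, version="1.7-1")

estimator = Estimator(
    image_uri=xgb_image,
    role=role,
    instance_count=2,                          # two instances -> distributed training
    instance_type="ml.m5.large",
    output_path="s3://my-bucket/xgb-output/",  # placeholder bucket
    sagemaker_session=session,
    hyperparameters={"objective": "reg:squarederror", "num_round": 100},
)

# ShardedByS3Key splits the S3 objects under the prefix across the two
# instances, so each instance trains on roughly half of the data. The default,
# FullyReplicated, would instead copy every object to both instances.
train_input = TrainingInput(
    s3_data="s3://my-bucket/train/",           # prefix holding several data shards
    content_type="csv",
    distribution="ShardedByS3Key",
)

estimator.fit({"train": train_input})

Note that ShardedByS3Key shards at the level of S3 objects, so the training prefix needs at least as many objects as instances for every instance to receive data.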
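
To make step 2 concrete, here is a small NumPy sketch of the per-instance computation for binary logistic loss. It is a conceptual illustration, not XGBoost's internal code; the helper name local_grad_hess and the sample arrays are made up for this example.

import numpy as np

def local_grad_hess(y_true, raw_margin):
    """Gradients and Hessians of the logistic loss for one data partition."""
    p = 1.0 / (1.0 + np.exp(-raw_margin))  # predicted probability
    grad = p - y_true                      # first-order derivative
    hess = p * (1.0 - p)                   # second-order derivative
    return grad, hess

# Each instance runs this on its own slice of the data; with an initial raw
# margin of 0, every row contributes grad = 0.5 - y and hess = 0.25.
y_partition = np.array([0.0, 1.0, 1.0, 0.0])
margin_partition = np.zeros_like(y_partition)
grad, hess = local_grad_hess(y_partition, margin_partition)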
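
Steps 3 and 4 can be pictured as an AllReduce over the per-instance gradient/Hessian sums followed by a split-gain calculation on the combined totals. The sketch below uses plain Python sums and made-up per-instance numbers in place of XGBoost's actual AllReduce communication; the gain formula is the standard XGBoost structure score (regularization term lam assumed, complexity penalty omitted).

# Per-instance sums of gradients and Hessians for one candidate split,
# with made-up values for a two-instance job.
local_stats = [
    {"grad_left": -1.2, "hess_left": 0.8, "grad_right": 0.5, "hess_right": 0.6},  # instance 1
    {"grad_left": -0.7, "hess_left": 0.9, "grad_right": 0.9, "hess_right": 0.7},  # instance 2
]

# "AllReduce": element-wise sum across instances, after which every instance
# holds the same global statistics.
global_stats = {key: sum(stats[key] for stats in local_stats) for key in local_stats[0]}

def score(g, h, lam=1.0):
    """Structure score term used in the split gain (L2 penalty lam)."""
    return g * g / (h + lam)

# Gain computed from the combined statistics is identical on every instance,
# so all instances grow the same tree, and the trees aggregate into a single
# final model that behaves as if trained on the full dataset.
gain = 0.5 * (
    score(global_stats["grad_left"], global_stats["hess_left"])
    + score(global_stats["grad_right"], global_stats["hess_right"])
    - score(global_stats["grad_left"] + global_stats["grad_right"],
            global_stats["hess_left"] + global_stats["hess_right"])
)
print(round(gain, 3))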

diff --git a/aio.html b/aio.html
index d86867b..5c5153b 100644
--- a/aio.html
+++ b/aio.html
(hunks omitted: the same four heading fixes shown above, repeated in the all-in-one page)

diff --git a/instructor/Training-models-in-SageMaker-notebooks.html b/instructor/Training-models-in-SageMaker-notebooks.html
index b6ded3d..fdc809b 100644
--- a/instructor/Training-models-in-SageMaker-notebooks.html
+++ b/instructor/Training-models-in-SageMaker-notebooks.html
(hunks omitted: same changes as above, instructor view)

diff --git a/instructor/aio.html b/instructor/aio.html
index 1631ef4..3a37971 100644
--- a/instructor/aio.html
+++ b/instructor/aio.html
(hunks omitted: same changes as above, instructor all-in-one view)
diff --git a/md5sum.txt b/md5sum.txt
index 0cb8f66..bde982a 100644
--- a/md5sum.txt
+++ b/md5sum.txt
@@ -9,7 +9,7 @@
 "episodes/SageMaker-notebooks-as-controllers.md" "7b44f533d49559aa691b8ab2574b4e81" "site/built/SageMaker-notebooks-as-controllers.md" "2024-11-06"
 "episodes/Accessing-S3-via-SageMaker-notebooks.md" "6f7c3a395851fe00f63e7eb44e553830" "site/built/Accessing-S3-via-SageMaker-notebooks.md" "2024-11-06"
 "episodes/Interacting-with-code-repo.md" "105dace64e3a1ea6570d314e4b3ccfff" "site/built/Interacting-with-code-repo.md" "2024-11-06"
-"episodes/Training-models-in-SageMaker-notebooks.md" "df102099945f116048ff948364a2ec0a" "site/built/Training-models-in-SageMaker-notebooks.md" "2024-11-07"
+"episodes/Training-models-in-SageMaker-notebooks.md" "513c99991e6d9d5ceb4da7f021af74ec" "site/built/Training-models-in-SageMaker-notebooks.md" "2024-11-07"
 "episodes/Training-models-in-SageMaker-notebooks-part2.md" "a508320d07314a39d83b9b4c8114e92b" "site/built/Training-models-in-SageMaker-notebooks-part2.md" "2024-11-07"
 "episodes/Hyperparameter-tuning.md" "c9fe9c20d437dc2f88315438ac6460db" "site/built/Hyperparameter-tuning.md" "2024-11-07"
 "instructors/instructor-notes.md" "cae72b6712578d74a49fea7513099f8c" "site/built/instructor-notes.md" "2023-03-16"

diff --git a/pkgdown.yml b/pkgdown.yml
index 1dd6e94..839a2d1 100644
--- a/pkgdown.yml
+++ b/pkgdown.yml
@@ -2,4 +2,4 @@ pandoc: 3.1.11
 pkgdown: 2.1.1
 pkgdown_sha: ~
 articles: {}
-last_built: 2024-11-07T14:39Z
+last_built: 2024-11-07T14:41Z