Fix link to job configuration documentation (logicalclocks#425)
SirOibaf authored and davitbzh committed Dec 5, 2024
1 parent b31e9ff commit 32eaa78
Showing 3 changed files with 12 additions and 12 deletions.
6 changes: 3 additions & 3 deletions python/hsfs/feature_group.py
@@ -2703,7 +2703,7 @@ def save(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
@@ -2892,7 +2892,7 @@ def insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
@@ -3055,7 +3055,7 @@ def multi_part_insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
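The `write_options` entries described in the docstrings above (`spark` and `wait_for_job`) can be sketched as follows. This is a minimal, hypothetical helper: `build_write_options` is not part of hsfs, and a plain dict stands in for the `hsfs.core.job_configuration.JobConfiguration` object so the sketch runs without a Hopsworks connection.

```python
def build_write_options(job_config=None, wait_for_job=True):
    """Assemble a write_options dict of the shape the docstrings describe.

    Illustrative only: `job_config` is a stand-in for a JobConfiguration
    object; here any mapping is accepted so the example is self-contained.
    """
    opts = {"wait_for_job": wait_for_job}
    if job_config is not None:
        # key `spark` carries the configuration of the Hopsworks Job
        # used to write data into the feature group
        opts["spark"] = job_config
    return opts

# Example: configure the write job and return without waiting for it.
opts = build_write_options(
    job_config={"spark.executor.memory": "4g"},  # placeholder values
    wait_for_job=False,
)
```

A dict like `opts` would then be passed as the `write_options` argument of `save`, `insert`, or `multi_part_insert`.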
14 changes: 7 additions & 7 deletions python/hsfs/feature_view.py
@@ -1421,7 +1421,7 @@ def create_training_data(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -1686,7 +1686,7 @@ def create_train_test_split(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -1947,7 +1947,7 @@ def create_train_validation_test_split(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -2059,7 +2059,7 @@ def recreate_training_dataset(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -2178,7 +2178,7 @@ def training_data(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`.
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
@@ -2341,7 +2341,7 @@ def train_test_split(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
@@ -2544,7 +2544,7 @@ def train_validation_test_split(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
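The training-dataset docstrings above describe further option keys: `use_spark`, `arrow_flight_config`, and `spark`. A small sketch of how such an options dict might be merged over defaults, under stated assumptions: the `merge_options` helper and the default values shown are illustrative, not part of hsfs, and the timeout value is a placeholder taken from the docstring example.

```python
# Hypothetical defaults; hsfs's actual defaults may differ.
DEFAULT_OPTIONS = {"use_spark": False, "wait_for_job": True}

def merge_options(overrides=None):
    """Overlay user-supplied option entries on top of the defaults."""
    merged = dict(DEFAULT_OPTIONS)
    merged.update(overrides or {})
    return merged

opts = merge_options({
    # materialize the training dataset with Spark instead of the
    # Hopsworks Feature Query Service
    "use_spark": True,
    # arrow flight configuration, as in the docstring example
    "arrow_flight_config": {"timeout": 900},
})
```

Keys not overridden (here `wait_for_job`) keep their default values, which mirrors how per-call options typically layer over engine defaults.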
4 changes: 2 additions & 2 deletions python/hsfs/training_dataset.py
@@ -624,7 +624,7 @@ def save(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -690,7 +690,7 @@ def insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the insert call should return only
