Commit

Fix link to job configuration documentation (logicalclocks#425)

SirOibaf committed Dec 5, 2024
1 parent 03fc722 commit df567ca
Showing 3 changed files with 12 additions and 12 deletions.
6 changes: 3 additions & 3 deletions python/hsfs/feature_group.py
@@ -2694,7 +2694,7 @@ def save(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
@@ -2883,7 +2883,7 @@ def insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
@@ -3046,7 +3046,7 @@ def multi_part_insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to write data into the
feature group.
* key `wait_for_job` and value `True` or `False` to configure
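The `write_options` entries described in the docstrings above can be sketched as follows. This is a hedged illustration, not the library's documented example: `fg` and `df` are hypothetical names, and the `JobConfiguration` keyword arguments are an assumption, so check the linked Jobs documentation for the real parameters.

```python
# Minimal sketch of write_options for feature_group.insert with the `python`
# engine, per the docstring entries above. `fg` and `df` are placeholders.

write_options = {
    # "spark": JobConfiguration(...),  # hypothetical kwargs; see the Jobs docs
    "wait_for_job": False,  # return as soon as the materialization job is submitted
}

# fg.insert(df, write_options=write_options)  # hypothetical call shape
```

Setting `wait_for_job` to `False` lets the client continue while the Hopsworks job materializes the data in the background.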
14 changes: 7 additions & 7 deletions python/hsfs/feature_view.py
@@ -1424,7 +1424,7 @@ def create_training_data(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -1703,7 +1703,7 @@ def create_train_test_split(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -1979,7 +1979,7 @@ def create_train_validation_test_split(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -2103,7 +2103,7 @@ def recreate_training_dataset(
* key `use_spark` and value `True` to materialize training dataset
with Spark instead of [Hopsworks Feature Query Service](https://docs.hopsworks.ai/latest/setup_installation/common/arrow_flight_duckdb/).
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -2222,7 +2222,7 @@ def training_data(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`.
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
@@ -2385,7 +2385,7 @@ def train_test_split(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
@@ -2588,7 +2588,7 @@ def train_validation_test_split(
* key `"arrow_flight_config"` to pass a dictionary of arrow flight configurations.
For example: `{"arrow_flight_config": {"timeout": 900}}`
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
Defaults to `{}`.
spine: Spine dataframe with primary key, event time and
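The feature view docstrings above list two distinct option dictionaries: `write_options` for the `create_*` methods (with `use_spark`) and `read_options` for the in-memory `training_data`/`train_*_split` methods (with `arrow_flight_config`). A hedged sketch, with `fv` as a hypothetical `FeatureView` and illustrative values only:

```python
# Materialize a training dataset with Spark instead of the
# Hopsworks Feature Query Service, per the docstring entries above:
create_options = {"use_spark": True}
# version, job = fv.create_training_data(..., write_options=create_options)

# Read in memory with a longer Arrow Flight timeout
# (900 seconds, matching the docstring's own example):
read_options = {"arrow_flight_config": {"timeout": 900}}
# X_train, y_train = fv.training_data(read_options=read_options)
```

The commented calls show the intended shape only; the actual signatures are documented in the feature view API reference.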
4 changes: 2 additions & 2 deletions python/hsfs/training_dataset.py
@@ -624,7 +624,7 @@ def save(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the save call should return only
@@ -690,7 +690,7 @@ def insert(
When using the `python` engine, write_options can contain the
following entries:
* key `spark` and value an object of type
- [hsfs.core.job_configuration.JobConfiguration](../job_configuration)
+ [hsfs.core.job_configuration.JobConfiguration](../jobs/#jobconfiguration)
to configure the Hopsworks Job used to compute the training dataset.
* key `wait_for_job` and value `True` or `False` to configure
whether or not the insert call should return only
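For `training_dataset.insert`, the same `write_options` entries apply; here the sketch blocks until the compute job finishes. `td` and `query` are placeholder names, not the library's documented example.

```python
# Sketch of write_options for training_dataset.insert, mirroring the
# docstring entries above. `td` and `query` are placeholders.
write_options = {"wait_for_job": True}  # block until the compute job finishes
# td.insert(query, write_options=write_options)  # hypothetical call shape
```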
