Repo documentation for recipe_engine
- archive
- assertions
- bcid_reporter
- bcid_verifier — API for interacting with Software Verifier.
- buildbucket — API for interacting with the buildbucket service.
- cas — API for interacting with cas client.
- cas_input — Simple API for handling CAS inputs to a recipe.
- change_verifier — Recipe API for LUCI Change Verifier.
- cipd — API for interacting with CIPD.
- commit_position
- context — The context module provides APIs for manipulating a few pieces of 'ambient' data that affect how steps are run.
- cq — Wrapper for CV API.
- cv — Recipe API for LUCI CV, the pre-commit testing system.
- defer — Runs a function but defers the result until a later time.
- file — File manipulation (read/write/delete/glob) methods.
- findings
- futures — Implements in-recipe concurrency via green threads.
- generator_script — A simple method for running steps generated by an external script.
- golang
- json — Methods for producing and consuming JSON.
- led — An interface to call the led tool.
- legacy_annotation — Legacy Annotation module provides support for running a command emitting legacy @@@annotation@@@ in the new luciexe mode.
- luci_analysis — API for interacting with the LUCI Analysis RPCs. This API is for calling LUCI Analysis RPCs for various aggregated info about test results.
- luci_config
- milo — API for specifying Milo behavior.
- nodejs
- path — All functions related to manipulating paths in recipes.
- platform — Mockable system platform identity functions.
- properties — Provides access to the recipes input properties.
- proto — Methods for producing and consuming protobuf data to/from steps and the filesystem.
- random — Allows randomness in recipes.
- raw_io — Provides objects for reading and writing raw data to and from steps.
- resultdb — API for interacting with the ResultDB service.
- runtime
- scheduler — API for interacting with the LUCI Scheduler service.
- service_account — API for getting OAuth2 access tokens for LUCI tasks or private keys.
- step — Step is the primary API for running steps (external programs, etc.).
- swarming
- time — Allows mockable access to the current time.
- tricium — API for Tricium analyzers to use.
- url — Methods for interacting with HTTP(s) URLs.
- uuid — Allows test-repeatable access to a random UUID.
- version — Thin API for parsing semver strings into a comparable object.
- warning — Allows recipe modules to issue warnings in simulation test.
- archive:examples/full
- assertions:tests/assert-raises
- assertions:tests/assert_count_equal
- assertions:tests/assertions
- assertions:tests/attribute_error
- assertions:tests/long_message
- assertions:tests/max_diff
- bcid_reporter:examples/usage
- bcid_verifier:tests/test-verify
- buildbucket:examples/full — This file is a recipe demonstrating the buildbucket recipe module.
- buildbucket:run/multi — Launches multiple builds at the same revision.
- buildbucket:tests/add_build_tags
- buildbucket:tests/add_step_tags
- buildbucket:tests/backend
- buildbucket:tests/backend_utilities_fail
- buildbucket:tests/build
- buildbucket:tests/cancel
- buildbucket:tests/collect
- buildbucket:tests/get
- buildbucket:tests/list_builders
- buildbucket:tests/output_commit — Tests the buildbucket module's output commit handling.
- buildbucket:tests/schedule
- buildbucket:tests/search
- cas:examples/full
- cas_input:examples/full
- change_verifier:tests/match_config
- change_verifier:tests/search
- cipd:examples/full
- cipd:tests/platform
- commit_position:examples/full
- context:examples/full
- context:tests/cwd
- context:tests/env
- context:tests/greenlet
- context:tests/infra_step
- context:tests/luci_context
- cq:examples/ordered_cls
- cq:examples/trigger_child_builds
- cq:tests/cl_group_key
- cq:tests/do_not_retry
- cq:tests/experimental
- cq:tests/inactive
- cq:tests/mode_of_run
- cq:tests/owner_is_googler
- cq:tests/reuse
- cq:tests/triggered_build_ids
- cv:examples/ordered_cls
- cv:examples/trigger_child_builds
- cv:tests/attempt_key
- cv:tests/cl_group_key
- cv:tests/cl_owner
- cv:tests/do_not_retry
- cv:tests/experimental
- cv:tests/inactive
- cv:tests/mode_of_run
- cv:tests/owner_is_googler
- cv:tests/reuse
- cv:tests/triggered_build_ids
- defer:tests/collect
- defer:tests/context
- defer:tests/non_deferred
- defer:tests/result
- defer:tests/suppressed
- engine_tests/bad_subprocess — Tests that daemons that hang on to STDOUT can't cause the engine to hang.
- engine_tests/comprehensive_ui — A fast-running recipe which comprehensively covers all StepPresentation features available in the recipe engine.
- engine_tests/config_operations — Tests that recipes can modify configuration options in various ways.
- engine_tests/early_termination — Simple recipe which runs a bunch of subprocesses which react to early termination in different ways.
- engine_tests/expect_exception — Tests that tests with a single exception are handled correctly.
- engine_tests/expect_exceptions — Tests that tests with multiple exceptions are handled correctly.
- engine_tests/failure_results — Tests that run_steps is handling recipe failures correctly.
- engine_tests/functools_partial — Engine shouldn't explode when step_test_data gets functools.partial.
- engine_tests/incorrect_recipe_result — Tests the engine's handling of incorrect recipe results.
- engine_tests/long_sleep — Simple recipe which sleeps in a subprocess forever to facilitate early termination tests.
- engine_tests/missing_start_dir — Tests that deleting the current working directory doesn't immediately fail.
- engine_tests/module_injection_site — This test serves to demonstrate the ModuleInjectionSite object on recipe modules (i.e. the .m object).
- engine_tests/multi_test_data — Tests that step_data can accept multiple specs at once.
- engine_tests/multiple_placeholders — Tests error checking around multiple placeholders in a single step.
- engine_tests/nonexistent_command
- engine_tests/placeholder_exception — Tests that placeholders can't wreck the world by exhausting the step stack.
- engine_tests/proto_output_properties — Tests that output properties can be a proto message.
- engine_tests/proto_properties
- engine_tests/recipe_paths — Tests that recipes have access to names, resources and their repo.
- engine_tests/recipe_test_data — Tests that we can pass test data to the recipe via the api.
- engine_tests/sort_properties — Tests that step presentation properties can be ordered.
- engine_tests/undeclared_method
- engine_tests/unicode
- engine_tests/whitelist_steps — Tests that step_data can accept multiple specs at once.
- file:examples/chmod
- file:examples/compute_hash
- file:examples/copy
- file:examples/copytree
- file:examples/error
- file:examples/file_hash
- file:examples/flatten_single_directories
- file:examples/glob
- file:examples/handle_json_file
- file:examples/listdir
- file:examples/raw_copy
- file:examples/read_write_proto
- file:examples/symlink
- file:examples/truncate
- findings:tests/infer_source
- findings:tests/upload_findings
- futures:examples/background_helper
- futures:examples/extreme_namespaces
- futures:examples/fan_out_in
- futures:examples/lazy_fan_out_in
- futures:examples/lazy_fan_out_in_early_abort
- futures:examples/lottasteps — This tests the engine's ability to handle many simultaneously-started steps.
- futures:examples/metadata — This tests metadata features of the Future object.
- futures:examples/result
- futures:examples/semaphore
- generator_script:examples/full
- golang:examples/full
- json:examples/full
- json:tests/add_json_log
- json:tests/unsorted — Test to assert that sort_keys=False preserves insertion order.
- led:tests/full
- led:tests/led_real_build
- led:tests/no_exist
- led:tests/trigger_build
- led:tests/trigger_build_with_payload
- legacy_annotation:examples/full
- luci_analysis:tests/query_failure_rate_test — Tests for query_failure_rate.
- luci_analysis:tests/query_stability_test — Tests for query_stability.
- luci_analysis:tests/test_generate_analysis — Tests for generate_analysis.
- luci_analysis:tests/test_generate_stability_response — Tests for generate_stability_response.
- luci_analysis:tests/test_history_query — Tests for query_failure_rate.
- luci_analysis:tests/test_lookup_bug — Tests for lookup_bug.
- luci_analysis:tests/test_query_cluster_failures — Tests for query_cluster_failures.
- luci_analysis:tests/test_query_variants — Tests for query_variants.
- luci_config:tests/full
- milo:examples/full
- nodejs:examples/full
- path:examples/full
- path:tests/cast_to_path
- path:tests/dynamic_paths
- path:tests/exists
- path:tests/test_api_legacy — Test to cover legacy aspects of PathTestApi.
- placeholder
- platform:examples/full
- properties:examples/full
- proto:tests/encode_decode
- proto:tests/placeholders
- random:tests/full
- raw_io:examples/full
- raw_io:tests/output_mismatch
- resultdb:examples/exonerate
- resultdb:examples/get_included_invocations
- resultdb:examples/get_invocation_instructions
- resultdb:examples/include
- resultdb:examples/query
- resultdb:examples/query_new_test_variants
- resultdb:examples/query_test_result_statistics
- resultdb:examples/query_test_results
- resultdb:examples/query_test_variants
- resultdb:examples/resultsink
- resultdb:examples/test_presentation
- resultdb:examples/test_presentation_default
- resultdb:examples/update_invocation
- resultdb:examples/upload_invocation_artifacts
- runtime:tests/full
- scheduler:examples/emit_triggers — This file is a recipe demonstrating emitting triggers to LUCI Scheduler.
- scheduler:examples/info — This file is a recipe demonstrating reading/mocking scheduler host.
- scheduler:examples/triggers — This file is a recipe demonstrating reading triggers of the current build.
- service_account:examples/full
- step:examples/full
- step:tests/active_result
- step:tests/drop_expectation
- step:tests/empty
- step:tests/inject_paths
- step:tests/nested
- step:tests/raise_on_failure
- step:tests/stdio
- step:tests/step_call_args
- step:tests/step_cost
- step:tests/sub_build
- step:tests/timeout
- swarming:examples/full
- swarming:examples/this_task
- swarming:tests/collect_errors
- swarming:tests/copy
- swarming:tests/list_bots
- swarming:tests/realms
- swarming:tests/task_request_from_jsonish
- swarming:tests/task_result_from_jsonish
- time:examples/full
- time:examples/jitter
- tricium:examples/add_comment
- tricium:examples/wrapper — An example of a recipe wrapping legacy analyzers.
- tricium:tests/add_comment_validation
- tricium:tests/enforce_comments_num_limit
- url:examples/full
- url:tests/join
- url:tests/validate_url
- uuid:examples/full
- version:examples/full
- warning:tests/fakes — This is a fake recipe to trick the simulation and make it believe that this module has tests.
recipe_modules / archive
DEPS: json, path, platform, step
class ArchiveApi(RecipeApi):
Provides steps to manipulate archive files (tar, zip, etc.).
— def extract(self, step_name: str, archive_file: (config_types.Path | str), output: (config_types.Path | str), mode: str='safe', include_files: Sequence[str]=(), archive_type: (str | None)=None):
Step to uncompress |archive_file| into |output| directory.
Archive will be unpacked to |output| so that root of an archive is in |output|, i.e. archive.tar/file.txt will become |output|/file.txt.
Step will FAIL if |output| already exists.
Args:
- step_name (str): display name of a step.
- archive_file (Path): path to an archive file to uncompress, MUST exist.
- output (Path): path to a directory to unpack to. The output directory MAY exist, in which case the extract will unpack on-top-of the existing files. It's an error for one of the extracted files to overlap with an already-present file, however.
- mode (str): Must be either 'safe' or 'unsafe'. In safe mode, if the archive attempts to extract files which would escape the extraction output location, the extraction will fail (raise StepException), which contains a member StepException.archive_skipped_files (all other files will be extracted normally). If 'unsafe', then tarfiles containing paths escaping output will be extracted as-is.
- include_files (List[str]): A list of globs matching files within the archive. Any files not matching any of these globs will be skipped. If omitted, all files are extracted (the default). Globs are matched with the fnmatch module. If a file "filename" exists in the archive, include_files with "file*" will match it. All paths for the matcher are converted to posix style (forward slash).
- archive_type (str): archive_file's archive type ("zip" or "tar"). This allows overriding the default detected type (based on file extension).
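For illustration, a minimal sketch of a safe-mode extraction; the archive path, step name, and glob below are hypothetical:

    def RunSteps(api):
      archive_file = api.path.start_dir / 'downloads' / 'release.tar.gz'
      out_dir = api.path.start_dir / 'unpacked'
      # Safe mode (the default) raises StepException if the archive would
      # write files outside of out_dir.
      api.archive.extract(
          'extract release',
          archive_file,
          out_dir,
          include_files=['bin/*'],  # skip everything not matching bin/*
      )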
— def package(self, root: config_types.Path):
Returns Package object that can be used to compress a set of files.
Usage:
    # Archive root/file and root/directory/**
    (api.archive.package(root).
        with_file(root / 'file').
        with_dir(root / 'directory').
        archive('archive step', output, 'tbz'))

    # Archive root/**
    zip_path = (
        api.archive.package(root).
        archive('archive step', api.path.start_dir / 'output.zip')
    )
Args:
- root: a directory that would become root of a package, all files added to an archive must be Paths which are under this directory. If no files or directories are added with 'with_file' or 'with_dir', the entire root directory is packaged.
Returns: Package object.
recipe_modules / assertions
class AssertionsApi(RecipeApi):
Provides access to the assertion methods of the python unittest module.
Asserting non-step aspects of code (return values, non-step side effects) is expressed more naturally by making assertions within the RunSteps function of the test recipe. This api provides access to the assertion methods of unittest.TestCase to be used within test recipes.
All non-deprecated assertion methods of unittest.TestCase can be used.
An enhancement to the assertion methods is that if a custom msg is used, values for the non-msg arguments can be substituted into the message using named substitution with the format method of strings. e.g. api.assertions.assertEqual(0, 1, '{first} should be {second}') will raise an AssertionError with the message: '0 should be 1'.
The attributes longMessage and maxDiff are supported and have the same behavior as the unittest module.
Example (.../recipe_modules/my_module/tests/foo.py):
    DEPS = [
        'my_module',
        'recipe_engine/assertions',
        'recipe_engine/properties',
        'recipe_engine/runtime',
    ]

    def RunSteps(api):
      '''Behavior of foo depends on whether build is experimental'''
      value = api.my_module.foo()
      expected_value = api.properties.get('expected_value')
      api.assertions.assertEqual(value, expected_value)

    def GenTests(api):
      yield (
          api.test('basic')
          + api.properties(expected_value='normal value')
      )
      yield (
          api.test('experimental')
          + api.properties(expected_value='experimental value')
          + api.runtime(is_experimental=True)
      )
recipe_modules / bcid_reporter
DEPS: cipd, path, properties, step
class BcidReporterApi(RecipeApi):
API for interacting with Provenance server using the broker tool.
@property
— def bcid_reporter_path(self):
Returns the path to the broker binary.
When the property is accessed the first time, the latest stable, released broker will be installed using cipd.
— def report_cipd(self, digest, pkg, iid, server_url=None):
Reports cipd digest to local provenance server.
This is used to report a produced artifact's hash and metadata to the provenance service, which uses them to generate provenance.
Args:
- digest (str) - The hash of the artifact.
- pkg (str) - Name of the cipd package built.
- iid (str) - Instance ID of the package.
- server_url (Optional[str]) - URL for the local provenance server, the broker tool will use default if not specified.
— def report_gcs(self, digest, guri, server_url=None):
Reports gcs digest to local provenance server.
This is used to report a produced artifact's hash and metadata to the provenance service, which uses them to generate provenance.
Args:
- digest (str) - The hash of the artifact.
- guri (str) - Name of the GCS artifact built. This is the unique GCS URI, e.g. gs://bucket/path/to/binary.
- server_url (Optional[str]) - URL for the local provenance server, the broker tool will use default if not specified.
— def report_sbom(self, digest, guri, sbom_subjects=[], server_url=None):
Reports SBOM gcs digest to local provenance server.
This is used to report the SBOM metadata to provenance, along with the hash of the artifact it represents. It is also used to generate provenance.
Args:
- digest (str) - The hash of the SBOM.
- guri (str) - This is the unique GCS URI for the SBOM, e.g. gs://bucket/path/to/sbom.
- sbom_subjects (str list or str) - The hash values corresponding to the artifacts that this SBOM covers.
- server_url (Optional[str]) - URL for the local provenance server, the broker tool will use default if not specified.
— def report_stage(self, stage, server_url=None):
Reports task stage to local provenance server.
Args:
- stage (str) - The stage at which task is executing currently, e.g. "start". Concept of task stage is native to Provenance service, this is a way of self-reporting phase of a task's lifecycle. This information is used in conjunction with process-inspected data to make security policy decisions. Valid stages: (start, fetch, compile, upload, upload-complete, test).
- server_url (Optional[str]) - URL for the local provenance server, the broker tool will use default if not specified.
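A sketch of a typical reporting flow, assuming a build that produces a single CIPD package; the digest, package name, and instance ID are placeholders:

    def RunSteps(api):
      # Self-report the task stage before doing any work.
      api.bcid_reporter.report_stage('start')
      # ... build and register the package, producing a hash and instance ID ...
      api.bcid_reporter.report_cipd(
          'deadbeef' * 8,                 # placeholder artifact hash
          'example/package/linux-amd64',  # hypothetical package name
          'example-instance-id',          # hypothetical instance ID
      )
      api.bcid_reporter.report_stage('upload-complete')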
recipe_modules / bcid_verifier
API for interacting with Software Verifier.
To successfully authenticate to this API, you must have the https://www.googleapis.com/auth/bcid_verify OAuth scope.
class BcidVerifierApi(RecipeApi):
API for interacting with Software Verifier
@property
— def bcid_verifier_path(self):
Returns the path to the bcid_verifier binary.
When the property is accessed the first time, the latest stable, released version of bcid_verifier will be installed using CIPD.
— def verify_provenance(self, bcid_policy: str, artifact_path: str, attestation_path: str, log_only_mode: bool=False):
Calls the BCID Software Verifier API to verify provenance for an artifact.
Args:
- bcid_policy: Name of the BCID policy to verify provenance with.
- artifact_path: Local file path to the artifact to be verified.
- attestation_path: Local file path to the attestation (intoto.jsonl) file for the provided artifact.
- log_only_mode: Whether to verify provenance in log only mode, and skip enforcement. Enforcement fails closed, and if unable to receive a response from Software Verifier, it will constitute a rejection. In log only mode, a failed request or a failure to verify will not be considered a failure.
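A sketch of calling the verifier; the policy name and file paths are placeholders:

    def RunSteps(api):
      api.bcid_verifier.verify_provenance(
          bcid_policy='example_policy',                       # placeholder
          artifact_path='/path/to/artifact',                  # placeholder
          attestation_path='/path/to/artifact.intoto.jsonl',  # placeholder
          # In log-only mode a failed request or failed verification is
          # reported but does not fail the build.
          log_only_mode=True,
      )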
recipe_modules / buildbucket
DEPS: json, path, platform, raw_io, resultdb, runtime, step, uuid, warning
API for interacting with the buildbucket service.
Requires the buildbucket command in $PATH: https://godoc.org/go.chromium.org/luci/buildbucket/client/cmd/buildbucket
class BuildbucketApi(RecipeApi):
A module for interacting with buildbucket.
— def add_tags_to_current_build(self, tags: list[common_pb2.StringPair]):
Adds arbitrary tags during the runtime of a build.
Args:
- tags: tags to add. May contain duplicates. Empty tag values won't remove existing tags with matching keys, since tags can only be added.
@property
— def backend_hostname(self):
Returns the backend hostname for the build. If it is a legacy Swarming build, the swarming hostname will be returned.
@property
— def backend_task_dimensions(self):
Returns the task dimensions used by the task for the build.
— def backend_task_dimensions_from_build(self, build: (build_pb2.Build | None)=None):
Returns the task dimensions for the provided build. If no build is provided, then self.build will be used.
@property
— def backend_task_id(self):
Returns the task id of the task for the build.
— def backend_task_id_from_build(self, build: (build_pb2.Build | None)=None):
Returns the task id of the task for the provided build. If no build is provided, then self.build will be used.
@property
— def bucket_v1(self):
Returns bucket name in v1 format.
Mostly useful for scheduling new builds using v1 API.
@property
— def build(self):
Returns the current build as a buildbucket.v2.Build protobuf message.
For the value format, see the Build message in build.proto.
DO NOT MODIFY the returned value.
Do not implement conditional logic on returned tags; they are for indexing. Use the returned build.input instead.
Pure Buildbot support: to simplify the transition to buildbucket, returns a message even if the current build is not a buildbucket build. Provides as much information as possible. Some fields may be left empty, violating the rules described in the .proto files.
If the current build is not a buildbucket build, the returned build.id is 0.
— def build_url(self, host: (str | None)=None, build_id: ((int | str) | None)=None):
Returns url to a build. Defaults to current build.
@property
— def builder_cache_path(self):
Path to the builder cache directory.
Such directory can be used to cache builder-specific data. It remains on the bot from build to build. See "Builder cache" in https://chromium.googlesource.com/infra/luci/luci-go/+/main/buildbucket/proto/project_config.proto
@property
— def builder_full_name(self):
Returns the full builder name: {project}/{bucket}/{builder}.
@property
— def builder_name(self):
Returns the builder name. Shortcut for .build.builder.builder.
@property
— def builder_realm(self):
Returns the LUCI realm name of the current build.
Raises InfraFailure if the build proto doesn't have project or bucket set. This can happen in tests that don't properly mock the build proto.
— def builder_url(self, *, host: (str | None)=None, project: (str | None)=None, bucket: (str | None)=None, builder: (str | None)=None, build: (build_pb2.Build | None)=None):
Returns url to a builder. Defaults to current builder.
— def cancel_build(self, build_id: (int | str), reason: (str | None)=None, step_name: (str | None)=None):
Cancel the build associated with the provided build ID.
Args:
- build_id: a buildbucket build ID. It should be either an integer or the numeric value in string format (e.g. 123456789 or '123456789').
- reason: reason for canceling the given build. Markdown is supported.
Returns: None if the build is successfully canceled. Otherwise, an InfraFailure will be raised.
— def collect_build(self, build_id: str, **kwargs: Any):
Shorthand for collect_builds below, but for a single build only.
Args:
- build_id: Integer ID of the build to wait for.
Returns: The Build message for the ended build.
— def collect_builds(self, build_ids: Sequence[(int | str)], interval: (int | None)=None, timeout: (int | None)=None, step_name: (str | None)=None, raise_if_unsuccessful: bool=False, url_title_fn: (UrlTitleFunction | None)=None, mirror_status: bool=False, fields: Set[str]=DEFAULT_FIELDS, cost: (engine_types.ResourceCost | None)=None, eager: bool=False):
Waits for a set of builds to end and returns their details.
Args:
- build_ids: List of build IDs to wait for.
- interval: Delay (in secs) between requests while waiting for builds to end. Defaults to 1m.
- timeout: Maximum time to wait for builds to end. Defaults to 1h.
- step_name: Custom name for the generated step.
- raise_if_unsuccessful: if any build being collected did not succeed, raise an exception.
- url_title_fn: generates build URL title. See module docstring.
- mirror_status: mark the step as failed/infra-failed if any of the builds did not succeed. Ignored if raise_if_unsuccessful is True.
- fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
- cost: a step.ResourceCost to override for the underlying bb invocation. If not specified, the recipe_engine's default values for ResourceCost will be used.
- eager: whether to stop upon getting the first build.
Returns: A map from integer build IDs to the corresponding Build for all specified builds.
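A sketch of scheduling child builds and collecting them, assuming a builder named 'linux' exists in the current project/bucket:

    def RunSteps(api):
      req = api.buildbucket.schedule_request(builder='linux')
      builds = api.buildbucket.schedule([req])
      # Poll until the children end; fail this build if any child failed.
      results = api.buildbucket.collect_builds(
          [b.id for b in builds],
          timeout=30 * 60,            # give up after 30 minutes
          raise_if_unsuccessful=True,
      )
      for build_id, build in results.items():
        api.step.empty('build %d' % build_id, step_text=str(build.status))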
— def get(self, build_id: (int | str), url_title_fn: (UrlTitleFunction | None)=None, step_name: (str | None)=None, fields: Set[str]=DEFAULT_FIELDS, test_data: (build_pb2.Build | None)=None):
Gets a build.
Args:
- build_id: a buildbucket build ID.
- url_title_fn: generates build URL title. See module docstring.
- step_name: name for this step.
- fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
- test_data: a build_pb2.Build for use in testing.
Returns: A build_pb2.Build.
— def get_multi(self, build_ids: Sequence[(int | str)], url_title_fn: (UrlTitleFunction | None)=None, step_name: (str | None)=None, fields: Set[str]=DEFAULT_FIELDS, test_data: (Sequence[build_pb2.Build] | None)=None):
Gets multiple builds.
Args:
- build_ids: a list of build IDs.
- url_title_fn: generates build URL title. See module docstring.
- step_name: name for this step.
- fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
- test_data: a sequence of build_pb2.Build objects for use in testing.
Returns: A dict {build_id: build_pb2.Build}.
@property
— def gitiles_commit(self):
Returns the input gitiles commit. Shortcut for .build.input.gitiles_commit.
For the value format, see the GitilesCommit message.
Never returns None, but sub-fields may be empty.
— def hide_current_build_in_gerrit(self):
Hides the current build in the Gerrit UI.
@host.setter
— def host(self, value: str):
— def is_critical(self, build: (build_pb2.Build | None)=None):
Returns True if the build is critical. Build defaults to the current one.
— def list_builders(self, project: str, bucket: str, step_name: (str | None)=None):
Lists configured builders in a bucket.
Args:
- project: The name of the project to list from (e.g. 'chromeos').
- bucket: The name of the bucket to list from (e.g. 'release').
Returns: A list of builder names, excluding the project and bucket (e.g. 'betty-pi-arc-release-main').
— def run(self, schedule_build_requests: Sequence[builds_service_pb2.ScheduleBuildRequest], collect_interval: (int | None)=None, timeout: (int | None)=None, url_title_fn: (UrlTitleFunction | None)=None, step_name: (str | None)=None, raise_if_unsuccessful: bool=False, eager: bool=False):
Runs builds and returns results.
A shortcut for schedule() and collect_builds(). See their docstrings.
Returns: A list of completed Builds in the same order as schedule_build_requests.
— def schedule(self, schedule_build_requests: Sequence[builds_service_pb2.ScheduleBuildRequest], url_title_fn: (UrlTitleFunction | None)=None, step_name: (str | None)=None, include_sub_invs: bool=True):
Schedules a batch of builds.
Example:
    req = api.buildbucket.schedule_request(builder='linux')
    api.buildbucket.schedule([req])
Hint: when scheduling builds for CQ, let CQ know about them:
    api.cv.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))
Args:
- schedule_build_requests: a list of buildbucket.v2.ScheduleBuildRequest protobuf messages. Create one by calling the schedule_request method.
- url_title_fn: generates a build URL title. See module docstring.
- step_name: name for this step.
- include_sub_invs: flag for including the scheduled builds' ResultDB invocations into the current build's invocation. Default is True.
Returns: A list of Build messages in the same order as the requests.
Raises: InfraFailure if any of the requests fail.
— def schedule_request(self, builder: str, project: (str | Inherit)=INHERIT, bucket: (str | Inherit)=INHERIT, properties: Mapping[(str, Any)]=None, experimental: ((bool | common_pb2.Trinary) | Inherit)=INHERIT, experiments: (Mapping[(str, bool)] | None)=None, gitiles_commit: (common_pb2.GitilesCommit | Inherit)=INHERIT, gerrit_changes: (Sequence[common_pb2.GerritChange] | Inherit)=INHERIT, tags: (Sequence[common_pb2.StringPair] | None)=None, inherit_buildsets: bool=True, swarming_parent_run_id: (str | None)=None, dimensions: (Sequence[common_pb2.RequestedDimension] | None)=None, priority: ((int | None) | Inherit)=INHERIT, critical: ((bool | common_pb2.Trinary) | Inherit)=INHERIT, exe_cipd_version: ((str | Inherit) | None)=None, fields: Set[str]=DEFAULT_FIELDS, can_outlive_parent: (bool | None)=None, as_shadow_if_parent_is_led: bool=False, led_inherit_parent: bool=False):
Creates a new ScheduleBuildRequest message with reasonable defaults.
This is a convenience function to create a ScheduleBuildRequest message. Among the args, messages can be passed as dicts of the same structure.
Example:
    request = api.buildbucket.schedule_request(
        builder='linux',
        tags=api.buildbucket.tags(a='b'),
    )
    build = api.buildbucket.schedule([request])[0]
Args:
- builder: name of the destination builder.
- project: project containing the destination builder. Defaults to the project of the current build.
- bucket: bucket containing the destination builder. Defaults to the bucket of the current build.
- properties: input properties for the new build.
- experimental: whether the build is allowed to affect prod. Defaults to the value of the current build. Read more about the experimental field: https://cs.chromium.org/chromium/infra/go/src/go.chromium.org/luci/buildbucket/proto/build.proto?q="bool experimental"
- experiments: enabled and disabled experiments for the new build. Overrides the result computed from experiments defined in the builder config.
- gitiles_commit: input commit. Defaults to the input commit of the current build. Read more about gitiles_commit.
- gerrit_changes: list of input CLs. Defaults to the gerrit changes of the current build. Read more about gerrit_changes.
- tags: tags for the new build.
- inherit_buildsets: if True (default), the returned request will include buildset tags from the current build.
- swarming_parent_run_id: associate the new build as a child of the given swarming run id. Defaults to None, meaning no association. If passed, must be a valid swarming run id (a specific execution of a task) for the swarming instance on which the build will execute. Typically, you'd want to set it to api.swarming.task_id. Read more about parent_run_id.
- dimensions: override dimensions defined on the server.
- priority: Swarming task priority. The lower the more important. Valid values are [20..255]. Defaults to the value of the current build. Pass None to use the priority of the destination builder.
- critical: whether the build status should not be used to assess correctness of the commit/CL. Defaults to .build.critical. See also Build.critical in https://chromium.googlesource.com/infra/luci/luci-go/+/main/buildbucket/proto/build.proto
- exe_cipd_version: CIPD version of the LUCI Executable (e.g. recipe) to use. Pass None to use the server-configured one.
- fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
- can_outlive_parent: flag for whether the scheduled child build can outlive the current build (as enforced by Buildbucket; swarming_parent_run_id currently ALSO applies). Default is None. For now:
  - if luci.buildbucket.manage_parent_child_relationship is not in the current build's experiments, can_outlive_parent is always True.
  - otherwise, if can_outlive_parent is None, ScheduleBuildRequest.can_outlive_parent will be determined by swarming_parent_run_id. TODO(crbug.com/1031205): remove swarming_parent_run_id.
- as_shadow_if_parent_is_led: flag for whether to schedule the child build in the shadow bucket, with shadow adjustments applied, if the current build is in a shadow bucket. Examples:
  - if the child build inherits the parent's bucket (explicitly or implicitly):
    - if the parent is a normal build in bucket 'original', the child will also be created in bucket 'original'.
    - if the parent is a led build in bucket 'shadow', the child will also be created in bucket 'shadow'. Note: the schedule request in this case will use bucket 'original' instead of bucket 'shadow', because Buildbucket needs the 'original' bucket to find the Builder config for the child build, so it can then put the child in the 'shadow' bucket.
  - if the child build uses a different bucket from the parent, that bucket will be used in both the normal and led flows to create the child.
- led_inherit_parent: flag for whether the child led build should inherit agent_input and exe from its parent led build. It only takes effect if the parent is a led build and as_shadow_if_parent_is_led is True.
— def search(self, predicate: builds_service_pb2.BuildPredicate, limit: (int | None)=None, url_title_fn: (UrlTitleFunction | None)=None, report_build: bool=True, step_name: (str | None)=None, fields: Set[str]=DEFAULT_FIELDS, timeout: (int | None)=None, test_data: (Callable[([], Sequence[build_pb2.Build])] | None)=None):
Searches builds with one predicate.
Example: find all builds of the current CL.
    from PB.go.chromium.org.luci.buildbucket.proto import rpc as builds_service_pb2
    related_builds = api.buildbucket.search(builds_service_pb2.BuildPredicate(
        gerrit_changes=list(api.buildbucket.build.input.gerrit_changes),
    ))
Underneath it calls bb batch to perform the search, which should have better performance and memory usage than bb ls: we get the batch response as a whole, take advantage of the proto recipe module for direct encoding/decoding, and the limit can be used as the page_size in SearchBuildsRequest.
— def search_with_multiple_predicates(self, predicate: Sequence[builds_service_pb2.BuildPredicate], limit: (int | None)=None, url_title_fn: (UrlTitleFunction | None)=None, report_build: bool=True, step_name: (str | None)=None, fields: Set[str]=DEFAULT_FIELDS, timeout: (int | None)=None, test_data: (Callable[([], Sequence[build_pb2.Build])] | None)=None):
Searches for builds with multiple predicates.
Example: find all builds with one tag OR another.
    from PB.go.chromium.org.luci.buildbucket.proto import rpc as builds_service_pb2
    related_builds = api.buildbucket.search([
        builds_service_pb2.BuildPredicate(
            tags=['one.tag'],
        ),
        builds_service_pb2.BuildPredicate(
            tags=['another.tag'],
        ),
    ])
Unlike search(), it still calls bb ls to keep the overall limit working.
Args:
- predicate: a list of builds_service_pb2.BuildPredicate objects. The predicates are connected with logical OR.
- limit: max number of builds to return. Defaults to 1000.
- url_title_fn: generates a build URL title. See module docstring.
- report_build: whether to report build search results in the step presentation. Defaults to True.
- fields: a list of fields to include in the response, names relative to build_pb2.Build (e.g. ["tags", "infra.swarming"]).
- timeout: if supplied, the recipe engine will kill the step after the specified number of seconds.
- test_data: a sequence of build_pb2.Build protos for this step to return in testing.
Returns: A list of builds ordered newest-to-oldest.
— def set_output_gitiles_commit(self, gitiles_commit: common_pb2.GitilesCommit):
Sets the buildbucket.v2.Build.output.gitiles_commit field.
This will tell other systems consuming the build what version of the code was actually used in this build and what the position of this build is relative to other builds of the same builder.
Args:
- gitiles_commit: the commit that was actually checked out. Must have host, project and id. ID must match r'^[0-9a-f]{40}$' (a git revision). If position is present, the build can be ordered along commits. Position requires ref. Ref, if not empty, must start with refs/.
Can be called at most once per build.
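A sketch of reporting the checked-out commit; the host, project, revision, and position are placeholders:

    from PB.go.chromium.org.luci.buildbucket.proto import common as common_pb2

    def RunSteps(api):
      api.buildbucket.set_output_gitiles_commit(common_pb2.GitilesCommit(
          host='chromium.googlesource.com',  # placeholder host
          project='example/src',             # placeholder project
          ref='refs/heads/main',
          id='a' * 40,                       # must be a 40-char git revision
          position=123456,                   # optional; requires ref
      ))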
@property
— def shadowed_bucket(self):
@property
— def swarming_bot_dimensions(self):
Returns the swarming bot dimensions for the build.
— def swarming_bot_dimensions_from_build(self, build: (build_pb2.Build | None)=None):
Returns the swarming bot dimensions for the provided build. If no build is provided, then self.build will be used.
@property
— def swarming_parent_run_id(self):
Returns the parent_run_id (swarming specific) used in the task.
@property
— def swarming_priority(self):
Returns the priority (swarming specific) of the task.
@property
— def swarming_task_service_account(self):
Returns the swarming specific service account used in the task.
@staticmethod
— def tags(**tags: (list[str] | str)):
Alias for tags in util.py. See doc there.
— def use_service_account_key(self, key_path: (config_types.Path | str)):
Tells this module to start using given service account key for auth.
Otherwise the module uses the default account (when running on LUCI or locally), or no auth at all (when running on Buildbot).
Exists mostly to support the Buildbot environment. Recipes for the LUCI environment should not use this.
Args:
- key_path: a path to JSON file with service account credentials.
@contextlib.contextmanager
— def with_host(self, host: str):
Sets the buildbucket host while in context, then reverts it.
recipe_modules / cas
DEPS: cipd, context, file, json, path, raw_io, runtime, step
API for interacting with cas client.
class CasApi(RecipeApi):
A module for interacting with the cas client.
— def archive(self, step_name, root, *paths, log_level='info', **kwargs):
Archives given paths to a cas server.
Args:
- step_name (str): name of the step.
- root (str|Path): root directory of archived tree, should be absolute path.
- paths (list(str|Path)): path to archived files/dirs, should be absolute path. If empty, [root] will be used.
- log_level (str): logging level to use, rarely needed but helpful for debugging.
- kwargs: Additional keyword arguments to forward to "step".
Returns: digest (str): digest of uploaded root directory.
— def download(self, step_name, digest, output_dir):
Downloads a directory tree from a cas server.
Args:
- step_name (str): name of the step.
- digest (str): the digest of a cas tree.
- output_dir (Path): path to an output directory.
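A sketch of a round trip through CAS; the directory names are hypothetical:

    def RunSteps(api):
      out = api.path.start_dir / 'outputs'
      # Upload the tree rooted at `out` and get its digest back.
      digest = api.cas.archive('upload outputs', out)
      # Later (possibly in another build), fetch the same tree by digest.
      api.cas.download('fetch outputs', digest,
                       api.path.start_dir / 'fetched')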
@property
— def instance(self):
— def viewer_url(self, digest):
Returns the URL of the cas viewer for the given digest.
@contextlib.contextmanager
— def with_instance(self, instance):
Sets the CAS instance while in context, then reverts it.
recipe_modules / cas_input
Simple API for handling CAS inputs to a recipe.
Recipes sometimes need files as part of their execution which don't live in source control (for example, they're generated elsewhere but tested in the recipe). In that case, there needs to be an easy way to give these files as an input to a recipe, so that the recipe can use them somehow. This module makes this easy.
This module has input properties which contain a list of CAS inputs to download. These can easily be downloaded to disk with the 'download_caches' method, and subsequently used by the recipe in whatever manner is relevant.
class CasInputApi(RecipeApi):
A module for downloading CAS inputs to a recipe.
— def download_caches(self, output_dir, caches=None):
Downloads RBE-CAS caches and puts them in a given directory.
Args:
- output_dir: The output directory to download the caches to. If you're unsure of what directory to use, self.m.path.start_dir is a directory the recipe engine sets up for you that you can use.
- caches: A CasCache proto message containing the caches which should be downloaded. See properties.proto for the message definition. If unset, the caches in this recipe module's properties are used.
Returns: The output directory as a Path object which contains all the cache data.
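A sketch of consuming the declared caches, assuming they were supplied via this module's input properties:

    def RunSteps(api):
      # Download every cache declared in the input properties.
      cache_dir = api.cas_input.download_caches(api.path.start_dir)
      api.step('list inputs', ['ls', cache_dir])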
@property
— def input_caches(self):
recipe_modules / change_verifier
DEPS: buildbucket, cipd, cv, luci_config, proto, raw_io, step
Recipe API for LUCI Change Verifier.
LUCI Change Verifier is the pre-commit verification service that will replace CQ daemon. See: https://chromium.googlesource.com/infra/luci/luci-go/+/HEAD/cv
This recipe module depends on the prpc binary being available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc
This recipe module depends on an experimental API provided by LUCI CV and may be subject to change in the future. Please reach out to the LUCI team first if you want to use this recipe module; file a ticket at: https://bugs.chromium.org/p/chromium/issues/entry?components=Infra%3ELUCI%3EBuildService%3EPresubmit%3ECV
class ChangeVerifierApi(RecipeApi):
This module provides recipe API of LUCI Change Verifier.
— def match_config(self, host: str, change: int, project: (str | None)=None, config_name: str=cv_api.CONFIG_FILE):
Retrieve the applicable CV group for a given change.
— def search_runs(self, project: str, cls: ((Sequence[GerritChange] | GerritChange) | None)=None, limit: (int | None)=None, step_name: (str | None)=None, dev: bool=False):
Searches for Runs.
Args:
- project: LUCI project name.
- cls: CLs, specified as (host, change number) tuples. A single tuple may also be passed. All Runs returned must include all of the given CLs, and Runs may also contain other CLs.
- limit: max number of Runs to return. Defaults to 32.
- step_name: optional custom step name in RPC steps.
- dev: whether to use the dev instance of Change Verifier.
Returns: A list of CV Runs ordered newest to oldest that match the given criteria.
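A sketch of searching for Runs touching one CL; the Gerrit host and change number are placeholders:

    def RunSteps(api):
      runs = api.change_verifier.search_runs(
          'chromium',                                         # LUCI project
          cls=('chromium-review.googlesource.com', 1234567),  # placeholder CL
          limit=10,
      )
      api.step.empty('found runs', step_text='%d runs' % len(runs))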
recipe_modules / cipd
DEPS: buildbucket, context, file, futures, json, path, platform, properties, raw_io, step, url
API for interacting with CIPD.
Depends on 'cipd' binary available in PATH: https://godoc.org/go.chromium.org/luci/cipd/client/cmd/cipd
class CIPDApi(RecipeApi):
CIPDApi provides basic support for CIPD.
This assumes that cipd (or cipd.exe or cipd.bat on Windows) has been installed somewhere in $PATH.
Attributes:
- max_threads (int) - Number of worker threads for extracting packages. If 0, uses CPU count.
— def acl_check(self, pkg_path, reader=True, writer=False, owner=False):
Checks whether the caller has the given roles in a package.
Args:
- pkg_path (str) - The package subpath.
- reader (bool) - Check for READER role.
- writer (bool) - Check for WRITER role.
- owner (bool) - Check for OWNER role.
Returns True if the caller has the given roles, False otherwise.
— def add_instance_link(self, step_result):
— def build(self, input_dir, output_package, package_name, compression_level: (CompressionLevel | None)=None, install_mode: (InstallMode | None)=None, preserve_mtime: bool=False, preserve_writable: bool=False):
Builds, but does not upload, a cipd package from a directory.
Args:
- input_dir (Path) - The directory to build the package from.
- output_package (Path) - The file to write the package to.
- package_name (str) - The name of the cipd package as it would appear when uploaded to the cipd package server.
- compression_level - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
- install_mode - The mechanism that the cipd client should use when installing this package. If None, defaults to the platform default ('copy' on windows, 'symlink' on everything else).
- preserve_mtime - Preserve file's modification time.
- preserve_writable - Preserve file's writable permission bit.
Returns the CIPDApi.Pin instance.
— def build_from_pkg(self, pkg_def, output_package, compression_level: (CompressionLevel | None)=None):
Builds a package based on a PackageDefinition object.
Args:
- pkg_def (PackageDefinition) - The description of the package we want to create.
- output_package (Path) - The file to write the package to.
- compression_level - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
Returns the CIPDApi.Pin instance.
— def build_from_yaml(self, pkg_def, output_package, pkg_vars=None, compression_level: (CompressionLevel | None)=None):
Builds a package based on on-disk YAML package definition file.
Args:
- pkg_def (Path) - The path to the yaml file.
- output_package (Path) - The file to write the package to.
- pkg_vars (dict[str]str) - A map of var name -> value to use for vars referenced in package definition file.
- compression_level - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
Returns the CIPDApi.Pin instance.
@contextlib.contextmanager
— def cache_dir(self, directory):
Sets the cache dir to use with CIPD by setting the $CIPD_CACHE_DIR environment variable.
If directory is None, no cache directory will be used.
— def create_from_pkg(self, pkg_def, refs=None, tags=None, metadata=None, compression_level: (CompressionLevel | None)=None, verification_timeout=None):
Builds and uploads a package based on a PackageDefinition object.
This builds and uploads the package in one step.
Args:
- pkg_def (PackageDefinition) - The description of the package we want to create.
- refs (list[str]) - A list of ref names to set for the package instance.
- tags (dict[str]str) - A map of tag name -> value to set for the package instance.
- metadata (list[Metadata]) - A list of metadata entries to attach.
- compression_level - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
- verification_timeout (str) - Duration string that controls the time to wait for backend-side package hash verification. Valid time units are "s", "m", "h". Default is "5m".
Returns the CIPDApi.Pin instance.
— def create_from_yaml(self, pkg_def, refs=None, tags=None, metadata=None, pkg_vars=None, compression_level: (CompressionLevel | None)=None, verification_timeout=None):
Builds and uploads a package based on on-disk YAML package definition file.
This builds and uploads the package in one step.
Args:
- pkg_def (Path) - The path to the yaml file.
- refs (list[str]) - A list of ref names to set for the package instance.
- tags (dict[str]str) - A map of tag name -> value to set for the package instance.
- metadata (list[Metadata]) - A list of metadata entries to attach.
- pkg_vars (dict[str]str) - A map of var name -> value to use for vars referenced in package definition file.
- compression_level - Deflate compression level. If None, defaults to 5 (0 - disable, 1 - best speed, 9 - best compression).
- verification_timeout (str) - Duration string that controls the time to wait for backend-side package hash verification. Valid time units are "s", "m", "h". Default is "5m".
Returns the CIPDApi.Pin instance.
— def describe(self, package_name, version, test_data_refs=None, test_data_tags=None):
Returns information about a package instance given its version: who uploaded the instance and when, and a list of attached tags.
Args:
- package_name (str) - The name of the cipd package.
- version (str) - The package version to describe.
- test_data_refs (seq[str]) - The list of refs for this call to return by default when in test mode.
- test_data_tags (seq[str]) - The list of tags (in 'name:val' form) for this call to return by default when in test mode.
Returns the CIPDApi.Description instance describing the package.
— def ensure(self, root, ensure_file, name='ensure_installed'):
Ensures that packages are installed in a given root dir.
Args:
- root (Path) - Path to installation site root directory.
- ensure_file (EnsureFile|Path) - List of packages to install.
- name (str) - Step display name.
Returns the map of subdirectories to CIPDApi.Pin instances.
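A sketch of installing packages with an ensure file, assuming the EnsureFile helper exposed by this module; the package name and version are placeholders:

    def RunSteps(api):
      ensure_file = api.cipd.EnsureFile()
      ensure_file.add_package('example/tool/${platform}', 'latest')
      api.cipd.ensure(api.path.start_dir / 'packages', ensure_file)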
— def ensure_file_resolve(self, ensure_file, name='cipd ensure-file-resolve'):
Resolves versions of all packages for all verified platforms in an ensure file.
Args:
- ensure_file (EnsureFile|Path) - Ensure file to resolve.
— def ensure_tool(self, package: str, version: str, executable_path: str=None):
Downloads an executable from CIPD.
Given a package named "name/of/some_exe/${platform}" and version "someversion", this will install the package at the directory "[START_DIR]/cipd_tool/name/of/some_exe/someversion". It will then return the absolute path to the executable within that directory.
This operation is idempotent, and will only run steps to download the package if it hasn't already been installed in the same build.
Args:
- package (str) - The full name of the CIPD package.
- version (str) - The version of the package to download.
- executable_path (str|None) - The path within the package of the desired executable. Defaults to the basename of the package (the final non-variable component of the package name). Must use forward-slashes, even on Windows.
Returns a Path to the executable.
Future-safe: multiple concurrent calls for the same (package, version) will block on a single ensure step.
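A sketch of fetching a single executable; the package path and version are placeholders:

    def RunSteps(api):
      tool = api.cipd.ensure_tool('example/tools/my_exe/${platform}', 'latest')
      # `tool` is an absolute Path to the executable inside the install dir.
      api.step('run tool', [tool, '--help'])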
@property
— def executable(self):
— def instances(self, package_name, limit=None):
Lists instances of a package, most recently uploaded first.
Args:
- package_name (str) - The name of the cipd package.
- limit (None|int) - The number of instances to return. 0 for all. If None, default value of 'cipd' binary will be used (20).
Returns the list of CIPDApi.Instance instance.
— def pkg_deploy(self, root, package_file):
Deploys the specified package to root.
ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.
Args:
- package_file (Path) - Path to a package file to install.
- root (Path) - Path to a CIPD root.
Returns a Pin for the deployed package.
— def pkg_fetch(self, destination, package_name, version):
Downloads the specified package to destination.
ADVANCED METHOD: You shouldn't need this unless you're doing advanced things with CIPD. Typically you should use the ensure method here to fetch+install packages to the disk.
Args:
- destination (Path) - Path to a file location which will be (over)written with the package file contents.
- package_name (str) - The package name (or pattern with e.g. ${platform}).
- version (str) - The CIPD version to fetch.
@property
— def platform(self):
Returns the CIPD platform string, equivalent to '${platform}'.
— def register(self, package_name, package_path, refs=None, tags=None, metadata=None, verification_timeout=None):
Uploads and registers package instance in the package repository.
Args:
- package_name (str) - The name of the cipd package.
- package_path (Path) - The path to package instance file.
- refs (list[str]) - A list of ref names to set for the package instance.
- tags (dict[str]basestring) - A map of tag name -> value to set for the package instance.
- metadata (list[Metadata]) - A list of metadata entries to attach.
- verification_timeout (str) - Duration string that controls the time to wait for backend-side package hash verification. Valid time units are "s", "m", "h". Default is "5m".
Returns: The CIPDApi.Pin instance.
— def search(self, package_name, tag, test_instances=None):
Searches for package instances by tag, optionally constrained by package name.
Args:
- package_name (str) - The name of the cipd package.
- tag (str) - The cipd package tag.
- test_instances (None|int|List[str]) - Default test data for this step:
  - None - Search returns a single default pin.
  - int - Search generates test_instances number of testing IDs (instance_id_%d) and returns pins for those.
  - List[str] - Returns pins for the given testing IDs.
Returns the list of CIPDApi.Pin instances.
— def set_metadata(self, package_name, version, metadata):
Attaches metadata to a package instance.
Args:
- package_name (str) - The name of the cipd package.
- version (str) - The package version to attach metadata to.
- metadata (list[Metadata]) - A list of metadata entries to attach.
Returns the CIPDApi.Pin instance.
— def set_ref(self, package_name, version, refs):
Moves a ref to point to a given version.
Args:
- package_name (str) - The name of the cipd package.
- version (str) - The package version to point the ref to.
- refs (list[str]) - A list of ref names to set for the package instance.
Returns the CIPDApi.Pin instance.
— def set_tag(self, package_name, version, tags):
Tags a package of a specific version.
Args:
- package_name (str) - The name of the cipd package.
- version (str) - The package version to resolve. Could also be itself a tag or ref.
- tags (dict[str]str) - A map of tag name -> value to set for the package instance.
Returns the CIPDApi.Pin instance.
recipe_modules / commit_position
class CommitPositionApi(RecipeApi):
Recipe module providing commit position parsing and formatting.
@classmethod
— def format(cls, ref, revision_number):
Returns a commit position string.
ref must start with 'refs/'.
@classmethod
— def parse(cls, value):
Returns (ref, revision_number) tuple.
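A sketch of round-tripping a commit position string, assuming the conventional ref@{#number} format produced by this module:

    def RunSteps(api):
      cp = api.commit_position.format('refs/heads/main', 123456)
      # cp == 'refs/heads/main@{#123456}'
      ref, number = api.commit_position.parse(cp)
      assert (ref, number) == ('refs/heads/main', 123456)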
recipe_modules / context
The context module provides APIs for manipulating a few pieces of 'ambient' data that affect how steps are run.
The pieces of information which can be modified are:
- cwd - The current working directory.
- env - The environment variables.
- infra_step - Whether or not failures should be treated as infrastructure failures vs. normal failures.
The values here are all scoped using Python's with statement; there's no mechanism to make an open-ended adjustment to these values (i.e. there's no way to change the cwd permanently for a recipe, except by surrounding the entire recipe with a with statement). This is done to avoid the surprises that typically arise with things like os.environ or os.chdir in a normal python program.
Example:
    with api.context(cwd=api.path.start_dir / 'subdir'):
      # this step is run inside of the subdir directory.
      api.step("cat subdir/foo", ['cat', './foo'])
class ContextApi(RecipeApi):
@contextlib.contextmanager
— def __call__(self, cwd: (config_types.Path | None)=None, env_prefixes: (Mapping[(str, Sequence[str])] | None)=None, env_suffixes: (Mapping[(str, Sequence[str])] | None)=None, env: (Mapping[(str, str)] | None)=None, infra_steps: (bool | None)=None, luciexe: (sections_pb2.LUCIExe | None)=None, realm: str=None, deadline: (sections_pb2.Deadline | None)=None):
Allows adjustment of multiple context values in a single call.
Args:
- cwd - the current working directory to use for all steps. To 'reset' to the original cwd at the time recipes started, pass api.path.start_dir.
- env_prefixes - Environment variable prefix augmentations. See below for more info.
- env_suffixes - Environment variable suffix augmentations. See below for more info.
- env - Environment variable overrides. See below for more info.
- infra_steps - if steps in this context should be considered infrastructure steps. On failure, these will raise InfraFailure exceptions instead of StepFailure exceptions.
- luciexe - The override value for the 'luciexe' section in LUCI_CONTEXT. This is currently used to modify the cache_dir for all launched LUCI Executables (via api.step.sub_build(...)).
- realm - allows changing the current LUCI realm. It is used when creating new LUCI resources (e.g. spawning new Swarming tasks). Pass an empty string to disassociate the context from a realm, emulating an environment prior to LUCI realms. This is useful during the transitional period.
- deadline - Deadline information to set; see the LUCI_CONTEXT documentation for how this section works. Automatically adjusted by steps with timeout set.
Environmental Variable Overrides:
Env is a mapping of environment variable name to the value you want that environment variable to have. The value is one of:
- None, indicating that the environment variable should be removed from the environment when the step runs.
- A string value. Note that string values will be %-formatted with the current value of the environment at the time the step runs. This means that you can have a value like "/path/to/my/stuff:%(PATH)s", which, at the time the step executes, will inject the current value of $PATH.
"env_prefix" and "env_suffix" are a list of Path or strings that get prefixed (or suffixed) to their respective environment variables, delimited with the system's path separator. This can be used to add entries to environment variables such as "PATH" and "PYTHONPATH". If prefixes are specified and a value is also defined in "env", the value will be installed as the last path component if it is not empty.
Look at the examples in "examples/" for examples of context module usage.
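A sketch combining an env override with a PATH prefix; the tools directory is hypothetical:

    def RunSteps(api):
      tools = api.path.start_dir / 'tools'
      with api.context(
          env={'FOO': 'bar'},              # set $FOO for steps in this block
          env_prefixes={'PATH': [tools]},  # prepend tools to $PATH
      ):
        api.step('env-sensitive step', ['env'])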
@property
— def cwd(self):
Returns the current working directory that steps will run in.
Returns (Path|None) - The current working directory. A value of None is equivalent to api.path.start_dir, though only occurs if no cwd has been set (e.g. in the outermost context of RunSteps).
@property
— def deadline(self):
Returns the current value (sections_pb2.Deadline) of the deadline section in the current LUCI_CONTEXT. Returns {grace_period: 30} if deadline is not defined, per the LUCI_CONTEXT spec.
@property
— def env(self):
Returns modifications to the environment.
By default this is empty. If you want to observe the program's startup environment, see ENV_PROPERTIES in https://chromium.googlesource.com/infra/luci/recipes-py/+/refs/heads/main/doc/user_guide.md#properties-and-env_properties
Returns (dict) - The env-key -> value mapping of current environment modifications.
@property
— def env_prefixes(self):
Returns Path prefix modifications to the environment.
This will return a mapping of environment key to Path tuple for Path prefixes registered with the environment.
Returns (dict) - The env-key -> value(Path) mapping of current environment prefix modifications.
@property
— def env_suffixes(self):
Returns Path suffix modifications to the environment.
This will return a mapping of environment key to Path tuple for Path suffixes registered with the environment.
Returns (dict) - The env-key -> value(Path) mapping of current environment suffix modifications.
@property
— def infra_step(self):
Returns the current value of the infra_step setting.
Returns (bool) - True iff steps are currently considered infra steps.
— def initialize(self):
@property
— def luci_context(self):
Returns the currently tracked LUCI_CONTEXT sections as a dict of proto messages.
Only contains the `luciexe`, `realm`, `resultdb` and `deadline` sections.
@property
— def luciexe(self):
Returns the current value (sections_pb2.LUCIExe) of luciexe section in the current LUCI_CONTEXT. Returns None if luciexe is not defined.
@property
— def realm(self):
Returns the LUCI realm of the current context.
May return None if the task is not running in the realm-aware mode. This is a transitional period. Eventually all tasks will be associated with realms.
@property
— def resultdb_invocation_name(self):
Returns the ResultDB invocation name of the current context.
Returns None if resultdb is not defined.
recipe_modules / cq
DEPS: cv, properties, warning
Wrapper for CV API.
This module is a thin wrapper of the cv module.
— def initialize(self):
Applies non-default cq module property values to the cv module.
recipe_modules / cv
DEPS: buildbucket, properties, step
Recipe API for LUCI CV, the pre-commit testing system.
This module provides recipe API of LUCI CV, a pre-commit testing system.
@property
— def active(self):
Returns whether CQ is active for this build.
— def allow_reuse_for(self, *modes):
Instructs CQ that this build can be reused in a future Run if and only if its mode is in the provided modes.
Overwrites all previously set values.
@property
— def allowed_reuse_modes(self):
@property
— def attempt_key(self):
Returns a string that is unique for a CV attempt.
The same `attempt_key` will be used for all builds within an attempt.
Raises: CQInactive if CQ is not active for this build.
@property
— def cl_group_key(self):
Returns a string that is unique for a current set of Gerrit change patchsets (or, equivalently, buildsets).
The same `cl_group_key` will be used if another Attempt is made for the same set of changes at a different time.
Raises: CQInactive if CQ is not active for this build.
@property
— def cl_owners(self):
Returns string(s) of the owner's email addresses used for the patchset.
Usually CLs only have one owner, but more than one is possible so a list will be returned.
Raises: CQInactive if CQ is not active for this build.
@property
— def do_not_retry_build(self):
@property
— def equivalent_cl_group_key(self):
Returns a string that is unique for a given set of Gerrit changes disregarding trivial patchset differences.
For example, when a new "trivial" patchset is uploaded, then the cl_group_key will change but the equivalent_cl_group_key will stay the same.
Raises: CQInactive if CQ is not active for this build.
@property
— def experimental(self):
Returns whether this build is triggered for a CQ experimental builder.
See the `Builder.experiment_percentage` doc in the CQ config.
Raises: CQInactive if CQ is not active for this build.
— def initialize(self):
@property
— def ordered_gerrit_changes(self):
Returns list[bb_common_pb2.GerritChange] in order in which CLs should be applied or submitted.
Raises: CQInactive if CQ is not active for this build.
@property
— def owner_is_googler(self):
Returns whether the Run/Attempt owner is a Googler.
DO NOT USE: this is a temporary workaround for crbug.com/1259887 that is supposed to be used by builders in the Chrome project only.
Raises: CQInactive if CQ is not active for this build. ValueError if the builder is not in the Chrome project.
@property
— def props_for_child_build(self):
Returns properties dict meant to be passed to child builds.
These will preserve the CQ context of the current build in the about-to-be-triggered child build.
properties = {'foo': bar, 'protolike': proto_message}
properties.update(api.cv.props_for_child_build)
req = api.buildbucket.schedule_request(
    builder='child',
    gerrit_changes=list(api.buildbucket.build.input.gerrit_changes),
    properties=properties)
child_builds = api.buildbucket.schedule([req])
api.cv.record_triggered_builds(*child_builds)
The contents of the returned dict should be treated as an opaque blob; they may change without notice.
— def record_triggered_build_ids(self, *build_ids):
Adds the given Buildbucket build IDs to the list of triggered build IDs.
Must be called after some step.
Args:
- build_ids (list of int or string): Buildbucket build IDs.
— def record_triggered_builds(self, *builds):
Adds IDs of given Buildbucket builds to the list of triggered build IDs.
Must be called after some step.
Expected usage:
api.cv.record_triggered_builds(*api.buildbucket.schedule([req1, req2]))
Args:
- builds: `Build` objects, typically returned by `api.buildbucket.schedule`.
@property
— def run_mode(self):
Returns the mode(str) of the CQ Run that triggers this build.
Raises: CQInactive if CQ is not active for this build.
— def set_do_not_retry_build(self):
Instruct CQ to not retry this build.
This mechanism is used to reduce duration of CQ attempt and save testing capacity if retrying will likely return an identical result.
@property
— def top_level(self):
Returns whether CQ triggered this build directly.
Can be spoofed. DO NOT USE FOR SECURITY CHECKS.
Raises: CQInactive if CQ is not active for this build.
@property
— def triggered_build_ids(self):
Returns recorded Buildbucket build IDs as a list of integers.
recipe_modules / defer
Runs a function but defers the result until a later time.
Runs a function but defers the result until a later time.
Exceptions caught by api.defer() will show in MILO as they occur, but won't continue to propagate the exception until api.defer.collect() or DeferredResult.result() is called.
For StepFailures and InfraFailures, MILO already includes the failure output. For other exceptions, api.defer() will add a step showing the exception and continue.
If exceptions were caught and saved in DeferredResults, api.defer.collect() will raise an ExceptionGroup containing all deferred exceptions. ExceptionGroups containing specific kinds of exceptions can be handled using the "except*" syntax (for more details see https://docs.python.org/3/tutorial/errors.html#raising-and-handling-multiple-unrelated-exceptions).
If there are no failures, api.defer.collect() returns a Sequence of the return values of the functions passed into api.defer().
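A minimal sketch of that flow (step names and commands here are illustrative):
def RunSteps(api):
  results = [
      api.defer(api.step, 'compile', ['make']),
      api.defer(api.step, 'test', ['make', 'test']),
  ]
  # Raises an ExceptionGroup if any deferred call failed; otherwise
  # returns the deferred return values.
  api.defer.collect(results)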
— def __call__(self, func: Callable[..., T], *args, **kwargs):
Calls func(*args, **kwargs) but catches all exceptions.
Returns a DeferredResult. If the call returns a value, the DeferredResult contains that value. If the call raises an exception, the DeferredResult contains that exception.
The DeferredResult is expected to be passed into api.defer.collect(), but DeferredResult.result() does similar processing.
— def collect(self, results: Sequence[DeferredResult], step_name: (str | None)=None):
Raise any exceptions in the given list of DeferredResults.
If there are no exceptions, do nothing. If there are one or more exceptions, reraise one of the worst of them.
Args:
- results: Results to check.
- step_name: Name for a step including traceback logs if there are failures. If None, don't include a step with traceback logs.
@contextlib.contextmanager
— def context(self, collect_step_name: (str | None)=None):
Creates a context that tracks deferred calls.
Usage:
with api.defer.context() as defer:
  defer(api.step, ...)
  defer(api.step, ...)
  ...
recipe_modules / file
DEPS: json, path, proto, raw_io, step
File manipulation (read/write/delete/glob) methods.
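As a quick orientation, a minimal sketch combining two of the methods documented below:
def RunSteps(api):
  data = {'key': 'value'}
  cfg = api.path.start_dir / 'config.json'
  api.file.write_json('write config', cfg, data, indent=2)
  read_back = api.file.read_json('read config', cfg, test_data=data)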
— def chmod(self, name: str, path: (config_types.Path | str), mode: str, recursive: bool=False):
Set the access mode for a file or directory.
Args:
- name: The name of the step.
- path: The path of the file or directory.
- mode: The access mode in octal.
- recursive: Whether to run chmod recursively.
Raises: file.Error
— def compute_hash(self, name: str, paths: Sequence[(config_types.Path | str)], base_path: (config_types.Path | str), test_data: str=''):
Computes hash of contents of a directory/file.
This function will compute hash by including following info of a file:
- str(len(path)) // path is relative to base_path
- path // path is relative to base_path
- str(len(file))
- file_content
Each of these components is separated by a newline character. For example, for a file named "hello" with the contents "world", the hash would be over:
5
hello
5
world
Args:
- name: The name of the step.
- paths: Path of directory/file(s) to compute hash.
- base_path: The base directory for computing each file's relative path, e.g. the `start_dir` of a recipe execution.
- test_data: Some default data for this step to return when running under simulation. If no test data is provided, we compute test_data as the sha256 of the concatenated relative paths passed.
Returns: Hex encoded hash of directory/file content.
Raises: file.Error and ValueError if passed paths input is not str or Path.
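For illustration, a standalone sketch of this scheme (the sha256 choice and the per-file iteration order are assumptions here; the authoritative logic lives in this module's resources):
import hashlib
import os

def compute_hash_sketch(file_paths, base_path):
  h = hashlib.sha256()  # assumed digest algorithm
  for abs_path in file_paths:
    rel = os.path.relpath(abs_path, base_path)
    with open(abs_path, 'rb') as f:
      content = f.read()
    # len(path), path, len(content) and the content itself, newline-separated.
    h.update(('%d\n%s\n%d\n' % (len(rel), rel, len(content))).encode())
    h.update(content + b'\n')
  return h.hexdigest()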
— def copy(self, name: str, source: ((config_types.Path | str) | recipe_api.Placeholder), dest: ((config_types.Path | str) | recipe_api.Placeholder)):
Copies a file (including mode bits) from source to destination on the local filesystem.
Behaves identically to shutil.copy.
Args:
- name: The name of the step.
- source: The path to the file you want to copy.
- dest: The path to the destination file name. If this path exists and is a directory, the basename of `source` will be appended to derive a path to a destination file.
Raises: file.Error
— def copytree(self, name: str, source: (config_types.Path | str), dest: (config_types.Path | str), symlinks: bool=False):
Recursively copies a directory tree.
Behaves identically to shutil.copytree. `dest` must not exist.
Args:
- name (str): The name of the step.
- source (Path): The path of the directory to copy.
- dest (Path): The place where you want the recursive copy to show up. This must not already exist.
- symlinks (bool): Preserve symlinks. No effect on Windows.
Raises: file.Error
— def ensure_directory(self, name: str, dest: (config_types.Path | str), mode: int=511):
Ensures that `dest` exists and is a directory.
Args:
- name: The name of the step.
- dest: The directory to ensure.
- mode: The mode to use if the directory doesn't exist. This method does not ensure the mode if the directory already exists (if you need that behaviour, file a bug).
Raises: file.Error if the path exists but is not a directory.
— def file_hash(self, file_path: (config_types.Path | str), test_data: str=''):
Computes hash of contents of a single file.
Args:
- file_path: Path of file to compute hash.
- test_data: Some default data for this step to return when running under simulation. If no test data is provided, we compute test_data as sha256 of path passed.
Returns: Hex encoded hash of file content.
Raises: file.Error and ValueError if passed paths input is not str or Path.
— def filesizes(self, name: str, files: Sequence[(config_types.Path | str)], test_data: (Sequence[int] | None)=None):
Returns list of filesizes for the given files.
Args:
- name: The name of the step.
- files: Paths to files.
- test_data: List of filesizes to use in tests.
Returns size of each file in bytes.
— def flatten_single_directories(self, name: str, path: (config_types.Path | str)):
Flattens singular directories, starting at path.
Example:
$ mkdir -p dir/which_has/some/singular/subdirs/
$ touch dir/which_has/some/singular/subdirs/with
$ touch dir/which_has/some/singular/subdirs/files
$ flatten_single_directories(dir)
$ ls dir
with
files
This can be useful when you just want the 'meat' of a very sparse directory structure. For example, some tarballs like `foo-1.2.tar.gz` extract all their contents into a subdirectory `foo-1.2/`.
Using this function would essentially move all the actual contents of the extracted archive up to the top level directory, removing the need to e.g. hard-code/find the subfolder name after extraction (not all archives are even named after the subfolder they extract to).
Args:
- name: The name of the step.
- path: The absolute path to begin flattening.
Raises: file.Error
— def glob_paths(self, name: str, source: (config_types.Path | str), pattern: str, include_hidden: bool=False, test_data: Sequence[config_types.Path]=()):
Performs glob expansion on `pattern`.
glob rules for `pattern` follow the same syntax as for the stdlib `glob` module with `recursive=True`.
e.g. 'a/**/*.py'
a/b/foo.py => MATCH
a/b/c/foo.py => MATCH
a/foo.py => MATCH
a/b/c/d/e/f/g/h/i/j/foo.py => MATCH
other/foo.py => NO MATCH
Args:
- name (str): The name of the step.
- source (Path): The directory whose contents should be globbed.
- pattern (str): The glob pattern to apply under `source`.
- include_hidden (bool): Include files beginning with `.`.
- test_data (iterable[str]): Some default data for this step to return when running under simulation. This should be the list of file items found in this directory.
Returns all paths found.
Raises: file.Error.
— def listdir(self, name: str, source: (config_types.Path | str), recursive: bool=False, test_data: Sequence[str]=(), include_log: bool=True):
Lists all files inside a directory.
If the source dir contains non-unicode file or dir names, the corresponding bad characters will be replaced with a "?" mark.
Args:
- name: The name of the step.
- source: The directory to list.
- recursive: If True, do not emit subdirectory entries but recurse into them instead, emitting paths relative to `source`. Doesn't follow symlinks. Very slow for large directories.
- test_data: Some default data for this step to return when running under simulation. This should be the list of relative paths found in this directory.
- include_log: Include step log of read text.
Returns list of entries
Raises: file.Error.
— def move(self, name: str, source: (config_types.Path | str), dest: (config_types.Path | str)):
Moves a file or directory.
Behaves identically to shutil.move.
Args:
- name (str): The name of the step.
- source (Path): The path of the item to move.
- dest (Path): The new name of the item.
Raises: file.Error
— def read_json(self, name: str, source: (config_types.Path | str), test_data: Any='', include_log: bool=True):
Reads a file as UTF-8 encoded json.
Args:
- name: The name of the step.
- source: The path of the file to read.
- test_data: Some default json serializable data for this step to return when running under simulation.
- include_log: Include step log of read json.
Returns: The content of the file.
Raises: file.Error
— def read_proto(self, name: str, source: (config_types.Path | str), msg_class: type[ProtoMessage], codec: ProtoCodec, test_proto: Any=None, include_log: bool=True, decoding_kwargs: (dict | None)=None):
Reads a file into a proto message.
Args:
- name: The name of the step.
- source: The path of the file to read.
- msg_class: The message type to be read.
- codec: The encoder to use.
- test_proto: A default proto message for this step to return when running under simulation.
- include_log: Include step log of read proto.
- decoding_kwargs: Passed directly to the chosen encoder. See proto module for details.
— def read_raw(self, name: str, source: (config_types.Path | str), test_data: bytes=''):
Reads a file as raw data.
Args:
- name: The name of the step.
- source: The path of the file to read.
- test_data: Some default data for this step to return when running under simulation.
Returns: The unencoded (binary) contents of the file.
Raises: file.Error
— def read_text(self, name: str, source: (config_types.Path | str), test_data: str='', include_log: bool=True):
Reads a file as UTF-8 encoded text.
Args:
- name: The name of the step.
- source: The path of the file to read.
- test_data: Some default data for this step to return when running under simulation.
- include_log: Include step log of read text.
Returns: The content of the file.
Raises: file.Error
— def remove(self, name: str, source: (config_types.Path | str)):
Removes a file.
Does not raise Error if the file doesn't exist.
Args:
- name (str): The name of the step.
- source (Path): The file to remove.
Raises: file.Error.
— def rmcontents(self, name: str, source: (config_types.Path | str)):
Similar to rmtree, but removes only contents not the directory.
This is useful e.g. when removing contents of current working directory. Deleting current working directory makes all further getcwd calls fail until chdir is called. chdir would be tricky in recipes, so we provide a call that doesn't delete the directory itself.
Args:
- name (str): The name of the step.
- source (Path): The directory whose contents should be removed.
Raises: file.Error.
— def rmglob(self, name: str, source: (config_types.Path | str), pattern: str, recursive: bool=True, include_hidden: bool=True):
Removes all entries in `source` matching the glob `pattern`.
glob rules for `pattern` follow the same syntax as for the stdlib `glob` module with `recursive=True`.
e.g. 'a/**/*.py'
a/b/foo.py => MATCH
a/b/c/foo.py => MATCH
a/foo.py => MATCH
a/b/c/d/e/f/g/h/i/j/foo.py => MATCH
other/foo.py => NO MATCH
Args:
- name: The name of the step.
- source: The directory whose contents should be filtered and removed.
- pattern: The glob pattern to apply under `source`. Anything matching this pattern will be removed.
- recursive: Recursively remove entries under `source`. TODO: Remove this option. Use `**` syntax instead.
- include_hidden: Include files beginning with `.`. TODO: Set to False by default to be consistent with file.glob.
Raises: file.Error.
— def rmtree(self, name: str, source: (config_types.Path | str)):
Recursively removes a directory.
This uses native python on Linux/Mac, and uses `rd` on Windows, to avoid issues w.r.t. path lengths and read-only attributes. If the directory is gone already, this returns without error.
Args:
- name: The name of the step.
- source: The directory to remove.
Raises: file.Error.
— def symlink(self, name: str, source: ((config_types.Path | str) | recipe_api.Placeholder), linkname: ((config_types.Path | str) | recipe_api.Placeholder)):
Creates a symlink on the local filesystem.
Behaves identically to os.symlink.
Args:
- name (str): The name of the step.
- source (Path|Placeholder): The path to link from.
- linkname (Path|Placeholder): The destination to link to.
Raises: file.Error
— def symlink_tree(self, root: (config_types.Path | str)):
Creates a SymlinkTree, given a root directory.
Args:
- root: root of a tree of symlinks.
— def truncate(self, name: str, path: (config_types.Path | str), size_mb: int=100):
Creates a file of size `size_mb` at `path` on the local filesystem.
Args:
- name: The name of the step.
- path: The absolute path to create.
- size_mb: The size of the file in megabytes. Defaults to 100.
Raises: file.Error
— def write_json(self, name: str, dest: (config_types.Path | str), data: Any, indent: ((int | str) | None)=None, include_log: bool=True, sort_keys: bool=True):
Write the given JSON-serializable `data` to `dest`.
Args:
- name: The name of the step.
- dest: The path of the file to write.
- data: Json serializable data to write.
- indent: The indent of the written JSON. See https://docs.python.org/3/library/json.html#json.dump for more details.
- include_log: Include step log of written json.
- sort_keys: Sort the keys in `data`. See api.json.input().
Raises: file.Error.
— def write_proto(self, name: str, dest: (config_types.Path | str), proto_msg: google.protobuf.message, codec: ProtoCodec, include_log: bool=True, encoding_kwargs: (dict | None)=None):
Writes the given proto message to `dest`.
Args:
- name: The name of the step.
- dest: The path of the file to write.
- proto_msg: Message to write.
- codec: The encoder to use.
- include_log: Include step log of written proto.
- encoding_kwargs: Passed directly to the chosen encoder. See proto module for details.
— def write_raw(self, name: str, dest: (config_types.Path | str), data: bytes):
Write the given `data` to `dest`.
Args:
- name: The name of the step.
- dest: The path of the file to write.
- data: The data to write.
Raises: file.Error.
— def write_text(self, name: str, dest: (config_types.Path | str), text_data: str, include_log: bool=True):
Write the given UTF-8 encoded `text_data` to `dest`.
Args:
- name: The name of the step.
- dest: The path of the file to write.
- text_data: The UTF-8 encoded data to write.
- include_log: Include step log of written text.
Raises: file.Error.
recipe_modules / findings
DEPS: buildbucket, proto, resultdb, step, uuid
class FindingsAPI(RecipeApi):
— def populate_source_from_current_build(self, location: findings_pb.Location):
Set the location source based on the input of the current build.
This can be used for finding.location or replacement.location. Currently, this only works for builds with exactly one Gerrit change; raises ValueError otherwise.
— def upload_findings(self, findings: list[findings_pb.Finding], step_name: (str | None)=None):
Uploads code findings to ResultDB.
Requires ResultDB to be enabled for the current Build.
Args:
- findings (List(findings_pb.Finding)): Code findings to upload. findings definition can be found in https://chromium.googlesource.com/infra/luci/recipes-py/+/HEAD/recipe_proto/go.chromium.org/luci/common/proto/findings/findings.proto
- step_name (str): optional step name for uploading findings.
recipe_modules / futures
Implements in-recipe concurrency via green threads.
class FuturesApi(RecipeApi):
Provides access to the Recipe concurrency primitives.
@staticmethod
— def iwait(futures, timeout=None, count=None):
Iteratively yield up to `count` Futures as they become done.
This is analogous to `gevent.iwait`.
Usage:
for future in api.futures.iwait(futures):
  # consume future
If you are not planning to consume the entire iwait iterator, you can avoid the resource leak by doing, for example:
with api.futures.iwait(a, b, c) as iter:
  for future in iter:
    if future is a:
      break
You might want to use `iwait` over `wait` if you want to process a group of Futures in the order in which they complete. Compare:
for task in iwait(swarming_tasks):
  # task is done, do something with it
vs
while swarming_tasks:
  task = wait(swarming_tasks, count=1)[0]  # some task is done
  swarming_tasks.remove(task)
  # do something with it
Args:
- futures (List[Future]) - The Future objects to wait for.
- timeout (None|seconds) - How long to wait for the Futures to be done.
- count (None|int) - The number of Futures to yield. If None, yields all of them.
Yields futures in the order in which they complete until we hit the timeout or count. May also be used with a context manager to avoid leaking resources if you don't plan on consuming the entire iterable.
— def make_bounded_semaphore(self, value=1):
Returns a gevent.BoundedSemaphore with depth value
.
This can be used as a context-manager to create concurrency-limited sections like:
def worker(api, sem, i):
  with api.step.nest('worker %d' % i):
    with sem:
      api.step('one at a time', ...)
    api.step('unrestricted concurrency', ...)

sem = api.futures.make_bounded_semaphore()
for i in range(100):
  api.futures.spawn(worker, api, sem, i)
*** promo NOTE: If you use the BoundedSemaphore without the context-manager syntax, it could lead to difficult-to-debug deadlocks in your recipe.
*** promo NOTE: This method will raise ValueError if used with @@@annotation@@@ mode.
— def make_channel(self):
Returns a single-slot communication device for passing data and control between concurrent functions.
This is useful for running 'background helper' type concurrent processes.
*** promo NOTE: It is strongly discouraged to pass Channel objects outside of a recipe module. Access to the channel should be mediated via a class/contextmanager/function which you return to the caller, and the caller can call in a makes-sense-for-your-module's-API way.
See ./tests/background_helper.py for an example of how to use a Channel correctly.
It is VERY RARE to need to use a Channel. You should avoid using this unless you carefully consider and avoid the possibility of introducing deadlocks.
*** promo NOTE: This method will raise ValueError if used with @@@annotation@@@ mode.
@escape_all_warnings
— def spawn(self, func, *args, **kwargs):
Prepares a Future to run `func(*args, **kwargs)` concurrently.
Any steps executed in `func` will only have a manipulable `StepPresentation` within the scope of the executed function.
Because this will spawn a greenlet on the same OS thread (and not, for example, a different OS thread or process), `func` can easily be an inner function, closure, lambda, etc. In particular, func, args and kwargs do not need to be pickle-able.
This function does NOT switch to the greenlet (you'll have to block on a future/step for that to happen). In particular, this means that the following pattern is safe:
# self._my_future check + spawn + assignment is atomic because
# no switch points occur.
if not self._my_future:
  self._my_future = api.futures.spawn(func)
*** promo NOTE: If used in @@@annotator@@@ mode, this will block on the completion of the Future before returning it.
Kwargs:
- __name (str) - If provided, will assign this name to the spawned greenlet. Useful if this greenlet ends up raising an exception, as this name will appear in the stderr logging for the engine. See `Future.name` for more information.
- __meta (any) - If provided, will assign this metadata to the returned Future. This field is for your exclusive use.
- Everything else is passed to `func`.
Returns a Future of `func`'s result.
@escape_all_warnings
— def spawn_immediate(self, func, *args, **kwargs):
Returns a Future to the concurrently running `func(*args, **kwargs)`.
This is like `spawn`, except that it IMMEDIATELY switches to the new Greenlet. You may want to use this if you want to e.g. launch a background step and then another step which waits for the daemon.
Kwargs:
- __name (str) - If provided, will assign this name to the spawned greenlet. Useful if this greenlet ends up raising an exception, as this name will appear in the stderr logging for the engine. See `Future.name` for more information.
- __meta (any) - If provided, will assign this metadata to the returned Future. This field is for your exclusive use.
- Everything else is passed to `func`.
Returns a Future of `func`'s result.
@staticmethod
— def wait(futures, timeout=None, count=None):
Blocks until `count` `futures` are done (or timeout occurs), then returns the list of done futures.
This is analogous to `gevent.wait`.
Args:
- futures (List[Future]) - The Future objects to wait for.
- timeout (None|seconds) - How long to wait for the Futures to be done. If we hit the timeout, wait will return even if we haven't reached `count` Futures yet.
- count (None|int) - The number of Futures to wait to be done. If None, waits for all of them.
Returns the list of done Futures, in the order in which they were done.
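A short sketch combining spawn and wait (the `my_tool` command is hypothetical):
def RunSteps(api):
  futures = []
  for i in range(4):
    futures.append(
        api.futures.spawn(api.step, 'task %d' % i, ['my_tool', str(i)]))
  # Block until all of them are done; use iwait (above) to process them
  # in completion order instead.
  api.futures.wait(futures)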
recipe_modules / generator_script
DEPS: context, json, path, step
A simple method for running steps generated by an external script.
class GeneratorScriptApi(RecipeApi):
— def __call__(self, path_to_script, *args, checkout_dir=None, **_):
Run a script and generate the steps emitted by that script.
The script will be invoked with --output-json /path/to/file.json. The script is expected to exit 0 and write steps into that file. Once the script outputs all of the steps to that file, the recipe will read the steps from that file and execute them in order.
Any *args specified will be additionally passed to the script.
If `path_to_script` ends with .py, it will be run with `vpython3`.
The step data is formatted as a list of JSON objects. Each object corresponds to one step, and contains the following keys:
- name: the name of this step.
- cmd: a list of strings that indicate the command to run (e.g. argv)
- env: a {key:value} dictionary of the environment variables to override. every value is formatted with the current environment with the python % operator, so a value of "%(PATH)s:/some/other/path" would resolve to the current PATH value, concatenated with ":/some/other/path"
- cwd: an absolute path to the current working directory for this script.
- always_run: a bool which indicates that this step should run, even if some previous step failed.
- outputs_presentation_json: a bool which indicates that this step will emit a presentation JSON file. If this is True, the cmd will be extended with a `--presentation-json /path/to/file.json` argument. This file will be used to update the step's presentation on the build status page. The file will be expected to contain a single JSON object, with any of the following keys:
- logs: {logname: [lines]} specifies one or more auxiliary logs.
- links: {link_name: link_content} to add extra links to the step.
- step_summary_text: A string to set as the step summary.
- step_text: A string to set as the step text.
- properties: {prop: value} build_properties to add to the build status page. Note that these are write-only: The only way to read them is via the status page. There is intentionally no mechanism to read them back from inside of the recipes.
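For illustration, a hypothetical generator script honoring this contract might look like:
#!/usr/bin/env vpython3
import argparse
import json

def main():
  parser = argparse.ArgumentParser()
  parser.add_argument('--output-json', required=True)
  args = parser.parse_args()
  steps = [
      {'name': 'compile', 'cmd': ['make', 'all']},
      # always_run: clean up even if a previous step failed.
      {'name': 'clean', 'cmd': ['make', 'clean'], 'always_run': True},
  ]
  with open(args.output_json, 'w') as f:
    json.dump(steps, f)

if __name__ == '__main__':
  main()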
recipe_modules / golang
DEPS: cipd, context, path, platform
@contextlib.contextmanager
— def __call__(self, version, path=None, cache=None):
Installs a Golang SDK and activates it in the environment.
Installs it under the given `path`, defaulting to `[CACHE]/golang`. Various cache directories used by Go are placed under `cache`, defaulting to `[CACHE]/gocache`.
`version` will be used to construct the CIPD package version for packages under https://chrome-infra-packages.appspot.com/p/infra/3pp/tools/go/.
To reuse the Go SDK deployment and caches across builds, declare the corresponding named caches in Buildbucket configs. E.g. when using defaults:
luci.builder(
    ...
    caches = [
        swarming.cache("golang"),
        swarming.cache("gocache"),
    ],
)
Note: CGO is disabled on Windows currently, since Windows doesn't have a C compiler available by default.
Args:
- version (str) - a Go version to install (e.g. `1.16.10`).
- path (Path) - a path to install Go into.
- cache (Path) - a path to put Go caches under.
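Typical usage, as a minimal sketch:
def RunSteps(api):
  with api.golang('1.16.10'):
    api.step('go version', ['go', 'version'])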
recipe_modules / json
Methods for producing and consuming JSON.
@staticmethod
— def dumps(*args, **kwargs):
Works like `json.dumps`.
By default this sorts dictionary keys (see discussion in `input()`), but you can pass sort_keys=False to override this behavior.
@returns_placeholder
— def input(self, data, sort_keys=True):
A placeholder which will expand to a file path containing the JSON-encoded `data`.
By default this sorts dictionaries in `data` to make the output deterministic. In python3, dictionary insertion order is preserved per-spec, so this is no longer necessary for determinism, and in some cases (such as SPDX), the 'pretty' output is in non-alphabetical order. The default remains `True`, however, to avoid breaking all downstream tests.
— def is_serializable(self, obj):
Returns True if the object is JSON-serializable.
@staticmethod
— def loads(data, **kwargs):
Works like `json.loads`, but:
- strips out unicode objects (replacing them with utf8-encoded str objects).
- replaces 'int-like' floats with ints. These are floats whose magnitude is less than (2**53-1) and which don't have a decimal component.
@returns_placeholder
— def output(self, add_json_log=True, name=None, leak_to=None):
A placeholder which will expand to '/tmp/file'.
If leak_to is provided, it must be a Path object. This path will be used in place of a random temporary file, and the file will not be deleted at the end of the step.
Args:
- add_json_log (True|False|'on_failure') - Log a copy of the output json to a step link named `name`. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
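Typical usage of the output placeholder (the `my_tool` command is hypothetical):
result = api.step(
    'gather', ['my_tool', '--output', api.json.output()],
    step_test_data=lambda: api.json.test_api.output({'status': 'ok'}))
data = result.json.output  # the parsed JSON that my_tool wrote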
— def read(self, name, path, add_json_log=True, output_name=None, **kwargs):
Returns a step that reads a JSON file.
*** note DEPRECATED: Use file.read_json instead.
recipe_modules / led
DEPS: cipd, context, json, path, proto, step, swarming
An interface to call the led tool.
Interface to the led tool.
"led" stands for LUCI editor. It allows users to debug and modify LUCI jobs. It can be used to modify many aspects of a LUCI build, most commonly including the recipes used.
The main interface this module provides is a direct call to the led binary:
led_result = api.led(
    'get-builder', ['luci.chromium.try:chromium_presubmit'])
final_data = led_result.then('edit-recipe-bundle').result
See the led binary for full documentation of commands.
— def __call__(self, *cmd: str):
Runs led with the given arguments. Wraps the result in a `LedResult`.
@property
— def cipd_input(self):
The versioned CIPD package containing the recipes code being run.
If set, it will be an `InputProperties.CIPDInput` protobuf; otherwise None.
— def initialize(self):
— def inject_input_recipes(self, led_result: LedResult):
Sets the version of recipes used by led to correspond to the version currently being used.
If neither the `rbe_cas_input` nor the `cipd_input` property is set, this is a no-op.
Args:
- led_result: The `LedResult` whose job.Definition will be passed into the edit command.
@property
— def launched_by_led(self):
Whether the current build is a led job.
@property
— def led_build(self):
Whether the current build is a led job as a real Buildbucket build.
@property
— def rbe_cas_input(self):
The location of the rbe-cas containing the recipes code being run.
If set, it will be a `swarming.v1.CASReference` protobuf; otherwise None.
@property
— def run_id(self):
A unique string identifier for this led job, if it's a raw swarming task.
If the current build is not a led job running as a raw swarming task, the value will be an empty string.
@property
— def shadowed_bucket(self):
The bucket of the original build/builder the led build replicates from.
If set, it will be an `InputProperties.ShadowedBucket` protobuf; otherwise None.
— def trigger_builder(self, project_name: str, bucket_name: str, builder_name: str, properties: dict, use_payload: bool=False):
Trigger a builder using led.
This can be used by recipes instead of buildbucket or scheduler triggers in case the running build was triggered by led.
This is equivalent to: led get-builder project/bucket:builder | <inject_input_recipes> | led edit | led launch
Args:
- project_name - The project that defines the builder.
- bucket_name - The bucket that configures the builder.
- builder_name - Name of the builder to trigger.
- properties - Dict with properties to pass to the triggered build.
- use_payload - Use edit-payload or edit -rbh to update cas input.
recipe_modules / legacy_annotation
Legacy Annotation module provides support for running a command emitting legacy @@@annotation@@@ in the new luciexe mode.
The output annotations are converted to a build proto, and all steps in the build will appear as child steps of the launched cmd/step in the current running build (using the Merge Step feature of the luciexe protocol). This is the replacement for the allow_subannotation feature of the legacy annotate mode.
class LegacyAnnotationApi(RecipeApi):
— def __call__(self, name, cmd, timeout=None, step_test_data=None, cost=_ResourceCost(), legacy_global_namespace=False):
Runs cmd that is emitting legacy @@@annotation@@@.
Currently, it will run the command as a sub_build if running in luciexe mode or simulation mode. Otherwise, it will fall back to launching a step with allow_subannotation set to true.
If `legacy_global_namespace` is True, this enables an even more-legacy global namespace merging mode. Do not enable this. See crbug.com/1310155.
recipe_modules / luci_analysis
API for interacting with the LUCI Analysis RPCs
This API is for calling LUCI Analysis RPCs for various aggregated info about test results. See go/luci-analysis for more info.
class LuciAnalysisApi(RecipeApi):
— def lookup_bug(self, bug_id, system='monorail'):
Looks up the rule associated with a given bug.
This is a wrapper of the `luci.analysis.v1.Rules` `LookupBug` API.
Args:
- bug_id (str): Bug Id is the bug tracking system-specific identity of the bug. For monorail, the scheme is {project}/{numeric_id}; for buganizer, the scheme is {numeric_id}.
- system (str): System is the bug tracking system of the bug. This is either "monorail" or "buganizer". Defaults to monorail.
Returns: list of rules (str), Format: projects/{project}/rules/{rule_id}
— def query_cluster_failures(self, cluster_name):
Queries examples of failures in the given cluster.
This is a wrapper of the `luci.analysis.v1.Clusters` `QueryClusterFailures` API.
Args:
- cluster_name (str): The resource name of the cluster to retrieve. Format: projects/{project}/clusters/{cluster_algorithm}/{cluster_id}
Returns: list of DistinctClusterFailure. For the value format, see the [DistinctClusterFailure message](https://bit.ly/DistinctClusterFailure).
— def query_failure_rate(self, test_and_variant_list, project='chromium'):
Queries LUCI Analysis for failure rates.
Args:
- test_and_variant_list list(Test): List of dicts containing testId and variantHash.
- project (str): Optional. The LUCI project to query the failures from.
Returns: List of TestVariantFailureRateAnalysis protos.
— def query_stability(self, test_variant_position_list, project='chromium'):
Queries LUCI Analysis for test stability.
Args:
- test_variant_position_list list(TestVariantPosition): List of dicts containing testId, variant and source position.
- project (str): Optional. The LUCI project to query the failures from.
Returns: Tuple of (List(TestVariantStabilityAnalysis), TestStabilityCriteria)
Raises: StepFailure if the query is invalid or the service returns unexpected responses.
— def query_test_history(self, test_id, project='chromium', sub_realm=None, variant_predicate=None, partition_time_range=None, submitted_filter=None, page_size=1000, page_token=None):
A wrapper method to use the `luci.analysis.v1.TestHistory` `Query` API.
Args:
- test_id (str): test ID to query.
- project (str): Optional. The LUCI project to query the history from.
- sub_realm (str): Optional. The realm without the ":" prefix, e.g. "try". By default all test verdicts will be returned.
- variant_predicate (luci.analysis.v1.VariantPredicate): Optional. The subset of test variants to request history for. By default all will be returned.
- partition_time_range (luci.analysis.v1.common.TimeRange): Optional. A range of timestamps to query the test history from. By default all will be returned (at most the most recent 90 days, per TTL).
- submitted_filter (luci.analysis.v1.common.SubmittedFilter): Optional. Whether test verdicts generated by code with unsubmitted changes (e.g. Gerrit changes) should be included in the response. By default all will be returned.
- page_size (int): Optional. The number of results per page in the response. If the number of results satisfying the given configuration exceeds this number, only page_size results will be available in the response. Defaults to 1000.
- page_token (str): Optional. For instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page.
Returns: (list of parsed luci.analysis.v1.TestVerdict objects, next page token)
— def query_variants(self, test_id, project='chromium', sub_realm=None, variant_predicate=None, page_size=1000, page_token=None):
A wrapper method to use the `luci.analysis.v1.TestHistory` `QueryVariants` API.
Args:
- test_id (str): test ID to query.
- project (str): Optional. The LUCI project to query the variants from.
- sub_realm (str): Optional. The realm without the ":" prefix, e.g. "try". By default all test verdicts will be returned.
- variant_predicate (luci.analysis.v1.VariantPredicate): Optional. The subset of test variants to request history for. By default all will be returned.
- page_size (int): Optional. The number of results per page in the response. If the number of results satisfying the given configuration exceeds this number, only page_size results will be available in the response. Defaults to 1000.
- page_token (str): Optional. For instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page.
Returns: (list of VariantInfo { variant_hash: str, variant: { def: dict } }, next page token)
— def rule_name_to_cluster_name(self, rule):
Convert the resource name for a rule to its corresponding cluster.
Args:
- rule (str): Format: projects/{project}/rules/{rule_id}
Returns: cluster (str): Format: projects/{project}/clusters/{cluster_algorithm}/{cluster_id}.
recipe_modules / luci_config
DEPS: buildbucket, file, proto, step
class LuciConfigApi(RecipeApi):
Module for polling and parsing luci config files via the luci-config API.
Depends on the `prpc` binary being available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc
— def buildbucket(self, **kwargs):
— def commit_queue(self, config_name: (str | None)=None, **kwargs):
— def fetch_config(self, config_name: str, message_type: MessageType, project: (str | None)=None, local_dir: (config_types.Path | None)=None, allow_unknown_fields: bool=False, allow_cache: bool=True):
Fetch and parse config file from the luci-config API as a proto.
Since configs are unlikely to change significantly during a build and to simplify test data, results are cached.
Args:
- config_name: The name of the config file to fetch, e.g. "commit-queue.cfg".
- message_type: The Python type corresponding to the config's protobuf message type.
- project: The name of the LUCI project to fetch the config from, e.g. "fuchsia". Defaults to the project that the current Buildbucket build is running in.
- local_dir: If specified, assumed to point to a local directory of files generated by lucicfg. The specified config file will be read from the corresponding local file rather than fetching it from the LUCI Config service.
- allow_unknown_fields: Whether to allow unknown fields rather than erroring out on them. This is useful when reading config files for which the corresponding proto file that's been copied into the recipes repo may be out of date. This option should be used with care, as it strips potentially important information.
- allow_cache: Allow retrieving from a cache if we've already retrieved this config before.
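For example (here `cq_pb2` stands in for a generated proto module and is an assumption):
cfg = api.luci_config.fetch_config(
    'commit-queue.cfg', cq_pb2.Config, project='fuchsia')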
— def fetch_config_raw(self, config_name: str, project: (str | None)=None, local_dir: (config_types.Path | None)=None, allow_cache: bool=True):
Fetch a config file from the luci-config API as raw text.
Since configs are unlikely to change significantly during a build and to simplify test data, results are cached.
Args:
- config_name: The name of the config file to fetch, e.g. "commit-queue.cfg".
- project: The name of the LUCI project to fetch the config from, e.g. "fuchsia". Defaults to the project that the current Buildbucket build is running in.
- local_dir: If specified, assumed to point to a local directory of files generated by lucicfg. The specified config file will be read from the corresponding local file rather than fetching it from the LUCI Config service.
- allow_cache: Allow retrieving from a cache if we've already retrieved this config before.
— def milo(self, **kwargs):
— def scheduler(self, **kwargs):
recipe_modules / milo
DEPS: buildbucket, json, path, platform, raw_io, resultdb, runtime, step, uuid
API for specifying Milo behavior.
A module for interacting with Milo.
@property
— def current_results_url(self):
Returns a Milo URL to view the current invocation's results.
e.g. https://luci-milo.appspot.com/ui/inv/some-inv-name
@property
— def host(self):
Hostname of Milo instance corresponding to the current build.
Defaults to the prod instance, but will try to detect when the dev instance is in use.
— def show_blamelist_for(self, gitiles_commits):
Specifies which commits and repos Milo should show a blamelist for.
If not set, Milo will only show a blamelist for the main repo in which this build was run.
Args:
- gitiles_commits: A list of buildbucket.common_pb2.GitilesCommit messages or dicts of the same structure. Each commit must have host, project and id. ID must match r'^[0-9a-f]{40}$' (git revision).
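For example (the host, project and revision values are illustrative):
api.milo.show_blamelist_for([{
    'host': 'chromium.googlesource.com',
    'project': 'src',
    'id': 'a' * 40,  # must be a full 40-hex-digit git revision
}])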
recipe_modules / nodejs
DEPS: cipd, context, path, platform
@contextlib.contextmanager
— def __call__(self, version, path=None, cache=None):
Installs a Node.js toolchain and activates it in the environment.
Installs it under the given `path`, defaulting to `[CACHE]/nodejs`. Various cache directories used by npm are placed under `cache`, defaulting to `[CACHE]/npmcache`.
`version` will be used to construct the CIPD package version for packages under https://chrome-infra-packages.appspot.com/p/infra/3pp/tools/nodejs/.
To reuse the Node.js toolchain deployment and npm caches across builds, declare the corresponding named caches in Buildbucket configs. E.g. when using defaults:
luci.builder(
    ...
    caches = [
        swarming.cache("nodejs"),
        swarming.cache("npmcache"),
    ],
)
Args:
- version (str) - a Node.js version to install (e.g. `17.1.0`).
- path (Path) - a path to install Node.js into.
- cache (Path) - a path to put Node.js caches under.
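Typical usage, as a minimal sketch:
def RunSteps(api):
  with api.nodejs('17.1.0'):
    api.step('npm version', ['npm', '--version'])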
recipe_modules / path
All functions related to manipulating paths in recipes.
Recipes handle paths a bit differently than python does. All path manipulation in recipes revolves around Path objects. These objects store a base path (always absolute), plus a list of components to join with it. New paths can be derived by calling the .join method with additional components.
In this way, all paths in Recipes are absolute, and are constructed from a small collection of anchor points. The built-in anchor points are:
- `api.path.start_dir` - This is the directory that the recipe started in. It's similar to `cwd`, except that it's constant.
- `api.path.cache_dir` - This directory is provided by whatever's running the recipe. Files and directories created under here /may/ be evicted in between runs of the recipe (i.e. to relieve disk pressure).
- `api.path.cleanup_dir` - This directory is provided by whatever's running the recipe. Files and directories created under here /are guaranteed/ to be evicted in between runs of the recipe. Additionally, this directory is guaranteed to be empty when the recipe starts.
- `api.path.tmp_base_dir` - This directory is the system-configured temp dir. This is a weaker form of 'cleanup', and its use should be avoided. This may be removed in the future (or converted to an alias of 'cleanup').
- `api.path.checkout_dir` - This directory is set by various checkout modules in recipes. It was originally intended to make recipes easier to read and make code somewhat generic or homogeneous, but this was a mistake. New code should avoid 'checkout', and instead just explicitly pass paths around. This path may be removed in the future.
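A small sketch of deriving concrete paths from these anchors (using the file module documented above):
def RunSteps(api):
  build_dir = api.path.start_dir / 'build'
  api.file.ensure_directory('make build dir', build_dir)
  scratch = api.path.cleanup_dir / 'scratch.txt'
  api.file.write_text('write scratch', scratch, 'hello')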
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def __contains__(self, pathname: NamedBasePathsType):
This method is DEPRECATED.
If `pathname` is "checkout", returns True iff checkout_dir is set. If you want to check whether checkout_dir is set, use `api.path.checkout_dir is not None` or similar instead.
Returns True for all other `pathname` values in NamedBasePaths.
Returns False for all other values.
In the past, the base paths that this module knew about were extensible via a very complicated 'config' system. All of that has been removed, but this method remains for now.
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def abs_to_path(self, abs_string_path: str):
Converts an absolute path string `abs_string_path` to a real `Path` object, using the most appropriate known base path.
- abs_string_path MUST be an absolute path
- abs_string_path MUST be rooted in one of the configured base paths known to the path module.
This method will find the longest match in all the following:
- module resource paths
- recipe resource paths
- repo paths
- home_dir
- start_dir
- tmp_base_dir
- cleanup_dir
Example:
# assume [START_DIR] == "/basis/dir/for/recipe"
api.path.abs_to_path("/basis/dir/for/recipe/some/other/dir") ->
Path("[START_DIR]/some/other/dir")
Raises a ValueError if the preconditions are not met; otherwise returns the Path object.
— def abspath(self, path: (config_types.Path | str)):
Equivalent to os.path.abspath.
— def assert_absolute(self, path: (config_types.Path | str)):
Raises AssertionError if the given path is not an absolute path.
Args:
- path - The path to check.
— def basename(self, path: (config_types.Path | str)):
Equivalent to os.path.basename.
@property
— def cache_dir(self):
This directory is provided by whatever's running the recipe.
When the recipe executes via Buildbucket, directories under here map to 'named caches' which the Build has set. These caches are preserved locally on the machine executing this recipe, and are restored for subsequent recipe executions on the same machine which request the same named cache.
By default, Buildbucket installs a cache named 'builder' which is an immediate subdirectory of cache_dir, and will attempt to be persisted between executions of recipes on the same Buildbucket builder which use the same machine. So, if you are just looking for a place to put files which may be persisted between builds, use:
api.path.cache_dir/'builder'
as the base Path.
Note that directories created under here /may/ be evicted in between runs of the recipe (i.e. to relieve disk pressure).
— def cast_to_path(self, strpath: str):
This returns a Path for strpath which can be used anywhere a Path is required.
If `strpath` is not an absolute path (e.g. rooted with a valid Windows drive or a '/' for non-Windows paths), this will raise ValueError.
This implicitly tries abs_to_path prior to returning a drive-rooted Path. This means that if strpath is a subdirectory of a known path (say, cache_dir), the returned Path will be based on that known path. This is important for test compatibility.
@checkout_dir.setter
— def checkout_dir(self, path: config_types.Path):
Sets the global variable `api.path.checkout_dir` to the given path.
@property
— def cleanup_dir(self):
This directory is guaranteed to be cleaned up (eventually) after the execution of this recipe.
This directory is guaranteed to be empty when the recipe starts.
— def dirname(self, path: (config_types.Path | str)):
For "foo/bar/baz", return "foo/bar".
This corresponds to os.path.dirname().
The type of the return value matches the type of the argument.
Args: path: path to take directory name of
Returns dirname of path
— def exists(self, path):
Equivalent to os.path.exists.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def expanduser(self, path):
Do not use this; use `api.path.home_dir` instead.
This ONLY handles `path` == "~", and returns `str(api.path.home_dir)`.
@property
— def home_dir(self):
This is the path to the current $HOME directory.
It is generally recommended to avoid using this, because it is an indicator that the recipe is non-hermetic.
— def initialize(self):
This is called by the recipe engine immediately after __init__(), but with `self._paths_client` initialized.
— def isdir(self, path):
Equivalent to os.path.isdir.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def isfile(self, path):
Equivalent to os.path.isfile.
The presence or absence of paths can be mocked during the execution of the recipe by using the mock_* methods.
— def join(self, path, *paths):
Equivalent to os.path.join.
Note that Path objects returned from this module (e.g. api.path.start_dir) have a built-in join method (e.g. `new_path = p.joinpath('some', 'name')`). Many recipe modules expect Path objects rather than strings. Using this `join` method gives you raw path joining functionality and returns a string.
If your path is rooted in one of the path module's root paths (i.e. those retrieved with api.path.something), then you can convert from a string path back to a Path with the `abs_to_path` method.
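For example:
str_path = api.path.join(str(api.path.start_dir), 'out', 'data.txt')
# str_path is a plain string; convert it back into a Path object:
data_path = api.path.abs_to_path(str_path)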
— def mkdtemp(self, prefix: str=tempfile.template):
Makes a new temporary directory, returns Path to it.
Args:
- prefix - a tempfile template for the directory name (defaults to "tmp").
Returns a Path to the new directory.
— def mkstemp(self, prefix: str=tempfile.template):
Makes a new temporary file, returns Path to it.
Args:
- prefix - a tempfile template for the file name (defaults to "tmp").
Returns a Path to the new file.
*** promo NOTE: Unlike tempfile.mkstemp, the file's file descriptor is closed. If you need the full security properties of mkstemp, please outsource this to e.g. either a resource script of your recipe module or recipe.
— def mock_add_directory(self, path: config_types.Path):
For testing purposes, mark that directory |path| exists.
— def mock_add_file(self, path: config_types.Path):
For testing purposes, mark that file |path| exists.
— def mock_add_paths(self, path: config_types.Path, kind: FileType=FileType.FILE):
For testing purposes, mark that |path| exists.
— def mock_copy_paths(self, source: config_types.Path, dest: config_types.Path):
For testing purposes, copy |source| to |dest|.
— def mock_remove_paths(self, path: config_types.Path, should_remove: Callable[[str], bool]=(lambda p: True)):
For testing purposes, mark that |path| doesn't exist.
Args: path: The path to remove. should_remove: Called for every candidate path. Return True to remove this path.
— def normpath(self, path):
Equivalent to os.path.normpath.
@property
— def pardir(self):
Equivalent to os.pardir.
@property
— def pathsep(self):
Equivalent to os.pathsep.
— def realpath(self, path: (config_types.Path | str)):
Equivalent to os.path.realpath.
— def relpath(self, path, start):
Roughly equivalent to os.path.relpath.
Unlike os.path.relpath, `start` is required. If you want the 'current directory', use the `recipe_engine/context` module's `cwd` property.
@property
— def sep(self):
Equivalent to os.sep.
— def split(self, path):
For "foo/bar/baz", return ("foo/bar", "baz").
This corresponds to os.path.split().
The type of the first item in the return value matches the type of the argument.
Args: path (Path or str): path to split into directory name and basename
Returns (dirname(path), basename(path)).
— def splitext(self, path: (config_types.Path | str)):
For "foo/bar.baz", return ("foo/bar", ".baz").
This corresponds to os.path.splitext().
The type of the first item in the return value matches the type of the argument.
Args: path: Path to split into name and extension
Returns: (name, extension_including_dot).
@property
— def start_dir(self):
This is the directory that the recipe started in. It's similar to `cwd`, except that it's constant for the duration of the entire program.
If you want to modify the current working directory for a set of steps, See the 'recipe_engine/context' module which allows modifying the cwd safely via a context manager.
@property
— def tmp_base_dir(self):
This directory is the system-configured temp dir.
This is a weaker form of 'cleanup', and its use should be avoided. This may be removed in the future (or converted to an alias of 'cleanup').
recipe_modules / platform
Mockable system platform identity functions.
class PlatformApi(RecipeApi):
Provides host-platform-detection properties.
Mocks:
- name (str): A value equivalent to something that might be returned by sys.platform.
- bits (int): Either 32 or 64.
@property
— def arch(self):
Returns the current CPU architecture.
Can return "arm" or "intel".
@property
— def bits(self):
Returns the bitness of the userland for the current system (either 32 or 64 bit).
TODO: If anyone needs to query for the kernel bitness, another accessor should be added.
@property
— def cpu_count(self):
The number of logical CPU cores (i.e. including hyper-threaded cores), according to `psutil.cpu_count(True)`.
— def initialize(self):
@property
— def is_linux(self):
Returns True iff the recipe is running on Linux.
@property
— def is_mac(self):
Returns True iff the recipe is running on OS X.
@property
— def is_win(self):
Returns True iff the recipe is running on Windows.
@property
— def name(self):
Returns the current platform name which will be in:
- win
- mac
- linux
@staticmethod
— def normalize_platform_name(plat: str):
One of python's sys.platform values -> 'win', 'linux' or 'mac'.
@property
— def total_memory(self):
The total physical memory in MiB.
The return type is int.
This is equivalent to `psutil.virtual_memory().total / (1024 ** 2)`.
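A small sketch of branching on the host platform (the `my_tool` binary is hypothetical):
def RunSteps(api):
  exe = 'my_tool.exe' if api.platform.is_win else 'my_tool'
  if api.platform.arch == 'intel' and api.platform.bits == 64:
    api.step('run tool', [exe])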
recipe_modules / properties
Provides access to the recipes input properties.
Every recipe is run with a JSON object called "properties". These contain all inputs to the recipe. Some common examples would be properties like "revision", which the build scheduler sets to tell a recipe to build/test a certain revision.
The properties that affect a particular recipe are defined by the recipe itself, and this module provides access to them.
Recipe properties are read-only; the values obtained via this API reflect the values provided to the recipe engine at the beginning of execution. There is intentionally no API to write property values (lest they become a kind of random-access global variable).
class PropertiesApi(RecipeApi, collections.abc.Mapping):
PropertiesApi implements all the standard Mapping functions, so you can use it like a read-only dict.
— def thaw(self):
Returns a read-write copy of all of the properties.
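Example (illustrative sketch; the 'revision' property is hypothetical):
def RunSteps(api):
  # Mapping-style access, with a default for unset properties.
  revision = api.properties.get('revision', 'HEAD')
  # thaw() yields a mutable copy; changes do NOT affect the real inputs.
  props = api.properties.thaw()
  props['revision'] = 'refs/heads/main'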
recipe_modules / proto
Methods for producing and consuming protobuf data to/from steps and the filesystem.
@staticmethod
— def decode(data, msg_class, codec, **decoding_kwargs):
Decodes a proto message from a string.
Args:
- msg_class (protobuf Message subclass) - The message type to decode.
- codec ('BINARY'|'JSONPB'|'TEXTPB') - The encoder to use.
- decoding_kwargs - Passed directly to the chosen decoder. See input placeholder for details.
Returns the decoded proto object.
@staticmethod
— def encode(proto_msg, codec, **encoding_kwargs):
Encodes a proto message to a string.
Args:
- codec ('BINARY'|'JSONPB'|'TEXTPB') - The encoder to use.
- encoding_kwargs - Passed directly to the chosen encoder. See output placeholder for details.
Returns the encoded proto message.
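Example of round-tripping a message through encode/decode (a minimal sketch; MyMessage stands in for any compiled proto message class):
msg = MyMessage(field=10)
blob = api.proto.encode(msg, 'JSONPB')
decoded = api.proto.decode(blob, MyMessage, 'JSONPB')
assert decoded == msg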
@returns_placeholder
— def input(self, proto_msg, codec, **encoding_kwargs):
A placeholder which will expand to a file path containing the encoded proto_msg.
Example:
proto_msg = MyMessage(field=10)
api.step('step name', ['some_cmd', api.proto.input(proto_msg, 'BINARY')])
Args:
- proto_msg (message.Message) - The message data to encode.
- codec ('BINARY'|'JSONPB'|'TEXTPB') - The encoder to use.
- encoding_kwargs - Passed directly to the chosen encoder. See:
- BINARY: google.protobuf.message.Message.SerializeToString
- 'deterministic' defaults to True.
- JSONPB: google.protobuf.json_format.MessageToJson
- 'preserving_proto_field_name' defaults to True.
- 'sort_keys' defaults to True.
- 'indent' defaults to 0.
- TEXTPB: google.protobuf.text_format.MessageToString
Returns an InputPlaceholder.
@returns_placeholder
— def output(self, msg_class, codec, add_json_log=True, name=None, leak_to=None, **decoding_kwargs):
A placeholder which expands to a file path and then reads an encoded proto back from that location when the step finishes.
Args:
- msg_class (protobuf Message subclass) - The message type to decode.
- codec ('BINARY'|'JSONPB'|'TEXTPB') - The encoder to use.
- add_json_log (True|False|'on_failure') - Log a copy of the parsed proto in JSONPB form to a step link named name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
- leak_to (Optional[Path]) - This path will be used in place of a random temporary file, and the file will not be deleted at the end of the step.
- decoding_kwargs - Passed directly to the chosen decoder. See:
- BINARY: google.protobuf.message.Message.Parse
- JSONPB: google.protobuf.json_format.Parse
- 'ignore_unknown_fields' defaults to True.
- TEXTPB: google.protobuf.text_format.Parse
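A minimal sketch combining both placeholders, assuming a hypothetical my_tool that reads a MyMessage from --input and writes one to --output; the access path for the parsed result mirrors result.json.output and is an assumption here:
result = api.step('run tool', [
    'my_tool',
    '--input', api.proto.input(MyMessage(field=10), 'JSONPB'),
    '--output', api.proto.output(MyMessage, 'JSONPB'),
])
out_msg = result.proto.output  # assumed result namespace for this module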
recipe_modules / random
Allows randomness in recipes.
This module sets up an internal instance of 'random.Random'. In tests, this is seeded with 1234, or a seed of your choosing (using the test_api's seed() method).
All members of random.Random are exposed via this API with getattr.
NOTE: This is based on the python random module, and so all caveats which apply there also apply to this (i.e. don't use it for anything resembling crypto).
Example:
def RunSteps(api):
  my_list = list(range(100))  # shuffle needs a mutable list
  api.random.shuffle(my_list)
  # my_list is now random!
— def __getattr__(self, name):
Access a member of random.Random.
recipe_modules / raw_io
Provides objects for reading and writing raw data to and from steps.
@returns_placeholder
@staticmethod
— def input(data, suffix='', name=None):
Returns a Placeholder for use as a step argument.
This placeholder can be used to pass data to steps. The recipe engine will dump the 'data' into a file, and pass the filename to the command line argument.
data MUST be either of type 'bytes' (recommended) or type 'str' in Python 3. Respectively, 'str' or 'unicode' in Python 2.
If the provided data is of type 'str', it is encoded to bytes assuming utf-8 encoding. Please switch to input_text(...) instead in this case.
If 'suffix' is not '', it will be used when the engine calls tempfile.mkstemp.
See examples/full.py for usage example.
@returns_placeholder
@staticmethod
— def input_text(data, suffix='', name=None):
Returns a Placeholder for use as a step argument.
Similar to input(), but ensures that 'data' is valid utf-8 text. Any non-utf-8 characters will be replaced with �.
data MUST be either of type 'bytes' or type 'str' (recommended) in Python 3. Respectively, 'str' or 'unicode' in Python 2.
If the provided data is of type 'bytes', it is expected to be valid utf-8 encoded data. Note that the support of type 'bytes' is for backwards compatibility with Python 2; we may drop this support in the future after recipes become Python 3 only.
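Example (illustrative sketch; cat is used purely for demonstration):
api.step('show greeting', ['cat', api.raw_io.input_text('hello world\n')])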
@returns_placeholder
@staticmethod
— def output(suffix='', leak_to=None, name=None, add_output_log=False):
Returns a Placeholder for use as a step argument, or for std{out,err}.
If 'leak_to' is None, the placeholder is backed by a temporary file with a suffix 'suffix'. The file is deleted when the step finishes.
If 'leak_to' is not None, then it should be a Path, and the placeholder redirects IO to a file at that path. Once the step finishes, the file is NOT deleted (i.e. it's 'leaking'). 'suffix' is ignored in that case.
Args:
- add_output_log (True|False|'on_failure') - Log a copy of the output to a step link named name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
@returns_placeholder
— def output_dir(self, leak_to=None, name=None):
Returns a directory Placeholder for use as a step argument.
If leak_to is None, the placeholder is backed by a temporary dir. Otherwise leak_to must be a Path; if the path doesn't exist, it will be created.
The placeholder value attached to the step will be a dictionary-like mapping of relative paths to the contents of the file. The actual reading of the file data is done lazily (i.e. on first access).
Relative paths are stored with the native slash delimitation (i.e. forward slash on *nix, backslash on Windows).
Example:
result = api.step('name', [..., api.raw_io.output_dir()])
# some time later; The read of 'some/file' happens now:
some_file = api.path.join('some', 'file')
assert result.raw_io.output_dir[some_file] == 'contents of some/file'
# data for 'some/file' is cached now; To free it from memory (and make
# all further reads of 'some/file' an error):
del result.raw_io.output_dir[some_file]
result.raw_io.output_dir[some_file]  # now raises KeyError
@returns_placeholder
@staticmethod
— def output_text(suffix='', leak_to=None, name=None, add_output_log=False):
Returns a Placeholder for use as a step argument, or for std{out,err}.
Similar to output(), but uses an OutputTextPlaceholder, which expects utf-8 encoded text. Similar to input(), but tries to decode the resulting data as utf-8 text, replacing any decoding errors with �.
Args:
- add_output_log (True|False|'on_failure') - Log a copy of the output to a step link named name. If this is 'on_failure', only create this log when the step has a non-SUCCESS status.
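Example of capturing a step's stdout as text (a minimal sketch; the command is hypothetical):
result = api.step(
    'list dir', ['ls', '-la'],
    stdout=api.raw_io.output_text(add_output_log=True))
listing = result.stdout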
recipe_modules / resultdb
DEPS: context, futures, json, raw_io, step, time, uuid
API for interacting with the ResultDB service.
Requires the rdb command in $PATH: https://godoc.org/go.chromium.org/luci/resultdb/cmd/rdb
class ResultDBAPI(RecipeApi):
A module for interacting with ResultDB.
— def assert_enabled(self):
— def config_test_presentation(self, column_keys=(), grouping_keys=('status',)):
Specifies how the test results should be rendered.
Args:
- column_keys: A list of keys that will be rendered as 'columns'. status is always the first column and name is always the last column (you don't need to specify them). A key must be one of the following:
  1. 'v.{variant_key}': variant.def[variant_key] of the test variant (e.g. v.gpu).
- grouping_keys: A list of keys that will be used for grouping tests. A key must be one of the following:
  1. 'status': status of the test variant.
  2. 'name': name of the test variant.
  3. 'v.{variant_key}': variant.def[variant_key] of the test variant (e.g. v.gpu).
  Caveat: test variants with only expected results are not affected by this setting and are always in their own group.
@property
— def current_invocation(self):
@property
— def enabled(self):
— def exclude_invocations(self, invocations, step_name=None):
Shortcut for resultdb.update_included_invocations().
— def exonerate(self, test_exonerations, step_name=None):
Exonerates test variants in the current invocation.
Args: test_exonerations (list): A list of test_result_pb2.TestExoneration. step_name (str): name of the step.
— def get_included_invocations(self, inv_name=None, step_name=None):
Returns names of included invocations of the input invocation.
Args: inv_name (str): the name of the input invocation. If input is None, will use current invocation. step_name (str): name of the step.
Returns: A list of invocation name strs.
— def get_invocation_instructions(self, inv_name=None, step_name=None):
Returns instructions from the input invocation.
Args: inv_name (str): the name of the input invocation. If input is None, will use current invocation. step_name (str): name of the step.
Returns: instruction_pb2.Instructions of the invocation requested.
— def include_invocations(self, invocations, step_name=None):
Shortcut for resultdb.update_included_invocations().
— def invocation_ids(self, inv_names):
Returns invocation IDs by parsing invocation names.
Args: inv_names (list of str): ResultDB invocation names.
Returns: A list of invocation_ids.
— def query(self, inv_ids, variants_with_unexpected_results=False, merge=False, limit=None, step_name=None, tr_fields=None, test_invocations=None, test_regex=None):
Returns test results in the invocations.
Most users will be interested only in results of test variants that had unexpected results. This can be achieved by passing variants_with_unexpected_results=True. This significantly reduces output size and latency.
Example:
results = api.resultdb.query(
    [
        # Invocation ID for a Swarming task.
        'task-chromium-swarm.appspot.com-deadbeef',
        # Invocation ID for a Buildbucket build.
        'build-234298374982',
    ],
    variants_with_unexpected_results=True,
)
Args:
- inv_ids (list of str): IDs of the invocations.
- variants_with_unexpected_results (bool): if True, return only test results from variants that have unexpected results.
- merge (bool): if True, return test results as if all invocations are one; otherwise, results will be ordered by invocation.
- limit (int): maximum number of test results to return. Unlimited if 0. Defaults to 1000.
- step_name (str): name of the step.
- tr_fields (list of str): test result fields in the response. Test result name will always be included regardless of this param value.
- test_invocations (dict {invocation_id: api.Invocation}): Default test data to be used to simulate the step in tests. The format is the same as what this method returns.
- test_regex (str): A regular expression of the relevant test variants to query for.
Returns: A dict {invocation_id: api.Invocation}.
— def query_new_test_variants(self, invocation: str, baseline: str, step_name: str=None, step_test_data: dict=None):
Query ResultDB for new tests.
Makes a QueryNewTestVariants rpc.
Args:
- invocation: Name of the invocation, e.g. "invocations/{id}".
- baseline: The baseline to compare test variants against, to determine if they are new, e.g. "projects/{project}/baselines/{baseline_id}".
Returns: A QueryNewTestVariantsResponse proto message with is_baseline_ready and new_test_variants.
— def query_test_result_statistics(self, invocations=None, step_name=None):
Retrieve stats of test results for the given invocations.
Makes a call to the QueryTestResultStatistics API. Returns stats for all given invocations, including those included indirectly.
Args: invocations (list): A list of the invocations to query statistics for. If None, the current invocation will be used. step_name (str): name of the step.
Returns: A QueryTestResultStatisticsResponse proto message with statistics for the queried invocations.
— def query_test_results(self, invocations, test_id_regexp=None, variant_predicate=None, field_mask_paths=None, page_size=100, page_token=None, step_name=None):
Retrieve test results from an invocation, recursively.
Makes a call to QueryTestResults rpc. Returns a list of test results for the invocations and matching the given filters.
Args:
- invocations (list of str): retrieve the test results included in these invocations.
- test_id_regexp (str): the subset of test IDs to request history for. Defaults to None.
- variant_predicate (resultdb.proto.v1.predicate.VariantPredicate): the subset of test variants to request history for. Defaults to None, but specifying will improve runtime.
- field_mask_paths (list of str): test result fields in the response. Test result name will always be included regardless of this param value.
- page_size (int): the maximum number of variants to return. The service may return fewer than this value. The maximum value is 1000; values above 1000 will be coerced to 1000. Defaults to 100.
- page_token (str): for instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page.
- step_name (str): name of the step.
Returns: A QueryTestResultsResponse proto message with test_results and next_page_token.
For value format, see the QueryTestResultsResponse message (https://bit.ly/3dsChbo).
— def query_test_variants(self, invocations, test_variant_status=None, field_mask_paths=None, page_size=100, page_token=None, step_name=None):
Retrieve test variants from an invocation, recursively.
Makes a call to QueryTestVariants rpc. Returns a list of test variants for the invocations and matching the given filters.
Args:
- invocations (list of str): retrieve the test results included in these invocations.
- test_variant_status (resultdb.proto.v1.test_variant.TestVariantStatus): Use the UNEXPECTED_MASK status to retrieve only variants with non-EXPECTED status.
- field_mask_paths (list of str): test variant fields in the response. Test id, variantHash and status will always be included. Example: use ["test_id", "variant", "status", "sources_id"] to exclude results from the response. (Note that test_id and status are still specified for clarity.)
- page_size (int): the maximum number of variants to return. The service may return fewer than this value. The maximum value is 1000; values above 1000 will be coerced to 1000. Defaults to 100.
- page_token (str): for instances in which the results span multiple pages, each response will contain a page token for the next page, which can be passed in to the next request. Defaults to None, which returns the first page.
- step_name (str): name of the step.
Returns: A QueryTestVariantsResponse proto message with test_results and next_page_token.
For value format, see the QueryTestVariantsResponse message (http://shortn/_hv3edsXidO).
— def unwrap(self, cmd: list[str]):
Reverses the wrap command.
If the command is wrapped with the rdb command and delimiter, this will return the unwrapped command.
Args: cmd (list of strings): the command line to attempt to unwrap
— def update_included_invocations(self, add_invocations=None, remove_invocations=None, step_name=None):
Add and/or remove included invocations to/from the current invocation.
Args: add_invocations (list of str): invocation IDs to add to the current invocation. remove_invocations (list of str): invocation IDs to remove from the current invocation.
This updates the inclusions of the current invocation specified in the LUCI_CONTEXT.
— def update_invocation(self, parent_inv='', step_name=None, source_spec=None, is_source_spec_final=None, baseline_id=None, instructions=None, raise_on_failure=True):
Makes a call to the UpdateInvocation API to update the invocation
Args:
parent_inv (str): the name of the invocation to be updated.
step_name (str): name of the step.
source_spec (luci.resultdb.v1.SourceSpec): The source information
to apply to the given invocation.
is_source_spec_final (bool): Whether the source spec is final and won't
be changed again.
baseline_id (str): Baseline identifier for this invocation, usually of
the format {buildbucket bucket}:{buildbucket builder name}. For example,
'try:linux-rel'. Baselines are used to detect new tests in invocations.
instructions (luci.resultdb.v1.Instructions): The reproduction
instructions for this invocation. It may contain step instructions and
test result instructions. The test instructions may contain instructions
for test results in this invocation and in included invocations.
raise_on_failure (bool): If set, and status is not SUCCESS, raise the appropriate exception.
— def upload_invocation_artifacts(self, artifacts, parent_inv=None, step_name=None):
Create artifacts with the given content type and contents or gcs_uri.
Makes a call to the BatchCreateArtifacts API. Returns the created artifacts.
Args:
- artifacts (dict): a collection of artifacts to create. Each key is an artifact ID; the corresponding value is a dict containing:
  - 'content_type' (optional)
  - one of 'contents' (binary string) or 'gcs_uri' (str)
- parent_inv (str): the name of the invocation to create the artifacts under. If None, the current invocation will be used.
- step_name (str): name of the step.
Returns: A BatchCreateArtifactsResponse proto message listing the artifacts that were created.
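Example of uploading one text artifact to the current invocation (a minimal sketch; the artifact ID and contents are hypothetical):
api.resultdb.upload_invocation_artifacts({
    'test_log': {
        'content_type': 'text/plain',
        'contents': b'all tests passed\n',
    },
})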
— def wrap(self, cmd, test_id_prefix='', base_variant=None, test_location_base='', base_tags=None, coerce_negative_duration=False, include=False, realm='', location_tags_file='', require_build_inv=True, exonerate_unexpected_pass=False, inv_properties='', inv_properties_file='', inherit_sources=False, sources='', sources_file='', baseline_id='', inv_extended_properties_dir=''):
Wraps the command with ResultSink.
Returns a command that, when executed, runs cmd in a go/result-sink environment. For example:
api.step('test', api.resultdb.wrap(['./my_test']))
Args:
cmd (list of strings): the command line to run.
test_id_prefix (str): a prefix to prepend to test IDs of test results
reported by cmd.
base_variant (dict): variant key-value pairs to attach to all test results
reported by cmd. If both base_variant and a reported variant have a
value for the same key, the reported one wins.
Example:
base_variant={
'bucket': api.buildbucket.build.builder.bucket,
'builder': api.buildbucket.builder_name,
}
test_location_base (str): the base path to prepend to the test location
file name with a relative path. The value must start with "//".
base_tags (list of (string, string)): tags to attach to all test results
reported by cmd. Each element is a tuple of (key, value), and a key
may be repeated.
coerce_negative_duration (bool): If true, negative duration values will
be coerced to 0. If false, tests results with negative duration values
will be rejected with an error.
include (bool): If true, a new invocation will be created and included
in the parent invocation.
realm (str): realm used for the new invocation created if include=True. Default is the current realm used in buildbucket.
location_tags_file (str): path to the file that contains test location
tags in JSON format.
require_build_inv(bool): flag to control if the build is required to have
an invocation.
exonerate_unexpected_pass(bool): flag to control whether to automatically
exonerate unexpected passes.
inv_properties(str): stringified JSON object that contains structured,
domain-specific properties of the invocation. When not specified,
invocation-level properties will not be updated.
inv_properties_file(string): Similar to inv_properties but takes a path
to the file that contains the JSON object. Cannot be used when
inv_properties is specified.
inherit_sources(bool): flag to enable inheriting sources from the parent
invocation.
sources(string): JSON-serialized luci.resultdb.v1.Sources object that
contains information about the code sources tested by the invocation.
Cannot be used when inherit_sources or sources_file is specified.
sources_file(string): Similar to sources, but takes a path to the
file that contains the JSON object. Cannot be used when
inherit_sources or sources is specified.
baseline_id(string): Baseline identifier for this invocation, usually of
the format {buildbucket bucket}:{buildbucket builder name}.
For example, 'try:linux-rel'.
inv_extended_properties_dir(str): Path to a directory that contains files
for the invocation's extended_properties in JSON format.
Only files directly under this dir with the extension ".jsonpb" will be
read. The filename after removing ".jsonpb" and the file content will be
added as a key-value pair to the invocation's extended_properties map.
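A minimal sketch of wrapping a test command with ResultSink and later recovering the original command (./my_test and the variant value are hypothetical):
wrapped = api.resultdb.wrap(
    ['./my_test'],
    base_variant={'builder': api.buildbucket.builder_name},
    include=True)
api.step('test', wrapped)
assert api.resultdb.unwrap(wrapped) == ['./my_test']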
recipe_modules / runtime
class RuntimeApi(RecipeApi):
This module assists in experimenting with production recipes, for example when migrating builders from Buildbot to the pure LUCI stack.
@property
— def in_global_shutdown(self):
True iff this recipe is currently in the 'grace_period' specified by LUCI_CONTEXT['deadline'].
This can occur when:
- The LUCI_CONTEXT has hit the 'soft_deadline'; OR
- The LUCI_CONTEXT has been 'canceled' and the recipe_engine has received a SIGTERM (on *nix) or Ctrl-Break (on Windows).
As of 2021Q2, while the recipe is in the grace_period, it can do anything
except for starting new steps (but it can e.g. update presentation of open
steps, or return RawResult from RunSteps). Attempting to start a step while
in the grace_period will cause the step to skip execution. When a signal is
received or the soft_deadline is hit, all currently running steps will be
signaled in turn (according to the LUCI_CONTEXT['deadline'] protocol).
It is good practice to ensure that recipes exit cleanly when canceled or time out, and this could be used anywhere to skip 'cleanup' behavior in 'finally' clauses or context managers.
https://chromium.googlesource.com/infra/luci/luci-py/+/HEAD/client/LUCI_CONTEXT.md
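Example of skipping best-effort cleanup during shutdown (a minimal sketch; run_tests and cleanup.sh are hypothetical):
try:
  run_tests(api)
finally:
  if not api.runtime.in_global_shutdown:
    api.step('cleanup', ['./cleanup.sh'])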
@property
— def is_experimental(self):
True if this recipe is currently running in experimental mode.
Typical usage is to modify steps which produce external side-effects so that non-production runs of the recipe do not affect production data.
Examples:
- Uploading to an alternate google storage file name when in non-prod mode
- Appending a 'non-production' tag to external RPCs
recipe_modules / scheduler
DEPS: buildbucket, json, platform, raw_io, step, time
API for interacting with the LUCI Scheduler service.
Depends on the 'prpc' binary being available in $PATH: https://godoc.org/go.chromium.org/luci/grpc/cmd/prpc
Documentation for the scheduler API is in https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/scheduler.proto
RPCExplorer is available at https://luci-scheduler.appspot.com/rpcexplorer/services/scheduler.Scheduler
class SchedulerApi(RecipeApi):
A module for interacting with LUCI Scheduler service.
— def emit_trigger(self, trigger, project, jobs, step_name=None):
Emits trigger to one or more jobs of a given project.
Args:
- trigger (Trigger): defines payload to trigger jobs with.
- project (str): name of the project in LUCI Config service, which is used by LUCI Scheduler instance. See https://luci-config.appspot.com/.
- jobs (iterable of str): job names per LUCI Scheduler config for the given project. These typically are the same as builder names.
— def emit_triggers(self, trigger_project_jobs, timestamp_usec=None, step_name=None):
Emits a batch of triggers spanning one or more projects.
Up to date documentation is at https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/scheduler.proto
Args:
trigger_project_jobs (iterable of tuples(trigger, project, jobs)):
each tuple corresponds to parameters of the emit_trigger API above.
timestamp_usec (int): unix timestamp in microseconds.
Useful for idempotency of calls if your recipe is doing its own retries.
https://chromium.googlesource.com/infra/luci/luci-go/+/main/scheduler/api/scheduler/v1/triggers.proto
@property
— def host(self):
Returns the backend hostname used by this module.
@property
— def invocation_id(self):
Returns the invocation ID of the current build as an int64 integer.
Returns None if the current build was not triggered by the scheduler.
@property
— def job_id(self):
Returns the job ID of the current build as "<project>/<job>".
Returns None if the current build was not triggered by the scheduler.
— def set_host(self, host):
Changes the backend hostname used by this module.
Args: host (str): server host (e.g. 'luci-scheduler.appspot.com').
@property
— def triggers(self):
Returns a list of triggers that triggered the current build.
A trigger is an instance of triggers_pb2.Trigger.
recipe_modules / service_account
DEPS: path, platform, raw_io, step
API for getting OAuth2 access tokens for LUCI tasks or private keys.
This is a thin wrapper over the luci-auth go executable (https://godoc.org/go.chromium.org/luci/auth/client/cmd/luci-auth).
Depends on luci-auth being in PATH.
class ServiceAccountApi(RecipeApi):
— def default(self):
Returns an account associated with the task.
On LUCI, this is the default account exposed through the LUCI_CONTEXT["local_auth"] protocol. When running locally, this is the account the user logged in via the "luci-auth login ..." command prior to running the recipe.
— def from_credentials_json(self, key_path):
Returns a service account based on a JSON credentials file.
This is the file generated by Cloud Console when creating a service account key. It contains the private key inside.
Args: key_path: (str|Path) object pointing to a service account JSON key.
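A minimal sketch of fetching a token for the task account (get_access_token on the returned account object is assumed from the luci-auth wrapper; verify against your engine version):
account = api.service_account.default()
token = account.get_access_token()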
recipe_modules / step
DEPS: context, path, platform, proto, warning
Step is the primary API for running steps (external programs, etc.)
@property
— def InfraFailure(self):
InfraFailure is a subclass of StepFailure, and will translate to a purple build.
This exception is raised from steps which are marked as infra_step when they fail.
@property
— def MAX_CPU(self):
Returns the maximum number of millicores this system has.
@property
— def MAX_MEMORY(self):
Returns the maximum amount of memory on the system in MB.
— def ResourceCost(self, cpu=500, memory=50, disk=0, net=0):
A structure defining the resources that a given step may need.
The four resources are:
- cpu (measured in millicores): The amount of cpu the step is expected to take. Defaults to 500.
- memory (measured in MB): The amount of memory the step is expected to take. Defaults to 50.
- disk (as percentage of max disk bandwidth): The amount of "disk bandwidth" the step is expected to take. This is a very simplified percentage covering IOPS, read/write bandwidth, seek time, etc. At 100, the step will run exclusively w.r.t. all other steps having a disk cost. At 0, the step will run regardless of other steps with disk cost.
- net (as percentage of max net bandwidth): The amount of "net bandwidth" the step is expected to take. This is a very simplified percentage covering bandwidth, latency, etc. and is indiscriminate of the remote hosts, network conditions, etc. At 100, the step will run exclusively w.r.t. all other steps having a net cost. At 0, the step will run regardless of other steps with net cost.
A step will run when ALL of the resources are simultaneously available. The Recipe Engine currently uses a greedy scheduling algorithm for picking the next step to run. If multiple steps are waiting for resources, this will pick the largest (cpu, memory, disk, net) step which fits the currently available resources and run that. The theory is that, assuming:
- Recipes are finite tasks, which aim to run ALL of their steps, and want to do so as quickly as possible. This is not a typical OS scheduling scenario where there's some window of time over which the recipe needs to be 'fair'. Additionally, recipes run with finite timeouts attached.
- The duration of a given step is the same regardless of when during the build it runs (i.e. running a step now vs later should take roughly the same amount of time).
It's therefore optimal to run steps as quickly as possible, to avoid wasting the timeout attached to the build.
Note that bool(ResourceCost(...)) is defined to be True if the ResourceCost has at least one non-zero cost, and False otherwise.
Args:
- cpu (int): Millicores that this step will take to run. See the MAX_CPU helper. A value higher than the maximum number of millicores on the system is equivalent to MAX_CPU.
- memory (int): Number of Mebibytes of memory this step will take to run. See MAX_MEMORY as a helper. A value higher than the maximum amount of memory on the system is equivalent to MAX_MEMORY.
- disk (int [0..100]): The disk IO resource this step will take as a percentage of the maximum system disk IO.
- net (int [0..100]): The network IO resource this step will take as a percentage of the maximum system network IO.
Returns:
a ResourceCost suitable for use with api.step(...)'s cost kwarg. Note that passing None to api.step for the cost kwarg is equivalent to ResourceCost(0, 0, 0, 0).
@property
— def StepFailure(self):
This is the base Exception class for all step failures.
It can be manually raised from recipe code to cause the build to turn red.
Usage:
raise api.StepFailure("some reason")
# ...or, to catch one:
try:
  ...
except api.StepFailure:
  ...
@property
— def StepWarning(self):
StepWarning is a subclass of StepFailure, and will translate to a yellow build.
— def __call__(self, name: str, cmd: (list[(((int | str) | Placeholder) | Path)] | None), ok_ret: ((Sequence[int] | Literal['any']) | Literal['all'])=(0,), infra_step: bool=False, raise_on_failure: bool=True, wrapper: Sequence[(((int | str) | Placeholder) | Path)]=(), timeout: ((int | timedelta) | None)=None, stdout: (Placeholder | None)=None, stderr: (Placeholder | None)=None, stdin: (Placeholder | None)=None, step_test_data: (Callable[([], StepTestData)] | None)=None, cost: _ResourceCost=_ResourceCost()):
Runs a step (subprocess).
Args:
- name (string): The name of this step.
- cmd (None|List[int|string|Placeholder|Path]): The program arguments to run.
  If None or an empty list, then this step just shows up in the UI but doesn't run anything (and always has a retcode of 0). See the empty() method on this module for a more useful version of this mode.
  Otherwise:
  - Numbers and strings are used as-is.
  - Placeholders are 'rendered' to a string (using their render() method). Placeholders are e.g. api.json.input() or api.raw_io.output(). Typically rendering these turns into an absolute path to a file on disk, which the program is expected to read from/write to.
  - Paths are rendered to an OS-native absolute path.
- ok_ret (tuple or set of ints, 'any', 'all'): allowed return codes. Any unexpected return codes will cause an exception to be thrown. If you pass in the value 'any' or 'all', the engine will allow any return code to be returned. Defaults to {0}.
- infra_step: Whether or not this is an infrastructure step. Failing infrastructure steps will place the step in an EXCEPTION state and if raise_on_failure is True an InfraFailure will be raised.
- raise_on_failure: Whether or not the step will raise on failure. If True, a StepFailure will be raised if the step's status is FAILURE, an InfraFailure will be raised if the step's status is EXCEPTION and a StepWarning will be raised if the step's status is WARNING. Regardless of the value of this argument, an InfraFailure will be raised if the step is canceled.
- wrapper: If supplied, a command to prepend to the executed step as a command wrapper.
- timeout: If supplied, the recipe engine will kill the step after the specified number of seconds. Also accepts a datetime.timedelta.
- stdout: Placeholder to put step stdout into. If used, stdout won't appear in annotator's stdout.
- stderr: Placeholder to put step stderr into. If used, stderr won't appear in annotator's stderr.
- stdin: Placeholder to read step stdin from.
- step_test_data (func -> recipe_test_api.StepTestData): A factory which returns a StepTestData object that will be used as the default test data for this step. The recipe author can override/augment this object in the GenTests function.
- cost (None|ResourceCost): The estimated system resource cost of this step. See ResourceCost(). The recipe_engine will prevent more than the machine's maximum resources worth of steps from running at once (i.e. steps will wait until there's enough resource available before starting). Waiting subprocesses are unblocked in capacity-available order. This means it's possible for pending tasks with large requirements to 'starve' temporarily while other smaller cost tasks run in parallel. Equal-weight tasks will start in FIFO order. Steps with a cost of None will NEVER wait (which is the equivalent of ResourceCost()). Defaults to ResourceCost(cpu=500, memory=50).
Returns a step_data.StepData for the running step.
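A minimal sketch of a step that tolerates retcode 1 and enforces a timeout (the command is hypothetical):
result = api.step(
    'compile', ['make', 'all'],
    ok_ret=(0, 1),
    timeout=600)
if result.retcode == 1:
  result.presentation.status = 'WARNING'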
@property
— def active_result(self):
The currently active (open) result from the last step that was run. This is a step_data.StepData object.
Allows you to do things like:
try:
  api.step('run test', [..., api.json.output()])
finally:
  result = api.step.active_result
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text
This will update the step_text of the test, even if the test fails. Without this api, the above code would look like:
try:
  result = api.step('run test', [..., api.json.output()])
except api.StepFailure as f:
  result = f.result
  raise
finally:
  if result.json.output:
    new_step_text = result.json.output['step_text']
    api.step.active_result.presentation.step_text = new_step_text
— def close_non_nest_step(self):
Call this to explicitly terminate the currently open non-nest step.
After calling this, api.step.active_step will return the current nest step context (if any).
No-op if there's no currently active non-nest step.
— def empty(self, name, status='SUCCESS', step_text=None, log_text=None, log_name='stdout', raise_on_failure=True):
Runs an "empty" step (one without any command).
This can be useful to insert a status step/message in the UI, or summarize some computation which occurred inside the recipe logic.
Args:
name (str) - The name of the step.
status (step.(INFRA_FAILURE|FAILURE|SUCCESS)) - The initial status for this step.
step_text (str) - Some text to set for the "step_text" on the presentation of this step.
log_text (str|list(str)) - Some text to set for the log of this step. If this is a list(str), it will be treated as separate lines of the log. Otherwise newlines will be respected.
log_name (str) - The name of the log to output log_text to.
raise_on_failure (bool) - If set, and status is not SUCCESS, raise the appropriate exception.
Returns step_data.StepData.
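Example of surfacing recipe-side computation in the UI (a minimal sketch; the values are hypothetical):
api.step.empty(
    'shard summary',
    step_text='2 shards passed',
    log_text=['shard-0: OK', 'shard-1: OK'],
    log_name='shards')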
— def funcall(self, name, func, *args, **kwargs):
Call a function and store the results and exception in a step.
Sample usage:
api.step.funcall(None, some_function, 4, json=True)
@contextlib.contextmanager
— def nest(self, name, status='worst'):
Nest allows you to nest steps hierarchically on the build UI.
This generates a dummy step with the provided name in the current namespace.
All other steps run within this with statement will be nested inside of this dummy step. Nested steps can also nest within each other.
The presentation for the dummy step can be updated (e.g. to add step_text, step_links, etc.) or set the step's status. If you do not set the status, it will be calculated from the status' of all the steps run within this one according to the status algorithm selected.
- If there's an active exception when leaving the with statement, the status will be one of FAILURE, WARNING, EXCEPTION, or CANCELED (depending on the type of exception and whether it resulted from the child step being canceled).
- Otherwise:
  - If the status algorithm is 'worst', it will assume the status of the worst child step. This is useful for when your nest step runs e.g. a bunch of test shards. If any shard fails, you want the nest step to fail as well.
  - If the status algorithm is 'last', it will assume the status of the last child step. This is useful for when you're using the nest step to encapsulate a sequence operation where only the last step's status really matters.
NOTE: Because the nest step allows action on the result of all steps run within it, a nest step will wait for ALL recipe code within it (including greenlets spawned with api.future.spawn!).
Example:
with api.step.nest('run shards'):  # status='worst' is the default.
  with api.defer.context() as defer:
    for shard in shards:
      defer(run_shard, shard)

# status='last'
with api.step.nest('do upload', status='last'):
  for attempt in range(num_attempts):
    try:
      do_upload()  # first one fails, but second succeeds.
      break
    except api.step.StepFailure:
      if attempt >= num_attempts - 1:
        raise

# manually adjust status
with api.step.nest('custom thing') as presentation:
  # stuff!
  presentation.status = 'FAILURE'  # or whatever
Args:
- name (str): The name of this step.
- status ('worst'|'last'): The algorithm to use to pick a presentation.status if the recipe doesn't set one explicitly.
Yields a StepPresentation for this dummy step, which you may update as you please.
— def raise_on_failure(self, result, status_override=None):
Raise an appropriate exception if a step is not successful.
Arguments:
- result - The step result.
- status_override - An optional status value to override the status present on the result of the step. This allows for the exception to include information about the result and be based off of the initial status even if the step's status has subsequently been changed, which aligns with the behavior that would occur if a step was executed with raise_on_failure=True and a step's status was changed in a finally block.
Returns: If the step's status is SUCCESS, the step result will be returned.
Raises:
- StepFailure if the step's status is FAILURE
- StepWarning if the step's status is WARNING
- InfraFailure if the step's status is EXCEPTION or CANCELED
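A minimal sketch of deferring the raise to inspect the result first (is_acceptable is a hypothetical predicate):
result = api.step('flaky tool', ['./flaky'], raise_on_failure=False)
if not is_acceptable(result):
  api.step.raise_on_failure(result)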
— def sub_build(self, name: str, cmd: (((int | str) | Placeholder) | Path), build: build_pb2.Build, raise_on_failure: bool=True, output_path: ((str | Path) | None)=None, legacy_global_namespace=False, merge_output_properties_to: (None | list[str])=None, timeout=None, step_test_data=None, cost=_ResourceCost()):
Launch a sub-build by invoking a LUCI executable. All steps in the sub-build will appear as child steps of this step (Merge Step).
See protocol: https://go.chromium.org/luci/luciexe
Example:
run_exe = api.cipd.ensure_tool(...)  # Install LUCI executable `run_exe`

# Basic Example: launch `run_exe` with empty initial build and
# default options.
ret = api.sub_build("launch sub build", [run_exe], build_pb2.Build())
sub_build = ret.step.sub_build  # access final build proto result

# Example: launch `run_exe` with input build to recipe and customized
# output path, cwd and cache directory.
with api.context(
    # Change the cwd of the launched LUCI executable
    cwd=api.path.start_dir / 'subdir',
    # Change the cache_dir of the launched LUCI executable. Defaults to
    # api.path.cache_dir if unchanged.
    luciexe=sections_pb2.LUCIExe(cache_dir=api.path.cache_dir / 'sub'),
):
  # Command executed:
  # `/path/to/run_exe --output [CLEANUP]/build.json --foo bar baz`
  ret = api.sub_build("launch sub build",
                      [run_exe, '--foo', 'bar', 'baz'],
                      api.buildbucket.build,
                      output_path=api.path.cleanup_dir / 'build.json')
  sub_build = ret.step.sub_build  # access final build proto result
Args:
- name (str): The name of this step.
- cmd (list[int|string|Placeholder|Path]): Same as the cmd parameter in the __call__ method except that None is NOT allowed. cmd[0] MUST denote a LUCI executable. The --output flag and its value should NOT be provided in the list. It should be provided via the keyword arg output_path instead.
- build (build_pb2.Build): The initial build state that the launched luciexe will start with. This method will clone the input build, modify the clone's fields and pass the clone to luciexe (see 'Invocation' section in http://go.chromium.org/luci/luciexe for what modification will be done).
- raise_on_failure: Whether or not the step will raise on failure. If True, a StepFailure will be raised if the step's status is FAILURE, an InfraFailure will be raised if the step's status is EXCEPTION and a StepWarning will be raised if the step's status is WARNING. Regardless of the value of this argument, an InfraFailure will be raised if the step is canceled.
- output_path (None|str|Path): The value of the --output flag. If provided, it should be a path to a non-existent file (its directory MUST exist). The extension of the path dictates the encoding format of the final build proto (see EXT_TO_CODEC). If not provided, the output will be a temp file with binary encoding.
- legacy_global_namespace (bool): If set, activates legacy global namespace merging. Only meant for legacy ChromeOS builders. See crbug.com/1310155.
- merge_output_properties_to: If set, will cause the sub-build's output properties to be merged into THIS build's output properties at the given path. The special token RootOutputProperties on StepApi means to merge the sub-build's properties to the root of this build's output. Otherwise this should be a key path through the output properties' JSON objects. For example if this was ["a", "b"], and the sub build emitted {"hello": 100}, then this build would show {"a": {"b": {"hello": 100}}}.
- timeout (None|int|float|datetime.timedelta): Same as the timeout parameter in the __call__ method.
- step_test_data (Callable[[], recipe_test_api.StepTestData]): Same as the step_test_data parameter in the __call__ method.
- cost (None|ResourceCost): Same as the cost parameter in the __call__ method.
Returns a step_data.StepData for the finished step. The final build proto object can be accessed via ret.step.sub_build. The build is guaranteed to be present (i.e. not None) with a terminal build status.
Raises StepFailure if the sub-build reports FAILURE status.
Raises InfraFailure if the sub-build reports INFRA_FAILURE or CANCELED status.
recipe_modules / swarming
DEPS: buildbucket, cas, cipd, context, json, path, properties, raw_io, step
class SwarmingApi(RecipeApi):
API for interacting with swarming.
The tool's source lives at http://go.chromium.org/luci/client/cmd/swarming.
This module will deploy the client to [CACHE]/swarming_client/; users should add this path to the named cache for their builder.
@property
— def bot_id(self):
Swarming bot ID executing this task.
— def collect(self, name, tasks, output_dir=None, task_output_stdout='json', timeout=None, eager=False, verbose=False):
Waits on a set of Swarming tasks.
Args:
- name (str): The name of the step.
- tasks (Iterable(str|TaskRequestMetadata)): A list of task IDs or metadata objects corresponding to tasks to wait for.
- output_dir (Path|None): Where to download the tasks' isolated outputs. If set to None, they will not be downloaded; else, a given task's outputs will be downloaded to output_dir//.
- task_output_stdout (str|Path|Iterable(str|Path)): Where to output each task's text output. If given an iterable, will output it into multiple locations. Supported values are 'none', 'json', 'console' or a Path. At most one output Path is allowed. Accepts 'all' as a legacy alias for ['json', 'console'].
- timeout (str|None): The duration for which to wait on the tasks to finish. If set to None, there will be no timeout; else, timeout follows the format described by https://golang.org/pkg/time/#ParseDuration.
- eager (bool): Whether to return as soon as the first task finishes, instead of waiting for all tasks to finish.
- verbose (bool): Whether to use verbose logs.
Returns: A list of TaskResult objects.
@property
— def current_server(self):
Swarming server executing this task.
— def ensure_client(self):
— def initialize(self):
— def list_bots(self, step_name, dimensions=None, fields=None):
List bots matching the given options.
Args:
- step_name (str): The name of the step.
- dimensions (None|Dict[str, str]): Select bots that match the given dimensions.
- fields (None|List[str]): Fields to include in the response. If not specified, all fields will be included.
Returns: A list of BotMetadata objects.
@contextlib.contextmanager
— def on_path(self):
This context manager ensures the go swarming client is available on $PATH.
Example:
with api.swarming.on_path():
  # do your steps which require the swarming binary on path
  ...
— def show_request(self, name, task):
Retrieve the TaskRequest for a Swarming task.
Args: name (str): The name of the step. task (str|TaskRequestMetadata): Task ID or metadata objects of the swarming task to be retrieved.
Returns: TaskRequest objects.
@property
— def task_id(self):
This task's Swarming ID.
— def task_request(self):
Creates a new TaskRequest object.
See documentation for TaskRequest/TaskSlice to see how to build this up into a full task.
Once your TaskRequest is complete, you can pass it to trigger in order to have it start running on the swarming server.
— def task_request_from_jsonish(self, json_d):
Creates a new TaskRequest object from a JSON-serializable dict.
The input argument should match the schema of the output of TaskRequest.to_jsonish().
— def trigger(self, step_name, requests, verbose=False, server=None):
Triggers a set of Swarming tasks.
Args:
- step_name (str): The name of the step.
- requests (seq[TaskRequest]): A sequence of task request objects representing the tasks we want to trigger.
- verbose (bool): Whether to use verbose logs.
- server (string): Address of the server to trigger the task on, e.g. https://chromium-swarm.appspot.com. If not set, the server the current task is running on is used.
Returns: A list of TaskRequestMetadata objects.
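A minimal sketch of the request/trigger/collect flow, loosely following the module's examples (the task name, pool, command and timeout are hypothetical):
request = (api.swarming.task_request().
           with_name('my-task'))
request = request.with_slice(0, request[0].
                             with_command(['./run_tests']).
                             with_dimensions(pool='example.pool'))
metadata = api.swarming.trigger('trigger tests', [request])
results = api.swarming.collect('collect tests', metadata, timeout='30m')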
recipe_modules / time
Allows mockable access to the current time.
— def exponential_retry(self, retries: int, delay: datetime.timedelta, condition: Callable[([Exception], bool)]=None):
Adds exponential retry to a function.
Decorator which retries the function with exponential backoff.
Each time the decorated function throws an exception, we sleep for some amount of time. We increase the amount of time exponentially to prevent cascading failures from overwhelming systems. We also add a jitter to avoid the thundering herd problem.
Example usage:
def RunSteps(api):
  @api.time.exponential_retry(5, datetime.timedelta(seconds=1))
  def test_retries():
    api.step('running', None)
    raise Exception()
  test_retries()
  # Executes 6 steps with 'running' as a common prefix of their step names.
When writing a recipe module whose method needs to be retried, you won't have access to the time module in the class body, but you can import a class-method decorator like:
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry
This decorator can be used on class methods or on functions (for example, functions in a recipe file).
NOTE: Your module/recipe MUST ALSO depend on "recipe_engine/time" in its DEPS.
NOTE: For non-class-method functions, the first parameter to those functions must be an api object, such as the one passed to RunSteps.
Example usage 1 (class method decorator):
from recipe_engine.recipe_api import RecipeApi
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry

# NOTE: Don't forget to put "recipe_engine/time" in the module DEPS.
class MyRecipeModule(RecipeApi):
  @exponential_retry(5, datetime.timedelta(seconds=1))
  def my_retriable_function(self, ...):
    self.m.step('running', None)
Example usage 2 (function with api as first arg):
from RECIPE_MODULES.recipe_engine.time.api import exponential_retry

# NOTE: Don't forget to put "recipe_engine/time" in DEPS.
@exponential_retry(5, datetime.timedelta(seconds=1))
def helper_function(api):
  api.step('running', None)

def RunSteps(api):
  helper_function(api)
— def ms_since_epoch(self):
Returns current timestamp as an int number of milliseconds since epoch.
— def sleep(self, secs: (float | int), with_step: (bool | None)=None, step_result: (step_data.StepData | None)=None):
Suspends execution for |secs| (float) seconds, waiting for GLOBAL_SHUTDOWN. Does nothing in testing.
Args:
- secs - The number of seconds to sleep.
- with_step - If True, emits a step to indicate to users that the recipe is sleeping (not just hanging). If None, then will default to True if sleeping for a long time (>60sec); this can be disabled by setting it explicitly to False. If the GLOBAL_SHUTDOWN event has already occurred, then a step will always be emitted in order to force raising an exception.
- step_result - Result of running a step. Should be None if with_step is True or None.
— def time(self):
Returns current timestamp as a float number of seconds since epoch.
— def timeout(self, seconds: ((float | int) | datetime.timedelta)=None):
Provides a context that times out after the given time.
Usage:
with api.time.timeout(datetime.timedelta(minutes=5)):
  ...
Look at the "deadline" section of https://chromium.googlesource.com/infra/luci/luci-py/+/HEAD/client/LUCI_CONTEXT.md to see how this works.
— def utcnow(self):
Returns current UTC time as a datetime.datetime.
recipe_modules / tricium
DEPS: buildbucket, cipd, context, file, findings, json, path, properties, proto, resultdb, step
API for Tricium analyzers to use.
This recipe module is intended to support different kinds of analyzer recipes, including:
- Recipes that wrap one or more legacy analyzers.
- Recipes that accumulate comments one by one.
- Recipes that wrap other tools and parse their output.
class TriciumApi(RecipeApi):
TriciumApi provides basic support for Tricium.
— def __init__(self, **kwargs):
Sets up the API.
Initializes an empty list of comments for use with add_comment and write_comments.
— def add_comment(self, category, message, path, start_line=0, end_line=0, start_char=0, end_char=0, suggestions=()):
Adds one comment to accumulate.
For semantics of start_line, start_char, end_line, end_char, see Gerrit doc https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html#comment-range
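A minimal sketch of accumulating one comment and emitting it (the category, message and path are hypothetical):
api.tricium.add_comment(
    'Lint/Naming',
    'Variable names should be snake_case.',
    'src/foo.cc',
    start_line=12, end_line=12)
api.tricium.write_comments()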
— def is_binary(self, path):
— def run_legacy(self, analyzers, input_base, affected_files, commit_message, emit=True):
Runs legacy analyzers.
This function internally accumulates the comments from the analyzers it
runs to the same global storage used by add_comment()
. By default it
emits comments from legacy analyzers to the tricium output property,
along with any comments previously created by calling add_comment()
directly, after running all the specified analyzers.
Args:
- analyzers (List(LegacyAnalyer)): Analyzers to run.
- input_base (Path): The Tricium input dir, generally a checkout base.
- affected_files (List(str)): Paths of files in the change, relative to input_base.
- commit_message (str): Commit message from Gerrit.
- emit (bool): Whether to write results to the tricium output property. If unset, the caller will be responsible for calling write_comments to emit the comments added by the legacy analyzers. This is useful for recipes that need to run a mixture of custom analyzers (using add_comment() to store comments) and legacy analyzers.
@staticmethod
— def validate_comment(comment):
Validates comment to comply with Tricium/Gerrit requirements.
Raise ValueError on the first detected problem.
— def write_comments(self, upload_findings=True):
Emit the results accumulated by add_comment and run_legacy.
recipe_modules / url
DEPS: context, json, path, raw_io, step
Methods for interacting with HTTP(s) URLs.
— def get_file(self, url, path, step_name=None, headers=None, transient_retry=True, strip_prefix=None, cert: (str | None)=None):
GETs data at the given URL and writes it to a file.
Args:
- url: URL to request.
- path (Path): the Path where the content will be written.
- step_name: optional step name, 'GET ' by default.
- headers: a {header_name: value} dictionary for HTTP headers.
- transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
- strip_prefix (str or None): If not None, this prefix must be present at the beginning of the response, and will be stripped from the resulting content (e.g., GERRIT_JSON_PREFIX).
- cert (str): Optional path to a CA_BUNDLE file or directory with certificates of trusted CAs. If provided, pinned to the given cert or certs.
Returns (UrlApi.Response): Response with "path" as its "output" value.
Raises:
- HTTPError, InfraHTTPError: if the request failed.
- ValueError: If the request was invalid.
— def get_json(self, url, step_name=None, headers=None, transient_retry=True, strip_prefix=None, log=False, default_test_data=None, cert: (str | None)=None):
GETs data at the given URL and parses it as JSON.
Args:
- url: URL to request.
- step_name: optional step name, 'GET ' by default.
- headers: a {header_name: value} dictionary for HTTP headers.
- transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
- strip_prefix (str or None): If not None, this prefix must be present at the beginning of the response, and will be stripped from the resulting content (e.g., GERRIT_JSON_PREFIX).
- log (bool): If True, emit the JSON content as a log.
- default_test_data (jsonish): If provided, use this as the unmarshalled JSON result when testing if no overriding data is available.
- cert (str): Optional path to a CA_BUNDLE file or directory with certificates of trusted CAs. If provided, pinned to the given cert or certs.
Returns (UrlApi.Response): Response with the JSON as its "output" value.
Raises:
- HTTPError, InfraHTTPError: if the request failed.
- ValueError: If the request was invalid.
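A minimal sketch of fetching prefix-guarded JSON (the URL is hypothetical; Gerrit's XSSI prefix is written out literally here):
response = api.url.get_json(
    'https://example.com/data.json',
    strip_prefix=")]}'",
    log=True)
data = response.output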
— def get_raw(self, url, step_name=None, headers=None, transient_retry=True, default_test_data=None, cert: (str | None)=None):
GETs data at the given URL and returns the raw content.
Args:
- url: URL to request.
- step_name: optional step name, 'GET ' by default.
- headers: a {header_name: value} dictionary for HTTP headers.
- transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
- default_test_data (str): If provided, use this as the text output when testing if no overriding data is available.
- cert (str): Optional path to a CA_BUNDLE file or directory with certificates of trusted CAs. If provided, pinned to the given cert or certs.
Returns (UrlApi.Response): Response with the content as its output value.
Raises:
- HTTPError, InfraHTTPError: if the request failed.
- ValueError: If the request was invalid.
— def get_text(self, url, step_name=None, headers=None, transient_retry=True, default_test_data=None, cert: (str | None)=None):
GETs data at the given URL and returns the content as text.
Args:
- url: URL to request.
- step_name: optional step name, 'GET ' by default.
- headers: a {header_name: value} dictionary for HTTP headers.
- transient_retry (bool or int): Determines how transient HTTP errors (>500) will be retried. If True (default), errors will be retried up to 10 times. If False, no transient retries will occur. If an integer is supplied, this is the number of transient retries to perform. All retries have exponential backoff applied.
- default_test_data (str): If provided, use this as the text output when testing if no overriding data is available.
- cert (str): Optional path to a CA_BUNDLE file or directory with certificates of trusted CAs. If provided, pinned to the given cert or certs.
Returns (UrlApi.Response): Response with the content as its output value.
Raises:
- HTTPError, InfraHTTPError: if the request failed.
- ValueError: If the request was invalid.
— def join(self, *parts):
Constructs a URL path from composite parts.
Args:
- parts (str...): Strings to concatenate. Any leading or trailing slashes will be stripped from intermediate strings to ensure that they join together. Trailing slashes will not be stripped from the last part.
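Example (a minimal sketch; interior slashes are normalized so the parts join cleanly):
url = api.url.join('https://example.com/', '/api/', 'v1', 'items/')
# -> 'https://example.com/api/v1/items/'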
— def validate_url(self, v):
Validates that "v" is a valid URL.
A valid URL has a scheme and netloc, and must begin with HTTP or HTTPS.
Args:
- v (str): The URL to validate.
Returns (bool): True if the URL is considered secure, False if not.
Raises: ValueError: if "v" is not valid.
recipe_modules / uuid
Allows test-repeatable access to a random UUID.
— def random(self):
Returns a random UUID string.
recipe_modules / version
Thin API for parsing semver strings into comparable object.
class VersionApi(RecipeApi):
@staticmethod
— def parse(version):
Parse implements PEP 440 parsing for semvers.
If 'version' is strictly parseable as PEP 440, this returns a Version object. Otherwise it does a 'loose' parse, just extracting numerals from 'version'.
You can read more about how this works at: https://setuptools.readthedocs.io/en/latest/pkg_resources.html#parsing-utilities (for strict parsing) and https://github.com/di/packaging_legacy (for the fallback behavior).
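A sketch of both parsing modes (the version strings are illustrative):

```python
DEPS = ['recipe_engine/version']

def RunSteps(api):
  # Strict PEP 440 parse: versions compare semantically, not lexically.
  assert api.version.parse('1.10.0') > api.version.parse('1.9.2')
  # A string that is not valid PEP 440 falls back to a 'loose' parse that
  # just extracts the numerals.
  loose = api.version.parse('r1234-beta')
```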
recipe_modules / warning
Allows recipe modules to issue warnings in simulation test.
class WarningApi(RecipeApi):
— def issue(self, name):
Issues an execution warning.
'name' MAY either be a fully qualified "repo_name/WARNING_NAME" or a short "WARNING_NAME". If it's a short name, then the "repo_name" will be determined from the location of the file issuing the warning (i.e. if the issue() comes from a file in repo_X, then "WARNING_NAME" will be transformed to "repo_X/WARNING_NAME").
It is recommended to use the short name if the warning is defined in the same repo as the issue() call.
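A sketch of issuing a warning from inside a recipe module's implementation (the method and warning names are hypothetical, and the warning must already be declared in the repo's warning definitions; per the note in warning:tests/fakes below, issue() can only be called from recipe_modules, not recipes):

```python
# Inside a hypothetical recipe module's api.py; the module's DEPS would
# include 'recipe_engine/warning'.
def deprecated_helper(self):
  # Short name: resolved to '<this_repo>/MY_DEPRECATED_HELPER'.
  self.m.warning.issue('MY_DEPRECATED_HELPER')
```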
recipes / archive:examples/full
DEPS: archive, context, file, json, path, platform, raw_io, step
— def RunSteps(api):
recipes / assertions:tests/assert-raises
DEPS: assertions, properties, step
— def RunSteps(api):
recipes / assertions:tests/assert_count_equal
— def RunSteps(api):
recipes / assertions:tests/assertions
DEPS: assertions, properties, step
— def RunSteps(api):
recipes / assertions:tests/attribute_error
DEPS: assertions, properties, step
— def RunSteps(api):
recipes / assertions:tests/long_message
— def RunSteps(api):
recipes / assertions:tests/max_diff
DEPS: assertions, properties, step
— def RunSteps(api):
recipes / bcid_reporter:examples/usage
— def RunSteps(api):
recipes / bcid_verifier:tests/test-verify
DEPS: assertions, bcid_verifier, properties, step
— def RunSteps(api):
recipes / buildbucket:examples/full
DEPS: buildbucket, json, platform, properties, raw_io, runtime, step
This file is a recipe demonstrating the buildbucket recipe module.
— def RunSteps(api):
recipes / buildbucket:run/multi
DEPS: buildbucket, properties, swarming
Launches multiple builds at the same revision.
— def RunSteps(api, build_requests, collect_builds):
recipes / buildbucket:tests/add_build_tags
— def RunSteps(api):
recipes / buildbucket:tests/add_step_tags
— def RunSteps(api):
recipes / buildbucket:tests/backend
DEPS: assertions, buildbucket, properties, step
— def RunSteps(api):
recipes / buildbucket:tests/backend_utilities_fail
— def RunSteps(api):
recipes / buildbucket:tests/build
DEPS: assertions, buildbucket, properties, step
— def RunSteps(api):
recipes / buildbucket:tests/cancel
— def RunSteps(api):
recipes / buildbucket:tests/collect
DEPS: buildbucket, properties, step
— def RunSteps(api):
recipes / buildbucket:tests/get
— def RunSteps(api):
recipes / buildbucket:tests/list_builders
— def RunSteps(api):
recipes / buildbucket:tests/output_commit
DEPS: buildbucket, platform, properties, raw_io, step
This recipe tests the buildbucket.set_output_gitiles_commit function.
— def RunSteps(api):
recipes / buildbucket:tests/schedule
DEPS: buildbucket, json, properties, runtime, step
— def RunSteps(api):
recipes / buildbucket:tests/search
DEPS: buildbucket, properties, raw_io, runtime, step
— def RunSteps(api, props):
recipes / cas:examples/full
DEPS: cas, file, path, properties, runtime, step
— def RunSteps(api):
recipes / cas_input:examples/full
DEPS: cas_input, path, properties
— def RunSteps(api):
recipes / change_verifier:tests/match_config
DEPS: buildbucket, change_verifier, step
— def RunSteps(api):
recipes / change_verifier:tests/search
— def RunSteps(api):
— def make_runs(count=1):
Generates response Runs for a test.
recipes / cipd:examples/full
DEPS: buildbucket, cipd, json, path, platform, properties, step
— def RunSteps(api, use_pkg, pkg_files, pkg_dirs, pkg_vars, ver_files, install_mode, refs, tags, metadata, max_threads):
recipes / cipd:tests/platform
— def RunSteps(api):
recipes / commit_position:examples/full
— def RunSteps(api):
recipes / context:examples/full
DEPS: context, path, raw_io, step, time
— def RunSteps(api):
recipes / context:tests/cwd
— def RunSteps(api):
recipes / context:tests/env
DEPS: context, path, raw_io, step
— def RunSteps(api):
recipes / context:tests/greenlet
— def RunSteps(api):
recipes / context:tests/infra_step
— def RunSteps(api):
recipes / context:tests/luci_context
DEPS: assertions, context, path, step
— def RunSteps(api):
recipes / cq:examples/ordered_cls
DEPS: assertions, buildbucket, cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:examples/trigger_child_builds
DEPS: assertions, buildbucket, cq, json, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/cl_group_key
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/do_not_retry
DEPS: buildbucket, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/experimental
DEPS: assertions, cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/inactive
DEPS: assertions, cq, properties
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/mode_of_run
DEPS: cq, properties, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/owner_is_googler
DEPS: assertions, buildbucket, cq, properties
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/reuse
DEPS: assertions, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cq:tests/triggered_build_ids
DEPS: buildbucket, cq, step
@recipe_api.ignore_warnings('recipe_engine/CQ_MODULE_DEPRECATED')
— def RunSteps(api):
recipes / cv:examples/ordered_cls
DEPS: assertions, buildbucket, cv, properties, step
— def RunSteps(api):
recipes / cv:examples/trigger_child_builds
DEPS: assertions, buildbucket, cv, json, properties, step
— def RunSteps(api):
recipes / cv:tests/attempt_key
— def RunSteps(api):
recipes / cv:tests/cl_group_key
— def RunSteps(api):
recipes / cv:tests/cl_owner
— def RunSteps(api):
recipes / cv:tests/do_not_retry
DEPS: buildbucket, cv, step
— def RunSteps(api):
recipes / cv:tests/experimental
DEPS: assertions, cv, properties, step
— def RunSteps(api):
recipes / cv:tests/inactive
DEPS: assertions, cv, properties
— def RunSteps(api):
recipes / cv:tests/mode_of_run
DEPS: cv, properties, step
— def RunSteps(api):
recipes / cv:tests/owner_is_googler
DEPS: assertions, buildbucket, cv, properties
— def RunSteps(api):
recipes / cv:tests/reuse
DEPS: assertions, cv, step
— def RunSteps(api):
recipes / cv:tests/triggered_build_ids
DEPS: buildbucket, cv, step
— def RunSteps(api):
recipes / defer:tests/collect
DEPS: context, defer, properties, step
— def RunSteps(api, props):
recipes / defer:tests/context
DEPS: context, defer, properties, step
— def RunSteps(api, props):
recipes / defer:tests/non_deferred
DEPS: context, defer, properties, step
— def RunSteps(api, props):
recipes / defer:tests/result
DEPS: context, defer, properties, step
— def RunSteps(api, props):
recipes / defer:tests/suppressed
DEPS: defer, properties, step
— def RunSteps(api: recipe_api.RecipeApi, props: properties_pb2.SuppressedInputProps):
recipes / engine_tests/bad_subprocess
Tests that daemons that hang on to STDOUT can't cause the engine to hang.
— def RunSteps(api):
recipes / engine_tests/comprehensive_ui
A fast-running recipe which comprehensively covers all StepPresentation features available in the recipe engine.
— def RunSteps(api):
— def named_step(api, name):
recipes / engine_tests/config_operations
Tests that recipes can modify configuration options in various ways.
— def BaseConfig(**_kwargs):
— def DumpRecipeEngineTestConfig(api, config):
— def RunSteps(api):
@config_ctx()
— def test1(c):
@config_ctx(includes=['test2a'])
— def test2(c):
@config_ctx()
— def test2a(c):
recipes / engine_tests/early_termination
DEPS: file, futures, path, platform, step
Simple recipe which runs a bunch of subprocesses which react to early termination in different ways.
— def RunSteps(api, props):
recipes / engine_tests/expect_exception
Tests that tests with a single exception are handled correctly.
— def RunSteps(api):
— def my_function():
recipes / engine_tests/expect_exceptions
Tests that tests with multiple exceptions are handled correctly.
— def RunSteps(api):
— def my_function():
recipes / engine_tests/failure_results
Tests that run_steps is handling recipe failures correctly.
— def RunSteps(api):
recipes / engine_tests/functools_partial
Engine shouldn't explode when step_test_data gets functools.partial.
This is a regression test for a bug caused by this revision: http://src.chromium.org/viewvc/chrome?revision=298072&view=revision
When this recipe is run (by run_test.py), the _print_step code is exercised.
— def RunSteps(api):
recipes / engine_tests/incorrect_recipe_result
DEPS: json, properties, step
Tests that engine.py can handle unknown recipe results.
— def RunSteps(api, props):
recipes / engine_tests/long_sleep
DEPS: futures, properties, step
Simple recipe which sleeps in a subprocess forever to facilitate early termination tests.
— def RunSteps(api, props):
recipes / engine_tests/missing_start_dir
Tests that deleting the current working directory doesn't immediately fail.
— def RunSteps(api):
recipes / engine_tests/module_injection_site
This test serves to demonstrate that the ModuleInjectionSite object on recipe modules (i.e. the .m) also contains a reference to the module which owns it.
This was implemented to aid in refactoring some recipes (crbug.com/782142).
— def RunSteps(api):
recipes / engine_tests/multi_test_data
Tests that step_data can accept multiple specs at once.
— def RunSteps(api):
recipes / engine_tests/multiple_placeholders
DEPS: assertions, json, step
Tests error checking around multiple placeholders in a single step.
— def RunSteps(api):
recipes / engine_tests/nonexistent_command
— def RunSteps(api):
recipes / engine_tests/placeholder_exception
Tests that placeholders can't wreck the world by exhausting the step stack.
— def RunSteps(api):
recipes / engine_tests/proto_output_properties
Tests that output properties can be a proto message.
— def RunSteps(api):
recipes / engine_tests/proto_properties
— def RunSteps(api, properties, env_props):
recipes / engine_tests/recipe_paths
Tests that recipes have access to names, resources and their repo.
— def RunSteps(api):
recipes / engine_tests/recipe_test_data
Tests that we can pass data via api.recipe_test_data.
— def RunSteps(api):
recipes / engine_tests/sort_properties
Tests that step presentation properties can be ordered.
— def RunSteps(api):
recipes / engine_tests/undeclared_method
DEPS: cipd, properties, step
— def RunSteps(api, from_recipe, attribute, module):
recipes / engine_tests/unicode
— def RunSteps(api):
recipes / engine_tests/whitelist_steps
DEPS: context, properties, step
Tests that step_data can accept multiple specs at once.
— def RunSteps(api, fakeit):
recipes / file:examples/chmod
— def RunSteps(api):
recipes / file:examples/compute_hash
DEPS: assertions, file, path
— def RunSteps(api):
recipes / file:examples/copy
— def RunSteps(api):
recipes / file:examples/copytree
— def RunSteps(api):
recipes / file:examples/error
— def RunSteps(api):
recipes / file:examples/file_hash
DEPS: assertions, file, path
— def RunSteps(api):
recipes / file:examples/flatten_single_directories
— def RunSteps(api):
recipes / file:examples/glob
— def RunSteps(api):
recipes / file:examples/handle_json_file
— def RunSteps(api):
recipes / file:examples/listdir
— def RunSteps(api):
recipes / file:examples/raw_copy
— def RunSteps(api):
recipes / file:examples/read_write_proto
— def RunSteps(api):
recipes / file:examples/symlink
— def RunSteps(api):
recipes / file:examples/truncate
— def RunSteps(api):
recipes / findings:tests/infer_source
DEPS: assertions, buildbucket, findings, properties
— def RunSteps(api, expected_loc):
recipes / findings:tests/upload_findings
DEPS: buildbucket, findings, properties
— def RunSteps(api, props):
recipes / futures:examples/background_helper
DEPS: futures, json, path, raw_io, step
— def RunSteps(api):
— def manage_helper(api, chn):
@contextmanager
— def run_helper(api):
Runs the background helper.
Yields control once helper is ready. Kills helper once leaving the context manager.
This is an example of what your recipe module code would look like. Note that we don't pass the channel to the 'user' code (i.e. RunSteps).
recipes / futures:examples/extreme_namespaces
DEPS: context, futures, path, step
— def Level1(api, i):
— def Level2(api, i):
— def RunSteps(api):
recipes / futures:examples/fan_out_in
— def RunSteps(api):
recipes / futures:examples/lazy_fan_out_in
— def RunSteps(api):
recipes / futures:examples/lottasteps
DEPS: futures, properties, step
This tests the engine's ability to handle many simultaneously-started steps.
Prior to this, logdog butler and the recipe engine would run out of file handles, because every spawn_immediate would immediately generate all log handles for the step, instead of waiting for the step's cost to be available.
— def RunSteps(api, props):
recipes / futures:examples/metadata
This tests metadata features of the Future object.
— def RunSteps(api):
recipes / futures:examples/result
— def RunSteps(api):
recipes / futures:examples/semaphore
— def RunSteps(api):
— def worker(api, sem, i, N):
recipes / generator_script:examples/full
DEPS: generator_script, json, path, properties, step
— def RunSteps(api, script_name):
recipes / golang:examples/full
— def RunSteps(api):
recipes / json:examples/full
DEPS: json, path, properties, raw_io, step
@recipe_api.ignore_warnings('recipe_engine/JSON_READ_DEPRECATED')
— def RunSteps(api):
recipes / json:tests/add_json_log
— def RunSteps(api):
recipes / json:tests/unsorted
Test to assert that sort_keys=False preserves insertion order.
— def RunSteps(api):
recipes / led:tests/full
DEPS: buildbucket, led, properties, proto, step
— def RunSteps(api, get_cmd, child_properties, sloppy_child_properties, do_bogus_edits):
recipes / led:tests/led_real_build
DEPS: buildbucket, led, properties, proto, step
— def RunSteps(api, get_cmd):
recipes / led:tests/no_exist
— def RunSteps(api):
recipes / led:tests/trigger_build
DEPS: led, properties, step
— def RunSteps(api):
recipes / led:tests/trigger_build_with_payload
DEPS: led, properties, step
— def RunSteps(api):
recipes / legacy_annotation:examples/full
DEPS: legacy_annotation, raw_io, step
— def RunSteps(api):
recipes / luci_analysis:tests/query_failure_rate_test
DEPS: assertions, json, luci_analysis, properties, raw_io
Tests for query_failure_rate.
— def RunSteps(api, input_list):
recipes / luci_analysis:tests/query_stability_test
DEPS: assertions, json, luci_analysis, properties, raw_io
Tests for query_stability.
— def RunSteps(api, input_list):
DEPS: json, luci_analysis, raw_io, step
Tests for generate_analysis.
— def RunSteps(api):
DEPS: json, luci_analysis, raw_io, step
Tests for generate_stability_response.
— def RunSteps(api):
recipes / luci_analysis:tests/test_history_query
DEPS: json, luci_analysis, raw_io, step
Tests for query_failure_rate.
— def RunSteps(api):
recipes / luci_analysis:tests/test_lookup_bug
DEPS: assertions, json, luci_analysis, step
Tests for lookup_bug.
— def RunSteps(api):
recipes / luci_analysis:tests/test_query_cluster_failures
DEPS: assertions, luci_analysis, step
Tests for query_cluster_failures.
— def RunSteps(api):
recipes / luci_analysis:tests/test_query_variants
Tests for query_variants.
— def RunSteps(api):
recipes / luci_config:tests/full
DEPS: buildbucket, luci_config, path
— def RunSteps(api):
recipes / milo:examples/full
— def RunSteps(api):
recipes / nodejs:examples/full
— def RunSteps(api):
recipes / path:examples/full
DEPS: json, path, platform, properties, step
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
recipes / path:tests/cast_to_path
— def RunSteps(api):
recipes / path:tests/dynamic_paths
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
recipes / path:tests/exists
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
recipes / path:tests/test_api_legacy
Test to cover legacy aspects of PathTestApi.
@recipe_api.ignore_warnings('recipe_engine/CHECKOUT_DIR_DEPRECATED')
— def RunSteps(api):
recipes / placeholder
DEPS: buildbucket, properties, step, swarming, time
— def RunSteps(api, properties):
recipes / platform:examples/full
— def RunSteps(api):
recipes / properties:examples/full
DEPS: json, properties, step
— def RunSteps(api, props, env_props):
recipes / proto:tests/encode_decode
DEPS: assertions, path, proto, step
— def RunSteps(api):
recipes / proto:tests/placeholders
— def RunSteps(api):
recipes / random:tests/full
— def RunSteps(api):
recipes / raw_io:examples/full
DEPS: path, platform, properties, raw_io, step
— def RunSteps(api):
recipes / raw_io:tests/output_mismatch
DEPS: assertions, raw_io, step
— def RunSteps(api):
recipes / resultdb:examples/exonerate
DEPS: context, json, properties, resultdb, step
— def RunSteps(api):
— def RunSteps(api):
— def RunSteps(api):
recipes / resultdb:examples/include
— def RunSteps(api):
recipes / resultdb:examples/query
DEPS: buildbucket, resultdb, step
— def RunSteps(api):
— def RunSteps(api, invocation, baseline):
— def RunSteps(api):
recipes / resultdb:examples/query_test_results
— def RunSteps(api, invocation, test_id_regexp):
recipes / resultdb:examples/query_test_variants
— def RunSteps(api, invocation, test_variant_status, field_mask_paths):
recipes / resultdb:examples/resultsink
— def RunSteps(api):
recipes / resultdb:examples/test_presentation
— def RunSteps(api):
— def RunSteps(api):
recipes / resultdb:examples/update_invocation
— def RunSteps(api, invocation, gitiles_commit, gerrit_changes):
— def RunSteps(api):
recipes / runtime:tests/full
— def RunSteps(api):
recipes / scheduler:examples/emit_triggers
DEPS: buildbucket, json, runtime, scheduler, time
This file is a recipe demonstrating emitting triggers to LUCI Scheduler.
— def RunSteps(api):
recipes / scheduler:examples/info
This file is a recipe demonstrating reading/mocking scheduler host.
— def RunSteps(api):
recipes / scheduler:examples/triggers
This file is a recipe demonstrating reading triggers of the current build.
— def RunSteps(api):
recipes / service_account:examples/full
DEPS: path, platform, properties, raw_io, service_account
— def RunSteps(api, key_path, scopes):
recipes / step:examples/full
DEPS: buildbucket, context, json, path, properties, step
— def RunSteps(api, access_invalid_data, access_deep_invalid_data, assign_extra_junk, timeout):
recipes / step:tests/active_result
— def RunSteps(api):
recipes / step:tests/drop_expectation
— def RunSteps(api):
recipes / step:tests/empty
— def RunSteps(api):
recipes / step:tests/inject_paths
DEPS: context, path, properties, step
— def RunSteps(api):
recipes / step:tests/nested
— def RunSteps(api):
recipes / step:tests/raise_on_failure
— def RunSteps(api, infra_step, set_status_to_exception):
recipes / step:tests/stdio
— def RunSteps(api):
recipes / step:tests/step_call_args
— def RunSteps(api):
recipes / step:tests/step_cost
— def RunSteps(api):
recipes / step:tests/sub_build
DEPS: assertions, context, json, path, properties, step
— def RunSteps(api, props):
recipes / step:tests/timeout
— def RunSteps(api, timeout):
recipes / swarming:examples/full
DEPS: buildbucket, cipd, json, path, properties, step, swarming
— def RunSteps(api):
recipes / swarming:examples/this_task
— def RunSteps(api):
recipes / swarming:tests/collect_errors
DEPS: assertions, path, swarming
— def RunSteps(api):
recipes / swarming:tests/copy
— def RunSteps(api):
recipes / swarming:tests/list_bots
— def RunSteps(api):
recipes / swarming:tests/realms
DEPS: assertions, buildbucket, context, step, swarming
— def RunSteps(api):
recipes / swarming:tests/task_request_from_jsonish
— def RunSteps(api):
recipes / swarming:tests/task_result_from_jsonish
DEPS: assertions, path, swarming
— def RunSteps(api):
recipes / time:examples/full
DEPS: assertions, properties, runtime, step, time
— def RunSteps(api):
@exponential_retry(5, datetime.timedelta(seconds=1))
— def helper_fn_that_needs_retries(api):
recipes / time:examples/jitter
DEPS: assertions, properties, step, time
— def RunSteps(api, properties):
recipes / tricium:examples/add_comment
DEPS: buildbucket, properties, proto, tricium
— def CreateExpectedFinding(api, input_comment):
— def RunSteps(api, trigger_type_error):
recipes / tricium:examples/wrapper
DEPS: buildbucket, file, path, tricium
An example of a recipe wrapping legacy analyzers.
— def RunSteps(api):
recipes / tricium:tests/add_comment_validation
DEPS: buildbucket, properties, tricium
— def RunSteps(api, case):
recipes / tricium:tests/enforce_comments_num_limit
DEPS: assertions, buildbucket, properties, proto, tricium
— def RunSteps(api, props):
recipes / url:examples/full
DEPS: context, path, step, url
— def RunSteps(api):
recipes / url:tests/join
— def RunSteps(api):
recipes / url:tests/validate_url
DEPS: properties, step, url
— def RunSteps(api):
recipes / uuid:examples/full
— def RunSteps(api):
recipes / version:examples/full
— def RunSteps(api):
recipes / warning:tests/fakes
This is a fake recipe to trick the simulation into believing that this module has tests. The actual test for this module is done via unit test, because the issue() method can only be used from recipe_modules, not recipes.
— def RunSteps(api):