
Add support for autotune limits in service_shard_update #3830

Merged: 6 commits into master on Oct 14, 2024

Conversation

karmeleon (Contributor)

Added to PaaSTA at large in #3744. Also pulls in a commit from #3829 to make min/max instance values optional, and adds a new --dry-run param that skips committing and pushing changes to yelpsoa-configs.

Confirmed that the new kwargs work as expected, generating valid configs. If anyone knows a way to do this that doesn't involve a mess of nested if statements, I'm all ears lol
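For illustration, one hedged way to avoid the nested if statements: build the full nested limits mapping up front, then prune anything left unset. All resource names, keys, and values below are illustrative, not taken from the PR.

# Sketch only: build the nested mapping, then drop unset (None) limits
# and any resource with no limits left.
raw_limits = {
    "cpus": {"min": 0.5, "max": None},
    "mem": {"min": None, "max": 4096},
}

limit_config = {
    resource: {key: value for key, value in limits.items() if value is not None}
    for resource, limits in raw_limits.items()
}
limit_config = {resource: limits for resource, limits in limit_config.items() if limits}

print(limit_config)  # {'cpus': {'min': 0.5}, 'mem': {'max': 4096}}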

nemacysts (Member) left a comment


generally lgtm - we'll probably want to back out the min/max instance changes + fix the autotune limits typo, but otherwise i don't see any concerns here

paasta_tools/contrib/service_shard_update.py (outdated review thread, resolved)
Comment on lines +42 to +48
parser.add_argument(
    "-d",
    "--dry-run",
    help="Do not commit changes to git",
    action="store_true",
    dest="dry_run",
)
Member


++

(i'm also a fan of --for-real flags so that dry-run is the default, but that would be a little more annoying to roll out here)
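As a rough sketch of that --for-real pattern (the flag name here is hypothetical, not something proposed in the PR): with argparse, store_false on the same dest makes dry-run the default and requires the flag to actually push.

import argparse

parser = argparse.ArgumentParser()
# Dry run by default; changes are only committed/pushed when --for-real is passed.
parser.add_argument(
    "--for-real",
    help="Actually commit and push changes (default is a dry run)",
    action="store_false",
    dest="dry_run",
)

print(parser.parse_args([]).dry_run)              # True  (dry run by default)
print(parser.parse_args(["--for-real"]).dry_run)  # False (really commit)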

paasta_tools/contrib/service_shard_update.py (outdated review thread, resolved)
karmeleon requested a review from a team as a code owner on October 11, 2024 at 20:46
karmeleon (Contributor, Author) commented on Oct 11, 2024

@nemacysts sorry for the 6 months of suspense! kinda forgot about this PR until we ran into some super anemic ssr shards with 0.1 cpus causing hella timeouts on stage lol

nemacysts (Member) left a comment


lgtm - unless you'd like to add tests here, i'm assuming that you ran this locally and got the expected output :)

(and in any case, we'd probably wanna wait until monday to release this anyway :p)

Comment on lines +296 to +297
for resource in list(limit_config):
    for key in list(limit_config[resource]):
Member


(these list() calls aren't necessary, but i'm assuming they were helpful in some way while you were debugging, and converting from a view to a list shouldn't really impact perf in a meaningful way for this script)

karmeleon (Contributor, Author) commented on Oct 11, 2024

unfortunately they are necessary under py3 :( without them i get this:

Traceback (most recent call last):
  File "paasta_tools/contrib/service_shard_update.py", line 350, in <module>
    main(args)
  File "paasta_tools/contrib/service_shard_update.py", line 297, in main
    for key in limit_config[resource]:
RuntimeError: dictionary changed size during iteration

Member


oh d'oh - i'm dumb: we're mutating the dict

i blame friday-brain :p
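For context, a minimal reproduction of the failure mode discussed above, and the list() copy that avoids it (the dict contents are illustrative, not taken from the PR):

limit_config = {"cpus": {"min": None, "max": 2}, "mem": {"min": 512}}

# Deleting keys while iterating the live dict view raises
# "RuntimeError: dictionary changed size during iteration" on Python 3:
#
#   for key in limit_config["cpus"]:
#       if limit_config["cpus"][key] is None:
#           del limit_config["cpus"][key]

# Copying the keys with list() first makes the deletion safe.
for resource in list(limit_config):
    for key in list(limit_config[resource]):
        if limit_config[resource][key] is None:
            del limit_config[resource][key]

print(limit_config)  # {'cpus': {'max': 2}, 'mem': {'min': 512}}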

karmeleon (Contributor, Author)

yep, got the expected output :) will merge this next week

karmeleon merged commit 762ebc8 into master on Oct 14, 2024
10 checks passed
karmeleon deleted the add_autotune_limits_to_service_shard_update branch on October 14, 2024 at 16:28