[SC-4053] DataDog metric for historic app dependency feature usage #35445
Conversation
Hey @Charl1996
I'm not 100% sure how the current solution will perform at scale, so planning to check out what happens on staging.
I agree that this won't work well at scale.
Can we revisit this to only count how many apps have the feature enabled once a day (rather than every minute)? That doesn't necessarily give us the count of how many apps dropped the feature, but it does give us the trend in feature usage.
Additionally,
I think we should push back on ignoring test apps, since that would require loading app docs, which isn't going to work at scale. Otherwise we would need to update the view to also emit "is_test" in the corresponding map.js
file, which is neither low effort nor worth it just to ignore test apps on a production environment.
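For reference, a minimal sketch of the once-a-day approach suggested above, assuming the set of apps with the feature enabled is gathered elsewhere without loading app docs; the function name, metric name, and the use of the datadog statsd client are illustrative assumptions, not the code in this PR:
from datadog import statsd  # or the project's own metrics wrapper

def report_app_dependency_usage(app_ids_with_dependencies):
    # Sketch only: reports a single daily gauge with the number of apps
    # that currently have the dependencies feature enabled.
    count = len(set(app_ids_with_dependencies))
    statsd.gauge('commcare.app_dependencies.enabled_count', count)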
'app_manager/saved_app',
startkey=[domain, app_id, {}],
endkey=[domain, app_id],
descending=True,
reduce=False,
- include_docs=False,
+ include_docs=True,
Setting include_docs to True
will magnify what this returns many times over, so it should be done with extreme caution.
We should keep it False by default and only set it to True when needed.
it should be done with extreme caution.
Noted! I'll look into updating the map.js file, then we can keep this False
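For context, this is roughly the shape of the view query under discussion, with include_docs left as an explicit opt-in. The wrapper function and its defaults are assumptions for illustration; only the view name and key structure come from the diff above:
def iter_saved_app_rows(db, domain, app_id, include_docs=False):
    # Query the app_manager/saved_app view newest-first. Keeping
    # include_docs=False returns only the view rows, which is far cheaper
    # than fetching every build document.
    return db.view(
        'app_manager/saved_app',
        startkey=[domain, app_id, {}],
        endkey=[domain, app_id],
        descending=True,
        reduce=False,
        include_docs=include_docs,
    )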
This function checks whether the app_dependencies feature was used in the past but is
no longer being used.
"""
def get_app_dependencies(build):
nit: just a minor suggestion on how this could be implemented, since we don't necessarily need the list of dependencies:
def has_app_dependencies(build):
    return bool(build.get('profile', {}).get('features', {}).get('dependencies'))

if has_app_dependencies(app_builds[0]):
    return False
if not app_builds:
    return False

if get_app_dependencies(app_builds[0]):
If no new version is made, will this keep reporting the same app every day in the count as one where the dependencies were disabled?
Not exactly. If no new version is made and the latest version has it enabled, it will be reported as not having disabled it.
Iterating over all domains, apps, and historic builds is quite expensive. Why is this going in datadog and running every hour? This could be implemented as a management command that would gather all historic data and be run as needed.
Confirmed that it can run every day, so will update. Even so...
I'll check if having a management command do this might be feasible.
@Charl1996 You might have a look at AppMigrationCommandBase as a starting point. It's meant to modify apps, but you could extract the structure (which iterates over all domains and apps, with progress tracking and options for including builds and linked apps) into a superclass and then use that as a base for a command that would gather this metric.
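For illustration, here is the bare shape such a command could take as a standard Django management command. The iteration helpers below are hypothetical placeholders, and the real version would reuse the domain/app iteration and progress tracking from AppMigrationCommandBase:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "One-off report of apps that used the dependencies feature historically but no longer do."

    def handle(self, *args, **options):
        # iter_all_domains, iter_app_ids and app_dropped_dependencies are
        # hypothetical helpers standing in for the real iteration logic.
        for domain in iter_all_domains():
            for app_id in iter_app_ids(domain):
                if app_dropped_dependencies(domain, app_id):
                    self.stdout.write(f"{domain}/{app_id}")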
Please note, this PR might close in the near future as we're rethinking the approach to tracking feature usage in DataDog.
Thanks for the reconsideration here @Charl1996
Closed in favour of #35473
Product Description
No visible change on HQ; this simply sends a metric through to DataDog.
Technical Summary
Ticket
Note: This PR is one of two, since there are two different metrics we need to track on DataDog. I decided to split the implementation into two separate PRs to keep each one short. They're also not dependent on one another.
This PR collects and reports a new metric to DataDog. The metric is aimed at giving more insight into app dependency feature usage and specifically tracks the following:
Applications which had the app dependency feature enabled for any build in the past but no longer have it enabled (roughly as sketched below).
I'm not 100% sure how the current solution will perform at scale, so planning to check out what happens on staging.
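In rough terms, the check amounts to something like the sketch below, where app_builds is assumed to be the app's builds ordered newest-first and has_app_dependencies is the helper suggested in the review above; the names are placeholders, not the exact code in this PR:
def has_app_dependencies(build):
    return bool(build.get('profile', {}).get('features', {}).get('dependencies'))

def app_dropped_dependencies(app_builds):
    # No builds at all means there is nothing to report.
    if not app_builds:
        return False
    # If the latest build still has dependencies, the feature was not dropped.
    if has_app_dependencies(app_builds[0]):
        return False
    # Otherwise the feature counts as dropped only if some earlier build had it.
    return any(has_app_dependencies(build) for build in app_builds[1:])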
Safety Assurance
Safety story
Local testing was done. Planning to test on staging as well to see how the implementation behaves, since we're iterating over all domains' applications and, for each application, iterating over most (if not all) of the app builds. This happens every hour.
Automated test coverage
Added some unit tests
QA Plan
No formal QA planned.
Rollback instructions
Labels & Review