test turbo server #78
base: main
Conversation
Codecov Report

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main      #78      +/-   ##
==========================================
+ Coverage   26.09%   28.25%   +2.15%
==========================================
  Files          19       24       +5
  Lines        1667     1961     +294
  Branches      331      381      +50
==========================================
+ Hits          435      554     +119
- Misses       1232     1406     +174
- Partials        0        1       +1
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@mariolenz I'm still working on this (albeit locally) and I think I'm at a bit of a stopping point. There are two options for caching in my mind, although neither is great IMO:

1. Use the turbo server from cloud.common to keep a persistent process on the remote host.
2. Use a conventional cache, encoding results so they persist from task to task.

For option 1, the added complexity is considerable and errors can be downright cryptic. For option 2, the main downside is how limited we would be in what can be cached. For a cached result to persist from task to task, the result needs to be encoded and decoded. Pyvmomi objects are complex, and it seems like many cannot be decoded back into objects. I envision the main use case would be caching VM IDs, so that modules can quickly look up the VM info using the ID (see the sketch below). We may be able to cache VM object results, which would be much more useful, but I haven't tried it yet. I know that's a lot; hopefully it's clear-ish. I just wanted to post an update and give an opportunity to raise any questions.
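To illustrate the encode/decode constraint, here is a minimal sketch of the VM-ID approach, assuming pyvmomi and an already-connected ServiceInstance; the helper names are hypothetical:

```python
# Hypothetical sketch of option 2: a task-to-task cache can only hold
# encodable data, so cache the VM's managed object ID rather than the
# pyvmomi object itself.
import json

from pyVmomi import vim


def encode_vm(vm):
    # vim.VirtualMachine instances generally don't survive a JSON
    # round-trip, but their managed object ID (a short string) does.
    return json.dumps({"moid": vm._moId})


def decode_vm(si, cached):
    # si is a connected pyVim ServiceInstance. Rebuild a managed object
    # stub from the cached ID; attribute access then triggers a fresh
    # lookup against vCenter.
    moid = json.loads(cached)["moid"]
    return vim.VirtualMachine(moid, si._stub)
```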
I don't know enough about how ansible-core executes a playbook. Is there a new process for every task? Or is it just one process running all the tasks? Or, if you run for example 5 tasks in parallel, are there 5 processes ("workers") that run the tasks? If several tasks are run in one process, how about just caching the session in an in-memory data structure like a dict (see the sketch below)? BTW, thanks a lot for working on this!
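A rough sketch of that idea, assuming pyvmomi's SmartConnect (credentials are placeholders); as the reply below explains, it only pays off if several tasks actually share one process:

```python
# Hypothetical sketch: cache one authenticated session per vCenter in a
# module-level dict. This helps only if multiple tasks run in the same
# process; a fresh process per task starts with an empty dict every time.
from pyVim.connect import SmartConnect

_SESSIONS = {}  # (hostname, username) -> ServiceInstance


def get_session(hostname, username, password):
    key = (hostname, username)
    if key not in _SESSIONS:
        _SESSIONS[key] = SmartConnect(host=hostname, user=username, pwd=password)
    return _SESSIONS[key]
```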
Each task is a separate process, which is why option 2 is so disappointing :/ The turbo server (option 1) works by keeping a persistent socket open on the remote host and running tasks through that (or at least, that's my understanding of it at this point). Since the session is persistent, we can use an in-memory cache like you describe, which is nice. My most recent pushes have been experimenting with that approach.

I think adding the turbo server as an optional/experimental feature is a good idea; a sketch of the opt-in is below. That would enable users to save the 2 seconds per task and re-use their authenticated sessions. Plus, better error handling/docs for the turbo server would help vmware_rest anyway. We can add in function caching where it makes sense and is relatively safe, but that's probably better as a "phase 2" thing. If that makes sense to you (and Danielle, I'll talk with her this week), I'll close this and open a new PR with better documentation.
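As a sketch, the opt-in could mirror the kubernetes.core pattern linked in the summary below: import AnsibleTurboModule only when an environment variable enables it. The variable name and details here are an approximation, not the final design:

```python
# Sketch of an optional/experimental turbo mode, loosely following
# kubernetes.core's k8s_cp.py: fall back to the stock AnsibleModule
# unless the user opts in via an environment variable.
import os

from ansible.module_utils.parsing.convert_bool import boolean

try:
    ENABLE_TURBO_MODE = boolean(os.environ.get("ENABLE_TURBO_MODE", False))
except TypeError:
    ENABLE_TURBO_MODE = False

if ENABLE_TURBO_MODE:
    from ansible_collections.cloud.common.plugins.module_utils.turbo.module import (
        AnsibleTurboModule as AnsibleModule,
    )
else:
    from ansible.module_utils.basic import AnsibleModule
```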
For the record, someone I've talked to about this told me:
Sounds complicated. Especially since we need inter-process communication.
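For a sense of what that inter-process communication looks like, here is a minimal client-side sketch of tasks talking to a long-lived daemon over a Unix domain socket; the socket path and framing are placeholders, and cloud.common's actual protocol differs:

```python
# Hypothetical sketch of the IPC shape: short-lived task processes send
# requests to one long-lived daemon over a Unix domain socket, so the
# daemon (not the task) holds the authenticated session.
import socket

SOCKET_PATH = "/tmp/turbo_mode.socket"  # placeholder path


def send_request(payload: bytes) -> bytes:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(SOCKET_PATH)
        client.sendall(payload)
        client.shutdown(socket.SHUT_WR)  # signal end of request
        return client.recv(65536)
```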
Yeah, it definitely is complicated. The turbo server in cloud.common is only used by maybe 3 projects? And only vmware_rest has it as mandatory, or even enabled by default, as far as I know. All three of those "problems" are still relevant for the turbo server, although I think the first one is less so.
SUMMARY
Testing the turbo server to see if tests are executed faster. If it works, it should cache sessions between API calls to vCenter.
Based on https://github.com/ansible-collections/kubernetes.core/blob/c8a9326306e65c0edf945fb3e99a67937cbe9375/plugins/modules/k8s_cp.py#L143
Related to https://forum.ansible.com/t/10551