[Continuation of discussion in the matrix channel.]
I'm doing a rebuild of a bunch of packages (for reproducible-build checks) and I'm running a bunch of mock invocations in parallel, e.g. 5–15. I did that in the past on a slower workstation with fewer instances, and it seemed fine. But now I have a beefier machine and the workers largely run serialized. Looking at the output from individual workers, they often block on "waiting for yumcache lock" and "waiting for root lock".
The workers are invoked as:
mock -r cache/build/perl-ExtUtils-F77-1.26-10.fc41/mock.cfg --uniqueext=a --define='_buildhost buildvm-ppc64le-10.iad2.fedoraproject.org' --define='distribution Fedora Project' --define='packager Fedora Project' --define='vendor Fedora Project' --define='bugurl https://bugz.fedoraproject.org/perl-ExtUtils-F77' --without=tests --nocheck cache/rpms/perl-ExtUtils-F77-1.26-10.fc41/perl-ExtUtils-F77-1.26-10.fc41.src.rpm
Some caching is OK, but it seems that once I scale above a few workers (which would otherwise nicely saturate the CPUs), the cache locks become contended and the machine stays >50% idle.
I'd like to not share the cache at all somehow. In particular, all the rpms are pre-downloaded, so the yum cache is probably not doing anything useful at all. (All rpms are cached locally because I'm doing historical builds: I query koji listBuildroot, fetch all the rpms, and provide a repo with those rpms, and mock installs rpms from there. So nothing is downloaded while mock is running; all rpms are provided on disk.)
Things I tried:
--config-opts=yum_cache_enable=False
/cc @praiskup
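For reference, the cache plugins can also be toggled directly in the generated mock.cfg that the workers already use; a minimal sketch, assuming that giving up both shared caches is acceptable for these one-off historical builds:
# Sketch only: standard mock plugin switches, added to the per-package mock.cfg.
# Assumption: with all rpms served from the local repo, losing the shared
# package cache and the root tarball cache only costs some extra setup time.
config_opts['plugin_conf']['yum_cache_enable'] = False   # shared dnf/yum package cache
config_opts['plugin_conf']['root_cache_enable'] = False  # shared buildroot tarball cache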
I think this issue is solvable via the yum_cache plugin; instead of sharing one directory across multiple mock builds, we want to create a tmpfs mountpoint for each mock call. Is anyone able to help with this?
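Until something like that exists, one rough approximation with existing options (an assumption on my part, not the proposed per-call tmpfs behaviour) is to give every worker its own cache tree, which could itself be a tmpfs mount prepared by the caller:
# Rough approximation, not the proposed plugin change: point each worker at
# its own cache tree so no locks are shared across invocations. The path
# scheme below is made up for illustration (e.g. derived from --uniqueext).
config_opts['cache_topdir'] = '/var/cache/mock-worker-a'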
Is this really something that should be solved via the yum_cache plugin?
What if, instead, the buildroot cache directory were scoped to the actual root name with the unique extension, rather than to the shared root name without the extension, e.g. by changing the line where that directory name is derived?
IIUC (and I might not), that would cause the yum_cache plugin, and other plugins that use the cache directory, to use a cache specific to each buildroot instead of one shared regardless of the unique extension.
Would that solve the issue in a more general way? I could totally be missing something, as I'm not super familiar with the mock codebase.
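Roughly what that scoping would mean for the cache layout, as a hypothetical illustration rather than mock's actual code or paths:
# Hypothetical illustration only, not mock's actual code.
shared_root = "fedora-41-x86_64"      # config name without the unique extension
unique_root = shared_root + "-a"      # the same name with --uniqueext=a applied
# Today (as I understand it) the caches hang off the shared name, so all
# parallel workers contend for the same locks:
shared_cache = f"/var/cache/mock/{shared_root}/yum_cache"
# Scoping to the unique name would give each worker a private cache:
scoped_cache = f"/var/cache/mock/{unique_root}/yum_cache"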