
[Bug] Inaccurate transfer progress if stage contains over 10k files #221

Closed
veriditin opened this issue Sep 30, 2024 · 1 comment
veriditin commented Sep 30, 2024

If I push the data for a specific stage, and the number of files is large (300k in my case), the transfer progress information for the push is completely inaccurate.

<snip>
Transferring:
 * aa/e4fa778a8ea6034ae3d…b82b5a70abfe5039840882:100% /1.118Ki, 0/s, -
 * ab/f4b05fcb08267584853…680644bb53668c3b33e0cd:100% /6.201Ki, 0/s, -
 * b0/f97f9061ee33de721e2…fc1ebd94cb0dd3c6524760:100% /59.624Ki, 0/s, -
 * 80/f76e715c125a9a3b55f…8ed83437119d4c0466d17b:100% /894, 0/s, -
 * d0/0e98564e078a7d9c659…d1bafbecd91ff6352bc654:100% /752, 0/s, -
 * 7e/ff372b4414cb15e31d0…345d3d0ef1b37205d7aacc:100% /1.873Ki, 0/s, -
 * 97/f8f1369e6a983073bc4…b7e3a7e4b3b6ce0ab64307:100% /3.560Ki, 0/s, -
 * 90/f5513a9d6084a1314db…d86d80e33a6bf0f0a00b8e:100% /3.381Ki, 0/s, -
 * a0/f55bc8ea261a9b049d1…07b088d763a69b0fc7eddb:100% /702, 0/s, -
Transferred:   	  832.843 MiB / 862.690 MiB, 97%, 3.586 MiB/s, ETA 8s
Transferred:       247662 / 257810, 96%
Elapsed time:      3m36.3s
<snip>
Transferring:
 * a3/ff63ff4bf6b485f5551…cc2e1666dc2129158d07e6:100% /8.901Ki, 0/s, -
 * d8/1a990cfcb35f2bff419…a6eaffb9d5ed08e50b7eb9:100% /1.095Ki, 0/s, -
 * c6/f32e35d7eeb66513438…6acee49bb83ba3ed923535:100% /11.438Ki, 0/s, -
 * ee/149da6970692cb68771…08f67a369271c0d5634cca:100% /3.667Ki, 0/s, -
 * b8/f9544e4f7c311eb9132…bcf1db9da99b4ae327cf2f:100% /4.257Ki, 0/s, -
 * f0/0f9e72be846edc351a6…e51642ee0b6154e5b03fcd:100% /1.117Ki, 0/s, -
 * f1/0ef6bb3baf3642eea25…ba25ccc20b2e1fb1e28262:100% /931, 0/s, -
 * cf/20edb90b93c881c1359…93fd88b7333d495bfb7e6f:100% /15.273Ki, 0/s, -
 * a3/ff79e8e9c7562f52379…310cbc374760ae9afd5242:100% /12.558Ki, 0/s, -
Transferred:   	  853.257 MiB / 885.016 MiB, 96%, 3.542 MiB/s, ETA 8s
Transferred:       254577 / 264725, 96%
Elapsed time:      3m42.3s
<snip>
Transferred:   	  864.101 MiB / 895.826 MiB, 96%, 3.524 MiB/s, ETA 9s
Transferred:       258052 / 268200, 96%
Elapsed time:      3m45.3s
Transferring:
 * f1/1fd9a805cf4689b57f6…82620f61c9e4965d31af42:100% /813, 0/s, -
 * fc/1fa145540991044aaf1…434eeaa3e1dccb6534de93:100% /41.541Ki, 0/s, -
 * e1/2bd8e7896d4d4b016dd…92238cd2ccbcefebd8f83c:100% /8.086Ki, 0/s, -
 * f8/1ea580e6d30b923d2c3…ee0fd6df8e649cec99d9e9:100% /2.988Ki, 0/s, -
 * ef/20f52658199c53b09c8…750cf946f974fdda3374c2:100% /11.309Ki, 0/s, -
 * d7/2cdd81143aa1bd21b48…07f9548488e524c9f32d5c:100% /1.049Ki, 0/s, -
 * f3/20f967eb8a49aa19893…e29608138968994402ac19:100% /3.308Ki, 0/s, -
 * ec/2643f3b49eb0581f71c…c2bd90828650f78b6737c5:100% /5.374Ki, 0/s, -
 * e0/2626f3c57c58f2fd2b6…d239b2791dd8a57cd2129e:100% /8.854Ki, 0/s, -
Transferred:   	  865.789 MiB / 897.596 MiB, 96%, 3.548 MiB/s, ETA 8s
Transferred:       258651 / 268799, 96%
Elapsed time:      3m45.8s
Transferring:
 * d7/2f287499dd65ef93887…fd665d43cd8cac9c36b2a6:100% /707, 0/s, -
 * e5/2cf4112b969e22ecd0c…2b1d3c5efd45aacc8edfe4:100% /3.327Ki, 0/s, -
 * d6/2c89121671c1a23e430…9dbb75dd79b9f00e876b56:100% /766, 0/s, -
 * f9/23dea56f958b4eb8be6…3d25e82ca54ae6e582179d:100% /745, 0/s, -
 * df/2a34d16300932ff0172…7d32d4729c2d3d0f365e27:100% /1.936Ki, 0/s, -
 * fb/244876bafb23705ccd1…38ff5d583b8cc77dbac813:100% /1.283Ki, 0/s, -
 * d4/31a342d2b66c1c3ed7b…e79562e7d5b1e358d9becc:100% /9.561Ki, 0/s, -
 * f2/212ecc9dceaec6d19f7…32210a7154b777ca28e83f:100% /1.438Ki, 0/s, -
 * fc/217dbc44fa9154f7c14…2f97e0f8276933aacac99e:100% /1.386Ki, 0/s, -
Transferred:   	  867.746 MiB / 899.325 MiB, 96%, 3.548 MiB/s, ETA 8s
Transferred:       259245 / 269393, 96%
Elapsed time:      3m46.3s
Transferring:
 * f2/249cac050ec27ba49cb…d3b32d21a304764cdf3d21:100% /1.358Ki, 0/s, -
 * d9/309cf1c1342065db362…4cb8beaf1de7ba9112d5e8:100% /1.282Ki, 0/s, -
 * db/3352be2f73bda2968ef…8a8ef9bd47140196ce7186:100% /795, 0/s, -
 * d7/31b21e668783d2fe99c…bb484c8e70810ebfacc9b7:100% /941, 0/s, -
 * d2/30d50fac5c9b428301d…6e9af5046e9f8d89071951:100% /830, 0/s, -
 * e2/2f07357c3049d65bcc8…9e8a0f913134a3e4344a16:100% /1.578Ki, 0/s, -
 * f3/263cd490adc507988b1…cbb5abd0002c287c59f4f3:100% /3.791Ki, 0/s, -
 * e7/28f08d76749fe11f3d7…9be4e11b9a1b9622ea35d4:100% /962, 0/s, -
 * f1/25d846051fa18f96b9e…6b3c29a952fd25a36a60f1:100% /1.076Ki, 0/s, -
Transferred:   	 1012.506 MiB / 1012.506 MiB, 100%, 3.579 MiB/s, ETA 0s
Transferred:       305901 / 305901, 100%
Elapsed time:      4m26.6s
Fixing permissions   305901 / 305901

Dud constantly reports an ETA of a few seconds, but you can clearly see that the elapsed time is far longer than the ETA ever suggested. Some investigation leads me to believe this is an issue with the transfer progress in rclone, which by default only looks 10k files ahead at any given time: https://forum.rclone.org/t/progress-shows-incorrect-total-size-to-be-transferred/8686/3
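For reference, the same setting is also exposed as an rclone command-line flag; here is a sketch of invoking rclone directly (outside Dud) with a raised backlog. The remote names are placeholders:

```shell
# rclone's default backlog is 10000 objects; raising it lets the progress
# totals cover the whole transfer, at the cost of more memory per queued object.
rclone copy src: dst: --progress --max-backlog 400000
```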

Running RCLONE_MAX_BACKLOG=400000 dud push fixes the progress text to be reasonably accurate: rclone now looks ahead 400k files and thus knows exactly which files need to be synced and roughly how long that will take.

Considering Dud knows exactly how many files are related to a stage, maybe it makes sense to set rclone's max-backlog parameter to this value by default?
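The workaround above can be sketched as a shell wrapper. This is not Dud's actual behavior, just an illustration of the proposal: size the backlog to the stage's file count (305901 in the log above) before pushing.

```shell
# Hypothetical wrapper: export the backlog sized to the stage's file count
# (a value Dud already knows), then push as usual.
export RCLONE_MAX_BACKLOG=305901
dud push
```

Any value at or above the stage's file count should give accurate totals; the trade-off, per the forum post linked above, is memory usage per queued object.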


kevin-hanselman commented Oct 29, 2024

Thanks for opening the issue, @veriditin!

You are correct that this behavior comes entirely from rclone. As the creator of rclone mentions in that forum post, the default is set to conserve memory. Beyond that, I cannot be sure what the ramifications of overriding rclone's max backlog would be. In general, I want to avoid trying to choose optimal rclone settings for Dud, as Nick and the rclone community have spent countless hours picking a sensible default configuration. For this reason, I will be closing this issue as "not planned" for now.

That said: In my opinion, the best way to handle this would be #170. It appears there was some movement on this in rclone earlier in the year, so here's hoping we get a solution that Dud can work with soon.

Thanks for "doing your homework" in this issue report (and #220)! It's very helpful to maintainers 👍

@kevin-hanselman closed this as not planned on Oct 29, 2024