
Very slow copying to mounted volume dir. Low I/O? #4459

Open
atabaeph opened this issue Jan 28, 2025 · 1 comment

@atabaeph

Hi guys,
I'm a newbie with GlusterFS and need some help.
I created a volume replicated across 11 servers; every server must have the same files. The volume is mounted at /opt/voiceprints.
I tried copying all the old voiceprints into the new directory /opt/voiceprints, but it was taking too long, so I decided to use rsync:

rsync -av --progress /opt/voiceprints.bckp/verify.kz-kz.8k.male /opt/voiceprints/ | pv
[screenshot: rsync progress output]
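As an aside, piping rsync's output through pv only measures the rate of rsync's progress messages, not the data being transferred. A minimal sketch using rsync's own whole-transfer progress reporting instead (available in rsync 3.1+):

# Report overall progress and throughput for the whole transfer,
# rather than per-file progress piped through pv
rsync -a --info=progress2 /opt/voiceprints.bckp/verify.kz-kz.8k.male /opt/voiceprints/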

The folder /opt/voiceprints.bckp/verify.kz-kz.8k.male is only 450 MB, yet it has been copying for more than an hour.

Maybe I need to add some configuration?

gluster volume info

Volume Name: voiceprints_volume
Type: Replicate
Volume ID: e6356083-d314-4203-b87f-2fc112bd53db
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 11 = 11
Transport-type: tcp
Bricks:
Brick1: engine1:/opt/gluster_brick
Brick2: engine2:/opt/gluster_brick
Brick3: engine3:/opt/gluster_brick
Brick4: engine4:/opt/gluster_brick
Brick5: engine7:/opt/gluster_brick
Brick6: engine8:/opt/gluster_brick
Brick7: engine9:/opt/gluster_brick
Brick8: engine10:/opt/gluster_brick
Brick9: engine13:/opt/gluster_brick
Brick10: engine14:/opt/gluster_brick
Brick11: engine15:/opt/gluster_brick
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
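For reference, options like the ones listed above are changed with gluster volume set. A sketch of the syntax only; the option names below are real GlusterFS options, but the values are illustrative assumptions, not tuning advice from this thread:

# Example syntax for setting volume options (values are placeholders)
gluster volume set voiceprints_volume performance.write-behind-window-size 4MB
gluster volume set voiceprints_volume performance.io-thread-count 32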
dd if=/dev/zero of=/opt/voiceprints/filetest bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0433219 s, 24.2 MB/s
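As a caveat, a single buffered 1 MB write mostly measures the client's page cache rather than replicated write throughput. A more representative sketch forces a flush and writes more data:

# Larger write with a flush at the end, so the timing reflects data
# actually reaching the volume rather than just the page cache
dd if=/dev/zero of=/opt/voiceprints/filetest bs=1M count=100 conv=fsync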
@pranithk
Member

pranithk commented Feb 7, 2025

Do you really want 11 replicas? Are you going to read the data directly from the bricks or from the mount? In any case,

could you provide the file this generates, of the form /var/run/gluster/glusterdumpXXX:

setfattr -n trusted.io-stats-dump -v tmp-io-stats_3 /mnt/test

Also, could you provide the output of the following command before and after running the workload:

gluster volume profile <volname> info incremental
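For context, a sketch of the full profiling workflow: profiling must first be started on the volume before info incremental reports per-interval statistics. The volume name here is taken from the gluster volume info output above:

# Enable profiling on the volume
gluster volume profile voiceprints_volume start

# Take a baseline snapshot, run the slow workload, then snapshot again
gluster volume profile voiceprints_volume info incremental
rsync -av /opt/voiceprints.bckp/verify.kz-kz.8k.male /opt/voiceprints/
gluster volume profile voiceprints_volume info incremental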
