# Release Notes for GlusterFS 3.5.3

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1 and 3.5.2 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

## Bugs Fixed:
- 1081016: glusterd needs xfsprogs and e2fsprogs packages
- 1100204: brick failure detection does not work for ext4 filesystems
- 1126801: glusterfs logrotate config file pollutes global config
- 1129527: DHT :- data loss - file is missing on renaming the same file from multiple clients at the same time
- 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
- 1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
- 1133949: Minor typo in afr logging
- 1136221: Memory is exhausted quickly when handling a message that has multiple fragments in a single record
- 1136835: crash on fsync
- 1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
- 1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
- 1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
- 1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
- 1140338: rebalance is not resulting in the hash layout changes being available to nfs client
- 1140348: Renaming file while rebalance is in progress causes data loss
- 1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
- 1140556: Core: client crash while doing rename operations on the mount
- 1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
- 1141733: data loss when rebalance + renames are in progress and bricks from replica pairs go down and come back
- 1142052: Very high memory usage during rebalance
- 1142614: files with open fd's getting into split-brain when bricks go offline and come back online
- 1144315: core: all brick processes crash when quota is enabled
- 1145000: Spec %post server does not wait for the old glusterd to exit
- 1147156: AFR client segmentation fault in afr_priv_destroy
- 1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"
- 1149857: Option transport.socket.bind-address ignored
- 1153626: Sizeof bug for allocation of memory in afr_lookup
- 1153629: AFR : excessive logging of "Non blocking entrylks failed" in glfsheal log file.
- 1153900: Enabling Quota on existing data won't create pgfid xattrs
- 1153904: self heal info logs are filled with messages reporting ENOENT while self-heal is going on
- 1155073: Excessive logging in the self-heal daemon after a replace-brick
- 1157661: GlusterFS allows insecure SSL modes

## Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:

    1. Allow insecure connections to the volume:

            gluster volume set <volname> server.allow-insecure on

    2. Restarting the volume is necessary:

            gluster volume stop <volname>
            gluster volume start <volname>

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

            option rpc-auth-allow-insecure on

    4. Restarting glusterd is necessary:

            service glusterd restart

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](../Feature Planning/GlusterFS 3.5/libgfapi with qemu libvirt.md) page.
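
  For example, assuming a volume named `datavol` served from a host `server1` (both names are placeholders), a qemu client can then create an image directly on the volume through libgfapi:

        # hypothetical host/volume/image names; requires the settings above
        qemu-img create -f qcow2 gluster://server1/datavol/vm1.qcow2 10G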

- For Block Device translator based volumes the open-behind translator at the client side needs to be disabled:

        gluster volume set <volname> performance.open-behind disabled
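
  One way to confirm that the option is set is to check the 'Options Reconfigured' section of the volume information (the volume name is a placeholder):

        # performance.open-behind should appear under 'Options Reconfigured:'
        gluster volume info <volname>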

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to hang, as reported here. The workaround is NOT to call `glfs_fini` for error cases encountered before a successful `glfs_init`. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
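
  A minimal sketch of the safe pattern (the volume name `testvol` and host `server1` are hypothetical): `glfs_fini` is only called once `glfs_init` has returned successfully.

        /* compile with: gcc example.c -lgfapi */
        #include <stdio.h>
        #include <glusterfs/api/glfs.h>

        int main(void)
        {
            glfs_t *fs = glfs_new("testvol");   /* hypothetical volume name */
            if (!fs)
                return 1;

            glfs_set_volfile_server(fs, "tcp", "server1", 24007);

            if (glfs_init(fs) != 0) {
                /* glfs_init() failed: per the workaround above, do NOT call
                 * glfs_fini() here, it may hang (Bug 1134050 / Bug 1093594) */
                fprintf(stderr, "glfs_init failed\n");
                return 1;
            }

            /* ... use the volume ... */

            glfs_fini(fs);   /* safe only after a successful glfs_init() */
            return 0;
        }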

- If the `/var/run/gluster` directory does not exist, enabling quota will likely fail (Bug 1117888).
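
  If the directory is missing, one possible workaround is to create it before enabling quota (the volume name is a placeholder):

        # ensure the runtime directory exists, then enable quota
        mkdir -p /var/run/gluster
        gluster volume quota <volname> enable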