How to recover from type-mismatch gluster split-brain problem


According to all documents I have read, it is not possible to recover from a glusterfs split-brain situation when the problem is "Type-mismatch". You can see this in the self-heal daemon log /var/log/glusterfs/glustershd.log, for instance in lines like:

[2022-01-18 23:06:34.791698] E [MSGID: 108008] [afr-self-heal-common.c:385:afr_gfid_split_brain_source] 13-gluster-normal-px2jenkins-replicate-0: Gfid mismatch detected for <gfid:4c21e24c-1d22-4a12-a139-8a515f4b8d13>/build.xml>, 1d4f68e1-1a11-474d-8712-55ab8164d06f on my-volume-client-1 and cc8057e4-4395-411b-8315-34d941d1acae on my-volume-client-0.
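The affected entry and the conflicting gfids can be pulled out of such a line with a simple grep. A sketch, with the sample line above hardcoded so it can be tried standalone; on a live system pipe from the glustershd.log instead:

```shell
# Sample shd log line (from the article); on a live system use e.g.
#   grep 'Gfid mismatch' /var/log/glusterfs/glustershd.log
line='[2022-01-18 23:06:34.791698] E [MSGID: 108008] [afr-self-heal-common.c:385:afr_gfid_split_brain_source] 13-gluster-normal-px2jenkins-replicate-0: Gfid mismatch detected for <gfid:4c21e24c-1d22-4a12-a139-8a515f4b8d13>/build.xml>, 1d4f68e1-1a11-474d-8712-55ab8164d06f on my-volume-client-1 and cc8057e4-4395-411b-8315-34d941d1acae on my-volume-client-0.'

# Pull out all gfid UUIDs: the parent directory's gfid first,
# then the two conflicting gfids of the file itself
uuids=$(echo "$line" | grep -o '[0-9a-f]\{8\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{12\}')
echo "$uuids"
```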

The end symptom the user experiences is that access to some files reports: "Transport endpoint is not connected"

The only suggestion I have found was to completely recreate the volumes. 

I could however recover by following this procedure (using the volume "my-volume"):

Find the files with problems

First run:

gluster volume heal my-volume info

This will give you a list of possible victims, one list per brick, like:

Brick server11:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 0

Brick server19:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2

Brick server15:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2

This list might change if you run it several times because the self-heal daemon is running. But if you see a file that is constantly there, like the file you had the initial problem with, continue and check the gfid of that file on all bricks.
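If you want a single number to watch while the healing progresses, the per-brick counts can be totalled with a little awk. A sketch, fed here from a sample of the output above so it runs standalone; on a live system pipe from `gluster volume heal my-volume info` instead:

```shell
# Sum all "Number of entries:" lines from heal info output
total=$(awk '/^Number of entries:/ { n += $NF } END { print n + 0 }' <<'EOF'
Brick server11:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 0

Brick server19:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2

Brick server15:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2
EOF
)
echo "$total"
```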

Check GFID on proper server

Log in to each brick server for the volume and check the gfid of the file to correct. The gfid is gluster's internal id for a file, a bit like an inode number. First run this on a brick that reports no problems for this file.

On server11:

getfattr -e hex -n trusted.gfid /srv/gluster-normal/my-volume/problem-file
# file: /srv/gluster-normal/my-volume/problem-file
trusted.gfid=0x4c9d33a1fa064ccea30e0295807de94b

This is the proper id. Next, run the same command on the failing bricks.

Correct the GFID on broken servers

On server19, first check that there is indeed a mismatch:

cd /srv/gluster-normal/my-volume
getfattr -e hex -n trusted.gfid problem-file
# file: problem-file
trusted.gfid=0x...   (a different value here confirms the mismatch)

That confirmed the first mismatch, so correct it with the proper value from server11:

cd /srv/gluster-normal/my-volume
setfattr -n trusted.gfid -v 0x4c9d33a1fa064ccea30e0295807de94b problem-file

Repeat the above on every server where the gfid of the file mismatches.
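Note that the logs and heal info show the gfid in UUID form, while setfattr wants the packed hex form; the conversion is just removing the dashes (and putting them back). A sketch in bash, using the value from the steps above:

```shell
# UUID form (as in logs and heal info) -> hex form (as setfattr wants it)
uuid=4c9d33a1-fa06-4cce-a30e-0295807de94b
hex=0x$(echo "$uuid" | tr -d '-')
echo "$hex"

# and back: hex -> UUID (bash substring expansion)
h=${hex#0x}
back=${h:0:8}-${h:8:4}-${h:12:4}-${h:16:4}-${h:20:12}
echo "$back"
```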

Note! If a directory has mismatching gfids, it must be corrected before any files below it.

Remove files not known

Entries like <gfid:c12161df-cdb8-4605-9381-3e9da7e458ba> in the "gluster v heal my-volume info" output usually mean a missing file that can be removed on all bricks warning about it. The file is then located at <brick-dir>/.glusterfs/c1/21/c12161df-cdb8-4605-9381-3e9da7e458ba; for a regular file this is a hard link to the proper path (for a directory, a symbolic link), and both can be removed normally. But make sure it is OK to delete first.
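The .glusterfs location follows directly from the gfid: the first two directory levels are its first two hex byte pairs. A small sketch computing the path (the brick path here is hypothetical):

```shell
# Compute where a gfid entry lives under a brick's .glusterfs tree
gfid=c12161df-cdb8-4605-9381-3e9da7e458ba
brick=/srv/gluster-normal/my-volume   # hypothetical brick path
gfid_path=$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid
echo "$gfid_path"
```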

Some tips

If the list of non-healed files still persists, verify whether you can access them from a fuse-mounted directory. If so, try copying them to a temporary file and then copying back, so that the gluster replicas get updated.

Keep cleaning out broken gfids, or possibly delete broken files, until "gluster v heal my-volume info" returns 0 entries (on a quiet filesystem, that is).
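The copy-out/copy-back trick looks like this; demonstrated here on a temp file so it runs standalone. On a real volume $f would be a path under the fuse mount, e.g. /mnt/my-volume/problem-file:

```shell
# Rewrite a file in place so gluster records a fresh write on all replicas
f=$(mktemp)                                # stand-in for a fuse-mounted path
echo "file content" > "$f"
cp -p "$f" "$f.tmp" && mv "$f.tmp" "$f"    # copy out and copy back
result=$(cat "$f")
echo "$result"
rm -f "$f"
```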

devhelp for wxWidgets


For anyone interested in devhelp for wxWidgets I generated one here.

I used hhconvert from the htmlhelp python2 package with some manual tweaking.


  • Download the latest wxWidgets chm docs
  • Run: python2 hhconvert wxWidgets-3.1.5.chm wxWidgets-3.1.5.tgz
  • Unpack the file under ~/.local/share/devhelp/books/wxWidgets-3.1.5
  • Make sure the devhelp file is named "wxWidgets-3.1.5.devhelp", i.e. the same as its folder name.
  • Also move all files up from the book subfolder so they are in the same folder as the .devhelp file
  • Restart devhelp

Getting an X11 client to work with x2go from a docker container


Getting an X11 client running in a docker container to work properly in an x2go session is not quite as straightforward as with normal X11 clients in docker. To make it work, it is important to set the option --network=host.

Like this:

docker run -e HOME="$HOME" -e DISPLAY="$DISPLAY" --network=host -it -v /tmp/.X11-unix:/tmp/.X11-unix -v "$HOME/.Xauthority:$HOME/.Xauthority" --user "$(id -u):$(id -g)" "image-name"

How to change the IP address of a gluster brick


I wrote this small script to replace the IP address of a brick of a gluster volume. This can for instance be needed when network interfaces are added or changed on a gluster node:

#!/bin/sh
# Set DRYRUN=echo in the environment to only print the gluster commands
if [ $# != 4 ]; then
    echo "Usage: switch-brick volume old-host new-host path-on-server" >&2
    exit 1
fi

VOLUME=$1
OLD_HOST=$2
NEW_HOST=$3
DISK_PATH=$4

$DRYRUN gluster volume reset-brick "$VOLUME" "$OLD_HOST":"$DISK_PATH" start &&
$DRYRUN gluster volume reset-brick "$VOLUME" "$OLD_HOST":"$DISK_PATH" "$NEW_HOST":"$DISK_PATH" commit force
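With DRYRUN=echo the gluster commands are printed instead of executed, which also makes the script safe to try out. A sketch of what the first command expands to, with hypothetical hosts and brick path:

```shell
# Dry-run preview of the first reset-brick command (hypothetical values)
DRYRUN=echo
VOLUME=my-volume
OLD_HOST=server19
NEW_HOST=server20
DISK_PATH=/srv/gluster-normal/my-volume

first=$($DRYRUN gluster volume reset-brick "$VOLUME" "$OLD_HOST":"$DISK_PATH" start)
echo "$first"
```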