Traefik plugins not working, in fact no middleware is working?


When installing Traefik, as a kubernetes ingress controller or otherwise, it is not immediately clear what to do with the middleware configurations. If this is not done correctly, the effect is that plugins and middlewares appear not to work. One must actively turn on a middleware (plugins run as middlewares). For kubernetes this is done via an annotation on the ingress, or globally per entrypoint as shown further down. If you for instance have the fail2ban plugin installed and want to use it on your web-service, you need to add the following annotation value to your ingress (assuming it is in the namespace "default"): default-fail2ban@kubernetescrd

The syntax is: <namespace>-<middleware>@kubernetescrd, if you run in kubernetes and have created the Middleware there as a custom resource.
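As an illustration, a minimal ingress using that annotation might look like this (the host and backend service names are made up; the annotation key is Traefik's standard router.middlewares annotation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-service
  namespace: default
  annotations:
    # <namespace>-<middleware>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: default-fail2ban@kubernetescrd
spec:
  rules:
    - host: web.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # hypothetical backend service
                port:
                  number: 80
```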

Or, you can enable it by default on all web-services managed by Traefik by adding these arguments under additionalArguments in your helm values (note that a plugin also needs a corresponding --experimental.plugins.<name>.moduleName argument to be registered):

- ""
- "--experimental.plugins.fail2ban.version=v0.6.6"
- "--entrypoints.web.http.middlewares=default-fail2ban@kubernetescrd,default-geoblock@kubernetescrd"
- "--entrypoints.websecure.http.middlewares=default-fail2ban@kubernetescrd,default-geoblock@kubernetescrd"
- "--entrypoints.ssh.http.middlewares=default-fail2ban@kubernetescrd,default-geoblock@kubernetescrd"
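Either way, the Middleware object itself must exist as a custom resource in the cluster. A sketch of what it could look like for the fail2ban plugin (on older Traefik versions the apiVersion is traefik.containo.us/v1alpha1, and the option names under rules are illustrative; check the plugin's own documentation):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: fail2ban
  namespace: default
spec:
  plugin:
    fail2ban:          # must match the plugin name registered via --experimental.plugins
      rules:
        bantime: "3h"  # illustrative options, see the plugin docs
        findtime: "10m"
        maxretry: "4"
```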

So your puppet certificate has expired?


The Puppet server uses two certificate/key pairs: a certificate authority (CA) pair, used to sign the clients' certificates, and one for the web service. Both of them will expire at some point.
In this article I assume that all your puppet certs and keys are located somewhere under /etc/puppetlabs/puppet/ssl.

Under the ca subfolder the CA certs are stored as:
ca_crt.pem: The CA cert
ca_key.pem: The CA private key that corresponds to the cert.

Under the certs subfolder you will see:
ca.pem: A copy of the CA cert
<servername>.pem: Cert of web service

Under the private_keys subfolder you will see:
<servername>.pem: Private key that belongs to the server cert.

If you use foreman, it is usually configured to use the same certificate locations.

Before you continue you will need to have openssl and XCA installed (XCA is a super nice certificate management tool).

To check if your certs have expired, run (for the CA in this example):

bash$ openssl x509 -in ca_crt.pem -noout -enddate
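To check several certs at once, a small helper might look like this (a sketch; the paths are the ones assumed above, and the 30-day window is just an example):

```shell
#!/bin/sh
# Return success if the cert in $1 is still valid $2 seconds from now
# (default 2592000 s = 30 days). Relies only on openssl.
check_cert() {
  openssl x509 -in "$1" -noout -checkend "${2:-2592000}"
}

# Example: list everything under the Puppet ssl dir that is about to expire.
for cert in /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem \
            /etc/puppetlabs/puppet/ssl/certs/*.pem; do
  [ -e "$cert" ] || continue
  check_cert "$cert" >/dev/null || echo "expires soon: $cert"
done
```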

In XCA, import the keys above under the tab "Private Keys", and then import the certs under the tab "Certificates". Here you will also see a nice view of dates and other info.

If only your server cert has expired you can just renew that one by right-clicking it and choosing renewal. Select a new expiry date, export it and copy the file to /etc/puppetlabs/puppet/ssl/certs/<servername>.pem (sometimes certs are also named .crt, whatever your system uses).

If your CA has expired, things are more troublesome and will require changes on all managed clients.
First you need to renew your CA-cert. Make sure to re-use the same serial number since that might be included in its self-signing. Export the cert and verify that it is indeed a self-signed certificate with:
openssl verify -CAfile new-ca-cert.pem new-ca-cert.pem

I had problems with this since the original cert used its DN and serial number for signing. If you get the error message:
"unable to get local issuer certificate"
you most likely have a problem with the serial number. In that case, instead create a completely new CA cert with XCA as follows:

1. On the first tab choose "Create self signed certificate". Select CA as template and click "Apply all".
2. On the second tab "Subject", enter common name to be the exact common name as the old certificate.
3. Tick "Used keys too" and make sure to select the original private key for the CA - if this is not done, everything will fail.
4. Under extensions, also tick "Authority Key Identifier" - this will also cause everything to fail if not done.
5. Make sure to select an expiration date far in the future.
6. OK

Now export this cert and store it under /etc/puppetlabs/puppet/ssl/, both as ./ca/ca_crt.pem and as ./certs/ca.pem

Make some checks (running under /etc/puppetlabs/puppet/ssl):
1. Check that the output from "openssl rsa -in ca/ca_key.pem -noout -modulus" and "openssl x509 -in ca/ca_crt.pem -noout -modulus" show the same modulus
2. Make the same comparison for the server key and cert.
3. Verify the server cert with: openssl verify -CAfile certs/ca.pem certs/<servername>.pem
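The modulus comparison can be scripted like this (a sketch, using the paths assumed above):

```shell
#!/bin/sh
# Succeed if the RSA key in $1 and the certificate in $2 share the same modulus.
moduli_match() {
  [ "$(openssl rsa -in "$1" -noout -modulus)" = \
    "$(openssl x509 -in "$2" -noout -modulus)" ]
}

# Usage, run under /etc/puppetlabs/puppet/ssl:
#   moduli_match ca/ca_key.pem ca/ca_crt.pem || echo "CA key/cert mismatch"
#   moduli_match private_keys/<servername>.pem certs/<servername>.pem
#   openssl verify -CAfile certs/ca.pem certs/<servername>.pem
```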

Restart puppetmaster and if you run foreman also apache2, foreman and foreman-proxy.

Sadly, if you renewed the CA-certificate you also need to copy the ca_crt.pem to all clients to replace their file /etc/puppetlabs/puppet/ssl/certs/ca.pem

Things should work by now.

I am considering adding an extra cronjob on all clients that regularly downloads this ca.pem from a known https location. This might or might not be a security risk though.
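Such a job could be sketched like this (the download URL is hypothetical; the point is to refuse anything that does not verify as a valid self-signed certificate before replacing the local ca.pem):

```shell
#!/bin/sh
# Sketch of a client-side CA refresh job.
install_ca() {                      # $1 = downloaded cert, $2 = destination
  # Refuse anything that is not a valid self-signed certificate.
  openssl verify -CAfile "$1" "$1" >/dev/null 2>&1 || return 1
  cp "$1" "$2"
}

# Cron job sketch (hypothetical URL):
#   curl -fsS -o /tmp/new-ca.pem https://puppet.example.com/ca.pem \
#     && install_ca /tmp/new-ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem
```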

How to recover from type-mismatch gluster split-brain problem


According to all documents I have read it is not possible to recover from a glusterfs split-brain situation when the problem is "Type-mismatch". You can see this in the self-heal daemon log /var/log/glusterfs/glustershd.log, for instance with lines like:

[2022-01-18 23:06:34.791698] E [MSGID: 108008] [afr-self-heal-common.c:385:afr_gfid_split_brain_source] 13-gluster-normal-px2jenkins-replicate-0: Gfid mismatch detected for <gfid:4c21e24c-1d22-4a12-a139-8a515f4b8d13>/build.xml>, 1d4f68e1-1a11-474d-8712-55ab8164d06f on my-volume-client-1 and cc8057e4-4395-411b-8315-34d941d1acae on my-volume-client-0.

The end symptom the user experiences is that access to some files fails with "endpoint not connected".

The only suggestion I have found was to completely recreate the volumes. 

I could however recover by following this procedure (using the volume "my-volume"):

Find the files with problems

First run:

gluster volume heal my-volume info

This will give you a list of possible victims. One list per brick. Like:

Brick server11:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 0

Brick server19:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2

Brick server15:/srv/gluster-normal/my-volume
Status: Connected
Number of entries: 2

This list might change if you run it several times because the self-heal daemon is running, but if you see a file that is constantly there, like the file you had the initial problem with, continue and check the gfid of that file on all bricks.

Check GFID on proper server

Log in to each brick for the volume and check the gfid for the file to correct; the gfid is the gluster-internal id for a file, a bit like an inode number. First run on the brick that reports no problems for this file.

On server11:

getfattr -e hex -n trusted.gfid /srv/gluster-normal/my-volume/problem-file
# file: /srv/gluster-normal/my-volume/problem-file
trusted.gfid=0x4c9d33a1fa064ccea30e0295807de94b

This is the proper id. Next run on the failing bricks.

Correct the GFID on broken servers

On server19 first check that there is indeed a mismatch:

cd /srv/gluster-normal/my-volume
getfattr -e hex -n trusted.gfid problem-file
# file: problem-file

Here the gfid differed from the one on server11, so correct it with:

cd /srv/gluster-normal/my-volume
setfattr -n trusted.gfid -v 0x4c9d33a1fa064ccea30e0295807de94b problem-file

Correct according to the above on all servers where there is a mismatch of the gfid of the file.
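The check-and-correct steps above can be sketched as a small helper (assuming passwordless ssh to the bricks; server names and paths are the example values from this article):

```shell
#!/bin/sh
# Extract the hex gfid value from `getfattr -e hex -n trusted.gfid` output.
parse_gfid() { sed -n 's/^trusted\.gfid=//p'; }

# Read the gfid on a healthy brick and stamp it onto a broken one.
fix_gfid() {                # $1 = good host, $2 = bad host, $3 = file path
  gfid=$(ssh "$1" getfattr -e hex -n trusted.gfid "$3" | parse_gfid)
  [ -n "$gfid" ] && ssh "$2" setfattr -n trusted.gfid -v "$gfid" "$3"
}

# Usage for the example above:
#   fix_gfid server11 server19 /srv/gluster-normal/my-volume/problem-file
```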

Note! If a directory has mismatching gfid:s, it must be corrected before any files below it.

Remove files not known

Entries like <gfid:c12161df-cdb8-4605-9381-3e9da7e458ba> in the "gluster v heal my-volume info" output usually mean a missing file that can be removed on all bricks warning about it. The gfid file is then located at <brick-dir>/.glusterfs/c1/21/c12161df-cdb8-4605-9381-3e9da7e458ba and is a symbolic link to the proper path; both can be removed normally. But make sure it is ok to delete first.

Some tips

If the list of non-healed files still persists, verify that you can access them from fuse-mounted dirs. If so, try to copy them to a temporary file and then copy back so that the gluster replicas are updated.

Keep cleaning out broken gfids, or possibly delete broken files, until "gluster v heal my-volume info" returns 0 entries (for a quiet filesystem, that is).

Using multiple LDAP or AD domains for gerrit authentication


As far as I know it is not directly possible to use multiple LDAP/AD domains for
authentication with gerrit.
You can use multiple LDAP/AD servers, but that is only for redundancy.
One way to be able to log in from multiple domains is to use a
proxying OpenLDAP server.
In our case this server has its own local user database - which we needed for
historical reasons and also to be able to add local users without having those
accounts in AD.

The setup requires the use of both the ldap and the meta ldap databases.
The meta database handles rewriting and translation of logins and accounts from
one domain and schema to the AD schema.

Let's say in this scenario we have the following DITs:

Active directory domain: OU=Users,DC=company,DC=com
Local ldap domain: OU=People,DC=local,DC=domain

You want to allow people to login to gerrit from both the local ldap account and from the AD domain.

Then three databases are needed in the slapd.conf file.

First database, a pure proxying for the AD domain as:

# Proxy to real NGAD AD
database ldap
suffix "OU=Users,DC=company,DC=com"
uri "ldap://local.domain/"
lastmod off

Second database: a meta database that gerrit uses to look up the real DN given only the login name. This real DN is then used for the final authentication. That authentication is made to the same ldap server, which is why this ldap also needs to proxy the pure AD DIT.

This is a subordinate domain, meaning lookups will be done in this domain first. It has a dummy sub-DIT since it would collide with the top DIT otherwise. It looks like:

# Settings for AD
database meta
suffix "OU=AD,OU=People,DC=local,DC=domain" <- Note the extra sub OU=AD
uri "ldap://local.domain/OU=AD,OU=People,DC=local,DC=domain"
rewriteEngine on
rewriteContext searchBase
rewriteRule "OU=AD,OU=People,DC=local,DC=domain" "OU=Users,DC=company,DC=com" ":"

rewriteContext searchFilter
rewriteRule ".uid=(.*)." "(samaccountname=%1)" ":"
idassert-bind bindmethod=simple
binddn="CN=Nisse Hult,OU=Users,DC=company,DC=com" <- This should be a real AD account.
overlay rwm
rwm-map attribute uid sAMAccountName

Finally a local database for local accounts as:

# Local overlay LDAP
database mdb
suffix "DC=local,DC=domain"
directory "/var/lib/ldap"
rootdn "CN=admin,DC=local,DC=domain"
rootpw "{SSHA}jwd892ej8jfdf2df83f"     <- encrypted password for cn=admin, created with slappasswd
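With the proxy in place, gerrit can then be pointed at it. A sketch of the relevant gerrit.config section (base DN from the example above; bind credentials and filter details will depend on your setup):

```ini
[auth]
  type = LDAP
[ldap]
  server = ldap://local.domain
  accountBase = OU=People,DC=local,DC=domain
  accountPattern = (uid=${username})
```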

Apart from the above, all "normal" slapd configs must also be in place. I basically just
used the example config, with the addition that the rwm overlay and the meta and ldap database modules must also be loaded, like:

moduleload back_ldap
moduleload back_meta
moduleload rwm

I have not tried to convert this to the config-directory format (allowing configuration changes directly via ldapmodify),
but I assume that should work fine as well.

devhelp for wxWidgets


For anyone interested in devhelp for wxWidgets I generated one here.

I used hhconvert from the htmlhelp python2 package with some manual tweaking.


  • Download the latest wxWidgets chm docs
  • Run: python2 hhconvert wxWidgets-3.1.5.chm wxWidgets-3.1.5.tgz
  • Unpack the file under ~/.local/share/devhelp/books/wxWidgets-3.1.5
  • Make sure the devhelp-file is named "wxWidgets-3.1.5.devhelp", i.e. the same name as its folder.
  • Also move up all files from the book subfolder to be in the same folder as the .devhelp-file
  • Restart devhelp
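The unpack, move and rename steps above can be sketched as one helper (the book name is derived from the archive name; the destination defaults to devhelp's usual book directory):

```shell
#!/bin/sh
install_devhelp_book() {          # $1 = tgz archive, $2 = books dir (optional)
  archive=$1
  book=$(basename "$archive" .tgz)
  dest=${2:-$HOME/.local/share/devhelp/books}/$book
  mkdir -p "$dest"
  tar -xzf "$archive" -C "$dest"
  # If the archive wrapped everything in a subfolder, move its contents up
  # so the .devhelp file sits directly in $dest.
  for sub in "$dest"/*/; do
    [ -d "$sub" ] || continue
    mv "$sub"* "$dest"/
    rmdir "$sub"
  done
  # The index file must be named after its folder.
  [ -f "$dest/$book.devhelp" ] || mv "$dest"/*.devhelp "$dest/$book.devhelp" 2>/dev/null
}

# Usage:
#   install_devhelp_book wxWidgets-3.1.5.tgz
```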