When using Syncrepl…

Quick OpenLDAP tip, boys & girls…

When using syncrepl to replicate from a master LDAP server to a slave LDAP server, always remember to configure the ACLs on the master LDAP server to allow the “sync dn” to read everything.

I know it sounds entirely obvious, but today I realized that the order in which I had defined the ACLs on the master LDAP server was preventing the sync dn from reading the “userPassword” attribute, and thus from syncing it to the slave. The consequence was that users could not authenticate against the slave server! Shit!

Of course, since everything else was syncing properly, all the NSS (lookup) stuff worked fine, but anything authentication-related like PAM wouldn’t work because the user bind would fail with “Invalid credentials” in /var/log/secure. It had been some time since I last tested anything, and I must never have actually tested authentication against the slave (whoops!), so I didn’t notice until now. I know I tested lookups, but testing authentication must have slipped by somehow. Grrr, testing.
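
For the record, the fix boiled down to making sure a rule granting the sync dn read access to userPassword appears before the usual catch-all password rule, since slapd applies the first “access to” directive whose <what> clause matches the entry or attribute. Here’s a minimal slapd.conf sketch, assuming a purely hypothetical sync dn of cn=syncuser,dc=example,dc=com:

    # Let the sync dn read userPassword *before* the usual
    # self-write / anonymous-auth / deny-everyone rule kicks in;
    # slapd uses the first "access to" directive whose <what> matches.
    access to attrs=userPassword
        by dn.exact="cn=syncuser,dc=example,dc=com" read
        by self write
        by anonymous auth
        by * none

    # And make sure the sync dn can read everything else as well.
    access to *
        by dn.exact="cn=syncuser,dc=example,dc=com" read
        by self write
        by * read

Your actual dn and the rest of the “by” clauses will obviously differ; the only point is that the sync dn’s clause has to come before “by * none”.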

Good thing I caught it early and it never escalated into anything bigger; that really could have sucked down the line.

Don’t make the same mistake I did.

IT Watchdogs SuperGoose (WxGoos-2) Review

Some time ago it became apparent that we would require environmental monitoring in our server room. The primary reason is that the room was never intended to be a server room, and the after-the-fact A/C unit installation (size, vent placement, etc.) is definitely less than optimal. Not to mention that the A/C unit is likely overloaded as well, judging by some of the data we gathered after installing the environmental monitoring equipment and software. Basically, I needed to be made aware of any potential problems with the environment in that room so that, should anything go wrong, I could act quickly. A secondary use of the data is to trend the environmental changes in order to reveal patterns that may help with long-term planning.

Time Navigator HA Cluster Agent Configuration

I’ve been wanting to post about a configuration that allows for seamless, uninterrupted file-level backup of storage attached to an active/passive high-availability cluster using Atempo’s Time Navigator, and I’m finally going to do it.

The Problem

The initial difficulty lies in the requirement that the data must be consistently backed up at every interval, no matter which cluster node currently has the backend storage mounted. To do this, an agent must be configured as a cluster resource so that it “follows” the mounting/exporting of the storage to whichever node is active. Accomplishing this requires N + 1 tina agents. That is, if you have two cluster nodes, you need three agents: one local agent on each node, plus one that follows the storage as it floats about the cluster nodes depending on failure or migration events.
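
The setup doesn’t hinge on any particular cluster stack, but just to make the “follow the storage” idea concrete, here’s a rough sketch of how it could be expressed as a Pacemaker resource group via the crm shell. The resource names, IP address, device paths and the lsb:tina.cluster init script name are hypothetical placeholders rather than Atempo’s actual naming:

    # Hypothetical Pacemaker (crm shell) group: the service IP, the shared
    # filesystem and the extra tina agent start, stop and fail over together,
    # so that agent always runs wherever the storage is mounted.
    primitive p_backup_ip ocf:heartbeat:IPaddr2 \
        params ip=192.168.10.50 cidr_netmask=24
    primitive p_backup_fs ocf:heartbeat:Filesystem \
        params device=/dev/vg_data/lv_data directory=/export/data fstype=ext3
    primitive p_tina_agent lsb:tina.cluster \
        op monitor interval=30s
    group g_backup p_backup_ip p_backup_fs p_tina_agent

With a layout like this, the per-node agents keep running as ordinary local services and only the extra agent moves with the storage.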

Luckily for me, the good people at Atempo have engineered the agent in such a way that multiple agents can be run on a single node, each binding to its own IP address and each individually controlled via its own init script. Of course, we need to make some file edits to make all this happen, and that’s what I’m going to share!

Read More