Posted: August 5th, 2010 | Author: rthomson | Filed under: Sysadmin | Tags: atempo, backup, linux, server, software, tina, unix | No Comments »
I’ve been meaning to post about a configuration that allows seamless, uninterrupted file-level backup of storage attached to an active/passive high-availability cluster using Atempo’s Time Navigator, and I’m finally getting around to it.
The initial difficulty lies in the requirement that the data be backed up consistently at every interval, no matter which cluster node currently has the backend storage mounted. To achieve this, an agent must be configured as a cluster resource so that it “follows” the storage as it is mounted and exported on whichever node is active. Accomplishing this requires N + 1 tina agents: with two cluster nodes, you need three agents, one local agent per node plus one for the storage itself as it floats between the nodes during failure or migration events.
Luckily for me, the good people at Atempo have engineered the agent in such a way that multiple agents can be run on a single node, each binding to its own IP address and each individually controlled via its own init script. Of course, we need to make some file edits to make all this happen, and that’s what I’m going to share!
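As a rough illustration of the idea only (this is not Atempo’s actual syntax; the floating IP address, the function name, and the messages below are all hypothetical), the “cluster” agent instance should only run on whichever node currently holds the floating service IP:

```shell
#!/bin/sh
# Hypothetical sketch: decide whether this node should run the extra
# "cluster" agent instance, based on whether it holds the floating IP.
# The address below is made up for illustration.
FLOATING_IP="192.168.10.50"

holds_floating_ip() {
    # True if the floating IP is configured on any local interface.
    ip -o addr show 2>/dev/null | grep -qwF "$FLOATING_IP"
}

if holds_floating_ip; then
    echo "This node holds $FLOATING_IP: start the cluster agent instance here."
else
    echo "Floating IP not present: only the local agent runs on this node."
fi
```

In a real cluster this check is the cluster manager’s job, not a shell script’s; the point is simply that the N + 1th agent starts and stops with the floating address.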
[ Read More ]»
Posted: April 23rd, 2010 | Author: rthomson | Filed under: Sysadmin | Tags: acl, code, linux, shell, unix | 3 Comments »
I’ve recently come across a situation where the inherent design of POSIX ACLs has left me scratching my head for a solution to the problem of setting up a “project” or “group share” directory on Linux. The problem is as follows: We have several different projects or groups that desire a directory where any and every file created, copied or moved to said directory will become owned by a particular group and have group read/write permissions set automatically.
Most of the problem is solved through age-old UNIX techniques. For group ownership, all we need to do is set up the top-level directory to be owned by the “project” or “group share” group and set the setgid bit on the directory:
$ mkdir project1
$ chown .projgroup project1
$ chmod g+s project1
This effectively forces every file created in or copied into the “project1” directory to be owned by group “projgroup” (a file moved in with mv from the same filesystem keeps its original group, since mv just renames it). So far, so good. The difficulties begin when we attempt to use default ACLs to enforce the permissions of any files created, copied or moved into the directory.
The POSIX ACL standard defines “default” ACLs which can be applied to a directory and are inherited by newly created, copied, or moved child files and directories. The default ACLs are inherited properly, but for a file copied into the group share WITHOUT group write already set, the inherited ACL mask is narrowed to the file’s original group permissions, which prevents the file from being group writable!
$ getfacl project1
# file: project1
# owner: root
# group: projgroup
So far so good, right?
$ ls -alh test
-rw-r--r-- 1 user1 user 0 Apr 23 15:10 test
$ cp test project1
$ ls -alh project1/test
-rw-r--r--+ 1 user1 projgroup 0 Apr 23 15:10 project1/test
What the… ?!?! No group write? Noooooo!
$ getfacl project1/test
# file: project1/test
# owner: user1
# group: projgroup
And so we have the great POSIX ACL mask problem, which is in fact by design. I’m still looking for a complete solution that doesn’t involve trying to force a specific umask globally on every account… It would be nice if I could ensure that every file had group write set before it was copied into the group share directory but alas, I cannot. Telling users to manually check and change permissions is also a pain. Cron jobs that recursively add group write are ugly too. Please, someone provide me with the solution.
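For what it’s worth, the ugly cron-style band-aid mentioned above can be as simple as a periodic find over the share; the path and file name here are illustrative, not from the post:

```shell
# Simulate a file copied into the share without group write set:
SHARE=/tmp/project1                    # illustrative path
mkdir -p "$SHARE"
touch "$SHARE/report.txt"
chmod 644 "$SHARE/report.txt"

# The periodic fix-up: add group read/write to any file missing it.
find "$SHARE" -type f ! -perm -g+rw -exec chmod g+rw {} +
```

Run from cron every few minutes, this keeps the share usable, at the cost of a window where freshly copied files aren’t group writable, which is exactly why it’s ugly.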
Posted: March 30th, 2010 | Author: rthomson | Filed under: Tips & Tricks | Tags: linux, unix | No Comments »
Here’s a quick tip for killing your processes on Linux/UNIX:
kill -1 -1
The first -1 is the signal to send and the second -1 is a special PID meaning “every process”. Signal 1 is SIGHUP (hang up), which is basically a polite way of asking a process to terminate (or, for some daemons, to reload configuration). The reason sending SIGHUP to every process kills only the processes of the account that ran the command is permissions: the kernel will only deliver the signal to processes you are allowed to signal. Only processes running as the user who executes the command will receive it; other users’ processes, including those running as root, will not. Be careful, however: running this as root will attempt to kill ALL processes.
Not all processes will respond to SIGHUP by exiting, so sometimes more force is necessary:
kill -9 -1
The -9 signal is SIGKILL, which cannot be caught, blocked, or ignored, and should take care of any pesky processes that don’t want to exit nicely.
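The difference between the two is easy to demonstrate, since SIGHUP can be ignored while SIGKILL cannot. A quick sketch, with sleep standing in for a stubborn program:

```shell
# Start a process that ignores SIGHUP (the ignored disposition
# survives the exec into sleep):
( trap '' HUP; exec sleep 300 ) &
pid=$!

kill -1 "$pid"                  # SIGHUP: ignored, the process survives
sleep 1
kill -0 "$pid" && echo "still alive after SIGHUP"

kill -9 "$pid"                  # SIGKILL: cannot be caught or ignored
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "gone after SIGKILL"
```

(kill -0 sends no signal at all; it just checks whether the process exists and can be signaled.)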
And that’s that.