Browsing Automounted NFS with Nautilus

Has browsing automounted NFS shares with nautilus got you pulling out your hair in frustration?

Ever since we transitioned from the RHEL4 environment to Fedora 14, people have been reporting terrible slowness and delays in nautilus when browsing our NFS shares. Waiting over a minute for an automounted root-level NFS directory with fewer than 100 subdirectories to display its contents is not good.

This wasn’t a problem on our old RHEL4 terminal server, and I couldn’t for the life of me understand how nautilus could have become so slow in the years since RHEL4 was released. It just didn’t make sense. I started to think something had to be wrong, that this wasn’t just the new normal expected behaviour, but I had nothing to go on.

I tried the basic recommendations: disable thumbnails, disable previews, disable directory item counts. That didn’t help the user experience in any dramatic way. At this point, I started recommending pcmanfm and thunar as a workaround for nautilus’ terrible performance. I even wrote a fairly concise script for changing the default file manager and desktop-drawing application so that using a different file manager wouldn’t be so foreign in GNOME.
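
For the curious, here is a minimal sketch of the kind of thing that script did, assuming GNOME 2 on Fedora 14 with gconf available and pcmanfm already installed; the exact key and desktop file names may differ on your setup:

#!/bin/bash
# Sketch only: stop nautilus from drawing the desktop and hand
# directory handling over to pcmanfm (GNOME 2 / gconf assumed).
gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop false
xdg-mime default pcmanfm.desktop inode/directory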

Then one day I started looking at the verbose level output from automount while browsing the NFS mounts with nautilus and found a substantial amount of this in the logs:

Apr 28 11:19:10 hostname automount[18959]: attempting to mount entry /home/.svn
Apr 28 11:19:10 hostname automount[18959]: key ".svn" not found in map source(s).
Apr 28 11:19:10 hostname automount[18959]: failed to mount /home/.svn
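
(Getting automount to be that chatty just means raising its log level. On Fedora this is a sysconfig knob plus a daemon restart; the option name below is from memory, so double-check it against your autofs version.)

# In /etc/sysconfig/autofs, raise the daemon's log level:
LOGGING="verbose"
# Then restart autofs and watch syslog while browsing with nautilus:
service autofs restart
tail -f /var/log/messages | grep automount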

Oh my! Why are there repeated access attempts for “.svn”? What is causing automount to perform map lookups for “.svn” in the automount-controlled directories? Could it be nautilus?

Why yes!

As it turns out, the GNOME SVN integration package “gnubversion” includes a nautilus extension, and this extension was causing nautilus to look for “.svn” directories everywhere. Looking for “.svn” in a root-level automount directory triggers a slow map lookup failure, which (presumably) kills the perceptible performance of browsing automounted NFS shares.
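
If you want to check whether the same extension is in play on your own system, something like the following will tell you; the package name is what it was called on our Fedora 14 machines, and the extension path assumes nautilus 2.x:

# Is the SVN integration package installed?
rpm -q gnubversion
# Which nautilus extensions are actually on disk?
ls /usr/lib*/nautilus/extensions-2.0/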

I removed gnubversion (as no one was using it) and the user experience for nautilus has normalized. While nautilus still isn’t as speedy as pcmanfm or thunar, it’s no longer a cause of forceful hair removal incidents… and all is well in the world.

Fresh Win2k Install and Windows Update Error

I needed to re-install a Windows 2000 Pro system today because the HDD was failing and we wanted to convert from ATA to SATA at some point anyway. We have nice gzipped dd images of the system, but those were taken with the ATA drive and a different SATA controller. The install is also old and crufty. We need a system in a better-known state, so a fresh re-install it is.

As to why I’m installing Windows 2000 in 2011: the machine is an instrument controller, the proprietary control software requires Windows 2000 Pro and is finicky, and we only receive support when running the manufacturer-mandated OS and software stack version(s). There are a few other reasons why we also need to keep Windows 2000 Pro at this point, but they aren’t relevant or interesting.

Now on to the problem.

Windows Update no longer works from a fresh install of Win2k Pro! The issue is Internet Explorer 5, the version of IE bundled with Windows 2000 Pro. Windows Update now requires at least IE6 in order to function properly. I don’t know when that changed, but presumably some time ago, as I haven’t run Windows Update on a fresh 2000 install in years. Luckily the solution is fairly simple: just download IE6 from microsoft.com and get rockin’.

It struck me as strange at first but quite understandable after a few moments of reflecting on it, especially considering Windows 2000 reached end-of-life in July 2010.

Yeah, that post pretty much sucked. Sorry, folks.

LTSP 5 and AIGLX

Woot! LTSP 5 + LDM over SSH (LDM_DIRECTX=False in lts.conf) + Open source radeon driver with AIGLX is working!

Nothing like running compiz smoothly on a dual monitor thin client :D
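
For reference, the lts.conf piece of that recipe is tiny; this is a sketch of what ours boils down to (file location and section names vary between LTSP builds):

# lts.conf: keep the LDM session tunnelled over SSH instead of direct X
[Default]
LDM_DIRECTX = False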

The problem I was having was that, despite the X server on the thin client being fully configured and tested to use hardware acceleration locally, when connected to the terminal server over the secure LDM tunnel I was getting direct rendering with the software renderer, which is a big fail for compiz.

The key to keeping the software renderer from being used for DRI was setting LIBGL_ALWAYS_INDIRECT=1 as an environment variable. I don’t know why, with everything configured correctly, the system defaults to the software renderer instead of indirect rendering plus the hardware renderer, but forcing this environment variable in a global profile script allows for sexy hardware-accelerated compiz goodness from securely connected thin clients.
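
The “global profile script” is nothing fancier than a one-liner dropped into /etc/profile.d/ on the terminal server; the file name below is just an example:

# /etc/profile.d/indirect-gl.sh (example name)
# Force GLX indirect rendering so OpenGL ends up on the thin client's hardware
export LIBGL_ALWAYS_INDIRECT=1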

Without the environment variable to force indirect rendering, glxinfo output with the LIBGL_DEBUG=verbose environment variable set was complaining that the “drm device” didn’t exist. I suspect this is because glxinfo was expecting to find the /dev/dri/card0 device on the terminal server itself instead of on the thin client, and of course it doesn’t exist on the server… the OpenGL card is installed in the thin client!
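
For the record, this is the sort of check I was running from inside the LDM session; glxinfo reporting “direct rendering: No” is what you want here, since that means indirect GLX (and, with AIGLX on the thin client, hardware acceleration):

# Ask libGL to explain what it is doing while probing GLX
LIBGL_DEBUG=verbose glxinfo | grep -i "direct rendering"
# With the workaround applied, the same check should report indirect rendering
LIBGL_ALWAYS_INDIRECT=1 glxinfo | grep -i "direct rendering"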

There must be a way to get this working without the LIBGL_ALWAYS_INDIRECT environment variable, but I couldn’t figure it out… this really smells of a hack, but since it’s very easy to apply globally and it works just how I expect things to work, I’ll have to leave it in place until I can figure out a non-hacky way of getting the results I want with this configuration.