GlusterFS Madness

Background
As mentioned in Btrfs on the server, we have been using btrfs as our primary filesystem for our servers for the last year and a half or so, and, for the most part, it’s been great. There have only been a few times that we’ve needed the snapshots that btrfs gives us for free, but when we did, we really needed them.

At the end of the last school year, we had a bit of a problem with the servers and came close to losing most of our shared data, despite using DRBD as a network mirror. In response to that, we set up a backup server which has the sole job of rsyncing the data from our primary servers nightly. The backup server is also using btrfs and doing nightly snapshots, so one of the major use-cases behind putting btrfs on our file servers has become redundant.
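A nightly backup along these lines can be sketched as a small cron-driven script. The hostnames, export paths, and snapshot locations below are placeholders, not our actual layout:

```shell
#!/bin/sh
# Nightly backup sketch: pull data from the primary servers with rsync,
# then take a read-only btrfs snapshot named after the date.
# Hostnames and paths are placeholders.
set -e

DEST=/backup/current          # btrfs subvolume holding the latest copy
SNAPDIR=/backup/snapshots     # where the dated snapshots accumulate

for host in userdata shared; do
    # -a: preserve permissions/times, -H: hard links, -A/-X: ACLs/xattrs
    rsync -aHAX --delete "root@${host}:/srv/export/" "${DEST}/${host}/"
done

# Read-only snapshot; cheap because btrfs snapshots are copy-on-write.
btrfs subvolume snapshot -r "$DEST" "${SNAPDIR}/$(date +%F)"
```

Since the snapshots are copy-on-write, each night only costs the space of whatever changed that day.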

The one major problem we’ve had with our file servers is that, as the number of systems on the network has increased, our user data server can’t handle the load. The configuration caching filesystem (CCFS) I wrote has helped, but even with CCFS, our server was regularly hitting a load average of 10 during breaks and occasionally spiking as high as 20.

Switching to GlusterFS
With all this in mind, I decided to do some experimenting with GlusterFS. While our user data server was under heavy load, our local mirror and shared data servers both had consistently low loads, and I was hoping that GlusterFS would let me spread the load across all three.

The initial testing was very promising. When using GlusterFS over ext4 partitions with SSD journals on just one server, the speed was only a bit below NFS over btrfs over DRBD. Given the distributed nature of GlusterFS, I expected that adding more servers would increase throughput roughly linearly.
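For reference, an ext4 filesystem with its journal on a separate SSD partition can be created roughly like this (the device names are placeholders):

```shell
# Sketch: put the ext4 journal on an SSD partition, data on a 2TB drive.
# /dev/sda1 (SSD partition) and /dev/sdb1 (HDD) are placeholder names.

# 1. Turn the SSD partition into a dedicated external journal device.
mke2fs -O journal_dev /dev/sda1

# 2. Create the ext4 filesystem on the big drive, pointing at that journal.
mkfs.ext4 -J device=/dev/sda1 /dev/sdb1
```

This moves the synchronous journal writes, which NFS servers generate a lot of, off the spinning disk and onto the SSD.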

So I went ahead and broke the DRBD mirroring for our eight 2TB drives and used the four secondary DRBD drives to set up a production GlusterFS volume. Our data was migrated over, and we used GlusterFS for a week without any problems. Last Friday, we declared the transition to GlusterFS a success, wiped the four remaining DRBD drives, and added them to the GlusterFS volume.
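The volume operations described above map onto the GlusterFS CLI roughly as follows. The volume name, hostnames, and brick paths are hypothetical; the replica-2 layout matches the drive pairs described here:

```shell
# Sketch of the GlusterFS operations described above; the volume name,
# hostnames, and brick paths are placeholders.

# Create a distributed-replicated volume from the first pairs of drives.
gluster volume create userdata replica 2 \
    srv1:/bricks/a srv2:/bricks/a \
    srv1:/bricks/b srv2:/bricks/b
gluster volume start userdata

# Later, add the freed DRBD drives as additional replica pairs...
gluster volume add-brick userdata \
    srv1:/bricks/c srv2:/bricks/c \
    srv1:/bricks/d srv2:/bricks/d

# ...and redistribute the existing files onto the new bricks.
gluster volume rebalance userdata start
gluster volume rebalance userdata status
```

The rebalance step is the one that ran over the weekend; it is what moves already-written files onto the newly added bricks.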

I started the rebalance process for our GlusterFS volume Friday after school, and it continued to rebalance over the weekend and through Monday. On Monday night, one of the servers crashed. I went over to the school to power cycle the server, and, when it came back up, continued the rebalance.

Disaster!
Tuesday morning, when I checked on the server, I realized that, as a result of the crash, the rebalance wasn’t working the way it should. Files were being removed from the original drives but not being moved to the new drives, so we were losing files all over the place.

After an emergency meeting with the principal (who used to be the school’s sysadmin before becoming principal), we decided to ditch GlusterFS and go back to NFS over ext4 over DRBD. We copied the surviving files off the GlusterFS partitions and then filled in the gaps from our backup server. Twenty-four sleepless hours later, the user data was back up; the shared data followed twenty-four hours after that.

Lessons learned

  1. Keep good backups. Our backups allowed us to restore almost all of the files that the GlusterFS rebalance had deleted. The only files lost were the ones created on Monday.
  2. Be conservative about what you put into production. I’m really not good at this. I like to try new things and to experiment with new ideas. The problem is that I can sometimes put things into production without enough testing, and this is one result.
  3. Have a fallback plan. In this case, our fallback was to wipe the server and restore all the data from the backup. It didn’t quite come to that as we were able to recover most of the data off of GlusterFS, but we did have a plan if it did.
  4. Avoid GlusterFS. Okay, maybe this isn’t what I should have learned, but I’ve already had one bad experience with GlusterFS a couple of years ago where its performance just wasn’t up to scratch. For software that’s supposedly at a 3.x.x release, it still seems very beta-quality.

The irony of this whole experience is that by switching the server filesystems from btrfs to ext4 with SSD journals, the load on our user data server has dropped to below 1.0. If I’d just made that switch, I could have avoided two days of downtime and a few sleepless nights.

Nuclear explosion credit – Licorne by Pierre J. Used under the CC-BY-NC 2.0 license.


4 thoughts on “GlusterFS Madness”

  1. Jeff Darcy (@Obdurodon)

    It’s hard to say what might have happened without a lot more information, which I’ll assume was in your bug report and is being examined by others, but I am curious about one thing. Usually when GlusterFS is used to replace DRBD it’s used with replication enabled, but you don’t mention replication. Both your performance results and your rebalance failure also seem more consistent with a non-replicated setup. Am I correct in concluding that you were not using GlusterFS replication?

    1. jdieter Post author

      First off, my apologies, but I haven’t created a bug report. We were at a point where we needed the system up ASAP, and I didn’t expect there’d be much value for you in a bug report without having a live system to test fixes on.

      We did have replication set up: originally two pairs of 2TB drives, later expanded to four pairs. It was during the migrate-data stage of the rebalance that one of the servers died and was restarted, and that’s when we started losing data across both replicated drives.

      I do still have the logs, so if you think it’s worth it for me to open a bug report, I’ll happily do so. I just won’t be able to test any fixes.

      1. Jeff Darcy (@Obdurodon)

        A bug report with logs would certainly be most appreciated, if you ever get the chance. If you were using replication, then there would seem to be two mysteries: why the replication layer didn’t deal with the single node failure by just using the other replica, and why the distribution layer responded incorrectly to the failure of the replica pair by deleting files that hadn’t been written successfully to their new locations. Neither behavior is consistent with my own experience, and the second failure seems particularly odd because the rebalance code doesn’t actually *delete* any files (it copies and then renames over the original). I assume that you checked for presence of the “missing” files on the bricks that had remained up, not just the one that had failed, to rule out the possibility that the failed brick had just not been properly reintegrated. That would still be a bug, but of a different sort than what you’ve described.

        In any case, I’m sorry that you had this problem, and thank you (in advance) for any information you can provide in case this is a problem that could affect others.
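The copy-then-rename behaviour described in the comment above is a standard pattern worth seeing in isolation: because rename() is atomic within a filesystem, the destination name always refers to either the complete old file or the complete new copy, never a missing or half-written one. The file names here are hypothetical:

```shell
# Illustration of the copy-then-rename pattern: the destination path is
# never absent or half-written, because mv (rename) is atomic within a
# single filesystem. File names are placeholders.
set -e
printf 'old contents\n' > data.txt

# Write the new copy to a temporary name first...
printf 'new contents\n' > data.txt.tmp
# ...then atomically rename it over the original.
mv data.txt.tmp data.txt

cat data.txt   # prints "new contents"
```

A crash at any point leaves either data.txt intact or data.txt plus a stray temporary, which is why files silently disappearing during a rebalance is so surprising.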
