Tag Archives: fedora

Summer work

The dog and the river

It’s summer, we’re in the US, and I’m thoroughly enjoying the time with my family. It’s been quite a while since we’ve seen everyone here, and we’ve all been having a blast. The one downside (though my wife is convinced it’s an upside) is that my parents have limited internet, so my work time has been, out of necessity, minimal. It has nothing to do, I assure you, with our beautiful beach on the river.

I have managed to push through a bugfix LizardFS update for Fedora and EPEL, and I’ve been working on some benchmarks comparing GlusterFS, LizardFS and NFS. I’ve been focusing on the compilebench benchmark which basically simulates compiling and reading kernel trees, and is probably the closest thing to our usage pattern at the school (lots of relatively small files being written, changed, read and deleted).
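
In case anyone wants to reproduce the runs, the invocations look roughly like this (the mount points are placeholders, and the -i/-r values are just what I've been experimenting with):

    # compilebench: -D target dir, -i initial kernel trees to create, -r runs.
    ./compilebench -D /mnt/lizardfs/bench -i 10 -r 30
    ./compilebench -D /mnt/glusterfs/bench -i 10 -r 30
    ./compilebench -D /mnt/nfs/bench -i 10 -r 30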

Using NFS isn’t really fair, since it’s not distributed, but it’s still the go-to for networked storage in the Linux world, so I figured it would be worth getting an idea of exactly how much slower the alternatives are. If I can get Ceph up and running, I’ll see if I can benchmark it too.

In other news, I have the privilege of attending Flock again this year. I’m really looking forward to getting a better feel for Fedora’s movement towards modules, something that I hope to put into practice on the systems at our school over the next year.

Hopefully, I’ll get a chance to get my benchmarks out within the next couple of weeks, and I’m sure I’ll have a lot to say about Flock.


Multiseat systems and the NVIDIA binary driver (update)

fireworks

Last month I wrote about using the NVIDIA binary driver with multiseat systems. There were a number of crazy tweaks that we had to use to make it work, but with some recent updates, the most egregious are no longer necessary. Hans de Goede posted about an Xorg update that removes the requirement for a separate Xorg configuration folder for the NVIDIA card, and I’ve created a pull request for negativo17.org’s NVIDIA driver that uses the updated Xorg configs in a way that’s friendly to multiseat systems.

To make it Just Work™, all you should need is xorg-x11-server-1.19.0-3.fc25 from F25 updates-testing, my mesa build (source here, Fedora’s mesa rebuilt with libglvnd enabled), my NVIDIA driver build (source here), and the negativo17.org nvidia repository enabled.
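
Sketched out as commands, the setup is roughly the following (the repo URL and package names are the usual negativo17.org ones, but double-check them against the links above):

    # Enable the negativo17.org NVIDIA repository (URL may change).
    dnf config-manager --add-repo=https://negativo17.org/repos/fedora-nvidia.repo
    # Pull the fixed X server from updates-testing.
    dnf --enablerepo=updates-testing update xorg-x11-server-Xorg
    # Install my rebuilt mesa (libglvnd-enabled, from the links above)
    # and negativo17's NVIDIA driver package.
    dnf update mesa-libGL && dnf install nvidia-driver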

With the above packages, Xorg should use the correct driver automagically with whatever video card you have.

Multiseat systems and the NVIDIA binary driver

Building mesa

Ever since our school switched to Fedora on the desktop, I’ve either used the onboard Intel graphics or AMD Radeon cards, since both are supported out of the box in Fedora. With our multiseat systems, we now need three external video cards on top of the onboard graphics on each system, so we’ve bought a large number of Radeon cards over the last few years.

Unfortunately, our local supplier has greatly reduced the number of AMD cards that they stock. In their latest price lists, they have a grand total of two Radeon cards in our price range, and one of them is almost seven years old!

This has led me to take a second look at NVIDIA cards, and I’m slowly coming back around to the concept of buying them and maybe even using their binary drivers. Our needs have changed since we first started using Linux, and NVIDIA’s binary driver does offer some unique benefits.

As we’ve started teaching 3D modeling using Blender, render time has become a real bottleneck for some of our students. We allow students to use the computers before and after school, but some of them don’t have much flexibility in their transportation and need to get their rendering done during the school breaks. Having two or three students all trying to render at the same time on a single multiseat system can lead to a sluggish system and very slow rendering. The easiest way to fix this is to do the rendering on the GPU, which Blender does support, but only with NVIDIA’s binary driver.

So about a month ago, I ordered a cheap NVIDIA card for testing purposes. I swapped it with an AMD card on one of our multiseat systems and powered it up. Fedora recognized the card using the open-source nouveau driver and everything just worked. Beautiful!

Then, a few hours later, I noticed the system had frozen. I rebooted it, and, after a few hours, it had frozen again. I moved the NVIDIA card into a different system, and, after a few hours, it froze while the original system just kept running.

Some research showed that the nouveau driver sometimes has issues with multiple video cards on the same system. There was some talk about extracting the binary driver’s firmware and using it in nouveau, but I decided to see if I could get the binary driver working without breaking our other Intel and AMD seats.

The first thing I did was upgrade the test system to Fedora 25 in hopes of taking advantage of the work done to make mesa and the NVIDIA binary driver coexist. I then installed the binary NVIDIA drivers from this repository (mainly because his version of Blender already has the CUDA kernels compiled in). The NVIDIA seat came up just fine, but I quickly found that mesa in Fedora 25 isn’t built with libglvnd enabled (libglvnd is a shim that dispatches to either the mesa or the NVIDIA OpenGL implementation, depending on which card your application is running on), so none of the seats based on the open drivers came up. But even with libglvnd enabled, I ran into this bug, so I ended up extending this patch to also work with Gallium drivers and applying it.

This took me several steps closer, but apparently the X11 GLX module is not part of libglvnd, and NVIDIA sets the Files section in xorg.conf to use its own GLX module (which, oddly enough, doesn’t work with the open drivers). I finally worked around this via the ugly hack of creating two different xorg.conf.d directories and telling lightdm to use the NVIDIA one when loading the NVIDIA seat.
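
Concretely, the hack boils down to something like this (the seat name and paths are from my setup and purely illustrative):

    # Give the NVIDIA seat its own Xorg config directory.
    mkdir -p /etc/X11/xorg.conf.d-nvidia
    # (Put the NVIDIA Device/Files sections in there.)
    # Point lightdm's NVIDIA seat at that directory.
    cat > /etc/lightdm/lightdm.conf.d/50-nvidia-seat.conf <<'EOF'
    [Seat:seat-nvidia]
    xserver-command=/usr/libexec/Xorg -configdir /etc/X11/xorg.conf.d-nvidia
    EOF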

Voilà! We now have a multiseat system with one Intel built-in card using the mesa driver, two AMD cards using the mesa Gallium driver, and one NVIDIA card using the NVIDIA binary driver. And it only cost me eight hours and my sanity.

So what needs to happen to make this Just Work™? Either libglvnd needs to also include the X11 GLX module, or we need a different shim to accomplish the same thing. And Fedora needs to build mesa with libglvnd enabled (but not until this bug is fixed!).

My mesa build is here and the source rpm is here. There is a manual “Provides: libGL.so.1()(64bit)” in there that isn’t technically correct, but I really didn’t want to recompile negativo17’s libglvnd to add it in, and my mesa build requires that libglvnd implementation.

My xorg configs are here and my lightdm configuration is here. Please note that the xorg configs have my specific PCI paths; yours may differ.
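
To show what I mean by PCI paths, each card’s Device section pins a BusID, roughly like this (the BusID below is made up; lspci will show yours):

    # Find the cards' PCI addresses:
    lspci | grep -i vga
    # Pin a seat's Device section to one card (BusID illustrative):
    cat > /etc/X11/xorg.conf.d/10-seat2-amd.conf <<'EOF'
    Section "Device"
        Identifier "AMD-seat2"
        Driver     "radeon"
        BusID      "PCI:3:0:0"
    EndSection
    EOF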

And I do plan to write a script to automate the xorg and lightdm configs. I’ll update this post when I’ve done so.

Sidenote: As I was looking through my old posts to see if I had anything on NVIDIA, I came across a comment by Seth Vidal. He was an excellent example of what the Fedora community is all about, and I really miss him.

Update: Configuration has become much simpler. An updated post is here.

From NFS to LizardFS

If you’ve been following me for a while, you’ll know that we started our data servers out using NFS on ext4 mirrored over DRBD, hit some load problems, switched to btrfs, hit load problems again, tried a hacky workaround, ran into problems, dropped DRBD for glusterfs, had a major disaster, switched back to NFS on ext4 mirrored over DRBD, hit more load problems, and finally dropped DRBD for ZFS.

As of March 2016, our network looked something like this:

Old server layout

Our NFS over ZFS system worked great for three years, especially after we added SSD cache and log devices to our ZFS pools, but we were starting to overload our ZFS servers and I realized that we didn’t really have any way of scaling up.
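
For reference, adding those SSDs to an existing pool is just a couple of zpool commands (the pool and device names below are placeholders):

    # Mirrored SSD log device (ZIL) to absorb synchronous writes.
    zpool add tank log mirror /dev/sdc /dev/sdd
    # SSD read cache (L2ARC).
    zpool add tank cache /dev/sde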

This pushed me to investigate distributed filesystems yet again. As I mentioned here, distributed filesystems have been a holy grail for me, but I never found one that would work for us. Our problem is that our home directories (including config directories) are stored on our data servers, and there might be over one hundred users logged in simultaneously. Linux desktops tend to do a lot of small reads and writes to the config directories, and any latency bottlenecks tend to cascade. This leads to an unresponsive network, which then leads to students acting out the Old Testament practice of stoning the computer. GlusterFS was too slow (and almost lost all our data), CephFS still seems too experimental (especially for the features I want), and there didn’t seem to be any other reasonable alternatives… until I looked at LizardFS.

LizardFS (a completely open source fork of MooseFS) is a distributed filesystem that has one fascinating twist: All the metadata is stored in RAM. It gets written out to the hard drive regularly, but all of the metadata must fit into the RAM. The main result is that metadata lookups are rocket-fast. Add to that the ability to direct different paths (say, perhaps, config directories) to different storage types (say, perhaps, SSDs), and you have a filesystem that is scalable and fast.
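
To give a rough idea of how that path-to-storage mapping works (the label and goal name below are made up for illustration): you label the SSD-backed chunkservers, define a goal that uses the label, and then assign the goal to a directory tree.

    # In /etc/mfs/mfschunkserver.cfg on each SSD-backed chunkserver:
    #   LABEL = ssd
    # In /etc/mfs/mfsgoals.cfg on the master (goal id 10, two copies on ssd):
    #   10 fast : ssd ssd
    # Then, from a client mount, direct the config directories to that goal:
    lizardfs setgoal -r fast /mnt/lizardfs/home/user/.config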

LizardFS does have its drawbacks. You can run hot backups of your metadata servers, but only one will ever be the active master at any one time. If it goes down, you have to manually switch one of the replicas into master mode. LizardFS also has a very complicated upgrade procedure: first the metadata replicas must be upgraded, then the master, and finally the clients. There are also some corner cases where replication is not as robust as I would like it to be, but they seem to be well understood and really only seem to affect very new blocks.
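
For what it’s worth, the manual failover is at least short. Something like this should do it (the service name is from the LizardFS packaging and may differ on your distribution):

    # On the shadow master being promoted:
    sed -i 's/^PERSONALITY = shadow/PERSONALITY = master/' /etc/mfs/mfsmaster.cfg
    systemctl restart lizardfs-master
    # Then repoint the mfsmaster DNS record at this host so the
    # chunkservers and clients reconnect to the new master.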

So, given the potential benefits and drawbacks, we decided to run some tests. The results were instant… and impressive. A single user’s login time on a server with no load… doubled. Instead of five seconds, it took ten for them to log in. Not good. But when a whole class logged in simultaneously, it took only 15 seconds for them to all log in, down from three to five minutes. We decided that a massive speed gain in the multi-user scenario was well worth the speed sacrifice in the single-user scenario.

Another bonus is that we’ve gone from two separate data servers with two completely different filesystems (only one of which ever had high load) to five data servers sharing the load while serving out one massive filesystem, giving us a system that now looks like this:

New server layout

So, six months on, LizardFS has served us well, and will hopefully continue to serve us for the next (few? many?) years. The main downside is that Fedora doesn’t have LizardFS in its repositories, but I’m thinking about cleaning up my spec and putting in a review request.

Updated to add graphics of old and new server layouts, info about Fedora packaging status, LizardFS bug links, and remove some grammatical errors

Updated 12 April 2017 I’ve just packaged up LizardFS to follow Fedora’s guidelines and the review request is here.

Flock 2016

A man and woman driving a horse-drawn buggy

Downtown Kraków

I have just returned from a vacation in beautiful Kraków, where, entirely coincidentally, there just happened to be a Fedora conference! My family and I enjoyed the amazing sights around Kraków (if you haven’t visited the salt mine, you really should), but my personal highlight was getting to attend my first Flock, where I finally got to meet, face to face, people I’d previously only talked with via IRC and email.

I got the chance to speak about how we use Fedora in the classroom at our school (slides here). There were some excellent questions from the audience at the end, and I realized (again!) that my biggest problem is creating decent documentation about what we’re doing so others can follow.

One of my goals over the next year is to make sure that our work is easily reproducible in other schools, both from the sysadmin side and from the educational side.

My biggest take-away from the conference is that Fedora is moving into some very interesting times as it starts to expand beyond RPMs as the only system delivery mechanism. I’m very interested in ostree, with its concept of a read-only system partition, and the work they’re doing on layered trees so you can have multiple system images branching off of one base image.

I’d really like to thank the event organizers for all the work they did putting Flock together, the design team for the beautiful t-shirts, and the Fedora community for just being great. And, while I’m at it, I’d like to extend personal thanks to Rafał Luzynski and his wife, Maja, for their hospitality.

Talk – Using Fedora in the classroom

Spreadsheet assignment

So I’m sitting here in Kraków, doing some last-minute preparation for my talk (Fedora in the Classroom) at the upcoming Flock conference next week.

I’ll be looking at why we use Fedora in our school, what tools we use to set up and maintain our workstations, and the actual subjects that we teach our students, complete with projects[1] that our students have done.

If you’re a teacher looking for ways to use open source software in the classroom, an administrator looking for a computer curriculum that emphasizes creativity and comprehension over memorization and rote learning, or you’re just interested in seeing how Fedora is effectively used in a school environment, please do come check it out.

 
[1] Projects have been anonymized to protect student privacy

Notes on a mass upgrade to Fedora 23

Fedora 23 desktop

One of the hardest parts of running Fedora in a school setting is keeping on top of the upgrades, and I ended up falling a few months behind. Fedora 23 was released back in November, and it took me until February to start the upgrade process.

For our provisioning process, we’ve switched from a custom koji instance to ansible (with our plays on github), and this release was the first time I was really able to take advantage of it. I changed our default kickstart to point to the Fedora 23 repositories, installed it on a test system, ran ansible on it, and voilà, I had a working Fedora 23 setup, running perfectly with all our school’s customizations. It was the easiest upgrade experience I’ve ever had!
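
Once the kickstart finishes, the whole provisioning step reduces to a single command (the inventory and playbook names below are stand-ins for the ones in our github repo):

    # Apply all of our school's customizations to the fresh install.
    ansible-playbook -i hosts site.yml --limit new-workstation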

Well, mostly.

As usual, the moment you think everything is perfect is the moment everything goes wrong. On our multiseat systems, we have three external AMD graphics cards along with the internal Intel graphics. The first bug I noticed was that the Intel card wasn’t doing any graphics acceleration. It turns out that VGA arbitration is automatically turned on if you have more than one video card, and Intel cards don’t support it in DRI2. DRI3 does handle arbitration just fine, but it was (and still is) disabled in the latest xorg-x11-drv-intel in the updates repository. Luckily for me, there’s a build in koji that re-enables DRI3. Problem solved.
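
Depending on whether Fedora’s build merely defaults DRI3 off or compiles it out entirely, a config snippet along these lines may also do the trick (Option “DRI” is documented for the intel driver, but I haven’t verified this against Fedora’s exact build):

    # Ask the intel driver for DRI3 (only helps if support is compiled in).
    cat > /etc/X11/xorg.conf.d/20-intel-dri3.conf <<'EOF'
    Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
        Option     "DRI" "3"
    EndSection
    EOF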

The second bug was…odd. While we use gnome-shell as the default desktop environment in the school, we use lightdm for logging in, mainly because of its flexibility. We run xscreensaver on the login screen (and only on the login screen) to make it clear which computers are off, which are on, and which are logged in. GDM doesn’t support xscreensaver, but lightdm does. And this brings us back to the bug. On the Intel seat, moving the mouse or pressing a key would stop the screensaver as expected, but the screen would remain black except for the username control. It seems that the “VisibilityNotify” event isn’t being honored by the driver (though don’t ask me why it should be passed down to the driver). I filed a bug, and then finally figured out that fading xscreensaver back in works around the problem.
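
The actual workaround is a couple of lines in the .xscreensaver settings for whichever account runs the login screen (the fade time is arbitrary, and I’m assuming xscreensaver reads its usual per-user settings file there):

    # Append to the login-screen user's ~/.xscreensaver:
    cat >> ~/.xscreensaver <<'EOF'
    unfade:      True
    fadeSeconds: 0:00:03
    EOF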

The third bug is even stranger. On the teacher’s machine, we have a small script that starts x11vnc (giving no control to anyone connecting to it) so the teacher can give a demonstration to the students. But after installing Fedora 23 on the teacher’s machine, the demo kept showing the same three frames over and over. The teacher’s system isn’t multiseat and is using the built-in Intel graphics, so, oddly enough, disabling DRI3 fixed the problem. I filed another bug.
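
The script itself is essentially a one-liner around x11vnc in view-only mode (the display number is an assumption; our real script has a bit more wrapping):

    # Share the teacher's display read-only; students connect with any VNC viewer.
    x11vnc -display :0 -viewonly -shared -forever -nopw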

When upgrading the staff room systems, I ran into a bug in which cups runs screaming into the night (ok, slight exaggeration) if you have a server announcing printers over both the old cups and new dnssd protocols. Since we don’t have any pre-F21 systems any more, I’ve just disabled the old cups protocol on the server.
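
For anyone hitting the same thing, disabling the legacy protocol is one directive, assuming cups-browsed is what’s doing the announcing on your server:

    # Announce printers over dnssd only; drop the legacy cups protocol.
    echo 'BrowseLocalProtocols dnssd' >> /etc/cups/cups-browsed.conf
    systemctl restart cups-browsed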

And, finally, my principal, who teaches computers to grades 11 and 12, came in to ask me why LibreOffice was crashing for a couple (and only a couple) of his students when they were formatting cells on a spreadsheet that he gave them. After some fancy footwork involving rm’d .config/libreoffice directories and files saved into random odd formats and then back into ods, we finally managed to format the cells without a crash. Lovely.

All this brings me back to ansible. In each of the bugs that required changes to the workstations, all I had to do was update the ansible scripts and push the changes out. Talk about painless! Ansible has made this job so much easier!

And I do want to finish by saying that these bugs are part of the reason that I love Fedora. With Fedora, I have the freedom to fix these problems myself. For both the cups bug and the xscreensaver bug, I was able to dig into the source code to start tracking down where the problem lay and come up with a workaround. And if I can just get the LibreOffice bug to reproduce, I could get a crash dump off of it and possibly figure it out too. Hurrah for source code!