For the second consecutive year, our grade nine students have been doing 3D modeling using Blender. A couple of weeks ago, our students finished up their first assignments, and I gave the top modelers the option of showing off their work. So, without further ado, here are the top three models in each of the three grade nine classes.
Thanks to some help from the school, Tro Chakerian (one of my assistants) and I will be going to Brno, Czech Republic for DevConf 2014. I’m really looking forward to getting to meet some other Fedora developers, and I’m hoping to learn a few things that will advance our school’s system over the next year or so.
I’m particularly looking forward to “Why use a SAT solver for package management?”, “DNF API” and “Fedora.next: Future of Fedora Big Picture”. The last one is probably the most important for us as a school, as we plan to continue to use Fedora on the desktop for the foreseeable future.
First, the hardware for a multiseat system (and the price at time of order from our local supplier):
- 1 x Intel G2020 – 2.90 GHz – $65
- 1 x Kingston DDR3-1600 8G – $65
- 1 x MSI Z77A-G45 motherboard – $155, or
- 1 x Asus P8Z77-V LK motherboard – $160
- 1 x Kingston SSDNow V300 60GB – $70
- 3 x Sapphire Radeon HD6450 – $50 each
- 1 x Generic case – $20
- 4 x 4 Port USB hub – $5 each
- Tax – 10%
The final price is somewhere between $600 and $610, depending on the motherboard.
Once you have the hardware built, make sure the onboard video is enabled in the BIOS and is set to be the primary display. Plug the USB hubs into the computer, and make sure you don’t swap ports after they’ve been plugged in. Then install the standard Fedora 19 GNOME desktop along with the latest version of the lesbg-multiseat package from the school’s repositories, and enable the multiseat service (systemctl enable prepare-multiseat).
Make sure GDM is installed and that you’re using it as your display manager. You can use any desktop environment you’d like but you must use GDM (or LightDM with some patches) as other display managers don’t recognize systemd’s seat management. Reboot the computer.
When the computer comes up, there should be a login screen on each monitor. Each USB hub should automatically match a monitor, but you may have to swap ports so the hubs match the right monitors. lesbg-multiseat will always try to match the USB hubs to the video cards in order, so the first USB port will match the first video card, and so on.
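If a hub still ends up on the wrong seat, logind can be queried and adjusted by hand; this is the mechanism that lesbg-multiseat automates. The seat names and the sysfs path below are examples:

```shell
loginctl list-seats                    # e.g. seat0, seat-1, seat-2, seat-3
loginctl seat-status seat-1            # show the GPU and USB devices owned by a seat
# Manually attach a device (here, a USB hub, by sysfs path) to a seat;
# logind records the assignment so it survives reboots:
loginctl attach seat-1 /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1
loginctl flush-devices                 # drop all manual assignments and start fresh
```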
Congratulations, you now have a multiseat system. Note that the configuration is designed to be minimal. We use the same OS image for single-seat or multiseat systems.
This year in our main computer room, we switched from single-seat systems to multiseat systems. Our old single-seat systems cost us roughly $300 a system, and we would generally buy 20 a year. The goal with our multiseat systems was to see if we could do better than $300/seat. I also had a number of requirements, some of which would raise the cost, while others couldn’t be met the last time I looked into multiseat systems.
My first requirement was 3D acceleration on all seats. I know someone’s been working on separating OpenGL processing from the display server, which would theoretically allow us to use Plugable devices, but until that’s done, we need a separate video card for each seat. We also need motherboards that can support more than one PCIE video card (as well as preferably supporting the built-in GPU). This is the main extra expense for our multiseat systems.
My second requirement was plug-and-play USB. The last time I looked into multiseat, that wasn’t supported under Linux; USB devices would only be detected if they were plugged in when the X server started. But, thanks to some relatively new code in systemd which is now controlling logins using logind, USB ports can be directed to specific seats, with the devices plugged into them appearing in the correct seat when they’re plugged in.
In June, we bought a test system that came to just under $600. To our normal order we added a gaming motherboard, three of the cheapest PCIE AMD Radeon 5xxx/6xxx series cards we could find, extra RAM, and four USB hubs. The idea with the USB hubs was to place one next to each monitor and create our own wannabe-Plugable devices. I then wrote a small program that would deterministically assign each USB hub to a different monitor on bootup. An extra bonus to this program is that we can daisy chain the USB hubs. Once the program was working, I let the students play with the test system… and it worked!
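I haven’t included the program’s source here, but the core idea is simple enough to sketch: enumerate the hubs and the cards, sort each by its stable bus address, and pair them up in order (the real program then tells logind about each pairing). The paths below are made up for illustration:

```python
# Sketch of the deterministic-matching idea behind lesbg-multiseat: sort the
# USB hubs and video cards by their stable bus paths, then pair them in order.
# The device paths here are illustrative, not real sysfs entries.

def pair_hubs_to_cards(hub_paths, card_paths):
    """Pair the nth hub (sorted by bus path) with the nth card (sorted by PCI address)."""
    return list(zip(sorted(hub_paths), sorted(card_paths)))

if __name__ == "__main__":
    hubs = ["usb3/3-2", "usb1/1-4", "usb2/2-1"]
    cards = ["0000:01:00.0", "0000:03:00.0", "0000:02:00.0"]
    for hub, card in pair_hubs_to_cards(hubs, cards):
        print(hub, "->", card)   # usb1/1-4 pairs with 0000:01:00.0, and so on
```

Because the ordering depends only on bus addresses, the pairing is the same on every boot, which is what makes daisy-chaining the hubs workable.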
So, during the summer, we bought ten more systems and put them in our main computer room. At four seats per system, we are saving 50%, so we were able to replace all forty computers in the main room in one year (and add four more seats as a bonus).
The main annoyance we’re still dealing with is that the USB hubs we got aren’t that great, and we’ve had a few fail on us. But they’re easy (and cheap) to replace. I also had to make some changes to X, like re-enabling Ctrl+Alt+Backspace as a solution for a stuck seat, which is better than rebooting the whole computer. And we do have the occasional hang where all four seats stop working, which I think is tied to the number of open files, but I haven’t tracked it down yet.
I’ve been very happy with our multiseat systems and would like to extend a huge thank you to the systemd developers for their work on logind.
Edit: More details are available in this post.
In the Linux Showdown, you get 30 minutes to complete a certain number of sysadmin tasks. Some of the tasks are pretty simple, while others are more difficult. I entered the first day and managed to get 9th place with a score of 100% and a time of just under 17 minutes.
The second day I ran into trouble. One of the tasks was to reset the mysql root password, and, though I followed the directions here, twice, I was never able to log into mysql as root. The commands seemed to be running correctly, but I was locked out.
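For the record, the era-appropriate recipe (for MySQL 5.x, and roughly what those directions describe) looks like this; service names and paths vary by distro:

```shell
service mysql stop
mysqld_safe --skip-grant-tables &      # start the server with privilege checks disabled
mysql -u root <<'SQL'
UPDATE mysql.user SET Password = PASSWORD('newpass') WHERE User = 'root';
FLUSH PRIVILEGES;
SQL
# then stop the skip-grant-tables instance and start mysql normally
```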
In my day-job as the system administrator for a school, I would keep bashing away at the problem until I figured out what I was doing wrong. In the competition, I ran out of time after fifteen minutes of debugging and ended up with a lousy 40%. Ouch!
I was frustrated, but figured the third day’s competition should fit a bit better. The hint said that it was a scripting competition, and my python-fu is pretty decent. Sure enough, day three involved finding files with modification times between two dates, adding them to a database, and then tarring them up.
I came up with a python script that found the necessary files and added them to the database. Except my clever ‘INSERT’ statement didn’t actually work. If I manually copied and pasted it into mysql, it worked perfectly, but it didn’t run from the script. Grrr. I spent ten minutes debugging… and my time was up!
Well, that sucked. This time I got an impressive 20%. Double ouch!
After finishing the test, I went to bed and spent fifteen minutes ranting to my poor wife. The next day, after cooling off, I decided I was done. The hint for the last competition said that it had something to do with security, and I wouldn’t call myself an expert on that. If I’m getting 20% in the areas that I’m relatively good at, then what should I expect in areas that I’m less comfortable with?
Then it hit me. If I’m not comfortable with it, why not just do it for fun? If I know I’m probably going to get a zero, who cares? I checked the leaderboard, and the highest score at the time was 67%, so my zero wouldn’t be so bad. I went ahead and started the last competition.
Step one, secure the mail server. We don’t run our own mail servers here at the school and I know nothing about postfix, so I spent ten minutes or so Googling for some kind of solution, typed in what I thought was a partial fix, and then decided to give up.
Step two, secure a page on the webserver. This is something I have to do quite often, so I was able to get it done in five minutes or so.
Finally, step three, secure an FTP server. Who still uses FTP? We don’t! I wasn’t even sure what the ftp daemon’s name was, so I ran a ‘ps aux | grep ftp’. This was the only reason that I noticed that the ftp daemon wasn’t using the config file in /etc, but rather some config file in someone’s home directory. I did what I thought would secure the ftp server in both config files, and saw that I had a little over two minutes left.
Ok, I could have spent some more time on postfix, but I knew nothing about it, so I decided that I was finished. Worst case, I’d get 33% for the webserver (which was the only fix I’d actually tested). Best case, 67% for the ftp server, which I was pretty sure I’d fixed. If so, I might actually get in the top twenty. So, I logged in to the leaderboard, checked my ranking… First!??!? With 100%? What?
Apparently the random lines from Google that I put into my postfix config had secured it. Pure luck. As I followed the leaderboard for the rest of the day, it became obvious that many people with a lot of experience with apache, postfix and ftp were whipping right through the contest, missing the ftp config file in the home directory, and getting 67%, while I kept sitting on top with the lone 100%. I felt like such a fraud.
Finally, in the last hour before the contest ended, someone else found the solution five minutes faster than I did and got first place. Praise God! I still felt like a fraud, but at least first place was going to someone who knew what they were doing.
So, in four days of competitions, I got the highest score in the areas I was weakest in and the lowest score in the areas I was strongest in. That seems to indicate either that I don’t know what my strengths and weaknesses are, or that the competition needs some tweaking. Well, I think I’m at least reasonably aware of my strengths and weaknesses, and I’m very aware of how much of a role chance played in all four days of competition. So how can this competition be tweaked?
The strengths of the competition are pretty obvious. The whole point of TrueAbility is to winnow out people who talk the talk, but can’t walk the walk. When you get a résumé, you don’t know whether the applicant can actually do all the things they claim to be able to do, so, with TrueAbility, you give someone a VM and a list of tasks, and see whether or not they can do them. TrueAbility doesn’t care how they do the tasks, they just check that the tasks are completed. Brilliant!
The biggest weakness in the competition is the time limit. A vast majority of the problems we face as sysadmins need to be fixed quickly, but rarely does a complex problem need to be solved within 30 minutes. The time limit in the competition introduces a bias against those who work methodically. While hiring fast workers is always nice, basing hiring decisions on how fast someone can code rather than how well they code is not wise.
In addition, the marking (especially for the last few days) was extremely coarse, so ranking was heavily dependent on how quickly you finished. This was especially noticeable on the first day, where the only difference between 1st place and 28th place was whether you took 10 minutes to finish the job or 30 minutes. As was obvious in the last day’s competition, this emphasis on time caused people to rush so much that they made mistakes. Time makes a lousy basis for ranking.
So what’s the solution? I see two complementary things that could be done to improve the competition. The first is to break down the grading even more, and assign different values to the different tasks. I’d even add in some standard tasks (with a total score of a maximum of 20%) along the lines of “Make sure that you close any ports not needed for your task”, “Disallow password logins over ssh and set up the server to trust your ssh key”, and “Replace your Ubuntu install with the real sysadmin’s OS: Fedora”. Ok, I’m half joking on that last one, but you get the idea. The key thing is that it should be almost impossible to get 100%, but a mediocre sysadmin should be able to hit 70% with only minor difficulty, and a talented sysadmin shouldn’t have much trouble reaching 90%.
The other thing that would help would be a removal of the hard deadline. Instead, allow candidates to continue working beyond the time limit, with a deduction of 1-2% for every minute. This introduces a cost to breaking the deadline without causing the candidate to completely fail because they needed ten more minutes.
With these two adjustments, time should become secondary to doing the job right. If I spend 10 minutes getting 90%, I’ll still get a lower score than someone who takes their time to do it right in 30 minutes. And, if I spend 40 minutes reaching 90%, I’ll only lose 20% for going over and end with a score of 70%, rather than sitting at zero because I just couldn’t finish my script within the deadline.
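The arithmetic above, as a sketch (the 30-minute limit and the 2%-per-minute rate are just the numbers from my example):

```python
# Soft-deadline scoring: task scores are weighted, and finishing late costs a
# fixed percentage per minute instead of an automatic zero.

def final_score(task_score, minutes_taken, limit=30, penalty_per_min=2):
    """task_score is 0-100; each minute past the limit costs penalty_per_min points."""
    overtime = max(0, minutes_taken - limit)
    return max(0, task_score - overtime * penalty_per_min)

print(final_score(90, 40))  # 90% in 40 minutes -> 70
print(final_score(90, 10))  # finishing early earns no bonus: still 90
```

Under this scheme a careful 100% in 30 minutes always beats a rushed 90% in 10, which is the point.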
TrueAbility, thank you for the time and effort you’ve put into developing the problems for this competition, and thank you for the creative idea of a sysadmin’s competition in the first place.
And I really want to congratulate those who were able to consistently get high scores under the tough time limits.
Now I’m off to get some sleep before our first day of school.
For their final Blender assignment, my grade nine students were asked to make an object (in most cases a car) go around some kind of track that followed the contour of the ground. These are some of the more creative projects that I received. Please note that in some cases the source doesn’t perfectly match the video. For some there were extenuating circumstances, and the others had it reflected in their scores.
I’m not sure if the camera angle really does it justice, but this scene has a really nice switchback. The car, the road and the terrain are all nicely modelled and textured, a rare combination.
Car on track by Fayez – CC BY 3.0 – Source
With Fedora 19 comes a change that is rather bittersweet. Yum-presto, the plugin that I originally wrote to allow the use of deltarpms in Fedora, has been retired, its functionality merged directly into yum. While this is a necessary step to achieve things like parallel downloads of deltarpms and regular rpms, it’s still hard to see the death of a project that I’ve been involved with since its beginning, six years ago.
I started using Linux in 2000, switching from Red Hat Linux to Mandrake in 2001, and then to Fedora Core in September 2004, just before Fedora Core 3 was released. I had just returned to the Lebanon Evangelical School as a teacher and sysadmin after finishing university, and I remember downloading the FC3 Test 2 ISO image on our impressively fast 128kbps link. And I still remember my disgust when Test 2 was still quite buggy and I had to download the FC2 ISO.
Somewhere around the end of 2005, I came up with the brilliant idea of running a local mirror of Fedora Core for our school. The problem was that updates took forever to download, so I had a flash of inspiration. What if… we could download only the parts of the rpm that had changed? Updates would be much smaller and running a local mirror would become feasible. With that kind of motivation, I quickly hacked together a brilliantly elegant mess of code that was able to delta two small binary files in somewhere under a day and apply the resultant patch in somewhere under an hour. Definitely not my proudest coding moment, but I announced my work in the Fedora Forum (which I’m sure made sense to me at the time).
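That original mess is thankfully lost, but the underlying idea can be sketched in a few lines: describe the new file as a mix of “copy this range from the old file” and “insert these literal bytes” operations. This toy uses Python’s difflib and is nothing like deltarpm’s real algorithm:

```python
# Toy binary delta: a delta is a list of ops, each either copying a byte range
# from the old file or inserting literal bytes from the new one.

from difflib import SequenceMatcher

def make_delta(old, new):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))          # reuse bytes from the old file
        else:
            ops.append(("insert", new[j1:j2]))    # ship only the changed bytes
    return ops

def apply_delta(old, ops):
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += old[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog"
new = b"the quick red fox jumps over the lazy cat"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new   # reconstructs the new file exactly
```

The win comes when most of the file is “copy” ops: you only download the offsets and the few inserted bytes, not the whole rpm.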
I got a single response to my post. Rahul Sundaram pointed out, much to my disgust, that Michael Schroeder of Suse had already created an efficient and fast (at least compared to my mess of code) program that could create and apply deltas to rpms. This program was cleverly named deltarpm. After looking at deltarpm and verifying that, yes, it did do what I wanted, I then put the whole project on the back burner for a while.
2006 passed. I got married, in the process missing a war in Lebanon, and spent most of the rest of the year trying to adapt to the new rules of married life. Apparently leaving my dirty clothes all over the house was no longer acceptable, and there was some difference of opinion on what the definition of a well-balanced breakfast was. I thought that the peanuts in the Snickers bar did a great job of balancing out my Coke, while my new Irish wife thought that my logic needed to be balanced with a sharp smack on the head. She won. She usually does.
Early in 2007, Ahmed Kamal started posting about some work he was doing on a yum plugin that would download deltarpms. This plugin was named yum-presto. In early March, I got involved and started reworking some of the code. By the end of March, we started testing the plugin with Fedora Core 6, and, in early April, Kevin Fenzi sponsored me and reviewed yum-presto for Fedora Extras. We hit bugs. Then we fixed them. We were on a roll.
Then things ground to a halt. We had the client side working reliably, but there was no way that the Fedora infrastructure was going to use my hacky scripts for generating deltarpms. Someone needed to do some work to properly integrate deltarpm generation into Fedora’s infrastructure, but I had no idea how to go about it.
So I ended up in a rut, daily generating deltarpms for our FC6 i386 mirror, but it was anything *but* official. If anyone wanted to use them, they had to manually change the url in fedora-updates.repo. FC6 turned into Fedora 7, then Fedora 8, then Fedora 9, and finally Fedora 10. Jeremy Katz, Casey Dahlin and James Antill all contributed major changes to the client code, with James essentially rewriting the whole thing, but the infrastructure side of things stayed stagnant.
I opened a ticket against bodhi to try to figure out where the deltarpm generation should be happening, and, after a few rounds of pass the parcel, Seth Vidal finally decided to put it straight into createrepo. With that work done in 2009, Fedora 11 was the first release with official deltarpms across all the platforms that Fedora supported.
Four years have passed since Fedora 11 came out. Now I’m the one trying to convince my kids that Snickers and Coke weren’t meant for breakfast. And yum-presto has been mostly in maintenance mode. Last May, Lars Gullik Bjønnes added code to rebuild the rpms in parallel, and, in August, Zdenek Pavlas added code to download drpms in parallel. Both features made it into Fedora 18. I didn’t realize at the time that those would be the last features added to yum-presto.
On February 21, Zdenek posted this on yum-devel:
The native drpm support is complete enough to be used and tested. I’m quite satisfied with the performance, but the applydeltarpm backend needs more work. I’m going to make a rawhide release today.
Yum-presto had just died. And yum was now able to do things with deltarpms that we never could do as a plugin.
I do want to say thank you to everyone who helped test and develop yum-presto. A special thank you to Michael Schroeder, whose work on deltarpm made yum-presto possible. Thank you to the Lebanon Evangelical School for hosting the i386 deltarpms and Angel Marin for hosting the x86_64 deltarpms for the two years it took to get the Fedora infrastructure up and running. And thank you to all those whose hard work makes Fedora the exciting place that it is.
In February, our grade nine students started doing 3D modeling. Last week, our students finished up their first assignments, and I gave the top modelers the option of showing off their work. So, without further ado, here are the top four models in each of the three grade nine classes.
At our school we have around 100 desktops, a vast majority of which run Fedora, and somewhere around 900 users. We switched from Windows to Fedora shortly after Fedora 8 was released and we’ve hit 8, 10, 13, 16, and 17 (deploying a local koji instance has made it easier to upgrade).
As I finished putting together our new Fedora 18 image, there were a few things I wanted to mention.
- Offline updates: Traditionally, our systems automatically updated on shutdown. In the 16-17 releases, that became very fragile, as any scriptlets in the updates that called systemctl would block because systemd was in the process of shutting down. Now, with systemd’s support for offline updates, we can download the updates on shutdown, reboot the computer, and install the updates in a minimal system environment. I’ve packaged my offline updater here.
- btrfs snapshots: This isn’t new in Fedora 18, but, with the availability of offline updates, we’ve finally been able to take proper advantage of it. One problem we have is that we have impatient students who think the reset button is the best way to get access to a computer that’s in the middle of a large update. Now, if some genius reboots the computer while it’s updating, it reverts to its pre-update state, and then attempts the update again. If, on the other hand, the update fails due to a software fault, the computer reverts to its pre-update state and boots normally. Either way, the system won’t be the half-updated zombie that so many of my Fedora 17 desktops are.
- dconf mandatory settings: Over the years we’ve moved from gconf to dconf, and I love the easy way that dconf allows us to set mandatory settings for Gnome. This continued working with only a small modification from Fedora 17 to Fedora 18, available here and here.
- systemd: This isn’t a new feature in 18, but systemd deserves a shout-out anyway. It does a great job of making my workstations boot quickly and has greatly simplified my initscripts. It’s so nice to be able to easily prevent the display manager from starting before we have mounted our network directories.
- Gnome Shell: We actually started experimenting with Gnome Shell when it was first included in Fedora, and I switched to it as the default desktop in Fedora 13. As we’ve moved from 13 to 16, then 17, and now 18, it’s been a nice clean evolution for our users. When I first enabled Gnome Shell in our Fedora 13 test environment, the feedback from our students was very positive. “It doesn’t look like Windows 98 any more!” was the most common comment. As we’ve upgraded, our users have only become happier with it.
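For the curious, the offline-updates item above is built on systemd’s system-update.target: if a /system-update symlink exists at boot, startup is diverted into a minimal update environment instead of the normal default target. Roughly (the staging path here is illustrative; my packaged updater handles the details):

```shell
# On shutdown: stage the downloaded updates, then flag them for the next boot.
ln -s /var/cache/offline-updates /system-update   # symlink's presence triggers system-update.target
systemctl reboot
# On the next boot, a service wanted by system-update.target installs the
# staged packages, removes /system-update, and reboots into the normal system.
```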
The bad in Fedora 18 mainly comes down to the one area where Linux in general, and Fedora specifically, is weak – being backwards-compatible. This was noticeable in two very specific places:
- dconf mandatory settings: With the version of dconf in Fedora 18, we now have the ability to have multiple user dconf databases. This is a great feature, but it requires a change in the format of the database profile files, which meant my database profile files from Fedora 17 no longer worked correctly. In fact, they caused gnome-settings-daemon to crash, which crashed Gnome and left users unable to log in. Oops. To be fair, this was a far less annoying change because I only had to change a couple of lines, but I’m still not impressed that dconf couldn’t just read my old db profile files.
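For reference, the new-style profile looks along these lines (the database names are examples; our real files are in the links above): each line is now explicitly tagged as a user or system database, and more than one database of each type can be listed.

```ini
# /etc/dconf/profile/user -- Fedora 18-era format
user-db:user
system-db:local
system-db:site
```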
As a developer, I totally understand the “I have a better way” mindset, but I think backwards compatibility is still vital. That’s why I love rsync and systemd, but have very little time for unison (three different versions in the Fedora repositories because newer versions don’t speak the same language as older versions).
I know some people will say, “If you want stability, just use RHEL.” That’s fine, but I’m not necessarily looking for stability. I like the rate of change in Fedora. What I dislike is when things break because someone wanted to do something different.
All in all, I’ve been really happy with Fedora as our school’s primary OS, and each new release’s features only make me happier. Now I need to go fix a regression in yum-presto that popped up because of some changes we made because we wanted to do something different.
As mentioned in this post, I have an HP Veer cell phone that I bought last year when we were in the States on furlough. Despite (or maybe because of) a 2.6″ display and tiny keyboard, I’ve found it a joy to use, but its operating system, WebOS, has one major drawback: apps-wise, it’s dying.
My wife has an LG P500 with half the Veer’s RAM, a fraction of its flash, an ICS ROM that the good fellows over at AOKP have managed to hack together, and an up-to-date version of Skype, Whatsapp and any other app she desires. I, on the other hand, have a version of Skype that doesn’t log in unless I reset my user information, an impressive but buggy implementation of Whatsapp that seems to suck the ole’ power out of my phone, and a small list of apps that’s slowly but surely shrinking.
Over the Christmas holidays, I thought I would see what it would take to get some form of Android on this phone. There seem to have been a couple of attempts, but no source code that I could find, which brings me to my first complaint.
What is up with Android ROM developers who take thousands of man-hours of somebody else’s work, add a few hours of their own, and then act as if they’ve invented the wheel and they need to keep it top secret? Maybe it’s the fact that I’m coming from the Fedora community where the concept of working as a team is an ideal even if it doesn’t always happen in fact, but the isolationist attitude I see in the different Android communities is quite depressing. I believe it’s one of the main reasons that participants in the Android community can be quite harsh with each other, much harsher on average than I see in the Fedora community.
So I go to AOSP, download the platform and build it with no problems. Yay! Then I need to get the kernel source. From where? I start with https://android.googlesource.com/kernel/msm.git. There are four branches, three of which are supposedly 3.0 or 3.4 JB kernels. And I can’t compile any of them. Some of the problems are configuration issues (the config.gz from WebOS’s 2.6.25 kernel is only a starting point), but there are also simple typos preventing compilation. Not exactly confidence-inspiring.
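For reference, building one of these trees goes roughly like this (the branch and defconfig names are illustrative; the Veer’s msm7x30 would need its own config, seeded from that WebOS config.gz):

```shell
git clone https://android.googlesource.com/kernel/msm.git
cd msm
git checkout android-msm-3.4                          # branch name illustrative
make ARCH=arm CROSS_COMPILE=arm-eabi- msm_defconfig   # defconfig name illustrative
make ARCH=arm CROSS_COMPILE=arm-eabi- -j4 zImage
```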
I decide to check out Cyanogenmod’s kernel, but it looks like it hasn’t been touched in forever. A closer look at Cyanogenmod’s github repository shows 10… no 20… no a hundred kernel forks for various devices? And all this brings me to my second major complaint.
Why on earth do we need a different kernel fork for each device? All I want to do is build straight from kernel.org. Is that so wrong? When will the forking stop?
Seriously, though, forking the Linux kernel to build it for your device seems a bit overkill, but it’s par for the course for the Android community. Is there any way to get this stuff back upstream? Or at the very least, could AOSP (or possibly Cyanogenmod) use one “official” kernel tree with patches sent back from these forks?
Anyhow, now I’m off to see if I can get one of the Cyanogenmod kernel forks to build. I’m sure I saw one in there for a msm7x30 chipset. Maybe I’d be better off just waiting until I can get a Nexus 4. Happy new year, everyone!
Edit: A couple of weeks ago, I got a Samsung Galaxy S3 Mini, as they are available and cheaper here in Lebanon than in the States. It may not be the best phone out there, but it’s a great size and easily does everything I need. Goodbye WebOS. It was nice knowing you.