In February, our grade nine students started doing 3D modeling. Last week, our students finished up their first assignments, and I gave the top modelers the option of showing off their work. So, without further ado, here are the top four models in each of the three grade nine classes.
At our school we have around 100 desktops, the vast majority of which run Fedora, and somewhere around 900 users. We switched from Windows to Fedora shortly after Fedora 8 was released, and we’ve hit 8, 10, 13, 16, and 17 along the way (deploying a local Koji instance has made it easier to upgrade).
As I finished putting together our new Fedora 18 image, there were a few things I wanted to mention.
- Offline updates: Traditionally, our systems automatically updated on shutdown. In Fedora 16 and 17, that became very fragile, as any systemctl scriptlets in the updates would block because systemd was in the process of shutting down. Now, with systemd’s support for offline updates, we can download the updates on shutdown, reboot the computer, and install the updates in a minimal system environment (a rough sketch of the flow, including the btrfs rollback, follows this list). I’ve packaged my offline updater here.
- btrfs snapshots: This isn’t new in Fedora 18, but, with the availability of offline updates, we’ve finally been able to take proper advantage of them. One problem we have is impatient students who think the reset button is the best way to get access to a computer that’s in the middle of a large update. Now, if some genius reboots the computer while it’s updating, it reverts to its pre-update state and then attempts the update again. If, on the other hand, the update fails due to a software fault, the computer reverts to its pre-update state and boots normally. Either way, the system won’t be the half-updated zombie that so many of my Fedora 17 desktops are.
- dconf mandatory settings: Over the years we’ve moved from gconf to dconf, and I love the easy way that dconf allows us to set mandatory settings for Gnome. This continued working with only a small modification from Fedora 17 to Fedora 18, available here and here.
- systemd: This isn’t a new feature in Fedora 18, but systemd deserves a shout-out anyway. It does a great job of making my workstations boot quickly and has greatly simplified my initscripts. It’s so nice to be able to easily prevent the display manager from starting before we have mounted our network directories (one way of doing this is sketched after this list).
- Gnome Shell: We actually started experimenting with Gnome Shell when it was first included in Fedora, and I switched to it as the default desktop in Fedora 13. As we’ve moved from 13 to 16, then 17, and now 18, it’s been a nice, clean evolution for our users. When I first enabled Gnome Shell in our Fedora 13 test environment, the feedback from our students was very positive; “It doesn’t look like Windows 98 any more!” was the most common comment. As we’ve upgraded, our users have only become happier with it.
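For the curious, the offline-update flow (including the btrfs rollback from the snapshots item above) boils down to something like the following sketch. These are illustrative commands, not the actual contents of my packaged updater, and the download step assumes the yum downloadonly plugin:

    # On shutdown: fetch updates and take a rollback snapshot
    yum -y --downloadonly update              # cache the updates locally
    btrfs subvolume snapshot / /pre-update    # rollback point
    ln -s /var/cache/yum /system-update       # systemd sees this symlink and
    systemctl reboot                          #   boots into system-update.target

    # Early in the next boot, a service in system-update.target runs:
    yum -C -y update                          # install from the local cache only
    rm /system-update                         # leave update mode
    systemctl reboot
    # If the update dies partway (or a genius hits reset), the next boot
    # is pointed back at the /pre-update snapshot instead.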
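As for holding back the display manager until the network directories are mounted, one way to do it (a sketch, not our exact units; assume /home is the NFS mount) is to override the stock gdm unit:

    # Copy the stock unit so our version takes precedence
    cp /usr/lib/systemd/system/gdm.service /etc/systemd/system/gdm.service
    # ...then add this line to its [Unit] section:
    #     RequiresMountsFor=/home
    # so gdm won't be started until /home has been mounted.
    systemctl daemon-reload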
The bad in Fedora 18 mainly comes down to the one area where Linux in general, and Fedora specifically, is weak – being backwards-compatible. This was noticeable in two very specific places:
- dconf mandatory settings: With the version of dconf in Fedora 18, we now have the ability to have multiple user dconf databases. This is a great feature, but it requires a change in the format of the database profile files, which meant my profile files from Fedora 17 no longer worked correctly. In fact, they caused gnome-settings-daemon to crash, which crashed Gnome and left users unable to log in. Oops. To be fair, this was a far less annoying change because I only had to change a couple of lines (see below), but I’m still not impressed that dconf couldn’t just read my old profile files.
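For the curious, here are the couple of lines in question. A profile now has to say what kind of database each line names; the database names below are examples rather than our real ones:

    # /etc/dconf/profile/user on Fedora 17 (old format):
    user
    local

    # The same file on Fedora 18 (new format):
    user-db:user
    system-db:local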
As a developer, I totally understand the “I have a better way” mindset, but I think backwards compatibility is still vital. That’s why I love rsync and systemd, but have very little time for unison (three different versions in the Fedora repositories because newer versions don’t speak the same language as older versions).
I know some people will say, “If you want stability, just use RHEL.” That’s fine, but I’m not necessarily looking for stability. I like the rate of change in Fedora. What I dislike is when things break because someone wanted to do something different.
All in all, I’ve been really happy with Fedora as our school’s primary OS, and each new release’s features only make me happier. Now I need to go fix a regression in yum-presto that popped up because of some changes we made because we wanted to do something different.
As mentioned in this post, I have an HP Veer cell phone that I bought last year when we were in the States on furlough. Despite (or maybe because of) a 2.6″ display and tiny keyboard, I’ve found it a joy to use, but its operating system, WebOS, has one major drawback: apps-wise, it’s dying.
My wife has an LG P500 with half the Veer’s RAM, a fraction of its flash, an ICS ROM that the good fellows over at AOKP have managed to hack together, and an up-to-date version of Skype, Whatsapp and any other app she desires. I, on the other hand, have a version of Skype that doesn’t log in unless I reset my user information, an impressive but buggy implementation of Whatsapp that seems to suck the ole’ power out of my phone, and a small list of apps that’s slowly but surely shrinking.
Over the Christmas holidays, I thought I would see what it would take to get some form of Android on this phone. There seem to have been a couple of attempts, but no source code that I could find, which brings me to my first complaint.
What is up with Android ROM developers who take thousands of man-hours of somebody else’s work, add a few hours of their own, and then act as if they’ve invented the wheel and need to keep it top secret? Maybe it’s because I’m coming from the Fedora community, where working as a team is an ideal even if it doesn’t always happen in practice, but the isolationist attitude I see in the different Android communities is quite depressing. I believe it’s one of the main reasons that participants in the Android community can be quite harsh with each other, much harsher on average than anything I see in the Fedora community.
So I go to AOSP, download the platform, and build it with no problems. Yay! Then I need to get the kernel source. From where? I start with https://android.googlesource.com/kernel/msm.git. There are four branches, three of which are supposedly 3.0 or 3.4 JB kernels. And I can’t compile any of them. Some of the problems are configuration issues (the config.gz from WebOS’s 2.6.25 kernel is only a starting point), but there are also simple typos preventing compilation. Not exactly confidence-inspiring.
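To be fair to AOSP, the platform build really is painless. The standard dance looks roughly like this (the branch name and job count are illustrative), and my kernel attempts seeded the config from the old WebOS config.gz:

    # Building the AOSP platform
    repo init -u https://android.googlesource.com/platform/manifest -b android-4.2.1_r1
    repo sync
    source build/envsetup.sh
    lunch full-eng
    make -j4

    # One of the kernel attempts, starting from the WebOS config
    zcat config.gz > .config
    make ARCH=arm CROSS_COMPILE=arm-eabi- oldconfig zImage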
I decide to check out CyanogenMod’s kernel, but it looks like it hasn’t been touched in forever. A closer look at CyanogenMod’s GitHub repository shows 10… no, 20… no, a hundred kernel forks for various devices? And all this brings me to my second major complaint.
Why on earth do we need a different kernel fork for each device? All I want to do is build straight from kernel.org. Is that so wrong? When will the forking stop?
Seriously, though, forking the Linux kernel to build it for your device seems like overkill, but it’s par for the course in the Android community. Is there any way to get this stuff back upstream? Or, at the very least, could AOSP (or possibly CyanogenMod) use one “official” kernel tree with patches sent back from these forks?
Anyhow, now I’m off to see if I can get one of the CyanogenMod kernel forks to build. I’m sure I saw one in there for an msm7x30 chipset. Maybe I’d be better off just waiting until I can get a Nexus 4. Happy new year, everyone!
Edit: A couple of weeks ago, I got a Samsung Galaxy S3 Mini, as they are available and cheaper here in Lebanon than in the States. It may not be the best phone out there, but it’s a great size and easily does everything I need. Goodbye WebOS. It was nice knowing you.
Seven or eight years ago, I set up a small computer system to play music on. A little under four years ago, I upgraded it to a media center by adding a 22″ 1920×1080 monitor and installing XBMC on the computer. And today, to treat myself for my 29th (with three years’ experience) birthday, I’ve finally upgraded to a proper 39″ 1080p LED TV.
XBMC looks just as good on a large screen, and Avatar in HD is just gorgeous. The only downside is that now it’s a lot easier to see how bad my photography skills are when our pictures cycle through as the screensaver.
Now if I could just work out how to set up full screen anti-aliasing in XBMC using AMD’s open source drivers…
Two years ago, as mentioned in btrfs on the server, we set up btrfs as our primary filesystem on our data servers. After we started running into high load as our network expanded (and a brief experiment with GlusterFS as mentioned in GlusterFS Madness), in March we switched over to ext4 with the journal on an SSD.
So, as of March, we had three data servers:
- datastore01 was the primary server for usershare, our shared data.
- datastore03 was the primary server for users, which, surprisingly, held our users’ home directories.
- datastore02 was secondary for both usershare and users, which were synced using DRBD.
One of the things I had originally envisioned when I set up our system was a self-correcting system. I played around with both the Red Hat Cluster Suite and Heartbeat and found that they were a bit much for what we were trying to achieve, but I wanted a system where, if a single data server went down, the only notice I would have would be a Nagios alert, not a line of people outside my office asking me what the problem was.
While I never achieved that level of self-correction, I could switch usershare from datastore01 to datastore02 with less than a 30-second delay, and the same applied when switching users from datastore03 to datastore02. NFS clients would connect to an aliased IP that switched when the filesystem switched, so they would only freeze for about 30 seconds and then come back.
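The switch itself was just a handful of commands wrapped in a script. Roughly, for usershare (the resource names, mount points, and service IP here are illustrative):

    # On datastore01 (the old primary):
    umount /export/usershare
    drbdadm secondary usershare
    ip addr del 10.0.0.21/24 dev eth0     # drop the service IP the clients use

    # On datastore02 (the new primary):
    drbdadm primary usershare
    mount /dev/drbd0 /export/usershare
    exportfs -ra                          # re-export over NFS
    ip addr add 10.0.0.21/24 dev eth0     # take over the service IP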
This made updating the systems pretty painless. I would update datastore02 first, reboot into the new kernel, and verify that everything was working correctly. Then I would migrate usershare over to datastore02 and update datastore01. When datastore01 came back up, I would migrate usershare back, and then repeat the process with users and datastore03.
We also had nightly rsync backups to backup01, which was running btrfs and would create a snapshot after each backup finished. We implemented nightly backups after a ham-fisted idiot of a system administrator (who happens to sleep next to my wife every night) managed to corrupt our filesystem (and, coincidentally, come within a hair’s breadth of losing all of our data) back when we were still using btrfs. The problem with DRBD is that it writes everything to the secondary drives immediately, which is great when you want network RAID, but bad when the corruption you’ve just caused on the primary is immediately sent to the secondary. Oops. Anyhow, after we managed to recover from that disaster (with lots of prayer and a very timely patch from Josef Bacik), we decided that a nightly backup to a totally separate filesystem wouldn’t be a bad idea.
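The backup job itself was nothing fancy; something along these lines (paths and layout illustrative), with the read-only snapshot giving us a browsable nightly history:

    # Nightly on backup01: pull the data, then freeze it in a snapshot
    rsync -aH --delete datastore01:/export/usershare/ /backup/current/usershare/
    rsync -aH --delete datastore03:/export/users/ /backup/current/users/
    btrfs subvolume snapshot -r /backup/current /backup/snapshots/$(date +%F)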
We also had two virtual hosts, virtserver01 and virtserver02. Our virtual machines’ hard drives were synced between the two using DRBD. We could stop a virtual machine on one host and start it on the other, but live migration didn’t work, and backups were a nightly rsync to backup01.
I was pretty happy with our setup, but our load problem popped up again. While it was better than it was before the switch, it would still sometimes peak during breaks and immediately after school.
As I was asking myself what other system administrators do, it hit me that one of my problems was my obsession with self-correcting systems. More specifically, my obsession with automatic correction of a misbehaving server, rather than the more common issue of automatically “correcting” misbehaving hard drives. Because of that, I had been ignoring NAS solutions, as none of them seemed to have anything that worked along the same lines as DRBD.
I started looking at FOSS NAS solutions and found NAS4Free, a FreeBSD appliance that comes with the latest open-source version of ZFS. The beauty of ZFS when it comes to speed is that, unlike btrfs, it allows you to set up an SSD as a read cache or as a dedicated log device.
After running some tests over the summer, I found that ZFS with an SSD cache partition and an SSD log partition was quite a bit faster than our ext4 partitions with the journal on an SSD, especially with multiple systems hitting the server at the same time with lots of small writes.
So we switched our data servers over to NAS4Free, reduced them to two, and added another backup server. The data servers are configured with RAIDZ1 plus SSD caches and logs; the backup servers are RAIDZ1 with no cache and no SSD log.
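In ZFS terms, each data server’s pool looks something like this (the FreeBSD device names are illustrative):

    zpool create tank raidz1 da1 da2 da3 da4   # RAIDZ1 across the data disks
    zpool add tank log da0p1                   # SSD partition as the intent log
    zpool add tank cache da0p2                 # SSD partition as the L2ARC read cache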
A nice feature of ZFS (which I believe btrfs also recently got) is the ability to send a diff between two snapshots from one server to another. Using this feature (which isn’t exposed in the NAS4Free web interface, but accessible using a bash script that runs at 2:00 every morning), I’m able to send my backups to the backup servers in far less time than it used to take to run rsync.
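Boiled down, that 2:00 script does little more than this (dataset and host names are illustrative):

    TODAY=$(date +%F); YESTERDAY=$(date -v-1d +%F)   # FreeBSD date syntax
    zfs snapshot tank/users@$TODAY
    zfs send -i tank/users@$YESTERDAY tank/users@$TODAY | \
        ssh backup01 zfs receive -F backup/users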
One other nice feature of NAS4Free is the ability of ZFS to create a “volume”, which is basically a disk device that lives in the data pool, and then export it via iSCSI. I switched our virtual machines’ hard drives from DRBD to iSCSI, which now allows us to live migrate from one virtual host to the other. We also get the bonus of automatic backups of the ZFS volumes as part of the snapshot diffs.
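Creating one of these volumes is a one-liner (the name and size are illustrative); NAS4Free then exports the resulting device as an iSCSI LUN from its web interface:

    zfs create -V 40G tank/vols/virt-disk0
    # the block device shows up under /dev/zvol/tank/vols/virt-disk0,
    # ready to be pointed at the iSCSI target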
There is one major annoyance and one major regression in our new system, though. First, the annoyance: ZFS has no way of removing a drive. You can swap out a drive in a RAIDZ or mirror set, but once you’ve added a RAIDZ set, a mirror, or even a single drive to a pool, you cannot remove it without destroying the pool. Apparently enterprise users never want to shrink their storage. More on this in my next post.
The major regression is that if either of our data servers goes down, the whole network goes down until I get the server back up. I can switch us over to the backups, but we’ll be using yesterday’s data if I do, so that’s very much a last resort. This basically means that I need to be ready to swap the drives into a new system if one of our data servers does go down. And there will be downtime if (when?) that happens. Joy.
So now we have a system that gives us the speed we need, but not the redundancy I’d like. What I’d really like is a filesystem that is fully distributed, has no single point of failure, and allows you to store volumes on it. GlusterFS fits the bill (mostly), but I’m gun-shy at the moment. Ceph looks like it may fit the bill even better with RBD as well as CephFS, but the filesystem part isn’t considered production-ready yet.
So where does that leave us? As we begin the 2012-2013 school year, file access and writing is faster than ever. We’d need simultaneous failure of four hard drives before we start losing data, and, once I deploy our third backup server for high-priority data, it will take even more to lose the data. We do have a higher risk of downtime in the event of a server failure, but we’re not at the point where that downtime would keep us from our primary job, teaching.
So, there I was in my office, working hard (during summer break, no less; I hope my boss reads this) on LESSON (our school’s marking system), when I received a distressing email from a friend of mine.
Apparently he had been mugged during a trip to Spain (something I found very surprising, as we’d just had dinner with him on Thursday and he hadn’t mentioned a trip then), and the miscreants had stolen everything but his passport. All my friend needed was a small loan of €2000 to cover his hotel bills and taxi to the airport, sent via Western Union, that admirable institution that takes such pains to make sure money ends up where it’s supposed to.
Yeah. This particular friend would have trouble racking up a €200 hotel bill, much less ten times that amount. I immediately got on the phone and called his wife. Sure enough, their Yahoo account had been compromised and she could no longer access it. I talked her through Yahoo’s compromised account process, and she was able to reset her password using her security questions (apparently the scammer hadn’t changed those yet). Checking the original email showed that the scammer was accessing Yahoo’s webmail through a web proxy, hidemyass.com.
When she finally got into their account, all of their contacts had been deleted, along with the last few months of their sent mail. She sent an email to Yahoo explaining the situation and got an automated reply saying that she would be contacted within 24 hours. In the meantime, she has no way of letting her contacts know the message is a fraud. Even worse, the “SOS from Spain” email had a reply-to address that was subtly different from the original: an added “i” between the first initial and last name. This means that, even though she has regained control of her own account, anyone replying to her email will be replying to an account still under the control of the scammer.
I didn’t want all that effort by the scammer to go to waste, so I sent an email to the fake account asking how I could help my dear friend in Spain. To make a long story short, I went back and forth with my “friend” for three and a half hours, finally offering to loan him €1000. I only ended the fun when the scammer insisted on having the Western Union confirmation number.
I ended the conversation with a supposed link to the confirmation number, which was actually a tasteful picture of a donkey braying. I then contacted a technician at hidemyass.com and forwarded them the emails along with the log of the scammer accessing the donkey picture. Surprisingly, the originating IP was from Nigeria. What a shock!
It did turn out that the scammer was actually paying to use the web proxy, so the technician suspended their account. Which means that, instead of making money off this particular scam, the scammer actually lost money. It’s not much of a win, but I’ll take what I can get.
Last September, while I was on sabbatical in the States, I bought an HP laptop (along with my HP tablet and HP phone). I ordered it online and specced it to have a full HD display and a Radeon video card in addition to the built-in Intel video, and was very happy with it.
The one complaint I had was that the fan started making noise after four or five months (at which point I was back in Lebanon). I tried disassembling the laptop to clean the fan, but that didn’t fix the problem. So I just let it go, and, eventually, the noisy fan got quieter.
A few weeks ago, I noticed that my laptop was running visibly slower than usual, so I rebooted. As it came back up, I got a warning from the BIOS saying that the fan was no longer working correctly. I put my hand next to the fan outlet, and there was almost no air movement. The fan wasn’t completely dead, but it sure wasn’t working the way it should. When Fedora finished booting, it was running slow again. I checked the sensors, and, sure enough, it was running hot. Very hot. Obviously the fan needed to be replaced.
I checked the warranty status on my laptop, and found, to my surprise, that, even though I bought it in the US, it was covered with a world-wide warranty. Supposedly I could take my laptop anywhere in the world and have it repaired for free.
Now, I’ve lived in Lebanon for many years, and found that ‘customer service’ here normally consists of waving goodbye as you walk away with your new purchase. And a warranty is worth its weight in gold… until you leave the shop. Then it no longer applies, because, obviously, warranties aren’t intended to protect you once you’ve started using the product. This applies to the bigger international companies as well as the smaller mom-and-pop shops. I remember asking the local official Skoda dealer about a warranty on a used car and he told me, “I’ll give you a verbal warranty.” For some reason, I was less than impressed.
I contacted HP in Lebanon and they directed me to one of the HP authorized repair centers called Computer Information Systems (CIS). I took my laptop in last Friday and fully expected to be told that, for some obscure reason, the fan wasn’t going to be covered under warranty. Instead, they took my laptop without any hassle at all.
On Tuesday (two working days later!), I got a call from CIS telling me that it was ready. I went back in and got my laptop back, again with no hassle at all. They did tell me that I was in luck that they had the part because it was a US model, but they would have ordered the part if they hadn’t had it in stock.
So, while I’m not impressed that the system fan died so quickly, I’m very impressed that HP automatically gave me a worldwide warranty and I’m astounded that CIS did the work quickly without trying to find some way to charge me. Thank you both for your professionalism!
At 12:30PM on June 5, 2012, Eoin (pronounced the same as “Owen” for those who unreasonably expect the Irish spelling to have some relationship to how the name is pronounced) Kelly Dieter was born. He’s 7 lbs, 12 oz (3.53 kg).
Naomi made this delivery look like a piece of cake (again), and Eoin went a full six hours between feeds last night, so we’re hoping that he’ll do as well as his sisters did at sleeping through the night.
Saoirse and Ailíse are very excited that they have a little brother, though it’s taken a bit of work to convince Ailíse that Eoin shouldn’t be treated like a doll.
In the meantime, I don’t think I’ll be very productive over the next few weeks.
So if you’re running Fedora or RHEL with EPEL enabled, installing novacom is as easy as:
yum install novacom
Novacom is a utility that allows you to connect to a WebOS device via its USB cable. It’s used by the WebOS Quick Installer and can also give you direct terminal access to the device’s OS. In many ways, it is comparable to using adb with Android devices.
The one major technical problem novacom had was that it only worked with libusb-0.x and did not work with libusb-compat. This meant that, to use novacom on Fedora 15+, you had to download Fedora 14’s libusb and do some funky library overrides. Since novacom was closed source, there was no way for us to actually fix the problem.
Last week, HP open-sourced novacom, and yesterday I started looking into packaging novacom for Fedora. It turns out that the novacom daemon was using libusb to initialize the USB device, but was doing the actual reading and writing using a home-grown implementation. That implementation depended on some private information from libusb that, oddly enough, was completely different (and, as far as I could see, completely unavailable) in libusb-compat.
I wrote a patch to change novacom so it used the stock libusb read and write functions, and it now works perfectly with both the old libusb and the new libusb-compat. Yay!
So, in the not so distant future, installing novacom on Fedora should be as easy as “yum install novacom”.