Updating your HP BIOS in 256 easy(ish)(not really) steps

Have you tried turning it off and on again?

In case you can’t tell by the title of this post, I’m… mildly… annoyed with HP right now. The story starts with my just-over-a-year-old HP Pavilion laptop that has been having problems with its power brick.

While I was at work on Monday, the laptop started running on battery power even when it was plugged in, but when I got home, everything worked perfectly. I assumed it had something to do with the power at work, and wondered whether it might even be firmware related. I looked through the newest BIOS’s changelog, and, sure enough, there was something mentioned about power and charging, so I downloaded it and updated my BIOS.

At least, that’s what I would have done if I were running Windows. Unfortunately for me, I’m not. I don’t even have a dual-boot system because I haven’t really needed Windows for years, and, when I do, a VM does the job just fine.

The only BIOS update HP offers is a Windows exe file, so I downloaded it, and ran cabextract to get the files out of it. The tool lshw told me that my motherboard was an 0820D, and the archive contained a file called 0820DF45.bin (the BIOS revision is F.45), so I had everything I needed. I put the bin file on a USB, rebooted into HP’s recovery tools, went to firmware management, selected the bin file… and fail! It sat there telling me that I needed a signature file for the firmware.
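For anyone trying the same thing, the extraction side is straightforward. A minimal sketch (sp12345.exe is a placeholder for whatever HP’s download page gives you, and the exact lshw invocation may vary):

# pull the files out of the Windows updater
cabextract sp12345.exe
# find the motherboard model (mine showed up as 0820D)
sudo lshw -class bus | grep -A3 'description: Motherboard'
# copy the matching BIOS image to a FAT-formatted USB stick
cp 0820DF45.bin /run/media/$USER/USBSTICK/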

So I searched for the signature file, but it wasn’t in the exe. I googled for it, and found a lot of people who seemed to be in the same boat. One suggestion was to run the exe on a Windows system and select the “Put BIOS update on USB” option. Sounded easy enough, so I booted my Windows VM, ran the exe, accepted the stupid EULA (I’m pretty sure I saw something in the forty-third paragraph about dancing on one leg while balancing a cupcake on my nose), installed the BIOS updater, and… nothing. After twenty seconds or so, a message popped up, “This program might not have installed correctly. Install using compatibility settings?” Yeah, thanks. After multiple attempts at different compatibility settings… still nothing.

I googled around a bit more, and found a 2GB HP USB image that you can use to recover your BIOS if it gets corrupted. Sweet! I downloaded it, and several hours later, I found out it only contained the original BIOS revision (complete with a signature file!), not the update I needed!

At this point, I was desperate. My final hope was to figure out some way to boot my laptop into Windows. I have a 500GB SSD with a grand total of 30GB free, so installing Windows alongside Linux wasn’t an option. What about a Live USB? I mean, Linux distributions have had Live CDs and USBs forever, so it must just work in Windows, right?

Nope. Not unless you have Windows 10 Enterprise with its Windows To Go feature. Luckily, the guys over at Hasleo Software have created a nifty little tool called WinToUSB that does the same thing. I copied my VM image over to a USB, booted from it, and ran the BIOS update.

It worked perfectly and even offered to put the BIOS update on a USB! It seems that HP, in their infinite wisdom, have designed the updater so it refuses to start unless you’re on an HP machine.

To add insult to injury, all the updater does is copy the BIOS bin file and its signature onto the EFI partition, where the update is applied after a reboot. As far as I can tell, the signature file is generated on the fly by the updater, which raises the question… Why? Why generate the signature on the fly, rather than just stick it in the embedded CAB file with the BIOS images? Why require an HP system to generate a USB image containing the BIOS update? Why require your users to dance on one foot while balancing a cupcake on their nose?
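In practical terms, what the updater does boils down to something like this (the directory under the EFI system partition and the signature filename below are illustrative; they seem to vary by model):

# mount the EFI system partition (assuming it lives on /dev/sda1)
sudo mount /dev/sda1 /boot/efi
# drop the BIOS image and its signature where the firmware looks for them on the next boot
sudo mkdir -p /boot/efi/EFI/HP/BIOS/New
sudo cp 0820DF45.bin 0820DF45.sig /boot/efi/EFI/HP/BIOS/New/
sudo reboot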

And, as further insult (or maybe we’re back to injury), the BIOS update didn’t fix my charging problem, and it turns out that my just-out-of-warranty power brick is dying. Thanks, HP. You guys rock! After three HP laptops in a row, I think it may be time for a change.

Picture of burning laptop by secumem, used under a CC BY-SA 3.0 license

Scratch group projects – 2018

Scratch

Once again, it’s time for this year’s Scratch projects from my grade 10 students. Up next is Python, but their final Scratch projects are available at https://scratch.lesbg.com. Feel free to play them and rate them. This is a first attempt for these students, so do please be gentle with the ratings.

If you’re into nice mini-games, one of my favorites from this year is Arcade, which includes Duck Hunt on steroids!

If you want to check out previous years’ projects, they’re also available at the links at the top left. If you have any comments or suggestions for the site itself, please leave them below.

Changes ahead

Lebanon Evangelical School

I’ve been working at the Lebanon Evangelical School for over fifteen years, and it has been an amazing experience. During that time we’ve switched to Linux on both servers and desktops, set up an open-source infrastructure that’s both powerful and user-friendly, and created an IT curriculum that pushes our students well beyond the local standards.

More importantly, I’ve had the opportunity to invest in students’ lives and watch them grow. One of the students I taught in the early 2000s is now teaching my daughter! Our school really is “lighting the fires that burn for eternity.”

Having said all that, the time has come for a change, and this school year will be our family’s last in Lebanon. I will be spending this school year simplifying and documenting our system to make sure that the school is in good shape for my departure, while also working to train up those under me.

Next summer, we will be leaving Lebanon, but our next step is still up in the air. Ireland and the US are the obvious choices, but we’d definitely be open to opportunities elsewhere. For now, our focus is on making sure we finish the school year strong.

Flock 2017

The conference center

This summer we were in the States visiting my family, and that just happened to match up with the fact that Flock was in the States this year (granted, the wrong side of the country, but still well worth the travel). This was the second Flock I’ve attended, and, compared to the last one, it had far more of a focus on action than on just listening to talks.

Flock 2017 was held at the Hyannis Resort and Conference Center on Cape Cod, Massachusetts. I arrived fairly late in the evening on Monday, August 28th (meeting up on the bus with some friends from last year’s Flock). The sessions started early in the morning on Tuesday and continued until Friday at noon.

There were loads of excellent sessions, but I want to focus on two important new technologies that were central to a number of the sessions at Flock, and that I believe are going to change the way we deliver Fedora in the coming years.

Atomic Host
The Atomic series of sessions was a great introduction to Fedora’s Atomic Host, a project that looks to create a more Android-like OS that starts with a read-only base and layers on applications using your container flavor of choice. On a server that flavor might be Docker, while on a workstation, it would probably be Flatpak. The first session, Atomic Host 101, was led by Dusty Mabe who did an excellent job of putting together practice material so we could actually do what was being demonstrated during the session. (This material is available online and the workshop can be done at home, so, if you’re at all interested in Atomic Host, I strongly recommend going through it.)

The beauty of Atomic Host is that updates are, for lack of a better word, atomic. Fedora Atomic guarantees that the update process is either applied completely or not at all. The days of half-applied updates on systems suffering from unexpected power losses are over. There’s also verification that the OS you are running is the OS you installed, complete with diff-like comparisons that show you which configuration files have been changed. And, as an added bonus, if there are problems with your current update, reverting to the previous one is as easy as a single command.
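A minimal sketch of what that looks like from the command line:

# show the booted and pending deployments and the commit each one is based on
rpm-ostree status
# list which files under /etc differ from the defaults shipped in the image
ostree admin config-diff
# something went wrong? boot back into the previous deployment
rpm-ostree rollback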

Atomic Host has a couple of experimental features that greatly expand its flexibility regarding the read-only status of the base. One of its downsides was that, if you wanted a new or updated system-wide tool, you would have to completely regenerate the base image. Now, installing a new system-wide package is as easy as typing rpm-ostree install <package>, which layers that package on top of the base. Of course, this cool feature did require you to reboot the computer to get access to the new package… until they added the livefs feature, which allows you to immediately access newly installed packages without requiring a reboot.
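As a quick sketch (htop is just a stand-in package; the “ex” prefix is there because livefs was still experimental at the time):

# layer an extra package on top of the read-only base
sudo rpm-ostree install htop
# make newly layered packages available immediately, without a reboot (experimental)
sudo rpm-ostree ex livefs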

Owen and Patrick discuss Atomic Workstation

I looked at Atomic Host and rpm-ostree a year or so ago for our school workstations (which should be a perfect fit for the concept), but abandoned it because there was no way to run scripts after the rpms were installed to the image. I have a number of ansible plays that must be run to get the workstation in shape, and, as I documented here, there’s no way I’m going back to packaging up my configuration as rpms. The good news is that it appears that rpm-ostree has grown the ability to run a post-system-install script that can call ansible, so I think I’m going to give this another shot. Anything would be better than the home-grown scripts I’m currently using.

Modularity
The second new technology that had a strong showing at Flock was the (relatively) new Modularity initiative. I think the first time I heard about Modularity was at last year’s Flock in Poland, where, if I recall correctly, Matthew Miller compared packages to individual Lego pieces and modules to prebuilt Lego kits. The idea behind it sounded cool, but it wasn’t until this Flock that I finally understood how it’s supposed to work.

The key idea behind Modularity is that you can combine a group of packages into a module, and release multiple streams of that module in Fedora. So one might have a LibreOffice module with a 5.3 stream, a 5.4 stream and a stable stream. Each stream may have different lifecycle guarantees, which would mean the LibreOffice 5.3 stream would be updated until the last 5.3 stable release, while the 5.4 stream would go all the way to the last 5.4 stable release. The stable stream might track LibreOffice 5.3 until 5.4.0 comes out and then switch. The key limitation behind streams is that, while Fedora might have multiple streams available, you can only have one stream installed on your system at any given time. Streams can be seen as separate DNF mini-repositories with packages that are designed to work well together.

Each stream may also have different profiles, which, in our LibreOffice example, might be default and full. The default profile might include Writer, Calc and Impress, while full might also include Base and Draw. Individual packages might be added or removed from the stream, so you could install the default profile, and then add LibreOffice Draw or remove LibreOffice Calc. Unlike streams, multiple profiles from the same module can be installed on the same system. In this way, they are most similar to the current package groups we have in DNF.
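Sticking with the hypothetical LibreOffice module, the client-side workflow looks roughly like this in the dnf module syntax as it eventually landed (the module, streams and profiles here are purely illustrative):

# see which streams and profiles a module offers
dnf module list libreoffice
# install the 5.4 stream with the full profile
sudo dnf module install libreoffice:5.4/full
# switch to a different stream (only one stream can be enabled at a time)
sudo dnf module reset libreoffice
sudo dnf module enable libreoffice:5.3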

At this Flock, there were daily Modularity feedback sessions where we were talked through some simple tasks (install a module, switch to a different stream, add another profile, etc), and then asked for feedback on the user experience. I found this very effective in getting an understanding of how Modularity works, and the Modularity group did an excellent job of improving their code in response to the feedback they received.

I did attend a session on how to build a module, but, unfortunately, because of technical problems and our hotel’s high-quality (cough, cough) internet, they didn’t quite have all the pieces in place in time for us to be able to practice making our own modules. I’d love to make a module for LizardFS, but it’s obvious that there’s still a lot of bootstrapping that has to happen before we can get there. Each library it uses needs to be made into a module, so we’re looking at lots of work before even a reasonably-sized fraction of the packages are available as modules. On the flip side, if done right, Modularity gives us the potential for a lot more flexibility in how we use Fedora.

Other odds and ends
A couple of days before Flock, Kevin Fenzi released a PSA about deltarpms being broken in Fedora 26. I attended the Bodhi Hack session with Randy Barlow, and dove right in to try to fix the problem, even though I’ve never touched Bodhi before. I came up with a pull request, but was hitting my head against a problem with the way we were fixing it when Dennis Gilmore made a small change in the mash configuration that fixed the problem in a far simpler way. I do really appreciate Randy’s help in understanding how Bodhi works, his guidance in pointing out the best way to fix the problem, and his patience with my questions. And I’ve come to appreciate his (and Adam Williamson’s) emphasis on making test cases for his code.

Hyannis Beach

I also had a chat with Patrick Uiterwijk and Kevin Fenzi about the feasibility of using casync for downloading our metadata. The advantage is that casync only downloads the chunks that are actually different, but there are major concerns about how much mirrors will appreciate the file churn inherent in using casync. The reductions in download size definitely make it worth further investigation.

All in all, Flock was an excellent place to match faces with names, learn new concepts, meet new friends, and find new ways of contributing back to Fedora. A huge thank you to everyone involved in organizing this conference!

Benchmarking small file performance on distributed filesystems

The actual benches

As I mentioned in my last post, I’ve spent the last couple of weeks doing benchmarks on the GlusterFS, CephFS and LizardFS distributed filesystems, focusing on small file performance. I also ran the same tests on NFSv4 to use as a baseline, since most Linux users looking at a distributed filesystem will be moving from NFS.

The benchmark I used was compilebench, which was designed to emulate real-life disk usage by creating a kernel tree, simulating a compile of the tree, reading all the files in the tree, and finally deleting the tree. I chose this benchmark because it does a lot of work with small files, very similar to what most file access looks like in our school. I did modify the benchmark to only do one read rather than the default of three to match the single creation, compilation simulation and deletion performed on each client.
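For reference, a single-client run boiled down to something like this (the mount point is a placeholder, and the -i/-r values are just examples; check compilebench’s help output for the exact flags in your copy):

# run compilebench against the filesystem mounted at /mnt/testfs
# -D = working directory, -i = number of initial kernel trees, -r = number of runs
./compilebench -D /mnt/testfs -i 10 -r 1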

The benchmarks were run on three i7 servers with 32GB of RAM, connected using a gigabit switch, running CentOS 7. The versions tested were GlusterFS 3.8.14, CephFS 10.2.9, and LizardFS 3.11.2. For GlusterFS, CephFS and LizardFS, the three servers operated as distributed data servers with three replicas per file. I first had one server connect to the distributed filesystem and run the benchmark, giving us the single-client performance. Then, to emulate 30 clients, each server made ten connections to the distributed filesystem and ten copies of the benchmark were run simultaneously on each server.

For the NFS server, I had to do things differently because there are apparently some major problems with connecting NFS clients to an NFS server on the same system. For this one, I set up a fourth server that acted solely as an NFS server.

All of the data was stored on XFS partitions on SSDs for speed. After running the benchmarks with one distributed filesystem, it was shut down and its data deleted, so each distributed filesystem had the same disk space available to it.

The NFS server was set up to export its shares async (also for speed). The LizardFS clients used the recommended mount options, while the other clients just used the defaults (I couldn’t find any recommended mount options for GlusterFS or CephFS). CephFS was mounted using the kernel module rather than the FUSE filesystem.
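For anyone wanting to reproduce the setup, the relevant bits looked roughly like this (hostnames, paths and the secret file are placeholders):

# /etc/exports on the NFS server -- async trades safety for speed
/srv/export  *(rw,async,no_root_squash)

# CephFS mounted with the kernel client rather than FUSE
sudo mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# LizardFS mounted with its FUSE client (plus the recommended options from the LizardFS docs)
mfsmount /mnt/lizardfs -H master.example.com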

So, first up, let’s look at single-client performance (click for the full-size chart):

Initial creation didn’t really have any surprises, though I was really impressed with CephFS’s performance. It came really close to matching the performance of the NFS server. Compile simulation also didn’t have many surprises, though CephFS seemed to start hitting performance problems here. LizardFS initially surprised me in the read benchmark, though I realized later that the LizardFS client will prioritize a local server if the requested data is on it. I have no idea why NFS was so slow, though. I was expecting NFS reads to be the fastest. LizardFS also did really well with deletions, which didn’t surprise me too much. LizardFS was designed to make metadata operations very fast. GlusterFS, which did well through the first three benchmarks, ran into trouble with deletions, taking almost ten times longer than LizardFS.

Next, let’s look at multiple-client performance. With these tests, I ran 30 clients simultaneously, and, for the first three tests, summed up their speeds to give me the total speed that the server was giving the clients. CephFS ran into problems during its test, claiming that it had run out of disk space, even though (at least as far as I could see) it was only using about a quarter of the space on the partition. I went ahead and included the numbers generated before the crash, but I would take them with a grain of salt.

Once again, initial creation didn’t have any major surprises, though NFS did really well, giving much better aggregate performance than it did in the earlier single-client test. LizardFS also bettered its single-client speed, while GlusterFS and CephFS both were slower creating files for 30 clients at the same time.

LizardFS started to do very well with the compile benchmark, with an aggregate speed over double that of the other filesystems. LizardFS flew with the read benchmark, though I suspect some of that is due to the client preferring the local data server. GlusterFS managed to beat NFS, while CephFS started running into major trouble.

The delete benchmark seemed to be a continuation of the single-client delete benchmark with LizardFS leading the way, NFS just under five times slower, and GlusterFS over 25 times slower. The CephFS benchmarks had all failed by this point, so there’s no data for it.

I would be happy to re-run these tests if someone has suggestions for optimizations, especially for GlusterFS and CephFS.

Summer work

The dog and the river

It’s summer, we’re in the US, and I’m thoroughly enjoying the time with my family. It’s been quite a while since we’ve seen everyone here, and we’ve all been having a blast. The one downside (though my wife is convinced it’s an upside) is that my parents have limited internet, so my work time has been, out of necessity, minimal. It has nothing to do, I assure you, with our beautiful beach on the river.

I have managed to push through a bugfix LizardFS update for Fedora and EPEL, and I’ve been working on some benchmarks comparing GlusterFS, LizardFS and NFS. I’ve been focusing on the compilebench benchmark which basically simulates compiling and reading kernel trees, and is probably the closest thing to our usage pattern at the school (lots of relatively small files being written, changed, read and deleted).

Using NFS isn’t really fair, since it’s not distributed, but it’s still the go-to for networked storage in the Linux world, so I figured it would be worth getting an idea of exactly how much slower the alternatives are. If I can get Ceph up and running, I’ll see if I can benchmark it too.

In other news, I have the privilege of attending Flock again this year. I’m really looking forward to getting a better feel for Fedora’s movement towards modules, something that I hope to put into practice on the systems at school over the next year.

Hopefully, I’ll get a chance to get my benchmarks out within the next couple of weeks, and I’m sure I’ll have a lot to say about Flock.

Locks in the classroom – 2017

For the fifth year now, our grade nine students have been doing 3D modeling using Blender. Our students finished up their first assignments over a month ago, but it’s taken this long for me to get the top models together. So, with no further delay, here are the top models from each of the three grade nine classes (click on the pictures for Full HD renders).

We start with this nice mix of color and reflection.

Lock by A. Badr – CC BY (Source)

Next is a double-lock combination.

Lock by Amanda S. – CC BY (Source)

And I love the addition of a key in this lock.

Lock by christian07 – CC BY-SA (Source)

Next is the missing link. I really like the mix of pink and black.

Lock by fromdawn02 – CC BY-SA (Source)

Here we have safari-themed locks. Nice choice of textures!

Lock by A. Ayvazian – CC BY-SA (Source)

And here’s a lock in a cave. The cave itself is a proper model, not just a background image.

Lock by A. Zamroud – CC BY-SA (Source)

Here we have the lock on the Pearly Gates. I wish the lock texture was of better quality, but I really appreciate how @david190 has shaped the lock to match the texture, especially on the top.

Lock by @david190 – CC BY-SA (Source)

And here we have a beautiful lock in a velvety box. Very nice!

Lock by Waad H – CC BY-SA (Source)

We can’t have a lock assignment without the obligatory pirate’s treasure chest. I love the attention to detail!

Lock by Sarah B. – CC BY-SA (Source)

And, our final submission includes a monkey. Because… monkeys rock? What a wonderful level of detail in this scene.

Lock by JP Kazzi – CC BY-SA (Source)

Bare-metal Kubernetes

A few years ago, I attended my first Linux conference, DevConf 2014. Many of the speakers talked about containers and how wonderful they were, and my interest was piqued, but I’ve never really had an opportunity to use them.

As the sysadmin for a school, I just don’t see much need for the scalability provided by containers. Our internal web site runs on a single VM, and the short downtimes required for system updates and upgrades are not a problem, especially if I plan them for the weekends. On the flip side, having something that we can use to spin up web services quickly isn’t a bad idea, so, over the last few months, I’ve been experimenting with Kubernetes.

My main goal was to get something running that was easy to set up and Just Works™. Well, my experience setting up Kubernetes was anything but easy, but, now that it’s up, it does seem to just work. My main problem was that I wanted to use my three oVirt nodes (running CentOS 7) as both Kubernetes nodes and masters, which meant the tutorials took a bit of finessing to get working.

I mostly followed this guide for the initial setup, and then this guide, but I did run into a few problems that I’d like to document. The first was that my containers were inaccessible on their cluster IP range, the primary symptom being that the kube-dashboard service couldn’t connect to the kubernetes service. It turned out that I, rather stupidly, forgot to start kube-proxy, which does all the iptables magic to direct traffic to the correct destination.
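If you hit the same symptom (cluster IPs unreachable, kube-dashboard unable to reach the kubernetes service), it’s worth checking that the proxy is actually running on every node. A minimal sketch, assuming the stock kube-proxy systemd unit from the CentOS packages:

# kube-proxy writes the iptables rules that make cluster IPs reachable
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
# sanity check: in iptables mode, the KUBE-SERVICES chain should appear once it's running
sudo iptables -t nat -L KUBE-SERVICES | head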

The second problem I ran into was that I couldn’t get pretty graphs in kube-dashboard: the heapster service wouldn’t start because I hadn’t set up the cluster DNS service, kube-dns. To be fair, the instructions for doing so are pretty unclear. In the end, I downloaded skydns-rc.yaml.sed and skydns-svc.yaml.sed, and replaced $DNS_DOMAIN and $DNS_SERVER_IP with the values I wanted to use.
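The substitution itself is just a quick sed pass; the domain is the one I used, while the DNS service IP below is a placeholder that just needs to be a free address inside the cluster IP range:

# fill in the cluster domain and the DNS service's cluster IP, then create the resources
sed -e 's/\$DNS_DOMAIN/local/g' -e 's/\$DNS_SERVER_IP/172.30.0.10/g' skydns-rc.yaml.sed > skydns-rc.yaml
sed -e 's/\$DNS_DOMAIN/local/g' -e 's/\$DNS_SERVER_IP/172.30.0.10/g' skydns-svc.yaml.sed > skydns-svc.yaml
kubectl create -f skydns-rc.yaml -f skydns-svc.yaml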

The final problem I ran into is that I’m using our school’s local Certificate Authority for all the certificates we use, and I’ve had to keep on adding new subject alternative names to the server cert and then regenerate it. At the moment, it’s got the following:
DNS:kubernetes.example.com
DNS:node01.example.com
DNS:node02.example.com
DNS:node03.example.com
DNS:localhost
DNS:localhost.localdomain
DNS:kubernetes.default.svc.local
# Where I replaced $DNS_DOMAIN with “local”
DNS:kubernetes.local
DNS:kubernetes.default
IP Address:127.0.0.1
IP Address:172.30.0.1
# Where our cluster IP range is 172.30.0.0/16

I suspect I could now get rid of some of those hostnames/addresses, and I’m not even sure if this method is best practice, but at least it’s all working.

So I’m at the point now where I need to see if I can set up our MikroTik router as a load balancer and then see if I can get our web-based marking system, LESSON, moved over to a container with multiple replicas. Hurrah for redundancy!
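As a rough sketch of the container side of that plan (the image name and port are made up, and kubectl run still created a Deployment in the Kubernetes versions current at the time):

# run three replicas of the app behind a single Deployment
kubectl run lesson --image=registry.example.com/lesson:latest --replicas=3 --port=80
# expose it on a NodePort so the MikroTik router can balance across the nodes
kubectl expose deployment lesson --port=80 --type=NodePort
# scale later as needed
kubectl scale deployment lesson --replicas=5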

Broken Wagon Wheel by Kevin Casper is in the public domain / used under a CC0 license

Odds and ends

Picture of the steles at Nahr el Kalb

Since I last posted, there have been a number of small updates, but nothing that seemed big enough to write about. So I figured it might be worth posting a short summary of what I’ve been up to over the last couple of months.

In no particular order:

FOSDEM
I had the opportunity to visit FOSDEM for the first time last month. Saw lots of cool things, met lots of cool people and even managed to bag a LibreOffice hoodie. Most importantly, it was a chance to build friendships, which have a far higher value than any code ever will.

Wireless access points
I should probably write a proper post about this sometime, but a number of years ago we bought about 30 TP-LINK WR741ND wireless APs and slapped a custom build of OpenWRT on them. We installed the last spare a couple of months ago and ran into problems finding a decent replacement (specific hardware revisions can be quite difficult to find in Lebanon). After much searching, we managed to get ahold of a TP-LINK WR1043ND for testing and our OpenWRT build works great on it. Even better, it has a four-port gigabit switch which will give us much better performance than the old 100Mbps ones.

LizardFS patches
I ran into a couple of performance issues that I wrote some patches to fix. One is in the process of being accepted upstream, while the other has been deemed too invasive, given that upstream would like to deal with the problem in a different way. For the moment, I’m using both on the school system, and they’re working great.

Kernel patch (tools count, right?)
After the F26 mass rebuild, I ran into problems building the USB/IP userspace tools with GCC 7. Fixing the bugs was relatively simple, and, since the userspace tools are part of the kernel git repository, I got to submit my first patches to the LKML. The difference between a working kernel patch and a good kernel patch can be compared to the difference between a Volkswagen Beetle and the Starship Enterprise. I really enjoyed the iterative process, and, after four releases, we finally had something good enough to go into the kernel. A huge thank you goes out to Peter Senna, who looked over my code before I posted it and made sure I didn’t completely embarrass myself. (Peter’s just a great guy anyway. If you ever get the chance to buy him a drink, definitely do so.)

Ancient history
As of about three weeks ago, I am teaching history. Long story as to how it happened, but I’m enjoying a few extra hours per week with my students, and history, especially ancient history, is a subject that I love. To top it off, there aren’t many places in the world where you can take your students on a field trip to visit the things you’re studying. On Wednesday, we did a trip to Nahr el-Kalb (the Dog River) where there are stone monuments erected by the ancient Assyrian, Egyptian, and Babylonian kings among others. I love Lebanon.

Scratch group projects – 2017

Scratch

It’s January, so it must be time for this year’s Scratch projects from my grade 10 students. We’re moving on to Python, but I’ve posted their projects at http://scratch.lesbg.com. Feel free to play them and rate them. This is a first attempt for these students, so do please be gentle with the ratings.

One of my personal favorites is Gravity Clash, which is strangely addictive, given how simple it is.

If you want to check out previous years’ projects, they’re also available at the links at the top left. If you have any comments or suggestions for the site itself, please leave them below.