Posted by: jdieter | October 31, 2015

Where’s my blankey? (aka IPv6 on a flat network)

Boy on beach holding small soft toy while sucking thumb

Familiarity is comforting

At the Lebanon Evangelical School – Loueizeh, we try to push the envelope when it comes to testing new technology.  Our website makes efficient use of some of the latest CSS technology from 1996, and we have just started experimenting with this new-fangled IPv6 technology, also from the mid-’90s.  Cue the Macarena single…

If you’re on a flat network and want to match MAC addresses to IPv6 addresses, ditch ISC’s DHCPv6 server, and instead use dhcpy6d

After years of asking, our ISP has finally given us an IPv6 /48 prefix, so I now have the privilege of setting up an IPv6 network to coexist with our IPv4 network.  Yay!  As an aside, I think we’re one of the first organizations in Lebanon to be working with IPv6.  A couple of years ago, I asked our old ISP about getting an IPv6 address and was told that they didn’t have any.  There were plenty of IPv4 addresses to go around, so why bother?  But I digress…

Our current IPv4 topology is… flat.  Very flat.  We use a single 10.x.y.z internal IP range, given out by two DHCP servers, where x is a fixed number, y is the category of the device and z is the individual address.  The categories allow us to divide devices into guest, staff and student groups.  We are very aware that MAC addresses can be easily spoofed, so these categories are used primarily for traffic shaping and nothing security-related.  I am aware of (and would like to start working with) VLANs, but not all of our networking equipment supports them.

So our primary requirement for IPv6 is that we are able to set up corresponding categories to match our IPv4 categories (y in 10.x.y.z).  Another requirement is that we use some form of static IP addresses for logging purposes and that these static addresses be somehow tied to a device’s corresponding IPv4 address, mainly for convenience.

My first plan was to use SLAAC, which allows us to use 16 bits to specify categories (double what we have with our IPv4 addresses), while still leaving the client 64 bits to come up with its own address.  Unfortunately, SLAAC falls down for us on both categories and static addresses.  To implement categories properly, we’d need to divide our network into VLANs, and we still have a few unmanaged switches that don’t know how to handle them.  And, while some devices just use their MAC addresses to generate their unique address, others (especially the more modern mobile OSes, which automatically implement RFC4941) will generate random unique addresses, which causes problems if we want to track who’s doing what.
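
To make the tracking problem concrete, here’s a minimal Python sketch (mine, not from the post) of how the classic EUI-64 scheme derives a SLAAC interface identifier straight from the MAC. This is exactly what makes those addresses trackable, and what RFC4941 clients avoid by picking random identifiers instead:

def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier SLAAC builds from a MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # wedge ff:fe into the middle
    return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("52:54:00:12:34:56"))  # -> 5054:ff:fe12:3456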

Plan B was to use DHCPv6, which isn’t that much of a stretch, seeing as we’re already using DHCP on our IPv4 network.  Granted, since Android doesn’t work with DHCPv6 (really, Google?), that means all of the Android devices on our network will be stuck on IPv4, but, since we’re still in the experimental stage, I’m ok with that.  With DHCPv6 I can choose which address each device gets, allowing me to specify both categories and static (randomly generated) IP addresses.  So I started putting together an ISC DHCPv6 server, looked at the examples, and… “Where the heck am I supposed to put the MAC address?”
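
For the curious, a fixed-address entry in ISC’s DHCPv6 config looks roughly like the following. You match on the client’s DUID via the dhcp6.client-id option, and the MAC address appears nowhere (the DUID and address below are made up for illustration):

host staff-laptop {
    # match on the client's DUID -- there is no place for a MAC here
    host-identifier option dhcp6.client-id 00:01:00:01:1c:39:cf:88:52:54:00:12:34:56;
    fixed-address6 2001:db8:1:20::2a;
}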

You see, in their infinite wisdom, the IETF decided that MAC addresses were no longer sufficient to tell the DHCPv6 server what IP address we should get.  No, now we get to use these new things called DUIDs, which last forever like MAC addresses, but stay the same for all interfaces on a system, unlike MAC addresses.  Woo-hoo!  There are some small exceptions, especially with a computer or phone as opposed to an embedded device.  The DUID will not automatically be the same if you dual boot.  Or if you wipe the OS.  Or if, like us, you make extensive use of netbooting for system maintenance.  But other than that, DUIDs will last forever.
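
For reference, RFC 3315 defines several DUID flavors, and two of them really are built from a MAC address. A rough Python sketch (my illustration) of how those two are assembled:

import struct
import time

def duid_ll(mac: str) -> bytes:
    """DUID-LL (type 3): type, hardware type (1 = Ethernet), then the MAC."""
    return struct.pack("!HH", 3, 1) + bytes.fromhex(mac.replace(":", ""))

def duid_llt(mac: str) -> bytes:
    """DUID-LLT (type 1): as above, plus seconds since 2000-01-01 UTC."""
    epoch_2000 = 946684800  # 2000-01-01 00:00:00 UTC in Unix time
    now = int(time.time()) - epoch_2000
    return struct.pack("!HHI", 1, 1, now) + bytes.fromhex(mac.replace(":", ""))

print(duid_ll("52:54:00:12:34:56").hex(":"))  # 00:03:00:01:52:54:00:12:34:56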

The best part is that there are multiple ways of generating a DUID (including straight from the MAC address), but the server doesn’t have any say in telling the clients how it wants its DUIDs generated.  That means that clients like NetworkManager will quite happily generate a DUID based on /etc/machine-id, which is itself randomly generated the first time a system is booted.  Not very useful if you want to hand out the same IP address whether a system is being netbooted into maintenance mode or booting normally off the hard drive.

Now, all this wouldn’t be insurmountable, except for the fact that I currently have a database of MAC addresses matched with IPv4 addresses (complete with categories), but, because DUIDs don’t necessarily have anything to do with the MAC address, I have no idea how to match that same device to the IPv6 address I want to give it.

Apparently ISC’s DHCPv6 server has implemented a relatively new specification that allows clients’ link-layer (MAC) addresses to be sent to the DHCPv6 server through a relay, but our network is already flat, so I shouldn’t need to screw around with a relay.

There’s also some ambiguity about the Confirm message in DHCPv6.  The relevant section of the spec says:

When the server receives a Confirm message, the server determines whether the addresses in the Confirm message are appropriate for the link to which the client is attached. If all of the addresses in the Confirm message pass this test, the server returns a status of Success. If any of the addresses do not pass this test, the server returns a status of NotOnLink.

The question is, if I’ve changed the static IP address for a client, is the old address still appropriate for the link?  As the sysadmin, my answer is, “No, please discontinue use of the old address immediately.”  Unfortunately, the ISC DHCPv6 server disagrees with me and will happily confirm the old addresses until the cows come home.
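
Reading the spec charitably, “appropriate for the link” is just a prefix test, which would explain ISC’s behavior: a retired address still falls inside an on-link prefix, so it still passes. A sketch of that reading (mine, using Python’s ipaddress module):

import ipaddress

def confirm_is_appropriate(addr: str, on_link_prefixes: list) -> bool:
    # The test is about the link, not about the specific lease, so even
    # a stale address inside an on-link prefix gets a Success reply.
    a = ipaddress.IPv6Address(addr)
    return any(a in ipaddress.IPv6Network(p) for p in on_link_prefixes)

print(confirm_is_appropriate("2001:db8:1:20::99", ["2001:db8:1::/48"]))  # True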

After a bit of searching, I’ve found a solution that I’m actually quite happy with.  The Leibniz Institute for Solid State and Materials Research in Dresden has released a DHCPv6 server called dhcpy6d written in Python that allows you to match based on MAC address, DUID or a combination of both.  There have been a few bugs in it, and it’s not yet handling Confirm messages the way I’d like it to, but upstream has been very responsive, and I’m looking forward to having an IPv6 system that works for us.
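
To give a flavor of the configuration, here’s an abridged sketch based on my reading of the upstream examples; treat the option names as approximate and check the documentation that ships with dhcpy6d:

# /etc/dhcpy6d.conf (sketch -- names and values are illustrative)
[dhcpy6d]
interface = eth0
# identify clients by MAC address instead of (or as well as) DUID
identification = mac
store_config = file
store_file_config = /etc/dhcpy6d-clients.conf

[address_staff]
# build each client's address from its MAC inside our prefix
category = mac
pattern = 2001:db8:1:20:$mac$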

We’re not quite up and running 100% yet, but I hope to be there by the second week of November.  Of course, I had hoped to get some work done during this long weekend, but God had other plans (at least, judging by the lightning that destroyed quite a bit of my networking equipment at home just before school let out for the weekend).

Posted by: jdieter | August 24, 2015

Solving the mystery of the disappearing bluetooth device

This is a true[1] story

One of the features my laptop comes with is Bluetooth, which I’ve found to be quite handy considering all the highly important uses I have for Bluetooth (using Bluetooth tethering on my phone when traveling, controlling my presentations with my phone, and using a Wii-mote for playing SuperTuxKart… er, a portable Bluetooth controller with built-in accelerometer to analyze the consistency of the matrices used when rendering three-dimensional objects onto a two-dimensional field).

About three months ago, I started to run into problems. Not the easy kind of problem where “BUG: unable to handle kernel paging request at 0000ffffd15ea5e” brings the laptop to an abrupt stop, but instead the kind of problem that causes real trouble.

My Bluetooth module starts to randomly reset itself. I’ll be working merrily, trying to connect my phone or the… portable Bluetooth controller… and, halfway through the process, it will hang. Kernel logs show that the Bluetooth module has been unplugged from the USB bus and then reconnected. Which, when you think about it, makes a whole lot of sense, given that the Bluetooth module is built into the WiFi card which is screwed onto the motherboard.

When faced with kernel logs that boggle the mind, the most logical thing to do is downgrade the kernel. I know that I was able to successfully… analyze the matrices used for, oh, whatever it was… back at the beginning of June, which means I had working Bluetooth on June 1. Let’s see what kernel was latest then, download and install it, boot from it, and…

kernel: usb 8-4: USB disconnect, device number 3
kernel: usb 8-4: new full-speed USB device number 4 using ohci-pci


Ok, the hardware must be dying.  Stupid Atheros card.  No idea why it’s just the Bluetooth and not the WiFi as well, but we’re in Ireland and I’m on eBay, so I’ll just order another one.  Made by a different company.  A week later, a slightly used Ralink combo card shows up. I plug it in, fire her up, and…

kernel: usb 8-4: USB disconnect, device number 3
kernel: ohci-pci 0000:00:13.0: HC died; cleaning up
kernel: ohci-pci 0000:00:13.0: frame counter not updating; disabled

Double #$@&%*! Now the Bluetooth module is completely gone and the only way to get it back is to reboot. Grrrrr.

At this point I’ve got a hammer in my hand, my laptop in front of me, and the only thing keeping me from submitting a video for a new OnePlus One is my wife warning me that we’re not going to be buying me a new laptop any time this decade.

So I take a deep breath, calmly return the hammer to the toolbox (no, dear, I have no idea how that dent got on the toolbox), and decide to instead go down the road less traveled. I open up Fedora’s bugzilla and start preparing my bug report, taking special care to only use words that I’d be willing to say in front of my children. “…so the Bluetooth module keeps getting disconnected. It’s almost like the USB bus is cutting its power for some stupid…”

Wait a minute! Just before we traveled to Ireland, I remember experimenting with PowerTOP. And PowerTOP has this cool feature that allows you to automatically enable all power saving options on boot. And I might have enabled it. So I check, and, yes, I have turned on autosuspend for my Bluetooth module. I turn it off, try to connect my… portable Bluetooth controller… and it works, first time. I do some… matrix analysis… with it and everything continues to work perfectly.
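
For anyone else bitten by this: the knob PowerTOP flips lives in sysfs, so it’s easy to check and undo by hand. A sketch (run as root; the 8-4 device path matches the “usb 8-4” in my kernel logs above, so adjust it for your own hardware):

#!/usr/bin/env python3
# Disable USB autosuspend for one device via sysfs (run as root).
from pathlib import Path

control = Path("/sys/bus/usb/devices/8-4/power/control")
print("current:", control.read_text().strip())  # "auto" = autosuspend enabled
control.write_text("on")                        # "on" = never autosuspend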

So I am an idiot. I close the page with the half-finished bug report and go to admit to my wife that I just wasted €20 on a WiFi card that I didn’t really need.  And, uh, if any Atheros or Ralink people read this, well, I’m sorry for any negative thoughts I may have had about your WiFi cards.

[1] Well, mostly true, anyway. Some of the details might be mildly exaggerated.

Posted by: jdieter | June 30, 2015

On vacation

Irish sunshine

So we’ve managed to make it to Ireland to visit my wife’s family for the summer. It took only a few hours of flying and the kids were great, and now we get to enjoy the beautiful Irish weather.

There are a lot of family things and a few technical things I’d like to get done this summer. Based on last summer’s record, I’ll probably accomplish most of the family stuff, but I’m not too hopeful that I’ll actually get to any of the tech stuff. And that’s probably for the best.

Posted by: jdieter | April 29, 2015

Managing the unmanageable

Sisyphus pushing a boulder up the mountain

Uphill battle

When I first started working at LES, sometime in the last century, the computers were networked together using some high-tech gizmos called “hubs”. These hubs would reach a maximum speed of 10Mbps on a good day, if there were only two devices connected and the solar flares were at a minimum.

Time marched on and we upgraded to 10/100Mbps hubs, then 10/100Mbps switches, and then, finally, in the last few years, to unmanaged gigabit switches. One of the biggest problems with using unmanaged switches is that the network can be brought to a standstill with a simple patch cable plugged into two network sockets. I’ve become pretty adept at recognizing the signs of a network switching loop (the lights on the switches are flickering like the last few seconds on the timer in Mission Impossible, the servers are inaccessible, the teachers are waiting outside my office with baseball bats). One of our network loop disasters… er, hiccups… even managed to anonymously make it to a site dedicated to technology-related problems.

Over the last month, though, I had lots of small problems that never quite reached the level of crashing the network. Our Fedora systems, connected to the server via NFS, would occasionally freeze for a few seconds and then start working again. Our accountants, who are running Windows, complained that their connection to the server was being broken a couple of times each day, causing their accounting software to crash. And pinging any server would result in a loss of ten to fifteen packets every ten minutes or so.

I checked our switches for the flicker of death and came up dry. I tried dumping packets from a server on one side of the school to a server on the other side of the school and consistently reached 1Gbps. In desperation, I retipped the Cat6 cable connecting the switches that form the backbone of our network. All to no avail. I decided to wait until evening and then unplug the switches one at a time until I found the problem. The problem disappeared.

The next morning it was back. I had two options. Disconnect the switches one port at a time in the middle of the school day, while teachers, students and accountants are all trying to use the system. Or put in a request for some managed switches and see if they could help us figure out what the heck was going on. Hundreds of irritated users outside my door… or new kit. It was a hard call, but I went for the new kit.

We started with an eight-port MikroTik switch/router, and, after I tested it for a day, we quickly grabbed a couple more 24-port MikroTik switches (most of our backbone locations have nine or ten ports that need to be connected, and MikroTik does either 8 or 24 ports).

After we got the three core locations outfitted with switches, I quickly got messages on the switches pointing to a potential network loop on a link to one of our unmanaged leaf switches in the computer room. That switch was connected to another unmanaged five-port switch that had apparently had a bad day and decided to start forwarding packets back through itself.

I replaced the five-port switch with a TP-Link five-port router running OpenWRT and, just like that, everything was back to normal.
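
I suspect the managed kit could point at the loop because it actually speaks spanning tree. On the OpenWRT box, enabling STP on the bridge is a one-line affair; a sketch in the 2015-era config format, with made-up interface names:

# /etc/config/network (excerpt; hypothetical interface names)
config interface 'lan'
        option type 'bridge'
        option ifname 'eth0.1 eth0.2 eth0.3 eth0.4'
        option stp '1'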

I am never going back to unmanaged switches again. Having managed switches as our network’s backbone reduced the time to find the problem by a factor of 10 to 20, and, if we’d had managed switches all the way through the network from the beginning, we could have zeroed in directly on the bad switch rather than spending weeks trying to work out what the problem was.

So now we’re back to a nice quiet network where packet storms are but a distant nightmare. Knock on wood.

Myth of Sisyphus detail #1 by AbominableDante, used under a CC BY-NC-ND license

Posted by: jdieter | March 31, 2015

Locks in the classroom – 2015

For the third year now, our grade nine students have been doing 3D modeling using Blender. A couple of weeks ago, our students finished up their first assignments, and I gave the top modelers the option of showing off their work. So, without further ado, here are the top models in each of the three grade nine classes.

Treasure Chest by Joe – CC BY 4.0 – Source

Lock by Haidar – CC BY 4.0 – Source

Lock by John-Paul – CC BY-SA 4.0 – Source

Lock by Atwi – CC BY 4.0 – Source

Lock by Khodor – CC BY 4.0 – Source

Lock by Al Hassan – CC BY 4.0 – Source

Lock by Mokdad – CC BY 4.0 – Source

Lock by Grace – CC BY 4.0 – Source

Lock by Elie – CC BY-SA 4.0 – Source

Posted by: jdieter | February 28, 2015

Scratching an itch



Last year I started teaching programming to my grade 10 classes. I started with Python, which is easy to understand, forces good programming practices, and is one of my favorite languages. It was a complete disaster. I had four or five students in each class who understood what I was doing, and the rest were completely lost, which says a whole lot about my teaching. In 2014, I chatted with Matthew Miller about my Python problem, and he suggested teaching my students Scratch.

For those (like me) who don’t know about it, Scratch is a graphical programming language that’s designed to be easy to use while still allowing the full power of a proper programming language. The benefit of teaching programming using Scratch is that the students get quick graphical feedback on what works and what doesn’t, and syntax errors are pretty much impossible. Once they understand the basic concepts of programming, it’s then easier to switch to something like Python.

I switched to Scratch, and the students loved it. (Or, at the very least, liked it better than Python.) I ended the school year with a group assignment that was partially graded based on votes by the rest of the classes. I had great ideas for making the group assignments available online, but never went anywhere with it. Fast-forward to this year where we’ve started with Scratch and are now almost done with it and ready to move on to Python. And, since I now have a deadline, I’ve put together a simple site so they can vote on each others’ group projects.

At the moment, the site has last year’s projects and is open for anyone to rate, so if you want to try out their projects, head on over, give them a shot, and rate them. This was a first attempt for both the students and myself, so please be gentle on the ratings.

Sometime in the next few weeks I’ll post this year’s projects. They will be available to play, but initially only students or teachers in the school will be able to rate them. Once I’ve scored them, I’ll open up the ratings to everybody.

If you have any comments or suggestions for the site itself, please leave them below.

Posted by: jdieter | January 31, 2015

Multiseat and anaconda bugs

Clouds over a mountain

Those look like storm clouds…

A year ago, I put together a post about the multiseat Fedora systems we’re using in our school. Over the past month, I’ve been putting together an upgrade from our Fedora 19 image to Fedora 21.

While doing the upgrade, I ran into a few bugs, and the first one was a doozy! Roughly half the time our multiseat systems started, the login screen would only show on two or three of the four seats. The only way to fix it was to restart the display manager, and even that only had a 50% chance of success.

At first I tried bodging around the bug by staggering the timing of Xorg’s startup, but that only made things worse. So I started looking at the logs and then at the Xorg code. It became obvious that the problem was that the first seat (seat0) would try to claim all the GPUs on the system. If it beat the other seats to their GPUs, they would, oddly enough, refuse to start. I put together a patch, filed a bug, and watched as those who know a lot more about Xorg’s internals took my ugly patch and made it beautiful. The patch has been merged into Xorg 1.17, and I’m hoping we’ll get it backported for F20 and F21, as I really don’t want to have to maintain internal Xorg packages until we switch to F22.

There do seem to be a couple of other bugs related to lightdm/Xorg, but they’re far rarer and I haven’t spent much time tracking them down, much less filing bugs. Occasionally lightdm starts the X server but never gets a signal back saying that it’s ready, so they both sit there waiting for the other process. And, far more rarely, the greeter crashes, which causes lightdm to shut down the seat. I think lightdm should retry a few times, but either it doesn’t or I haven’t found the right config option yet.

We did run into one interesting race condition in anaconda when we started mass-installing F21 on our systems. We use iPXE and Fedora’s PXE network install images with a custom kickstart to do the install (in graphical mode, because pretty installs make it less likely that a student will press the reset button while the install is progressing). On some systems, I’d get an error message that basically said that a repository that was supposed to be enabled had disappeared, which would crash anaconda.
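
The kickstart side of that is pleasantly small. Here’s a sketch of the relevant bits (the URL and partitioning are placeholders, not our actual config); the graphical keyword is what keeps the pretty progress screen up, and iPXE just passes inst.ks= on the kernel command line:

# ks.cfg (sketch -- URL and partitioning are placeholders)
graphical
url --url=http://mirror.example.com/fedora/releases/21/Everything/x86_64/os/
zerombr
clearpart --all --initlabel
autopart
reboot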

Thanks to anaconda’s wonderful debugging tools, I was able to work out what list was being emptied and finally tracked it down to a race between the backend filling the frontend with its list of repositories and the frontend telling the backend to remove any repositories that aren’t in its list of repositories. Another ugly patch attached to the bug report, and we’ll see what happens with this one. At least I’m able to rebuild the squashfs installer image so the bug is fixed for us internally.
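
This isn’t anaconda’s actual code, but the shape of the race is easy to show: the frontend takes its snapshot of the repo list before the backend has filled it, then asks for everything it doesn’t recognize to be removed.

# Toy illustration of the race (not anaconda's code)
repos = []                      # stands in for the shared repo list

def backend_fill():
    repos.extend(["fedora", "updates"])   # backend publishes its repos

def frontend_prune(snapshot):
    # frontend removes anything it doesn't know about
    repos[:] = [r for r in repos if r in snapshot]

snapshot = list(repos)          # frontend looks too early: []
backend_fill()                  # backend fills the list...
frontend_prune(snapshot)        # ...and the frontend promptly empties it
print(repos)                    # [] -- cue "repository has disappeared"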

So most of our computers have now been upgraded to Fedora 21 and the reaction from our students has been positive. Now to get some Fedora 22 test systems built…

Posted by: jdieter | December 31, 2014

Us Versus Them

LEGO fire being put out by team


I was reading the backlog of the Fedora development mailing list and came across a post in which Richard Hughes made a very interesting comment:

I know lots of Red Hat developers worn down by the low-level harassment on this mailing list, so much so, that they just stop pushing the boundaries and go work on something else cool, e.g. ChromeOS.

I’ve been following this particular mailing list for many years, and the sad thing is, I think he’s right. There’s this underlying current of “us versus them” that can pop up, especially in longer-running threads, and “them” is someone with a redhat.com email address.

On some levels this makes sense. Red Hat is the single largest entity in Fedora and many (if not most) of the movers and shakers in Fedora are Red Hat employees. A quick glance at the Fedora 21 System Wide Changes shows many more Red Hat employees than not. Is it any wonder that individual contributors can feel a bit like a sailboat in the way of an aircraft carrier?

So, is this some conspiracy to keep Fedora under Red Hat control? Is it something we should fight against? Or is there a reasonable explanation for Red Hat’s influence?

First off, there’s the question of whether people are hired at Red Hat to work on Fedora or whether they’re hired because of their work on Fedora. I had the opportunity at Devconf earlier this year to sit down with Patrick Uiterwijk, who did most of the work on Fedora’s OpenID provider, and was then hired by Red Hat because of that work. Patrick’s is not the only story like that. While not all competent Fedora contributors are Red Hat employees, Red Hat employees who contribute to Fedora are generally pretty darn competent, and competency in Fedora is rewarded with influence.

There’s also the fact that Red Hat pays people to work on Fedora. Many individual contributors are working on Fedora in their spare time. While this doesn’t necessarily affect the quality of their work, it does tend to affect the quantity. To give an example, at DevConf, I also talked with Stephen Gallagher about joining the Fedora Server working group. After DevConf, I signed up for the mailing list and then did… nothing. I’m the sysadmin and a teacher at my school, and at home I’m a husband and father of four children under six. While I have great intentions of helping out with the Server working group, it’s just not high enough on my list of priorities for me to have the time… and I suspect I’m not the only individual contributor in that boat.

Finally, there’s the fact that Red Hat’s employees actually get to know each other, at least to some extent. One of the big things I’ve learned in my years working here in Lebanon is the importance of relationship. It’s a lot easier to work with someone after you’ve sat down with them, had a coffee (or, in my case, a Coke) and chatted. This was the main reason I enjoyed DevConf and one reason I really wish I could make it to one of the Flock conferences.

So where does this leave us? Red Hat does have a large influence on Fedora. It’s not a conspiracy, it’s life, and attacking Red Hat employees because of that influence is counterproductive.

So, going back to Richard’s original message, we need to stop tearing each other down. When people speak, let’s assume good faith, and not assume that any ideas we disagree with will spell the end of Fedora, Linux or the world as we know it. Most of all, we need to make a conscious choice to value each other, even when we disagree.

Have a great 2015!

Posted by: jdieter | November 1, 2014

Autobackup to Linux from the Samsung NX300M

Camera on floor

Samsung NX300M

While we were on vacation back in the States during the summer, our camera started doing strange things with the flash, so we decided it was time to get a new one. After much reading and debating, we settled on the Samsung NX300M. It’s been a major step up from our old point-and-shoot, and we’ve enjoyed the quality of the shots a lot.

Being the nerd that I am, I started wondering what OS powered the camera and whether it was hackable. After doing some in-depth research (typing “hack NX300M” into Google), I came across these two fascinating articles about the NX300. Basically, any code that you put in a particular script in the root directory of your SD card will be run as root during camera bootup. I immediately tried making my own on my NX300M, but, unfortunately, it didn’t work. After some experimentation, I found that the initial firmware for the camera (1.10) does run the script, so I downgraded my firmware and got to work.

I decided to make the killer app for a Wifi-enabled camera (at least, I think it’s a killer app). The NX300M can do some pretty cool things with its built-in Wifi, like acting as a hotspot so it can send pictures to your phone. It can also do some form of automatic backup to your Windows desktop, but, as we don’t have any Windows systems in our house, I was unable to try it. The annoying thing about the Windows automatic backup feature, though, is that you have to manually switch to the Wifi setting and choose “Auto Backup” to run it. If I’m going to be doing automatic backups from my camera, I want them to be truly automatic, oddly enough. And I want the camera to back up to my wife’s laptop, which is running Fedora, preferably using a secure copy method like rsync over ssh or scp.

I spent several days trying to work out how to make this all work, and here are some notes from my attempt:

  • There is one root filesystem and two support filesystems, all running ubifs.
  • The root filesystem is read-only, and, though I can remount it as rw, any writes seem to go to /dev/null. I don’t understand ubifs enough to know whether this is expected behavior.
  • The NX300M seems to be running the same ancient version of Tizen that the NX300 is, with a couple of proprietary binaries that do all the heavy lifting. Pretty much everything that Georg wrote about the NX300 in the above pages applied to the NX300M.
  • Working out how to start Wifi from the boot script was very difficult. The NX300M uses a very old version of connman plus a funky daemon called net-config to actually power the Wifi card up and down. Neither has a cli tool available on the NX300M to control it, and each can only be controlled via dbus. Running dbus introspection on net-config causes the camera to reboot. Who knew?
  • /dev/log_main has a log of pretty much everything, including dbus commands. This is what I used to figure out what commands I needed to use to get the Wifi up.
  • The NX300M comes bundled with an SSH client (though no SSH server), which makes the backup method pretty simple.
  • Tizen has a power manager that’s run by the NX300M, but I’m not sure whether it’s actually being used. It appeared to me from the logs that the proprietary UI app was turning off the display and then the camera. To keep the camera from shutting off during the automatic backup, I ended up using xdotool to send keypresses to the UI. Yes, that’s ugly.
  • The UI understands the concept of time zones, but everything at the OS level is in UTC, and, at least as far as I can tell, the camera treats the UTC time as if it were local time.
  • There is an RTC on the camera, but it doesn’t seem to support wakeup events. :(

So, in the end, I was able to put together a script that runs on boot, checks for any new pictures and, if there are any, connects to our Wifi and scp’s them to my wife’s laptop. It then appends each filename to a hidden file in the DCIM directory on the SD card that keeps track of which files have been copied across. I track this on the camera rather than checking what’s on the laptop because, if my wife decides to rename a file or the folder it’s in, I don’t really want to send the picture across again.
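
The real script is in the GitHub repo below; the core of it amounts to something like this simplified Python rendering (the paths and destination are placeholders, and the Wifi wrangling is omitted):

#!/usr/bin/env python3
# Simplified sketch of the backup pass: copy photos we haven't sent
# yet, then record each one as sent.
import subprocess
from pathlib import Path

DCIM = Path("/mnt/mmc/DCIM")                   # assumed SD card mount point
SENT = DCIM / ".sent"                          # hidden list of copied files
DEST = "wife@laptop.example:/home/photos/"     # placeholder destination

sent = set(SENT.read_text().splitlines()) if SENT.exists() else set()
with SENT.open("a") as log:
    for photo in sorted(DCIM.rglob("*.JPG")):
        name = str(photo.relative_to(DCIM))
        if name in sent:
            continue                           # already copied, skip it
        subprocess.run(["scp", str(photo), DEST], check=True)
        log.write(name + "\n")                 # record only after success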

I’m releasing my code under the GPLv2+, and it’s available on GitHub. I hope somebody finds it useful.

Posted by: jdieter | October 22, 2014

Using FreeIPA as a backend for DHCP


Yeah, this…

Disclaimer: This is not an official guide and in no way represents best practices for FreeIPA. It is ugly and involves the digital equivalent of bashing on screws with a hammer. Having said that, when nobody has invented the right screwdriver yet, sometimes you just have to hammer away.

First, some history. We’ve been running separate DHCP, DNS and LDAP servers since we switched from static IP addresses and a Windows NT domain somewhere around ten years ago. The DHCP server was loosely connected with the DNS server, and I had written this beautifully complex (read: messily unreadable) script that would allow you to quickly add a system to both DHCP and DNS. A few months ago, we migrated all of our users over to FreeIPA, and I started the process of migrating our DNS database over. Unfortunately, this meant that our DHCP fixed addresses were being configured separately from our DNS entries.

Last week I investigated what it would take to integrate our DHCP leases into FreeIPA. First I checked on the web to see if something like this had already been written, but the closest thing I could find was a link to a design page for a feature that’s due to appear in FreeIPA 4.x.

So here’s my (admittedly hacky) contribution:

  1. sync_dhcp – A bash script (put in /srv, chmod +x) that constantly checks whether the DNS zone’s serial number has changed, and, if it has, runs…
  2. …a Python script (put in /srv, chmod +x) that regenerates the list of fixed-addresses in /etc/dhcp/hosts.conf
  3. dhcpd.conf – A sample dhcpd.conf (put in /etc/dhcp) that uses the list generated by the Python script above
  4. sync-dhcp.service – A systemd service (put in /etc/systemd/system) to run sync_dhcp on bootup
  5. make_dns – A script (chmod +x) that allows the sysadmin to easily add new DNS entries with a MAC address

sync_dhcp does need to know your domain so it knows which DNS zone serial to check, but other than that, the first four files should work with little or no modification. You will need to create a dnsserver user in FreeIPA, give the user read access to DNS entries, and put its password in /etc/dhcp/dnspasswd (readable only by root).
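
sync_dhcp itself is a bash loop, but the logic is just “poll the SOA serial, regenerate on change”. Here’s a Python rendering of the same idea (the zone, script path and reload command are placeholders; assumes the dnspython package):

#!/usr/bin/env python3
# Watch the zone's SOA serial; rebuild the DHCP host list when it changes.
import subprocess
import time
import dns.resolver  # from the dnspython package

ZONE = "example.com"   # placeholder: your FreeIPA-managed zone
last_serial = None

while True:
    serial = dns.resolver.resolve(ZONE, "SOA")[0].serial
    if serial != last_serial:
        last_serial = serial
        subprocess.run(["/srv/regen_hosts"], check=False)    # stand-in for the step-2 script
        subprocess.run(["systemctl", "restart", "dhcpd"], check=False)  # assumed reload step
    time.sleep(10)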

make_dns makes a number of assumptions that are true of our network, but may not be true of yours. It first assumes that you’re using a 10.10.x.x network (yes, I know that’s not right; it’s a long story) and that 10.10.9.x and 10.10.10.x IPs are for unrecognized systems. It also requires that you’ve installed freeipa-admintools and run kinit as a user with permissions to change DNS entries, as it’s basically just a fancy wrapper around the IPA cli tools.

Bent Screw Hole Backyard Metal Macros by Steven Depolo used under a CC BY 2.0 license
