5210R - CentOS 8/AlmaLinux 8 ISO building
Developer diary entry about re-spinning CentOS 8 and AlmaLinux 8 to provide us with a custom BlueOnyx 5210R installation ISO.
Editor's Note: This guide was initially published when I built the first CentOS 8 ISO image for BlueOnyx 5210R. Since then this page has gotten many search engine hits from people looking for a solution to their own ISO image building ordeals. With the EOL of CentOS 8 and the emergence of AlmaLinux 8 we have long since switched BlueOnyx 5210R to AlmaLinux 8 as a base, and our installation ISO uses a similar procedure as outlined below. Hence I wrote a more straightforward guide with instructions on how to build a custom AlmaLinux 8 ISO image.
When CentOS 8 came out I spent a week trying to build an ISO image for an easy install of our CentOS 8 based BlueOnyx 5210R, and this time around it was quite an ordeal. The result provides the same level of perfection as previous ISO images, but getting there wasn't easy or fast. I'd like to take a moment to explain why, and maybe it also helps others who run into the same issues when re-spinning CentOS 8 to generate custom installation media.
How we did it before:
I've been building ISO images for BlueOnyx since the CentOS 5 days. So that's CentOS 5 (32-bit), Scientific Linux 6, CentOS 6 (both 32-bit and 64-bit), CentOS 7, Virtuozzo Linux 7 ... so what surprise could CentOS 8 hold in that regard?
Let's see: we've done everything in the book there. The build process for the very first ISO images for CentOS 5 used custom scripts to determine the install order, RPM selection was done by process of elimination and trial and error, the comps.xml was a piece of manual work and in the end the work directory was burned off to a CD.
A bit later on we used whatever Anaconda provided back then and later switched to Revisor and Pungi for the CentOS 5 and EL6 related builds. For CentOS 7 we used Pungi v3.12-3 from (I think) Fedora 28, with the SRPM rebuilt on CentOS 7.
Of course the sweet part there is that tools like Revisor or Pungi work off a Kickstart script, pull all the required RPMs (and their dependencies) off the mirrors, create the 1st and 2nd stage images and the Grub configuration of the ISO, handle EFI and netboot and whatnot. Once you have that Kickstart script sorted out you can fully automate the ISO builds, and the result will usually be spot on and work reliably whenever you need an updated ISO again. It eliminates all the tedious and laborious manual steps that are usually involved in that process.
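To give an idea of how little that takes: a Pungi/Revisor style Kickstart for such a build is not much more than a few repo definitions and a package list. The repository URLs below are placeholders and the group names are just examples, not our actual build configuration:

# Minimal sketch of a re-spin Kickstart (placeholder URLs and groups):
lang en_US.UTF-8
keyboard us
timezone UTC
repo --name=base --baseurl=http://mirror.example.com/centos/7/os/x86_64/
repo --name=updates --baseurl=http://mirror.example.com/centos/7/updates/x86_64/
repo --name=blueonyx --baseurl=http://repo.example.com/BlueOnyx/5209R/
%packages
@core
@base
@blueonyx
%end

Point the tool at a file like that and it resolves the dependencies, fetches the RPMs and masters the ISO.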
How does that work on CentOS 8?
I was initially delighted when I noticed that EPEL for EL8 already had a "pungi" RPM, but when it came to the point where I actually needed it that delight quickly turned into utter chagrin. That Pungi is v4.16 and the difference between the one I used for CentOS 7 and this one couldn't be bigger. For starters the main executable is no longer named "pungi", but "pungi-koji", and the whole mechanism of how it works and how it's configured is different.
I tried to make heads or tails of this by reading up on the documentation, but I didn't get very far. The documentation has a sample config file for re-spinning Fedora 28 (the latest is FC31) and it quickly became clear that some of the config options mentioned there had already been deprecated. The documentation is already that far behind.
My usually good Google-Fu turned up nothing about how to use "pungi-koji" and so far nobody has been kind enough to share an actually working configuration for re-spinning CentOS 8 with "pungi-koji". Brian Stinson from CentOS mentioned they were using pungi-koji in a recent talk in Budapest (videos or transcripts of it aren't online), so I poked around a bit in git.centos.org and koji.mbox.centos.org. I found some kickstarts they were using and the comps.xml for CentOS 8.0.0.1905 (which is also on their ISO), but nothing that really gave me examples or pointers.
It's clear that CentOS uses Koji as their build infrastructure and pungi-koji seems to tie into that neatly. It also appears to work stand-alone - if you know what to do. I don't - yet.
The good, the bad and the ugly old way:
So I decided it was back to basics: create the 5210R ISO the same way we did back in the "dark ages" 11 years ago. Take the CentOS 8 ISO and do a re-spin by adjusting the isolinux.cfg and grub.cfg, cleaning out all RPMs and stuffing the CD with just the RPMs we need. Which also required a new comps.xml.
The obvious first step was to mount the CentOS-8-x86_64-1905-dvd1.iso, rip it apart and copy all files to our work directory. The structure and directories there hold no surprises: there are the "AppStream" and "BaseOS" directories that hold the two local DNF/YUM repositories the ISO uses. I cleaned both out entirely.
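In shell terms that boils down to something like this. The ISO location and mount point are placeholders; the work directory is the one used throughout the rest of this article:

# Mount the stock ISO and copy its contents into the work directory:
mount -o loop /home/ISO/CentOS-8-x86_64-1905-dvd1.iso /mnt/iso
rsync -a /mnt/iso/ /home/build_cd.5210R/5210R/
umount /mnt/iso
# Clean out the bundled RPMs from both repositories:
rm -f /home/build_cd.5210R/5210R/BaseOS/Packages/*.rpm
rm -f /home/build_cd.5210R/5210R/AppStream/Packages/*.rpm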
Our BlueOnyx 5210R YUM repository already has a comps.xml that defines the group "BlueOnyx" and its RPMs. So we can do a "yum groupinstall blueonyx" that gets us everything we need aboard.
Anaconda needs the "base" and "core" groups in order to perform an install, and we also want at least a minimal install, or what CentOS calls the "Custom Operating System" group in their comps.xml.
So how do we get all the RPMs from these various groups into the "BaseOS" repository of our ISO?
It's as simple as that:
# Get all needed RPMs:
LC_ALL=C /usr/bin/dnf-3 -y groupinstall base --downloadonly --downloaddir=/home/build_cd.5210R/5210R/BaseOS/Packages/ --installroot=/home/build_cd.5210R/yum-env/root/ --releasever=8
LC_ALL=C /usr/bin/dnf-3 -y groupinstall core --downloadonly --downloaddir=/home/build_cd.5210R/5210R/BaseOS/Packages/ --installroot=/home/build_cd.5210R/yum-env/root/ --releasever=8
LC_ALL=C /usr/bin/dnf-3 -y groupinstall "Custom Operating System" blueonyx --downloadonly --downloaddir=/home/build_cd.5210R/5210R/BaseOS/Packages/ --installroot=/home/build_cd.5210R/yum-env/root/ --releasever=8
LC_ALL=C /usr/bin/dnf-3 -y install dnf yum NetworkManager dracut-config-generic dracut-norescue firewalld kernel nfs-utils rsync tar dnf-utils anaconda grub2 grub2-efi-x64 efibootmgr efivar-libs shim-x64 authconfig chrony firewalld memtest86+ syslinux pv --downloadonly --downloaddir=/home/build_cd.5210R/5210R/BaseOS/Packages/ --installroot=/home/build_cd.5210R/yum-env/root/ --releasever=8
This downloads all required RPMs that we need for the 5210R ISO.
In order to rebuild the repository data for this newly populated "BaseOS" ISO repository we need a custom comps.xml. No biggy. We can create one by hand! Oh, wait. That's a whopping 1742 RPMs in that repository. That would require a lot of typing. \o/
Yum to the rescue again:
cat /home/build_cd.5210R/yum-env/compsbuilder/comps.xml_header > /home/build_cd.5210R/yum-env/compsbuilder/comps.xml
LC_ALL=C /usr/bin/dnf-3 list --downloadonly --downloaddir=/home/build_cd.5210R/5210R/BaseOS/Packages/ --installroot=/home/build_cd.5210R/yum-env/root/ --releasever=8 | awk {'print $1'} | sed -e 's@.i686@@'| sed -e 's@.x86_64@@'|sed -e 's@.noarch@@'|sort -u | awk {'print " <packagereq variant=\"BaseOS\" type=\"mandatory\">"$1"</packagereq>"'} >> /home/build_cd.5210R/yum-env/compsbuilder/comps.xml
cat /home/build_cd.5210R/yum-env/compsbuilder/comps.xml_footer >> /home/build_cd.5210R/yum-env/compsbuilder/comps.xml
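Each line that ends up in the middle section of the comps.xml then looks like this (with "bash" as an example package name):

 <packagereq variant="BaseOS" type="mandatory">bash</packagereq>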
Recall we used YUM with the switch --installroot to download the RPMs? Our /home/build_cd.5210R/yum-env/root/ looks like this:
[root@5210r build_cd.5210R]# tree /home/build_cd.5210R/yum-env/root/
/home/build_cd.5210R/yum-env/root/
├── etc
│   └── os-release
└── var
    ├── cache
    │   └── dnf
    │       ├── AppStream-31fadb45047f5928
    │       │   ├── mirrorlist
    │       │   └── repodata
    │       │       ├── gen
    │       │       │   └── groups.xml
    │       │       └── repomd.xml
    │       ├── AppStream-filenames.solvx
    │       ├── AppStream.solv
    │       ├── BaseOS-586be817612a3cb1
    │       │   ├── gen
    │       │   │   └── groups.xml
    │       │   └── repomd.xml
    │       ├── BaseOS-filenames.solvx
    │       ├── BaseOS.solv
    │       ├── BlueOnyx-5210R-93e8b6f7899e21b9
    │       │   ├── gen
    │       │   │   └── groups.xml
    │       │   └── repomd.xml
    │       ├── BlueOnyx-5210R-filenames.solvx
    │       ├── BlueOnyx-5210R.solv
    │       ├── BlueOnyx-OS-5210R-941ee16655c006a7
    │       │   ├── gen
    │       │   │   └── groups.xml
    │       │   └── repomd.xml
    │       ├── BlueOnyx-OS-5210R-filenames.solvx
    │       ├── BlueOnyx-OS-5210R.solv
    │       ├── expired_repos.json
    │       ├── extras-1bc32aa9ba0bb1a1
    │       │   └── repomd.xml
    │       ├── extras-filenames.solvx
    │       ├── extras.solv
    │       └── tempfiles.json
    ├── lib
    │   ├── dnf
    │   │   ├── history.sqlite
    │   │   ├── history.sqlite-shm
    │   │   └── history.sqlite-wal
    │   └── rpm
    │       ├── Basenames
    │       ├── Conflictname
    │       ├── Dirnames
    │       ├── Enhancename
    │       ├── Filetriggername
    │       ├── Group
    │       ├── Installtid
    │       ├── Name
    │       ├── Obsoletename
    │       ├── Packages
    │       ├── Providename
    │       ├── Recommendname
    │       ├── Requirename
    │       ├── Sha1header
    │       ├── Sigmd5
    │       ├── Suggestname
    │       ├── Supplementname
    │       ├── Transfiletriggername
    │       └── Triggername
    └── log
        ├── dnf.librepo.log
        ├── dnf.log
        ├── dnf.rpm.log
        └── hawkey.log

22 directories, 66 files
The /etc/os-release in it is a copy of the CentOS 8 /etc/os-release. The DNF cache inside that --installroot contains all the info about the downloaded RPMs, and we simply let DNF/YUM present a list, which we parse for the RPM names and use to generate the middle section of our new comps.xml. That middle section we then augment with a hand-crafted top and bottom part. The top part contains the <group></group> bits for the "core" and "base" groups from the original CentOS 8 ISO, just with the paths adjusted so that some of the strays in there point to the "BaseOS" repo and not the "AppStream" repo. Additionally there is the top of the <group> declaration for our "BlueOnyx" group. The bottom part we append is just the closing tags at the end of the document.
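Roughly, the skeleton provided by that hand-crafted header and footer looks like this. The "core" and "base" groups are taken from the stock comps.xml; the BlueOnyx group is trimmed down here just to illustrate where the generated package list slots in:

<?xml version="1.0" encoding="UTF-8"?>
<comps>
  <group>
    <id>core</id>
    <name>Core</name>
    <!-- full group definition copied from the stock CentOS 8 comps.xml -->
  </group>
  <group>
    <id>base</id>
    <name>Base</name>
    <!-- full group definition copied from the stock CentOS 8 comps.xml -->
  </group>
  <group>
    <id>blueonyx</id>
    <name>BlueOnyx</name>
    <default>true</default>
    <uservisible>true</uservisible>
    <packagelist>
      <!-- the generated <packagereq> lines from above go here -->
    </packagelist>
  </group>
</comps>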
Problem solved and the comps.xml can now be automatically regenerated after each YUM download to re-populate the ISO with the latest RPMs off the mirrors. Then it's just a matter of running "createrepo":
rm -Rf /home/build_cd.5210R/5210R/BaseOS/repodata
mkdir -p /home/build_cd.5210R/5210R/BaseOS/repodata
rm -Rf /home/build_cd.5210R/5210R/AppStream/repodata
mkdir -p /home/build_cd.5210R/5210R/AppStream/repodata
cp /home/build_cd.5210R/yum-env/compsbuilder/comps.xml /home/build_cd.5210R/5210R/BaseOS/repodata/comps.xml
createrepo -s sha1 -g /home/build_cd.5210R/5210R/BaseOS/repodata/comps.xml /home/build_cd.5210R/5210R/BaseOS
cp /home/build_cd.5210R/yum-env/compsbuilder/comps.xml /home/build_cd.5210R/5210R/BaseOS/repodata/comps.xml
cp /home/build_cd.5210R/yum-env/compsbuilder/comps.xml /home/build_cd.5210R/5210R/AppStream/repodata/comps.xml
createrepo -s sha1 -g /home/build_cd.5210R/5210R/AppStream/repodata/comps.xml /home/build_cd.5210R/5210R/AppStream
cp /home/build_cd.5210R/yum-env/compsbuilder/comps.xml /home/build_cd.5210R/5210R/AppStream/repodata/comps.xml
The final bit of work was fiddling with isolinux.cfg and grub.cfg to add the menu entries we want and to do a bit of rebranding, and then the ISO can be spit out:
LC_ALL=C /usr/bin/mkisofs -o /home/build_cd.5210R/BlueOnyx-$VER.iso -b isolinux/isolinux.bin -J -R -l -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -graft-points -V "CentOS-8-BaseOS-x86_64" /home/build_cd.5210R/5210R
LC_ALL=C isohybrid --uefi /home/build_cd.5210R/BlueOnyx-$VER.iso
LC_ALL=C /usr/bin/implantisomd5 /home/build_cd.5210R/BlueOnyx-$VER.iso
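If you want to verify the implanted checksum afterwards, the counterpart to implantisomd5 from the same isomd5sum package can do that:

LC_ALL=C /usr/bin/checkisomd5 /home/build_cd.5210R/BlueOnyx-$VER.iso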
The result then looked like this:
[Screenshot: the boot menu of the new BlueOnyx 5210R ISO]
The devil is in the details, right?
Our "yum groupinstall blueonyx" (onto a "naked" pre-installed CentOS 8) was already pretty clean and relatively error free. It also did all post-install related tasks either during YUM install or on first reboot of a freshly installed system. There was nothing an Anaconda kickstart script actually needed to do during the ISO's POST-install routine. My goal right there and then was: Provide an install option that gives the user the full freedom to select any option he wants in the Anaconda installer. And not just use an appended kickstart script to auto-fill all options and kick off the install fully automated with no user-input required.
Like we did in the past with all prior ISOs.
Therefore I set out to see if that worked and added the "Interactively Install BlueOnyx 5210R" option to the boot menu. This entry doesn't use "inst.ks" to supply a kickstart on the append line of isolinux.cfg, and that way the Anaconda installer menu is not populated with any preselections, nor are any custom post-install scripts run, unless they are provided and handled by the RPMs that get installed.
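For illustration, such an interactive isolinux.cfg entry amounts to little more than this. The label name is made up, the menu text is ours and the inst.stage2 label has to match the volume ID passed to mkisofs with -V further down:

label blueonyx-interactive
  menu label Interactively Install BlueOnyx 5210R
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=CentOS-8-BaseOS-x86_64 quiet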
The first install off that ISO failed (as expected). Anaconda is now a *lot* more finicky: if RPMs exit with the wrong status from their post-install scriptlets, Anaconda reports a fatal failure and aborts the ISO install. Luckily fewer than 10 of our 835 self-supplied RPMs had "unclean" exit states, and fixing and re-rolling them solved that issue one by one.
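The fix for those packages mostly boils down to making sure the scriptlets can't leave a non-zero exit status behind. In spec file terms that's something along these lines (a generic sketch with a placeholder service name, not one of our actual scriptlets):

%post
# Commands that may legitimately fail inside the installer chroot must not
# propagate their exit code, otherwise Anaconda aborts the whole install:
/usr/bin/systemctl enable example.service >/dev/null 2>&1 || :
exit 0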
A funny problem I encountered during the install is this: our ISO contains the RPM "blueonyx-yumconf", which brings aboard the config files that tie our 5210R into the BlueOnyx-5210R YUM repository. With all the dependency juggling during installation this RPM got installed after "dnf", but before "yum". The "yum" RPM is just a wrapper that provides a symlink from /usr/bin/yum to /usr/bin/dnf-3. That shouldn't be a problem then, right? Wrong! If the "yum" RPM is missing when Anaconda installs "blueonyx-yumconf", then DNF tries to update the DNF cache for the BlueOnyx repo, which won't work if the ISO install has no working network connection. How stupid is that? Well, the solution was easy: I re-rolled our "blueonyx-yumconf" and it now requires "yum" to be installed first. Problem solved.
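In the spec file of "blueonyx-yumconf" that boils down to a dependency that forces the install order, roughly like this (a sketch, not the verbatim spec):

# Make sure "yum" is already on the system before this package gets installed:
Requires(pre): yum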
A dozen install attempts later the ISO performed a clean BlueOnyx 5210R installation with no issues during the install itself. Yet testing that install and putting it through its paces revealed some more defects unrelated to the ISO building or the ISO installation itself. It took two more days to get those sorted, but then we finally had the flawless install we wanted.
At the end of it I added the other boot options shown above. These again use "inst.ks" to supply a kickstart on the append line of isolinux.cfg and provide the fully automated installs and the default partitioning (and optional RAID1 setup) our users are accustomed to from previous BlueOnyx versions.
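Those entries look like the interactive one above, just with inst.ks pointing at a kickstart that ships on the ISO. Label, menu text and kickstart file name here are placeholders:

label blueonyx-auto
  menu label Install BlueOnyx 5210R (automated)
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=CentOS-8-BaseOS-x86_64 inst.ks=hd:LABEL=CentOS-8-BaseOS-x86_64:/ks.cfg quiet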
Automated ISO builds
In between all of this I wiggled the new ISO building method into our old set of scripts and Makefiles that we've been using for ISO building since the CentOS 5 days. It goes back to the version for CentOS 4 that Brian Smith from NuOnce.net once used and which he contributed to us back then. Over the years this has seen many changes, such as the integration of Revisor or Pungi. Now it has been modified "back to the roots", but thanks to YUM/DNF and some Bash scripting the level of automation is again as high as it used to be with Revisor or Pungi:
[root@5210r build_cd.5210R]# make
+-------------------------------------+
BlueOnyx CD Generator
version 5210R-CentOS-8.0.1905-20191014
+-------------------------------------+
Options:
version - Modify Version
isoimport - Import stock CentOS 8 ISO
yumdownload - Use YUM to download latest RPMs
compsgen - Generate the "comps.xml" file
compsedit - Modify the "comps.xml" file
createrepo - Run "createrepo" on BaseOS/Packages
spec - Modify specfile for custom install script
installer - Make RPM for custom install script
readme - Modify readme on the CD
iso - Create a .ISO image
test - Publish ISO to Zebra VM 425 for testing
store - Publish ISO to mirror
A "make all" runs "make version" (updates the date-string of the ISO), "make yumdownload", "make compsgen" (creates the comps.xml), "make createrepo" and "make iso". That spits out an ISO image.
With "make test" the ISO is moved to an Aventurin{e} 6109R that uses an OpenVZ VM to test the ISO install. That transfer takes 20 seconds and after a "prlctl reset <UUID>" the VM is then restarted to boot off the ISO. The installation progress of that can be monitored and interacted with via "noVNC" in the browser.
Once the ISO is tested properly a simple "make store" publishes it to the toplevel BlueOnyx mirror for distribution.
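In essence "make test" and "make store" do little more than copy the ISO to the right place and trigger the reboot of the test VM, roughly along these lines (host names, paths and the transfer method are placeholders for illustration):

# "make test": push the ISO to the Aventurin{e} 6109R and reboot the test VM off it:
scp /home/build_cd.5210R/BlueOnyx-$VER.iso root@6109r.example.com:/var/lib/vz/iso/
ssh root@6109r.example.com "prlctl reset <UUID>"
# "make store": publish the tested ISO to the toplevel BlueOnyx mirror:
rsync -av /home/build_cd.5210R/BlueOnyx-$VER.iso root@mirror.example.com:/var/www/html/iso/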
I still need to figure out "pungi-koji", though. Because long term I'd like to have our custom branding in the Anaconda installer as well and I don't want to fiddle with the install images themselves - unless I can't avoid it.
If you find this article helpful or would like to share notes about what you figured out about using "pungi-koji" then please drop me an email at mstauber _at_ blueonyx.it.