So this is sort of a sequel to my previous adventures with backwards compatibility.
Ever since I started this exercise, I figured some day I would automate this stuff further. For years now I've been running these build scripts on my MacBook Pro to make the bundles and disk images I host on Mac Source Ports. It's been an evolution, and anyone watching from the outside would probably come away with the takeaway that these are the adventures of someone who doesn't know what in the heck they're doing but is slowly but surely discovering everything the veterans have known for years. Which is accurate.
When I started out, long before I had the idea for the site, I was using Xcode for everything, which made sense as that's what iOS development is done in. I don't know to what extent I didn't like Makefiles and scripted builds and to what extent I just didn't understand them, but once I figured out how to make ioquake3 build via its included scripts, and saw that it would conjure up a complete app bundle, the lightbulb went off that This Was The Way: if you did everything right, you could fire off a single command at a command prompt and it would handle everything else for you. Some of the variable names I use in scripts to this day are derived from the ioquake3 build scripts.
Once I started the site and started trying to figure out how to build source ports, I got into the pattern of making a GitHub fork of the project and adding a build script to the root of the repository. Sometimes this went hand in hand with having to make small modifications to the project to get it to build on the Mac (which still happens), but mostly it was just the way I started doing it. I would name the script such that it was clear who put it there and what it was for. After a few variants I settled on the site name plus what kind of build it was, so for example for a Universal 2 build (one containing both x86_64 and arm64 code) I would call it "macsourceports_universal2.sh".
There were flaws with this plan, not the least of which was keeping the GitHub fork up to date with upstream changes. Also, in cases where I didn't need to make any other modifications, I basically had a fork whose only difference was the build script. So I decided to change the practice: I would still have build scripts for the projects, but instead of keeping them in the projects themselves I would have a central project, parallel to all the others, and keep them in subdirectories there. I called this project MSPBuildSystem.
This afforded several advantages, mostly that I could more quickly update builds by just fetching the latest code and building it (provided there were no breaking changes). Also, since portions of the build scripts don't change much between projects (like the actual signing and notarizing), I could factor those out into their own files and just include them inline. It also meant having one central location for all the finished builds.
And long term, in the back of my mind, I always figured some day I would be in a build server sort of situation. As in, something that would take this to the next level: automation of the builds. So when ioquake3 makes a new commit, or when DevilutionX releases a new build tag, it would just figure it out and build it for me. The genesis of this whole project was crowdfunding an M1 Mac mini for the purpose of getting Apple Silicon builds working; however, that mini hasn't been doing a whole lot since I got an M1 Max MacBook Pro. It did seem perfect as a build server, though: it doesn't have tons of hard drive space or RAM, but it can run the builds just fine. I just needed to figure out the best way how.
Parallel to all of that was the binary compatibility issue. Very briefly, for those who didn't read the link above: Apple occasionally makes breaking changes to the functionality of dynamic libraries such that libraries built for later versions of macOS don't work on versions of macOS prior to the breaking change. Package managers like Homebrew always try to send you libraries built for the version of macOS you are on, which is ideal for performance but not from a compatibility standpoint. As far as I can tell there is no way to tell them to give you older versions of the libraries, and even if there were, their formula documentation indicates they won't provide versions that far back anyway, or at least not officially or reliably.
All of that is a way of saying: the only way to get versions of the libraries compatible with older versions of macOS is to build them yourself, which is reliable but formidable.
The way I chose to do this was first to make a macOS virtual machine in Parallels. My long term goal for the server was to use that standalone M1 Mac mini, but that required proximity to the device; a VM was something I could take anywhere and migrate later. It was also a great way to test and make sure I could build everything without Homebrew factoring into the equation. I didn't want to run into the possibility that something only worked because it was using a Homebrew build of a library instead of what I wanted it to use. Also, Homebrew – while a great tool for what it's good for – tends to get possessive over your /usr/local/ folder. If it finds something in there it didn't put there itself, it complains at you. When you're in a situation where you want Homebrew to manage everything this is a good call, but mixing and matching isn't really its thing.
Also, I decided I was going to see how long I could go before needing to install Rosetta 2. This way I could be sure that nothing I built or did required Intel and only worked because of Rosetta 2. I knew I might not be able to cling to this forever but I figured it was worth a shot. The M1 mini already has Rosetta 2 on it, so it wasn't going to be a forever thing anyway (it's apparently possible to remove Rosetta 2, but it's nontrivial and unsupported).
The first one I tried was easy enough: SDL2. It used CMake and I was able to get it to build in one step with both architectures at the same time. And then the install step did what I expected, which was to put the library and its related files, such as the include headers, in the right place. As I understand it Homebrew, having come onto the scene in the twilight era of Universal 1, never really got the religion of universal binaries and was blissfully mostly Intel-only for over a decade, putting the Intel files in /usr/local/. When Apple Silicon came onto the scene they either had to push Universal 2 versions of everything or put the Apple Silicon versions somewhere else, and they opted for the latter, putting them in /opt/homebrew/. I figure at least part of the logic is that holding multiple architectures in a library roughly doubles the file size, so you might as well save the space if someone doesn't want or need both. Also, long term, the older architecture might get dropped. But in any event, by default SDL2 and pretty much all the other libraries I tried out just toss everything in /usr/local/ like UNIX basically intended.
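For the curious, the one-step build looks roughly like this from inside an SDL2 source checkout; the exact flags in my real script differ a bit, so treat this as a sketch:

```
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64" \
    -DCMAKE_OSX_DEPLOYMENT_TARGET=10.7
cmake --build build
sudo cmake --install build   # headers and the dylib land under /usr/local by default
```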
So that was easy enough, though one thing I noticed is that the id of the library used @rpath instead of an absolute path. This is something I'd been avoiding, and something Homebrew mostly didn't do, but if it's best practice now I might as well adopt it while I'm changing everything else.
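(If you want to see it for yourself, otool -D prints a dylib's install name, i.e. its id; the filename here is just SDL2's usual one.)

```
otool -D /usr/local/lib/libSDL2-2.0.0.dylib
# /usr/local/lib/libSDL2-2.0.0.dylib:
# @rpath/libSDL2-2.0.0.dylib
```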
So let's say you're using SDL2. The file is installed at /usr/local/lib/libSDL2.dylib (that's not quite right but go with it). It knows it's there, so the id of the dylib is also /usr/local/lib/libSDL2.dylib. You link it into an executable, so the executable has a reference to /usr/local/lib/libSDL2.dylib because that's the id and also where it can find the file. Everything is great… unless you copy that executable to a system that doesn't also have libSDL2.dylib at that exact location. You can include libSDL2.dylib in the bundle, but unless you can tell the executable where it is, it won't be able to find it, and you don't know for sure where the executable will end up.
So one thing you can do is change the id of the library to something that will tell the executable where to look. If you put it in the same folder as the executable, you can give it the id "@executable_path/libSDL2.dylib", which basically says: wherever the executable is, that's where the library can be found. If you want to put it in, say, a Frameworks folder parallel to the MacOS folder in the app bundle, you could make it "@executable_path/../Frameworks/libSDL2.dylib".
But there are scenarios where this can get tricky fast, especially if the libraries reference other libraries. The solution to that is @rpath. Basically the id of the library becomes "@rpath/libSDL2.dylib", and instead of telling the libraries and executables a path per library, you just tell the executable what @rpath means and it handles it from there. I had been using dylibbundler to bundle dylibs, but it chokes on the concept of @rpath because, in theory, @rpath makes dylibbundler unnecessary. So I had to conjure up a script that traverses the various libraries and copies them over manually. It basically works.
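A very trimmed-down sketch of the idea, with a made-up app bundle name (the real script also has to walk each copied library's own dependencies):

```
BIN="MyGame.app/Contents/MacOS/mygame"
FRAMEWORKS="MyGame.app/Contents/Frameworks"
mkdir -p "$FRAMEWORKS"

# Give (or confirm) the library an @rpath-style id:
install_name_tool -id "@rpath/libSDL2-2.0.0.dylib" /usr/local/lib/libSDL2-2.0.0.dylib

# Tell the executable where @rpath points inside the bundle:
install_name_tool -add_rpath "@executable_path/../Frameworks" "$BIN"

# Copy every @rpath library the executable references out of /usr/local/lib:
otool -L "$BIN" | awk '/@rpath\// {print $1}' | while read -r ref; do
    cp "/usr/local/lib/$(basename "$ref")" "$FRAMEWORKS/"
done
```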
So now, armed with a version of SDL2 compiled as Universal 2 and targeting as far back as Mac OS X 10.7 (which, that's the other thing, SDL2 officially only supports back to 10.7), I looked and found I only have a couple of source ports that use SDL2 and nothing else. One of them was bstone, the source port for Blake Stone, so I migrated it to my new build server process. I wanted the scripts to still work in both places, so I structured them such that I can pass in a "buildserver" flag and they know to use the different values and locations.
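The flag handling is nothing fancy; it's something in the spirit of this, with variable names made up for illustration:

```
# somewhere near the top of the project's build script
BUILD_LOCATION="local"
for arg in "$@"; do
    if [ "$arg" = "buildserver" ]; then
        BUILD_LOCATION="buildserver"
    fi
done

if [ "$BUILD_LOCATION" = "buildserver" ]; then
    # self-built Universal 2 libraries, all in one place
    LIB_PREFIX="/usr/local"
else
    # Homebrew's Apple Silicon prefix on my MacBook Pro
    LIB_PREFIX="/opt/homebrew"
fi
```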
Once I had it going I decided I wanted to test it on real hardware. I have an old MacBook Pro from around 2011 or so; I nuked it and split its 750GB hard drive (not SSD) into three 250GB partitions. On the first partition I put whatever Internet Recovery would put on there – EveryMac.com says this thing shipped with 10.7 but it installed 10.8, which, whatever, that's old/good enough. After all that was said and done I got the files onto it, which was its own fun – I couldn't get it to see the newer Macs and vice versa, so I had to finagle a USB stick with a FAT32 file system on it since 10.8 is too old to read APFS. And I had to use an adapter since my M1 Max MacBook Pro doesn't have USB-A ports.
Anyway it worked. I had a Mac from 2011 running a build from 2024.
So then it was off to do the other libraries. I kept a spreadsheet of which libraries were used by which projects and prioritized the ones used most often (SDL2 being the most used of all). Naturally, this is where things got more difficult/interesting.
Some libraries used CMake, some used Make, and a few used something like Ninja or SCons. The percentages probably match what I see with the source ports themselves, but I had it down to a science eventually, with CMake- and Make-specific build scripts and custom guys for the other ones. Boost, a 25-year-old set of C++ libraries, has its own build system that you first have to build, which then builds the Boost libraries. That was annoying, but they kinda get a pass since they predate so much of this stuff. Some libraries could build multiple architectures in one go; others needed to be built separately and lipo'd together later. Many of them had dependencies which had to be built first, and some of those had dependencies of their own. Homebrew's website was great for this because it helped map things out for me. Sometimes the dependencies were optional, but I tended to build everything anyway since I don't know which parts of which libraries are needed by which ports, though I have run into the occasional port that gets mad when you put too many dependencies in. The list of things I had to build to get ffmpeg working was something like seventy entries, and a couple of them I just gave up on (i.e., the library that puts subtitles in rendered-out videos). A few dependencies were needed at build time but never linked in later, so in those cases I could get away with building just arm64 and not Universal 2. And a few things, like Python or CMake itself, were just easier to download prebuilt off the official website and use as-is; if it's a "development dependency" you only need it while building the app, not at runtime or on the target system, so an official build is fine.
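For the ones that had to be built per-architecture, the glue step is lipo. Roughly, with illustrative paths:

```
# after building the library once per architecture, however that particular
# build system wants it done:
lipo -create \
    build-x86_64/libexample.dylib \
    build-arm64/libexample.dylib \
    -output /usr/local/lib/libexample.dylib

# sanity check -- should report both x86_64 and arm64:
lipo -info /usr/local/lib/libexample.dylib
```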
I got into a good routine by the end, but part of what made it take so long is the simple fact that this whole endeavor is a spare-time thing for me, so as real-life things intervened my capacity for working on it diminished.
Once that process got far enough along (namely, once a majority of the libraries would build) I started to modify the build scripts for the source ports so that they would use the new process. I had been doing all of this on a branch of the main MSPBuildSystem repo so that nothing would collide. It was also a great excuse to revamp other parts of the build process, since a number of the early projects were using approaches that worked but weren't the ideal way of doing things. Because this new process uses Universal 2 dylibs, I was often able to get the Make/CMake build processes to do a Universal 2 build in one go instead of having to make two different builds and lipo them together. This didn't always work – if the project has architecture-specific code (like ioquake3, which still has Intel MMX instructions in it from the original code when building for Intel) then you still have to build twice. But on the original system I usually had to build that way regardless, because Homebrew keeps the libraries in two different per-architecture locations.
I blew through all the ports which could easily be handled, in the process moving as many of them off of custom forks as I could (of which there were a surprising number) and then did another pass for the ones that gave me trouble in the first pass. As of this writing I’m on probably the third or so pass and have about 60% of them moved over to the new way of doing things.
There's clearly more to the process than just "set the deployment target to an old version and it'll run", because I'm having very hit or miss results so far. Ports like yquake2 (for Quake II) and rottexpr (for Rise of the Triad) run fine on Mac OS X 10.8, but dhewm3 (for DOOM 3) doesn't. It does, however, run on macOS Mojave 10.14, which is significant because that's the last version before the most recent compatibility break, and a lot of folks are stuck on it so they don't have to give up their 32-bit games. So that's at least progress. Later I can dig into the reasons why it's hit or miss and maybe address that too.
And so I needed to start on the other major prong of this task – the actual build automation. In a way I started this long ago with some research I had done and only recently got serious about it.
Basically I had gone down this path: I started making shell scripts to build these projects, then made them more modular and elaborate so that common script elements could live in shared locations, and the next logical step would be to add a layer on top of that which automates the execution of the build scripts, automates the discovery of new versions, and automates notifying me and/or the site via email or whatever.
That was my first thought. My second thought was: surely someone has done that by now. Right?
The first thing I ran into when looking into build systems was that there were surprisingly few free options. I'm not completely opposed to paying for something, but both because this operation has been lean so far and because everything else about it has been free and open (down to the build scripts being in a public GitHub repo), it seemed appropriate to use something free, and if possible open, to automate it.
The two options I found that met the free and open criteria were Jenkins and Buildbot. This is when I ran into the second thing in the process.
Something I've come across in doing this whole thing is that, while I'm sure I'm not unique or alone in doing this, it's at least somewhat unusual to take on the building of a bunch of different projects you didn't write and don't maintain. Homebrew does this; they make builds of the latest code of a bunch of different libraries. To some extent package system maintainers do this too – when you're running the package manager for the OS that powers the Raspberry Pi, you make a bunch of different builds of things for the consumers of your SoC boards to run.
One of the differences, with all due respect to source port projects, is that most of those other situations deal with uniformity and some baseline assumption that things will build. You download the code for libpng, you build it, its files have specific places in the UNIX file system where they go, you put them there, done. And of course libpng builds; it's one of the low-level building blocks with no dependencies. But some of these source ports are all over the map. They don't always support being built on the Mac, they don't always make their own app bundles, they don't always make assumptions that are Mac-friendly, and they don't always use a build system that's easy to automate. One project, The Ur-Quan Masters (a source port of Star Control 2), has its own build system consisting of interactive shell scripts (and, in its defense, it's so old it predates a number of the modern solutions, or at least their widespread use).
That's one of the benefits of having a series of shell scripts handle this – they paper over the differences. Anything that can be done via the command line can be handled in the script, right down to oddly specific one-off maneuvers or the notarization process. So what I needed, I figured, was a system that could automate the firing off of these scripts, due in no small part to not wanting to lose the effort I'd already invested.
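The signing and notarizing portion, for instance, boils down to the standard Apple tooling; something like this, where the bundle name, identity and keychain profile are placeholders:

```
codesign --force --deep --options runtime \
    --sign "Developer ID Application: Your Name (TEAMID)" MyGame.app

ditto -c -k --keepParent MyGame.app MyGame.zip
xcrun notarytool submit MyGame.zip --keychain-profile "msp-notary" --wait
xcrun stapler staple MyGame.app
```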
I started to look at Jenkins, and while it has a slick web interface, I quickly got the impression that it was designed for teams willing to build their process around Jenkins. I remembered all the "how do I do X in Jenkins" posts I've seen on Stack Overflow over the years, and it seemed like Jenkins is particularly good at continuous integration and unit test execution for teams, which is not necessarily what I need. The CI part, sure, that's essentially a term for what this whole deal is about, but I don't need unit tests to be run because I don't have any, and also these aren't my projects – their teams might want that, but that's not me. Besides, most of these source ports either don't have unit tests or, if they do, they're not part of the repo.
So then I looked into Buildbot and it's more or less the opposite – whereas Jenkins (as I understood it) seemed to be a primarily web-driven process, Buildbot is entirely about scripting, which interested me, but it was really not intuitive to use. Sort of the difference between a product made by techies and used in hardcore projects versus a product designed for a wider audience, I guess. Plus it also seemed to have the issue of being aimed at projects different from mine, where the ownership concerns are different.
I briefly entertained making my own project. I thought about how projects like the Homebridge UI run scripts and show you a Terminal-style interface with the results, and I thought, maybe that's not that hard to do. But the more I looked into how the Homebridge UI did it, the more excruciating it looked. Web development always feels to me like it makes certain difficult tasks easy and certain easy tasks brutally difficult, because fundamentally you're trying to make websites, web pages and web browsers do things they were not originally designed to do.
I then briefly entertained making a non-web version of what I wanted to do. A native, "Mac-assed Mac app". I could better explore app design in Cocoa (something I never really got the religion of on the Mac) and the Mac equivalent of Windows services. I even went down the road of prototyping it. But I just couldn't help thinking I was reinventing a wheel for no reason.
And then I stumbled across the ScummVM Buildbot page, which inspired me to give Buildbot another shot. It was tricky to get going, especially since I wasn't too experienced with Python as an environment, but I eventually got it working. Buildbot does indeed still have some of the issues I was concerned about, namely that it really wants you to use its own tasks and not just fire off an existing script, but it works. It's basically what I had in mind when I briefly explored doing a web-based project, and I'm sort of thinking now that Jenkins could have been coerced into something similar, but for now I'm just going to stick with Buildbot.
I originally wanted to do everything in completed phases. Complete all the libraries, then complete all the transitions of all the source ports to the new process, then migrate everything to the Mac mini as a dedicated build server. But at various points I started to come to the conclusion that this was unnecessary, not if I wanted to still update the site. I’m reminded of how Valve apparently spent four years making Source 2 and then four years making Half-Life: Alyx, which may have been the best course of action from an engineering perspective (and it’s not like Valve is ever going to go out of business) but it’s frustrating that gamers had to go without anything new for eight years. I wasn’t going to take eight years to do this but I didn’t want to hold up everything else I was doing.
Moving to the Mac mini as a physical build server was the last part of this concept. Since I never used the mini as my daily driver, it had a lot less stuff on it than a typical machine. In the Windows world I would likely have reformatted and started over, but given what was on it, it seemed about as easy to just uninstall everything and delete as much as I could. Homebrew has mechanisms to uninstall everything and then itself; I used them to uninstall both instances and then verified the /opt/homebrew/ folder was deleted and the /usr/local/ folder was empty. After deleting all unnecessary apps, purging the ~/Library/Application Support/ folders (since I'm no longer going to run the games on this Mac, just build them), and following the notes I had taken during the library process, I started rebuilding all the libraries. I could probably have just copied the /usr/local/ folder from my virtual machine, but I wanted to make sure the process was reproducible, if only to confirm whether I actually understood what I was doing or had just lucked into it.
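For reference, the Homebrew teardown is roughly this (the uninstall script is the one from Homebrew's own documentation; double-check there for the current incantation):

```
brew uninstall --force --ignore-dependencies $(brew list --formula)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
```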
The library process was fairly straightforward; the only issue was build order – namely making sure a library's dependencies were built before the library itself. Homebrew has a command that shows the tree of dependencies, and you can also traverse it on their website, and this mostly worked as a guide, but it wasn't flawless. If I had to do this on a regular basis I'd probably be more meticulous in planning it, but as it stands I just sort of forced my way through the process.
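The command in question, for what it's worth, is brew deps; pointing it at a big formula like ffmpeg gives you the whole tree:

```
# prints the full dependency tree, which doubles as a rough build order
brew deps --tree ffmpeg
```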
Once I got Buildbot going, everything was automated in theory, but I kept having to manually check the machine, so I needed to get email notifications set up. At first I couldn't get them working, and after reading some posts about Buildbot not playing nicely with Gmail I was afraid that the (understandably) strict mechanisms in place to prevent spam were the problem, but once I learned about app-specific passwords and got the right SSL port, I started getting emails.
Overall there are three kinds of ports with regard to update frequency. Some projects update very rarely, either because they're mature to the point of being complete or because they've been abandoned; a few of these point to a GitHub repo I've made because the original source is just a zip file on SourceForge or something. The second type has atomic releases, usually in the form of version numbers. In most cases those version numbers take the form of a git tag, so for those projects I changed the git poller in Buildbot to poll for new tags and pass them into the build server scripts. The third type doesn't keep version numbers or atomic releases; it's just whatever the latest code is. Sometimes there's a reason, like how ioquake3 always has to be 1.36 in order to maintain multiplayer compatibility, but sometimes it's just down to how rigid the project wants to be. For those projects I just have it do a build every time there's a commit. For mature projects with infrequent changes this is no big deal, but for active and often newer projects it can be frequent. I recently added the OpenMoHAA project, which runs Medal of Honor: Allied Assault, and judging from the emails it's building fairly frequently. You can kinda tell when the developers have the time to work on it.
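Conceptually, the tag case boils down to something like this; the actual Buildbot configuration is Python, and the repo and script arguments here are just for illustration:

```
# find the newest tag on the remote without cloning anything
LATEST_TAG=$(git ls-remote --tags --sort=-v:refname \
    https://github.com/diasurgical/devilutionX.git \
    | grep -v '\^{}' | head -n 1 | sed 's|.*refs/tags/||')

# only kick off a build if it's a tag we haven't built yet
if [ "$LATEST_TAG" != "$(cat last_built_tag 2>/dev/null)" ]; then
    ./devilutionx/macsourceports_universal2.sh buildserver "$LATEST_TAG"
    echo "$LATEST_TAG" > last_built_tag
fi
```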
The whole thing isn't perfect. I've had more than a few automatic builds go off based on what are actually old commits, so I'm not sure what the logic there is, and as of this writing I haven't seen a new versioned release from any of the projects where I'm monitoring tags, so I have yet to see if I have that part done correctly. But overall the project is going well, and I will be able to use the work I've done so far to update the site more frequently, which was the goal. Perhaps long term I can have the server update the site for me, but we'll see. I do notice that the Python process running the main Buildbot process consumes more memory than the 8GB the mini physically has and never seems to go down, and sometimes refreshing the web interface takes forever, so perhaps there are some scaling issues I need to address, but for now it's working. Even if I do wind up having to migrate to some other software, probably 80% or more of the work I've done so far is portable.
Long term I'd like to see how far I can take this concept. Right now I have a process which builds Universal 2 apps for source ports, but in the future I could see about expanding it to go further back, perhaps making Universal 1 builds that run on PowerPC or 32-bit Intel Macs. I'm sure there's more to it than that – some libraries in their modern form can't run on machines that old, some ports might not work on machines that old, and at some point I will need to investigate build tools other than the latest blessed Xcode tools (plus I'll need to actually get some old Macs) – but that's a little ways away. As it stands now I'm just trying to migrate the rest of the existing projects to the new process.
So that’s the Mac Source Ports Build Server. A long, strange journey to what is essentially an unassuming baseline M1 Mac mini sitting inconspicuously on the corner of the desk here at Mac Source Ports HQ, quietly plugging away and keeping my builds up to date and letting me know when they’re done. Thank you for coming to my TED talk.