Circling back to the gap in the Quake series, I have now ported Quake II to iOS and tvOS for Apple TV. Here’s the repo:
Quake II for iOS and tvOS for Apple TV
Deviating slightly from my usual practice, I’m writing this accompanying article before I’m done and before I get everything working. So far, this project has been the most difficult port yet, simply because as far as I can tell no one has ever done it before (and if they have, I didn’t find it). My Wolfenstein and DOOM work was based on the original, official id Software iOS ports from 2009; my Quake port was based on an experimental Google Cardboard port; my Quake III: Arena port was based on Beben III, a culmination of efforts to port ioquake3 to iOS; and my Return to Castle Wolfenstein port was based on fusing the iortcw port of the original RTCW code with the Quake III: Arena port.
Quake II is sort of the odd man out in the original Quake trilogy. The original Quake launched as a software-rendering 3D engine developed on NEXTSTEP platforms for DOS, with a single-player focus and an added multiplayer experience that proved to have real legs. Quake III: Arena launched as a hardware-accelerated 3D engine developed on Windows for multiple computer platforms, with a multiplayer focus and a tacked-on single-player mode featuring bot match challenges. Between those two, Quake II launched with a focus split between single player and multiplayer, shipping both a software and a hardware renderer, and developed on Windows for Windows, full stop. It did eventually launch for the Mac, but that was years down the line.
It represented a number of firsts for id Software. It was their first game to launch on CD-ROM, it was their first game to feature a cutscene (unless you count the text on the screen from previous games), and it was their first game to have a hardware renderer out of the box. And while it fit the pattern of shareware game/retail sequel, unlike the retail sequels to Wolfenstein 3-D and DOOM, it didn’t serve as a sequel to the original Quake, plot-wise (granted, Quake barely had a plot to begin with).
Both Quake and Quake II are collectively considered “id Tech 2” since in many ways Quake II‘s engine was a mild upgrade over Quake‘s. Quake was delivered in June 1996 and Quake II was delivered in December 1997, a mere year and a half later. There was apparently a plan at one point to not call it Quake II but rather give it a whole new name, but for various reasons they fell back on the registered and recognizable name. And while the timing of the releases is indicative of a second game using a mildly evolved version of the original’s engine, a passage in David Kushner’s Masters of DOOM book describes how the development team at Ion Storm (which John Romero co-founded after being ousted from id), after deciding to pause development on their game Daikatana in favor of upgrading its engine from Quake to Quake II, got the first source code drop from id for the then-unfinished Quake II and realized to their horror that the code was not merely a graphics upgrade but instead a more significant reworking of the codebase, and that their efforts to upgrade their existing code were not going to be easy or straightforward.
In that vein, id Software had a number of developers who had licensed the code for Quake and wanted the technological improvements present in Quake II, especially since their games were slated to be released after Quake II shipped. So id worked out a program to allow existing Quake licensees to upgrade to Quake II for a fee smaller than licensing the newer engine from scratch. This is how games like Half-Life started out on Quake’s engine and incorporated portions of Quake II’s.
And so it’s interesting to me that while the original Quake gets a fair amount of retro nostalgia and ports to keep it up and running in the modern day, and Quake III: Arena got the near-singular but highly focused ioquake3 project, Quake II sits in a weird middle niche of people who preferred it to what came either before or after. When I made a post on Reddit pointing out my Quake and Quake III: Arena ports, I had a few people asking me if I was ever going to do Quake II.
Whereas Quake had a very loose plot that no one cared about nearly as much as the Lovecraftian aesthetic, and Quake III: Arena had no real plot at all, favoring a purely multiplayer focus, Quake II actually came up with a plot and a universe and lore that id would try repeatedly to use in the future. In the game, an alien race known as the Strogg is poised to attack Earth, so your character is participating in a preliminary strike on their home planet. However, the accident that causes your craft to crash-land also saves your life: the rest of your comrades are captured or killed on arrival, so it falls to you to fight your way to the head bad guy and save the day.
It’s not a particularly deep or original plot (the Strogg are similar to Star Trek’s Borg, a similarity reinforced by later games), and the initial plot has to explain why you’re a single character fighting alone, but it’s at least an attempt at a universe.
Quake II would go on to have two expansion packs, Quake II Mission Pack: The Reckoning and Quake II Mission Pack: Ground Zero. Both basically followed the same formula. For some reason I never bought them back in the day and had never played them until this project. Whereas I had dutifully bought the expansions for Quake, I skipped the ones for Quake II, maybe because my aging Pentium processor with no hardware accelerator card didn’t do so hot with the game in the first place.
In 2005, id returned to the Strogg plot with Quake 4 (the switch from Roman to Arabic numerals is unexplained but apparently deliberate), which used id Tech 4, the same engine as DOOM 3. It was developed by Raven Software and had a similar plot to Quake II, but featured a plot twist absolutely spoiled by Activision in the press run-up: halfway through the game you are captured by the Strogg and turned into one of them (more or less completing the Borg analogy). In 2007, id released Enemy Territory: Quake Wars, a gameplay sequel to Wolfenstein: Enemy Territory and a plot sequel to Quake 4. It was developed by Splash Damage and also used id Tech 4. Quake 4’s reception was lukewarm, and Enemy Territory: Quake Wars had the unfortunate timing of being delayed to the point of launching in close proximity to Team Fortress 2, which is sort of like being that movie that hit theaters on the same day as Star Wars. The Strogg showed up again in Quake Champions, but other than that, the plot line was abandoned by id.
There’s a small handful of Quake II source ports floating about, and the most popular one seems to be Yamagi Quake II, so I went with that one. Since, like I said, I couldn’t find any existing iOS ports, the first thing I did was get it up and running on the Mac from the included makefile. Then, just to make things a little easier, I got it up and running within an Xcode project. I had to tweak a couple of things, but overall it was pretty straightforward once I could see what the makefile was doing.
I’ve gone over it before, but the 1-2 punch of why I didn’t do Quake II before Quake III: Arena was partly because I stumbled across the Beben III port, but also because of how Quake II handles rules code. Quake uses the inadvertently cross-platform PROGS.DAT compiled from QuakeC; Quake III: Arena can use a similar concept with .qvm files or can use the platform-specific dynamic library equivalent; but Quake II, in the middle, relies upon the platform-specific dynamic library, which is a .dll file on Windows, a .dylib on macOS, and an .so file on Linux.
The shipping version of Quake II had three libraries: the “game” library for game rules code, a “ref_soft” library for the software renderer, and a “ref_gl” library for the hardware renderer. Yamagi Quake II has four, because the original OpenGL 1.4 renderer was renamed “ref_gl1” and a newer, OpenGL 3.2-based renderer named “ref_gl3” was added. At any given point in time Quake II is only using one renderer (the “ref” part is short for “refresh,” which for whatever reason is the term id used for a renderer back then). In the PC and Mac versions of the game you can change the renderer in-game; I believe it involves a restart of the game executable, but it’s simple enough to switch. In my Xcode project I set up the main Quake II executable as one target and then four additional .dylib targets for the game code and the three renderers. Everything worked fine.
The problem is that the most logical thing to go with for iOS is the .dylib file, except I can’t find anything concrete that says whether or not App Store apps are allowed to use them. They weren’t supported on iOS at all prior to iOS 8, and Xcode doesn’t really want you to use them on iOS, so it doesn’t offer them as an option for iOS targets, although you can create a Mac dylib and then switch the architecture and settings over to arm64.
So the first thing I tried was just that: using .dylibs. I knew that if someone wanted to use my work in the future to get their Quake II-based game on the App Store they would run into problems, but I figured I would cross that bridge when I got there.
Didn’t matter though because it didn’t work.
Although I had assumed at least the Windows version of Quake II used the operating system’s native mechanisms to load .dll files, it looks like the engine handles the loading logic itself. This probably makes the code a bit more portable, since you basically describe how your platform operates (i.e., whether it uses .dll or .dylib or .so) and the code takes it from there. However, the code is insistent that the .dylibs live in one very specific subdirectory, and for the life of me I couldn’t get Xcode to simultaneously sign the .dylib files, put them in the right places, and not break the main build.
So then I tried static libraries, which iOS puts in .a files. The problem I ran into with these, however, was that once code in the main game executable tried to call a function with a common name in the static libraries, the link would get confused about which one to call. There are two functions common to each library, both dealing with outputting messages to the screen or to logs, and the code would call the one for, say, the renderer and die because part of the renderer hadn’t been initialized yet. One way around this might be to rename those two functions in each of the libraries so there wouldn’t be any confusion anymore, but that would also involve changing all the references to them in the code, and that just seemed like a huge hassle for something I wasn’t sure would fix the problem.
So then I decided I’d try to just compile everything into one big glob. This meant short-circuiting the code in the main game that would load the code from a file, and instead pointing the function pointers at the existing methods that were now all in the same executable. I had to route around the two methods duplicated in each library, but that wasn’t a huge deal since everything could just use the main ones now.
And it worked. Or at least it compiled. Which was a plus.
Parallel to all of this was the issue of SDL.
SDL stands for Simple DirectMedia Layer, and it was created back in the late 1990s as a way to abstract away the hardware layer for games and other applications. The idea is that you write your code to use SDL, and then SDL communicates with the hardware. It’s not a completely automatic process; you’re still left writing things like OpenGL code rather than some SDL wrapper around everything, but in theory it saves you a lot of effort.
It’s also something I’ve completely avoided until now. The Wolfenstein 3-D and DOOM ports from id never used it, the Quake port I derived from didn’t use it at all, and although ioquake3 uses SDL, the Beben III code I derived from not only didn’t use it but went out of its way to avoid it. Over time I’ve been trying to whittle the differences between what I’m doing and what ioquake3 is doing down to a minimum, and most of the remaining differences are due to SDL avoidance. Like I’ve mentioned before, some of the files in there date back to 2008, pretty much the dawn of the iPhone SDK. My suspicion was that either SDL didn’t exist for the iPhone when some of this work was done, or it wasn’t good or mature enough for the job. I wrote to the Beben III author to ask, and he told me it’s been so long he doesn’t remember, but he strongly suspects that was the case.
So to port Yamagi Quake II to iOS would either require a similar avoidance of SDL, or I’d need to figure out how to use SDL in iOS. I chose the latter, although there were a few times I was tempted to abandon ship and avoid it like before. In theory, if I can get good at working with SDL then porting future games might be easier.
Figuring out things about SDL and iOS was not easy since there are few samples online and it’s difficult to find information that isn’t assuming you’re writing for a computer platform. The source code you can get from the SDL website does have a directory in there for Xcode projects running on iOS but I had issues getting them up and running. In hindsight, some of this may have been confusion over the multiple different directions I was trying to head with this.
One of the things that wasn’t intuitive at first is that SDL basically wants to take over the main loop of your game. C programs have this concept of the main() function: whatever is in your main() function is the first thing that gets executed. Every programming language or platform has this concept, and while most of them call it something else, at its heart it’s an abstracted version of this main() function idea. For iOS, there’s a class you set up as an AppDelegate, and the methods that get called on it determine how your app loads, functions, etc.
However, I couldn’t seem to find much information about how to incorporate SDL into an app that uses Cocoa Touch menus, i.e. “normal” iPhone screens prior to the loading of the game engine. All the demos were designed to have a main() function; none of them had an AppDelegate at all. In my Quake and Quake III: Arena ports, I had used a GLKViewController and set it up such that on every call to the update delegate method, I would poll the inputs (like the controller) and then run the method from the engine that handled the frame. In the actual game engine code, the main() function was routed around with preprocessor macros and replaced with a function called Sys_Startup, which is called from the Swift code when we’re ready to launch the engine part.
I didn’t see anything like this with SDL and so just to keep things moving, I decided to just do it the SDL way and figure out if my way was possible later. So using the template for an SDL iOS app, I just created one for Quake II and went from there.
Something I noticed with Beben III, and kept going for the Quake III: Arena port, was that while the ioquake3 project came with two different renderer options, only one of them was used in Beben III, and it was the older of the two. I’m not completely sure why they picked the one they did, but I had thought perhaps it was the one most compatible with the OpenGL ES implementation on iOS. So I started going down that road, using the ref_gl1 renderer from Yamagi Quake II.
After some work compiling the code and providing implementations of a few things from OpenGL that OpenGL ES lacked, I got the game to build and run, and amazingly enough it seemed to be running. I say “seemed to” because while I could hear things and see log messages indicating it was loading and running the game, the screen was black. By default, SDL seems to interpret taps as the ENTER key or a mouse click, or both, so I was actually able to tap the screen and hear the menus of Quake II and start a game, followed by the opening video cutscene, which I couldn’t see, and then the familiar opening of the first level, which I could hear but not see. It’s such a bizarre feeling to know that you’re clearly doing something right and clearly doing something wrong. It was also running with the iOS keyboard covering half the screen, because the engine assumes it’s running on a PC and queries for keyboard input, which on iOS means the keyboard comes up on screen, and we don’t want that. It was easy enough to go in and short-circuit that.
I’ve experienced this before with the previous Quake ports, and usually the issue was that something somewhere wasn’t configured right with regards to the GL context. But I couldn’t seem to find that issue with this code, so I searched high and low for information about how to launch an OpenGL ES iOS app with SDL and, again, I couldn’t seem to find anything about it. And then I stumbled across a demo app called “fireworks” which I was able to shimmy into the demo app I had been working with, and it worked: I had OpenGL ES graphics on an iPhone screen with SDL.
And then I noticed this “fireworks” demo was in the demo project from SDL, too. This whole project was really a comedy of errors.
So anyway I looked at the demo code and what all it was doing with regards to setting things up prior to launching the game and I applied those concepts to my Quake II port and suddenly I could see some things in-game. And they were fucked up and wrong. There were no textures on the walls and some of the solid surfaces like boxes and pillars were translucent. And the video cutscenes and intro were just a white screen instead of a black screen. But still, it was progress.
Around this time, on a lark, I fired up another target in the project to use the software renderer instead. I didn’t expect much, but amazingly it worked. Everything about it worked fine. Granted, it looked like it was running on a Sony PlayStation in 1996, both because software renderers haven’t aged well and because I don’t think I had the trick to the resolution down yet, but I guess it’s a statement about how well SDL works that the one target that didn’t need to worry about OpenGL vs OpenGL ES worked basically perfectly.
So I was back to struggling with the GL1 renderer. I tweaked some values here and there and made some improvements, but still it wasn’t right. As I’ve said many times, I know next to nothing about OpenGL or OpenGL ES, and in the wake of Apple deprecating them my motivation to learn is pretty thin, but in the course of minimizing the differences in the Quake III: Arena port I’ve vicariously picked up some knowledge and tricks about what’s needed to get things working, like which color formats are required on iOS and so forth. For extra frustration points, things like the “LOADING” graphic worked fine, and the enemies had textures on them, but none of the walls did. You could actually fire up an MFi game controller and play through it, like some sort of modern-day minimalist indie shooter like Superhot. (That was the other thing: thanks to SDL, the MFi controller almost completely worked out of the box, after I mapped a few things differently.)
But still I struggled with it for weeks in my spare time. And then one day on the train home I had an epiphany: I had looked all over the Internet to see if anyone had ported Quake II to iOS, but I hadn’t looked to see if anyone had ported yquake2 in particular to iOS. Or, failing that, if anyone had ported it to another OpenGL ES platform, like Android or the Raspberry Pi.
I still couldn’t find anyone who had ported yquake2 to iOS, and I found a port to the Raspberry Pi that hadn’t been touched since 2012 (yquake2 has had a lot of work since then; it was using SDL 1.2 instead of 2.0 back then, for one thing), but then I found someone named Emile Belanger who had updated an Android port of it in December of 2018, just a month prior to when I did this search.
So I fired up Beyond Compare and diffed the point where he had forked yquake2 against the current version of the code and studied the differences. About half of the changes he made were either the same changes I had made or had the same basic effect in a different way. The other half were things I hadn’t seen done before.
I sat there and in the course of about an hour I went through his changes to the gl1 renderer and applied them to my iOS port, and when I booted up the game, the opening video cutscene was still just a white screen, but the game engine actually rendered correctly. I was floored. Many of the changes I made seemed to have no real obvious effect but clearly they were having an effect seeing as how I was now able to actually run the game. In the simulator it still had that “crazy lines all over the place” thing happening like Quake III: Arena did, but on an actual device it basically looked and ran perfectly. I was elated.
I still needed to work through the issue with the video cutscenes, which were coming up blank too. Videos in Quake II used something called the .cin format. It seems archaic and crude now, but basically it was a file that was for the most part just a series of .PCX images. The game was getting its color palette from a .PCX file too. PCX is an image format so old that very few people remember it anymore, and it’s actually kind of difficult to find something that can even open one. Back in the day I could open them with MS Paint, but even MS Paint doesn’t support them now. I wound up having to fire up GIMP, all to basically figure out that the palette wasn’t the issue.
And then I noticed that the most recent changes to that Android port were to the gl3 renderer. Interestingly, out of the box the gl1 renderer wouldn’t build on iOS, but the gl3 renderer compiled just fine; it just crashed when you tried to run it, complaining about a shader script, which was way out of my depth. But then I figured I’d do the same diff exercise between the gl3 renderer and the Android port and try the result in my iOS port.
One of the things that was bombing out in my iOS port was a reality check of sorts in the engine to make sure the gl3 renderer is running on something that supports at least OpenGL 3.2. The methods that inquire about versions work the same way in OpenGL ES, except there they come back as “3.0”: while an OpenGL ES 3.2 exists, iOS only goes as far as OpenGL ES 3.0. So you wind up having to short-circuit that code for iOS, basically hard-coding “3.2” as the returned major/minor version. And I guess the same goes for Android, since the diff of the gl3 renderer has to do the same thing.
In any event, back when I couldn’t get the gl3 renderer to run on iOS, I figured it was related to how OpenGL ES 3.2 didn’t exist on iOS while the engine was looking for (non-ES) OpenGL 3.2. As it turns out, this concern was misplaced for a few reasons, not the least of which is that there’s no version-number parity across GL and GLES: OpenGL ES 3.0 is roughly analogous to the 4.x line of regular OpenGL. But still, messing with shader scripts was outside my wheelhouse.
But I grafted in the code changes anyway, and some of them were indeed in the shader code, which takes the form of a hard-coded string literal inside the engine that gets fed to whatever comprehends shaders.
In any event after doing this, which also took like an hour, the gl3 renderer worked just fine in iOS. And the Simulator. And the video cutscenes worked fine too.
So now I had a version of yquake2 that ran on iOS more or less stock, with whatever SDL for iOS brings to the table. Now, however, I had to get it working in my usual format – that of launching into a Cocoa Touch-based storyboard arrangement, both because it would be consistent with the other ports, but also because I needed to figure out how to get the controls on the screen.
Earlier I mentioned how SDL wants to take over the main() loop of your program. The problem is that there’s not a whole lot of information out there about how a game could have SDL and UIKit/Cocoa Touch controls coexist. There’s even a forum message I found that effectively said “Don’t. Either use SDL or UIKit. Don’t try to use both,” which wasn’t encouraging. If I couldn’t use anything from UIKit, then I couldn’t use my standard on-screen joystick and buttons, and I would instead have to figure out how to render them using the game engine. Which wasn’t impossible or anything (the DOOM port does this), but it would potentially make things take a lot longer to finish.
However I was able to find one single resource out there that had figured out how to make both coexist and also had its source code available for inspection: the iOS port of an old open source game called Hedgewars. It’s on the App Store so I went and downloaded it and, assuming I’m guessing correctly as to which parts are SDL and which ones aren’t, it seems to work as advertised.
However, I couldn’t get it compiling in Xcode. It has several dependencies including, believe it or not, a Pascal compiler and its iOS shim, because the original game apparently was written in Pascal, and I’m guessing this was the easiest way to ensure it operated the same way. Since I couldn’t get it building, I instead just studied the part I was interested in and used it as a guide.
I was all excited to just get the game running like I had for Quake and Quake III: Arena, but it didn’t go as planned. In the previous games I had a surface on a storyboard scene that I would then add controls on top of. However, SDL doesn’t just take over the main() loop, it pretty much takes over the whole damn screen. If I added controls before “launching” the game they got clobbered, and if I added them afterwards the code never got to them because SDL had taken over by then.
Digging into it some more, I figured out that SDL doesn’t just take over the painting of the screen; it effectively replaces whatever your UIViewController is doing with SDL_UIKitViewController, which is just a subclass of UIViewController with some added SDL functionality.
There’s a file in yquake2 called sdl.c that handles most of the SDL stuff and I was able to figure out where in this code I would need to modify it in order to add more objects to the SDL_UIKitViewController. This was another one of those deals where I figured out that the thing I needed to do was either impossible in C or just a heck of a lot easier in Objective-C so I renamed the file to sdl.m and added in the new code that way.
On a side note, in the course of a few of these projects, most notably the Virtual Boy VR project, I’ve run into several spots where it would be easier to just play along with Objective-C instead of figuring out the Swift equivalent and I have to say Objective-C makes a lot more sense now. Mostly in that I never really fully appreciated how many of the seemingly strange syntax decisions they came up with make more sense when you consider that the language had to be a syntactical superset of C.
Back when I wanted to get into iOS programming, and before I had bought my first Mac, I was vaguely aware that you had to use Objective-C to develop for iOS, but I figured it was no big deal seeing as how the “C” in the name must have meant it was just a slightly different version of C. And once I hit the Smalltalk part of it, with the nested brackets and so forth, I realized this wasn’t going to be as easy as I had thought. And that was on top of learning all the ins and outs of the iOS SDK and Xcode. Meanwhile, the occasional developer who hadn’t abandoned the Mac by 2008 suddenly became much more valuable.
So when Swift came along in 2014, I dropped Objective-C like a bad habit. To some extent, when Sun set out to make Java, they basically wanted to see how they could improve C++. When Microsoft set out to make C#, they basically wanted to see how they could improve Java. And so when Apple set out to make Swift, they basically wanted to see how they could improve on C# and drop semicolons along the way.
So for years I viewed Swift as greatness (although maybe less so when major versions were still introducing breaking changes) and I viewed people who preferred Objective-C as the digital equivalent of Stockholm Syndrome.
However, I get it now, and the ability to just turn a C file into an Objective-C file is compelling.
Anyway, back to that SDL_UIKitViewController. In Objective-C, the way to append functionality to an existing class is what they call a category; in Swift the same thing is called an extension. So after writing that love letter to Objective-C up there, you may or may not be surprised that once I started to write new code I decided to go with Swift.
I extended SDL_UIKitViewController in a file called SDL_UIKitViewController+Additions.swift, which follows the naming convention of Objective-C categories because why the hell not. I added methods to make the joystick and buttons and wire them up, and then voilà, I had on-screen controls. Later on, when I needed to add the visible/invisible quick save and load buttons, I ran into a problem because they needed to be class-level variables, and Swift doesn’t support adding stored properties via extensions. However, I found a code pattern that uses a syntax hack to effectively do it via static structs, and long story short, it works.
So now I needed to add tilt controls. In the course of perusing the SDL code I saw it had accelerometer code (actually if you don’t have an MFi controller attached it still tries to use the iOS accelerometer as a joystick) so I figured I’d leverage that.
Yeah, this was not straightforward at all. CoreMotion has a few different options, and the one I had used in the RTCW port was DeviceMotion. SDL supports the accelerometer, which isn’t the same thing: it measures acceleration, not simply how the device is tilted. I was able to kinda sorta get the effect I wanted, but the results were janky and stuttering. I said to hell with it, renamed input_sdl.c to input_sdl.m, grafted in the RTCW code I had used before, and we were off to the races.
On a side note, I played some Fortnite on iOS, and I’m torn between thinking that their method of view aiming is superior and thinking that it only really works well in a game where combat is less frequent than in a game like Quake II, but I’m not opposed to experimenting with it later. The problem with making aiming and firing a drag-or-tap affair is that you have to choose which one you want to do at any moment, whereas you can tilt the device at the same time as firing.
So the game was now rendering, the controls were on screen, and everything worked the same way on Apple TV and tvOS once I got those storyboards up to speed. The tvOS menus are a bit janky because I haven’t nailed down the trick to making a menu option highlight without being so bright that it makes the text hard to read, but I figure I’ll come back to all my ports and fix that at some point. That’s pretty much the other edge of the open source sword: people working on open source projects for non-profit reasons tend to focus on the parts that interest them and not so much on the parts that aren’t as much fun.
With Quake III: Arena, there’s an expansion pack, but I didn’t mess with it because I wasn’t sure of the right way to do so, and also because it’s not clear anyone cared. With Quake, I was able to support both expansion packs (mission packs) in the same app; you just chose which one you wanted to play from a menu. One unfortunate thing is that you can’t backtrack: once you pick one, that’s it, and if you want to play a different one right now you have to kill the app and start over. I figure there’s a better way to handle this, but I haven’t done the legwork on it yet. With DOOM II and Final DOOM, I made those separate apps via Xcode targets because it just seemed logical to me: DOOM II wasn’t an expansion, it was a sequel, so it makes sense for it to be a separate app. Although, as I figured out a little late, for whatever reason (possibly due to the choice of PrBoom) you can actually go back and forth and choose to play the other games in the same app. Maybe I should make a new target that can do all three for those who don’t like having so many icons.
But this whole thing where the iOS port of Quake II doesn’t support .dylib files and instead compiles the whole shooting match into one big glob means that I don’t have a way to support the expansions in the same app. So to have the expansions I had to create separate app targets, leaving me with three apps per platform: one for Quake II and one for each mission pack. Quake II did have a third retail expansion called Quake II Netpack I: Extremities, which despite the title never saw another Netpack entry (kind of like when a group makes a compilation album with some optimistic title like Greatest Hits, Vol. 1 and then either breaks up before there can be a second one or just doesn’t stick with the convention when another compilation comes around). Netpack I, however, wasn’t a normal expansion pack like the others; it was mostly a collection of mods and maps on a disc for those who didn’t have great Internet access or something (but even then, the goal was to play online, so it’s just a weird release). Anyway, I didn’t make another target for it for these reasons (it has 11 game mods, which would be 11 more targets).
It has occurred to me that since I had to go in and short-circuit the .dylib loading code to get everything working in one glob, I could instead just have that code look in a different spot for the .dylib, but maybe I’ll try that idea later. Besides, now I’ve made these cool icons and banners.
Here are some videos of the ports in action, and of how it plays with a SteelSeries Nimbus controller.
In any event, I think that pretty well covers it. Quake II went from the game that was impossible for me to port to being the game I’ve now got up and running. I would like to circle back at some point and add multiplayer support but I need to do that to my Quake port as well.
If anyone wants to contact me I can be reached at email@example.com.