I finally got an iPod last month. It’s something of a fitting irony that as soon as I get one, no one talks about the iPod anymore and it’s all about the iPhone. Oh well, whatever.

I got the 80GB model because, other than just being an iPod, the most important thing to me was storage space. Of course, Apple, like every other hard drive vendor, advertises it as 80GB, but that’s only 80GB in base ten numbering; every operating system worth its salt counts bytes in base two, so it winds up having a formatted capacity of about 74GB. Which is fine, except that I still had too much music in MP3 form.
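The gap between the advertised and reported numbers is pure unit arithmetic, and easy to sanity-check. A quick sketch (the function name is mine, and this ignores the extra space the firmware and filesystem eat):

```python
def reported_capacity_gb(advertised_gb: float) -> float:
    """Convert a drive's advertised capacity (decimal GB, 10^9 bytes)
    to what a base-two operating system reports (2^30-byte units)."""
    total_bytes = advertised_gb * 10**9
    return total_bytes / 2**30

# An "80GB" iPod comes out to roughly 74.5GB in base two,
# before the firmware and filesystem take their cut.
print(round(reported_capacity_gb(80), 1))  # → 74.5
```

The same math explains why a “30GB” player shows up as about 27.9GB.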

The first thing I did was to go through and properly tag my collection. I’ve always been pretty good about this but apparently not good enough. I got my wife a red 8GB iPod Nano for her birthday back in April (it’s somewhat ironic that I’ve been whining about wanting an iPod for years now and the first one I buy is not for me). In dealing with iTunes on her system, I learned several things. Namely, iTunes runs solely off of tag information for everything. That folder structure you’ve been maintaining for years now? That’s nice, but it doesn’t mean squat unless the stuff is tagged properly. That “folder.jpg” file you’ve kept in the folder for the album cover art? Doesn’t mean squat – iTunes goes off of the album art embedded inside the MP3 file. Also, you need to use the “Album Artist” field so that the one song on the album with a different artist (i.e., Snoop Dogg featuring Xzibit) still winds up in the same “album” with the rest of the entries. I had to re-adjust my practices a bit. Fortunately I found a program, Mp3tag, which seems to do everything I need it to.

For my own technology-snobbish reasons, I actually went to the Apple Store in Plano to get the thing. The irony of passing many Costco, Best Buy, Circuit City, Fry’s and Wal-Mart stores that all sell iPods was not lost on me. I don’t really have any concrete reasons other than the fact that I figured, if you’re going to buy an Apple product, go to an Apple Store. Why not, right? I originally wanted a white model, but I had halfway convinced myself to get the black one. It did look a lot slicker in photographs but when I actually got to the store, where there are several tethered-by-a-steel-rope models to play with, the black ones were much dirtier, and the screens just didn’t look as good, even at maximum brightness. Plus, iPods are supposed to be white. So I went with white. I also picked up a good clear sturdy plastic case.

So once I got home I did one last pass on my MP3 collection with regards to proper tagging and then proceeded to back it up. It took 19 DVD-R’s to do so, and I had actually started the process a few days prior (making 19 Nero documents and then burning them later). Then, I went through and pruned the collection – I removed any artists I wasn’t really that interested in. I removed any albums that I didn’t think made sense on my iPod. For example, I cut out the Nirvana boxed set since it’s neat as a completionist’s entry, but not as something to actually listen to. I cut most of Prince’s albums because, well, most of it is crap – but I kept the greatest hits albums because he does do some great stuff now and again. I trimmed the collection down to about 63GB.

Then I fired up iTunes. Or rather, first I went and downloaded iTunes. It used to be that it was included on a disc with the iPod – now they literally just tell you to go download it. Not that it’s a big deal, just that with a $350 investment, a 20¢ disc is an odd way to cut costs. It also used to be that the most expensive iPod also included a dock, but now it just comes with the same cable as all the others – of course the most expensive iPod used to cost about $50 more, so I guess it evens out (since Apple’s Universal Dock is about $40).

So then I imported my music collection into iTunes. The main reason I did the DVD-R backup was that I’ve read a post or two where iTunes wiped out someone’s music collection at this step. I had better luck: iTunes didn’t wipe me out, and it took about 30-45 minutes to import my collection.

Then I synced the iPod. I had actually been playing with it a bit while I was waiting for discs to burn and for iTunes to finish importing songs. I bought this thing on a Friday evening while my wife was out running an event until 2 in the morning. I don’t remember when I started the syncing but basically it didn’t finish before I went to bed three hours later. By my estimates it took about four hours over USB2 to send all the music over. I didn’t get to actually check it out until the next morning.

So I hit eject. Only it didn’t work. iTunes told me that something else had a handle on the iPod. I just figured it couldn’t handle that much music being sent over at once. I resorted to disconnecting it anyway and doing a soft reset. It worked fine after that. I eventually figured out that Winamp has a default plugin now that is designed to “grab” an iPod when it’s plugged in, so as long as I don’t have Winamp running when I want to eject, I’m good.

There were still more quirks to overcome. The “Artists” menu was cluttered with every one-off artist from every soundtrack or various artists album I’ve ever owned. I eventually figured out the Compilation flag which keeps these artists out of the Artists list and in the Compilations list. Then I looked at the Artists menu and I saw “Adolph Hitler” – turns out I had missed the South Park Christmas Album.

I’ve also started to do some more proactive things to trim my collection down further – all the better to store new music and podcasts on. For example, I’ve removed any redundant songs from greatest hits compilations – you know, the ones where all the songs are old except for the two new ones? I’ve deleted all the previously released songs. I think without this, I would have Aerosmith’s “Walk This Way” like 20 times on there. If I want to listen to a boxed set, I construct a playlist of the running order of the set – the old songs and the ones specific to the boxed set.

At present, I have about 14,000 songs on the iPod. I’m not sure if that includes the podcasts or not but anyway, I more recently went through and downsampled anything above 192kbps down to 192kbps. In the time since I initially loaded up the thing, my collection grew to 70GB, but now I have it back down to 65GB. Soon I’ll need to just suck it up and start removing stuff I don’t listen to in favor of things I do listen to. I actually downsampled everything to 128kbps and got things down to 52GB but everything just sounded too awful (though ironically I do have several 128kbps files that sound great) so I went and rolled back (after doing a second backup/restore onto DVD-R’s).
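The space savings from downsampling are easy to estimate, since for constant-bitrate MP3s the file size is just bitrate times playing time. A rough sketch (the hours figure is a made-up illustration, not my actual library, and real savings differ because files already at or below the target bitrate don’t shrink at all):

```python
def collection_size_gb(hours: float, bitrate_kbps: float) -> float:
    """Approximate size of a constant-bitrate MP3 collection:
    kilobits per second * total seconds, divided by 8 bits per byte."""
    seconds = hours * 3600
    total_bytes = bitrate_kbps * 1000 * seconds / 8
    return total_bytes / 10**9

# A hypothetical 750 hours of music: dropping from 192kbps to
# 128kbps cuts the estimate by a third (128/192 = 2/3).
print(round(collection_size_gb(750, 192), 1))  # → 64.8
print(round(collection_size_gb(750, 128), 1))  # → 43.2
```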

I always figured I would never use the iPod for video but for grins I fired up the trailer to The Simpsons Movie and dangit, I actually like the video capabilities of this thing. So I fired up a video converter and now I keep the occasional DivX -> QuickTime movie on there. One of the first things my wife did when I got her the Nano was to cash in some of her credit card reward points on a boom box that takes the iPod as input – so now we can use either of our iPods in that boom box and listen to our music on the go. The other thing she got for her birthday: her family and I got her a new car stereo system to replace the dying one – this new one has a 1/8″ jack so she can listen to her Nano in the car. It also has an iPod-specific cable but at $50 for the cable and $30 to install it, we drew the line there.

I have a friend who hates the iPod. Actually, he hates Apple. He hates Apple with the passion of, well, the passion of how a Linux zealot hates Microsoft. I still haven’t told him yet that I own an iPod, mainly because I just don’t want to hear about it. My friend likely just hates Apple because they’re run by liberal turtleneck-wearing hippie Democrats in California. It does make me think about why I went with it. At one point in time you could make the argument that iPod was overpriced, and it still is expensive, but now they’re in-line with other players. The 30GB iPod and the 30GB Microsoft Zune cost the same. The Creative Zen tops out at 60GB and the Archos line of players is mainly about video, which like I said is secondary on my list of concerns. The Sandisk Sansa line is an up-and-comer, but they’re flash only and have nowhere near the capacity I need.

The iPod’s interface, features and marketing are tough to beat – to say nothing about the ecosystem of peripherals and accessories. Ironically, this is exactly the reasoning behind Windows’ dominance – you could switch to Linux or Macintosh but so many things – from games to scanners – can’t come with you. And if you think about it, it makes sense why the iPod is so popular. Apple makes computers and some people do buy them but so much of your content – that is, your programs, documents, games, etc. – can’t come along. The Macintosh is incompatible with most of your existing content (Boot Camp and virtualization notwithstanding). The iPod, however, by virtue of the fact that it can play MP3’s, is compatible with your existing content. This is why iPod has 75% of the MP3 player market, and Macintosh has 5% of the PC market.

Anywho, just thought I’d share.

OK, I’m going to switch gears for a minute and make a different kind of post.

Just recently, I changed jobs. The job I held before I had for four years. The one before that, for about fifteen months. Before that was darkness (aka College).

The job change in question was a long time coming. I switched positions in the organization two years ago and started working from home (since the company decided to close the office in favor of telecommuting). At first it was great – no commute, no office politics, new position and I liked the work. But then I started getting handed assignments that I didn’t like, such as traveling all over to install software or scripting a survey in a proprietary language. Between these, my company’s fondness for offshoring, constant reorganizations and the fact that they announced my position was becoming a business analyst that didn’t code anymore, I decided it was time to move on. I started a new position a few weeks ago.

The job search, in earnest, took about three months. Not too bad, but it reminded me of why it took so long to move on – I fucking hate the job search process.

Interviews – I hate interviewing. For starters, when you haven’t interviewed in a while, you suck at it, so you blow the first couple of interviews. Plus it never fails that you’re really good at Area A in development but they keep asking you questions about Area B and so you feel like a know-nothing idiot when it’s done.

I know I should be grateful, and I am mostly, but at one point I was interviewing every day of the week, which got old quick. I got so tired of explaining to yet another person my life’s history to that point. My degree in college is functionally unrelated to my career and my GPA wasn’t the most awesome, so that’s fun to explain to everyone, too. Nevermind that I’ve been out of college for seven years now, I still have to explain why I majored in Geography and still can’t find my way to the airport I drove to the week before.

Every once in a blue moon over the last four years of my job I would get a random recruiter phone call and the job sounded good enough to at least go talk to them. They’d get to the question “so why are you looking for a new job?” and I’d say “I’m not – you called me” and I’d never hear from them again. Maybe this was me being passive aggressive, I dunno.

My wife has a friend who has told us that he has never interviewed for a job and not received an offer. That’s a pretty impressive statistic and you believe it coming from him – but with all due respect, he doesn’t have to go through the technical interviews programmers have to go through. He doesn’t have to write code on a whiteboard in a suit for four hours. He doesn’t have to explain how much the Empire State Building weighs (and yes, I did actually get that question once – I assume we both read the same article). He doesn’t have to explain five different ways that his code is right because the interviewer keeps fucking with him.

Recruiters – I make this complaint in general, not in specific. The job I have right now I got through a recruiter. In fact, in this last round, only two of my interviews weren’t through a recruiter.

That being said, there are recruiters who are awesome and do their job and make their money and perform a useful role. Then there are the other ones. The ones who know absolutely nothing about the tech field they’re working in. In a meeting with one, I had to explain to her what all the different technologies mean (I didn’t mind, she was pretty receptive about it). Worse are the ones who think they know what they’re talking about, but don’t. Like the ones who see mostly C# on my resume and then assume I couldn’t do a VB.NET job (C# and VB.NET are the two main functionally-identical languages in .NET). Not that I wouldn’t prefer C#, but that hardly means I couldn’t do it. And then they say “do you know anyone who could do it?” – oh, I see, you want me to do your job for you and get someone else a job in the process?

Something else I figured out really quickly is that if a hundred different recruiters pitch you the same job, there’s probably something badly wrong with the job. There’s got to be some reason why turnover is huge and the company in question has resorted to calling every recruiter in the area (and some not in the area) and saying “have at it”.

Recruiters generally want to meet with you before pitching you to people and that’s fine and all so long as they understand that you still have a regular job and you can meet them after work. Some don’t – I would imagine that a good chunk of their clients are unemployed and will jump at the chance to drive to Downtown Dallas and meet them at 10:30 AM. Worse still are the ones who aren’t in your area – at least you don’t have to go meet them in person but I just don’t think I’d be comfortable with getting employment from a third party in Alaska.

Monster.com – now, I’m not really complaining about Monster because this time around Monster indirectly got me the job. For various reasons, this time around I just let the recruiters call me and went from there – I didn’t get to the point where I needed to start applying for jobs directly. Monster has this thing where when you sign on, update your resume, and log off, you’re bumped to the top of some “hey they logged on” list – theory being, you’re looking for a new job. After a couple of months of looking I finally did this one day and the following day I got – I kid you not – 35 emails and 20 phone calls. This was on a Thursday – I pretty much didn’t get any work done until the following week.

That’s not my complaint about Monster. Actually, I guess my complaint isn’t about Monster at all. My complaint is when people ignore what I’ve said on Monster. I listed my profile as the Dallas, TX area, no travel, no relocating, and no straight contracts (i.e., a “three months and then you’re done” sort of position). What happened? Lots of calls about straight contracts. Calls about relocations and positions with lots of travel. “But what if it’s all in the state of Tex…” NO TRAVEL. My wife had a 100% travel position for a while – never again do I get in a position like that if I can manage it. Truth is, I don’t mind some travel – it’s kinda neat to get away for a couple of days and it wouldn’t be the end of the world (my last job had me travelling 4-5 times a year, tops), but if you call me after reading my “NO TRAVEL” portion and pitch me a job that sends me all over the country, I’m going to say “no thanks” and hang up.

Job Applications – Some places you go to interview, usually the ones where it’s a direct hire position, want you to fill out a job application before you actually talk to them. This, to me, is probably the single most annoying thing about the job search process.

First, it never fails that I forget to think “oh hey this is a direct hire position, maybe I should bring a ‘cheat sheet’ in case they make me fill out an application”. Instead, it usually comes as a surprise. Then they make you fill out your life story, and make you feel pretty rotten in the process. They want you to go back X years in your job history. Dude, it’s on my fucking resume, why don’t you just look there? Yeah, I know you need it for your records – how about you save us both some time and make me fill this out when we’re closer to a job offer? (that’s another thing – what’s up with these jobs that require like ten interviews to get the position – getting in at the Pentagon is easier!) You have to explain any gap longer than thirty days. They want names, addresses, phone numbers, direct reports, etc. It’s especially annoying when they’re asking for the name and phone number of the boss you don’t want to clue into the fact that you’re looking for work and don’t give you a “please don’t contact them” checkbox. I always put in the name and phone number of a coworker who’s in on the gag so they can divert them if need be.

Then they ask for your educational history in the same way. Like I remember offhand the address and phone number of my High School. Oh good, they ask for GPA and Major. Then they ask for any convictions you have – I have none – but some even ask you if you’ve had any tickets for moving violations. Well hell, I don’t know – I had a ticket some years back for an expired registration sticker – is that a moving violation? I was moving the car when the cop spotted me. And heck if I remember if that was in the last seven years or not.

Sometimes they ask if you drink or use tobacco. Erm, define “use”. I have the occasional social drink and yeah, I’ve smoked a cigar in the last decade probably. Does that count? My wife worked at a place that simply would not hire tobacco users and would fire you if they found out you were a smoker. Not just smoking on the property – smoking at all, ever, period. The company line was that non-smokers needed fewer work breaks and were cheaper on insurance costs. I think the company just liked that it was a legal way to discriminate (same company had few if any black people). So damned if I want to get fired because I had a Swisher Sweet four years ago and forgot about it.

The absolute worst is when they have you sign the back. It never fails that there’s some clause right above the signature spot that says “we have the right to contact…. your employer…” erm – no? Please don’t? If possible, I turn the thing in without signing it. Maybe this has cost me jobs before, I dunno.

Now that I’ve put some of my gripes out there, here are some actual stories, some from this latest “round” of interviews, some from others.

Too-good-to-be-true – one recruiter called me up and pitched me a job that, while it was far away, was offering 2.5x what I made in my previous job (current job at the time). While that sounds awesome and all, it was so much more money that it set off my bullshit detector. It didn’t help that he was pitching the job to me like it was a used car. When I finally got him to tell me the name, it rang a bell and I gave him some “I’ll think about it” line and called my friend. Yup, true enough, the same person had pestered him two years prior, even threatening to come out to meet him at his workplace. I learned from someone else in the time since that the company in question was horrible to work for and generally worked people to death then had nice rounds of layoffs. I eventually programmed the recruiter in my phone under the name “IGNORE”.

No we won’t tell you – one recruiter called me up, and every time it wasn’t just one guy on the phone, it was always him and his partner. They pitched me some job that was a little further than I’d like to drive but I figured it was worth being submitted to. That is, until they wouldn’t tell me the name. I get that when you first have contact with a recruiter they don’t want to tell you the client’s name because they don’t want you to go behind their backs and go directly to the client and cut them out of the loop (consequently, if they have an exclusive arrangement with the client, they’ll tell you right off the bat). But my policy is that I must know the name of the client before I let you submit me. Part of this is because there are certain companies I don’t want to work for, for various reasons (like I’ve known people who have worked there and they’ve warned me to stay away), and sometimes I’ve been submitted there before, so it’s a waste of everyone’s time to resubmit me. But this one recruiter-pair literally wouldn’t tell me until after they submitted me. When I wouldn’t budge on the matter, they offered to take me to lunch to sweet-talk me, but I refused. Generally, this is a sign that the company in question is so notorious that no one will work with them.

Not too fair – a couple of years ago I interviewed with a large mortgage broker headquartered in the area. Their location was a large, sprawling, multi-building campus. All was well and good until I got there and saw the “JOB FAIR” banner. There were tons of people there, all vying to be cogs in the mortgage broker machine. My first thought was “I’ve been scammed”. Kinda like when someone sets up a “job interview” for you and it winds up being a large seminar trying to sell you on Amway or something. I had to park a mile away from the place. Hopeful that perhaps I wasn’t part of the “Job Fair”, I went to the building on the campus without the “Job Fair” banner. I asked for the Such-and-such building I was supposed to be reporting to for the interview – the woman pointed to the building with the banner. Every conversation I had went like this:

“I’m here for an interview with So-and-so at the Such-and-such building.”

“Are you here for the Job Fair?”

“I don’t know if I’m here for the Job Fair, all I know is I’m here for an interview with So-and-so at the Such-and-such building.”

And then the person would just point me to someone for the Job Fair. Apparently they were doing interviews at the Job Fair. I had to repeat this routine like 3-4 times and as soon as I finally got to someone who knew what was going on (while I prepared the “there’s been a misunderstanding fuck you bye” speech in my head) they finally said “Oh, you’re supposed to go to the fourth floor” and there I had the actual interview.

We don’t know either – I once had an interview at a company that made trucks (we’ll just leave it at that). They had a “hire for life” mentality, and mentioned that the only reason they had a position open for interviewing at all was because someone had retired – their retirement party was the week before. Nice, considering my gig at the time saw programmers as an exportable resource. Only problem was – they had no idea what I’d be doing when I got there. Their policy was to hire first, then figure out what the person does later. So that part where you interview them to see if you even want the gig? Yeah that was impossible. I might be working in VB.NET. Or VB6. They didn’t know for sure. Oh, I might have to travel to Europe for six months on hire. Or not. They didn’t really know for sure. I wasn’t sad when they never called back.

Oh and the reason I mention it was a trucking-related company was that this was another one of those fill-out-a-job-application companies. They didn’t make me do it first, I did it at the end of the interview. It included a question asking for my CB Handle. I guess they hired truckers as well?

I hope you’re not evil – This one was not an interview per se but it was with a recruiter. Job sounded great, but then she said “the only thing is the company is… well, they’re… Christian, so they want someone with good morals and values.” I asked, “So, what does that mean? What does it matter if they’re a Christian company?” She responds “Well, I don’t think they start off every day with a prayer like Interstate Batteries but they just want someone with good…. morals and values”.

I’m not sure what they expected me to say. “Oh sorry, I have no morals or values. Hell, you should be scared to be in the room with me.” I never heard more on the position but it just struck me as odd that they were bracing me for the overt Christianity of this company (which is just fine with me, I sort of have a separation of work and church stance). I wouldn’t have ended the interview or anything, but it was just weird.

Please wait – When I was trying to move to the Metroplex like four years ago I interviewed at this one place on the Tollway and at one point towards what I thought was the end they said “OK we’ll be right back”… and I sat there for like 45 minutes in this dead silent conference room in this dead silent office doing nothing. Then one of them came back and said “OK, we’ll call you back for your next interview”

That wasn’t the weird part – the weird part is I did that standard thing where you wait a bit and you call back to see if you got the job (or in this case, the next interview). Which if you have to call them either means they’re slow or you didn’t get the gig. So the receptionist says “oh he’s busy now, would you like to leave him a message?” so I do. And I call back in a few hours. Same story, only I don’t leave a message. I do the same for the next few days. I leave him another message or two. I’m on the road headed to another interview (for the job I accepted and kept for four years) when another recruiter calls and pitches me this place and I tell him I’ve already interviewed there. I eventually quit calling the place and I never did hear back from them.

Now, I know what happened – they went in the back and maybe they were busy or maybe they discussed me, I don’t know. Maybe they just passed on hiring me or maybe the position fell through or whatever, which was fine. But why avoid my calls? Why not call me back when I’ve taken the time to call you first? If I don’t have the job just tell me and I’ll leave you alone. Why avoid me? I mean, it did eventually work – I eventually just quit calling and so they avoided any conflict. But jeeze, grow some sack and tell me no already.

Carry the one – One recruiter called me up and pitched me a gig with a company that sounded fun (that’s another thing – every job is pitched to you as a “fun place to work”. Every single one) until it came out that they required a 50-hour work week. Thanks but no thanks, all other things being equal, I’ll take the job that only requires 40 hours per week.

But they weren’t done yet – you got paid in some sort of sliding scale overtime sort of deal. So like, if your salary was $X that was what you got paid for your typical 40-hour work week. Divide the salary by the number of weeks/hours in a year and that was the amount per-hour you’d be paid for those extra 10 hours a week.

So… why not just pay me 125% of $X? As in, if the job paid $10,000 a year for a 40-hour week (an unrealistic but simple number), why not just say “oh the job pays $12,500 per year but you have to work 50 hours a week”? Why in the hell are you making me do the math on this one? Is it because the 50-hour-a-week thing is such a turnoff for everyone that you’re trying to make it sound like I get a bonus for it? Or are you trying to trick me into thinking I’ll get paid more than I will?
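Taken at face value, the calculation they were making candidates do is simple: the extra hours paid out at the base hourly rate. A sketch with illustrative numbers (the function name and defaults are mine):

```python
def pay_with_overtime(base_salary: float, base_hours: float = 40,
                      actual_hours: float = 50, weeks: int = 52) -> float:
    """Salary for the standard week, plus the extra hours paid
    at the base hourly rate (salary / hours worked per year)."""
    hourly_rate = base_salary / (weeks * base_hours)
    extra = (actual_hours - base_hours) * weeks * hourly_rate
    return base_salary + extra

# $10,000 for a 40-hour week becomes $12,500 for a 50-hour week:
# 10 extra hours is 25% more time at the same rate.
print(round(pay_with_overtime(10_000)))  # → 12500
```

Which is exactly the point: it collapses to base pay times (actual hours / base hours), so there was no reason not to just quote the bigger number up front.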

I think they were targeting the desperate-to-get-a-job types. That or they were just handed a shitty job to pitch. Like the one local firm I kept getting pitched that had a suit-and-tie policy. Sorry, all other things being equal I’m taking the job that lets me wear something normal to work.

Contracting Insanity – this one is not mine, but it’s my favorite interview story ever. It was a Slashdot comment.

All is well now, I got an awesome job through a great recruiter at a good pay rate mere minutes from my house (30 of them). Truth be told, I like these horror stories, I just hate going through them.

Several years ago there was this one (now defunct) page I would go to and people would post their webcams there. I think I went there because the Penny Arcade guys had their webcams there. Something about the time made webcams interesting.

Anyway if you remember anything about webcams when they were “hot” (and no I don’t mean the “dirty” ones) you’ll remember the trend was to pose for some shot, use some sort of editing software to graft a phrase on the image, and then leave that static image, in place, for a long time (I think people had started to realize that they scratched themselves too much to leave the things live for too long).

In the wake of 9/11 one of the webcams on this page (whose webcam it was escapes me) was just of the guy sitting in a room, the only illumination being from his monitor, with a rather downtrodden expression. The phrase he had typed on the top of the image was: “When I was growing up, my mother always said TV, movies and videogames would desensitize me to violence and reality.” At the bottom of the image was the phrase: “I really wish she had been right about that”

One of the topics about interactive entertainment (okay, video games) that’s always fascinated me is how it affects us, or doesn’t, or if it even can or not. Many of us in the gaming proletariat have always maintained that video games don’t affect us. We know that’s not entirely true – play Civilization IV long enough and you’ll be moving the pieces in your sleep. Play Tetris long enough and the cityscape skylines will start to beg for more pieces. But playing GTA3 didn’t make me into a violent criminal. If anything, its non-repetitive gameplay actually hinders your ability to draw too many patterns in your mind.

Still, I play a lot of games where I am in a 3-D world with very realistic (or at least convincing) graphics, armed with a gun, and killing anyone I see. Sometimes the blood makes patterns on the walls. In some areas of DOOM 3, the brains literally pop out of the enemies (who are all zombies). Thanks to the invention of rag doll physics, I can now hear the crunch of their bones as their bodies traipse down every stair or rock on the way to the ground. In playing all of this it has entered my mind that I may be getting desensitized to violence. It doesn’t stop me of course.

And then this past summer I bought a game off of Steam called DEFCON. This game’s premise is essentially to implement the “Global Thermonuclear War” game from the 1983 blockbuster WarGames. A brilliant premise, especially for children of the 80’s like me, and one I can’t believe wasn’t done sooner. The graphics are low key, the gameplay is simple, and the whole notion reeks of style.

One thing, though – the game is exceptionally creepy. It’s a combination of using the rather low-tech graphics (though it’s not like the game is some EGA slouch), eerie music, and some subtle sound effects (like wind) that make the game downright spooky to play. But not because it’s some scary notion like those in the Resident Evil games, no this one is creepy because – you’re basically killing millions upon millions of people. The catchphrase of the game is “Everybody Dies” and it’s a given in the game that a large number of your people will die, too – the way to “win” (or one of them anyway) is to just make sure more of the enemy’s side dies than yours.

So it says something that in a day and age where I can play a game that lets me mow down pedestrians and kill innocent people, I get the heebie-jeebies from seeing “DALLAS HIT: 5.4 MILLION DEAD” on the screen in cold stale letters. I guess it means two things – I haven’t been desensitized to violence after all, and context is important, despite what the Jack Thompsons of the world think.

Over Christmas, I bought a Nintendo DS. I now mostly retract my earlier statement: this is now the most perfect gaming device I have ever purchased. It’s too bad it can’t do multiplayer GBA games or play GB/GBC cartridges, but after seeing my GBA games on the backlit screen, there was no going back to my GBA.

The first game I bought was New Super Mario Bros. The second game I got was Brain Age: Train Your Brain In Minutes A Day. I was sort of shocked to see that Brain Age was #10 on the top 10 console games sold in 2006, period. I figured that I was unusual in being weird enough to want to play this game (though, I did notice it was advertised in my wife’s magazine Real Simple, so perhaps Nintendo got it right about expanding their market.)

Brain Age claims, in a very “for entertainment value only” sort of way, to exercise your brain and make your mind “sharp”. It takes the research of Ryuta Kawashima and turns it into an interactive game, which is quite effective because it is considerably more interactive than a book and can calculate your progress for you (the game even plays as if you’re holding it like a book). It uses the internal clock of the DS to make it such that you can only play the games once per day, it tracks your progress on a graph, and even comes with Sudoku puzzles.

I’ve been playing this game for a few months now and, though it might be a placebo effect, I do think the game is actually effective at what it claims. Not that I think it’s made me smarter or sharper necessarily, but I am getting better at the activities daily, and some of the tasks (rapid-fire math calculations, memorizing lists of words) seem a lot like the sorts of things we have kids do in schools. It occurs to me that this game would be excellent for schools. This is the sort of game my wife could like. Hospitals in Japan have been using it to ward off dementia. The game is selling many times better than Nintendo had ever dreamed.

But then it occurs to me – if you accept the notion that Brain Age might have an effect on your mind – in this case a positive one – don’t you also have to accept the notion that other video games might have a negative effect on your mind?

The style of Final Fantasy-type games (specifically, old SNES-type games with low tech graphics) is reproducible enough that a company in Japan actually made a software package called RPG Maker whose purpose is to allow people to make their own Final Fantasy-style RPG. An individual named Danny Ledonne used a version of this software to make a game called Super Columbine Massacre RPG!, which recreates – to some extent – the events of April 20, 1999, putting you in the role of Eric Harris and Dylan Klebold. Seeing as how I’m the only person I know who actually thought JFK Reloaded was neat, I figured I’d give the game a go.

Final Fantasy-type games are famous for using the “opening screen of text” tactic. Usually it’s a screen of a solid color (white, black, etc.) with text on it, each line fading in, and usually some sort of weird nonsense that makes no sense whatsoever outside of the universe of the game (which you know nothing about since every Final Fantasy game is completely different). However, SCMRPG‘s opening screen read:

The purest surreal act would be to go into a crowd and fire at random.
André Breton, 1896-1966

I actually felt nauseous reading that. And the slow, meticulous pace of the opening sequence of the game was just surreal. When dippy looking 16-bit sprites are representing some androgynous fictional Japanese characters, it’s easy to have no emotional attachment to the game. When the sprites represent real-life killers who meticulously planned the then-worst school shooting in history, the experience is much different.

The author of this game did his research – just about everything in the game comes from a real-life incident or allegation (easy enough to do, since everything about that day and the killers has been documented over the years). The MIDI music is from the era. The theme song on the main page is “The Nobodies”, the Marilyn Manson song which is generally accepted to be about Columbine.

The author has come under a lot of fire for the content of the game, especially the “going to Hell” detour the game takes (more of a reference to the types of detours the Final Fantasy-style games take than a commentary on the killers) and many people have stated that the author’s initial purpose was to stir up controversy and that he only switched his story to the “social commentary” role once he got the popularity he desired. I disagree; I think he intended to make a work of art and once he figured out that the technology he wanted to employ – namely that of an old RPG-style game – would prove feasible enough for his purpose, he went ahead and finished it.

I started writing this post in February. Shortly after starting it, I made the decision to start looking for a new job. In late March, a family issue gained the majority of my attention until late April, and in the last three weeks I finally secured and started another job. This is why this post took so long to write. In the meantime though, another school shooting went down at Virginia Tech and, perhaps ironically, it took place in the same timeframe as Columbine (the third week of April).

The game industry was able to breathe a slight sigh of relief when it came out that Seung-Hui Cho did not play video games (though this didn’t stop Jack Thompson from making the claim anyway). With a body count of 32 compared to Columbine’s 15, the VT shooting became the worst school massacre in history, and in the ensuing weeks it caused a lot of speculation and finger pointing. However, it seems to have vanished from the spotlight quicker than Columbine did. Perhaps it’s the Iraq War, perhaps it’s that it was a college as opposed to a high school (where the students don’t have a choice in the matter of attending), perhaps it was because there wasn’t an easy pop culture target to nail it to (Marilyn Manson, etc.), perhaps it was the video and photos that the killer sent to NBC News during the tragedy, maybe it’s the misplaced blame on gun laws (a few months prior, VT made it illegal to carry a concealed weapon on campus, leading some to believe that had this rule not been put in place the massacre could have been ended by another student). Whatever it is, the spotlight on VT has faded a lot quicker than Columbine’s did.

In any event, I’m not sure if games can really have any lasting effect on our senses anyway. On JFKaos, a JFK Reloaded fan site (the only one, probably), someone claiming to be from a marketing agency wrote into the webmaster. This person stated that Traffic, the developer of JFK Reloaded, contacted them first and came over to show the game. In the initial showing of the game, one woman was so nauseated by watching the game being played (in the game, if you hit JFK’s head in the same way Oswald did, it has the same “brains flying” effect as the Zapruder film) that she had to flee the room.

Now, I’ve seen videos on the Internet that have made me sick and given me nightmares. However, JFK Reloaded didn’t. Neither does the Zapruder film. Neither do horror movies or blood splattered on the walls in video games. Does this mean I’ve become desensitized? Does this mean society’s become desensitized? (witness how The Beatles were once seen as a corrupting influence, but now Marilyn Manson collaborates with Disney)

Or does this just mean what we’ve all known all along and no one wants to admit – different things affect people in different ways and censoring something for the masses in order to avoid upsetting a small number of people is pointless.

I’ve seen some upgrades recently.

Back in October, I joined the cult of widescreen LCD owners. LCD monitors are one of those deals where, once you make the switch, you wonder how you ever got along with a CRT.

Amusingly, I went back on some of the bold claims that I’ve made in the past. It occurred to me that I had spent a lot of money on a 20″ widescreen monitor (native res of 1680×1050) and part of the reasoning behind that was I wanted things on my monitor to look prettier. And here I was running the old, circa-1995 Windows 95 theme. So I gave the Royale Theme a whirl, and I wound up liking it – I think half of the reason is because it was designed with LCD monitors in mind. I also went ahead and tried out ClearType and, after getting over some of my prejudice, wound up liking it, too – again, because it was designed with LCD’s in mind. I had to apply a different font for my programming editors, but it wound up being worth it in the long run. Ironically, though, I now have less reason to upgrade to Vista (or will at least have less neat new stuff happening when I do) as a result.

I also got, for my birthday, a G5 Laser Mouse and a G15 Keyboard. I’m in the unique-ish position of being anti-wireless. In the same way that I now think people who “prefer” CRT over LCD are backwards, I’m in the position of thinking wired mice are superior to wireless mice, which many people believe is backwards. The G5 has a wireless “cousin”, the G7, which is about $30 more and does not feature the removable weights cartridge that the G5 does. I know I sound like those people who refuse to move to CD’s and prefer vinyl because vinyl sounds slightly better, but I refuse to go to a wireless mouse because the wired mice are a little more responsive. I open up MS Paint and try to make circles with the mice very quickly. Without fail, the wired mice make better curves than their wireless cousins – the wireless mice always have straight lines as part of the curves. So for $30 more it’s a less accurate mouse and is missing the weight cartridge feature? Pass.

Of course the irony there is that it’s not like I’m such a meticulous hardcore gamer that an extra 1.4 grams on the right side of the mouse will make a huge difference, but it’s the principle of the thing.

The G15 keyboard is really nice but it’s another break from tradition for me. I’ve always viewed keyboards as these cheap, disposable devices and here I am spending $100 on one. It’s already made me less likely to eat at the desk – my last keyboard was so clogged with food and dirt it wasn’t worth the effort to save. But the illuminated keys are worth the price of admission alone. Ironically, I don’t tend to play a lot of the games the macro keys would come in handy for, but they have come in handy for testing things I’m developing – my main job has this project I’m working on where I have to fill in a form on a web page before continuing. Once I hooked this up to a macro, life was good.

The LCD screen is seen by some as gimmicky (enough so that Logitech sells a version of the keyboard, the G11, without the screen for about $30 less) but ironically for me it’s worth it for a lot of non-game reasons. I mostly play FPS games, so the fact that it tells me how much health I have or how many bullets I have left is not that useful – that information is on the screen already (though it does still sort of come in handy in PREY since there’s an actual number on the LCD screen instead of just a meter) – but the TrillianG15 plugin for Trillian Pro is sent from the gods – now I can answer instant messages without having to tab out of the game. I can also keep track of the time with the clock on the LCD, check out performance settings with the performance monitor, and check out what song is playing in-game using the media display, all from the LCD.

The only irony in all of this (other than the fact that it wasn’t until I got these guys home that I realized the shortcoming of not owning a USB KVM switch for my work laptop) is that I’ve had to adjust to them. I’ve never owned a mouse that could “tilt” the wheel so I keep screwing things up when I try to middle-click on something. And I never realized how much of my typing was based off of “where are my hands on the keyboard” until I got a keyboard that was much wider than any other keyboard I had ever used (side note: the G15 is just barely small enough to fit on my keyboard drawer, both in terms of LCD clearance and sheer width). I kept hitting macro keys (which, by default, are mapped to the F-keys) because I thought it was the edge of the QWERTY section. And while the rest of the keyboard is a standard 101-key affair, Logitech lays out their keys and sizes just differently enough that I’ve had to do some readjustments. I had a fluorescent lamp on my desk and, ironically, the light bouncing off of the black keys makes them harder to see – so that had to go. And though I used to be bad about not cutting my fingernails soon enough, never again, since whatever material these keys are made out of feels like crap when you hit it with a nail (in my opinion anyway, I have no idea what it’s like for women with longer nails). Overall though these are awesome purchases – that I can see the keys in the dark and ratchet down my mouse sensitivity in-game has already paid off in spades.

One of the things that comes along with newer technology like this is whether or not your games support it. The widescreen is the biggest X factor. If a game is sufficiently old and/or didn’t allow for minute tweaking, then it doesn’t work with widescreen, or at least not correctly. If a game can’t run in the native resolution then you have to run it non-natively, which causes some blurring due to the nature of LCD’s. Not a huge deal, and I’ve gotten to where I try to run older games in a window (which presents its own challenges if the game used a really old version of DirectX that didn’t get along with high color displays all that well). The Widescreen Gaming Forum has come in handy, but if the game is old enough and no one can find a tweak that works, and it won’t run in a window, then you’re just sort of hosed. Old games like Quake 3 work after some effort because the developers were freaking awesome, but even some newer games don’t work. Neverwinter Nights works with widescreen but the newer game Star Wars: Knights of the Old Republic, which used the Neverwinter Nights engine, doesn’t work since the developers cut off support for the game before the functionality was grafted into NWN (either that or they just never bothered to re-graft it back in. Bioware developed both games, so I wouldn’t be surprised if it was just a matter of contract woes with Lucasarts – in any event the support can be hacked back in).

Ironically, one game that supports widescreen, multi-monitor gaming, the G15 keyboard’s LCD screen and all of its function keys, and the G5 mouse, is World of Warcraft – aka, the game I refuse to play both because of my anti-MMORPG stance and also because I’m afraid that I’ll get hooked and like it. It’s kinda like those situations in college or high school where you’re at a party and there’s marijuana floating around – you don’t try the pot both because you’re anti-drugs, and also because – you’re afraid you might like it. So the fact that I just compared WoW to drugs says something about it.

But it does bring up something else I’ve noticed – one of the podcasts I listen to is the PC Gamer Podcast, and it’s one of the more interesting podcasts I listen to. These guys have been covering PC games for over a decade (though I don’t think there’s anyone who’s been with the magazine since day one) and the debate is lively (for example, they don’t like the 0-100% scale they use, either, but they’re stuck with it). One thing I kinda don’t like about the podcast though is that 1/3-1/2 of every show talks about World of Warcraft. If your only exposure to the PC gaming world was this podcast, you might not even realize there were other MMORPG’s out there. Sure, WoW has 8 million gamers now (or accounts, but that’s the current active number – not just over the lifetime of the game) so it’s not like something you can ignore. It concerns me because while WoW is something of a shining example of PC gaming superiority, there are a lot of people who believe their $15/month fee is a better investment than additional games, so they actually buy even fewer PC games because of the game.

But one thing World of Warcraft has for it that even other MMORPG’s don’t necessarily have is constant development. Sure, part of that is the fact that they need to keep coming up with new content or people stop paying and playing, but as part of that continuing development the game adapts to new technologies. When dual core came out, they adapted to it (I don’t know offhand if WoW exploits dual core but at least it doesn’t screw it up like it has with other games). When widescreen monitors came out, they supported those (it’s more than just a resolution change, it also requires an FOV tweak). When the G15 keyboard came out they put support for it in the game. It was compatible with Vista from day one and heck, even the Macintosh version got a universal binary so the game runs natively on both PowerPC and Intel hardware in that universe. It’s just nice that the game continuously updates itself for new stuff. One of these days when everyone’s computer is more powerful, they’ll make new expansion packs that require beefier hardware and have better graphics (not that this is unique, EverQuest did the same thing).
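For the curious, the reason proper widescreen support is more than a resolution change comes down to the field-of-view math. Games of that era assumed a 4:3 screen, so on a wider monitor the horizontal FOV has to be rescaled (the “Hor+” approach the Widescreen Gaming Forum community favors) or everything looks stretched. A rough sketch of that conversion, assuming the standard trigonometric scaling formula:

```python
import math

def hor_plus_fov(fov_4_3_deg, new_aspect):
    """Rescale a horizontal FOV designed for 4:3 to a wider aspect ratio.

    fov_4_3_deg: horizontal field of view the game assumes on a 4:3 screen
    new_aspect:  width/height of the target display (e.g. 16/10, 16/9)
    """
    half = math.radians(fov_4_3_deg) / 2
    # Widen the view frustum in proportion to the change in aspect ratio
    scaled_half = math.atan(math.tan(half) * (new_aspect / (4 / 3)))
    return math.degrees(2 * scaled_half)

# A classic 90-degree FPS FOV on the two common widescreen ratios
print(round(hor_plus_fov(90, 16 / 10), 1))  # 16:10, e.g. 1680x1050
print(round(hor_plus_fov(90, 16 / 9), 1))   # 16:9, e.g. 1920x1080
```

This is why a simple resolution hack on an old game often looks wrong: the pixels are there, but the 90-degree frustum is being stretched across them instead of widened.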

Granted, this is from Blizzard, the same guys that are still issuing updates for Starcraft, so they have a tradition of setting the bar really high for support, and having 8 million people pay $15 or thereabouts each (I doubt China is paying that much per-head, and prepaid cards do get the price down some) does help. And it’s not like I have an answer here – without further sales and revenue coming in there’s not much of an incentive to continue development after the sales window (though as I say that, id came out with a DOOM 3 patch yesterday), but it’s the dirty little secret that while in theory the PC is eternally backward compatible (as opposed to consoles, where the Nintendo 64 doesn’t play SNES games and current consoles only begrudgingly play old games because optical discs mean that the form factor argument is out the window), the fact is that sometimes getting the PC to run old games is quite the task. If the game is an old DOS game, DOSBox usually does the trick. If the game used DirectX then in theory with a tiny bit of hassle it should always work. But if none of these tricks work and some new technology breaks things (I wound up having to fire off Painkiller with XCPU because neither the AMD Dual Core Optimizer nor the MS Dual Core Hotfix would fix it) then you’re just sort of screwed.

World of Warcraft continues to grow over two years after its release – PREY sold over a million copies but it went from top dollar to cheap bin in seven months. I wonder if PC games would sell better if they had a definite commitment on development windows. No one (or at least not that many people) wanted to buy Quake 4 for fear it wouldn’t be supported as long as other games, and when that happens, it’s a self-fulfilling prophecy since a lack of people playing the game causes it to be less popular and doesn’t encourage others to join in. To its credit, Quake 4 did release several major updates and I think could be the modern-day-technology successor to Quake 3, but I fear it’s too late to be given that chance. Battlefield 2 never saw all the fixes it needed (and literally one year later it was still unfixed while EA forced them to whip out Battlefield 2142), so confidence is important to gamers.

Anyway, I’m happy that I’ve got these nice new upgrades (I need a new hard drive and Windows Vista Ultimate but that’s a ways off) and when I can get the games to play along that’s extra awesome. But I do see the lack of adaptation as a problem. When you’re id Software and your 1999 game Quake 3 can adapt (or even your 1996 game Quake through the source code you gave away) that’s awesome. When you’re EA/DICE and your 2005 game Battlefield 2 can’t adapt (or tells me I’m a cheater if I try), that’s unacceptable. It’s almost tempting to try World of Warcraft just because they give gamers what they want instead of telling them. I still have copies of World of Warcraft sitting in this office with me. I’m tempted to install it. I can quit at any time, ya know.

Nah, I’ll just fire up Oblivion instead…

There was a point in time when the group Guns N’ Roses was the biggest band on the face of the earth and, with the exception of groups like Led Zeppelin or The Beatles, the greatest band of all time. And this was after one album.

Similar to my following of Van Halen, Guns N’ Roses is one of the other groups I follow. Their first album, 1987’s Appetite for Destruction is pretty much perfect – many rank it as the #2 rock album of all time, just under Led Zeppelin’s fourth album. At the time people even hailed them as the second coming of Led Zeppelin.

A follow-up EP, Lies, (or GN’R Lies, depending on how you read the cover) had their earlier independent release Live Like A Suicide and four new acoustic tracks, including the controversial “One in a Million”. A song like that would derail most careers, but GN’R had too much momentum.

Three years later GN’R came out with two albums simultaneously, Use Your Illusion I and Use Your Illusion II. They were much different albums from their previous efforts – they were highly produced, featured long epic songs and horn sections, and were promoted by the band’s first headlining tour. Most fans came along for the ride; some decided that the new albums were too different and the result of Axl Rose’s increasingly eccentric mind. Izzy Stradlin left the group before the tour started, which was the first sign of trouble.

1993 saw the release of “The Spaghetti Incident?”, a 12-song EP of covers, mostly of punk rock tunes. No one knew it at the time but it would be the last full release from the “original” lineup of GN’R (sans Steve Adler, their original drummer who was fired after Lies). A cover of the Rolling Stones’ “Sympathy for the Devil” appearing on the 1997 Interview with the Vampire soundtrack album would be the last song from the original lineup.

For a long time nothing happened. Slash quit the group and started a short lived side project, Slash’s Snakepit. Duff McKagan left at the end of his contract. Matt Sorum and Gilby Clarke were fired. Slash, McKagan and Sorum eventually did have a “second coming” in the form of hooking up with Scott Weiland and forming Velvet Revolver – a band who experienced an unprecedented amount of initial success based more or less off of the fact that they were considered the second coming of GN’R.

Of course the real second coming of GN’R (or the other one, if you prefer) was with the band that Axl Rose was now the only remaining original member of. He started hiring new replacements for his former bandmates and started recording new material. Somewhat quickly, this new GN’R had a song called “Oh My God” ready for 1999’s End of Days soundtrack.

Shortly thereafter, though, the band went into stealth mode recording a new album and very little was heard from them for months at a time. Occasionally a snippet of information would come out, like a producer for the album had been hired (or quit, or fired), or a new member of the band (like guitarist Buckethead – famous for wearing a mask and an empty KFC bucket on his head) had been hired (or quit, or fired).

At some point, the name of the new album came out: Chinese Democracy.

In 2002 there was some hope that the band was nearing completion of the new album when they were the surprise closing band on the MTV Video Music Awards. This was followed by a national tour. However, a few dates into the tour (they had maybe played four or five shows) the entire affair was canceled and the band went into stealth mode again. Years went by without a peep from the GN’R camp, other than from producers or members who had quit. Axl became the next Bigfoot – people would report on seeing him in the same way one would report seeing the Loch Ness Monster (of course, Nessie never gets interviewed by a surprise camera crew coming out of a hockey game).

However, in January of this year, Axl went on record (and more or less came out of hiding) as saying “you will hear new music this year”, which was pretty much accepted by most as meaning that Chinese Democracy would be released in 2006. In February, decent quality recordings of the songs “There Was A Time”, “Better”, “I.R.S.” and “Catcher in the Rye” were leaked on the Internet – the unconfirmed rumor was that Axl leaked them himself to test out the waters. That same month, Slash claimed to have heard the album and said it would be released in March, which obviously never happened. Over the intervening months, Axl occasionally dropped hints about the new album – the most prevalent being that there were 32 songs in some state of completion, 23 of which he was working on completing, and 13 of which would actually be on the final album.

Axl made a surprise appearance on the Eddie Trunk show in May (his first interview in several years), and he allowed Harmonix and Red Octane to put “Sweet Child O’ Mine” in Guitar Hero II as a playable song. Over the summer the new Guns N’ Roses played several sold out warmup shows and tried out the new songs. The plans were in place for the European tour over the summer with the North American tour to start in October.

And yet time went on with no announcement of the release date for Chinese Democracy. Axl had a chance to avoid or deny the idea that it would still be released in 2006 when he was asked about it on MTV News backstage at the 2006 Video Music Awards in August, but he still maintained that it would indeed be released in 2006.

In October a strong rumor was posted on RollingStone.com which indicated the album was to be released on November 21, but no one has ever confirmed it. When asked about the release date, GN’R’s manager just stated “there are only fifteen Tuesdays left in the year” (new albums are released on Tuesdays). A Harley Davidson ad featuring the final studio version of “Better” was placed on the HarleyDavidson.com website on October 21, only to be replaced by a version featuring “Paradise City” (from Appetite for Destruction) with the “Better” version changed to “coming soon”. When asked further on the release date for the album, GN’R’s manager stated “we might not bother with a release date – you might just walk into your record store one day and find it there”.

So that’s where it stands today – the tour is continuing (one canceled date notwithstanding) and the album is still “officially” being released in 2006, but no one knows anything else. As I write this there are nine days until the rumored November 21st date and still nothing from GN’R and/or their label. One potential problem is that the 21st is also the date that the new Jay-Z album is released (Jay-Z had previously “retired” so this release is seen as significant). Employees from record stores report that their usual indicators of an impending release show nothing at all for Chinese Democracy, whereas albums coming out in 2007 already have at least some trace in the system.

Some speculate that perhaps the management wasn’t kidding with their statements that the album might just appear on store shelves one day. Given that the aforementioned Jay-Z album that’s being released on the same day has already leaked online and Chinese Democracy hasn’t, it might be that the album is being handled in such an interesting manner to thwart piracy (it’s hard to pirate an album if you’re not even sure it’s finished yet). While an album magically appearing in stores would not be the best maneuver from a marketing push perspective, Eminem’s albums still sold amazingly well when their releases were pushed up unexpectedly to odd days of the week (like the Friday before the scheduled Tuesday) to thwart piracy. Of course those albums at least had a release date to speak of, and GN’R’s popularity in 2006 doesn’t compare to Eminem’s popularity in 2002.

Still, Axl does have in his possession something resembling the final album – he’s used it as collateral to get into clubs (he used it to get a club to stay open on his birthday – the DJ reported handling two CD’s). The “13 songs” statement seems to indicate that the final lineup of the album has been decided on (I find myself wondering why he’s bothering to finish the other ten songs). Sebastian Bach, who hung out with Axl enough to get himself used as an opening artist on their tour, says that he’s heard the album and that it’s “amazing”. Rumors have circulated that people in the parking lots of Interscope (the label, I believe – “Geffen Records” no longer exists) were listening to it via loudspeakers on the building. It’s also been rumored that last week’s concert cancellation (the original official story was that the fire marshals were trying to force GN’R to tone down their show and really force them out, the “official” official story was that the local police would fine the group if they drank beer on stage – but why they would forgo a $200K concert to avoid a $250 fine is weird) was due to Axl needing to fly to California to make some last minute decisions on the record (the other rumor is that since only 3,500 seats of the 5,000 seat venue were sold, Axl took it as an insult and canceled the show). Supposedly the cover art is finished and the marketing campaign is ready to go.

And yet – no album. Or release date. It seems extremely weird for an album that’s supposedly going to be released by the end of the year to not have anything remotely more concrete available in the way of information. But then again, nothing about GN’R has been normal thus far – Axl has used the same name of the group despite being the lone original member (Dizzy Reed is a holdover from the Use Your Illusion days but he still wasn’t in the original lineup) and then went on to spend close to ten years recording an album at a rumored cost of $14 million (perhaps that’s it – the record label has already spent so much money they don’t want to spend money to promote it). This truly is the Duke Nukem Forever of the record industry. It could be that Axl and crew have been mum because they’re working so hard on it. It could be that they don’t want to alienate concert goers by stating that the album in fact won’t make it out in 2006 like they promised. It could be that they just don’t know yet at this point when it will be out. It could be that they’re targeting December 26, 2006 as the release date – the last Tuesday of the year. And it could be that November 21, 2006 will see at least something – a single, an announcement, etc. (the “Talking Metal” podcast believes the date will be December 5, 2006 – and there’s some speculation that they might have insider information).

My main curiosity is – what is the point of no return? At what point is it literally too late to get the album into stores? It’s been said that between Thanksgiving and Christmas record labels “shut down” (which is why the Christmas albums all come out in October or so), and so if it doesn’t make it by November 21st (the last Tuesday before Black Friday) then it will likely come out at the end of the year or not at all. But if this coming Tuesday (the 14th) comes and goes with no announcement, does that mean that the 21st is impossible? Or will it really be one of those “walk into the store” kind of deals? And if it is, will the album be successful? Appetite for Destruction shot up the charts with no video or radio airplay or advance promotion; could Chinese Democracy do the same?

And overall, I’m curious about the album because the leaks, to me anyway, sounded good. I know this isn’t GN’R with Slash (the closest we will get to that is Velvet Revolver). I know this is essentially Axl’s solo project with the same name. It would be like if Studio 60 On The Sunset Strip was also called The West Wing but it was still about an SNL show in LA and not The White House. I bought Daikatana the first day it was out because man – what a story. I want to hear this album because I want to know what an album from an eccentric perfectionist spending a decade and a small fortune sounds like. Was GN’R huge because of Axl, or despite him?

All I know is – no matter what, if I wake up one morning (maybe next Tuesday) and hear that Chinese Democracy is suddenly on store shelves, I’m stopping what I’m doing and running to the nearest store and buying it. And any CD singles with unreleased songs. It’ll be like 1992 again.

A couple of years back I picked up a game called Metroid: Zero Mission for the Game Boy Advance.

Metroid: Zero Mission is based on the same game engine as Metroid: Fusion, which for all intents and purposes was “Metroid 4”.

When the first screenshots of Metroid: Zero Mission came out everyone just sort of assumed that it was a remake of the original Metroid game. Nintendo had done this before – 1993’s Super Mario All-Stars for the SNES had the Super Mario Bros. trilogy from the NES (including the Japanese version of SMB2) and “tightened up” the graphics to 16-bit SNES standards. So people assumed that Metroid: Zero Mission was Metroid with the “All-Stars” treatment.

However it was later revealed that Zero Mission was something of a “reimagining” of Metroid. They had taken several facets of Metroid, like enemies dropping missiles, and some of the level layouts, and then taken a hard left. It was more like a “remix” of the original game.

As a bonus, it contained the original NES Metroid, emulated, as an unlockable when you beat the game (which was sort of weird since Nintendo was selling this game for $30 while also selling a cartridge port of Metroid alone for $20 at the time).

So I got this game and started playing Metroid: Zero Mission. And it’s really good, and I’m really good at playing the game. And the whole time I’m thinking about how this game is so much like the original Metroid.

Thing is, while I love Metroid, I was never that good at it. I never beat it (though I saw others beat it) and actually I’m not 100% sure I ever even beat the first boss. But before too long I had beaten almost all the bosses in Metroid: Zero Mission and was on my way to the Mother Brain.

So I’m thinking to myself, “well, I bet I’ve just gotten a lot better in the ten years or whatever since I last played this game”. And I don’t doubt on some level that’s true.

And at some point I finally beat the game and unlocked Metroid. I immediately fired up the game and played a few rounds.

Now I remember why I never got far in that game. It wasn’t because I wasn’t as good a gamer back then – it was because Metroid is fucking hard.

Somewhere around the same timeframe, I got a disc for the GameCube called Mega Man: Anniversary Collection. This was a collection of the eight “Mega Man” games (as opposed to “Mega Man X”, “Mega Man Zero” or “Mega Man Purple Monkey Dishwasher”); the first six were on the NES and used more or less the same engine, the seventh was on the SNES, and the eighth was on the PSX/Saturn. I started playing the original games and yup – those games were fucking hard. Like, really really hard.

At this point I have a flashback of having to run down the hallway because my little sister decided, after getting killed in Super Mario Bros., that the answer was to go to the NES console and start beating on it. A neighbor’s younger sister had beaten their NES so hard that it required a second cartridge shoved in on top to keep the first one down – she had broken the “toaster oven” mechanism, it seems.

Now, Metroid was hard for various reasons, but one of them was that the hardware (the NES) was limited and so new that no one knew what they were doing with it yet. Don’t get me wrong, it’s a great game and a classic, but a lot of the gameplay mechanics came from the developers’ limited experience with the hardware and with what it could do.

But it didn’t take them too long to figure out what they were doing – The Legend of Zelda is still a masterpiece and I think even a tiny change to its graphics or gameplay would have ruined it.

And the NES was groundbreaking in this way – for the first time, what you were playing actually sort of looked like what it was supposed to be representing. In this era of 3D graphics we take this for granted, but at the time the NES was the first game system where you were controlling an actual sprite that looked like what the character was supposed to look like, not some green square that you had to “pretend” was carrying a sword (which was a green line).

But I can’t help but be amazed at, in hindsight, how hard those games were. In May, a game called SiN Episodes: Emergence was released by a developer named Ritual. It used a “dynamic difficulty” feature to adjust the difficulty of the game to your gameplay style (or lack thereof). There was a bug in the game when it shipped originally and as a result, one section of the game was excruciatingly difficult. I kept up with the feedback on message boards (some of which were frequented by Ritual employees) and I don’t think I’ve ever seen or heard so much bitching and outcry in my life, and I’ve seen people literally beat on consoles before. And most of these people were the kinds who played old NES games, back when games were really hard.

I actually went and playtested this game and what I didn’t realize before showing up was – they literally wanted me to sit there and play through the entire game. The idea behind it was that it was the first “episode” in a series of games and so it’s shorter and less expensive than other games. The concept is known as “Episodic Content” and depending on your school of thought, it’s either the best thing since sliced bread, or a horrible way to bleed gamers for more money.

Now I had a couple of problems – first, I’m not the best FPS player. Oh sure, I like them and all but realistically, I suck at them. And the second problem is – I rarely finish games.

Still, I was there already so I figured I’d give it a shot. And I finished the game in a bit over six hours. And this was on one of their “beater” systems with a lackluster video card and long load times.

And I also beat it when it finally came out – before they fixed the bug which caused it to be too hard at one point.

It got me thinking though – I’m not sure how many games I’ve ever actually beaten before. I know back in the NES era I finished Zelda and Mega Man 2 and 3, and back when PC gaming got reborn, I finished Wolfenstein 3-D and DOOM (but mostly because they just ran out of levels).

Actually, I think one of the big problems in the game industry is that people don’t finish games. They finish TV shows and movies (which is easy enough since the movie or show is at max three hours) and they usually finish books (unless it’s War and Peace or some book they can’t stand) but they generally don’t finish games.

And the biggest reason that they don’t finish games is because games are hard. Movies and television don’t require input. Books just require you to turn enough pages. Music just requires you to listen. Games require you to play. Which is great of course – it’s the point. But unless you really really like the game you won’t keep playing. And even then, when the stupid boss battle kills you in seventeen seconds over and over because your last save point left you with 11% health, you still won’t finish.

So the game industry is trying some different tactics with regards to difficulty in length. Perhaps the bravest maneuver I saw was in the game Prey. In the game at some point your character earns some powerup (with a Native American name I can’t remember at the moment) and after that point, when you die, you go through this “afterlife” sequence where you’re dumped out back to where you were before. So basically, death has no consequence (other than a short but annoying cutscene). On the one hand, this on some level helped the game since it made it a lot easier to finish and, as a result, most people who bought the game did finish it. On the other hand this tactic may have backfired since it made the game artificially short – when coupled with the fact that the game just wasn’t that long to begin with, some felt ripped off since this game was full priced.

Another tactic is to break up the game into smaller chunks – like they do with episodic content. Of course, this tactic runs afoul of pricing problems – SiN Episodes‘ three pieces will run the end-user $59.97 if they all remain $19.99, and that’s about $10 more than your average PC game. Although you can pull out anecdotal examples from the past (like the $74.99 SNES version of Street Fighter II) or adjust prices for inflation, it doesn’t change the fact that the consumer basically wants to pay no more than $50 for a full-length game. They don’t like that episodic content is trying to make more money in the long run. They don’t like microtransactions and smaller content releases (the tide turned on Oblivion quickly when the developer started releasing add-ons at $2-3 a pop). They don’t like that Xbox 360 games start out at $59.99. This drives them to wait for the bargain bin or go to used game stores.

And for this the user has to wait. I loved SiN Episodes: Emergence but it was released in May and Ritual hasn’t even released the name of the second episode, much less a screenshot. And Valve delayed Half-Life 2: Episode 2 until January. Maybe that’s the other big problem – the movie industry generally makes its timelines, the game industry doesn’t.

Anyway, the original point of this it-took-way-too-long-to-finish rant is that games used to be really hard and somewhere along the way we stopped expecting them to be hard. We started to whine when we couldn’t beat them (and then we started to whine when they were too easy to beat). We used to eat what we were fed and now we’re complaining about the food. I’m not sure if that’s a good or a bad thing. Rebellion is a good thing sometimes, and other times it kills otherwise perfectly good ideas.

Way back when I first tried Windows XP, I actually sorta liked the default “skin” that it comes preloaded with. I remarked at the time how it “felt” good, and not like it was for idiots or something.

Yeah, that didn’t last. I quickly grew tired of the bright green/blue contrast. I hated that the close/maximize/minimize icons were huge. And outside of “neato” apps, I don’t like the rounded edges of the windows. I’ve been on servers running Windows Server 2003 where the new-style Start menu and its recently used programs list are left on by default. I’ve never understood how that’s seen as more useful than the Start menu we’ve had since Windows 95 or so.

So whenever I do an install of XP, be it for my personal machine or work or whatever, the first things I do, always, are to set it back to the “Windows Classic” skin and the “Classic” Start menu. I guess this makes me the XP equivalent of Dana Carvey’s “grumpy old man” character from SNL. “In my day we didn’t have these fancy windows, we had to know what to click on and we liked it!”

Now really, this is just a personal preference sort of thing but it does make me wonder – is it really just that or is there something else going on here? Do I just like things to be more complicated?

I make my living doing development and a lot of it is using .NET. Microsoft has two main programming languages – C# and VB.NET. VB.NET is their .NET version of VB and C# is more like a C/C++/Java type language. So for example – here’s a code sample in C#:

string s = "";

for (int i = 0; i < 10; i++)
    s += "x";

And here’s the same code in VB.NET

Dim i As Integer
Dim s As String = ""

For i = 1 To 10
    s += "x"
Next

So as you can see, the two languages are syntactically very similar (especially in this dead-simple example).

So what’s the difference? Well, the C# code follows the conventions put forth by C/C++/Java – semicolons to terminate statements, curly braces to denote blocks of code (like loop bodies).

Big deal right? Well, the thing is VB people hate that. They see the semicolons as annoying. They see the brackets as redundant. They see C# coding as slow. And the VB.NET IDE enforces this mentality. VB.NET is not case sensitive but it doesn’t matter because the IDE goes through and fixes your code for you – if you had typed “dim” on that first line, it would have changed it to “Dim” for you. Same for all the other keywords. And when you put in that “For” it puts in the “Next” for you. The C# IDE, even though it’s the same binary, doesn’t do this for you (by default anyway, I’ve never looked for it).

So at first blush, VB.NET is the more productive language right? I mean, they do all this stuff for you right? Well here’s where my complicated mentality kicks in – I think VB.NET is annoying and slows you down.

You see those declarations of “i” and “s”? Look at the C# version – declaring s is quick and to the point. The VB.NET version makes you spell the whole thing out. And what the hell is “Dim” anyway? It means “Dimension”. Err, what? I don’t want to “Dimension” anything. I just want a fucking string. Oh, and I also need an integer for the loop. I just need it for the loop. Afterwards I don’t want or need it anymore. Not in VB.NET – you have to “Dimension” an integer. You can’t declare it inline like you can in C#, where “i” only exists inside the loop and you don’t have to devote an entire line of code to a used-once variable. And as redundant as those brackets are, they’re less typing than having to put “Next” there. Plus they make the code, in my opinion, easier to read.
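To make the scoping point concrete, here’s a minimal, self-contained C# sketch (the class and method names are mine, purely for illustration):

```csharp
using System;

class LoopScopeDemo
{
    // Builds the string the same way as the snippet above.
    static string Repeat()
    {
        string s = "";
        for (int i = 0; i < 10; i++)   // "i" exists only inside this loop
            s += "x";

        // Console.WriteLine(i);       // compile error: "i" is out of scope here
        return s;
    }

    static void Main()
    {
        Console.WriteLine(Repeat().Length);   // prints 10
    }
}
```

The loop counter lives and dies with the loop – uncomment the `Console.WriteLine(i)` line and the compiler refuses to build it.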

But a lot of people don’t agree with me. They say that the VB.NET code is easier to read. They see the “For” and “Next” (along with the VB.NET “If” and “End If”) as easier to read. No wondering what that closing bracket is closing, you can see the end of the conditional block using natural language. They like how the VB.NET IDE handles the case sensitivity for you (i.e., you type “string” in all lowercase and VB.NET’s IDE changes it to “String” for you). They like how VB.NET doesn’t let you go one second without telling you where you’ve screwed up (C# doesn’t catch certain types of errors until you compile).

So again, maybe I just like things to be more complicated.

Thing is, my belief is that having an IDE do too much actually makes you slower and lazier. Knowing you can get away with bad practices because the IDE will save you is bad form. Being swift enough to know the syntax and put your own semicolons at the ends of lines is actually quicker than writing a bunch of malformed code in the hopes of purposely letting the IDE do the work for you.

But then again I use Intellisense and I let the IDE do the indenting for me, plus I like the syntax highlighting so I obviously don’t take this “more complicated” bit to its extreme. Like those crazy fuckers who use vi. I’m sure they’re the fastest text editing peoples known to man but what a way to live.

A cohort of mine shares the “C# is needlessly complicated” mentality and we often trade jabs with regards to “language snobbery” and so forth (I won’t leave C# for VB.NET without a fight, even converting VB.NET to C# by hand to avoid it sometimes, and somehow I’m a snob – even though my cohort does the same thing in reverse). His take is that the “hand-holding” VB.NET does for you is just fine and dandy and makes him more productive, despite the extra typing (in his defense, it does seem like for the extra typing you have to do while coding, the VB.NET IDE does then pitch in and do a lot for you).

And he thought that right up until .NET 2.0 came out.

You see, .NET 2.0 is actually a fairly significant overhaul. In .NET 1.1, an ASP.NET project would compile down to a single DLL binary file, and to deploy it you just copied that file along with the aspx pages to the server. .NET 2.0 (and more specifically Visual Studio 2005) doesn’t do this. It actually runs the site off of a cache of the DLL in the temporary files folder. If you want the DLL to deploy, you actually have to “publish” your site. And by default, there’s a DLL file for every single page on the site – you have to configure it differently if you want to go back to a single DLL file.

Supposedly this was because people found the old way confusing. People would deploy their source code files by mistake, which is pretty much a bad idea. But to me it’s counter-intuitive. I would (and still do) use NAnt scripts to produce a deployable site straight from source control. Yes, this relies on an external 3rd party program, but it affords me more control.
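For reference, the sort of NAnt script I’m talking about looks roughly like this – just a sketch, and the project name, server and paths here are hypothetical:

```xml
<?xml version="1.0"?>
<!-- Sketch of a NAnt build file; "MySite" and the paths are made up. -->
<project name="MySite" default="deploy">
  <!-- Compile the VS2003-style solution in Release mode. -->
  <target name="build">
    <solution solutionfile="MySite.sln" configuration="Release" />
  </target>
  <!-- Copy the single DLL plus the aspx pages out to the web server. -->
  <target name="deploy" depends="build">
    <copy todir="\\webserver\wwwroot\MySite">
      <fileset basedir="MySite">
        <include name="**/*.aspx" />
        <include name="Web.config" />
        <include name="bin/MySite.dll" />
      </fileset>
    </copy>
  </target>
</project>
```

One script, one DLL, one copy step – which is exactly the workflow the VS2005 “publish” model makes harder.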

Also with ASP.NET 2.0, project files are out. No more central csproj or vbproj file. Again, this is supposedly to make it easier for developers – more specifically, in old-style ASP there was no project file (mainly because there wasn’t necessarily an IDE) – if the page was in the directory then it was on the site. Old-style ASP developers found the concept of a project file too confusing. But this also makes certain tasks (mostly in the area of compilation) more complicated. Something of a catch-22.

In Visual Studio .NET 2003 you picked a project to create and then chose whether you were making a Windows Forms application, an ASP.NET application, etc. In Visual Studio 2005 you pick “Project” or “Web Site” – so if you create a “Project” you don’t get to pick an ASP.NET project. Apparently people found that too confusing.

So again, maybe I just like things to be more complicated.

And the hell of it is – my friend who likes VB.NET because it’s less complicated than C# hates the changes in VS2005 and .NET 2.0 to make it “less complicated”. I’m not sure if it’s because he and I are just used to the way it worked before and worked through the wrinkles and don’t want to have to change (somewhat akin to those people who decided that Windows 98 is their last OS and they haven’t moved to XP and won’t) or if it’s because a number of the changes that make .NET 2.0 and VS2005 “less” complicated actually make it *more* complicated because now everything you know has suddenly changed.

Of course maybe it’s also that I want to be in control, and I can’t stand not being in control. Well, when it comes to tech anyway.

Ever since a friend turned me on to PC building about seven years ago, I’ve built my own systems. I want control of what’s in the system and more importantly, what’s on them. I don’t want extraneous software. I don’t want anything on my system that I didn’t put there myself. I won’t install anything that comes with baggage.

I’m not the only one that thinks this way – hardcore gamers tend to be a rather picky lot. A company called Starforce makes an anti-piracy solution for games, also called Starforce. A Starforce game relies on a hidden device driver that gets installed on your system when you install or run the game. The device driver runs at Ring 0, so it can do whatever it wants. When you uninstall the game, the driver stays behind. Some users of Starforce games reported that it caused random crashes and BSODs when doing normal things like putting an audio CD in the CD-ROM drive.

Gamers started to avoid purchasing Starforce-enabled games. Myself included. Starforce was indeed effective – the encrypted executable was extremely difficult to crack – but the bad press was enough to make most publishers ditch it. The depressed sales from boycotts were enough to negate the added sales from reduced piracy.

Valve required that a program called Steam be installed in order to run or play Half-Life 2. Steam does several things – for one, it ensures that you’re running the latest version of the game, and updates the game if there’s a new version. It also gives Valve a method of banning online cheaters. Plus you can use it to buy games – I bought SiN Episodes: Emergence without visiting the store. And if my hard drive crashed tomorrow, I could download the small Steam client, log in with my account, and have Steam download and install the latest version of all the games I own through it, without needing the install media.

But Steam has its problems. For starters, it’s another program running. You don’t really get to control the fact that it’s there – it’s required. It’s not like when a game comes with GameSpy Arcade and you say “no thanks I don’t want that” – you don’t get a choice. Steam also requires an Internet connection in order to create an account and associate your purchase with your account. It’s a reasonable requirement – it’s not like Internet access is scarce – but really if you were only going to play the single player game of Half-Life 2 you shouldn’t have to connect to the Internet. Steam connects to the Internet every time you play, to check for updates. If it finds one it downloads and installs it – and you’re not allowed to play the game until it finishes. The only solution to that would be to choke off Internet access to the Steam client (via a firewall or NIC disabling) and let it give up every single time. Plus ultimately Steam associates your CD Key with your account permanently. You can’t just take your Half-Life 2 discs and CD key and sell it to someone else. That copy is now permanently associated with your account. They have effectively used it to negate the Right of First Sale.

Personally, I don’t think the Steam thing is such a big deal – I don’t cheat, I like having things up to date, and I never sell games. And most gamers agree. But there is still a small minority who loathe Steam to the point where they refuse to play Half-Life 2 or any other game that uses it, period. I have to admire these people – they’re sticking to their guns. They’re sort of like the lunatic Linux fringe who think that being cross-platform is more important than sticking with Windows development and only getting 90% of the market (they’re small but in their favorite places they’re quite vocal).

But ultimately besides control (they don’t want any more apps on their systems than necessary) I think the disdain over Steam comes back to liking things complicated – sure, Steam makes some things less complicated but others more complicated (now you have to have an account, Internet access, etc.)

It’s my understanding that acceptance of .NET 2.0 and VS2005 hasn’t been as swift as Microsoft would have hoped – developers are saying they’re more productive in VS2003 and .NET 1.1 than they are in .NET 2.0. I think that’s part of the reason. Ironic, then, that VS2003 and .NET 1.1, which are more complicated than the new stuff, are also what let developers be more productive. I think that part of that “Ready to Launch” event was not so much because MS wants to give away free software, but more because that tactic at least got it into the hands of developers – the ones they’re still trying to convince to use it.

So maybe I’m not the only one who likes things to be more complicated.

Back in November I went to a Microsoft event here in Dallas called “Ready to Launch”, part of the launch “tour” for Visual Studio 2005, their latest development IDE, SQL Server 2005, their enterprise-level database (probably runs your bank), and BizTalk 2006, a product I’ve only heard about from job recruiters and apparently no one actually ever uses. There was a lot of cool stuff there, vendors, lectures, etc. But the real reason everyone (myself included) went was this – everyone who went got a free copy of Visual Studio 2005 (and SQL Server 2005 and BizTalk 2006). That’s worth taking a damn day off of work.

So here was Microsoft spending untold gobs of money (they had it in the Dallas Convention Center which ain’t cheap) on top of the ton of money they spent to develop Visual Studio 2005 and SQL Server 2005. And this was one stop on one tour, which had hundreds of stops around the world (several units toured simultaneously). All to launch products which they had every intent of giving away for free to people who attended.

To the average person this logic doesn’t make much sense. Heck, to the average developer it didn’t make much sense either – but they went anyway since, ya know, free software.

So what was going on here? Well, this entire event was probably the single biggest and most obvious manifestation of Spolsky’s theory that Microsoft really wants to give away development tools, and the only thing stopping them (other than DOJ antitrust violations) is the fact that if they did obliterate the competition (i.e., Borland) then Windows would be subject to vendor lock-in, making it a less inviting development platform (to some degree this is virtually what is happening, but on paper there are other options besides Microsoft).

But what winds up happening is that Microsoft makes compromises. They give away the software, but only to a select few. They only put on the event in select locations (mostly major metropolitan areas) and limit attendance to those who reserve well in advance. They only give away the “Standard” version of the software, the “Professional”, “Enterprise” and “Super Dragon” Editions they reserve for paying customers. And they drag you out to a day-long pep rally to make you excited about developing for Windows, complete with vendors and a catered meal.

And it works, of course. No one says “why would I go to that? I can get GCC or Mono and start developing now for free” – they sign up for these events in record numbers.

Microsoft, as everyone knows, relies more or less 100% on Windows. They also rely on Office, but Office is basically nothing without Windows. They have to sell Windows, or else they can’t afford to do anything else. No one wants to buy Windows unless it can run popular programs. So Microsoft will do whatever is necessary, just short of freely giving everything away, to keep and encourage Windows development.

Microsoft is in a pretty unique position in history. No other company has ever been able to do what they have done in the same way that they have done it. They not only came up with the most popular of the early operating systems for personal computers, but they were able to attain a virtual monopoly prior to the massive expansion of the PC market. The hardware changed and got commoditized, but you still needed Microsoft’s operating system (be it DOS, DOS and Windows or Windows) or else you couldn’t run the popular software. Their revenues doubled every year even though no one was buying their software from stores at a retail price – it just came with their machines.

Few other industries are like this. Cell phones exploded over the last few years but no single vendor provides the underlying operating system for them – each cell phone vendor just makes their own. This is why things like Java and Brew come along to try and make programs run on cell phones (slowly, without 100% compatibility). Car makers go to a dozen or more tire manufacturers when they ship cars to dealers. Microsoft was smart, determined, but most of all: lucky.

Problem now though is this – now everyone has a computer. Everyone. Well, ok, not everyone everyone, but when my Grandfather-In-Law has a PC he uses to look up tractor parts, you’ve pretty much reached the saturation point. So now Microsoft can’t make money hand over fist like they used to. The only way they make money now, other than selling things besides Windows and Office, is to sell upgrades. So Windows 98 begat Windows ME, which begat Windows XP, which will soon beget (begat?) Windows Vista. And of course Office goes through more frequent upgrades.

The only other thing Microsoft can do to make more money is to enter new markets. So you see them do things like create SmartPhone software – basically, their take on the operating systems you see in phones. Since you rarely see it in phones, however, it’s obvious that Microsoft doesn’t have what it takes to enter that market. They’re doing OK in the PDA market – they had 30% market share with their PocketPC operating system in 2001 or so, but now they’re doing so well that Palm is considering selling them. Problem is, though, people are using PDAs less and less – they want one device in their pockets, not two (a PDA and a cell phone), so the PDA loses since it can’t place phone calls (and the ones that can are, you know, PDA-sized).

But there is one device which is already in every home in America, even more than the PC – the television set. Microsoft has tried repeatedly to get on TV sets. At the dawn of the Web era there was a company called WebTV that sold a set-top device with a wireless keyboard and mouse, allowing people to browse the web on their TV sets. A poorly executed idea (it used a proprietary browser which rarely rendered things correctly), it was on its way to death when Microsoft bought it and marketed it as their product. It still lingers today as MSN TV but it never took off.

Microsoft tried to enter the PVR market with UltimateTV but it never made a dent in the market held by TiVo and ReplayTV. They have tried some one-off devices, things that hooked up to televisions and let you view photos and so forth. And most recently they unveiled the bizarre and underused Windows XP Media Center Edition.

But in 1999 or so a guy at Microsoft named Seamus Blackley had an idea that Microsoft should try and enter the game console market, currently dominated by Sony and Nintendo – he figured the inexpensive, powerful, and well known commodity hardware of the PC, coupled with the ease of use afforded by DirectX, would make for an unbeatable console.

Microsoft allowed the Xbox to happen for one reason – they saw it as their way to get onto television sets. And this time it might work.

Microsoft surely realizes that games are the only thing keeping Windows on top. OK, so it’s not really the thing that keeps Windows on top but it sure as hell does help. Firefox has captured over 10% of the web browser market from IE. People have literally switched web browsers, something many people don’t even realize is an option, because they’re so sick of IE and its issues. It doesn’t matter to them that they can’t visit their bank’s website anymore, they’re tired of worms. And a number of people would probably consider a Macintosh or maybe even something like Linux but they know that their games won’t come across, so they resist. Applications have substitutes, games don’t always. If you move to Linux then you can run OpenOffice instead of Microsoft Office. Sure, you’ll be missing some stuff, but if you can live without it then you can make the switch (incidentally, the feature-OpenOffice-is-missing-and-I-can’t-live-without-it differs from person to person and collectively keeps the world on Microsoft Office and Windows). But Half-Life 2 is only on Windows. Period. Not on Linux or Macintosh. If you leave it can’t come with you.

So who cares about operating system politics when you’re killing headcrabs?

So Microsoft realizes that they can use the Xbox to get on American televisions. And it worked, pretty much.

The Xbox (1) was a game console, period. They did do a couple of non-game-related things – you could buy a software package allowing you to rip CDs to MP3 on the hard drive, for example – but the real plans were for the Xbox successor, the Xbox 360. They finally realized that there had to be a hook in order to get onto American televisions, and high-end gaming was it. Unlike their forays into the cell phone and PDA markets, they figured the strength here was also to make the hardware as well as the operating system (in this case, the proprietary embedded OS built into the machine). Heck, they had even tried the make-the-OS-not-the-hardware bit when they teamed up with Sega to make a Windows CE port for the Dreamcast – and we know how that system turned out.

So – everybody wins. Microsoft wins, consumers win, everyone wins. Right?

Well, one teeny tiny problem. Namely, the pre-existing game industry.

See, Microsoft was able to manipulate the game industry into helping it get onto the television sets of the world. Well, mostly in North America and Europe – Japan pretty much told Microsoft to stick it. And it’s not like the game industry didn’t also win here – they got to deal with someone who’s not Sony (totalitarian) or Nintendo (quirky – people tend to buy a Nintendo system and then not buy non-Nintendo games on it). Sure, it’s the same company that destroyed Sun and Netscape and anyone else who dared compete with it, but hey – at least it was an American game console, right?

The game industry, like every other industry, wants profit. They see hurdles in the way of their profit. First is piracy. Next is the difficulty of supporting many platforms. Finally there are updates, future content, and online play.

There are two divisions of the industry – the PC and consoles. Both have advantages and disadvantages. The main advantage of the PC is that you don’t have to run anything past anyone: no censors, no quality committees, and most importantly, no royalties to anyone. Microsoft gets a cut of every game sold on the Xbox or Xbox 360, but they don’t get squat when a Windows game is sold. But the PC is also the home of the CD burner, so anyone, in theory, can pirate any game they want. Disc-based copy protection, online CD keys and so forth help, but they can all be overcome to some degree, and they’re harder to implement and support. Consoles can more easily lock out piracy (your average person isn’t ambitious enough to open up their machine and start soldering chips), and since they’re the exact same hardware every time, the support costs involved are much smaller – no trying to troubleshoot why someone with an off-kilter combination of hardware keeps getting a BSOD every time he opens a particular door. But on a console you have to get it right the first time. Prior to the Xbox’s hard drive, “patching” a console game was impossible, and even with the hard drive and the Xbox Live service, it’s not a given that the person is even online or capable of getting the patch (it’s not really a given on the PC either, but it is more accepted). Plus the manufacturing options for console games are smaller, and every unit sold means royalties to the console maker, on top of whatever the console maker gets paid to manufacture the game for you.

Still, what’s happened in recent years is this – with the increased costs of supporting the PC’s myriad hardware configurations, plus the fact that the PC has more widespread piracy, more and more publishers are loath to make PC games. The Xbox essentially was a PC, with its Pentium III-based processor and DirectX API. Microsoft figured this would make it easy to develop for the Xbox. It did – it almost made it too easy. Many publishers started to phase out PC game development entirely, on the theory that the effort to keep the Xbox and PC versions in sync wasn’t worth it, so go with the platform that has less piracy and sells more units to people who can be guaranteed that the game will work when they get it home.

This poses a problem for Microsoft. If games leave the PC they leave Windows. If games leave Windows, then in theory one of the biggest reasons the home users stay on Windows at all goes away. The hell of it is though, one of the biggest vehicles for causing this change, the Xbox, was invented by Microsoft. And the Xbox 360, if left unchecked, could just make this worse.

So Microsoft came up with a way to do both – they call it XNA. Using it, developers can come up with an Xbox 360 version and a PC version of their games simultaneously. They even released the game MechCommander 2 for free to demonstrate it. They also give away XNA for free, not entirely unlike their approach with Visual Studio 2005 and the “Ready to Launch” tour.

So far the approach is working – id Software is developing their next game using it, even going so far as to say the Xbox 360 is their primary development platform now. When John Carmack endorses something Microsoft does, then you know they did something right. id released DOOM 3 for the PC in 2004 but it was well into 2005 before a second developer was able to get it running on the Xbox – with cut down levels and graphics. However, Raven was able to get an Xbox 360 port of Quake 4 ready for the Xbox 360’s launch, just one month after the PC version hit stores. And the current must-have RPG, Bethesda Softworks’ The Elder Scrolls IV: Oblivion, debuted simultaneously on the PC and Xbox 360, sporting some of the best graphics ever seen.

It’s no secret I’ve not been a big fan of the Xbox 360 for various reasons, mostly that I figured it might hurt PC gaming. But now I have a different take on it – Microsoft sees the Xbox 360 not as a PC gaming killer but rather as a platform complementary to the PC. Both have their strengths – no one is going to play Civilization IV on their Xbox 360, but the 360 is cheaper than upgrading a PC and it takes advantage of HDTV. The 360 even integrates with Windows Media Center.

I still don’t own one and may not even bother with one this year, but I no longer hate the Xbox 360. I still prefer Oblivion on my PC though.

A few years back I joined the Stephen King Library, which is essentially a book club (it is in fact owned by Book of the Month Club, Inc.). Every six weeks or so they would send you a hardcover Stephen King book. The books are designed to look like the original editions, but they’re usually obviously newer printings. A few books were even exclusive to the club, like the hardcover version of the Storm of the Century screenplay, or the miscellaneous compilation Secret Windows. Between the SKL and Half Price Books I have a pretty complete collection of Stephen King’s work. Maybe one of these days I’ll even read them all.

The problem for the SKL right now, though, is that he’s not writing much anymore. He “retired” a while back, but he has like two books coming out this year, so it doesn’t look like the retirement stuck. Still, over the last year or so Mr. King’s output has been pretty sparse. The SKL sent me a “Stephen King Desk Calendar”, which I immediately sent back. No thanks – I don’t even really use my digital calendars properly as it is.

Then a few weeks back they sent me another book – ‘Salem’s Lot: Illustrated Edition. I was tempted to send it back, too, but I got to looking at it. It’s the same old book but with some additions. First, it has portions of the book that were originally cut, included as extra chapters in the back. The book is “illustrated” with creepy photos. There are also some additional side stories, written recently I presume. The book is a lot thicker as a result. So on the bigass shelf of Stephen King books I constructed as part of my newly redone home office, the last volume on the shelf is this latest edition.

But then a funny thing hit me – the cut out chapters? Those are deleted scenes. The extra side stories? Those are bonus materials. The illustrations? Those are concept art. The attractive packaging? That’s a menu.

The publishing industry has DVD Envy.

Can’t say I blame them – DVD’s sell truckloads. DVD’s of movies that failed at the box office sell truckloads. Music doesn’t sell, piracy hurts software, and the publishing industry hasn’t seen its former glory in ages, but DVD’s sell truckloads. Partially because they’re new, but also partially because they’ve mastered value-added content to the point where people who are otherwise stingy with their entertainment dollars will buy them.

So it makes sense that when you want to make some more money off of an old book, you come out with a “special edition” that does more than put a leather cover on it.

In 2004 a game called Painkiller was released. A fairly standard FPS in the vein of Serious Sam, it gained a cult-like following without ever completely gaining mainstream popularity. It also spawned a mission pack. It did a couple of things really well and everything else just decently well.

A few months back the publisher/developer released what for any other game would be the “Gold” edition (a single SKU with both the original game and the expansion pack on one DVD), but for fun they called it Painkiller Black. It also had the editing tools, a movie on the making of the game, concept art, a Penny Arcade poster, developer interviews, a music video, and CPL enhancements. All for $30 and in packaging that looked so good, I decided I had to have it (my Father-In-Law picked it up for me off my wishlist). Another game, F.E.A.R., went a similar route and even has developer (director) commentary as an option. And of course Quake 4 had movies, extras, and even the original Quake II game plus expansions as extras in its DVD version.

So the game industry also has DVD Envy, but they can do one better – they can put the games not only on actual DVD’s but also in actual DVD cases. Of course, in many instances they’re reluctant to actually put the games on DVD’s, but they are frequently putting them in DVD cases – the new case a lot of them are using is actually thicker than a normal DVD case and can hold up to seven CD’s on a spindle.

The music industry is the worst, however – they not only have DVD Envy, they’re even going to the lengths of including free DVD’s with music CD’s, and no one really cares. Go to the music section of Target, or the music store in the mall. Most new releases are now in special “dual jewel” cases with a free DVD enclosed. Even for really popular albums, when is the last time you heard anyone mention the content of one of these DVD’s? That’s partly because they tend to be throwaway fluff, and partly because no one’s buying them. The last one worth owning was included in the Nirvana boxed set, and even it was only worth watching once. So they’re driving up the cost of making a physical product that no one wants anymore.

Of course, now everyone is predicting the end of DVD’s. I personally don’t see that happening. People point to the music industry and say it’s a perfect example of the obsolescence of physical media. No – it’s that music is really easy to pirate, so people do it. Back when Napster 1.0 was hot, I knew people with tons of music on their hard drives who wouldn’t even steal a mint out of the candy bin at the grocery store – I don’t think they even realized what was going on; they just thought it was some magical program where you typed in the name of a song and it started playing – something radio lacks. No, I think DVD will be around a while. HD-DVD or Blu-Ray will help supplant it, but you’ll re-buy Star Wars on whichever format wins; you won’t re-buy your DVD’s of TV shows, since they won’t see improvements anyway.

People have too much of an attachment to physical items. You keep books on a shelf both to use and to display, and you do the same with DVD’s. The eBook didn’t replace books, and broadband won’t replace DVD’s. You don’t shove DVD’s into a binder and keep them in your car (unless you have a small apartment), but you do do that with music. In a way DVD’s can be said to have “Book Envy”, but that’s another post.

First, before you read the last part of this post, read the post under it dated December 22, 2005. I really did write that on December 22nd, but for some reason Blogger is having issues with the ISP I use to host this. The ISP and I have narrowed it down to a Blogger glitch, and I’ve notified Blogger about it, but to no avail (you get what you pay for, I suppose), so I finally got off my butt and worked around it.

Now go read the December 22, 2005 post and then come back.

Ironically the day I posted that (unsuccessfully) a Century 21 sign went up and the lights at the house have been on ever since. So I guess we’ll never meet the phantom California owners. Wendy figures it’s someone whose transfer to DFW fell through. She’s probably right.