Thursday, March 14, 2013

Keeping A Record

You have memories, right? Sure, we all do. And we have ways to augment our brain's limited ability to remember in fine detail everything we've ever experienced. We write things down, we snap photographs, we record movies. These records do three rather important things:

  1. Help us to remember things more accurately and for a longer period of time. Putting forth the conscious effort to write things down or otherwise record them seems to flag the events or facts as "Important" to our brains, and they are retained longer and better.
  2. Remind us when we do forget. When details become fuzzy, or lack of access unlinks a memory completely, a reminder can reconnect it in our minds, bringing back the memory and perhaps the feelings we have about it.
  3. Allow us to share. Your aunt Suzy wasn't at the beach with you that year for the 4th of July because she was at home with a severe case of dysentery, but that doesn't mean she had to miss out on the fun and drama of your cousin Sven sucker-punching a shark in the nose and rescuing little Maria, whom the tide pulled out a little too far while her whore of a mother was distracted hitting on some guy at the concession stand for two hours.
With the advent of the Internet, it is becoming increasingly easy to share data, videos, and imagery with people on the other end of the globe. In fact, too easy, as the constant stream of corporate and state security breaches reminds us every couple of days.

There are many things we want to record, and many reasons why we want to. But what am I struggling to record lately?

PC games.

Yep. Video games on my personal computer. I love my computer to death. It performs amazingly well at the things I use it for, and the only trouble is getting a record of the images it displays. Here, I lay out my struggles and leave an open ending about a new technique I hope to try soon.

Blast! I've spoiled the ending! Oh well, guess all I can do is proceed on as I had intended.

Why can't the computer just remember?

Probably the simplest way to record a computer screen is with screen recording software. Obvious, right? Well...

What I've had the most success with is AVS Screen Capture. It's part of a video editing suite I purchased some years ago and have had limited success with. I can't really recommend it for various reasons, but the screen capture software is quite decent. Simply, you select an area of the screen and hit the record button. When done, hit the stop button.

There are two major snags with AVS Screen Capture. The first is performance: a game that normally runs at a super-smooth 59.97 frames per second (and much faster with V-Sync disabled) tends to drop to around 15 or 20 FPS while recording, and the recorded video is actually closer to 8 FPS. The other major problem is that it doesn't seem to work well with fullscreen video. In Unreal Tournament 2004 and many other games, you MUST record the game running in a window or it will grab frames before they are done being rendered. Imagine driving a tank on a tropical island, with the various shrubbery, characters, items, vehicles, and even chunks of land all constantly and rapidly blinking in and out of existence.

This means you have to get your games running in a window at 1280x720 and ignore the crap outside of the window. Trust me, it's hard to get immersed in a game running in a window, particularly when the best you can do is a tiny 640x480 window on a 1440x900 display.

Also a minor problem is the file sizes AVS spits out. They're ENORMOUS. I recorded about eight minutes of Duke Nukem 3D, and the file weighed in at a whopping 9GB. And that was at 640x480, so just picture how massive any HD footage would get! Of course, these files can be compressed after the fact, using such utilities as the excellent Handbrake, but at some quality loss (though to drop the file down to 200MB, it was worth it).

"Why not use FRAPS?" queries one of my hypothetical hecklers. Actually a salient point, and while it probably performs better than AVS in terms of the performance hit I'd get while recording, it would still have a significant effect. Also, it costs money and I'm a cheapskate. I have tried the demo version and I found its options to be quite lacking, and its recording quality to be a bit lower than I wanted.

I have an HD-PVR that I use to record component video sources, and an easyCAP for recording composite and S-Video sources. These both work very well. Now if only I could...

Record from VGA?

Ah, wouldn't that be the Holy Grail! If only the devices for doing so cost less than $2,500. That's a little bit out of my price range.

If only I could just...

Use the HD-PVR?

Now THERE's an idea! Run the video feed through the HD-PVR and record it using that! All of the video transcoding happens inside the HD-PVR itself, so there'd be very little processing power required by the computer. Hell, I could probably record right onto the computer I was recording from! Plus, I have all the equipment I need!

Well, save one piece.

The HD-PVR takes component video, and that's not what my video card puts out. So now not only do I have to figure out how to get one of the video output options I have into the video input that I need, but I also have to explain what the hell the problem is.

Crash course in TV video connections

TV video connections are as such, generally from worst quality to best:

RF Coaxial: This was common in the 80's and carried on into the 90's, particularly with video game consoles and cable TV connections. Before HDTVs became popular, this was fine, but sending the audio and video over the same line led to a lot of noise, be it picture grain or literal noise on the speakers. The two signals, usually combined inside of a cheap, barely shielded box, would bleed into one another and cause such artifacts.

Composite (AKA RCA): How do you keep signals from interfering with each other? Split them into separate cables! Thus, we ended up with a cable for video, usually with a yellow plug, and two audio channels with their own cables, white and red. These are still pretty common and work decently well. Audio artifacts are almost completely eliminated, but the video isn't particularly crisp.

S-Video: In order to get a better picture, the video signal is split up into two parts, luma (the brightness) and chroma (the colors). The audio cables are the same, but instead of the big yellow plug we're all used to, the video connector looks more like the old-style keyboard or mouse plugs. If you look inside, you'll see four wires: luma, chroma, and a ground for each. Some S-Video cables have more pins, but these aren't standard from what I have been able to find out. Many game systems support S-Video, but I don't have any TVs that do. The signal is noticeably clearer in side-by-side comparisons (and you can check YouTube if you don't believe me) but not so amazing that the layman would likely notice without a direct comparison. S-Video can only do 480i.

Component: This style splits things up even further. The luma channel is still used, but chroma is split into two, giving more precise control and detail. With the three plugs for video and then two for audio, component uses a whopping five plugs. The tradeoff for this inconvenience is that it supports resolutions up to 1080p or 1080i, depending on the source. The HD-PVR I use records component video, and can take up to 1080i. It looks damn good, but there is one step higher.

HDMI: This is a newer video standard that is completely different from the rest in every respect. First, it is pure digital. It's a stream of 0s and 1s that defines the picture and sound on the screen in a single cable from source to display. Second, it is heavily copy protected. Theoretically, I could use component cables with my HD-PVR to record Netflix movies streaming on my PS3 at 1080i and then burn copies of them and sell them on my front lawn, committing grievous violations of copyright law in the process. I could go on for hours about how tired I am of Hollywood and how pants-pissingly scared they are of the supposed millions of dollars they lose to piracy every year and the ass-backwards means they take to prevent it, but I'll save that for another time. Suffice to say that HDMI provides perfect audio and video from source to screen, but cannot be practically recorded in full quality. It is totally impractical and a complete waste of time to discuss. Why the hell am I still talking about it?

Computer Video Types

Computers use a completely different set of video standards.

VGA: Video Graphics Array is the oldest one still in use, and in fairly wide use I must say. I'm using it right now as I write this! Usually a blue plug with 15 pins, it is simple to use. It is pure analog, so it is prone to some interference.

DVI: This video standard is... actually a mess of several different video standards. It has a bunch of pins arranged in a wide grid, plus one wide flat pin. Digital Visual Interface is a little bit of a misnomer when you consider DVI-I, the "integrated" style of connector. It supports analog video as well as digital, so an adapter to VGA is simple and cheap to make, allowing older screens to be used while allowing for newer displays as well. DVI-I has four pins around the big flat prong, making it easy to identify. DVI-D, on the other hand, is all digital and has no pins around the flat prong. Adapters to VGA are impractical and expensive, but they do exist. Since the signal is digital, it has to be converted to an analog signal; there's no way to simply rewire the pins to match VGA, so a full video conversion has to be done. If your DVI port has a full set of pins all the way across instead of two groups of pins with an empty space between, you have yourself a Dual-Link DVI port! The extra pins carry a second data link, which lets the port drive the really high resolutions (2560x1600 and the like) that a single link can't handle.

HDMI: Because the HDMI standard is so prevalent these days, many PC graphics cards include it as well. While it offers the same digital picture as DVI-D, it also carries sound, whereas VGA and DVI do not.

Can We Get To The Damn Point Already?

Fine, fine...

Interestingly enough, I actually have had some success connecting older PCs to my easyCAP using S-Video. The picture is okay, but not stellar.

What I really need to do is get an adapter from DVI-I to component. Seriously, I've been looking for solutions to this problem for a couple of years now and I just now discovered such an adapter exists. For the love of... and they aren't even that expensive!!

The only issue is that my new graphics card, a Gigabyte Radeon HD 7750, doesn't have the Dual-Link DVI-I connector that I need. At the time, HDMI/VGA/DVI-D sounded like a perfect combination. Damn it.

My old Sapphire Radeon HD 3850 has two Dual-Link DVI-I ports, so I guess I could use that to test it out, and if it works I'll shop for a newer card with the connector that I need.

And what am I doing all this for? So people can see how bad I suck in Wing Commander? Oh well, best not to think about it, plunge in headfirst.

Monday, June 25, 2012

Kaiken Lives, Sort Of

It's been a long time since I last touched Kaiken. Whether it's because I haven't been in a programming mood, didn't want to deal with all the corners I've coded myself into, or sheer laziness, I can't say.

Regardless, I've been working on it since last night and have actually made some impressive progress. Well, impressive to me and only because of the time I've spent coding nothing and the short amount of time I worked.

The biggest thing I fixed was the Box Collision detection code. I use it for detecting when an object is touching the "ground", or a platform they can stand on. I decided to just rewrite the entire function, this time for readability and not minimum line count. While I could now understand what the function was trying to do, it still wasn't working. I traced the problem back to the way I calculated objects' previous positions.

I had been "reversing" the player's momentum to figure out where they'd been last. However, since a collision with one block would change the momentum (to prevent them from smashing through a wall instead of being stopped by it), this became unreliable when touching two blocks simultaneously. The first detected collision would return "Hey, he landed on top! Better stop him from falling!" and the second would say "Well, he didn't FALL here, he has no downward momentum! He must have collided from the side!". This would cause the player to be stopped by blocks he was standing on as if he was running into them. Not cool.

I needed a better way to figure out the previous position of the player. What I needed was a variable that stored the previous position reliably. It would need to be updated automatically whenever the player moved, perhaps in the function that moved the player.

For a while, I pondered how to implement such a system. Then, I dug through the code only to find that I'd already been doing this, but my collision code wasn't using this information. Sometimes I feel like a dumbass. This instance went well beyond that.
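
The fix is basically this pattern. A minimal sketch, not Kaiken's actual code; the names (GameObject, prevPos, resolveCollision) are just stand-ins I'm using for illustration:

    struct Vec2 { float x, y; };

    struct GameObject {
        Vec2 pos;      // current position (top-left corner, y grows downward like SDL)
        Vec2 prevPos;  // where the object was before its last move
        Vec2 vel;      // momentum, in pixels per update
        float w, h;    // bounding box size
    };

    // The mover records the old position before applying momentum.
    void move(GameObject& obj) {
        obj.prevPos = obj.pos;
        obj.pos.x += obj.vel.x;
        obj.pos.y += obj.vel.y;
    }

    // The collision check decides which side was hit from where the object WAS,
    // not by reversing momentum (which an earlier collision may already have zeroed).
    void resolveCollision(GameObject& obj, const GameObject& block) {
        bool wasAbove = (obj.prevPos.y + obj.h) <= block.pos.y;
        if (wasAbove) {
            obj.pos.y = block.pos.y - obj.h;   // landed on top: stand on the block
            obj.vel.y = 0;
        } else if ((obj.prevPos.x + obj.w) <= block.pos.x) {
            obj.pos.x = block.pos.x - obj.w;   // came from the left: push back out
            obj.vel.x = 0;
        } else {
            obj.pos.x = block.pos.x + block.w; // came from the right
            obj.vel.x = 0;
        }
    }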

So the collision detection pretty much works, but when colliding against the left side of a block, there's a little weirdness. At this point, I don't care; I'll fix it later.

The next morning, I implemented a Goal Block. When touched, a congratulation message is displayed and a countdown begins. At the end of the countdown the level ends.

I can't believe how easy it was to fix this stuff. Hopefully after work I have the time and inclination to keep working on it.

Sunday, April 1, 2012

Lurid's One-Eyed Monster

After some fierce debate, I decided to point my alignment in the Law direction.

Demons work best when their alignment matches yours, so it was time to let my Pyro Jack sink to the bottom of the roster, rather than dominate the top. For too long I failed to realize how poorly he fit my play style at his current position in my depth chart. Since I'm a mage and gunner combo, all his shots did was block my own chances to shoot. We had to take turns, preventing each other from acting in a timely way.


Living in a post-apocalyptic world is no excuse to ignore good fashion sense! Hee-ho!!

As much as I'd miss our matching hats, it was time. He'd leveled quite a bit more than demons are intended to, and I needed something that better complemented my ranged combat style: I needed a strong, melee-based attacker. Someone who could act as my shield or quickly swat off incoming targets.

My first choice was Taraka, a huge woman with four sword-wielding arms. This worked out pretty well, lending powerful swings to finish wounded enemies and enough defense to shield my retreats (in theory, as far as you know...).


Also, she was a hit at parties. Bachelor parties in particular.

Taraka, however useful, was a pain to summon, costing about 1500 magnetite with each expulsion from my COMP. I needed a Fighter, but I needed one of the Law alignment.

Reluctantly, I fused my Taraka and got Ogre.

Immediately, I wanted to nickname him Shrek, trite as it would be. Then I noticed something strange about the layered demon:


Lawful Evil. Like Darth Vader, or the guy who's always trying to screw Native Americans out of their oil-rich land.

I know from my AD&D games that Good and Evil have nothing to do with Law and Chaos. But still, this is a seldom-seen alignment combination. Not something I expected, but hey, wait, what are you doing STOP THAT THIS INSTANT YOU HORRIBLE BEAST WHAT THE HELL IS THE MATTER WITH YOU!!?!?


Is "horny" subtle enough to be used in a double entendre? Probably not.

Okay, so the strange alignment wasn't the thing that really stuck out to me. But hey, at 62 magnetite per summon, I can't complain. As long as he doesn't turn that horn of his (either one) on me...

Friday, February 17, 2012

Bad Architecture 101

Finally, the poor design of Kaiken's internals is giving me headaches.

For a while, the sloppiness gave me no real troubles. But now, the various design flaws are starting to make progress difficult. My main goal tonight was a tough problem, one I should have saved for a day when I'd done enough coding to get back into the groove, one that could have waited until I did some study on the subject and plotted a reasonable course of action.

Instead, I dove headlong into frame-rate independent animation and movement. Up until now, Kaiken was strictly based on frames. Every frame, tell the level to update itself, which would, in turn, tell each object in that level (players, enemies, etc.) to update itself. These updates include such key things as changes in position or momentum, as well as checks for collisions. Everything was per-frame: every frame, tell each object to move based on its momentum. If the momentum was {5,10,0}, the object would move 5 pixels to the right and 10 pixels up before the next frame was rendered.

This simple design is easy to code, but it has flaws. The biggest is that not every computer runs at the same speed. While modern computers can chug through the entire update and render process in about 3 to 6 milliseconds, this shouldn't be counted on. If the system is under load, it may take longer, and when the game slows down, it literally slows down. Even under optimal circumstances, there are subtle variations in game speed.

The rate the game runs at should not have an impact on how fast objects move in the game. Even running at 20 frames per second, an object should cover the same distance as it would in an instance of the game running at 60. It will look choppier, but it should appear to be going the same speed.

The simple way of doing this transition (if one does exist) is to, rather than move the object by its momentum every update, move the object by its momentum multiplied by the time that has passed.  Find the number of milliseconds that have passed since the last update and make that a factor in how far the object will move in the current update.

A simple concept, truly, but it changed an entire portion of my shabby little engine, most notably the Physics class. This class is simply a bunch of functions that can be called to handle collision checking and movement. For instance, the move() function would take an object and add its momentum to its location, thus moving it to a new location based on how it was moving. The function now also accepts a number of milliseconds, which is factored into the calculation. No big deal.
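
In concrete terms, the change looks something like this. A rough sketch only, not Kaiken's real Physics class; the names are my own stand-ins, and momentum is now treated as pixels per second instead of pixels per frame:

    #include <SDL.h>

    struct Vec2 { float x, y; };
    struct GameObject { Vec2 pos, vel; };  // vel is now pixels per SECOND

    // Move by momentum scaled by the time that has passed since the last update.
    void move(GameObject& obj, Uint32 elapsedMs) {
        float dt = elapsedMs / 1000.0f;    // milliseconds -> seconds
        obj.pos.x += obj.vel.x * dt;       // the same momentum covers the same
        obj.pos.y += obj.vel.y * dt;       // distance at 20 FPS or at 60 FPS
    }

    // The main loop measures how long the last pass took and hands it down.
    void runLoop(/* Level& level, ... */) {
        bool running = true;
        Uint32 last = SDL_GetTicks();
        while (running) {
            Uint32 now = SDL_GetTicks();
            // level.update(now - last);   // every update() gets the elapsed milliseconds
            last = now;
            // render, handle input, set running = false on quit...
        }
    }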

Too bad the rest of the functions weren't this easy. Applying friction and gravity this way will require a complete re-write of these functions. Well, I guess that isn't completely honest. I've got gravity working pretty well, but friction isn't going so great, though I've got a couple of ideas up my sleeve.
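
Here's roughly what I mean, reusing the stand-in GameObject from the sketch above. The numbers are made up; the point is that gravity scales cleanly by the elapsed time, while friction wants something like an exponential decay rather than a flat per-frame subtraction:

    #include <cmath>

    const float GRAVITY  = 980.0f;   // pixels per second^2 (made-up value)
    const float FRICTION = 5.0f;     // higher = the object stops sooner (made-up)

    // Gravity just accelerates the downward momentum by the elapsed time.
    void applyGravity(GameObject& obj, float dt) {
        obj.vel.y += GRAVITY * dt;   // y grows downward, as in SDL
    }

    // A flat "subtract X from momentum each frame" breaks when frames aren't
    // uniform; decaying the momentum exponentially over time does not.
    void applyFriction(GameObject& obj, float dt) {
        obj.vel.x *= std::exp(-FRICTION * dt);
    }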

On the worst end is my collision checking code. It checks the momentum of the object in motion to guess what its previous position was. Because the momentum is now pixels per second, not pixels per frame, it's allowing things to get inside other things in a way nature never intended. The way I do physics will have to be completely different. Perhaps I might get it working by storing the object's previous position instead of trying to backtrack it with momentum.

Hopefully Kaiken will be more robust and clean by the end of this disaster. I'm not expecting that to be the case, however.

Sunday, February 12, 2012

Introducing Project Kaiken

I lied.

My concerted effort to learn Nintendo Entertainment System assembly programming lasted about twelve hours: long enough to get a development environment set up, burn through some tutorials I've done before, and write a blog post about how "gung ho" I was.

It wasn't long before reading those memory maps began to eat away at my enthusiasm. Pattern Tables and Attribute Tables again reared their ugly heads and beat my motivation to dust. I turned my back on NES programming and went back to making serious efforts on my initial game engine project, written in C++ and using SDL.

I won't beat around the bush: this game engine sucks. It has "purely academic exercise" written on every pixel it renders. That said, great improvements have been made on it.

I was reminded that STL data structures exist and many, many places have seen <vector> and <list> added to clean up my sloppy-ass data management. While not ideal for high-performance gaming, it's much better than what I have been doing. Memory leaks have all but disappeared by using these tried-and-true classes over my disgusting solutions. Multiple sound effects can be played at the same time, and music tracks can be changed.
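
For the curious, the cleanup mostly amounted to things like this. Purely illustrative; Kaiken's real classes are messier, and these names are just stand-ins:

    #include <cstddef>
    #include <vector>

    struct GameObject {                         // stand-in for players, enemies, etc.
        virtual void update(unsigned elapsedMs) {}
        virtual ~GameObject() {}
    };

    class Level {
        std::vector<GameObject*> objects;       // replaces a hand-managed array
    public:
        void add(GameObject* obj) { objects.push_back(obj); }

        void update(unsigned elapsedMs) {
            for (std::size_t i = 0; i < objects.size(); ++i)
                objects[i]->update(elapsedMs);  // players, enemies, etc. all update
        }

        ~Level() {
            for (std::size_t i = 0; i < objects.size(); ++i)
                delete objects[i];              // the vector frees its own storage;
        }                                       // we still delete what we allocated
    };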

The engine even became stable enough to run more types of demos. A title screen is now present, as well as a basic platformer.

Not everything is sunshine and daisies, however. Despite the code cleanups that have occurred and the new features that have been added, many key things are missing. In fact, calling it an "engine" at all is being quite generous. Everything is hard-coded except for the graphics data (images, fonts) and audio data (level music, sound effects). Not only is it impossible to add or adjust levels without recompiling the entire thing, but even simple things like controller configuration aren't possible. There is no simple way to do animation, levels have to be programmed 95% from scratch, and things such as gravity and AI are programmed into the levels, limiting code reuse and increasing programming difficulty.

Lots of things need to happen for the project to continue in a nice fashion. A short list would include an easy way to load a sprite sheet, define the animations that are included, and call for them to happen in-game. Not to mention that AI and gravity need to be broken out of the level's code and into their own for easier referencing (which is preferable to the copy/paste method). Some way of creating levels as external files would be great, but that's not a short-term goal right now.
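
To make the sprite sheet piece concrete, here's the kind of interface I'm imagining. None of this exists in Kaiken yet; the names and the SDL_LoadBMP loading are just placeholders for whatever I actually end up writing:

    #include <SDL.h>
    #include <map>
    #include <string>
    #include <vector>

    struct Animation {
        std::vector<SDL_Rect> frames;   // source rectangles within the sheet
        Uint32 msPerFrame;              // how long each frame stays on screen
    };

    class SpriteSheet {
        SDL_Surface* sheet;
        std::map<std::string, Animation> animations;
    public:
        SpriteSheet() : sheet(NULL) {}
        ~SpriteSheet() { if (sheet) SDL_FreeSurface(sheet); }

        bool load(const std::string& path) {
            sheet = SDL_LoadBMP(path.c_str());
            return sheet != NULL;
        }

        void define(const std::string& name, const Animation& anim) {
            animations[name] = anim;
        }

        // Pick the frame to draw based on how long the animation has been playing.
        const SDL_Rect* frameAt(const std::string& name, Uint32 elapsedMs) const {
            std::map<std::string, Animation>::const_iterator it = animations.find(name);
            if (it == animations.end() || it->second.frames.empty() || it->second.msPerFrame == 0)
                return NULL;
            const Animation& a = it->second;
            return &a.frames[(elapsedMs / a.msPerFrame) % a.frames.size()];
        }
    };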

Of course, once these things happen, then it would be nice to have a game concept to bring to life using the engine. I have a couple of rough ideas, but nothing solid or "fun" yet.

I'm not averse to going back to NES programming. Frankly, it's nice to have a final binary under 256 kilobytes that I can easily send to people. Kaiken is a fat-ass project by comparison, the executable binary weighing in at around a megabyte on its own; then there's the several megabytes of DLLs it depends on, and the megabytes of graphics and sounds.

I haven't worked on this project in a couple of months now. In fact, I haven't really been doing much programming at all during that time. It's a fun hobby I ought to get back into. Maybe something good will come out of it.

Thursday, October 27, 2011

Programming for the NES

"Why the hell would you want to do that?" I can hear someone say.

I always had that nagging feeling that it would be a waste of time. Every time I'd try to start learning (again), I would end up convincing myself SDL would be a better use of my time. Or Irrlicht. Or basically anything using C++ and not assembly for an ancient 8-bit processor. If my friends are going to run my programs (and presumably they may), then I'd be better off using something I'm more comfortable with, already know a good deal about, and that has much, much better capabilities.

To put it bluntly, screw that. More than once I've set out with SDL or Irrlicht and tried to make some kind of game engine. Every time, though, I end up with something that functions, but not well, and leaks memory. They're sloppily-written, and too large for me to go and clean up. Plus, I have two enormous books about writing game engines in C++ and they don't seem to be doing me a whole lot of good.

Will working with a limited machine help me to think in better, more efficient terms? Will I have a deeper appreciation for STL after having to do data structures with no help at all? Will I be a better game programmer and better programmer in general? God, I hope so. But the road is going to be bumpy; programming in assembly is no small ordeal.

But among those benefits that may come, there are some that I will certainly enjoy. Small file sizes are nice: my typical SDL programs, including graphics and the extra libraries to run them, are rarely under 2MB. Not a big deal most of the time, but I don't have web hosting that can handle that kind of thing currently. Distributing one single file, instead of a ZIP file plus an explanation of how to run it, would be much better for my friends when I have them try out my work. Finally, there are NES emulators everywhere. We all have them on our PCs, I have one on my Wii, my Dreamcast, my PSP, my DS... The fact that the NES is a limited and old hardware platform actually becomes an advantage when you consider how easily it can be emulated by other platforms.

So I've decided, again: I'm going to take another stab at it. I have a bunch of documents printed and laying around from my first try; I may as well use them.

Now, if only I can get these damn pattern tables and attribute tables figured out. I'm sure I'm just over-thinking them, but they've always been the stopping point when I've tried to learn NES programming. I fake it for a while, going along with the documents, but I never actually understand them well enough.

Here's to hoping I'll actually put forth a decent effort and that the effort will get me to a place I can be proud of. I've already started getting my build environment set up and began working through a tutorial. I don't have anything worth sharing yet, but I'm sure I'll be blogging some more when I have something to bitch about during my lessons!

Saturday, September 17, 2011

Farfetched Dreams for the Wii and WiiU

One of the main attractions that led me to buy Nintendo's Wii console was the Virtual Console games. Through the Shop Channel service, one can purchase and download games from older systems. Being interested in classic video games, I saw it as a way to obtain games that were difficult to find or otherwise troublesome to track down.

There's one major problem I have with the service, however: the enormous stack of games I already have for these classic game systems. If I wanted to play them on the Wii, I would have to purchase them through the Shop Channel, and many of them are not even available.

The Wii console is equipped with two USB ports. This gave me an idea: the GameCube had an add-on called the Game Boy Player. This device attached to the bottom of the GameCube and allowed one to play Game Boy and Game Boy Advance games on the GameCube. Why not have similar devices for the Wii? These would attach to the console via USB, and there would be one for each type of console the Wii could emulate (or perhaps just ones for Nintendo's own consoles).

This would make for an excellent way for me to play my NES and SNES games. Right now I am playing them on a cheap knock-off console, for which Nintendo has seen no profit. Why not sell me devices that will cut down on clutter and allow me to put money in the pockets of the company that invented the original consoles?

I know that thousands of others and I would love to have the opportunity to own such devices. But there are significant reasons that will prevent this, and I'm all too aware of them.

First of all, USB is a widely-used standard. Hackers (non-malicious ones, mostly) would find ways to hack the devices or create drivers that allow one to dump the contents of their game cartridges to their computer and play the games there. Nintendo would be way too afraid that these people would distribute these games over the Internet. This is quite foolish, however, considering that all of these games are already out there and fairly easy to obtain. The cat's already out of the bag.

The second reason is the engineering costs. These probably would not be offset by the small percentage of Wii owners who are interested in this prospect. I could be wrong, but I feel as though I am in a quite small group of individuals that would put down money on something like this. Sure, there are probably thousands of us, but there are millions of Wii owners, and I doubt (and I am sure Nintendo would agree) that they would have an easy time making their money back.

Third, they would much rather you bought the games on their Shop Channel instead. The engineering for that is already done and whatever they rake in now is almost pure profit.

Finally, there's no way they'd want to do tech support on something like that. If you are old enough to remember owning an NES, you probably remember spending more time trying to get your games running than you did playing them. This is less true with the SNES and N64, but with the cost of engineering and production, the last thing Nintendo would need is to tack on the cost of helping people get them working over the phone.

So, this idea is just a crazy pipe dream (Mario pun partially intended). But what about the WiiU?

The WiiU implements a new, touch-sensitive controller/tablet. Using that and the television, it is not out of the question for the WiiU to emulate Nintendo DS games the same way the Game Boy Player handled handheld games. In my mind, it is possible to create a device that accepts a Nintendo DS or 3DS card and connects to the WiiU via USB.

While I'm sure such a device is well within the realm of possibility, I am really not sure that Nintendo would ever create it. The fear of making piracy easier and the costs of creating the device will likely kill such a project before it even gets started. I must admit, however, that I would love to own one.