All right, let’s talk about Portal. There’s no need to talk about the gameplay or the story or the dialog, because everyone agrees that all are stellar. So we’re going to talk about something else instead.
As an extra for the Orange Box, Portal is a fantastic value. But if you want to buy it separately, it’ll cost you…$19.95.
Is Portal worth that much? It’s a game that is as refined as any other out there, with fantastic production values. It took at least a year to make. But it’s only three hours long. Are those three hours worth $19.95?
Personally I would say hell yes, because I have beaten the game about five times, done all the advanced maps and about half the challenge maps and thus I’ve played Portal for over twenty hours. But not everybody is going to do that, even if they loved the game the first time through.
What’s Portal worth?
And another thought. Everybody loves Portal. I haven’t met a single person who has played it who didn’t think it was fantastic.
Can a game that is only three hours long win Game of the Year awards? Again, personally I’d say yes. Portal is by far the best game I’ve played this year and I don’t expect to play a better one between now and January.
But will professional gaming websites be willing to man up and say, “Yes it’s short, but the quality of what’s there is so great that it deserves a Game of the Year award”? Or will they cop out and give it easy awards like Best Writing, Best Voice Acting or Most Innovative?
I know, the site hasn’t been updated in over a week. And I was supposed to run a contest last week. All I can say is that we’re down to less than two weeks on Sims Castaway Stories and that’s taking up all my time. That also means that I didn’t get to go to the lecture last night, thus there is no recap. I apologize.
And I just had to wade through seven hundred spam comments.
This project will wrap up (or at least slack off) soon, though, and things will get back to normal. Just in time for me to start working on the holiday dinners!
Last night I went to the second session of Warren Spector’s series of lectures on game design. The speaker was Mike Morhaime, co-founder and current president of Blizzard Entertainment.
Mike’s kind of a nervous type. Frankly I just wanted to go up there and shake him and say, “Mike! Come on, man! You’re a millionaire! You run the premier PC game development studio in the world! You were on South Park! What could you possibly have left to be nervous about?” But I get the feeling that it’s just his temperament. Unfortunately it does impact his public speaking ability…as does the fact that he’s got to be very, very careful about how he answers questions.
Since Mike started as a programmer and is now a business guy, his talk wasn’t about game design per se, but more about running a successful game studio. And his main thrust was, “Don’t ever betray your principles. Ever. For any reason. If it’s not great, don’t ship it.” He talked about “brand withdrawals”, which is when a company effectively betrays its user base in some way to make some quick cash. Needless to say, he was against doing so for any reason ever.
He also talked a lot about “opportunity cost” and the projects Blizzard canceled over the years. In every case, the game in question could have been brought up to Blizzard’s standards and shipped, but the amount of work to do so could have been applied more effectively to another game Blizzard was already working on. Shipping Warcraft Adventures would have been a double disaster not only because nobody was buying adventure games at the time, but also because all the work put into finishing it would have been much better applied to Starcraft.
He talked about the South Park/Warcraft episode. South Park episodes are developed very quickly and in a fairly haphazard fashion, which is diametrically opposed to how Blizzard does things. So they basically had to dispatch a team to help Matt and Trey get the in-game footage they wanted and then just trust that the episode would come out okay. Which it did 🙂
He talked about the movie. They want the movie bad, and they think it can be done right. They are teamed with Legendary Pictures, the same people who did Batman Begins, Superman Returns and 300. Now, I’m going to interject something here. Carmack famously once said that story in a game is like story in a porn movie – you expect it to be there, but it’s not very important. Most game developers have relied upon the interactivity of their medium to gloss over deficiencies in their storytelling, and that’s why most video game movies suck. The movies suck because the stories suck. Warcraft’s story doesn’t suck. It’s big, it’s complete and it’s incredibly detailed. Frankly, they could make a trilogy of movies out of it. If the Warcraft movie sucks, it won’t be because of the story.
He also talked about Blizzard’s popularity in Korea, and it became clear to me that they didn’t just luck out there. Gaming is huge in Korea. How huge? Well, there are about 20,000 “game rooms” in Korea. To put that in perspective, there are about 30,000 McDonald’s in the whole world. When people first started creating game rooms, they didn’t have the best hardware. They needed a game that was easy to start up, easy to get into, had network play, was very fun, and ran on older hardware. Starcraft fit that bill perfectly. If Blizzard had cut any corners on that game – if they had betrayed their principles in any way – it wouldn’t have been chosen as the standard “game room” game and Blizzard would have missed out on that huge market. Oddly enough, the original Starcraft was never localized into Korean; the Koreans just play the English version.
Of course, as Mike talked about the history of Blizzard, it became clear that at no point has Blizzard ever had to put up with publisher pressure. After Warcraft II shipped they were basically untouchable even though they are publicly owned (by Vivendi, at this point). And since Blizzard is the only gaming company Mike has ever worked at, he didn’t really have anything useful to say when asked how to prevent publishers from forcing you to betray your core values.
That aside, it was still a very interesting evening. Next week’s guest will be some guy named Richard Garriott. I’m not sure if I’ve ever heard of him before…
Today we are kicking it really, really old-school.
Name and developer, please! Plus, a few years after this version was made, the same company (under a new name) remade it to be more action-oriented and released it under a different name. Bonus points if you can name that game too!
Your reward will be…hmmm…
Well, now I’m thinking. When I bought The Orange Box I got a free gift certificate for Half-Life 2 and Episode 1 (since I already owned them) and I’ve been wondering what to do with them. It wouldn’t be fair to give them away on this Name That Game…but maybe next week I’ll do a super-special one and award them as a real prize! So I guess your reward today is that you know I’m going to do this next week!
Warren Spector is hosting a series of master classes in game design at the University of Texas here in Austin.
Despite very short notice and a near-total lack of funds, I managed to squeak in. The first session was Monday night and it was with Marc LeBlanc, who is most famous for his work on the classic Blue Sky/Looking Glass games (Ultima Underworld 1 and 2, System Shock and Thief 1 and 2) and his more recent game, Oasis.
The session took place in a studio in the CMB building on the UT campus and was professionally recorded. Doubtless all the sessions will be available in some fashion after the series is over, but, having never had the opportunity to go to the GDC or any other game conference, I am very grateful for the chance to see them live.
When I got there I was surprised – for one thing, the studio wasn’t full to bursting, and for another, most of the people there were fresh-faced college students rather than the slew of industry grognards I was expecting. I found myself wondering if these kids even knew who Marc was…
The format was one I hadn’t seen before. Warren interviewed Marc for about an hour on Marc’s work history, then after a brief break Marc presented a lecture on his core design philosophies. Then Warren interviewed him again, this time asking Marc about specific games he had worked on or contributed to. The whole thing lasted about three hours and I was fascinated the whole time.
Now, I have to give Warren his props. I’d seen videos of him presenting at the GDC and he was very good there, but he also turns out to be an excellent interviewer.
But listening to Marc was a mind-expanding experience. This guy knows his stuff. You can get the gist of it by going to his blog and reading about the Eight Kinds of Fun and Mechanics, Dynamics and Aesthetics, but the real meat of his talk was how he actually applied those precepts to the design of Oasis. You can get the slides for that talk at his site as well, but it was much better live (and the ability to interact was key).
And now I’m just going to throw out random things that I remember from the talk in no particular order.
Blue Sky/Looking Glass actually started as a group of MIT students; one of them had an uncle, Paul Neurath, who was working at Origin and wanted to start his own company.
One of the really odd parallels between Blue Sky and Id Software is that at both studios all the developers started off living and working together in the same house – the Blue Sky house eventually had ten employees living in it. This both facilitated the work and kept initial production costs way down.
Warren said that when he first came to the Blue Sky house (to produce Ultima Underworld) the guys there wouldn’t talk to him until he got his laptop on the network and named it. Apparently, having a machine that you could name yourself was a big status symbol at MIT, and the idea that you weren’t “somebody” until your computer had a name carried over to Blue Sky. Warren said he named his computer “Elmer PHD” and that he uses that as his online tag now.
Warren said that Marc has the ability to play your game for a short time and tell you exactly what’s wrong with it and give you a whole bunch of ideas for improvement. How I wish I could have him play Planitia…
Marc finally left Blue Sky during the development of Terra Nova after he got into an argument with Dan Schmidt, the director, over a feature Marc didn’t want to implement.
Marc said that he liked the fact that his involvement with System Shock 2 was purely technical and didn’t have anything to do with the design because he could then actually play and enjoy the game!
Marc is very big on programmer/designers. He said that if you want to work at Mind Control Software, you can expect to get grilled on game design even if you’re interviewing for an art position. Warren chimed in and said that they do the same thing at Junction Point. Marc also mentioned that at Valve, there are no game designers – they have “gameplay programmers” instead. This neatly coincides with my two favorite game postmortems.
After it was all over I went over, shook his hand and thanked him for the Looking Glass stuff. He said, “Hey, I was just on the team.” I said, “Well, you’re the member of the team who is here, so I’m thanking you.” He didn’t seem to mind that.
Frankly I think the whole thing was good enough to put on TV, and I’m hoping that’s where it will end up. Looking forward to next Monday’s session, which will be with Mike Morhaime, one of the founders of Blizzard.
I recently tried the demo for Age of Empires III: Asian Dynasties. I really enjoyed…most of it. The setting is far more interesting in my opinion than that of the Americas and holy cow it’s the prettiest Age game ever by a mile.
But in the end, it shares the design flaw that I think prevented Age of Empires III from replicating the success of its predecessors.
In the beginning, there was Age of Empires. Age of Empires had three basic unit types: archers, infantry, and cavalry. These are arranged in classic “rock-paper-scissors” format. Archers beat the slow infantrymen (as long as they get to attack at range). Cavalry beat archers because they can close quickly. And infantrymen beat cavalry…for some reason. Classic design, easy to understand.
Age of Empires II added a whole bunch of new units but in the end didn’t mess with the basic formula too much. Most of the new units were simply better archers, infantry and cavalry and could be used in the same way.
With Age of Mythology, Ensemble decided it was time to start mixing up the design. They introduced three new classes of units – normal human units, heroes, and mythological units. These three classes are also arranged in the rock-paper-scissors wheel – humans beat heroes beat myth units beat humans. But each class also has archers, infantry and cavalry within them; thus human archers are really, really good at beating hero infantry because archers beat infantry and humans beat heroes. This wasn’t…too bad, but I did feel that the design was starting to get out of hand.
Age of Mythology also introduced the idea of counter-units. These are units that are only good against the same type of unit – that is, archers that are only good against other archers, infantry that are only good against other infantry, etc. Thus, you don’t have to remember what beats what if you’re using counter-units – you counter with the same unit you’re being attacked with. Not a terrible idea, but the only counter-units in the game were humans; it was still up to you to remember how the hero and myth wheels worked. So it probably just ended up confusing players even more.
And then in Age of Empires III they messed it up completely by expanding the wheel to five unit types – archers, infantry, hand cavalry, archer cavalry, and artillery. With three unit types there are exactly three interactions: archers beat infantry beat cavalry beat archers. With five there are now ten interactions: infantry beats hand cavalry beats artillery beats archers beats archer cavalry beats artillery beats infantry beats archer cavalry beats hand cavalry beats archers beats infantry.
Yeah, I think that last sentence sums up Age III’s design flaw perfectly. The interactions are now too big for most people to hold in their heads any more. Age III is a perfect example of designers on the latest iteration of a long-running series adding features just to make the current version different from its predecessors without thinking about how well those features work as a game. Why do they do this? Well, I think it’s mostly the fault of reviewers. I may have mentioned this before, but I was appalled at the reviews Dungeon Keeper 2 got; over and over I heard reviewers say, “It’s just Dungeon Keeper with a fully 3D engine, some minor design tweaks to fix problems, and some new units and room types.” Uh, yeah. That’s why it was one of the best games of 1999 in my opinion – it was an already great game made even better by improving the base design and not betraying it with lots of unnecessary changes. But if reviewers don’t see enough new stuff…
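To see just how much there is to hold in your head, here’s that ten-interaction sentence written out as a lookup table. This is purely an illustrative sketch (names and structure are mine, transcribed from the sentence above, not from the actual game data):

```cpp
#include <set>
#include <string>
#include <utility>

// The Age III counter wheel from the sentence above as a lookup table.
// With three unit types there are 3 pairings to remember; with five, 10.
using Matchup = std::pair<std::string, std::string>; // (winner, loser)

static const std::set<Matchup> kAge3Counters = {
    {"infantry", "hand cavalry"},    {"hand cavalry", "artillery"},
    {"artillery", "archers"},        {"archers", "archer cavalry"},
    {"archer cavalry", "artillery"}, {"artillery", "infantry"},
    {"infantry", "archer cavalry"},  {"archer cavalry", "hand cavalry"},
    {"hand cavalry", "archers"},     {"archers", "infantry"},
};

// Returns true if 'attacker' counters 'defender' in this sketch of the wheel.
bool Counters(const std::string& attacker, const std::string& defender)
{
    return kAge3Counters.count({attacker, defender}) > 0;
}
```

Ten entries, and no pattern you can reconstruct on the fly – you either memorize the table or you lose fights.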
When designers write a sequel to a game, their goal should be to supersede the original. Once the sequel comes out, players should have no desire to go back to the previous version.
Or, what I learned writing Planitia and didn’t learn from Frank Luna’s book.
This article will be of most use to programmers who have run through some Direct3D tutorials and know how to draw shapes on the screen but haven’t done any serious Direct3D coding yet. If you’ve read and done the exercises in Introduction to 3D Game Programming with DirectX 9.0 then you should be fine. I’m going to be using my game Planitia as my example, since it is by far the most complex Direct3D program I’ve ever written.
First, let’s talk about what was actually necessary for Planitia.
Planitia is a 3D real-time-strategy game, played from a 3/4 perspective. The terrain of the game world is a heightfield and a second heightfield is used to represent water. Units are presented as billboarded sprites (simply because I had no animated models I could use). Other game objects like the meteor are true meshes. So the Planitia engine needed to be able to render all of these at a minimum.
Planitia’s design presented some interesting challenges because the terrain of the entire map is deformable. The player (as a god) can raise and lower terrain to make it more suitable for villagers to live on. Earthquakes and volcanoes can also deform the terrain at just about any moment of play. Thus, it was necessary for the game to constantly check to see if the game world had significantly changed and regenerate the Direct3D data if it had.
Since this was my first Direct3D project, I deliberately limited the number of technologies that I was going to use. I decided that I would not use any vertex or pixel shaders since I didn’t want to start learning them until I felt I was familiar enough with fixed-function Direct3D. I also wanted to make the game friendly to older hardware and laptops.
To this end, I don’t do a lot of capability checks when I initialize Direct3D. But one check that I did find useful was the check for hardware vertex processing. If that capability check fails, it’s a pretty good indicator of older or laptop hardware, and I actually make some changes to how the terrain is rendered based on it (which I will detail in a bit).
Vertex Structure and FVF
My vertex structure is as follows:
```cpp
struct Vertex
{
    Vertex(float x, float y, float z,
           DWORD color, float u, float v, float u2 = 0, float v2 = 0);

    float _x, _y, _z;
    DWORD _color;
    float _u, _v;
    float _u2, _v2;
};

// Matching FVF: position, diffuse color, and two sets of texture coordinates.
const DWORD VERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX2;
```
Notice that there are no normals. I’m using baked lighting for Planitia (as described in Frank Luna’s book – indeed, I used his code) and thus normals aren’t necessary. I am using two sets of UV coordinates because I “paint” various effects on top of the normal grass for the terrain (more on that in a minute).
Division of Labor – Creating the Index and Vertex Buffers
Okay, so what exactly is a Planitia map?
A Planitia map consists of a 64×64 grid of terrain cells. Thus, it must be drawn with 65×65 vertices. Each map has a heightfield of 65×65 values, as well as a 64×64 array of “terrain types”. Terrain types are identifiers I created that basically record what kind of terrain is in the cell. Values in the heightfield range from 0 to 8. If all four corners of a cell have a height of .2 or less, that cell is underwater and has terrain type TT_WATER. If at least one corner is at .2 or less but the others are higher, then the terrain type is TT_BEACH. Otherwise the terrain cell is TT_GRASS. Other terrain types like lava, flowers, ruined land and swamps are drawn over grass terrain and have their own terrain types.
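That classification boils down to counting submerged corners. A sketch of it in code (the names are mine, and the painted-over types like lava are ignored here since they’re assigned by gameplay, not by height):

```cpp
// Hypothetical sketch of the per-cell terrain classification described above.
// Heights run 0..8; corners at or below the .2 waterline make a cell
// water (all four corners) or beach (some but not all).
enum TerrainType { TT_WATER, TT_BEACH, TT_GRASS };

TerrainType ClassifyCell(float h00, float h10, float h01, float h11)
{
    const float kWaterline = 0.2f;
    int submerged = (h00 <= kWaterline) + (h10 <= kWaterline) +
                    (h01 <= kWaterline) + (h11 <= kWaterline);
    if (submerged == 4) return TT_WATER;
    if (submerged > 0)  return TT_BEACH;
    return TT_GRASS; // lava, flowers, etc. are painted over grass elsewhere
}
```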
And here’s my first fast/slow split. If I detect that hardware vertex processing is available, then each cell consists of five vertices – one each for the corners and one for the center. Drawing a terrain cell requires drawing four triangles.
If hardware vertex processing is not available, then I only use four vertices for each cell and only draw two triangles.
I set the UV coordinates across the entire terrain to the X/Y position of the vertex in question. Thus the UV coordinates of vertex (0, 0) are (0, 0), the UV coordinates of (0, 1) are (0, 1), etc. This allows textures to tile properly while also giving me access to a few tricks (which I will get to in a minute). You’ll notice that this means that I’m not specifying what texture to draw with the UV coordinates – I don’t have all my terrain textures packed into one big texture atlas. That’s a good technique but I couldn’t use it for Planitia.
The diffuse color of each vertex actually stores two different sets of information. The RGB values are combined with the grass texture based on the lighting for that particular cell (again, using the pre-baked lighting code from Frank Luna’s book, page 224). The alpha value isn’t used for lighting. It’s actually used to create the beach effect, where sand blends evenly into grass. There’s more information on how this works in the Rendering section.
I actually create eight vertex buffers – one for each terrain type. Each vertex buffer contains data about the geometry of the terrain mesh and the shading of the terrain, but doesn’t contain any data about what texture to draw or how the vertices form into triangles.
Once the vertex buffers are done, I create index buffers to sort the vertices into triangles. Again, there’s an index buffer for every terrain type. And again, if hardware vertex processing is supported I create four triangles per quad; otherwise I only create two…but I use a technique called triangle flipping.
Here’s how it works: for every cell in the terrain that you create, you test its upper-left corner against the upper-left corner of three other cells – the one diagonally up-left from the target cell, the one to the left of the target cell, and the one above the target cell.
If the height difference between the target cell’s corner and the corner diagonally up-left is greater than the difference between the corners of the cell to the left and the cell above, we flip the cell by specifying a different set of vertices to draw than the standard.
If you didn’t completely understand that, that’s okay. Here’s the code.
If triFlip is false, we create the triangles normally.
If the test is true, we create the triangles like this instead:
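The index generation can be sketched like this. The names are mine, and the flip test here compares the quad’s two diagonals directly, which is a common variant of the neighboring-corner comparison described above:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of two-triangle index generation with triangle
// flipping. TL/TR/BL/BR are the vertex indices of the quad's corners.
void AppendQuadIndices(const float* heights, int gridSize,
                       int row, int col, std::vector<int>& indices)
{
    int TL = row * gridSize + col;
    int TR = TL + 1;
    int BL = TL + gridSize;
    int BR = BL + 1;

    // Flip when the TL-BR diagonal spans a bigger height change than TR-BL.
    bool triFlip = std::fabs(heights[TL] - heights[BR]) >
                   std::fabs(heights[TR] - heights[BL]);

    if (!triFlip)
    {
        // Standard split along the TL-BR diagonal.
        indices.insert(indices.end(), {TL, TR, BR, TL, BR, BL});
    }
    else
    {
        // Flipped split along the TR-BL diagonal.
        indices.insert(indices.end(), {TL, TR, BL, TR, BR, BL});
    }
}
```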
The results are pretty impressive. Here’s Planitia with two-quads-per-triangle without triangle flipping:
Notice all the jagged edges. When we use triangle flipping, they go away:
That’s much better – it gets rid of the spikes – but now we’ve got lots of straight lines and the coast looks a bit boring. Using a center point on our quads looks even better:
Now it looks smooth and interesting. Which is why I do that when the hardware supports it.
Drawing The Scene
All right, the vertex and index buffers are created and it’s time to actually draw the terrain. Here’s the procedure I use.
The first thing I do is to turn alpha blending off. Then I draw all eight of my vertex buffers. I set the texture to be drawn based on the terrain type I am drawing (this is why data about what texture to draw isn’t stored anywhere in the vertex or index buffers). If the terrain type is “water” or “beach”, I set the sand texture and draw it. If it’s anything else, I set the grass texture and draw it. The result:
Time to do some blending. I turn alpha blending back on and set the grass texture as the active texture, and then I redraw the vertex buffer for the beach. Since blending is on, the grass is drawn fading out over the sand, resulting in a sand-to-grass effect. Now it looks like this:
This technique is called multipass multitexturing. Instead of using multiple sets of UV coordinates and setting multiple textures at once, you draw the same geometry twice with different textures. The upside of this is that it’s easy to do and very hardware-compatible. The downside is that you are drawing more polygons than you technically need to, but if you’ve got a good visibility test (which we’ll get to in a minute) it shouldn’t be a problem.
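A minimal sketch of the two passes in Direct3D 9 – the function and parameter names are mine, and it assumes the stream source, FVF and index buffer are already set (it needs the DirectX 9 SDK to build, so treat it as illustration, not Planitia’s actual code):

```cpp
#include <d3d9.h>

// Sketch of the multipass step described above: the beach geometry is drawn
// once with sand, then redrawn with grass and alpha blending so the grass
// fades out over the sand via the per-vertex alpha.
void DrawBeach(IDirect3DDevice9* device,
               IDirect3DTexture9* sand, IDirect3DTexture9* grass,
               UINT numVertices, UINT numTriangles)
{
    // Pass 1: opaque sand.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
    device->SetTexture(0, sand);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 numVertices, 0, numTriangles);

    // Pass 2: same geometry again with grass, blended by per-vertex alpha.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    device->SetTexture(0, grass);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 numVertices, 0, numTriangles);
}
```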
This is the one thing in Planitia that I’m proudest of (well, along with the water).
The other terrain types – lava, flowers, ruined land and swamp – are all drawn over grass and are masked so that the grass shows through. This is why I already drew these once with grass set as the active texture. But I’m using an additional trick here. These textures won’t get their alpha information from the vertices and they don’t have any alpha information of their own. They get their alpha information from another set of textures altogether.
You see, practically any grass terrain cell can be turned into any of the other four types at practically any time during the game. If I simply draw the cell with the new texture, I get big chunks of new terrain on top of the grass:
I can alter the textures so that they fade out at the edges, but that still gets me soft tiles of terrain lined up in neat columns and rows.
What I really needed was for tiles that were next to each other to sort of glom together…and be able to do so no matter how they were configured.
And then I remembered that I’d seen this problem already solved in Ultima VI! The slimes in that game would divide if you hurt them without killing them, but instead of making smaller slimes they’d make one big mass of connected slime. So I grabbed the Ultima VI tiles to take a look at how the Origin guys had done it.
Turns out that they had done it by disallowing diagonal connections, thus reducing the number of connection possibilities from 256 to 16, and then they had drawn custom tiles for each connection permutation. This would still look better than either of the previous two solutions.
So I fired up Photoshop and created an alpha mask texture based on the slime texture.
The thing was…I didn’t just want to burn this filter onto each of my terrain type textures, for a few reasons. First, it would make the terrain type textures very specialized. Second, I’d have to make them much bigger to handle the sixteen permutations. And third, it would mean I wouldn’t be able to make my lava move by altering its UV coordinates (more on that in a second).
So what I needed to do was to set two textures – the mask texture and whatever texture I was drawing with. I needed to tell Direct3D to take the alpha information from the mask texture and the color information from the other texture.
I’ve tried to keep this article code-light, but this was tricksy enough that I want to go ahead and post the complete code. So here it is!
First we set our lava texture to be texture 0 and our masking texture to be texture 1.
In the first texture stage state, we select both our alpha value and our color value to come from texture 0 (the lava texture). Note that I am modulating the color value with a texture factor – I’ll talk a bit more about that in a minute.
This means that the lava is always drawn fullbright and isn’t affected by the baked-in lighting. This makes the lava seem to glow with its own light.
I enhanced this effect by using the texture factor. This is simply an arbitrary number that you can set and then multiply the texture color by. I alter it on a per-frame basis to make the lava brighten and darken, thus looking like it’s glowing. Again, this is simply a render state that you set.
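A texture-stage setup along those lines might look like this sketch – the variable names are mine and the exact states in Planitia may differ, but it follows the description above: stage 0 takes color and alpha from the lava texture (color modulated by the texture factor, so vertex lighting is ignored and the lava draws fullbright), and stage 1 replaces the alpha with the mask texture’s:

```cpp
#include <d3d9.h>

// Sketch of the texture stage setup described above (DirectX 9 SDK required).
void SetLavaMaskStages(IDirect3DDevice9* device,
                       IDirect3DTexture9* lavaTexture,
                       IDirect3DTexture9* maskTexture,
                       DWORD textureFactor)
{
    device->SetTexture(0, lavaTexture);
    device->SetTexture(1, maskTexture);

    // Stage 0: color = lava texture * texture factor; alpha from the lava.
    // No D3DTA_DIFFUSE here, so the baked-in lighting doesn't affect lava.
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TFACTOR);
    device->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);

    // Stage 1: keep stage 0's color, take alpha from the mask texture.
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
    device->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    device->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_DISABLE);

    // The per-frame brighten/darken value for the glow effect.
    device->SetRenderState(D3DRS_TEXTUREFACTOR, textureFactor);
}
```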
And finally I use a UV transformation to offset the lava’s UV coordinates over time, causing the lava to look like it’s flowing. A UV transform is just what it sounds like – it’s a matrix that the UV coordinates are multiplied by before they are applied.
Now, warning warning danger Will Robinson. Whenever a Direct3D programmer starts using this feature for the first time they almost always get confused. They typically try (just like I did) to create a transformation matrix using D3DXMatrixTransformation() or D3DXMatrixTransformation2D() and they end up (just like I did) with a very strange problem – for some reason, scaling and rotation seem to work just fine but translation does not.
That’s because the UV transformation matrix is a two-dimensional transformation matrix and two-dimensional transformation matrices are 3×3 matrices, not 4×4. The scaling and rotation numbers are in the same place in both, but the translation information is on line 3 of the 3×3 instead of on line 4 like the 4×4. This is why scaling and rotation work but translation does not. Put your translation values into the _31 and _32 values in your D3DXMATRIX structure and it’ll work fine.
(Now you may be asking, “Why doesn’t D3DXMatrixTransformation2D() produce a 3×3 matrix?” Good question. I have no idea why, but it doesn’t.)
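In code, the fix looks something like this sketch (names and scroll rates are mine; DirectX 9 SDK required):

```cpp
#include <d3d9.h>
#include <d3dx9.h>

// Sketch of scrolling the lava's UVs over time. The texture transform is
// treated as a 3x3 matrix, so translation goes in _31/_32, not _41/_42.
void ScrollLavaUVs(IDirect3DDevice9* device, float elapsedSeconds)
{
    D3DXMATRIX uvXform;
    D3DXMatrixIdentity(&uvXform);
    uvXform._31 = elapsedSeconds * 0.10f; // U offset
    uvXform._32 = elapsedSeconds * 0.05f; // V offset

    device->SetTransform(D3DTS_TEXTURE0, &uvXform);
    device->SetTextureStageState(0, D3DTSS_TEXTURETRANSFORMFLAGS,
                                 D3DTTFF_COUNT2); // 2D texture coordinates
}
```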
Here’s the result:
All of these little tricks were suggested to me by Ryan Clark. Except the alpha masking, which is the one thing I came up with on my own which is why I’m ridiculously proud of it.
A Good Raypicker Is A 3D Programmer’s Best Friend
You can’t really write a 3D game without a raypicker, and this is where I’m going to ding Frank Luna a few points. While he does present the concept behind raypicking and some of the math behind it, in the end he cops out and does a line/sphere test once the ray has been transformed into world space. This is accurate enough for picking objects in a 3D world, but it’s not accurate enough to pick polygons within an object, and that’s exactly what I needed. I needed to be able to tell exactly in what triangle (or at least what cell) on the terrain the user clicked.
I made some manual improvements to the raypicker but it never seemed great. So I used a little google-fu and came up with…well, it’s pretty much the perfect ray-triangle intersection test. C source is included, which I was able to drop into my code pretty much unaltered, and I was amazed at how much better it worked without any discernible performance hit. Get it, use it, love it.
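For reference, the best-known fast ray/triangle test with freely available C source is the Möller–Trumbore algorithm; here is a self-contained sketch in that style (the names are mine, not the original source’s):

```cpp
#include <cmath>

// Sketch of a Moller-Trumbore style ray/triangle intersection test.
// Returns true on a hit and writes the distance along the ray to *t.
bool RayTriangle(const float orig[3], const float dir[3],
                 const float v0[3], const float v1[3], const float v2[3],
                 float* t)
{
    const float kEpsilon = 1e-6f;
    float e1[3], e2[3], pvec[3], tvec[3], qvec[3];
    for (int i = 0; i < 3; ++i) { e1[i] = v1[i] - v0[i]; e2[i] = v2[i] - v0[i]; }

    // pvec = dir x e2
    pvec[0] = dir[1]*e2[2] - dir[2]*e2[1];
    pvec[1] = dir[2]*e2[0] - dir[0]*e2[2];
    pvec[2] = dir[0]*e2[1] - dir[1]*e2[0];

    float det = e1[0]*pvec[0] + e1[1]*pvec[1] + e1[2]*pvec[2];
    if (std::fabs(det) < kEpsilon) return false; // ray parallel to triangle

    float invDet = 1.0f / det;
    for (int i = 0; i < 3; ++i) tvec[i] = orig[i] - v0[i];

    float u = (tvec[0]*pvec[0] + tvec[1]*pvec[1] + tvec[2]*pvec[2]) * invDet;
    if (u < 0.0f || u > 1.0f) return false;

    // qvec = tvec x e1
    qvec[0] = tvec[1]*e1[2] - tvec[2]*e1[1];
    qvec[1] = tvec[2]*e1[0] - tvec[0]*e1[2];
    qvec[2] = tvec[0]*e1[1] - tvec[1]*e1[0];

    float v = (dir[0]*qvec[0] + dir[1]*qvec[1] + dir[2]*qvec[2]) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;

    *t = (e2[0]*qvec[0] + e2[1]*qvec[1] + e2[2]*qvec[2]) * invDet;
    return *t >= 0.0f;
}
```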
(Not) Seeing the Unseen
“But why?” you may ask. “Sure, raypicking involves some 3D math, but it doesn’t involve 3D rendering, now does it?”
Actually, it does, because you can use a raypicker to find out which parts of your world are visible and which aren’t, and only draw the visible parts.
Which means that when I talked about how I fill out the index buffers above, I left out a step. Sorry, but it’s a big step and deserves a section of its own.
I think the most important thing I learned on this project was just how slow drawing triangles is. It’s slow. It’s dog-slow. It’s slow as Christmas. Slow as molasses flowing uphill in January.
When I first started programming I thought that a Planitia map would be small enough that I wouldn’t have to do any visibility testing. But it turns out that you can test your entire game world for visibility and compile a list of the visible triangles in less time than it takes to just draw the whole world. Even if your world is just a little 64×64 heightfield and some billboarded sprites.
That’s how slow drawing triangles is.
In case I haven’t made my point, drawing triangles is damn slow and you should only do it as a last resort. It’s so bad that actually having to draw a triangle should almost be seen as a failure case. Your code should not be gleefully throwing triangles at the hardware willy-nilly. Indeed, it should do so grudgingly, after forms have been filled out in triplicate. And duly notarized.
“Enough!” I hear you cry. “We get it! Drawing triangles is slow! Now would you please tell us how you did your visibility testing?”
Oh, right, the visibility testing. Well, there are actually two techniques I use.
The first is a simple distance test from the center of each cell to the camera’s look-at point. If the distance is larger than 25 (an arbitrary number I arrived at through experimentation) the cell cannot possibly be visible. This very quickly excludes most of the terrain on the first pass. There are 4096 terrain cells in a Planitia map; this first pass will let no more than 1964 (25 * 25 * pi) of them through.
In this video I have drawn the camera back so that you can see the circle of passing cells that moves as the camera does.
Now, that’s good, but it’s not good enough. Typically fewer than five hundred cells are actually visible and the circle test still has us drawing almost four times as many. So all the cells that passed the first test now go to the second test, which involves the raypicking code. Actually, it involves the inverse of the raypicking code. Instead of projecting a ray from screen space into world space, we project a point from world space into screen space.
For each cell, I take its four corner points and then project each one from world space into view space and then into projection space. This “flattens” that point into a 2D point that represents the pixel that point would be drawn as on the screen.
If any of these four points are inside the screen coordinates (which for Planitia is 0, 0 to 800, 600) then at least part of the cell is visible and the cell should be drawn. If all four of the points are outside the screen coordinates then the cell is not visible and should not be drawn.
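Here is a sketch of the two tests in plain C++ (names and layout are mine; the world-to-screen projection itself would be done with the D3DX helpers or equivalent matrix math, so only its final screen-space check is shown):

```cpp
// Hypothetical sketch of the first visibility pass: a cell can only be
// visible if its center is within 25 units of the camera's look-at point.
bool PassesDistanceTest(float cellX, float cellZ,
                        float lookAtX, float lookAtZ)
{
    const float kMaxVisibleDistance = 25.0f; // arrived at by experimentation
    float dx = cellX - lookAtX;
    float dz = cellZ - lookAtZ;
    return dx * dx + dz * dz <= kMaxVisibleDistance * kMaxVisibleDistance;
}

// Second pass: after projecting a cell corner into screen space, keep the
// cell if any corner lands inside Planitia's 800x600 viewport.
bool CornerOnScreen(float screenX, float screenY)
{
    return screenX >= 0.0f && screenX <= 800.0f &&
           screenY >= 0.0f && screenY <= 600.0f;
}
```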
Again, I’ve drawn the camera back in this video so that you can see how the visible area moves with the camera.
Only cells that pass both tests have their indices added to the index buffer, and thus it is the index buffer that limits how many triangles are drawn. The final result? We only draw what can be seen – and the game runs a whole lot faster.
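The index-buffer build might look something like this. The article doesn’t specify the terrain’s vertex layout, so the shared-vertex grid and six-indices-per-quad arrangement here are my assumptions:

```python
def build_index_buffer(visible_cells, cells_per_row=64):
    """Append six indices (two triangles) for each cell that passed
    both visibility tests. Assumes the vertex buffer holds a
    (cells_per_row + 1)^2 grid of shared vertices."""
    verts_per_row = cells_per_row + 1
    indices = []
    for cx, cz in visible_cells:
        tl = cz * verts_per_row + cx    # top-left vertex of this cell
        tr = tl + 1                     # top-right
        bl = tl + verts_per_row         # bottom-left (one row down)
        br = bl + 1                     # bottom-right
        # Two triangles per quad, clockwise winding.
        indices += [tl, bl, tr, tr, bl, br]
    return indices
```

Rebuilding this list each frame and handing it to the draw call is what makes the culling actually pay off: triangles for invisible cells simply never reach the hardware.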
Old Man River
And now for the last bit – the water.
Planitia’s water is its own heightfield. It uses the same vertex structure and FVF as the terrain. Each cell is the standard four vertices and two triangles, and I don’t use triangle flipping on it. It’s pretty darn simple.
On the other hand, I do use an index buffer for it so I can do the same visibility tricks I do for the rest of the terrain.
The heightfield is updated fifteen times a second. During this update new heights are calculated based on a formula that changes over time, thus the heightfield seems to undulate. Yes, I could have used a vertex shader, but please recall what I said at the beginning about limiting the technologies I’m using.
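The article doesn’t give the actual undulation formula, so here is one plausible shape as a sketch: two out-of-phase sine waves summed so the surface rolls rather than bobbing uniformly, evaluated per vertex each time the fifteen-times-a-second update fires. Amplitude, frequencies, and names are all invented:

```python
import math

def water_height(x, z, t, amplitude=0.05, base=0.0):
    """Hypothetical undulation function, called with the current game
    time t (seconds) during the 15 Hz water update. Two sine waves with
    different spatial frequencies and time scales keep the surface from
    looking like a single uniform ripple."""
    wave_a = math.sin(x * 0.7 + t)
    wave_b = math.sin(z * 0.9 + t * 1.3)
    return base + amplitude * (wave_a + wave_b)
```

Because this is pure per-vertex math on the CPU, no vertex shader is required, which fits the article’s stated goal of limiting the technologies used.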
While an undulating heightfield is nice, if the texture doesn’t animate the water can look more like blue slime. Populous: The Beginning has this problem.
So the second trick is to get the water texture to animate, and that is all done with the UV coordinates. I am not using a UV transformation matrix like I did for the lava, because a transformation matrix is applied to every UV coordinate identically and I needed to be able to customize them per vertex. So the UV coordinates are all individually calculated. And then hand-dipped in Bolivian chocolate before being delivered in a collectible tin.
The first thing we do is to simply add the current game time in seconds to all the UV coordinates. That gets the water moving.
The second thing we do is to add a very little bit of the camera’s movement to the UV coordinates. This is subtle but works really well, especially if your water texture incorporates reflected sky. Basically it makes it look like the reflected sky is moving at a different rate than the water, which it would be in reality. In the following movie, look at the edges to see the effect most clearly.
Now for the really clever bit. I add the same offset that I’m using to make the water undulate to the UV coordinate for that vertex. That is, if my undulation function says that the vertex is .015 above the normal height, I add .015 to the UV coordinates of that vertex. This has the effect of making the texture seem to squash and stretch as it moves. I think this does more to actually sell the idea that the water is flowing than anything else.
Now for one more thing. I actually add the height of each vertex in the terrain heightfield to the UV coordinates in the water heightfield. This has the effect of making the water “bunch up” around the land.
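Putting the four UV tricks above together, the per-vertex calculation might be sketched like this. The `camera_factor` constant is my guess at “a very little bit”; the function and parameter names are mine:

```python
def water_uv(base_u, base_v, game_time, cam_dx, cam_dz,
             undulation_offset, terrain_height, camera_factor=0.02):
    """Per-vertex UV for the water, combining the four tricks from the
    article. camera_factor scales how much camera movement bleeds into
    the texture (a guess at 'a very little bit')."""
    u = base_u + game_time            # 1. scroll the texture with time
    v = base_v + game_time
    u += cam_dx * camera_factor       # 2. subtle parallax from camera motion
    v += cam_dz * camera_factor       #    (sells the reflected-sky effect)
    u += undulation_offset            # 3. squash/stretch with the waves
    v += undulation_offset
    u += terrain_height               # 4. "bunch up" around the land
    v += terrain_height
    return u, v
```

Since texture coordinates wrap, the ever-growing `game_time` term is harmless; the hardware only cares about the fractional part.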
I could probably improve the water if I added another heightfield on top of the existing one, moving faster and in a different direction. If I did that, I would probably move the camera movement to the top heightfield, since it represents reflection movement. I may do this at some point, but I think Planitia’s water looks good enough for now.
And I think that’s about it. Planitia will be released with full source code so there won’t be any mysteries about how I did anything. If you’ve read this and you’re trying to replicate something I’ve done and are having trouble, please feel free to contact me at firstname.lastname@example.org. And good luck with your own 3D programming endeavors!
You may have noticed the lack of posts. I’ve been handed a rather important task here at work and I’m spending pretty much all my effort on it. Which means that posting will probably be light and work on Planitia is going to move very slowly if at all.
Then once that’s done, the holidays will be starting…
I do have one thing that I’ll be posting soon – an article on what writing Planitia taught me about Direct3D programming. It’s mostly written, it just needs some example graphics. But once that’s out, expect the dearth of posts to continue. But I’ll do the best I can.