Tuesday, April 8, 2008

Brute Terrain texture splatting

After getting texture splatting working, I'm seriously re-thinking my plan to implement a "Brute force chunk NO-LOD terrain." I was in such a rush to do it that I didn't really think it through.

Using nebula's fixed function render path, I need a pass for each detail texture plus an additional pass for the base texture, and performance is not too bad with 3 detail textures and a base texture. Because I was developing on the fixed function render path, it didn't occur to me that on shader model 2.0 hardware I only need one pass to blend all my detail textures (4 so far) plus the base texture. Since the whole idea was to improve the look of distant geometry, chunking really doesn't make sense anymore (see below). Plus, I can use a single image to store the alpha maps for all 4 detail textures, one in each of its 4 channels.
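As a sketch of the idea (not nebula's actual shader code), the single-pass blend amounts to sampling one packed RGBA alpha map and using each channel as the weight for one detail texture. The function below is a hypothetical per-texel version of that math; all names are my own, not from the engine:

```python
def splat_blend(base, details, weights):
    """Blend a base texel with up to four detail texels.

    base:    (r, g, b) sample from the base texture
    details: list of four (r, g, b) detail texture samples
    weights: (r, g, b, a) sample from the packed alpha map,
             one blend weight per detail texture, each in 0..1
    """
    out = list(base)
    for detail, w in zip(details, weights):
        # Standard lerp: result = result * (1 - w) + detail * w
        out = [o * (1.0 - w) + d * w for o, d in zip(out, detail)]
    return tuple(out)
```

On SM 2.0 hardware the same arithmetic runs in a single pixel shader pass: one sample of the alpha map, four detail samples, one base sample.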

About my splatting implementation: initially, I wanted to blend the base texture based on the camera's distance from the point being rendered. As a first step, I tried a fixed blend value of 0.5, which looked really good. I then tried the camera-distance method, and distant geometry didn't look as good as with the fixed value, so I decided to go with the fixed-value version.
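For comparison, the distance-based variant I tried boils down to a clamped ramp like the one below; the near/far constants are made-up tuning values, not the ones I actually used:

```python
def base_blend_factor(distance, near=50.0, far=300.0):
    """Distance-based weight for the base texture: 0 up close
    (detail textures dominate), ramping to 1 far away.
    near/far are hypothetical tuning constants."""
    t = (distance - near) / (far - near)
    return max(0.0, min(1.0, t))

# The alternative that ended up looking better: a constant weight.
FIXED_BLEND = 0.5
```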

See for yourself

I'm pretty happy with the results, but any criticism/comments are welcome.

Monday, March 31, 2008

Brute force chunk NO-LOD terrain

I’m still not sure whether this is a good idea, but here it is. I managed to get texture splatting working on the brute force terrain. I’m testing it on a laptop that defaults to nebula’s fixed function render path. Using the fixed function path, I have to do a pass for each detail texture; I’ve set the shader to use 4 detail textures, so 4 passes. I’m still getting a decent frame rate of about 60, but as always a new problem rears its ugly head.

Distant primitives look very obviously tiled. So I had a bright idea (I think): I want to split the terrain mesh into chunks. I believe the beauty of brute force terrains is that the entire terrain is uploaded once to the graphics hardware, and now I want to split the beauty of it?

With Splatting

Without Splatting

The whole point of chunking is that distant chunks will be rendered using the large diffuse texture in a single pass, while chunks closer to the current view will be rendered using the splatting shader and a number of detail textures. I’m not trying to gain any speed, just to get rid of the tiling in distant geometry. I’m expecting a trade-off: gaining some speed by reducing the number of passes for distant geometry, and losing some by sending more batches of geometry, so I’m hoping for pretty much the same frame rate as I’m getting now with splatting.
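The per-chunk decision itself is simple. A hypothetical sketch, with a made-up distance threshold:

```python
import math

def pick_shader(chunk_center, camera_pos, splat_radius=256.0):
    """Choose a shader per chunk: chunks near the camera get the
    multi-pass splatting shader, distant ones the cheap single-pass
    base texture shader. splat_radius is a made-up tuning value."""
    dist = math.dist(chunk_center, camera_pos)
    return "splat" if dist < splat_radius else "base"
```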

Wish me luck.

Brute … not so bad

I started working on my terrain implementation for nebula over the weekend. I chose to do a brute force implementation first just to get the hang of concepts like generating vertices from height map values and generating texture coordinates.

When I initially developed an interest in making games, I never thought I would have to do a terrain, I remember skipping the terrain chapter in a DirectX book I was reading back then. Needless to say I went back to said chapter over the weekend.

I now have a brute force terrain renderer which uses nebula’s default standard shader with a diffuse texture stretched over the terrain. You can imagine that the texture up close is anything but pretty.

Here is how it works. The terrain object requires an input heightmap with 8 bits per pixel. You may also provide a tile size parameter, which determines the number of vertices required and controls the resolution of the terrain: a higher tile size means fewer vertices and a less smooth terrain.

Say we provide a heightmap of 1024 x 1024 pixels; a terrain of 1024 x 1024 units will be generated. If we set the tile size to 32, we end up with a vertex every 32 pixels/units, and therefore 33 x 33 vertices for a 1024 x 1024 unit terrain, which is pretty decent. What I’m most impressed by is the frame rate: I’m getting about 120 on a laptop with an Intel chip, though the texturing still needs work.
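The vertex-generation step can be sketched like this (my own illustrative code, not the actual renderer; height is just normalized to 0..1 here):

```python
def terrain_vertices(heightmap, tile_size):
    """Generate a grid of (x, y, z) vertices from a square 8-bit
    heightmap (a list of rows), sampling every tile_size pixels.
    An N x N map yields (N // tile_size + 1)^2 vertices, e.g.
    1024 / 32 gives a 33 x 33 grid."""
    n = len(heightmap)  # heightmap is n x n pixels
    verts = []
    for z in range(0, n + 1, tile_size):
        for x in range(0, n + 1, tile_size):
            # Clamp so the last row/column reuses the edge samples
            h = heightmap[min(z, n - 1)][min(x, n - 1)]
            verts.append((float(x), h / 255.0, float(z)))
    return verts
```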

My plan was to get a basic brute force implementation rendering then immediately move on to a chunk lod implementation. Right now, I’m actually thinking I can get away with brute force so I’m looking at whether I can add texture splatting to it and see how far I can get with it.

A question for anyone who can provide some insight on triangle strip primitives. According to the DirectX SDK docs, the winding of every second triangle should be reversed for strips to work properly. I can’t get the terrain to render properly using triangle strips: looking at the bottom of the terrain, I see holes in every second triangle, and from the top it seems fine, but I can see hanging meshes on some parts of the terrain.
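For reference, here is how I understand strip indexing for a vertex grid is supposed to work; this is a generic sketch (my own naming), not my actual index buffer code. The API flips the facing test for every second triangle in a strip automatically, so the indices only need to zig-zag between two rows, with repeated indices forming degenerate triangles to jump between rows:

```python
def strip_indices(cols, rows):
    """Index buffer for a cols x rows vertex grid rendered as one
    triangle strip. Vertices are numbered row-major, row 0 first."""
    idx = []
    for r in range(rows - 1):
        if r > 0:
            # Repeat indices to create degenerate (zero-area)
            # triangles that bridge to the next row.
            idx.append(idx[-1])
            idx.append((r + 1) * cols)
        for c in range(cols):
            idx.append((r + 1) * cols + c)  # vertex on the lower row
            idx.append(r * cols + c)        # vertex on the upper row
    return idx
```

Holes in every second triangle usually point at the first two indices of the strip being in the wrong order, which makes the alternating facing test cull the wrong half.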

Choosing an engine … again

As far as game engines go, I’ve been around the block and back a couple of times. I have tried my hand at every engine I could get my hands on. I don’t know if it’s just me, but every once in a while I question my choice of a particular engine and retry engines that I had already decided against. Well, it’s that time again. If you’ve read my previous posts you might have figured that I’m totally sold on the nebula device.

Lately, while laying out the groundwork for project samurai, I have had doubts about using nebula for it. Almost the entire game is set outdoors, with vast landscapes and a lot of vegetation (grasslands and sparse trees). In its current state, nebula has no functioning terrain renderer. There is a multilayer scene node that I think is basically a brute force terrain implementation and, in my view, doesn’t look very pretty.

Some time last year, while on one of my engine quests, I came across S2 Engine. Back then it was at version 1.4 and had no terrain implementation. Something else that put me off is that they didn’t (and still don’t) provide a demo. Its videos and screenshots looked very impressive, but I decided against it, mostly because I couldn’t try it myself. Now version 1.5 has just come out with a lot of improvements, including terrain rendering, vegetation painting and mesh instancing. Again the videos and screenshots look impressive. For a really small team like ours, it looks like an ideal solution for project samurai.

The only disadvantage of using S2 is that the 135 euro version only gives me access to the engine's script interface, and I’ve always been one who likes to look under the hood. There is an option to upgrade to a license that provides a C++ SDK, and another, pricier option that provides full source access.

I’ve been looking at the available terrain renderers with the aim of picking one to fix. The two available ones are both based on Thatcher Ulrich’s work on chunked LOD terrain. The first one, nclodterrain, has some pretty nice tools for generating terrain meshes and textures, but on running it, my terrain seems to be tilted 90 degrees around the X axis. The other one, ncterrain2, just displays a box; I’m still trying to figure that one out.

I have no intention of abandoning nebula, but I think using S2 will speed up our development. In the meantime, I’m planning to work on a terrain implementation for nebula, but I don’t want project samurai’s development to be tied to it.

Wednesday, February 20, 2008

Blender exporter re-write

I finally started working on a skin animation exporter with the hope of merging it into the existing exporter, but I ended up re-writing the entire exporter.

I wrote a few standalone scripts over the weekend just to test the Python API and see how I could integrate it with nebula. Initially I didn't know much about nebula's animation files, and I had to figure them out the hard way: by looking at existing code, specifically the 3DS Max exporter, which was a lot of help. Nebula's documentation also offered some help.

By yesterday I could export animations, but my script still has a problem exporting the skeleton's initial pose. I posted my problem on blender's forum and went on with re-writing the rest of the exporter.

The mesh exporter is pretty much done, and I just started adding animations yesterday. I've already run into a problem with transform animations though: I can't figure out how to get key frames from a blender object's IPO yet, but I'm looking into it. In the meantime, I will work on skin animation support; getting key frames for that is pretty straightforward, and I wish it were as straightforward for object animations.

Wednesday, February 13, 2008

Back to Nebula 2

After a long while, I've gone back to developing with Nebula2. I've wanted to work on a game for a while, and I've been waiting for the perfect time, with the perfect team, in the perfect weather, but I just realized this will never happen. I need to work with what I have now and build from there.

We organised our first team meeting to get things started, and I'm pretty happy with the progress; it was nice to have so much enthusiasm in one room. We managed to come up with at least a basic plot and assign roles in the team. Our project manager is Debra, one of the harshest (is that a word?) women I know; she should keep us in check. Bharat is in charge of the storyline and gameplay. We have 2 artists on the team, Dan and Mike, and we are still considering a 3rd guy as a concept artist. I will be the technical director/lead programmer.

We would eventually like the game to run on Nebula3, but while we wait for it to stabilize, we decided to go with Nebula2 for now. Nebula3 might be the next best thing, but I think Nebula2 is still way ahead of many engines today.

We have yet to settle on a title, so for now it's just project samurai; no points for guessing what it's about.

Monday, January 21, 2008

CG Shaders and FBO

After getting textures working, I wanted my test application to run on nebula3's application framework. The only way to make this work was to implement shaders and render targets.

Nebula3 makes use of effect states, and GLSL does not have an effect framework, so it was a natural choice to use Nvidia's CG. Nvidia just released CG 2.0, so I decided to give it a try. I have to say, CG's API is pretty straightforward and easy to work with.

According to the CG specification, semantic names are case-insensitive, and CG returns semantics in upper case. So if, for example, I define a semantic as ModelViewProjection in my shader, CG returns it as MODELVIEWPROJECTION, and that is what I feed into nebula as the shader variable's semantic when it's created. The problem comes in when a semantic is hard-coded into the application as ModelViewProjection: this throws an error since the variable ModelViewProjection can't be found. As a temporary solution, I've changed all references to shader variable semantics in the source code to upper case.
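A cleaner long-term fix would probably be to make the lookup itself case-insensitive, along these lines (illustrative sketch only, with made-up names, not nebula's actual variable lookup):

```python
def find_variable(effect_semantics, wanted):
    """Look up a shader variable by semantic, case-insensitively,
    since CG reports semantics in upper case regardless of how they
    were written in the shader source.

    effect_semantics: dict mapping variable name -> semantic string
    """
    wanted = wanted.upper()
    for name, semantic in effect_semantics.items():
        if semantic.upper() == wanted:
            return name
    return None
```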

Whenever a ShaderInstance is created, a clone of the effect should be created, but as of this writing CG's cgCloneEffect function is not working. As a temporary workaround, whenever a shader is loaded from file I keep a copy of the source in the shader, and whenever a shader instance is created I re-create the effect from that source.

Apart from that, I'm generally happy with CG 2.0.

For the render target implementation I've used Frame Buffer Objects (FBO). It was pretty straightforward to set up, but there are still issues I need to fix here and there. For example, I'm testing on a Radeon X1650 that does not have the Non-Power-of-Two (NPOT) extension, which makes rendering to a 1024 x 768 texture very, very slow. The card does support the texture rectangle extension, so when I bind a rectangular texture as a render target the rendering works fine. But then another issue rears its ugly head: rectangular texture coordinates are addressed using the texture's pixel dimensions, not the usual 0...1 range.

Still pondering this one, but I think I'll set it up such that if the hardware supports NPOT, setup will be done normally; if the hardware has the rectangular texture extension but not NPOT, a rectangular texture will be used; and if the hardware supports neither of the two, some sort of pixel padding could be used to make the texture a power of two (I have to research how to do this).
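The three-way fallback I have in mind looks roughly like this (a sketch with invented names, not the actual setup code):

```python
def pick_render_target(width, height, has_npot, has_rect):
    """Decide how to allocate an FBO color texture given extension
    support: use the requested size as-is with NPOT, fall back to a
    rectangle texture if only that is supported, else pad the size
    up to the next power of two."""
    def next_pow2(n):
        p = 1
        while p < n:
            p *= 2
        return p

    if has_npot:
        return ("2d", width, height)
    if has_rect:
        return ("rect", width, height)
    return ("2d", next_pow2(width), next_pow2(height))
```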

Finally here is a screenshot of the testviewer application running on CG shaders.


After a long break from nebula3 opengl development, I got back into it after the holidays. My previous posts are on another blog, http://larryweya.blogspot.com. Blogspot has an option for moving my blog to a different email account, which doesn't seem to work for me; it forces me to keep logging in and out, which is really annoying, so I decided to start a new blog.

Next on my list was textures. I was dreading this, as I wasn't sure yet how I wanted to go about it. Initially I thought it would be easier to use one of the open source image libraries, but after trying to integrate FreeImage and OpenIL, I decided to write my own parsing routines, since this is a learning experience.

To start with, I have only written a .dds loader (a subclass of a stream reader), which is so far working fine and supports some of the more common pixel formats (DXT1, DXT3, DXT5, X8R8G8B8, A8R8G8B8, A4R4G4B4, R5G6B5 and A1R5G5B5). The plan is to eventually have the image loading class (StreamTextureLoader) keep some kind of registry that maps file extensions to specific loader classes.
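The registry I have in mind would be a simple extension-to-class map; a hypothetical sketch (not nebula's or my actual code):

```python
class LoaderRegistry:
    """Map file extensions to loader classes, the way a
    StreamTextureLoader-style front end might dispatch to a
    format-specific reader (e.g. 'dds' -> DDS stream reader)."""

    def __init__(self):
        self._loaders = {}

    def register(self, ext, loader_cls):
        # Store extensions lower-cased so lookup is case-insensitive
        self._loaders[ext.lower()] = loader_cls

    def loader_for(self, filename):
        ext = filename.rsplit(".", 1)[-1].lower()
        return self._loaders.get(ext)
```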

From what I know, DirectX and nebula's texture coordinates have the UV origin (0,0) at the top left corner, while OpenGL has it at the bottom left. I thought I would eventually have to do some sort of flipping, perhaps of the texture at load and save time, but oddly enough, the texture mapping is perfect. I did have a problem with my post-process rendering, but this was easily fixed in the shader by subtracting the v coordinate from 1, i.e. v = 1 - v;
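The conversion is its own inverse, which is why a single flip in the shader is enough. In plain code:

```python
def flip_v(u, v):
    """Convert between top-left-origin (DirectX-style) and
    bottom-left-origin (OpenGL-style) texture coordinates.
    Applying it twice returns the original coordinate."""
    return u, 1.0 - v
```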