November 09, 2009
Block Breakers Progress Report
Block Breakers has finally come to a close. We rallied the troops for one last push and have polished it up and sent it off to the Independent Game Festival. After graduation we have all become busy with family, jobs and new projects, but here is one last movie for good measure. Wish us luck in the student and professional IGF showcases. You can find the game available for download from DigiPen's site here. Thanks and enjoy!
April 25, 2009
Tri-Planar Texturing
Texture distortion occurs when images are stretched across surfaces. It can be caused by non-uniformly spaced texture coordinates or vertices. This doesn't happen very often with relatively small models like characters, vehicles or props, since such art assets are usually maintained by artists and have correct texture wraps. The real challenge lies with terrain.
Terrain is large, sometimes effectively infinite, and creating its texture coordinates by hand would be unreasonable, so it's best to construct them from world coordinates. Common practice is to project textures along a single plane down onto the terrain, which causes texture distortion on steep surfaces.
Tri-planar texturing solves this problem by projecting the texture along all three axis-aligned planes and blending appropriately between the results. Three sets of texture coordinates are created from the xy, yz and zx planes and used to sample the textures. The blend weights are computed from the normal of the surface being textured.
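To make the idea concrete, here is a minimal tri-planar sampling sketch for a pixel shader, written in Direct3D 10 style HLSL. The texture, sampler, tiling scale and blend-sharpening exponent are illustrative assumptions rather than the project's actual code.

```hlsl
Texture2D    rockTex;      // hypothetical texture and sampler names
SamplerState linearSamp;

float4 TriPlanarSample(float3 worldPos, float3 worldNormal, float texScale)
{
    // Blend weights come from the surface normal; raising them to a power
    // tightens the transition between the three projections.
    float3 blend = pow(abs(worldNormal), 4.0);
    blend /= (blend.x + blend.y + blend.z);

    // Project the world position onto the three axis-aligned planes.
    float4 xProj = rockTex.Sample(linearSamp, worldPos.yz * texScale); // yz plane
    float4 yProj = rockTex.Sample(linearSamp, worldPos.zx * texScale); // zx plane
    float4 zProj = rockTex.Sample(linearSamp, worldPos.xy * texScale); // xy plane

    return xProj * blend.x + yProj * blend.y + zProj * blend.z;
}
```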
My junior game project included very large terrains. We chose to apply a rock texture to the steep cliff surfaces and grass and dirt elsewhere in the levels. To avoid stretching the rock texture, I applied the tri-planar technique to the cliffs. The image on the left shows the stretching that can happen; the image on the right shows the rock texture applied to the cliff faces without stretching.
Additional information can be found in the NVIDIA Cascades Demo.
April 22, 2009
Deferred Shading and Anti-Aliasing
When a project needs lots of dynamic lighting (or it's just plain cool), you may look to deferred shading over forward rendering. Its many benefits and its drawbacks are well documented; one of the downsides is the lack of hardware anti-aliasing support. There are various solutions that help alleviate the problem.
In my latest game project I wanted to implement my own anti-aliasing, so I chose a screen-space solution as part of my post-processing. The first step is to determine where the edges that may be aliased lie. Edge detection algorithms apply a filter such as Sobel to produce a weighted value at each pixel indicating the likelihood that it lies on an edge. Common edge indicators are the depth and normal of the surrounding pixels; applying the filter points out discontinuities in these values.
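As a rough illustration, here is a simplified edge-detection sketch in HLSL. It uses a plain cross-difference of the four neighbors rather than a full Sobel kernel, and the G-buffer textures, sampler and weighting constants are assumptions, not the project's actual resources.

```hlsl
Texture2D    depthTex;     // hypothetical G-buffer depth and normal targets
Texture2D    normalTex;
SamplerState pointSamp;

float DetectEdge(float2 uv, float2 texelSize)
{
    // Offsets to the four axis-aligned neighbors.
    float2 offsets[4] =
    {
        float2( texelSize.x, 0), float2(-texelSize.x, 0),
        float2(0,  texelSize.y), float2(0, -texelSize.y)
    };

    float  centerDepth  = depthTex.SampleLevel(pointSamp, uv, 0).r;
    float3 centerNormal = normalTex.SampleLevel(pointSamp, uv, 0).xyz;

    float depthDelta  = 0;
    float normalDelta = 0;
    for (int i = 0; i < 4; ++i)
    {
        float2 tap = uv + offsets[i];
        depthDelta  += abs(depthTex.SampleLevel(pointSamp, tap, 0).r - centerDepth);
        normalDelta += 1 - dot(normalTex.SampleLevel(pointSamp, tap, 0).xyz, centerNormal);
    }

    // Large discontinuities in depth or normal mark a likely aliased edge.
    return saturate(depthDelta * 10.0 + normalDelta * 0.5);
}
```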
Once the edges are found, a blur can be applied weighted by the edge value. This is a fairly simple approach that can work quite well, but in my project a quick and naive implementation was slow and had other downsides. There are a few things to watch out for.
The shader code involves many texture lookups at many texture coordinate offsets. These offset texture coordinates should be computed in the vertex shader; they will then be interpolated automatically and read in the pixel shader just like the non-offset coordinates, which greatly reduces per-pixel computation. Also, with a fixed filter size you will lose detail as objects move away from the eye: as an object takes up less screen space, the blur affects more than just the object's edges. This can be helped by scaling the blur's sample offsets based on depth, but that again adds per-pixel computation.
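A sketch of the first point, moving the offset coordinates into the vertex shader of the full-screen pass, might look like the following. The structure, semantics and texel-size constant are assumptions for illustration.

```hlsl
#define NUM_TAPS 4

cbuffer ScreenParams
{
    float2 texelSize;   // 1 / render-target resolution
};

struct VS_OUTPUT
{
    float4 pos                : SV_POSITION;
    float2 uv                 : TEXCOORD0;
    float2 offsetUV[NUM_TAPS] : TEXCOORD1;   // occupies TEXCOORD1..4
};

VS_OUTPUT FullScreenVS(float3 inPos : POSITION, float2 inUV : TEXCOORD0)
{
    VS_OUTPUT o;
    o.pos = float4(inPos, 1.0);
    o.uv  = inUV;

    // Compute the neighbor coordinates once per vertex; the rasterizer
    // interpolates them, so the pixel shader reads them for free.
    o.offsetUV[0] = inUV + float2( texelSize.x, 0);
    o.offsetUV[1] = inUV + float2(-texelSize.x, 0);
    o.offsetUV[2] = inUV + float2(0,  texelSize.y);
    o.offsetUV[3] = inUV + float2(0, -texelSize.y);
    return o;
}
```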
This screenshot compares aliased and anti-aliased output and also shows the result of my edge detection.
Additional information on deferred shading and anti-aliasing approaches can be found at the following references:
Deferred Shading in STALKER by Oles Shishkovtsov
Deferred Shading Tutorial by Fabio Policarpo and Francisco Fonseca
April 20, 2009
Block Breakers Trailer
I just wanted to share a bit of my current game. This is a fourth-year DigiPen student project: a fast-paced arena battle where your objective is to break the ground under your opponents and make them fall. Check out the sweet trailer!
April 04, 2009
Soft Particles
Particles are used all the time in games; many effects can be produced with them, from smoke and fire to stars and energy fields. Particles are typically semi-transparent planar surfaces drawn to the screen as sprites or textured quads. When these particles intersect other geometry in the scene, they often produce unnatural hard edges. The technique presented here attempts to eliminate such visually jarring edges.
The edges are caused by discarding portions of the particle textures that fail z-testing. The traditional z-test is an all-or-nothing operation; we need a gradual z-test that lets us blend the texture's alpha channel over some range. Implementing our own z-test requires access to the depth buffer. There are two options: use DX10 to access the depth buffer directly, or create our own depth buffer. My current game project uses deferred shading, so I have created my own depth buffer for this technique.
Like traditional rendering pipelines, we need to render the particles after the rest of the scene geometry. Once the depth buffer is ready, we implement our own z-test in the particle pixel shader by comparing the depth of the pixel we are about to draw against the depth of the geometry already at that pixel. For this example, assume linear depth values that are zero at the near plane and increase toward the far plane. We subtract the particle's depth from the scene depth. If the difference is greater than a preset threshold, we do nothing. If it is negative, the traditional z-test would fail and the pixel is discarded. And if the difference falls between zero and the threshold, we alter the pixel.
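A minimal HLSL sketch of that comparison in a particle pixel shader is shown below. The scene depth texture, the interpolated linear particle depth and the fade threshold are assumptions about how the data is passed in, and the fade used here is a simple linear one.

```hlsl
Texture2D    particleTex;     // hypothetical resource and parameter names
Texture2D    sceneDepthTex;   // linear depth written during the geometry pass
SamplerState linearSamp;

cbuffer SoftParticleParams
{
    float fadeDistance;       // threshold over which the particle fades out
};

float4 SoftParticlePS(float4 screenPos     : SV_POSITION,
                      float2 uv            : TEXCOORD0,
                      float  particleDepth : TEXCOORD1) : SV_TARGET
{
    float4 color = particleTex.Sample(linearSamp, uv);

    // Linear depth of the opaque geometry already at this pixel.
    float sceneDepth = sceneDepthTex.Load(int3(screenPos.xy, 0)).r;

    // How far the particle sits in front of the geometry behind it.
    float diff = sceneDepth - particleDepth;

    if (diff <= 0)
        discard;                           // traditional z-test would fail

    // Between zero and the threshold, blend the alpha instead of cutting it.
    color.a *= saturate(diff / fadeDistance);
    return color;
}
```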
The whole goal of soft particles is to remove any noticeable edges. The blend as the depth difference goes from the threshold down to zero can be done many ways. A simple linear blend greatly reduces the hard edges but can still be quite noticeable; an exponential function smooths the change across the blending range much better.
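Swapping the linear fade in the sketch above for an exponential curve is a small change. The exact curve the project used isn't shown here, so the falloff constant below is just one plausible choice.

```hlsl
// Drop-in replacement for the linear fade in the sketch above; the falloff
// constant of 4.0 is an assumption, not the project's tuned value.
float ExponentialFade(float diff, float fadeDistance)
{
    float linearFade = saturate(diff / fadeDistance);
    // Larger falloff values push the particle toward fully opaque sooner.
    return saturate(1.0 - exp(-4.0 * linearFade));
}
```

In the pixel shader above, `color.a *= ExponentialFade(diff, fadeDistance);` would replace the linear line.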
I have created a small video demonstration to compare standard and soft particles in my game project.
For more information on this wonderful technique please see the NVIDIA paper Soft Particles by Tristan Lorach.