
Monday, November 14, 2011

Giving shape to the world

I finally have an interesting playground to experiment in, so it's about time I focused on what I wanted to do in the first place. The main drive behind this project was procedural content: using algorithms to populate the world with variety instead of a fixed number of precomputed models. I've already been doing that, in fact. The distribution and sizes of the spheres in the demo, for example, are all decided deterministically by the program at run time.

All I do is decide how many spheres I want, what the range of sizes should be, and the volume of space to use. The program then fills the space. Because the starting conditions are always the same, the program always builds the same scene. I could easily create a different distribution by changing the seed or any of the parameters. I want to go further, though. Right now, all these spheres are kind of boring. And not very asteroid-like. To change this, I'll deform the spheres to create more interesting shapes.
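To make the deterministic part concrete, here is a minimal sketch of that kind of seeded distribution. The names (Sphere, distributeSpheres) and the choice of std::mt19937 are my own illustration, not the project's actual code:

```cpp
#include <random>
#include <vector>

// Hypothetical stand-in for the project's real sphere class.
struct Sphere { float x, y, z, radius; };

// Fill a cubic volume with spheres. The same seed and parameters always
// produce the same scene; change either and you get a new distribution.
std::vector<Sphere> distributeSpheres(unsigned seed, int count,
                                      float minRadius, float maxRadius,
                                      float volumeSide) {
    std::mt19937 rng(seed);  // deterministic: fixed seed, fixed sequence
    std::uniform_real_distribution<float> pos(-volumeSide / 2, volumeSide / 2);
    std::uniform_real_distribution<float> rad(minRadius, maxRadius);

    std::vector<Sphere> spheres;
    spheres.reserve(count);
    for (int i = 0; i < count; ++i)
        spheres.push_back({pos(rng), pos(rng), pos(rng), rad(rng)});
    return spheres;
}
```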

My initial plan is to assign each sphere a height map, then use it to raise or lower each vertex along the direction away from the center of the sphere. One difficulty I expect is adjusting the normals as the terrain is reshaped, but I'll figure something out. On the height map end, I'll use the simplex noise algorithms I've talked about before to give each asteroid a distinct look. By seeding each asteroid with a distinct value, I ensure they'll all be unique.

A sample of simplex noise

For the height map, I only need one value per vertex. This way I can send the value to the vertex shader as an attribute. I already created the spheres using a gridlike pattern, so I can use the uv coordinates of each vertex for this purpose. This does mean that there will be more detail near the poles than along the equator, but because my source is going to be a 3D density function, it should not result in any obvious distortions. The output will go into a Vertex Buffer Object, which will be fed to the video card and used when drawing.
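As a sketch of that pipeline, assuming a simplexNoise3D function like the one from the earlier noise posts and an OpenGL loader already set up (all names here are illustrative):

```cpp
#include <vector>
#include <GL/glew.h>      // any OpenGL loader exposing the buffer API works
#include <glm/glm.hpp>

// Placeholder: the simplex noise implementation from the earlier posts,
// returning a value in [-1, 1] for a point in 3D space.
float simplexNoise3D(const glm::vec3& p, unsigned seed);

// Sample one height per vertex from the 3D density function, so the denser
// grid near the poles doesn't introduce visible distortions.
std::vector<float> buildHeights(const std::vector<glm::vec3>& vertices,
                                unsigned asteroidSeed) {
    std::vector<float> heights;
    heights.reserve(vertices.size());
    for (const glm::vec3& v : vertices)
        heights.push_back(simplexNoise3D(v, asteroidSeed));
    return heights;
}

// Upload the heights to the video card as their own Vertex Buffer Object.
GLuint uploadHeightAttribute(const std::vector<float>& heights) {
    GLuint name = 0;
    glGenBuffers(1, &name);
    glBindBuffer(GL_ARRAY_BUFFER, name);
    glBufferData(GL_ARRAY_BUFFER, heights.size() * sizeof(float),
                 heights.data(), GL_STATIC_DRAW);
    return name;
}
```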

Normally, height maps add a height to points on a grid that lies level with the XZ or XY plane: Z or Y is treated as height, and that value is replaced with the value of the height map. However, I have a sphere, and on a sphere height is defined as the distance from the center. To apply the height map to a sphere, I have to work a bit more.

The values I have will fall between -1 and 1. If I did a straight multiplication of each surface point by its value, I'd have a mess. Instead, what I do is decide what I want the range of heights to be. If I want the highest points to jut out to three times the radius of the sphere, for example, and the deepest valleys to reach halfway to the center, that gives me a range of (3 - 0.5) = 2.5 radii. Half of that range is 1.25 radii, and the midpoint (where the height map is 0) sits at 1.75 radii, so I can get the position of every point as ((1 + height) * 1.25 + 0.5) * position.
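That mapping is short in code. This is just the formula above, with the 0.5 and 3.0 bounds from the example as hypothetical parameters:

```cpp
#include <glm/glm.hpp>

// Map a height in [-1, 1] to a displaced surface point. With the bounds from
// the example (deepest valley at 0.5 radii, highest peak at 3 radii):
// height -1 lands at 0.5r, height 0 at 1.75r, height +1 at 3r.
glm::vec3 displace(const glm::vec3& unitSpherePoint, float height,
                   float minHeight = 0.5f, float maxHeight = 3.0f) {
    float halfRange = (maxHeight - minHeight) / 2.0f;        // 1.25 here
    float radius = (1.0f + height) * halfRange + minHeight;  // 0.5 .. 3.0
    return unitSpherePoint * radius;
}
```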

The other complex bit is recalculating the normals. Changing the shape of the sphere changes all the surface normals as well, and that affects the lighting at each point. If I don't get them right, the lighting will look decidedly odd.

That's getting ahead of me a bit. First I need to be sure I can produce the information and get it into the shader. I had to add an extra attribute to the Model class to do it, but after a few short tweaks I got it up and running. What I'm doing right now is using the height value of each point as a color value, with -1 being black and 1 being white. So higher areas will look brighter.
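For reference, a sketch of what that debug shader might look like, embedded as a string on the C++ side. The attribute and variable names are mine, not the project's; the matching fragment shader would just output the interpolated color:

```cpp
// Vertex shader that turns the per-vertex height into a grayscale color.
const char* kHeightDebugVS = R"(
    #version 120
    attribute float height;                  // per-vertex value in [-1, 1]
    varying vec4 color;
    void main() {
        float gray = (height + 1.0) * 0.5;   // -1 -> black, +1 -> white
        color = vec4(gray, gray, gray, 1.0);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";
```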

Looks nicer than blue with red and green dots.


It's working, and I'm pretty happy about that. I'm now struggling with an odd problem, though: the project runs fine from a pen drive, but copying it to the hard drive and running it there doesn't work. Fixing that is probably going to give me enough material for another post.

Wednesday, October 19, 2011

Collision Detection - Appendix A - Collisions

Not much to report this week. Work was busy, leaving little time for the important stuff. Still, I don't want to let the week pass with no updates over here, so I decided to share some of the things I didn't go into detail about before.

For starters, how I'm actually resolving the collisions. It's still a bit buggy, so writing it down might help me figure out what is wrong.

As I said in the first part, I'm doing elastic collisions between spheres. If I were making a billiards game, this would be all I really need. Upon detecting a collision, as I said, I determine the position where the two objects first touch by moving them back in time until the distance between their centers is the sum of their radii: basically, multiply their velocities by a negative time and add the result to their positions.
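A sketch of that rewind step, using glm for the vector math (names are mine): the quadratic just expands the condition that the distance between centers equals the sum of the radii.

```cpp
#include <cmath>
#include <glm/glm.hpp>

// Move two overlapping spheres back to the moment of first contact, i.e. the
// (negative) time t at which the distance between centers equals the sum of
// the radii. Assumes the spheres are actually moving relative to each other.
void rewindToContact(glm::vec3& p1, const glm::vec3& v1, float r1,
                     glm::vec3& p2, const glm::vec3& v2, float r2) {
    glm::vec3 dp = p2 - p1;   // relative position
    glm::vec3 dv = v2 - v1;   // relative velocity
    float R = r1 + r2;

    // |dp + t*dv|^2 = R^2 expands to a quadratic a*t^2 + b*t + c = 0.
    float a = glm::dot(dv, dv);
    float b = 2.0f * glm::dot(dp, dv);
    float c = glm::dot(dp, dp) - R * R;  // negative while overlapping
    float t = (-b - std::sqrt(b * b - 4.0f * a * c)) / (2.0f * a);

    p1 += v1 * t;  // t is negative, so this steps both spheres back in time
    p2 += v2 * t;
}
```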

Next I need to calculate their new speeds. In order to do this, it's helpful to move ourselves into a convenient reference frame. It's often the case in physics problems that you can get halfway to a solution just by changing how you look at it. In my case, all speeds and positions are stored relative to a frame of reference where all the spheres are initially at rest, and the origin is where the player first appears.

This means that the first collision will always be between an object at rest and one moving. Further collisions may be between moving objects, with no reasonable expectation of any relationship between their movements. Instead of this, we'll move the collision to the reference frame of the center of mass of the two colliding objects.
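The frame change itself is only a couple of lines. Here is a minimal sketch, assuming glm for the vector math (the function name is mine):

```cpp
#include <glm/glm.hpp>

// Velocity of the center of mass of the two colliding spheres. Subtracting it
// from both velocities moves the collision into a frame where the center of
// mass is at rest; adding it back afterwards returns to the original frame.
glm::vec3 centerOfMassVelocity(float m1, const glm::vec3& v1,
                               float m2, const glm::vec3& v2) {
    return (m1 * v1 + m2 * v2) / (m1 + m2);
}

// Usage sketch:
//   glm::vec3 vcm = centerOfMassVelocity(m1, v1, m2, v2);
//   v1 -= vcm; v2 -= vcm;   // resolve the bounce in the COM frame...
//   v1 += vcm; v2 += vcm;   // ...then move back to the original frame
```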

This has certain advantages: in an elastic collision, the velocity of the center of mass is the same before and after the collision. From the center of mass, the collision looks as if two objects approached each other, hit, and then moved away, regardless of what their velocities in the standard reference frame are. Their respective speeds depend on the relationship between their masses as well.

For example, consider two spheres of equal mass, one at rest and the other rolling to hit it head on. From the center of mass, it looks as if the two spheres approach, each moving at half the speed of the original sphere in the original frame of reference, meet at the center, and then bounce away with their velocities reflected off each other. From the original reference frame, it looks as if the first sphere stops and the sphere that was at rest moves away at the speed of the first sphere.

The angle of the green line represents the movement of the center of mass through the three stages of the collision.

From the point of view of the center of mass, if the masses were different the larger mass would appear to move more slowly, and when they bounce it would move away more slowly as well.

This is fairly straightforward for head-on collisions, but far more often collisions are slightly off-center. For these, we need to apply a little more thought. Still, the same rules apply. The main difference is that we need to reflect the velocities across the collision surface: the plane tangent to both spheres at the collision point.

In order to do this we need to know the normal of the collision surface, which is simple enough: it is the normalized vector that joins the centers of the two spheres. In our example the masses are equal, which means that for the center of mass to remain static the velocities must be equal and opposite, and this relationship must be maintained after the collision as well.

Off-center collision showing speed vectors, collision plane and normal.

The equation to reflect the speed vectors is

Vect2 = Vect1 - 2 * CollisionNormal * dot(CollisionNormal,Vect1)

This must be applied to both bodies, of course.
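For completeness, here is how that formula might look in code, with the collision normal computed from the centers as described above (glm assumed; names are illustrative):

```cpp
#include <glm/glm.hpp>

// The reflection formula from above: flip the velocity component along the
// collision normal, leave the tangential component intact.
glm::vec3 reflectVelocity(const glm::vec3& v, const glm::vec3& n) {
    return v - 2.0f * n * glm::dot(n, v);
}

// The collision normal is the normalized vector joining the two centers:
//   glm::vec3 n = glm::normalize(center2 - center1);
//   v1 = reflectVelocity(v1, n);   // both velocities in the COM frame
//   v2 = reflectVelocity(v2, n);
```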

We can see that the head-on collision is actually a special case of the off-center collision, as we would expect. The dot product of the speed vector and the collision normal in that case is the length of the speed vector, since the dot product can be seen as the projection of one vector onto the other, and both vectors share the same direction.

Multiplying the length of the speed vector by a normalized vector in the same direction yields the original speed vector, and subtracting it twice from the original vector leaves a vector of the same length and opposite direction, which is exactly what happens in head-on collisions.

In off-center collisions, the effect is that we reverse the component of the velocity along the collision normal and leave the perpendicular component intact. The more glancing a blow is, the greater the angle between the collision normal and the velocity, resulting in a smaller velocity component along that axis and, as a consequence, a smaller deflection.

Tuesday, July 12, 2011

Top Models

With everything I've done so far, it's about time I got around to drawing something on the screen. Before I can do that, however, I have to define what it is I'm going to draw. The shapes that describe the different objects in the world are called models.

Models in three dimensions are composed of triangles, and these are made up of three vertices each. A vertex (singular of vertices) is described by several numbers. We need its position in space, which is described by three coordinates (x, y, z). We also need the normal for each point, which is used in lighting calculations and describes a direction perpendicular to the surface; this vector is also described by three values and should be of unit length. We can specify a color for each point, which requires the color values (RGB, greyscale, transparency, etc.). Finally, we have the texture coordinates, which describe the point on a texture that should be mapped to the vertex. Since textures are flat, this is accomplished with just two values, (u, v).
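One possible C++ layout for such a vertex, using glm types; the exact fields are an assumption (an RGBA color here), since the count can vary:

```cpp
#include <glm/glm.hpp>

// One possible layout for the values describing a vertex. With an RGBA color
// this is twelve floats; a greyscale color would bring it down to nine.
struct Vertex {
    glm::vec3 position;  // (x, y, z) in model coordinates
    glm::vec3 normal;    // unit length, used in lighting calculations
    glm::vec4 color;     // e.g. RGBA
    glm::vec2 texCoord;  // (u, v) into a flat texture
};
```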

This means there are between nine and twelve values that describe a point. These values can be stored on the video card itself, using structures called Vertex Buffer Objects. Each of these objects is referred to by a unique ID called a name, which OpenGL provides when you request the creation of one.

This means that my model structure in the program needs to know only the VBO name.
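For illustration, creating one of those buffers with the standard OpenGL calls might look like this, reusing the hypothetical Vertex struct from above:

```cpp
#include <vector>
#include <GL/glew.h>  // any OpenGL loader exposing the buffer API works

// Ask OpenGL for a buffer name and fill the buffer with the vertex data;
// afterwards the model only needs to remember the returned name.
GLuint createVBO(const std::vector<Vertex>& vertices) {
    GLuint name = 0;
    glGenBuffers(1, &name);               // OpenGL hands back a unique name
    glBindBuffer(GL_ARRAY_BUFFER, name);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 vertices.data(), GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return name;
}
```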

When storing the vertex positions, however, I am doing it in model coordinates. Depending on how I transform them to screen coordinates, I could scale or stretch the model. If I want to draw a large sphere and a small sphere, I could either store the points for each, or I could store the points for a unit sphere and scale it to the size I want.

Additionally, I could have a model consisting of different sub-models. For example, one could create a snowman by reusing the sphere model, instead of building a single model storing the values of two spheres. In order to do this, the snowman model would need the name of the basic sphere and two matrices to scale and translate the spheres into the correct shape and position.

This implies two kinds of models. The primitives would consist of the Vertex Buffer Object names, while the complex ones would consist of a list of primitives and the matrices applied to them. Of course, the list could also contain other complex models. These two types could be condensed into one if we stored both properties in every model: primitive models would hold empty lists, while the VBO names of complex models would be null, as sketched below.
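A condensed sketch of that single model type, with a hypothetical snowman built by reusing a unit sphere (the struct layout and names are my illustration, not the project's):

```cpp
#include <vector>
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::translate, glm::scale

// The condensed model type: primitives carry a VBO name and an empty part
// list; complex models carry parts and a null (zero) VBO name.
struct Model {
    GLuint vboName = 0;       // 0 for complex models
    struct Part {
        const Model* model;   // a primitive or another complex model
        glm::mat4 transform;  // scales/translates the sub-model into place
    };
    std::vector<Part> parts;  // empty for primitives
};

// A snowman reusing one unit sphere twice instead of storing two spheres:
//   Model snowman;
//   snowman.parts = {
//       {&unitSphere, glm::mat4(1.0f)},                                 // body
//       {&unitSphere, glm::translate(glm::mat4(1.0f), glm::vec3(0, 1.4f, 0))
//                     * glm::scale(glm::mat4(1.0f), glm::vec3(0.6f))},  // head
//   };
```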

The models would then be stored in the model cache, and different entities would call them when needed.

I would still need methods for loading the vertex information into the buffers, however, and for drawing them. But this is something to look into in another post.