
Monday, January 16, 2012

Beautiful and unique snowflakes

So last time, I had managed to add some code to create a height-map from a simplex noise function, using the space coordinates as input and getting a pseudo-random number between -1 and 1 as the output. For now I'm just coloring the vertices, black for lowest value and white for highest, but eventually I'd use the values to distort the sphere.
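
As a rough sketch of what that amounts to per vertex (simplexNoise3D here is a hypothetical stand-in for the noise function from earlier posts, returning values in [-1, 1]):

    // Hypothetical noise function, as discussed in earlier posts.
    float simplexNoise3D(float x, float y, float z);

    // One height sample per vertex, using the vertex position as input,
    // then mapped to a grey value: black at -1, white at +1.
    float heightToGrey(float x, float y, float z)
    {
        float height = simplexNoise3D(x, y, z); // in [-1, 1]
        return (height + 1.0f) * 0.5f;          // in [0, 1]
    }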

The only problem was that the noise function used a shuffled array to create the pattern of noise, so in order to create new patterns I had to reshuffle that array. Reshuffling the array presented a few problems, mostly caused by my reliance on the C standard library's rand() function, which is really quite terrible. As a result I had to search for a better function that could do the job I needed.

I eventually found an excellent paper on pseudo-random number generators by David Jones. I settled on the JLKISS64 generator, and after altering it to fit the framework I managed to create multiple height maps on demand. The problem, then, was how to display them.
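
The reshuffle itself is then just a Fisher-Yates pass driven by the new generator. A minimal sketch, with nextRandom64() standing in for a call to the adapted JLKISS64:

    #include <cstdint>

    // Hypothetical wrapper around the adapted JLKISS64 generator.
    uint64_t nextRandom64();

    // Fisher-Yates shuffle of the noise permutation table: walk the array
    // backwards, swapping each element with a uniformly chosen earlier one.
    // (The modulo bias is negligible for a small table.)
    void reshuffle(int* table, int size)
    {
        for (int i = size - 1; i > 0; --i) {
            int j = (int)(nextRandom64() % (uint64_t)(i + 1));
            int tmp = table[i];
            table[i] = table[j];
            table[j] = tmp;
        }
    }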

Currently, my method for displaying models is to create a Vertex Buffer Array for each model, and a custom transform for each object. The different spheres were really all the same array with different matrices applied. But the height map was created as an additional coloring at each vertex, meaning I had to create new arrays that still used the same buffers for vertex position, normal and color, but a different buffer for the heights.

This was fairly straightforward, thankfully. Essentially, the process involved the same steps as creating a vertex array from scratch, but skipping the buffer step, since the information had already been loaded to the video card.
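
In rough OpenGL terms, the process looks something like this (a sketch: the buffer names, attribute locations and the heights vector are placeholders for the framework's own plumbing):

    // New vertex array that reuses the sphere's buffers already in video
    // memory, adding only a fresh buffer for this asteroid's heights.
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, sharedPositionVbo);       // shared
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, sharedNormalVbo);         // shared
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);  // (color binds
    glEnableVertexAttribArray(1);                           //  the same way)

    GLuint heightVbo;                                       // the only new data
    glGenBuffers(1, &heightVbo);
    glBindBuffer(GL_ARRAY_BUFFER, heightVbo);
    glBufferData(GL_ARRAY_BUFFER, heights.size() * sizeof(float),
                 heights.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(2);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, sharedIndexVbo);  // shared indices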

In short order I had the new code running, and the space that was once filled with the same patterned spheres now had unique marbles floating all about.

Next up, I'm going to try to get text into the program and create a console of sorts.

Monday, November 14, 2011

Giving shape to the world

I finally have an interesting playground to experiment in, so it's about time I focused on what I wanted to do in the first place. The main drive behind this project was procedural content: using algorithms to populate the world with variety instead of a fixed number of precomputed models. I've already been doing that, in fact. The distribution of spheres in the demo, for example, and their sizes, are all deterministically decided by the program at run time.

All I do is decide how many spheres I want, what the range of sizes should be, and the volume of space to use. The program then fills the space. Because the starting conditions are always the same, the program always builds the same scene. I could easily create a different distribution by changing the seed or any of the parameters. I want to go further, though. Right now, all these spheres are kind of boring. And not very asteroid like. To change this, I'll deform the spheres to create more interesting shapes.

My initial plan is to assign each sphere a height map. I'll then use the height map to raise or lower each vertex along the direction away from the center of the sphere. One difficulty I expect is how to adjust the normals as the terrain is reshaped, but I'll figure something out. On the height map end, I'll use the simplex noise algorithms I've talked about before to give each asteroid a distinct look. By seeding each asteroid with a distinct value, I ensure they'll all be unique.

A sample of simplex noise

For the height map, I only need to match one pixel to each vertex. This way I can send the value to the vertex shader as an attribute. I already created the spheres using a gridlike pattern, so I can use the uv coordinates of each pixel for this purpose. This does mean that there will be more detail near the poles than along the equator, but because my source is going to be a 3D density function it should not result in any obvious distortions. The output will go into a Vertex Buffer Object, which will be fed to the video card and called when drawing.

Normally, height maps add a height to points on a grid that lies level with either the XZ or XY plane, meaning Z or Y is treated as height and that value is replaced with the value of the height map. However, I have a sphere. On a sphere, height is defined as the distance from the center. To apply the height map to a sphere, I have to work a bit harder.

The values I have will fall between -1 and 1. If I did a straight multiplication of the surface point by the value, I'd have a mess. Instead, what I do is decide what I want the range of heights to be. If I want the highest points to jut out to three times the radius of the sphere, for example, and the deepest valleys to reach halfway to the center, that gives me a range of (3 - 0.5) = 2.5 radii. Half the range is 1.25, so the midpoint (where the height map is 0) sits at 0.5 + 1.25 = 1.75 radii, and I can get the position of every point as ((1 + height) * 1.25 + 0.5) * position.
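
Written out as code, the mapping is tiny (Vec3 is a stand-in for the project's vector type, assuming it supports scaling by a float):

    // Map a height in [-1, 1] to a radius in [minR, maxR], then push the
    // unit-sphere vertex out along its own direction. With minR = 0.5 and
    // maxR = 3.0 this reproduces the numbers above.
    Vec3 displace(const Vec3& unitVertex, float height, float minR, float maxR)
    {
        float halfRange = (maxR - minR) * 0.5f;            // 1.25 in the example
        float radius = (1.0f + height) * halfRange + minR; // 1.75 at height == 0
        return unitVertex * radius;
    }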

The other complex bit is recalculating the normals. By changing the shape of the sphere, the surface normals all change as well and that impacts the light on each point. If I don't get them right, the lighting will look decidedly odd.

That's getting ahead of myself a bit. First I need to be sure I can produce the information and get it into the shader. I had to add an extra attribute to the Model class to do it, but after a few short tweaks I got it up and running. What I'm doing right now is using the height value of each point as a color value, with -1 being black and 1 being white, so higher areas look brighter.

Looks nicer than blue with red and green dots.


It's working, and I'm pretty happy about that. I'm now struggling with an odd problem: when running the project from a pendrive it works, but copying the project to the hard drive and running it there doesn't. Fixing that will probably give me enough material for another post in itself.

Wednesday, October 19, 2011

Collision Detection - Appendix A - Collisions

Not much to report this week. Work was busy, leaving little time for the important stuff. Still, I don't want to let the week pass with no updates over here, so I decided to share some of the things I didn't go into detail about before.

For starters, how I'm actually resolving the collisions. It's still a bit buggy, so writing it down might help me figure out what is wrong.

As I said in the first part, I'm doing elastic collisions between spheres. If I were making a billiards game, this would be all I'd really need. Upon detecting a collision, I determine the positions at which the two objects first touched by moving them back in time until the distance between their centers equals the sum of their radii: basically, I multiply their velocities by a negative time step and add that to their positions.
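
That negative number can be computed exactly: writing the separation as dp + dv*t and requiring its length to equal the sum of the radii gives a quadratic in t, of which we want the root in the past. A sketch, with Vec3 and dot() standing in for the project's math types:

    #include <cmath>

    // Given two overlapping spheres with relative position dp, relative
    // velocity dv and combined radius r, return the (negative) time at
    // which they first touched: |dp + dv*t| == r. Assumes the spheres are
    // actually moving relative to each other (a != 0).
    float timeOfContact(const Vec3& dp, const Vec3& dv, float r)
    {
        float a = dot(dv, dv);
        float b = 2.0f * dot(dp, dv);
        float c = dot(dp, dp) - r * r;       // negative while overlapping
        float disc = std::sqrt(b * b - 4.0f * a * c);
        return (-b - disc) / (2.0f * a);     // the root that lies in the past
    }

Moving each sphere by its own velocity times that t then yields the touching configuration.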

Next I need to calculate their new speeds. In order to do this, it's helpful to move ourselves into a convenient reference frame. It's often the case in physics problems that you can get halfway to a solution just by changing how you look at it. In my case, all speeds and positions are stored relative to a frame of reference where all the spheres are initially at rest, and the origin is where the player first appears.

This means that the first collision will always be between an object at rest and one moving. Further collisions may be between moving objects, with no reasonable expectation of any relationship between their movements. Instead of this, we'll move the collision to the reference frame of the center of mass of the two colliding objects.

This has certain advantages: in an elastic collision, the velocity of the center of mass is the same before and after the collision. From the center of mass, the collision looks as if two objects approached each other, hit, and then moved away, regardless of what their velocities in the standard reference frame are. Their respective speeds depend on the relationship between their masses as well.

For example, consider two spheres of equal mass, one at rest and the other rolling to hit it head-on. From the center of mass, it looks as if the two spheres approach, each moving at half the speed of the original sphere in the original frame of reference, meet at the center, and then have their speeds reflected off each other. From the original reference frame, it looks as if the first sphere stops and the sphere that was at rest moves away at the speed of the first sphere.

The angle of the green line represents the movement of the center of mass through the three stages of the collision.

From the point of view of the center of mass, if the masses were different the larger mass would appear to move more slowly, and when they bounce it would move away more slowly as well.

This is fairly straightforward for head-on collisions, but far more often collisions are slightly off-center. For these, we need to apply a little more thought. Still, the same rules apply. The main difference is that we need to reflect the speeds across the collision surface: the plane tangent to both spheres at the collision point.

In order to do this we need to know the normal of the collision surface, which is simple enough, as it is the normalized vector joining the centers of the two spheres. In our example the masses are equal, which means that for the center of mass to remain static the speeds must be equal and opposite, and this relationship must be maintained after the collision as well.

Off-center collision showing speed vectors, collision plane and normal.

The equation to reflect the speed vectors is

Vect2 = Vect1 - 2 * CollisionNormal * dot(CollisionNormal,Vect1)

This must be applied to both bodies, of course.
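
Putting the whole bounce together, it might look like the sketch below, assuming a hypothetical Body struct carrying position, velocity and mass, with Vec3, dot() and normalize() standing in for the project's own math helpers:

    // Elastic bounce between two spheres, resolved in the center-of-mass
    // frame as described above.
    void resolveCollision(Body& a, Body& b)
    {
        Vec3 vCom = (a.velocity * a.mass + b.velocity * b.mass)
                    / (a.mass + b.mass);
        Vec3 n = normalize(b.position - a.position);  // collision normal

        Vec3 va = a.velocity - vCom;                  // into the CoM frame
        Vec3 vb = b.velocity - vCom;

        va = va - n * (2.0f * dot(n, va));            // reflect off the
        vb = vb - n * (2.0f * dot(n, vb));            // collision plane

        a.velocity = va + vCom;                       // back to world frame
        b.velocity = vb + vCom;
    }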

We can see that the situation of both bodies crashing head-on into each other is actually a special case of the off-center collision, as we would expect. The dot product of the speed vector and the collision normal in that case is the length of the speed vector, since the dot product can be considered the projection of one vector onto the other, and both vectors share the same direction.

Multiplying the length of the speed vector by a normalized vector in the same direction yields the original speed vector, and when subtracted twice from the original vector we end up with a vector of the same length and opposite direction, which is what happens in head-on collisions.

In off-center collisions, the effect is that we reverse the component of the speed along the collision direction, and leave the perpendicular component intact. The more glancing a blow is, the greater the angle between the collision direction and the speed direction, resulting in a smaller speed component along that axis and as a consequence a smaller deflection.

Monday, September 26, 2011

Collision Detection - Part 1 - Detecting it

I apologize for the tardiness. Should have had this up last Monday. Never decided on an official update schedule, but I think Monday is a good day for it.

So, what have I been so busy with that I couldn't post? Well, I've been wrangling with collisions. If you've tested the last demo I put out, you might have noticed that things go through each other, including the player. That's because I haven't included any logic in the loop that looks for objects interacting. I essentially create the spheres and player ship, and draw them, but they're each doing their own thing.

Collision detection (and resolution) is about finding when these objects interact, and how. The clearest example is not letting objects go through each other. But it's also necessary for detecting bullets hitting targets and damaging them, and objects bouncing believably when they hit something.

This is a complex problem, but thankfully computers today are very powerful machines. Collisions on a scale unthinkable years ago can run on current hardware in real time. A great deal of study has gone into this problem as well, both in scientific research and in the video game industry's search for more realistic behaviors.

The first issue we find is how to determine that there was, indeed, a collision. There are many ways that this can be done, from checking every triangle in a model to every other triangle in the scene (where intersecting triangles identify a collision) to comparing bounding volumes, and ways of speeding up the process itself by culling impossible scenarios.

The way I chose to start with is essentially the simplest (and given my field of spheres, one that should work just fine): I create a bounding volume around each object. A bounding volume is a three-dimensional shape built around an object that guarantees that no point of the object lies outside the volume. Take a square, for example. A circle centered on the square, with a diameter at least as long as the square's diagonal, bounds the square, since all four corners would lie inside it.

In my case, I'm using spheres as the bounding volume. I can center these on the entity and size them to match the model (an exact fit for the asteroid spheres; for the ship, the bounding sphere is a bit bigger than it really needs to be). The idea behind bounding volumes isn't to find whether two objects intersect, but to be able to cheaply discard those that don't.

Comparing bounding volumes is faster and easier than comparing the models themselves, triangle to triangle. Since the volumes completely enclose the model, we know that if the two volumes don't meet there is no way for the models to meet, and we can discard the entire models at once instead of making sure triangle by triangle. Checking if two spheres intersect is a trivial check if you know their centers and their respective radii: If the distance between the centers is less than the sum of their radii, the spheres intersect. Else, they don't.
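In code, the test is as cheap as promised; comparing squared lengths even avoids the square root (Vec3 and dot() again stand in for the project's math types):

    // Returns true if the bounding spheres overlap (or just touch).
    bool spheresIntersect(const Vec3& c1, float r1, const Vec3& c2, float r2)
    {
        Vec3 d = c2 - c1;
        float r = r1 + r2;
        return dot(d, d) <= r * r;  // squared distance vs squared radii sum
    }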

While we can guarantee that if the spheres don't intersect the models don't either (and we can then avoid checking further), it's not necessarily true that if the spheres intersect the models are touching. Thus, it's possible (even expected) that the bounding volume method will yield "false positives", reporting collisions when two objects aren't actually touching. The solution is to run a more expensive check, triangle against triangle for example, on these. The advantage is that by excluding the pairs we know can't intersect with a cheap check, we save a considerable amount of computing power.

Other bounding volumes commonly used are AABBs and OBBs, which stand for Axis Aligned Bounding Box and Oriented Bounding Box. Both consist of building boxes around the models, with the difference that the OBB builds the smallest box that the model fits in and rotates it together with the model, while the AABB builds the smallest box that can align to the world coordinate system. If the object within an AABB changes orientation, the AABB must be recalculated.

The advantage the boxes have is that they produce fewer false positives than a sphere. Spheres enclose too much empty volume, especially the less 'round' the object within them is. Consider a stick, for example. In such a case, where one dimension of the object is much larger than the other two, the sphere will be mostly empty, and any object passing through this empty space will need to be checked against the stick, resulting in wasted computation.

The second easiest volume to check is the AABB. To see if two volumes intersect, one must simply check that they overlap along each axis. The check is simple because the boxes are aligned to the axes, so the x, y or z values along each side are fixed. On the other hand, rebuilding the volume each time the model changes orientation can get expensive. In the case of the stick, for example, we find ourselves in pretty much the same situation as with the sphere if it is oriented away from all three axes, and get the best results if it lies along one of them.
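
A sketch of that check, assuming a hypothetical AABB struct holding the min and max corners:

    struct AABB { Vec3 min, max; };

    // Two axis-aligned boxes intersect only if their intervals overlap on
    // all three axes; a gap on any single axis separates them.
    bool aabbsIntersect(const AABB& a, const AABB& b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x
            && a.min.y <= b.max.y && a.max.y >= b.min.y
            && a.min.z <= b.max.z && a.max.z >= b.min.z;
    }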

The OBB method, finally, guarantees the fewest false positives. Finding the smallest OBB can be a bit expensive, but it needs only be done once when you create the model. On the other hand, comparing OBBs to find intersections is more expensive than either of the previous methods.

There really isn't a 'best method', as they all have their own costs which need to be considered. AABBs are good for scenery, for example, which doesn't change orientation. On the other hand, if you have a roundish object, a sphere will produce less wasted space than either box method. The good thing is we don't need to 'choose' a single method, as for very little overhead we can implement box-sphere checks that let us mix things up.

Further, the bounding volume is only the first test. And as large and complex as models have become, it's not uncommon for there to be a hierarchy of checks, cutting the number of triangle tests required down to sane levels. With two ten-thousand-polygon models, for example, naively checking every triangle of one against every triangle of the other would mean a hundred million tests.

Whew. That took more than I thought. Don't really have the time to fish for pictures though.

No new demo today, but soon. Next Monday: what happens when you find an intersection!

Sunday, September 4, 2011

In the driver's seat

So I have a basic implementation of Newtonian movement worked out. But we're still controlling an object that eventually flies off into the distance. And once it's out of view, it's pretty unlikely that it will come back. Wouldn't it be nice, though, if we could keep track of it?

Yes, yes it would. So having worked out movement, I focused on the camera. I decided to start with two views. One from the cockpit, which tracks the rotation of the object, and the other the familiar chase camera, looking at the object from behind, and tracking the direction of movement.

The reason I made these decisions is that movement in three dimensions is pretty unintuitive, especially when you remove friction. Most space flight games get around this unintuitiveness by treating space as a giant pool: you can remain buoyant at any spot, but there's a constant friction that slows you down as soon as you stop your engines. This allows people to point themselves in the direction they want to go and fire the engines. Any transversal motion will eventually be slowed enough that they end up reaching their destination with a minimum of effort.

With Newtonian physics, however, there's no meaning to the idea of slowing down. Slowing down with respect to what? Speed is wholly relative. Firing your engines in the direction you want to go leads you to shoot around your target. It is madness. I needed a way to recover some of the intuitiveness without sacrificing the fun of being in space.

Surprisingly, still images do a poor job of demonstrating motion.
The chase cam was my solution. Other games implement a chase view, but they waste it by keeping it aligned with the ship, so you're always watching your ship from behind. This is useful for shooting things, since weapons tend to be aligned with the front of the ship, but not so much for navigation. It's essentially the same as the cockpit view, only from outside the ship.

With my implementation, the player sees the direction they're actually moving in. If they want to head towards something, they have to align their objective with the center of this view. In the meantime, the ship is free to rotate any which way, and accelerating in different directions will shift the view. Ironically, racing games make better use of the chase view than space flight games: for the most part you sit squarely behind the car, but when turning hard around corners or drifting you can notice your car turning while you keep looking in the direction the car is still moving in.

Still, shooting things is important. I can see how it would be quite distressing not to be able to see what you're shooting at, which is what the cockpit view is for. Eventually, helpful GUI elements will minimize the difficulty of getting from A to B when you don't have friction helping you along, but for a start the chase view should help.

Of course, before applying these views I had to populate the space somehow. A view from the cockpit in empty space would look completely static regardless of your movement. I decided to do two things. First, I draw a cube around the ship, which remains in a fixed orientation at all times. This lets you see how your orientation changes, but since you always sit in the middle of the cube rather than moving around it, you can't see your translation. This is how skyboxes work, but since I don't have textures working yet, green lines will have to do.

So that's what those lines are for.
The second is, I actually start using all the infrastructure I'd been talking about before. During initialization I make a few different models (a sphere for 'asteroids', a pyramid for the player), and store them in the cache. Then I create an entity for the player, and assign it the pyramid model.

For the asteroids I do a little trick to get more variation. Based on the sphere model, I create new models which point to the same vertex buffers (where the sphere shape is stored in the video card), but add a scaling matrix to alter the size.
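
A sketch of the trick (the types and helpers here are hypothetical; the real framework's model cache differs in the details):

    #include <cstdlib>
    #include <vector>

    struct Model {
        unsigned sphereVao;  // shared buffers on the video card
        float scale;         // per-asteroid size
    };

    // Build 'count' asteroid models that all point at the same sphere data.
    std::vector<Model> makeAsteroids(unsigned sphereVao, int count)
    {
        std::vector<Model> asteroids;
        for (int i = 0; i < count; ++i) {
            float s = 0.5f + 2.5f * (std::rand() / (float)RAND_MAX);
            asteroids.push_back(Model{sphereVao, s}); // same shape, new size
        }
        return asteroids;
    }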

I make about fifty asteroids with random sizes and assign them random positions about the starting location for the player. Then during rendering I go through the list of entities, apply the scaling transformation and draw the result. Happily it works with very little additional fussing.

For the demo I only have the cockpit view working, which was pretty easy to implement by giving the camera the same reference frame as the player. It is available for download as usual: Framework-0.0.5r

Since it doesn't work on all machines just yet, I also created a short video of the demo.

 

So what's next? I should have the chase view working soon, and the ability to switch between the two views. If I have the time, I'll even throw in something special.

Thursday, September 1, 2011

Modeling a sphere, the new way

Back when I began this blog, almost a year ago, I set a goal for myself of drawing a sphere. Back then I didn't know about the fixed pipeline, and how it had been replaced by shaders and a more flexible approach. I was pretty much learning as I went, and I was just starting out.

Now, on my 50th post, I have a bit more experience under my belt, and it's interesting to see how far I've come and how my approach to doing exactly the same thing has changed. It hasn't been easy or smooth, and I've had to start over three or four times, but through it all I've progressed.

Last year my approach used the fixed pipeline, and I was sending triangle strips to be rendered, one slice of the sphere at a time. This time, what I've done is create a set of vertices and build the triangles by connecting the dots. Because the data lives in a vertex buffer array in video memory, I don't have to send a load of data to the graphics card each frame.

Same as before, I'm cutting the sphere like a globe, with parallels running east to west and meridians running north to south.
In spherical coordinates, parallels correspond to theta values and meridians to phi values.
Fair warning, there's a bit of geometry ahead. I'll try to be clear.

I want to find the positions of the points at the intersections, which are the ones I'll use as vertices for my triangles. My approach is to loop through the grid, finding the spherical coordinates (radius, angle theta from the horizontal plane and angle phi around the axis) for each point and translating them to Cartesian coordinates (x, y, z).

To make matters easier I'll create a sphere with radius 1. If I want to use a larger sphere later, I can just scale the model accordingly. I'm also going to be working out the normals for each point. Since this is a sphere with radius 1, though, the normals will be the same as the coordinates.

Working out the spherical coordinates of the points is simple enough. Theta can vary between 0 and pi, and phi between 0 and 2*pi. For a sphere of a given resolution, determined by the number of cuts and slices I use, I can determine the variation between grid points along each coordinate. With three cuts, for example, the theta values would be 0 (the north pole), pi/4 (45°), pi/2 (90°, the equator), 3pi/4 (135°) and pi (the south pole).

Then it's just a matter of looping through all the possible values for each spherical coordinate, and translating to Cartesian, which is a simple trigonometry operation.
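
A sketch of that loop (the axis convention is my assumption, and may not match the framework's):

    #include <cmath>
    #include <vector>

    // Generate unit-sphere vertices; with radius 1 each position doubles
    // as its own normal.
    std::vector<float> sphereVertices(int cuts, int slices)
    {
        const float PI = 3.14159265f;
        std::vector<float> verts;
        for (int i = 0; i <= cuts + 1; ++i) {      // theta: 0 to pi
            float theta = PI * i / (cuts + 1);
            for (int j = 0; j <= slices; ++j) {    // phi: 0 to 2*pi
                float phi = 2.0f * PI * j / slices;
                verts.push_back(std::sin(theta) * std::cos(phi)); // x
                verts.push_back(std::cos(theta));                 // y
                verts.push_back(std::sin(theta) * std::sin(phi)); // z
            }
        }
        return verts;
    }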


I now have a list with all the points in the sphere. The next thing I need to do is to create a list of indices, telling me which points make each triangle. If I had the four points

1 ( 0, 1, 0)
2 (-1, 0, 0)
3 ( 0,-1, 0)
4 ( 1, 0, 0)

I could use the two triangles defined by the points 1,2,4 and 2,3,4 to build a square, for example. With the sphere it's the same. Knowing how I traversed the sphere in the first place, I can determine which points build the triangles on the surface. It's important to consider that the polar caps of the model behave differently than the main body, but that special case is simple enough to handle.
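
For the body of the sphere, each grid cell between two parallels splits into two triangles. A sketch, assuming points laid out row by row as in the loop above (the bands touching the poles are handled separately as triangle fans):

    #include <vector>

    // Index the quads between interior parallels; 'stride' is the number
    // of points per parallel (slices + 1 in the layout above).
    std::vector<unsigned> sphereIndices(int cuts, int slices)
    {
        int stride = slices + 1;
        std::vector<unsigned> indices;
        for (int r = 1; r < cuts; ++r) {
            for (int j = 0; j < slices; ++j) {
                unsigned a = r * stride + j, b = a + 1;
                unsigned c = a + stride,     d = c + 1;
                indices.insert(indices.end(), {a, b, c}); // first triangle
                indices.insert(indices.end(), {b, d, c}); // second triangle
            }
        }
        return indices;
    }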

After I've stored all this information in arrays, I need to feed it to the video card. I build a vertex buffer array, which stores information about which buffer objects to use, and several vertex buffer objects to actually store the information. For each point I'm storing its position, normal, color, and texture coordinates. Add the index array telling the card how to group the points into triangles, and I need five buffers in all. On the plus side, that's a whole lot of information I no longer need to worry about. When I next wish to draw the sphere, I simply bind the vertex buffer array and draw it.

The result is visible in the demo for Framework-0.0.3r, where I swapped the old single-triangle model for a fairly detailed sphere.

Monday, August 8, 2011

Back to square one again!

Well, I spent a lot of time scratching my head over a problem that turned out to be a simple dumb issue.

As I detailed in my last post, my models consist of pointers to video memory where the actual information is stored. In a fit of genius, I realized that if I deleted the model but forgot to clean up the information, I'd be leaking video memory. That would be bad. So I made sure that, before a model is destroyed, the information is cleaned up.

This seemed like a clever solution, and the subtle problem didn't become obvious to me until I spent two weeks wondering what I was doing wrong.

The problem happened with copying the model. I created the model in one part of the program, and copied it to the cache that would store it. This preserved the pointers, but when the original model was cleaned up it cleaned up the video memory. Just as I told it to. The copy was then left with pointers that were no longer valid. So when I then asked the program to render the model, it couldn't.

Thankfully the problem wasn't hard to solve. I now create the model in a part of memory where I control when it gets destroyed, and copy a pointer to it instead. This way the model object itself is never copied or destroyed, and I can store it directly. In the process I learned a bit, so I call it a success.
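
The same ownership rule, expressed in today's C++ purely for illustration (not what the framework actually does, and the names are hypothetical):

    #include <map>
    #include <memory>
    #include <string>

    struct Model {
        unsigned vboName;
        ~Model() { /* release the video memory, exactly once */ }
    };

    // The cache is the single owner; callers only ever hold raw pointers,
    // so copying a Model (and double-freeing its buffers) can't happen.
    std::map<std::string, std::unique_ptr<Model>> cache;

    Model* store(const std::string& key, std::unique_ptr<Model> model)
    {
        Model* raw = model.get();
        cache[key] = std::move(model);
        return raw;
    }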

I also had to add a shader to the program, which I should have probably worked into the framework before I did the work on the models. Thankfully I had a ready made function that handles all the aspects of reading the shader from file, loading it to video memory, compiling, linking and binding attributes.

So now, after a couple of months, I'm able to render a red triangle to the screen. Which is about where I was before the rewrite. So, yay?

Well, to be honest I have a lot more than before. I can read input, and configure it (just need to add more input to be read). My storage is a lot more organized and extensible. I need to do some adjustments to my design to account for lessons learned, but I'm very happy with what I have.

Next up, I'll be adding some actual interactivity. Well, first deal with matrices, then interactivity.

Tuesday, July 12, 2011

Top Models

With everything I've done, it's about time I start getting to drawing something to the screen. Before I can do that, however, I have to define what it is I'm going to draw. The shapes that describe the different objects in the world are called models.

Models in three dimensions are composed of triangles, and these are made up of three vertices each. A vertex (singular of vertices) is described by several numbers. We need its position in space, which is described by three coordinates (x, y, z). We also need the normal for each point, which is used in lighting calculations and describes a direction perpendicular to the surface; this vector is also described by three values and should be of unit length. We can specify a color for each point, which requires the color values (RGB, greyscale, transparency, etc.). Finally we have the texture coordinates, which describe the point on a texture that should be mapped to the vertex. Since textures are flat, this is accomplished with just two values, (u, v).
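
Laid out as a struct, one possible arrangement of those values (a sketch; the exact layout depends on the shader's inputs):

    // A full 12-value vertex: position, normal, RGBA color and (u, v).
    struct Vertex {
        float position[3];
        float normal[3];   // unit length, used for lighting
        float color[4];    // RGBA; a greyscale vertex would need just one
        float uv[2];       // texture coordinates
    };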

This means there are between nine and twelve values that describe a point. These values can be stored on the video card itself, using structures called Vertex Buffer Objects. Each of these objects is referred to by a unique ID called a name, which OpenGL provides when you request the creation of one.

This means that my model structure in the program needs to know only the VBO name.

When storing the vertex positions, however, I am doing it in the model coordinates. Depending on how I translate them to screen coordinates, I could scale or stretch the model. If I want to draw a large sphere and a small sphere, I could either store the points for each, or I could store the points for a unit sphere and scale it to the size I want.

Additionally, I could have a model consisting of different sub-models. For example, one could create a snowman by reusing the model for a sphere, instead of a single model storing the values of two spheres. In order to do this, the snowman model would need the name of the basic sphere and two matrices, to scale and translate the spheres to the correct shape and position.

This implies two kinds of models. The primitives would consist of the Vertex Buffer Object names, while the complex ones would consist of a list of primitives and matrices applied to them. Of course, the list could also contain other complex models itself. These two types could be condensed if we stored both properties in every model. Primitive types would hold empty lists, while the VBO names for complex types would be null.
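
Condensed into one structure, the idea might look like this sketch (Mat4 standing in for a real 4x4 matrix type):

    #include <vector>

    struct Mat4 { float m[16]; };  // stand-in for a real matrix type
    struct Model;

    struct SubModel {
        const Model* part;  // a primitive, or another complex model
        Mat4 transform;     // scales/translates the part into place
    };

    struct Model {
        unsigned vboName = 0;        // 0 plays the role of the null name
        std::vector<SubModel> parts; // empty for primitives
    };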

The models would then be stored in the model cache, and different entities would call them when needed.

I would still need methods for loading the vertex information into the buffers, however, and for drawing them. But this is something to look into in another post.