Monday, September 26, 2011
I apologize for the tardiness. Should have had this up last Monday. I never decided on an official update schedule, but I think Monday is a good day for it.
So, what have I been so busy with that I couldn't post? Well, I've been wrangling with collision. If you've tried the last demo I put out, you might have noticed that things go through each other, including the player. That's because I haven't included any logic in the loop that looks for objects interacting. I essentially create the spheres and the player ship, and draw them, but they're each doing their own thing.
Collision detection (and resolution) is about finding when these objects interact, and how. The clearest example is not letting objects go through each other. But it's also necessary for detecting bullets hitting targets and damaging them, and objects bouncing believably when they hit something.
This is a complex problem, but thankfully computers today are very powerful machines. Collisions on a scale unthinkable years ago can run on current hardware in real time. A great deal of study has gone into the problem as well, both in scientific research and in the video game industry's search for more realistic behavior.
The first issue is determining that there was, indeed, a collision. There are many ways to do this, from checking every triangle of a model against every triangle in the scene (intersecting triangles identify a collision) to comparing bounding volumes, along with ways of speeding up the process itself by culling impossible scenarios.
The way I chose to start with is essentially the simplest (and given my field of spheres, one that should work just fine). I create a bounding volume around each object. A bounding volume is a three-dimensional shape built around an object that guarantees no point of the object lies outside the volume. Take a square, for example. A circle centered on the square, with a diameter at least as long as the square's diagonal, bounds the square, since all four corners lie inside it.
In my case, I'm using spheres as the bounding volume. I center them on the entity and size them to match the model (for the actual spheres the fit is exact; for the ship the bounding sphere is a bit bigger than it really needs to be). The idea behind bounding volumes isn't to find whether two objects intersect, but to cheaply discard the pairs that don't.
Comparing bounding volumes is faster and easier than comparing the models themselves, triangle to triangle. Since the volumes completely enclose the models, we know that if two volumes don't meet there is no way for the models to meet, and we can discard the pair at once instead of checking triangle by triangle. Checking whether two spheres intersect is trivial if you know their centers and radii: if the distance between the centers is less than the sum of the radii, the spheres intersect. Otherwise, they don't.
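As a sketch (the struct and function names are mine, not the framework's), the whole test fits in a few lines. Comparing squared distances even avoids the square root:

```cpp
struct BoundingSphere {
    float x, y, z;   // center, in world coordinates
    float radius;
};

// True if the spheres overlap. Comparing squared values avoids a
// square root, which matters when running thousands of checks a frame.
bool spheresIntersect(const BoundingSphere& a, const BoundingSphere& b) {
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    float dz = a.z - b.z;
    float sum = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz < sum * sum;
}
```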
While we can guarantee that if the spheres don't intersect the models don't either (and we can skip any further checking), the converse doesn't hold: intersecting volumes don't mean the models are touching. So the bounding volume method will yield "false positives", reporting collisions when two objects aren't actually in contact. The solution is to run a more expensive check, triangle against triangle for example, on those pairs only. By excluding the pairs we know can't intersect with a cheap test first, we save a considerable amount of computing power.
Other commonly used bounding volumes are AABBs and OBBs, which stand for Axis-Aligned Bounding Box and Oriented Bounding Box. Both consist of building boxes around the models, with the difference that the OBB is the smallest box the model fits in, rotating together with the model, while the AABB is the smallest box whose faces stay aligned with the world coordinate axes. If the object within an AABB changes orientation, the AABB must be recalculated.
The advantage of the boxes is that they produce fewer false positives than a sphere. Spheres leave too much empty volume, especially the less 'round' the object within them is. Consider a stick, for example. When one dimension of the object is much larger than the other two, the sphere is mostly empty, and any object passing through that empty space still has to be checked against the stick, wasting computation.
The second easiest volume to check is the AABB. To see whether two volumes intersect, one simply compares the sides. This is easy precisely because they are axis-aligned: the x, y or z value along each face is fixed. On the other hand, rebuilding the volume each time the model changes orientation can get expensive. And with the stick, we find ourselves in pretty much the same situation as the sphere if it's oriented away from all three axes; we get the best results when it lies along one of them.
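For comparison, a sketch of the AABB test (again with my own names): two boxes overlap only if their extents overlap on all three axes.

```cpp
struct AABB {
    float minX, minY, minZ;   // corner with the smallest coordinates
    float maxX, maxY, maxZ;   // corner with the largest coordinates
};

// A single separated axis is enough to rule out a collision.
bool aabbsIntersect(const AABB& a, const AABB& b) {
    if (a.maxX < b.minX || b.maxX < a.minX) return false;
    if (a.maxY < b.minY || b.maxY < a.minY) return false;
    if (a.maxZ < b.minZ || b.maxZ < a.minZ) return false;
    return true;
}
```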
The OBB method, finally, guarantees the fewest false positives. Finding the smallest OBB can be a bit expensive, but it only needs to be done once, when the model is created. On the other hand, comparing OBBs for intersection is more expensive than either of the previous methods.
There really isn't a 'best method', as they all have their own costs which need to be considered. AABBs are good for scenery, for example, which doesn't change orientation. On the other hand, if you have a roundish object, a sphere will produce less wasted space than either box method. The good thing is we don't need to 'choose' a single method, as for very little overhead we can implement box-sphere checks that let us mix things up.
Further, the bounding volume is only the first test. As large and complex as models have become, it's not uncommon for there to be a whole hierarchy of checks, cutting the number of triangle tests down to sane levels. With two ten-thousand-polygon models, for example, naively checking every triangle of one against every triangle of the other would mean a hundred million tests.
Whew. That took more than I thought. Don't really have the time to fish for pictures though.
No new demo today, but soon. Next Monday: what happens when you find an intersection!
Monday, September 12, 2011
It's full of stars!
I've gotten a few new features into the framework, such as a first implementation of 'shooting', but that's not what I want to talk about today.
So far, I've focused solely on functional details. Making models, making sure the models are displayed, movement, views and such. The result is... well, it's functional. Of course I want to have something good before I start polishing. A polished turd is still a turd, so I have to turn it into something else first.
The traditional way of making a background is making use of a skybox. This is basically a cube you draw around the scene. With the aid of something called cubemapping, you can slap the sky on the inside and the textures used fool the eye into thinking it's looking at a dome, rather than a cube. No seams or anything, if done correctly.
Of course, I didn't have the textures to make it look good handy. While there are plenty of images of space around, very few offer a view of the complete sky, and those that do have fisheye distortion. I could have slapped a few images together and called it a day, but I wanted something special.
So, I started a little side project. The idea: I'd create the background out of a whole galaxy I made up on the spot. That means creating all the stars, jumping to an appropriate position inside the galaxy, and taking pictures of what is visible in each direction. That way I'd have the textures I needed for the cube map. I could scale the definition as much as I want, or move to a different spot and have the constellations in the background actually shift properly.
How to do it, though? A galaxy like our Milky Way has on the order of a hundred billion stars; I can't really draw them one at a time. Instead, my plan is to split the space the galaxy occupies into as many blocks as I can manage, and give it shape with the use of two functions.
One of the functions will give the galaxy a shape. The second will add some higher frequency noise to make it look more interesting.
I will represent the blocks with points, and once I have it all working try to improve the appearance. I really want the GPU to do the heavy lifting here. I can make a shader program that checks the position of each point and assigns it a value for each function, then multiplies them together. But if I tell the GPU to draw a million points one at a time, the overhead on the calls will slow things to a crawl. So what I do instead is make use of instanced drawing.
Instanced drawing means I tell the GPU to draw a single model (in my case, a single point) as many times as I want. It's helpful for drawing many identical objects. The result, after setting the color to solid red, is not very impressive:

4096 points, all drawn on top of each other.
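In OpenGL terms, the whole cloud is one call. This is a sketch assuming a GL 3.x context; `pointVao` is a hypothetical vertex array holding the single point, and the shader setup is elided:

```cpp
// Draw one vertex, 0x1000 times, in a single call. The vertex shader
// can tell the copies apart through the built-in gl_InstanceID.
glBindVertexArray(pointVao);
glDrawArraysInstanced(GL_POINTS, 0, 1, 0x1000);
```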
Of course, drawing several identical points means they get drawn in the same spot. To add variation, the GPU gives me an extra variable, an instance ID, which I can manipulate for some interesting effects.
For my first test, I'm sending 0x1000 points to the GPU. That's 4096 in decimal, but you'll see why I work in hex in a bit. It means the instance IDs range from 0x0 to 0xFFF (4095). What I can do is treat each of the three hex digits as a coordinate along one of the axes, x, y and z, and move the point by that amount. So, for example, the point with ID 0x123 maps to position (1, 2, 3). I use bit masks and some bitwise arithmetic to work out the positions, which is why I do it this way. Computers, as you know, like binary and powers of 2 better than decimal.
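The arithmetic looks like the following, shown here in C++ for clarity; in the shader the same masks are applied to gl_InstanceID:

```cpp
// Each hex digit of the instance ID becomes one coordinate:
// 0x123 -> (1, 2, 3). Shifts and masks pick the digits apart.
void instanceToPosition(int id, float& x, float& y, float& z) {
    x = float((id >> 8) & 0xF);   // first hex digit
    y = float((id >> 4) & 0xF);   // second hex digit
    z = float(id & 0xF);          // third hex digit
}
```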
Adding the instance ID into the mix, then, the result is a cube made of red dots. Not very impressive right now, but it's a start.

I'm moving the points a bit, as well, so they appear centered.
I can play around with color too. For example, if I treat x as Red, y as Green and z as Blue, I get a colorful cube:
OK, so back to the galaxy. First, I'll give it a proper shape. A cube is hardly it, so let's ignore points outside a central sphere. For that, I'll give points further than 1 unit from the center an intensity of 0.1. There may be a few stragglers around the galaxy, but they're few and far between.
Much better. Now, galaxies tend to be shaped as disks more than spheres, so we want to squash the sphere as well. What we want is to decrease the intensity with height from the plane. I'll use y to measure that height, and to avoid an obvious break I'll also diminish the intensity along the disk the farther you are from the center. The result:
Not so bad. Still need to add more features, though. For example, our galaxy has spiral arms. These will require a bit more thinking.
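For reference, here's roughly what the current shaping boils down to: the two functions multiplied together, as described above. The falloff constants are my guesses, not the values in the actual shader.

```cpp
#include <cmath>

// Intensity for a point at (x, y, z), galaxy centered on the origin.
// First function: a hard spherical cutoff. Second: squash into a disk
// by dimming with height (|y|) and with distance from the center.
float galaxyIntensity(float x, float y, float z) {
    float r = std::sqrt(x * x + y * y + z * z);
    if (r > 1.0f)
        return 0.1f;                             // the dim stragglers
    float disk = std::exp(-8.0f * std::fabs(y)); // fade with height
    float core = 1.0f - r;                       // fade toward the rim
    return disk * core;
}
```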
I didn't get quite as far as I meant to, mostly due to Deus Ex. Still, I don't want to leave you without an update, so check out Framework-0.0.6r. You can do everything you could do before, and now shoot too! The space key now fires instead of accelerating; you accelerate with Left Shift. Fun* game to play: see how much you can shoot before the game starts slowing down. TODO: delete the bullets once they're no longer relevant.
I've also made the folder with all the releases public, so anyone can look up the history without digging through the archives. Framework releases.
* It's fun for me, but then I had fun with the first release too.
Sunday, September 4, 2011
In the driver's seat
So I have a basic implementation of Newtonian movement worked out. But we're still controlling an object that eventually flies off into the distance. And once it's out of view, it's pretty unlikely to come back. Wouldn't it be nice, though, if we could keep track of it?
Yes, yes it would. So having worked out movement, I focused on the camera. I decided to start with two views. One from the cockpit, which tracks the rotation of the object, and the other the familiar chase camera, looking at the object from behind, and tracking the direction of movement.
The reason I made these decisions is that movement in three dimensions is pretty unintuitive, especially when you remove friction. Most space flight games get around this by treating space as a giant pool. You can remain buoyant at any spot, but there's a constant friction that slows you down as soon as you cut your engines. This lets people point themselves in the direction they want to go and fire the engines; any transverse motion is eventually damped enough that they reach their destination with a minimum of effort.
With Newtonian physics, however, there's no meaning to the idea of slowing down. Slowing down with respect to what? Speed is wholly relative. Firing your engines in the direction you want to go leads you to shoot around your target. It is madness. I needed a way to recover some of the intuitiveness, without sacrificing the fun of being in space.

Surprisingly, still images do a poor job of demonstrating motion.
The chase cam was my solution. Other games implement a chase view, but they waste it by keeping it aligned with the ship, so you're always watching your ship from behind. This is useful for shooting things, since weapons tend to be aligned with the front of the ship, but not so much for navigation. It's essentially the same as the cockpit view, only from outside the ship.
With my implementation, the player sees the direction they're actually moving in. If they want to head towards something, they have to align the objective with the center of this view. In the meantime, the ship is free to rotate any which way, and accelerating in different directions will shift the view. Ironically, racing games make better use of the chase view than space flight games. For the most part you sit squarely behind the car, but when turning hard around corners or drifting you can notice the car turn while the camera keeps looking in the direction the car is still travelling.
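A sketch of the idea, assuming a GLM-style math library (the framework's actual API isn't shown here): the camera sits behind the ship along the velocity vector, so the center of the screen is always where you're heading.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Chase view that tracks the direction of movement, not the ship's nose.
// Degenerates when velocity is near zero or parallel to 'up'; a real
// camera would need a fallback for those cases.
glm::mat4 chaseView(const glm::vec3& shipPos, const glm::vec3& velocity,
                    float distance) {
    glm::vec3 dir = glm::normalize(velocity);    // direction of travel
    glm::vec3 eye = shipPos - dir * distance;    // behind, along velocity
    return glm::lookAt(eye, shipPos, glm::vec3(0.0f, 1.0f, 0.0f));
}
```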
Still, shooting things is important. I can see how it would be quite distressing not to be able to see what you're shooting at, which is what the cockpit view is for. Eventually, helpful GUI elements will minimize the difficulty of getting from A to B when you don't have friction helping you along, but for a start the chase view should help.
Of course, before applying these views I had to populate the space somehow. A view from the cockpit in empty space would look completely static regardless of your movement. I decided to do two things. First, I draw a cube around the ship, which remains in a fixed orientation at all times. This lets you see how your orientation changes, but since you don't move around the cube and always sit at its center, you can't see your translation. This is how skyboxes work, but since I don't have textures working yet, green lines will have to do.

So that's what those lines are for.
The second is, I actually start using all the infrastructure I've been talking about. During initialization I make a few different models (a sphere for 'asteroids', a pyramid for the player) and store them in the cache. Then I create an entity for the player, and assign it the pyramid model.
For the asteroids I do a little trick to get more variation. Based on the sphere model, I create new models which point to the same vertex buffers (where the sphere shape is stored in the video card), but add a scaling matrix to alter the size.
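A sketch of the trick, with made-up names; the point is that no new vertex data is uploaded, the new model just reuses the sphere's buffers with its own scale:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A model is a handle to vertex buffers already on the video card,
// plus its own scaling matrix.
struct Model {
    unsigned int vao;    // OpenGL handle to the shared sphere buffers
    int indexCount;
    glm::mat4 scale;
};

Model makeAsteroidModel(const Model& sphere, float size) {
    Model m = sphere;    // same buffers on the card, no new upload
    m.scale = glm::scale(glm::mat4(1.0f), glm::vec3(size));
    return m;
}
```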
I make about fifty asteroids with random sizes and assign them random positions about the starting location for the player. Then during rendering I go through the list of entities, apply the scaling transformation and draw the result. Happily it works with very little additional fussing.
For the demo I only have the cockpit view working, which was pretty easy to implement by giving the camera the same reference frame as the player. It is available for download as usual: Framework-0.0.5r
Since it doesn't work on all machines just yet, I also created a short video of the demo.
So what's next? I should have the chase view working, and the ability to switch between the two views. If I have the time, I'll even throw in something special.
Saturday, September 3, 2011
Angular Momentum
A couple of posts back, when dealing with rotation, I wondered how I could model angular momentum. Linear momentum is easy enough to represent with a velocity vector which you add to the entity's position every frame. Rotation, surprisingly, shares several characteristics with velocity: in physics, angular velocity is also represented with a vector, its direction determining the axis of rotation and its length the angular speed. I figured this was, after all, everything I needed, so it made sense to try to use it.
It wasn't too much work to add such a vector to the entity object, and modify the code so that pressing the rotation keys adds to it. The rather clever bit, I think, is that I store the vector in world coordinates. By keeping track of the frame of reference of my objects I can easily convert between local and world coordinates. This matters because I add rotation along the three local axes, but convert the addition to world coordinates before adding it to my rotation vector.

The farther away from the axis of rotation, the greater the change in position.
Then in the update logic I rotate along the vector's axis by an amount determined by its length. One thing to keep in mind though is that to make the rotation look realistic, I have to center models on their center of mass. Easy enough on regular geometric shapes, but I'll have to figure out something for complex models, or models with moving parts.
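A sketch of that update, assuming orientation is kept as a quaternion and GLM for the math (the post doesn't say how the framework actually stores it):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// angularVelocity is in world coordinates: its direction is the axis,
// its length the angular speed (radians per second, in this sketch).
void updateOrientation(glm::quat& orientation,
                       const glm::vec3& angularVelocity, float dt) {
    float speed = glm::length(angularVelocity);
    if (speed > 0.0f) {
        glm::quat spin = glm::angleAxis(speed * dt, angularVelocity / speed);
        orientation = spin * orientation;  // world-space axis: left-multiply
    }
}
```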
The demo for this is available for download: Framework-0.0.4r
It's fun to mess around with. There's a weird effect when you spin up significantly: the refresh rate starts to match the turning speed, and the sphere appears to slow down or rotate backwards.
You can also hopelessly lose control of the thing if you mix too much spin in different directions. It should be possible to recover from the perspective of someone inside, since you can easily break down the spin one axis at a time. That's something for the next update, though.
Thursday, September 1, 2011
Modeling a sphere, the new way
Back when I began this blog, almost a year ago, I set a goal for myself of drawing a sphere. Back then I didn't know about the fixed pipeline, and how it had been replaced by shaders and a more flexible approach. I was pretty much learning as I went, and I was just starting out.
Now, on my 50th post, I have a bit more experience under my belt, and it's interesting to see how far I've come and how my approach to doing exactly the same thing has changed. It hasn't been easy or smooth, and I've had to start over three or four times, but through it all I've progressed.
Last year my approach had used the fixed pipeline, and I was sending triangle strips to be rendered, one slice of the sphere at a time. This time, what I've done is create a set of vertices, and build the triangles by connecting the dots. By creating a vertex buffer array, the information is stored in the video memory meaning I don't have to send a load of data to the graphics card each frame.
Fair warning, there's a bit of geometry ahead. I'll try to be clear.

Same as before, I'm cutting the sphere like a globe, with parallels running east to west and meridians running north to south.

In spherical coordinates, parallels correspond to theta values and meridians to phi values.
I want to find the positions of the points at the intersections, which are the ones I'll use as vertices for my triangles. My approach is to loop through the grid, finding the spherical coordinates of each point (radius, the angle theta from the vertical axis, and the angle phi around that axis) and translating them to Cartesian coordinates (x, y, z).
To make matters easier I'll create a sphere with radius 1. If I want to use a larger sphere later, I can just scale the model accordingly. I'm also going to be working out the normals for each point. Since this is a sphere with radius 1, though, the normals will be the same as the coordinates.
Working out the spherical coordinates of the points is simple enough. Theta varies between 0 and pi, and phi between 0 and 2*pi. For a sphere of a given resolution, determined by the number of cuts and slices I use, I can determine the step between grid points along each coordinate. With three cuts, for example, the theta values would be 0 (the north pole), pi/4 (45°), pi/2 (90°, the equator), 3*pi/4 (135°) and pi (the south pole).
Then it's just a matter of looping through all the possible values for each spherical coordinate, and translating to Cartesian, which is a simple trigonometry operation.
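The translation itself, as a sketch (the layout is my own; the real code also fills in normals, colors and texture coordinates):

```cpp
#include <cmath>
#include <vector>

const float PI = 3.14159265f;

struct Vertex { float x, y, z; };  // for radius 1, the normal is the same

// 'cuts' are the horizontal cuts (parallels), 'slices' the meridians.
// Theta runs pole to pole, phi once around the vertical axis.
std::vector<Vertex> buildSphereVertices(int cuts, int slices) {
    std::vector<Vertex> verts;
    for (int i = 0; i <= cuts + 1; ++i) {
        float theta = PI * i / (cuts + 1);      // 0 north pole, pi south
        for (int j = 0; j < slices; ++j) {
            float phi = 2.0f * PI * j / slices; // angle around the axis
            Vertex v;
            v.x = std::sin(theta) * std::cos(phi);
            v.y = std::cos(theta);              // height along the axis
            v.z = std::sin(theta) * std::sin(phi);
            verts.push_back(v);
        }
    }
    return verts;           // note: the poles repeat once per slice
}
```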
I now have a list with all the points on the sphere. The next thing I need is a list of indices telling me which points make up each triangle. If I had the four points

1 ( 0, 1, 0)
2 (-1, 0, 0)
3 ( 0,-1, 0)
4 ( 1, 0, 0)
I could use the two triangles defined by the points 1,2,4 and 2,3,4 to build a square, for example. With the sphere it's the same. Knowing how I traverse the sphere in the first place, I can determine which points build the triangles on the surface. It's important to note that the polar caps behave differently than the main body, but that special case is simple enough to handle.
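A sketch of the index generation, assuming the vertex layout from the previous sketch (ring i of the grid starts at index i * slices). The naive version below emits degenerate triangles at the polar rows; the special handling mentioned above would skip those.

```cpp
#include <vector>

// Two triangles per grid quad, split the same way as the square above.
std::vector<unsigned int> buildSphereIndices(int cuts, int slices) {
    std::vector<unsigned int> indices;
    int rings = cuts + 2;                          // includes both poles
    for (int i = 0; i + 1 < rings; ++i) {
        for (int j = 0; j < slices; ++j) {
            unsigned int a = i * slices + j;
            unsigned int b = i * slices + (j + 1) % slices;  // wrap seam
            unsigned int c = (i + 1) * slices + (j + 1) % slices;
            unsigned int d = (i + 1) * slices + j;
            indices.push_back(a); indices.push_back(b); indices.push_back(c);
            indices.push_back(a); indices.push_back(c); indices.push_back(d);
        }
    }
    return indices;
}
```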
After I've stored all this information in arrays, I need to feed it to the video card. I build a vertex buffer array, which stores information about which buffer objects to use, and several vertex buffer objects to actually hold the data. For each point I'm storing its position, normal, color, and texture coordinates. Add the index array telling the card how to group points into triangles, and I need five buffers in all. On the plus side, that's a whole lot of information I no longer need to worry about. When I next wish to draw the sphere, I simply bind the vertex buffer array and draw it.
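A hedged sketch of that upload, using the raw GL calls: attribute locations and variable names are assumptions, `verts` and `indices` are the arrays from the sketches above, and the normal, color and texture buffers follow the same pattern as positions.

```cpp
GLuint vao;
GLuint buffers[5];   // positions, normals, colors, texcoords, indices

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                   // records the bindings below
glGenBuffers(5, buffers);

glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);           // positions
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
             verts.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
// ... buffers[1] to buffers[3]: normals, colors, texture coordinates

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[4]);   // triangle indices
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices.size() * sizeof(unsigned int),
             indices.data(), GL_STATIC_DRAW);

// After this, drawing the sphere is just:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);
```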
The result is visible in the demo for Framework-0.0.3r, where I swapped the old single-triangle model for a fairly detailed sphere.