Sunday, May 22, 2011
Eventually, any project requires that we put some writing somewhere. This is a common issue in OpenGL projects, since OpenGL has no native support for drawing text to the screen. After a look around at possible solutions, I've decided to go with FTGL, which exposes the functionality of FreeType 2 in a hopefully easy-to-use way.
Right now my interest is to throw some information up on the screen. Things like frames per second, for starters, so I have some notion of how different changes and platforms affect performance. Eventually, when more of a GUI exists, having a text library already in place will be useful.
Thankfully SDL offers a handy way of getting elapsed time in milliseconds, as opposed to system time, which only has one-second resolution.
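Getting an FPS counter out of that should be straightforward; a rough sketch of the plan, using SDL_GetTicks (the names and layout are placeholders, not final code):

// Minimal FPS counter sketch using SDL_GetTicks (SDL 1.2). Names are placeholders.
#include <SDL/SDL.h>
#include <cstdio>

static Uint32 lastReport = 0;   // time of the last report, in milliseconds
static int    frameCount = 0;   // frames rendered since the last report

void countFrame()
{
    ++frameCount;
    Uint32 now = SDL_GetTicks();          // milliseconds since SDL was initialized
    if (now - lastReport >= 1000)         // report roughly once per second
    {
        double fps = frameCount * 1000.0 / (now - lastReport);
        printf("FPS: %.1f\n", fps);       // later this would be drawn with FTGL instead
        frameCount = 0;
        lastReport = now;
    }
}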
Saturday, May 21, 2011
Noisy terrain
A quick update. Waiting for a few books to arrive, but in the meantime I've been messing around and finally decided to add some funky terrain.
The way this works is that I store the values returned by the simplex function, and when I draw the ground I replace each point's height with the stored value. The grid in the picture is 100x100. That's 10,000 squares, or 20,000 triangles. I store the heights in double precision, 10,000 points in all. A double, in this implementation, is 8 bytes, so it takes 80,000 bytes (about 78 KB) just to store the heights.
Of course, there are ways to make it better. The important thing is that it looks pretty nifty.
That's the result of applying the simplex function only once, by the way. By running it several times at different frequencies and amplitudes, more interesting terrain should be produced.
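As a rough illustration of that idea (simplex2d here stands in for whatever noise function I end up using, and the whole thing is a sketch, not my actual code):

// Hypothetical sketch of summing several noise octaves for more detailed terrain.
// simplex2d is assumed to return values in roughly [-1, 1].
double fractalNoise(double x, double y, int octaves)
{
    double total = 0.0;
    double frequency = 1.0;
    double amplitude = 1.0;
    for (int i = 0; i < octaves; ++i)
    {
        total += simplex2d(x * frequency, y * frequency) * amplitude;
        frequency *= 2.0;   // each octave adds finer detail...
        amplitude *= 0.5;   // ...with a smaller contribution
    }
    return total;
}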
Sunday, May 15, 2011
Creating a world
The complex bit for this next leg of the journey, creating a world, is being able to cut the world up into small chunks that can be shown to the player at any point. I can think of two ways of solving this.
One is to load only the area immediately around the player, running the algorithms for terrain generation on the fly. The downside to this is that it is computationally intensive, and the terrain suffers from not being able to have several passes done, so it's going to be simplistic. The upside is that it is quick and easy to get it working.
The alternative is to create the world on a previous pass and store it in some manner, then send it to the program to display. This requires some computation upfront, and space to store the results, but allows a more complex world to be created since you can run several functions over the data set.
Clearly, in either case, you can't try to display the whole world at once. Instead, the attempt should be to display only portions at a time. An approach to try is to use octrees to store the data in memory, once it has been created, and pass it to the program. I'm still not sure how that would work, actually. I have to do some reading up.
Saturday, May 14, 2011
Simplex Noise, done!
Well, I managed to get this working. It took entirely too long, had to deal with a few silly mistakes, but it is done.
Wow, underwhelming.
Interestingly, it's fairly slow. I'll have to find a few different ways of doing it. I also sort of cheated a bit: after my implementation of the simplex algorithm refused to work (it would just produce 0 everywhere), I copied someone else's. I'll work on fixing that.
It's also given me an appreciation of just how computationally intensive it is. When run without compiler optimizations it crawled along, although it's fairly smooth otherwise.
With this worked out, I can work on the next part of the project. That's going to be creating a sphere which is deformed by the noise I generate. The good thing is that it only requires calculating noise at the vertices that need it, as opposed to a texture, where every pixel has to be computed.
Another aspect to look into is combining the noise with itself, to create smaller ripples and such which result in more realistic and detailed, sharper textures.
Finally, I've been looking into getting some kind of Source Control software. I'm torn between Subversion and Git right now. I'll be looking at reviews and the like to help me make a decision.
That's all from me right now.
Wednesday, May 11, 2011
Simplex Noise, almost done
I have been working with a couple of implementations of Simplex noise, trying to make sense of it so I can describe it in the technical specification. I've made some decent progress, even. I'm at the point where I could grab the Java implementation, port it to C++ and get it to work.
But I won't.
The bit I'm at right now is the attenuation function. Once I've figured out the influence of each vertex of a simplex on a point within it (where that point falls on each vertex's gradient), I have to attenuate the gradient so that the vertex's influence ends at the edge of the simplex and doesn't spill over. While I grasp why this is necessary, I'm not so certain about the numbers I saw being used in other implementations. They all agree, so I expect I can figure this out easily enough. But I'm trying to reach that point first.
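For reference, this is roughly what that per-corner attenuation looks like in the common reference implementations, the source of the numbers I mentioned (a sketch of their approach, not my own code yet):

// Per-corner contribution as used in common 2D simplex noise implementations.
// (x, y) is the point's offset from the corner, (gx, gy) is the corner's gradient.
double cornerContribution(double x, double y, double gx, double gy)
{
    double t = 0.5 - x * x - y * y;    // radial falloff: influence ends at the corner's reach
    if (t < 0.0)
        return 0.0;                    // this corner contributes nothing here
    t *= t;
    return t * t * (gx * x + gy * y);  // t^4 times the gradient dot product
}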
There definitely is some merit to writing this all down, as opposed to trying to work it out in my mind as I code it. I'll have to add drawing when I get the chance.
Back to trigonometry, I guess.
Spec'ing Out
So my last post offered a functional specification for a simple program with a simple goal. Functional specifications describe the way the program works.
This is all well and good, and every project should start with something along those lines. However, when I tried to work that out I found myself stuck again. Not because I couldn't do something (the way I was stuck with the textures before), but because I didn't know the best way to go about doing what I wanted to do.
To solve that, you need a technical specification. The two specs essentially design the program in English, before you get down to translating the program for the computer. So coming up, I'll be making the technical specification for this little program. Not now, though, because it's pretty late.
Friday, May 6, 2011
So Much Noise
I have decided that some of the parts I need for the overall project are modular enough that it may be best to develop them in a stand-alone project before adding them to the larger project.
One such example is the noise function. I am torn between standard Perlin noise and the improved Simplex noise. Because I'm going to be running it a lot, however, the speed benefit of Simplex noise probably makes it the better choice.
The project I'm planning to make will be a single screen with a changing texture, the texture being just two dimensional Simplex noise.
Wednesday, May 4, 2011
Stuck on textures
I've been working on creating textures on the fly, instead of reading them from a file. Having issues sticking them on a quad. Not sure what the problem is. Will keep at it until I figure it out.
--
UPDATE
--
OK, that got fixed. I had to set up some parameters before it would let me get the texture working. Some basic texturing is now in effect, so I can get back to working out how to make noise.
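I didn't write down which parameters, but the usual culprit when a freshly created texture refuses to show up is the minification filter, which by default expects mipmaps that a single glTexImage2D call never provides. A sketch of the kind of setup that's typically needed (width, height and pixels stand in for the buffer built from the noise):

// Typical texture setup; the gotcha is usually GL_TEXTURE_MIN_FILTER, whose
// default expects mipmaps. width, height and pixels are placeholders for the
// RGB buffer generated from the noise.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps needed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);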
Thursday, April 28, 2011
Getting back in the swing of things.
Hoo boy. Been a while.
So things got complicated a bit ago. But I'm not giving up so easily. So, last time I had the camera set up and working. Next up, I mean to get a decent noise generator going. Probably going to use simplex noise. It is a quicker variant of the better known Perlin noise, aimed at solving some of the weaknesses the original implementation had.
But before I can do that, I want to be able to see the output of it. The easiest way to do that is to create a texture from the noise, then slap the texture somewhere to watch the resulting noise. Which means I have to get the hang of creating textures and slapping them on stuff.
So for the next part, I'm going to be working on that. Also, a note on a problem I had when setting up a project in Code::Blocks: I have to set up the correct search directories.
In Build Options... -> Search Directories, under the Compiler tab I have to add D:\SDL-1.2.14\include, and under the Linker tab, D:\SDL-1.2.14\lib. The missing Linker entry was what was causing the linker to be unable to find the SDL symbols.
Monday, November 22, 2010
So awesome!
There's nothing quite like the feeling of accomplishment getting something done gives. Best natural high EVER!
What is it that I accomplished? Nothing much, just got a true six-degrees-of-freedom camera to work correctly and fly around a boring plane. It can roll, it can climb, it can move forward; it can do pretty much everything I want it to do. The control scheme I'm using is WASD to move around the plane, R/F to climb or fall, and Q/E to roll clockwise and counterclockwise, plus what I had done before: the mouse for pitch and yaw.
I'm very happy, but I also need to document how I do this.
Keyboard events are generated whenever someone presses or releases a key. The camera class has a number of boolean toggles that are flipped whenever the corresponding event happens. Then, during the camera update, it checks for each of the toggles and if it is true, the camera is moved along the corresponding axis. Here, the use of vectors to store the camera's position and view pays off, since with these already in place advancing is trivial.
To move forward, I multiply the view vector by a scalar that determines my speed. The resulting vector is added to my camera's position vector, and it's done. To move back, it's the same but the resulting vector is subtracted from the camera's position. The other movements follow the same logic but along the up and right vectors. Finally, roll is a rotation of the up vector around the view vector, and is done with quaternions the same as yaw and pitch.
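A sketch of that movement logic (Vec3 here is a stand-in for whatever vector class the camera actually uses, not my exact code):

// Sketch of the movement described above. Vec3 is a stand-in for my vector class.
struct Vec3 { double x, y, z; };

Vec3 scale(Vec3 v, double s) { Vec3 r; r.x = v.x * s; r.y = v.y * s; r.z = v.z * s; return r; }
Vec3 add(Vec3 a, Vec3 b) { Vec3 r; r.x = a.x + b.x; r.y = a.y + b.y; r.z = a.z + b.z; return r; }
Vec3 sub(Vec3 a, Vec3 b) { Vec3 r; r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z; return r; }

// position and view are the camera's stored vectors; speed is the scalar.
void stepForward(Vec3& position, Vec3 view, double speed)
{
    position = add(position, scale(view, speed));   // forward: along the view vector
}

void stepBackward(Vec3& position, Vec3 view, double speed)
{
    position = sub(position, scale(view, speed));   // back: against the view vector
}
// Climbing/falling and strafing work the same way along the up and right vectors.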
I'm very excited to be able to get around the world like this. Now I just need to populate this world with something interesting to visit. The next few updates should prove a lot more interesting, now that the basic framework at least is done.
Sunday, November 21, 2010
A small step for man, a giant leap for the game
I have finally managed to get the camera to work! I control the vertical! I control the horizontal!
Ok, so the mouse now moves the camera around. After days and hours of experimentation, I have figured out how it all fits together. I am making a 6DOF camera, although right now only attitude and bearing are affected (pitch and yaw).
The way this works is that the camera stores three vectors: the position of the camera, the view, and the up vector. Position is fairly straightforward. View is the direction in which the camera is looking. The up vector serves to orient the camera, indicating which way is up. All pretty self-explanatory.
I then use the gluLookAt function to set the correct perspective. One warning about the way I do this: my view vector stores the direction the camera is facing, but gluLookAt expects, as one of its parameters, the position of something you're looking at, which is then used as the center of the screen. The way I get this position is to add the camera's position and the view vector and pass the result along to gluLookAt.
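In code, that call amounts to something like this (a sketch; the x/y/z field names on the vectors are assumed):

// Sketch of the gluLookAt call described above. position, view and up are the
// camera's stored vectors; their x/y/z field names are assumed.
double cx = position.x + view.x;   // target point = position + view
double cy = position.y + view.y;
double cz = position.z + view.z;
gluLookAt(position.x, position.y, position.z,   // eye
          cx, cy, cz,                           // center: the point being looked at
          up.x, up.y, up.z);                    // which way is up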
gluLookAt then subtracts the position from the target to get its own view vector, which it then normalizes. So gluLookAt is undoing some of my work. I will write a function of my own to do what gluLookAt does, to avoid the unnecessary step, but it isn't a priority at this time.
The trick is how to get the camera to rotate with what I have. The solution I am using is quaternions to describe the rotations. The way this works is by incrementally rotating the view and up vectors with the quaternions.
I use the SDL to detect when the mouse moves, which creates an event. This provides me with both absolute and relative movement. Absolute movement provides the screen coordinates of the mouse, with (0,0) being the top left corner. Relative movement is the accumulated movement since the last event. Because the program doesn't keep track of the mouse outside of the screen, I have to warp the mouse back to the center of the screen each time it moves away.
This was creating a problem where the warp generated a mouse movement event that was the opposite of the movement itself. To fix it, I am now checking the absolute position of the mouse when I catch an event, and if it is at the center I ignore that particular event. Since it can only be at the center right after warping, and any movement I care about draws it away, I don't lose anything.
Once I have the relative movement, I multiply it by a sensitivity factor and store it as the number of degrees the camera moved in that frame. Horizontal movement of the mouse affects the bearing of the camera, while vertical movement affects the attitude.
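Roughly, the mouse handling looks like this (SDL 1.2; the screen size and sensitivity values are placeholders, not my actual settings):

// Sketch of the mouse handling described above (SDL 1.2).
// SCREEN_W, SCREEN_H and SENSITIVITY are placeholders for my actual settings.
const int    SCREEN_W = 800, SCREEN_H = 600;
const double SENSITIVITY = 0.1;   // degrees of rotation per pixel of movement

void handleMouseMotion(const SDL_MouseMotionEvent& motion,
                       double& bearingDegrees, double& attitudeDegrees)
{
    // Ignore the event generated by warping the cursor back to the center.
    if (motion.x == SCREEN_W / 2 && motion.y == SCREEN_H / 2)
        return;

    bearingDegrees  = motion.xrel * SENSITIVITY;  // horizontal movement -> bearing
    attitudeDegrees = motion.yrel * SENSITIVITY;  // vertical movement -> attitude

    SDL_WarpMouse(SCREEN_W / 2, SCREEN_H / 2);    // keep the cursor in the window
}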
So, on to the rotations themselves. Changes in bearing are described as a rotation around the up vector of the camera. This modifies the view vector, but leaves position and up unchanged. Quaternions are used here to describe this rotation. I use a convert_axis_angle function, using the up vector and the angle to build the quaternion and then normalize it. I then create a quaternion from my view vector, which for any given vector v = [ x , y , z ] , q = [ 0 , v ] .
The rotation of the view vector is the sandwich product q_bearing * q_view * conjugate(q_bearing). This result is stored in view.
For attitude changes, I need to determine the cross product of view and up, which gives me the right vector of the coordinate trio ( view , up , right ). This time, both view and up are modified by the rotation. So I create the attitude quaternion with convert_axis_angle, using the right vector this time, and make both rotations:
q_attitude * q_view * conjugate(q_attitude)
q_attitude * q_up * conjugate(q_attitude)
The results are stored in view and up respectively.
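A sketch of how those sandwich products work out in code (the Quaternion layout and helper names here are illustrative, not the actual class from the tutorial):

// Sketch of rotating a vector by a quaternion, as described above.
// The Quaternion layout and helper names are illustrative only.
struct Quaternion { double w, x, y, z; };

Quaternion multiply(const Quaternion& a, const Quaternion& b)
{
    Quaternion r;
    r.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
    r.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
    r.y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x;
    r.z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w;
    return r;
}

Quaternion conjugate(const Quaternion& q)
{
    Quaternion r; r.w = q.w; r.x = -q.x; r.y = -q.y; r.z = -q.z; return r;
}

// Rotate a vector packed as v = [0, x, y, z] by the rotation quaternion rot.
Quaternion rotate(const Quaternion& rot, const Quaternion& v)
{
    return multiply(multiply(rot, v), conjugate(rot));   // the sandwich product
}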
After this is done, I use gluLookAt as described above, and voila, it is done.
There's room for growth, of course. Roll can be easily appended, once the input is mapped. In the case of roll, the rotation would be around the view vector, and only up would be modified. I also need to add translation, to move the camera around. This is easily done, however, now that I have a view vector. Lateral movement (strafing) is done by taking the cross product of view and up (the right vector) and moving along it, vertical movement is done along the up vector, and forward movement is done along the view vector.
Ok, that's it for now. Next up, making the world a more interesting place.
Friday, November 19, 2010
Square One
Well, got the framework back up and running. That was fairly painless, though I noticed my previous posts were a bit light on details. So, just in case something like this ever happens again, the libraries I'm linking to are:
mingw32
SDLmain
SDL.dll
opengl32
glu32
I will try to focus on getting the camera working for now. The camera is after all at the heart of the game. You can't have a 3d space game without freedom in all three dimensions.
Thursday, November 18, 2010
Why backups are important
So it turns out the USB drive where I was keeping my project files died a couple of days ago. Attempts to revive it have not been successful. As a result, I'm back at square one. What I learned up to now will help me get up to speed quickly when I start up again, but it is fairly sad to see all that work gone. :(
Friday, November 12, 2010
Project 1 - Life getting in the way
Had a busy week. Between a short trip and work, I haven't had the chance to fix the camera. I have managed to break it, though, and to get the new computer's development environment up and running. I'll get some more stuff up here as soon as I can.
I might just get the camera up to some basic stuff and continue working on other things to keep everything going.
Wednesday, November 3, 2010
Project 1 - Moving the camera, part 5
I said I'd work on something else, but I couldn't leave the camera as it was. I knew there had to be a better way to do what I'd been doing, and it turns out there is. It's called quaternions, and it's confusing. Thankfully I've found plenty of explanations online, most directed at modeling in 3D. So I'm taking a bit from a few different tutorials to make my camera more robust. The code from NeHe's Lesson: Quaternion Camera Class is coming in particularly handy.
I'm not lifting the camera class completely (it doesn't do everything I need), but the quaternion class is handy and saves me the time it would take to fully grasp how to multiply them and turn them into matrices. So I'm keeping that. I'll still need to read up on them, though. As it is, things mostly worked, though I'm still stuck with the controls being reversed after making a half turn (inverting up and down).
After a couple of days of thinking about it, it's occurred to me that the way I'm going about it might be to blame. Particularly with the mouse. I'm trying to keep track of attitude and bearing, both over a full 360 degrees. I had to, because if I didn't let them move freely (by limiting attitude, for instance, to +/- 90°) then I wouldn't be able to make full revolutions. The problem is that when I go over 90° I start flying inverted, but moving the mouse still adjusts the absolute angle around the fixed y axis. Which inverts it. I could try telling the program to invert the rotation when I go over 90°, but then if I rolled while flying forward I'd have the same problem.
The idea I'm toying with now is to keep track of the camera's frame of reference. I am not yet sure how I'll go about doing this, though some vector algebra and the cross product seem to offer promising opportunities.
Sunday, October 31, 2010
Project 1 - Moving the camera, part 4
This is proving a little more complex than I had hoped. It appears my first attempt was on the right track. No way around mucking with trigonometry I guess. I'll start with rotation first this time around. The idea here is that while standing at the origin, I rotate the world around the origin so I'm looking in the right direction, and then translate to the correct spot.
I need to keep track of three angles: rotation about the x, y and z axes. Let's see. Aha, here's one problem. I've been trying to make two rotations, one after the other, each about a different axis. However, it appears that when using glRotate, the vector you give is relative to the absolute coordinate axes, not the relative coordinates after the previous rotation. This explains the odd behaviors I was finding, where in one direction mouse movement worked as it should and 90° to the right or left it made me roll about the axis.
So I need a way to calculate the direction of the new axis after the first rotation. This means calculating the x and z components (the y component remains 0). The components, for a given rotation, will be the cosine of that angle on the x axis and the sine of it on the z axis.
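In code, per that description (the sign of the z term depends on which way the angle is measured, so take this as a sketch of the idea rather than the final math):

// Sketch of working out the rotated x axis after a rotation about the y axis.
// The sign conventions depend on how the angle is measured; this is the idea,
// not the final math.
#include <cmath>

void rotatedXAxis(double yawDegrees, double& axisX, double& axisY, double& axisZ)
{
    const double PI = 3.14159265358979;
    double radians = yawDegrees * PI / 180.0;  // glRotatef takes degrees, cos/sin take radians
    axisX = std::cos(radians);
    axisY = 0.0;                               // the y component stays 0
    axisZ = std::sin(radians);
}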
There we go, that's much better. I'm going to need to work out all three coordinates when I add rolling to the camera. Now, we add camera movement. Moving forwards and backwards is easy, since we can ignore camera roll as well. I need to first figure out the vector on which I'm moving (camera's z axis), normalize the vector, and move my speed along that vector.
Yes! There we go. Got the forward and back working correctly at last. As I said a couple of times, the way the coordinates work here is kind of confusing. Vertex positions are relative to your last translation, but glTranslate and glRotate work on absolute coordinates. Need to streamline the camera functions, but I have a good start now.
Next up, I'm going to try to draw some of the variables on the screen, for debugging purposes. One of the problems my current implementation is having is that if you do half a turn going up, all the controls end up wonky. Probably a sin or cosine somewhere giving me a negative number where I'm expecting a positive or something like that. One of the reasons I didn't want to muck too much with trigonometry.
Saturday, October 30, 2010
Project 1 - Moving the camera, part 3
Last time I cobbled together the very basics for a camera. Today I mean to make it somewhat more robust, and add mouse input.
Right now the camera is a bunch of variables, and it lacks some important functionality. My plan is to create a class to hold all of the camera information. To know what the camera is looking at, we need a position and a bearing. The position is easy enough to describe. Bearing is a little more complex. Finding a good way to describe it is important since I will need to know the current bearing in order to be able to move forwards, sideways or back.
The bearing can be described in terms of angles about two axes. We take the natural camera bearing (looking towards the negative z axis) as centered on the y and x axes. Increasing the angle with respect to the y axis turns the camera to the left, decreasing it turns it towards the right. Increasing the angle with respect to the x axis, on the other hand, makes the camera look up, while decreasing it makes the camera look down.
I will eventually need to add a third angle, twisting the camera with respect to the z axis. This would roll the camera one way or the other, but due to the effect that can have on further mouse movement I'll leave it aside for now.
I will need to ensure that the camera angles are always between 0° and 360° on the y axis and between 0° and +/-180° on the x axis. Allowing the x axis to perform a full revolution would cause the world to end up bottom-up if you look 'up' too much. This could be confusing, at least until I get the roll movement down.
All this means that there will be two distinct camera modes I will need. One, the one I'm working on now, follows FPS conventions. It properly represents someone walking on a fixed surface. However, it is very poor for a flying or space sim, or people in freefall. The second should provide free movement on all axes, and would be well suited to the rest of the game. It would be simpler overall to create the first type out of the second one, by adding the limitations I described above, but for now I want to be able to move around and the first type of camera is quicker to build.
It also occurs to me that there may be a better way around determining the position and orientation of the camera. The translate and rotate functions I spoke about before, and which I use to move the camera about ultimately create a matrix, which ModelView uses to render the scene. Instead of trying to deal with the trigonometry in the camera class, wasting CPU, I can use some of the OpenGL functions and work on a matrix with these, then feed the matrix directly to ModelView. I'll need to think about how best to do this.
Another good idea is to try and decouple the camera from the keys themselves. I am already using boolean variables for determining if the camera has to be moving. By having a pointer to them in the camera class that can be changed to look at a different key, it should be easy to reconfigure the controls if required.
Matrices in OpenGL are stored as arrays of 16 GLfloats or GLdoubles. Unfortunately, all matrix operations are applied to the current matrix, so it's all the same. I'll still need to do all the work on my own. Pity. I may still be able to use this to be quicker, however. The translation matrix is trivial to make, leaving only the rotation matrix left to finish up. First, we make room for the coordinates.
Now, the bearing. I'll use structs for these, to keep things tidy. We initialize all these to 0. When I move, I need to adjust my position on all three axes... no, this will not work. Too messy. There has to be a more elegant solution. What I have works on a fixed bearing, but as soon as I look around it will break down.
So, back to the drawing board. Thinking about it, there is no reason I can't do translations and rotations during the loop phase, and have the camera already set up for the rendering portion. If I push at the beginning of rendering, and pop at the end, I can work on the camera's position without problem. So, let's see how that works.
Well, it's kind of embarrassing after a couple of days' work, but these few lines of code:
void Camera::update()
{
    // Each flag is a pointer to a boolean toggled by the key press/release events.
    if (*moveForward)
    {
        glTranslatef(0, 0, 1);    // bring the world closer: looks like moving forward
    }
    if (*moveBackward)
    {
        glTranslatef(0, 0, -1);
    }
    if (*strafeLeft)
    {
        glTranslatef(1, 0, 0);    // shift the world right: looks like strafing left
    }
    if (*strafeRight)
    {
        glTranslatef(-1, 0, 0);
    }
}
are all I ended up needing. They should work even as the bearing changes. I will however need a way to work out my position and bearing later. But for now, this is good. Let's see about adding the mouse then.
Well, the mouse works, but it's not behaving the way I'd like it to. Probably because of the way the coordinate system works. Still, I have some ideas about how to get around that. Will continue later.
Project 1 - Moving the camera, part 2
As I was saying in the last post, we will use the SDL to handle key presses and other events. There is a pretty complete tutorial for doing this in the SDL Tutorials page, and I'm going to lift the code from there since I could not possibly do it better. I'm also using the basic framework they offered in the previous tutorial, so it fits in nicely.
I said in the previous update that I would be using the WASD + mouse control scheme. There are a few things to keep in mind when building the camera controls though. For starters, SDL only detects an event when a key is pressed or released. If I want the camera to advance while the 'W' key is pressed, I have to keep track of whether it is currently pressed or not, making the camera move while it is pressed and having it stop when I detect that it has been released. Just something to keep in mind.
Once I have the keyboard commands working, I will get the mouse controls ironed out. To start off, I need to keep track of the camera's position. So I add the camx, camy and camz variables. These will store the current camera position. I also create the boolean variables keyWdown, keyAdown, keySdown, and keyDdown. When polling for events, if I detect one of these keys being pressed I will switch the proper boolean to true, and on a key up event I'll switch it to false again. This has a number of advantages, such as allowing me to detect multiple simultaneous key presses.
Then, on the loop portion of the program, I check the boolean variables. If W is being pressed I move forwards, if A is pressed I move to a side, etc. Doing it quick and dirty, I can get the camera moving on the z and x axes. However, before I add the mouse I will need to figure out what my current facing is, so I can move freely on all axes. I will leave that for tomorrow. For today, having the camera move within the world is a good place to stop.
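A sketch of the event-polling side of that (SDL 1.2; the booleans match the names above, the surrounding structure is illustrative):

// Sketch of the key handling described above (SDL 1.2). The booleans match the
// names in the post; the surrounding structure is illustrative.
bool keyWdown = false, keyAdown = false, keySdown = false, keyDdown = false;

void pollInput(bool& running)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        if (event.type == SDL_KEYDOWN || event.type == SDL_KEYUP)
        {
            bool pressed = (event.type == SDL_KEYDOWN);   // false on release
            switch (event.key.keysym.sym)
            {
                case SDLK_w: keyWdown = pressed; break;
                case SDLK_a: keyAdown = pressed; break;
                case SDLK_s: keySdown = pressed; break;
                case SDLK_d: keyDdown = pressed; break;
                default: break;
            }
        }
        else if (event.type == SDL_QUIT)
        {
            running = false;
        }
    }
}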
Thursday, October 28, 2010
Project 1 - Moving the camera
So, I have a working terrain, but the camera is fixed. I want to change that. Being able to see it from different angles without having to change the code and recompile would be handy.
I've mentioned a few times that OpenGL is kind of strange about how moving around the world works. Since I'm going to be talking about the camera now, it seems a good moment to delve into that. The first thing that should be explained I guess is that OpenGL doesn't have a camera. It has a viewport. The viewport never moves, in the sense that the absolute origin of OpenGL's coordinate system is always the middle of the viewport.
Instead, in order to have different views, we have to move the rest of the world. Why do I talk about a camera then? Because it is easier for me to think that way. A camera moving around a world is easier to think about than a world moving around a fixed point in space. Moving the world 5 units to the left is functionally the same as moving the 'camera' 5 units to the right, but the latter is easier to picture.
So, how do we move everything around then? OpenGL has this thing called the ModelView matrix. While working with this matrix, we can change the origin of the coordinate system. There are a couple of functions that do this, glTranslatef and glRotatef. glTranslatef moves the origin by the number of units you tell it to, while glRotatef changes its orientation. After modifying the matrix with these functions, the points we pass to the program are transformed by these operations and placed in the proper place.
So, as soon as the program starts, if you draw something at 0,0,0 it will appear at the center of the screen. But if you then call glTranslatef( 5 , 0 , 0 ), and draw something at 0 , 0 , 0, the resulting image appears at 5,0,0, or to the right of the screen. If you then call glRotatef( 180 , 0 , 0 , 1 ) and draw at 1 , 0 , 0 the drawing will appear at 4 , 0 , 0 . We basically draw on a relative coordinate system centered where we want with the translate and rotate functions, and then it is drawn on the absolute coordinate system centered on the screen.
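For the record, that worked example as code (drawSomething is just a stand-in for whatever geometry actually gets drawn):

// The worked example above, as code. drawSomething() is a placeholder.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
drawSomething(0, 0, 0);          // appears at the absolute origin, center of the view

glTranslatef(5, 0, 0);           // move the local origin 5 units along +x
drawSomething(0, 0, 0);          // now appears at absolute (5, 0, 0)

glRotatef(180, 0, 0, 1);         // spin the local axes 180 degrees about z
drawSomething(1, 0, 0);          // local (1, 0, 0) ends up at absolute (4, 0, 0)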
How does this relate to the camera then? As I said, to move the 'camera' around, what we do instead is move the world around. If before we start drawing we translate 5 units to the right, when we then start drawing everything will end up 5 units to the right. Because everything maintains its position relative to everything except the viewport, it ends up looking as though the camera had moved 5 units to the left.
This covers the OpenGL side of things. If we want to move inside the world in a given direction, we have to move everything else in the opposite direction. How do we get our commands to the system, though? The SDL will handle recognizing when an 'event' happens. An event is any keystroke, mouse movement or whatever other input method we use to give commands to the computer.
I've decided to go with a fairly standard set of commands to start with. 'W' and 'S' move the camera forwards and backwards. 'A' and 'D' move it sideways. Horizontal mouse movements will rotate it around the vertical axis, letting us look around. Vertical mouse movement will rotate it around the horizontal axis, letting us look up and down. This should be enough to enable us to move everywhere in the world.
Next up, how to get the SDL to catch our commands.
Project 1 - Building a solar system, part 2
Last time we'd created a bunch of objects. It would be nice to be able to draw them, or something like them. OpenGL already has a coordinate system running. The sun dominates our solar system, meaning everything revolves around it, and so we'll put the sun at the origin.
For this part of the project, we won't give each planet the right proportions and distance from the sun. For one thing, if we did, we wouldn't be able to see most of the stuff with a quick glance. So we'll make the sun 10 units in radius, planets 5, and moons 1. The orbit of each planet will be 20 units from the previous one, while the moons orbit 2 units from each other.
First, we'll draw the planets on a line, make sure everything works. While OpenGL's coordinate system is a good thing to have, it is also somewhat strange to deal with. We won't be drawing everything where it goes. Rather, we move around the world and draw where we are standing. This requires a lot of pushing and popping matrices, and I am not sure yet how big a cost that carries. But I can look into optimizing in the future.
What I want, essentially, is a function that draws a circle for the current celestial body and then works down its list of satellites. I also need a way of differentiating between stars, planetary bodies and moons. For a first attempt, I go with
enum bodytype { star , planet , moon , asteroid_field } ; // this would grow as needed
void drawSystem( CelestialBody * parent , bodytype body = star ) ;
The steps that I need to go through are: draw the main body at the current location, then, for each object in the list, move to its position and call this function with the appropriate parameters. This should recursively build our entire system. Speaking of the position of each object, in OpenGL the three dimensions start off oriented so that the x axis increases to the right, the y axis increases toward the top of the screen, and the z axis increases towards the viewer. I'll be placing the plane of the system on the plane y = 0, so as things move away from the star they'll be towards the right, and we'll be looking at them from above. Of course, once I get some code to move the camera, the original orientation of the axes won't matter.
I am writing this as a stand alone function for now, but because of how intimately the function is tied to CelestialBody objects it should probably be a method of the class. Something to keep in mind going forwards.
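A sketch of that recursion (the CelestialBody fields used here, radius, orbit and satellites, are assumptions for illustration, not the actual class, and the bodytype parameter is left out for brevity):

// Sketch of the recursive drawing described above. The CelestialBody fields
// (radius, orbit, satellites) are assumed for illustration only.
#include <cmath>
#include <vector>

struct CelestialBody
{
    double radius;                           // drawn size
    double orbit;                            // distance from its parent
    std::vector<CelestialBody*> satellites;  // planets for a star, moons for a planet
};

void drawCircle(double radius)
{
    const double PI = 3.14159265358979;
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i < 32; ++i)             // 32 segments is plenty for a placeholder
        glVertex3f(radius * std::cos(i * 2.0 * PI / 32), 0, radius * std::sin(i * 2.0 * PI / 32));
    glEnd();
}

void drawSystem(CelestialBody* parent)
{
    drawCircle(parent->radius);              // draw the body itself at the current origin
    for (size_t i = 0; i < parent->satellites.size(); ++i)
    {
        glPushMatrix();                      // remember where we are
        glTranslatef(parent->satellites[i]->orbit, 0, 0);  // move out along the orbit
        drawSystem(parent->satellites[i]);   // recurse into the satellite's own system
        glPopMatrix();                       // return to the parent's origin
    }
}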
After fixing a few silly mistakes, this is what we are left with:
Pretty sweet, huh? There's a little trouble with the moons running into the following planet; the orbits are too short. But it behaves mostly as expected.
That's a good place to leave for now. Next up, some camera controls.