I'm making a Minecraft clone as my very first OpenGL project and am stuck at the box selection part. What would be the best method to make a reliable box selection?
I've been going through some AABB algorithms, but none of them explains clearly what it actually does (especially the heavily optimized ones), and I don't want to use code I don't understand.
Since the world is composed of cubes, I use octrees to take some strain off the ray-cast calculations; basically, the only thing I need is this function:
float cube_intersect(Vector ray, Vector origin, Vector min, Vector max)
{
    // ???
}
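For reference, the usual way to fill in a function with that signature is the "slab" test: intersect the ray with the three pairs of axis-aligned planes and keep track of the overlapping parameter interval. A rough sketch of that, assuming Vector exposes public x/y/z floats (everything else here is illustrative, not a fixed API):

#include <algorithm> // std::min, std::max, std::swap

// Returns the distance t along the ray to the entry point of the box,
// or -1.0f if the ray misses the box or the box is entirely behind the origin.
float cube_intersect(Vector ray, Vector origin, Vector min, Vector max)
{
    const float ro[3]   = { origin.x, origin.y, origin.z };
    const float rd[3]   = { ray.x,    ray.y,    ray.z    };
    const float bmin[3] = { min.x,    min.y,    min.z    };
    const float bmax[3] = { max.x,    max.y,    max.z    };

    float tmin = -1e30f, tmax = 1e30f;

    for (int axis = 0; axis < 3; ++axis)
    {
        if (rd[axis] == 0.0f)
        {
            // Ray parallel to this pair of planes: miss if the origin is outside the slab.
            if (ro[axis] < bmin[axis] || ro[axis] > bmax[axis]) return -1.0f;
            continue;
        }

        // Distances to the two planes of this slab.
        float t1 = (bmin[axis] - ro[axis]) / rd[axis];
        float t2 = (bmax[axis] - ro[axis]) / rd[axis];
        if (t1 > t2) std::swap(t1, t2);

        tmin = std::max(tmin, t1);   // latest entry so far
        tmax = std::min(tmax, t2);   // earliest exit so far

        if (tmin > tmax) return -1.0f; // entry/exit intervals no longer overlap: miss
    }

    return (tmax < 0.0f) ? -1.0f : tmin;
}

As a bonus, the axis that produced the final entry distance tmin is the face the ray enters through, which also solves the face-selection problem.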
The ray and origin are easily obtained with
Vector ray, origin, point_far;
double mx, my, mz;

// Unproject the centre of the screen on the far plane (winZ = 1.0)...
gluUnProject(viewport[2]/2, viewport[3]/2, 1.0, (double*)modelview, (double*)projection, viewport, &mx, &my, &mz);
point_far = Vector(mx, my, mz);

// ...and on the near plane (winZ = 0.0), which gives the ray origin.
gluUnProject(viewport[2]/2, viewport[3]/2, 0.0, (double*)modelview, (double*)projection, viewport, &mx, &my, &mz);
origin = Vector(mx, my, mz);

// Direction from the near-plane point towards the far-plane point.
ray = point_far - origin;
min and max are the opposite corners of a cube.
I'm not even sure this is the right way to do this, considering the number of cubes I'd have to check, even with octrees.
I've also tried gluProject; it works, but it's very unreliable and doesn't give me the selected face of the cube.
EDIT
So this is what I've done: step along the ray and calculate a position in space at each step:
float t = 0.0f;
for(int i = 0; i < 10; i++)
{
    // Sample a point along the ray.
    Vector p = ray*t + origin;

    // Find the visible octree that contains the sampled point.
    while(visible octree)
    {
        if(p inside octree)
        {
            // then call the recursive function until a cube is found
            break;
        }
        octree = octree->next;
    }

    if(found a cube)
    {
        break;
    }

    t += .5;
}
It's actually surprisingly fast, and it stops at the first cube it finds.
As you can see, the ray has to go through multiple octrees before it finds a cube (actually a position in space) - there is a crosshair in the middle of the screen. The smaller the increment step, the more precise the selection, but also the slower it gets.
Working with boxes as primitives is overkill in both memory requirements and processing power. Cubes are fine for rendering, and even there you can find more advanced algorithms that give you a better final image (marching cubes). Minecraft's graphics are very primitive in this sense, as voxel rendering has been around for a long time and significant progress has been made since.
Basically, you should exploit the fact that all your boxes are equally spaced and of the same size. These are called voxels. Raycasting in a grid is trivial compared to what you have now - a broad-phase octree and a narrow-phase AABB test. I suggest you read up on voxels and voxel-set collision detection/raycasting; you will find algorithms that are both easier to implement and faster to run.
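For the grid traversal itself, the classic reference is Amanatides & Woo's "A Fast Voxel Traversal Algorithm for Ray Tracing". A rough sketch of the idea, assuming unit-sized cubes aligned to integer coordinates, the question's Vector type, and a hypothetical solid(x, y, z) lookup into your voxel array; it visits exactly the cells the ray passes through, in order, so there is no step size to tune:

#include <cmath>   // std::floor, std::fabs
#include <cfloat>  // FLT_MAX

bool solid(int x, int y, int z); // placeholder: your own lookup into the voxel array

// Walk the voxel grid cell by cell along the ray and return the first solid cube.
bool raycast_voxels(Vector origin, Vector dir, float max_dist,
                    int &hit_x, int &hit_y, int &hit_z)
{
    // Cell containing the ray origin.
    int x = (int)std::floor(origin.x);
    int y = (int)std::floor(origin.y);
    int z = (int)std::floor(origin.z);

    // Which way to step on each axis.
    int step_x = (dir.x > 0) ? 1 : -1;
    int step_y = (dir.y > 0) ? 1 : -1;
    int step_z = (dir.z > 0) ? 1 : -1;

    // Ray distance needed to cross one whole cell on each axis.
    float t_delta_x = (dir.x != 0) ? std::fabs(1.0f / dir.x) : FLT_MAX;
    float t_delta_y = (dir.y != 0) ? std::fabs(1.0f / dir.y) : FLT_MAX;
    float t_delta_z = (dir.z != 0) ? std::fabs(1.0f / dir.z) : FLT_MAX;

    // Ray distance to the first cell boundary on each axis.
    float t_max_x = (dir.x != 0) ? ((dir.x > 0 ? x + 1 - origin.x : origin.x - x) * t_delta_x) : FLT_MAX;
    float t_max_y = (dir.y != 0) ? ((dir.y > 0 ? y + 1 - origin.y : origin.y - y) * t_delta_y) : FLT_MAX;
    float t_max_z = (dir.z != 0) ? ((dir.z > 0 ? z + 1 - origin.z : origin.z - z) * t_delta_z) : FLT_MAX;

    float t = 0.0f;
    while (t <= max_dist)
    {
        if (solid(x, y, z))
        {
            hit_x = x; hit_y = y; hit_z = z;
            return true;
        }

        // Step into whichever neighbouring cell the ray enters first.
        if (t_max_x < t_max_y && t_max_x < t_max_z) { x += step_x; t = t_max_x; t_max_x += t_delta_x; }
        else if (t_max_y < t_max_z)                 { y += step_y; t = t_max_y; t_max_y += t_delta_y; }
        else                                        { z += step_z; t = t_max_z; t_max_z += t_delta_z; }
    }
    return false; // nothing solid within max_dist
}

The axis you stepped along just before the hit also tells you which face of the cube the ray entered, i.e. the face normal you were missing with gluProject.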
You don't need any explicit octree structure in memory. All that is needed is a byte[,,] array. Just generate the 8 children of a box lazily during the search (the way a chess engine generates child game states).
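To illustrate the lazy-children idea, here is a rough sketch of generating the 8 octants of a box on demand during the search; the Box struct and the visit callback are purely illustrative:

struct Box { int x, y, z, size; }; // cube of 'size' voxels, min corner at (x, y, z)

// Generate the 8 child octants of a box on the fly; nothing is stored permanently.
// 'visit' would typically test each child against the ray and the byte array,
// and recurse only into the ones worth visiting.
template <typename Visit>
void for_each_child(const Box &b, Visit visit)
{
    int half = b.size / 2;
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx)
                visit(Box{ b.x + dx * half, b.y + dy * half, b.z + dz * half, half });
}

The "tree" then only exists implicitly on the call stack, and the byte array stays the single source of truth.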
I'd also argue you don't have to rely on actual raycasts to determine what to render. Given that you are in a predefined grid formation, you really aren't held hostage to "exact visibility" requirements. If you track your camera's position and keep a NSWE compass of sorts, you can also use that to determine whether the GPU buffer should even consider an array of vertices for rendering.
I cover this theory in detail here: https://stackoverflow.com/a/18028366/94167
Using an octree plus the camera's position, distance and bounds, you basically know what the user is seeing without having to resort to ray traces for exact results. If you consolidate your triangles into larger ones for rendering purposes and then use a texture to visually break the large faces back into a cubic form (a sleight of hand), you reduce your vertex count significantly. Then it's just a matter of rendering the hull; by keeping track of the camera's direction and its xyz position, you can get away with letting in some faces that shouldn't be displayed, as that has minimal impact on performance (especially if your shaders also do some of the work).
I'm experimenting further with tracking the camera's centre point to determine its horizontal focal point; from that you can determine the viewing angle, which in turn determines the depth of the chunks you're likely to see in the direction the camera faces.
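As a concrete example of using the camera instead of raycasts for visibility: a cube face can only be seen if its outward normal points roughly towards the camera, so back-facing faces never need to be submitted at all. A minimal sketch of that test, reusing the question's Vector type (the function name is just illustrative):

// A face is potentially visible only if the camera lies on the side its normal points to,
// so faces failing this test can be skipped when filling the vertex buffer.
bool face_towards_camera(Vector face_center, Vector face_normal, Vector camera_pos)
{
    Vector to_camera = camera_pos - face_center;
    float dot = face_normal.x * to_camera.x
              + face_normal.y * to_camera.y
              + face_normal.z * to_camera.z;
    return dot > 0.0f; // back-facing (or edge-on) faces can be culled
}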