So I've decided to rewrite an old ray tracer of mine, which was written in C++, in C#, leveraging the XNA framework.
I still have my old book and can follow its notes; however, I am confused about a few ideas and was wondering whether someone could articulate them nicely.
for each x pixel do
    for each y pixel do
        // Generate ray
        // 1 - Calculate world coordinates of the current pixel
        // 1.1 - Calculate normalized device coordinates for the current pixel, -1 to 1 (u, v)
        u = (2.0f * x / WIDTH) - 1.0f;   // 2.0f, since integer division would truncate
        v = (2.0f * y / HEIGHT) - 1.0f;
        Vector3 rayDirection = -focalLength * w + u * u' + v * v';
In the above code, u', v', and w are the orthonormal basis vectors calculated for the given camera (I know reusing the names u and v makes it confusing).
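For reference, here is a minimal C# sketch of that book-style ray generation using XNA's Vector3 (the method name and parameters are mine, and the basis vectors are renamed uBasis/vBasis/w to avoid the u/v clash):

static Vector3 PixelRayDirection(int x, int y, int width, int height,
    Vector3 direction, Vector3 up, float focalLength)
{
    // Orthonormal camera basis (the book's u', v', w)
    Vector3 w = -Vector3.Normalize(direction);                // w points opposite the view direction
    Vector3 uBasis = Vector3.Normalize(Vector3.Cross(up, w));
    Vector3 vBasis = Vector3.Cross(w, uBasis);

    // Pixel centre mapped to [-1, 1] on both axes
    float u = 2.0f * (x + 0.5f) / width - 1.0f;
    float v = 2.0f * (y + 0.5f) / height - 1.0f;

    return Vector3.Normalize(-focalLength * w + u * uBasis + v * vBasis);
}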
If I follow the book and do it the way it describes, it works. However, I am trying to leverage XNA and am getting confused about how to perform the same actions using matrices.
So I've tried to replace those steps with the following XNA code:
class Camera
{
    public Vector3 Position = Vector3.Zero;
    public Vector3 Direction = Vector3.Forward;
    public Vector3 Up = Vector3.Up;
    public float AspectRatio, FOV, NearPlane, FarPlane;
    public Matrix ViewMatrix, ProjectionMatrix;

    public Camera(float width, float height)
    {
        AspectRatio = width / height;
        FOV = MathHelper.PiOver2;   // Math.PI / 2 is a double; MathHelper avoids the cast
        NearPlane = 1.0f;
        FarPlane = 100.0f;
        // CreateLookAt expects a target point, so aim at Position + Direction
        ViewMatrix = Matrix.CreateLookAt(Position, Position + Direction, Up);
        ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(FOV,
            AspectRatio, NearPlane, FarPlane);
    }
}
It's at this point that I'm confused about the order of operations I'm supposed to apply in order to get the direction vector for any pixel (x, y).
In my head I'm thinking: (u,v) = ProjectionMatrix * ViewMatrix * ModelToWorld * Vertex(in model space)
Therefore it would make sense that
Vertex (in world space) = Inverse(ViewMatrix) * Inverse(ProjectionMatrix) * [u, v, 0]
I also remembered something about how the view matrix can be transposed instead of inverted, since it is orthonormal.
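XNA can actually perform that whole inversion for you. Here is a minimal sketch using Viewport.Unproject (assuming XNA 4.0's Viewport constructor; the helper name is mine and it uses the Camera class above), which applies the inverse projection, the inverse view, and the perspective divide in one call:

static Vector3 GetRayDirection(int x, int y, int width, int height, Camera cam)
{
    Viewport viewport = new Viewport(0, 0, width, height);

    // Unproject the pixel at the near (z = 0) and far (z = 1) planes;
    // both points come back in world space
    Vector3 nearPoint = viewport.Unproject(new Vector3(x, y, 0f),
        cam.ProjectionMatrix, cam.ViewMatrix, Matrix.Identity);
    Vector3 farPoint = viewport.Unproject(new Vector3(x, y, 1f),
        cam.ProjectionMatrix, cam.ViewMatrix, Matrix.Identity);

    return Vector3.Normalize(farPoint - nearPoint);
}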
There's really no need to be using matrices for ray tracing; perspective projection just falls out of the system. That's one of the benefits of ray tracing.
Your comments are confusing too.
// 1 - Calculate world coordinates of the current pixel
// 1.1 - Calculate normalized device coordinates for the current pixel, -1 to 1 (u, v)
NDC doesn't have any role in ray tracing, so I don't know what you are talking about here. All you are doing with that u, v code is calculating a direction for the ray based on the virtual grid of pixels you set up in world space. Then you trace the ray out into the scene and see if it intersects with anything.
Really, you don't need to be worrying about different spaces right now. Just put everything into world coordinates and call it a day. If you want to do complicated models (model transforms like scaling and rotation), a model-to-world transform might be needed, but when you first start writing a ray tracer you don't need to worry about that stuff.
If you want to use XNA you can use that camera class, but some of the members are going to be useless, i.e. the matrices and the near and far planes.
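For instance, here is a sketch of ray generation that uses only the FOV and aspect ratio, with no matrices at all (the names are mine; forward/right/up are the camera's world-space basis vectors):

static Vector3 RayThroughPixel(int x, int y, int width, int height,
    Vector3 forward, Vector3 right, Vector3 up, float fov)
{
    float aspect = (float)width / height;
    float halfHeight = (float)Math.Tan(fov / 2.0);               // image plane sits at distance 1

    float u = (2.0f * (x + 0.5f) / width - 1.0f) * halfHeight * aspect;
    float v = (1.0f - 2.0f * (y + 0.5f) / height) * halfHeight;  // flip v: screen y grows downward

    return Vector3.Normalize(forward + u * right + v * up);
}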
The reason for NDC is so that you can map an image width/height in pixels to an arbitrarily sized image plane (not necessarily 1:1). Essentially what I understood was the following:
- You want to convert pixel x and y to a uniform rectangle from -1 to 1 (essentially centering the camera within the viewing frame)
- Apply the inverse of the projection matrix, which uses the FOV, aspect ratio, and near plane, to take the pixel (in NDC coordinates) back into view space
- Apply the inverse of the view matrix to take that coordinate, which is relative to the camera, into world space
- Calculate the direction by subtracting the camera position and normalizing (sketched below)
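A sketch of those steps with XNA's matrices directly (the helper name is mine; note that XNA uses row vectors, so world-to-clip is v * View * Projection, and the inverse path therefore uses Invert(View * Projection)):

static Vector3 GetPixelDirection(float u, float v, Camera cam)
{
    Matrix invViewProj = Matrix.Invert(cam.ViewMatrix * cam.ProjectionMatrix);

    // A point on the near plane in NDC (XNA's NDC depth range is 0 to 1)
    Vector4 ndc = new Vector4(u, v, 0f, 1f);
    Vector4 world = Vector4.Transform(ndc, invViewProj);
    world /= world.W;                                            // undo the perspective divide

    Vector3 worldPoint = new Vector3(world.X, world.Y, world.Z);
    return Vector3.Normalize(worldPoint - cam.Position);
}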