I have this obsession with doing realtime character animations based on inverse kinematics and morph targets.
I got a fair way with Animata, an open-source (FLTK-based, sadly) IK-chain-style animation program, and even ported its rendering code to a variety of platforms (Java/Processing and iPhone). Video of the Animata renderers: http://ats.vimeo.com/612/732/61273232_100.jpg
However, I've never been convinced that their code is particularly optimised: it takes a lot of simulation on the CPU to render each frame, which seems unnecessary to me.
I am now starting a project to make an iPad app that relies heavily on realtime character animation. Leafing through the iOS documentation, I discovered a code snippet for a 'two bone skinning shader':
// A vertex shader that efficiently implements two bone skinning.
attribute vec4 a_position;
attribute float a_joint1, a_joint2;
attribute float a_weight1, a_weight2;

uniform mat4 u_skinningMatrix[JOINT_COUNT];
uniform mat4 u_modelViewProjectionMatrix;

void main(void)
{
    vec4 p0 = u_skinningMatrix[int(a_joint1)] * a_position;
    vec4 p1 = u_skinningMatrix[int(a_joint2)] * a_position;
    vec4 p  = p0 * a_weight1 + p1 * a_weight2;
    gl_Position = u_modelViewProjectionMatrix * p;
}
Does anybody know how I would use such a snippet? It is presented with very little context. I think it's what I need to be doing to do the IK chain bone-based animation I want to do, but on the GPU.
I have done a lot of research and now feel like I almost understand what this is all about.
The first important lesson I learned is that OpenGL ES 1.1 is very different from OpenGL ES 2.0. In ES 2.0, the principle is that arrays of data are fed to the GPU and shaders do the rendering work. This is distinct from ES 1.1, where more is done in normal application code with glPushMatrix/glPopMatrix and various inline drawing commands.
An excellent series of blog posts introducing the modern approach to OpenGL is available here: Joe's Blog: An intro to modern OpenGL
The vertex shader above runs a transformation on a set of vertex positions. 'attribute' variables are supplied per vertex, while 'uniform' variables are shared across all vertices.
To make this code work you would feed in an array of vertex positions (the original, un-posed positions), along with corresponding arrays of joint indices and weights (the other attribute variables), and the shader would reposition each input vertex according to its attached joints.
The uniform variables are the skinning matrices (one transform per joint, recomputed each frame by the animation or IK code) and the model-view-projection matrix, which transforms the skinned model-space position into clip space for display.
Relating this back to iPhone development, the best thing to do is to create an OpenGL ES template project and pay attention to the two different rendering classes: one for the fixed-function OpenGL ES 1.1 pipeline and one for OpenGL ES 2.0. Personally I'm throwing out the ES 1.1 code, since it applies mainly to older iPhone devices; I'm targeting the iPad, so it's no longer relevant, and I can get better performance with shaders on the GPU using ES 2.0.