Frame animation

Sometimes an image says more than 1000 words. As you can see above, a frame-based animated model is just a rendering of this model in a given position, selected by an index changing over time. It's the exact same technique that has been used for a very long time - in fact since the first video games were created - to animate 2d sprites, but applied to 3d models.
The only real difficulty in achieving this kind of animation is understanding how the model file is structured, in other words where the data is located. Because once read into memory, all you have to do is draw each geometry in the correct order and at the correct speed, as the sketch below shows.
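In its simplest form, this boils down to a few lines. Here is a minimal conceptual sketch; every name in it is hypothetical and belongs to no real API:

#include <stddef.h>

/* hypothetical frame geometry: each frame owns its pre-calculated vertices */
typedef struct
{
    float* m_pVertices;
    size_t m_VertexCount;
} Mesh;

extern Mesh*  g_pFrames;    /* hypothetical pre-built frame geometries */
extern size_t g_FrameCount; /* hypothetical number of frames           */

extern void DrawMesh(const Mesh* pMesh); /* hypothetical draw function */

/* select and draw the frame matching the elapsed time */
void DrawAnimatedModel(double elapsedTime, double framesPerSecond)
{
    // convert the elapsed time to a frame index, looping over the animation
    const size_t index = (size_t)(elapsedTime * framesPerSecond) % g_FrameCount;

    // draw the pre-calculated geometry belonging to this frame
    DrawMesh(&g_pFrames[index]);
}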
The first Quake video game used the Quake Engine, written in the mid-90's by John Carmack, who worked at id Software until 2013. At that time, computers were far less powerful than today, and creating a video game that supported a full 3d environment was a standout achievement, so much so that it is still acclaimed today as one of the major turning points in the gaming world.
In such an engine, only a small part of the computing budget could be granted to model animation in order to keep good overall performance, and the simplest way to achieve such a task was to calculate the geometry of each model position in advance, load them into memory from a file, and render a given position based on an index evolving over time.
However this had a cost: the more complex a model was, the more memory it occupied, the more time it took to render, and of course the heavier the file was. For that reason, such a model was generally limited to a very small number of polygons (2048 max) and a very limited number of frames (256 max).
Basically, the Quake models are binary files divided into 2 main parts: the header and the data. The header is a kind of summary that points to where the important data starts. If you need further information, the following article, on which I based myself to write my own source code, describes in detail how a Quake model file is built, and how to write a Quake model reader.
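For illustration, the header described in that article can be mapped onto a C structure similar to the following one. This is a sketch based on my reading of the format, not code taken from the engine:

#include <stdint.h>

/* Quake I (.mdl) file header, following the layout described in the article */
typedef struct
{
    uint32_t m_ID;             /* magic number, should be "IDPO"       */
    uint32_t m_Version;        /* format version, should be 6          */
    float    m_Scale[3];       /* scale to apply to the frame vertices */
    float    m_Translate[3];   /* translation to apply to the vertices */
    float    m_BoundingRadius; /* radius of the model bounding sphere  */
    float    m_EyePosition[3]; /* eyes position (for view models)      */
    uint32_t m_SkinCount;      /* number of textures                   */
    uint32_t m_SkinWidth;      /* texture width in pixels              */
    uint32_t m_SkinHeight;     /* texture height in pixels             */
    uint32_t m_VertexCount;    /* number of vertices per frame         */
    uint32_t m_PolygonCount;   /* number of triangles                  */
    uint32_t m_FrameCount;     /* number of animation frames           */
    uint32_t m_SyncType;       /* 0 = synchronized, 1 = random         */
    uint32_t m_Flags;          /* model state flags                    */
    float    m_Size;           /* average size of the triangles        */
} MDLHeader;

The skins, texture coordinates, triangles and frames follow the header, in that order, so the header is all a reader needs to locate each data block in the file.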
With the Quake II game, a second version of the Quake Engine was released, which contained several improvements, among others concerning the 3d models, although the basic idea remained the same. First, an OpenGL command system was added, allowing the geometry to be organized in different manners (i.e. by triangles, by triangle strips, or by triangle fans). Also, the textures and the normal table were extracted from the model files and provided instead in separate files, alongside each model file. Finally, a small script file describing the animations was added.
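To illustrate this OpenGL command system, the .md2 commands are, as far as I understand the format, a flat list of integers which may be iterated as in the following sketch (this is not engine code):

#include <stdlib.h>
#include <stdio.h>

/* a sketch showing how the .md2 OpenGL commands may be iterated. Each
   packet starts with an integer n: n > 0 means a triangle strip of n
   vertices, n < 0 a triangle fan of -n vertices, and 0 ends the list.
   Each vertex entry is {float s; float t; int index} */
void IterateGLCommands(const int* pGLCmds)
{
    int n;

    while ((n = *pGLCmds++) != 0)
    {
        const int count = abs(n);

        printf(n > 0 ? "triangle strip, %d vertices\n"
                     : "triangle fan, %d vertices\n",
               count);

        // skip the vertex entries (3 values of 4 bytes per vertex)
        pGLCmds += count * 3;
    }
}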
The Quake II (.md2) models support was not implemented in the CompactStar engine, because I wanted to keep it as simple as possible, and for this project the Quake I models were sufficient. However if you're interested, I implemented such a reader in a demo project, here.
Also, as for the Quake models, a very interesting article describing the Quake II format in detail, as well as a step-by-step explanation of how to write a complete model reader, is available here.
As with the previous opuses, the Quake III game was accompanied by a 3rd version of the Quake Engine, in which the model design began to take a new direction. Mainly, an extremely basic concept of a skeleton appeared. Indeed, the models were divided into several sub-geometries, instead of one unique geometry as in the previous model files, and bones appeared to connect them together. In this way, such models were a kind of hybrid, or rather precursor, of the next generation of model animation techniques which started to appear at this time.
Also, a Quake III model is no longer a unique file as in Quake, or even a small group of files as in Quake II, but rather a kind of packed folder containing several files required for different purposes. First of all, each sub-geometry is now contained in its own .md3 file. Each of them is associated with a .skin file, which describes the resources required by each sub-geometry (e.g. textures) and how it is linked with the other sub-geometries. Shader files were also added, to describe how a given geometry should be rendered, as well as a configuration file, which describes the animation of each geometry, as shown in the excerpt below. The remaining files are the texture images and other required resources.
As for the Quake II model, the CompactStar engine doesn't support the Quake III models. However if you're interested, I wrote such a reader in a demo project, available here.
Once again, the Quake IV game came with a 4th version of the Quake Engine, but this time I barely took a look at how the models were implemented, although it remained interesting. All I know about them is that this time the old frame system was replaced by a skeleton, but I preferred to learn about the Collada and Filmbox formats instead, because they were more commonly used and documented.
However a very interesting description of this format is available here.
First of all, I don't pretend to explain the .mdl format itself here; this excellent document, on which I based all my work, will do it better than anything I would be able to describe. If you're unfamiliar with this file format, I suggest you take a look at it before continuing with the explanations below.
The model reader is mainly written in the mdl.c and mdl.h files. This code reads a Quake .mdl file and populates a CSR_MDL structure, which contains the data the renderer will use to draw the model.
Take a look at the CSR_MDL structure:
/**
* Quake I (.mdl) model
*/
typedef struct
{
    CSR_Model*           m_pModel;
    size_t               m_ModelCount;
    CSR_Animation_Frame* m_pAnimation;
    size_t               m_AnimationCount;
    CSR_Skin*            m_pSkin;
    size_t               m_SkinCount;
} CSR_MDL;

The most interesting part of this structure is the m_pModel member. It's a collection of ready-to-be-rendered models, in other words the geometry of each animation frame. The m_ModelCount member contains the number of geometries contained in the collection.
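For example, the geometry of a given frame may be retrieved through the csrMDLGetMesh() function, then drawn like any static mesh, which is exactly what the renderer functions shown below do. A short fragment (pMDL, modelIndex and meshIndex are assumed to be provided by the caller):

// get the ready-to-be-rendered geometry of the frame to draw
const CSR_Mesh* pMesh = csrMDLGetMesh(pMDL, modelIndex, meshIndex);

// draw it as any other static mesh
if (pMesh)
    csrOpenGLDrawMesh(pMesh, pShader, pMatrixArray, fOnGetID);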
NOTE the CSR_Model structure is a generic structure containing a collection of meshes. Each mesh is contained in a CSR_Mesh structure, which is composed of one or several vertex buffers. Although the mesh structure provides a skin to contain the textures, it is not used in the .mdl models, because the textures may be animated and are thus processed differently (see below).
The m_pAnimation member is a collection of CSR_Animation_Frame structures, each of which describes one animation, composed of a start and an end index into the m_pModel collection, as well as an optional name, which may be useful to retrieve an animation from the collection. The m_AnimationCount member contains the number of animations contained in the collection. The CSR_Animation_Frame structure itself is declared in the CSR_Model.h file.
/**
* Model animation (based on frames)
*/
typedef struct
{
    char   m_Name[16];
    size_t m_Start;
    size_t m_End;
} CSR_Animation_Frame;
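To run an animation, the renderer just has to pick, on each rendered frame, a model index between m_Start and m_End based on the elapsed time. A minimal sketch, assuming m_End is the index of the last frame; this helper is illustrative and not part of the engine:

#include <stddef.h>

/* illustrative helper: compute which model index should be drawn for an
   animation at a given time (fps is an assumption, not an engine value) */
size_t GetModelIndex(const CSR_Animation_Frame* pAnim,
                     double                     elapsedTime,
                     double                     fps)
{
    // number of frames contained in the animation (m_End assumed inclusive)
    const size_t frameCount = (pAnim->m_End - pAnim->m_Start) + 1;

    // select the frame to draw, looping over the animation
    return pAnim->m_Start + ((size_t)(elapsedTime * fps) % frameCount);
}

The returned value may then be passed as the modelIndex parameter of the draw functions shown below.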
Finally, the m_pSkin member is a collection of CSR_Skin structures, which contain the model textures. Normally a .mdl model contains only one texture, but the texture itself may be animated; this is the reason why m_pSkin is a collection, and why the CSR_Skin structure contains a m_Time member, which represents the time to wait before the next texture should be painted. The m_SkinCount member contains the number of skins contained in the collection. The CSR_Skin structure itself is declared in the CSR_Texture.h file.

/**
* Skin
*/
typedef struct
{
    CSR_Texture m_Texture;
    CSR_Texture m_BumpMap;
    CSR_Texture m_CubeMap;
    double      m_Time;
} CSR_Skin;

The CSR_MDL structure is used by the renderer to draw the model, either in the csrOpenGLDrawMDL() function of the CSR_Renderer_OpenGL.c file:
void csrOpenGLDrawMDL(const CSR_MDL*          pMDL,
                      const CSR_OpenGLShader* pShader,
                      const CSR_Array*        pMatrixArray,
                            size_t            skinIndex,
                            size_t            modelIndex,
                            size_t            meshIndex,
                      const CSR_fOnGetID      fOnGetID)
{
    // get the current model mesh to draw
    const CSR_Mesh* pMesh = csrMDLGetMesh(pMDL, modelIndex, meshIndex);

    // found it?
    if (!pMesh)
        return;

    // normally each mesh should contain only one vertex buffer
    if (pMesh->m_Count != 1)
        // unsupported if not (because the texture to bind cannot be determined.
        // If such a model exists, a custom version of this function should be
        // written for it)
        return;

    // can use texture?
    if (fOnGetID && pMesh->m_pVB->m_Format.m_HasTexCoords && skinIndex < pMDL->m_SkinCount)
    {
        // get the OpenGL identifier matching the texture
        const CSR_OpenGLID* pTextureID =
                (CSR_OpenGLID*)fOnGetID(&pMDL->m_pSkin[skinIndex].m_Texture);

        // found it?
        if (pTextureID && (GLuint)pTextureID->m_ID != M_CSR_Error_Code)
        {
            // select the texture sampler to use (unit 0 for normal textures)
            glActiveTexture(GL_TEXTURE0);
            glUniform1i(pShader->m_TextureSlot, 0);

            // bind the texture to use
            glBindTexture(GL_TEXTURE_2D, pTextureID->m_ID);
        }
    }

    // draw the model mesh
    csrOpenGLDrawMesh(pMesh, pShader, pMatrixArray, fOnGetID);
}

Or in the csrMetalDrawMDL() function of the CSR_Renderer_Metal.mm file:
- (void) csrMetalDrawMDL :(const CSR_MDL* _Nullable)pMDL
                         :(const void* _Nullable)pShader
                         :(const CSR_Array* _Nullable)pMatrixArray
                         :(size_t)skinIndex
                         :(size_t)modelIndex
                         :(size_t)meshIndex
                         :(const CSR_fOnGetID _Nullable)fOnGetID
{
    // get the current model mesh to draw
    const CSR_Mesh* pMesh = csrMDLGetMesh(pMDL, modelIndex, meshIndex);

    // found it?
    if (!pMesh)
        return;

    // normally each mesh should contain only one vertex buffer
    if (pMesh->m_Count != 1)
        // unsupported if not (because the texture to bind cannot be determined.
        // If such a model exists, a custom version of this function should be
        // written for it)
        return;

    // can use texture?
    if (fOnGetID && pMesh->m_pVB->m_Format.m_HasTexCoords && skinIndex < pMDL->m_SkinCount)
    {
        // get the Metal texture matching the skin
        const id<MTLTexture> pTexture =
                (__bridge id<MTLTexture>)fOnGetID(&pMDL->m_pSkin[skinIndex].m_Texture);

        // found it?
        if (pTexture && m_pRenderEncoder)
            // bind the model texture
            [m_pRenderEncoder setFragmentTexture:pTexture atIndex:0];
    }

    // draw the model mesh
    [self csrMetalDrawMesh :pMesh :pShader :pMatrixArray :fOnGetID];
}

Interpolation is a technique that makes the model animation smoother, by blending 2 vertex buffers together in order to generate an intermediate buffer, which will be rendered.

The best way to implement such a technique would be to write it in a shader program; for that reason it is not currently implemented in the CompactStar engine. However you may find below a simple (but not very efficient) example function from my .md2 reader demo project, showing how such an interpolation may be implemented:
QR_Int32 QR_MD2::Interpolate(const QR_Float& position, const QR_Mesh& mesh1,
                             const QR_Mesh& mesh2, QR_Mesh& result)
{
    // get vertices count
    const QR_SizeT count = mesh1.size();

    // are the meshes compatible?
    if (count != mesh2.size())
        return QR_MD2Common::IE_C_IncompatibleVertices;

    // iterate through the meshes to interpolate
    for (QR_SizeT i = 0; i < count; ++i)
    {
        // are the frames compatible?
        if (!mesh1[i]->CompareFormat(*mesh2[i]))
            return QR_MD2Common::IE_C_IncompatibleFrames;

        // not a 3D coordinate?
        if (mesh1[i]->m_CoordType != QR_Vertex::IE_VC_XYZ)
            return QR_MD2Common::IE_C_Not3DCoords;

        // create and populate new interpolation vertex
        std::unique_ptr<QR_Vertex> pVertex(new QR_Vertex());
        pVertex->m_Stride    = mesh1[i]->m_Stride;
        pVertex->m_Type      = mesh1[i]->m_Type;
        pVertex->m_Format    = mesh1[i]->m_Format;
        pVertex->m_CoordType = mesh1[i]->m_CoordType;

        // get vertex buffer data count
        const QR_SizeT bufferCount = mesh1[i]->m_Buffer.size();

        // iterate through vertex buffer content
        for (QR_SizeT j = 0; j < bufferCount; j += mesh1[i]->m_Stride)
        {
            QR_UInt32 index = 3;

            // get positions
            QR_Vector3DP srcVec(mesh1[i]->m_Buffer[j],
                                mesh1[i]->m_Buffer[j + 1],
                                mesh1[i]->m_Buffer[j + 2]);
            QR_Vector3DP dstVec(mesh2[i]->m_Buffer[j],
                                mesh2[i]->m_Buffer[j + 1],
                                mesh2[i]->m_Buffer[j + 2]);

            // interpolate positions
            QR_Vector3DP vec = srcVec.Interpolate(dstVec, position);

            // set interpolated positions in destination buffer
            pVertex->m_Buffer.push_back(vec.m_X);
            pVertex->m_Buffer.push_back(vec.m_Y);
            pVertex->m_Buffer.push_back(vec.m_Z);

            // do include normals?
            if (mesh1[i]->m_Format & QR_Vertex::IE_VF_Normals)
            {
                // get normals
                QR_Vector3DP srcNormal(mesh1[i]->m_Buffer[j + index],
                                       mesh1[i]->m_Buffer[j + index + 1],
                                       mesh1[i]->m_Buffer[j + index + 2]);
                QR_Vector3DP dstNormal(mesh2[i]->m_Buffer[j + index],
                                       mesh2[i]->m_Buffer[j + index + 1],
                                       mesh2[i]->m_Buffer[j + index + 2]);

                // interpolate normals
                QR_Vector3DP normal = srcNormal.Interpolate(dstNormal, position);

                // set interpolated normals in destination buffer
                pVertex->m_Buffer.push_back(normal.m_X);
                pVertex->m_Buffer.push_back(normal.m_Y);
                pVertex->m_Buffer.push_back(normal.m_Z);

                index += 3;
            }

            // do include texture coordinates?
            if (mesh1[i]->m_Format & QR_Vertex::IE_VF_TexCoords)
            {
                // copy texture coordinates from source
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index]);
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index + 1]);

                index += 2;
            }

            // do include colors?
            if (mesh1[i]->m_Format & QR_Vertex::IE_VF_Colors)
            {
                // copy color from source
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index]);
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index + 1]);
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index + 2]);
                pVertex->m_Buffer.push_back(mesh1[i]->m_Buffer[j + index + 3]);
            }
        }

        // add interpolated mesh to output list, then release ownership
        result.push_back(pVertex.get());
        pVertex.release();
    }

    return QR_MD2Common::IE_C_Success;
}

And below is the Interpolate() function itself, as implemented in the QR_Vector3D class:
template <class T>
QR_Vector3D<T> QR_Vector3D<T>::Interpolate(const QR_Vector3D<T>& other,
                                           const QR_Float&       position) const
{
    // is position out of bounds? Limit to min or max values in this case
    if (position < 0.0f)
        return *this;
    else
    if (position > 1.0f)
        return other;

    QR_Vector3D<T> interpolation;

    // calculate interpolation
    interpolation.m_X = m_X + position * (other.m_X - m_X);
    interpolation.m_Y = m_Y + position * (other.m_Y - m_Y);
    interpolation.m_Z = m_Z + position * (other.m_Z - m_Z);

    return interpolation;
}
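A last detail: the position parameter received by the functions above is typically derived from the elapsed time, so that it sweeps from 0 to 1 between 2 consecutive frames. A minimal sketch of this calculation; the names are illustrative, and this helper belongs neither to the engine nor to the demo project:

#include <math.h>
#include <stddef.h>

/* illustrative sketch: given the elapsed time in seconds and an animation
   speed in frames per second, compute the pair of frames to blend and the
   interpolation factor (position) between them */
void GetInterpolationFactor(double  elapsedTime,
                            double  fps,
                            size_t  frameCount,
                            size_t* pFrame1,
                            size_t* pFrame2,
                            float*  pPosition)
{
    // convert the elapsed time to a fractional frame index
    const double frame = elapsedTime * fps;

    // frames to blend, looping over the animation
    *pFrame1 = (size_t)frame % frameCount;
    *pFrame2 = (*pFrame1 + 1) % frameCount;

    // interpolation factor between the 2 frames, always in [0, 1]
    *pPosition = (float)(frame - floor(frame));
}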