# How to Compute the Position in a Vertex Shader (GLSL, Part 3, the end…)

After the publication of my previous posts about **how to compute the position in a vertex shader** (see HERE and HERE), I received this detailed explanation from Graham about coordinate spaces and, of course, the computation of… position! I think it's a text worth sharing with Geeks3D's readers. I slightly modified the original text to fit the post, but most of the explanation is unchanged.

Coordinate spaces are an important notion and should be clearly understood. In essence, there are four major coordinate systems: **Object**, **World**, **View** and **Clip**, and we need a matrix to transform between each pair. The model matrix transforms from Object to World space, the view matrix from World to View space, and the projection matrix (in combination with the implicit homogeneous divide) from View to Clip space. The incoming vertices (the vertex shader input) are generally in Object space, unless we have a mesh that is already in our reference frame (such as a terrain).

For lighting, we need either world space or view space positions, but usually not both. Lighting in world space requires that we know the viewer's position in world space, whereas lighting in view space requires that we have the light coordinates in view space. The first is generally easier to deal with (one update per frame rather than one per light).

Therefore, it's going to be more efficient to bake matrices together. In *classic GL*, we have gl_ModelViewMatrix: the model and view matrices baked together, taking vertices from Object to View space directly. We also have gl_ProjectionMatrix and gl_ModelViewProjectionMatrix, which take vertices from View to Clip space and from Object to Clip space, respectively. It's recommended to use your own uniforms for these anyway (the built-in matrices are deprecated in modern GL). So, our minimal vertex shader could be something as simple as:

```glsl
uniform mat4 model_view_projection_matrix;

void main(void)
{
  gl_Position = model_view_projection_matrix * gl_Vertex;
}
```

One matrix multiply. If we never actually output the intermediate results, this is sufficient.
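As a quick sanity check outside the shader, here is a small numeric sketch (Python, purely illustrative; the helper functions and toy matrices are mine) showing why one baked MVP multiply is enough: applying the pre-multiplied matrix gives exactly the same clip-space result as applying the model, view and projection matrices one at a time.

```python
# Illustrative check: a baked model-view-projection matrix is equivalent
# to applying model, view and projection in sequence. Matrices are
# row-major lists of rows; vectors are length-4 lists (x, y, z, w).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Toy matrices: a translation (model), another translation (view) and a
# simple scale standing in for a projection.
model = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
view  = [[1, 0, 0, 0], [0, 1, 0, -3], [0, 0, 1, 0], [0, 0, 0, 1]]
proj  = [[0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

mvp = mat_mul(proj, mat_mul(view, model))   # baked once on the CPU

v = [1.0, 2.0, 3.0, 1.0]                    # object-space vertex
step_by_step = mat_vec(proj, mat_vec(view, mat_vec(model, v)))
baked        = mat_vec(mvp, v)
print(step_by_step == baked)                # True: one multiply suffices
```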

If we need world space or view space coordinates, we're going to need at least one more matrix transformation. The following two vertex shaders are equivalent:

```glsl
uniform mat4 model_view_matrix;
uniform mat4 projection_matrix;

void main(void)
{
  vec4 view_space_vertex = model_view_matrix * gl_Vertex;
  gl_Position = projection_matrix * view_space_vertex;
}
```

and

```glsl
uniform mat4 model_matrix;
uniform mat4 view_projection_matrix;

void main(void)
{
  vec4 world_space_vertex = model_matrix * gl_Vertex;
  gl_Position = view_projection_matrix * world_space_vertex;
}
```
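The equivalence of these two shaders can be checked numerically. Here is a small Python sketch (toy matrices of my own, purely illustrative) relying on the fact that matrix products associate, so it doesn't matter where we split the chain:

```python
# projection * (model_view * v) == view_projection * (model * v),
# because (P * V) * M == P * (V * M). Matrices are row-major 4x4 lists.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

model = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]   # translate x
view  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]  # translate z
proj  = [[2.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

model_view      = mat_mul(view, model)  # shader 1's baked uniform
view_projection = mat_mul(proj, view)   # shader 2's baked uniform

v = [1.0, 1.0, 1.0, 1.0]
via_view_space  = mat_vec(proj, mat_vec(model_view, v))        # shader 1
via_world_space = mat_vec(view_projection, mat_vec(model, v))  # shader 2
print(via_view_space == via_world_space)                       # True
```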

Both require two matrix multiplies; which one we use depends on which intermediate space we need (whether we want to do lighting in world or view space). There are tradeoffs here, however. The second may be better because the model_matrix uniform changes for each mesh we render, whereas the view_projection_matrix uniform probably changes only once per frame. Also, while it probably wouldn't make much difference overall, the second multiply requires the result of the first; depending on the content of the matrix and the hardware we're running on, that dependency could stall the shader. What is likely to be better is to go directly from object space to the space we want to be in:

```glsl
uniform mat4 model_matrix;
uniform mat4 model_view_matrix;
uniform mat4 model_view_projection_matrix;

void main(void)
{
  vec4 world_space_vertex = model_matrix * gl_Vertex;
  vec4 view_space_vertex  = model_view_matrix * gl_Vertex;
  gl_Position = model_view_projection_matrix * gl_Vertex;
}
```

Here, all three calculations are independent, and each is more accurate because there are no intermediate results. In practice we probably only need two of them, depending on our algorithm and on how we bake our matrices. Finally, all of these transforms are linear, so their results can be interpolated. Let's say, for example, that we want to light in world space. We have the world space position of our viewer, and we calculate the world space coordinates of our vertex; both can be interpolated, and so their delta can be interpolated too:

```glsl
uniform mat4 model_matrix;
uniform mat4 model_view_projection_matrix;
uniform vec3 viewer_position; // position of the viewer in world coordinates

out vec3 view_vector;

void main(void)
{
  vec4 world_space_vertex = model_matrix * gl_Vertex;
  view_vector = world_space_vertex.xyz - viewer_position;
  gl_Position = model_view_projection_matrix * gl_Vertex;
}
```
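The claim that these per-vertex outputs interpolate cleanly can be illustrated with a small Python sketch (toy values of my own, purely illustrative): for an affine transform, transforming an interpolated vertex gives the same result as interpolating the transformed vertices, which is why the rasterizer can safely interpolate world space positions across a triangle.

```python
# Linearity check: M * lerp(a, b, t) == lerp(M*a, M*b, t) for an
# affine matrix M applied to homogeneous points with w == 1.

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

# Toy model matrix: scale y by 2, translate x by 2.
M = [[1, 0, 0, 2],
     [0, 2, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

a = [0.0, 0.0, 0.0, 1.0]
b = [4.0, 8.0, 2.0, 1.0]

# Transform-then-interpolate equals interpolate-then-transform.
print(mat_vec(M, lerp(a, b, 0.5)) == lerp(mat_vec(M, a), mat_vec(M, b), 0.5))
```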

This introduces an additional subtraction into the vertex shader, but saves us having to update a model_view_matrix uniform for every update of our model_matrix. This is particularly important if model_matrix is a big array and we're using instancing to index into it, for example. The extra subtraction doesn't matter, though, because **GPU ALU is cheap compared to CPU work and uniform updates**: it's much cheaper to update a couple of matrices per frame than a couple hundred or more. Of course, we may need a normal matrix too. This transforms normals from object space to view space and is derived from the model and view matrices (typically as the inverse transpose of the upper-left 3×3 of the model-view matrix).
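To see why the normal matrix is the inverse transpose rather than the model-view matrix itself, here is a small Python sketch (3×3 toy matrices of my own, purely illustrative) using a non-uniform scale: transforming the normal with the matrix itself breaks perpendicularity, while the inverse transpose preserves it.

```python
# A normal must stay perpendicular to the transformed surface. Under a
# non-uniform scale this fails if we reuse the vertex matrix, and works
# with its inverse transpose.

def mat3_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

M     = [[2.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]  # non-uniform scale in x
M_inv = [[0.5, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]  # its inverse, by hand
normal_matrix = transpose(M_inv)                 # inverse transpose of M

tangent = [1.0, 1.0, 0.0]
normal  = [1.0, -1.0, 0.0]   # perpendicular to the tangent

t2 = mat3_vec(M, tangent)    # tangent on the transformed surface
perp_broken = dot(t2, mat3_vec(M, normal))              # 3.0: not perpendicular
perp_fixed  = dot(t2, mat3_vec(normal_matrix, normal))  # 0.0: perpendicular
print(perp_broken, perp_fixed)
```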

Be careful about doing calculations in world space: you can quickly run out of floating-point precision in largish scenes.
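This precision concern is easy to demonstrate. GPU positions are usually 32-bit floats, whose absolute precision shrinks as coordinates grow; the following Python sketch (illustrative, using struct to round doubles to single precision) shows a millimeter-scale offset surviving near the origin but vanishing 100 km away:

```python
# float32 has ~24 bits of mantissa, so its absolute resolution degrades
# as the magnitude of the coordinate grows.
import struct

def f32(x):
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

near_origin = f32(f32(1.0) + f32(0.001))
far_away    = f32(f32(100000.0) + f32(0.001))

print(near_origin != f32(1.0))    # True: the 0.001 offset survives
print(far_away == f32(100000.0))  # True: the 0.001 offset is lost
```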

I think this is the reason OpenGL traditionally did its lighting in view space.