## Alex McGilvray

PSM Tutorial #2-5 : Wrapping it Up

2013/02/26 1:34:09 AM

In the last tutorial we discussed vertex buffers, the last major concept we will go over in this series.

At this point we’ve gone over all the major topics regarding sprite rendering in modern OpenGL. The lessons you’ve learned so far will apply far beyond sprite rendering. Vertex buffers and shaders are the heart of modern rendering techniques.

This final tutorial in the series will be going over the commands in the main render loop. You already know all the important concepts. We are merely activating them at this point. Nevertheless this is meant to be a thorough study of the theoretical portions of OpenGL and PSM/Vita rendering so let’s go 🙂

Now we will look at the render loop. As always I will start by including the full code sample.

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

public static void Main (string[] args)
{
Initialize ();

while (true) {
SystemEvents.CheckEvents ();
Update ();
Render ();
}
}

public static void Initialize ()
{
graphics = new GraphicsContext();
ImageRect rectScreen = graphics.Screen.Rectangle;

texture = new Texture2D("/Application/resources/Player.png", false);

Width = texture.Width;
Height = texture.Height;

vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3

indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;

//												vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

unitScreenMatrix = new Matrix4(
Width*2.0f/rectScreen.Width,	0.0f,	    0.0f, 0.0f,
0.0f,   Height*(-2.0f)/rectScreen.Height,	0.0f, 0.0f,
0.0f,   0.0f, 1.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f
);

}

public static void Update ()
{

}

public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
}
```

```csharp
public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
```

First we clear the screen so that we have a blank canvas to draw on. In many cases you will do this at the start of every render loop. Next we set our shader program. If you need a refresher on how shaders work, you can go back to the second tutorial in this series, which explains shaders.

Next we set the active texture. A general rule of thumb is that you can only have one active texture at a time in OpenGL (in reality you can have a few more with more advanced use of shaders, but we will leave that for a later tutorial series). So when drawing, you set the active texture and then draw the geometry that uses that texture. You may notice this implies you will need a different set of vertex buffers for each texture you use. This is true. Sending many vertex buffers to the GPU via DrawArrays is also a performance-intensive process, so you need to keep this in mind. Many people put all their art into one very large image (sometimes referred to as a texture atlas) in order to reduce DrawArrays calls. There are other methods you can use to reduce draw calls as well; we will investigate them in a later tutorial series.

Next we set the uniform data in the shader program. If you recall from the second tutorial in the series, uniform data is extra data you send to a shader for various purposes such as passing the camera transformation matrix.
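The unitScreenMatrix built in Initialize is exactly this kind of uniform: it positions the unit quad in clip space so it comes out the size of the texture in screen pixels. Here is a small Python sketch of that mapping (illustrative only, not PSM code; the 960x544 screen size is the Vita's resolution, and the 128x128 texture size is a made-up example):

```python
def transform(v, m):
    # Row-vector convention: v' = v * M, matching the shader's
    # mul(a_Position, u_WorldMatrix) in tutorial #2-1.
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def unit_screen_matrix(w, h, sw, sh):
    # Same layout as the Matrix4 constructed in Initialize:
    # scale the unit quad to w x h pixels, flip Y, shift to the top-left.
    return [
        [w * 2.0 / sw, 0.0,           0.0, 0.0],
        [0.0,          h * -2.0 / sh, 0.0, 0.0],
        [0.0,          0.0,           1.0, 0.0],
        [-1.0,         1.0,           0.0, 1.0],
    ]

m = unit_screen_matrix(128, 128, 960, 544)
# The quad's top-left corner (0,0) lands at clip-space (-1, 1),
# the top-left of the screen.
print(transform([0.0, 0.0, 0.0, 1.0], m)[:2])  # [-1.0, 1.0]
```

Note that clip space runs from -1 to 1 with Y pointing up, which is why the matrix scales by 2 over the screen size and negates Y.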

Next we call DrawArrays. This takes the currently bound vertex buffer, shader and texture and does the final act of actually processing and drawing everything. This is essentially the final call that brings everything together.

You may notice that if you omit the last call, SwapBuffers, you will not see anything on the screen. This is because when we draw, we are drawing to the back buffer. In the case of PSM we use double buffering.

Think of this as 2 pieces of paper. One is showing on the screen and the other, which you can't see, is the one we are actually drawing on. Every time we call SwapBuffers we swap the 2 pieces of paper, so the one on the screen goes out of sight and the one we've been drawing on in the background comes into view. The reason for this is to prevent something called screen tearing: the case where we are drawing to a surface and have only half finished by the time the next frame is scheduled to display, producing a tearing effect on the final rendered image. By swapping the 2 surfaces we ensure this does not happen, because the images are never swapped until the back buffer is finished drawing.
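The two-pieces-of-paper idea can be modeled in a few lines of Python (a toy sketch, not how a real GPU manages buffers; clearing is folded into the swap here for brevity):

```python
class DoubleBuffer:
    """Toy model of double buffering: draw on the back buffer,
    show the front buffer, swap them once a frame is complete."""

    def __init__(self):
        self.front = []  # what the 'screen' currently shows
        self.back = []   # what we are drawing this frame

    def draw(self, item):
        self.back.append(item)  # drawing never touches the visible buffer

    def swap_buffers(self):
        self.front, self.back = self.back, self.front
        self.back.clear()  # start the next frame from a blank canvas

buf = DoubleBuffer()
buf.draw("player sprite")
assert buf.front == []             # nothing visible until we swap
buf.swap_buffers()
assert buf.front == ["player sprite"]
```

Because a half-drawn frame only ever lives in `back`, the viewer never sees an incomplete image, which is the whole point of the scheme.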


So that’s it for this tutorial series. I hope you found it useful for your studies. If there is anything that’s not clear or if you found anything that might be considered inaccurate please let me know in the comments or by email and I’ll do my best to rectify the situation.

I’m currently considering what to do for the next Vita tutorial series. Most likely it will be something more fun such as a simple game. I’ve also been working on a 2D engine for the last few months for PSM. I may consider doing a tutorial on making a game with the engine. If there’s a topic related to OpenGL or Vita/PSM you are interested in learning leave a comment or send a message and let me know.

PSM Tutorial #2-4 : Vertex Buffers

2013/02/13 4:15:55 AM

In the last tutorial we went over the concept of texture coordinates. Now we have only one major topic to cover before wrapping everything up: vertex buffers.

Now we will look at the next set of declarations in the code. As always I will start by including the full code sample.

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

public static void Main (string[] args)
{
Initialize ();

while (true) {
SystemEvents.CheckEvents ();
Update ();
Render ();
}
}

public static void Initialize ()
{
graphics = new GraphicsContext();
ImageRect rectScreen = graphics.Screen.Rectangle;

texture = new Texture2D("/Application/resources/Player.png", false);

Width = texture.Width;
Height = texture.Height;

vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3

indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;

//												vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

unitScreenMatrix = new Matrix4(
Width*2.0f/rectScreen.Width,	0.0f,	    0.0f, 0.0f,
0.0f,   Height*(-2.0f)/rectScreen.Height,	0.0f, 0.0f,
0.0f,   0.0f, 1.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f
);

}

public static void Update ()
{

}

public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
}
```

Now as we browse past the declarations covered in the previous tutorials we arrive at the vertex buffer declaration:

```csharp
static VertexBuffer vertexBuffer;
```

In OpenGL there are numerous ways to send data to the GPU to be drawn on the screen. The modern way of doing this is via vertex buffers.

A vertex buffer is a contiguous stream of data that you send to the GPU from your program. The data structure used for storing this data is an array. In plain OpenGL you set up an array of data, then call a function that tells OpenGL which array to stream into memory along with some other information, such as the size of the array and the type of data it represents.

In PSM it's essentially the same, but Sony provides a VertexBuffer object that encapsulates a lot of the common functionality you tend to use with a vertex buffer. We can define a vertex buffer that associates vertex, UV, and color data with the same object. You can also associate other types of data if you wish. Sony's VertexBuffer constructor also specifically takes in the index buffer size.

So let's take a look at the code where we set up our vertex buffer.

```
//	vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

```

First we create our VertexBuffer object. In the constructor we first pass the number of vertices we will be sending to the GPU. We then pass the number of indices. After the first 2 arguments we have a variable-length list of arguments we can pass to the constructor.

Variable-length arguments are very similar to C's varargs functionality. For more information about C#'s feature for variable-length arguments see this page.

http://msdn.microsoft.com/en-us/library/w5zay9db(v=vs.71).aspx

Each argument in the variable-length portion of the parameters declares one buffer of per-vertex data. The buffer is declared by an enum that describes the format of the data that will be passed in through that buffer.

```csharp
//	vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);
```

So here we say we want to pass in 4 vertices, 4 indices (the value of indexSize at the time), a set of 3 floats per vertex, a set of 2 floats per vertex, and a set of 4 floats per vertex. The reason for these sizes will be more apparent if you look at the following code:

```csharp
vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);
```

As you can see, we use the SetVertices method to assign which array represents which piece of data. The first argument is the index of the buffer we are filling in the VertexBuffer object. This corresponds to the order in which we entered the variable-length parameters in the VertexBuffer constructor. In index 0 we pass the vertices, which need 3 floats per vertex to describe. In index 1 we pass the texture coordinates, which take 2 floats per vertex, and finally in index 2 we pass the colors, which take 4 floats per vertex.

Then we send our index array to the VertexBuffer via the SetIndices method.

Finally we send the entire vertex buffer to the graphics context.

Since we covered most of the more difficult to grasp concepts in previous tutorials this one ended up being fairly short. In the next tutorial we will go over the render method and wrap up this tutorial series!

After that we can get into more interesting things like making a game 🙂

PSM Tutorial #2-3 : Texture Coordinates

2013/01/22 2:01:22 AM

In the last tutorial we went over the concepts of vertices, indices and vertex colors. Now we will go over an important concept: texture coordinates.

Now we will look at the next set of declarations in the code. As always I will start by including the full code sample.

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

public static void Main (string[] args)
{
Initialize ();

while (true) {
SystemEvents.CheckEvents ();
Update ();
Render ();
}
}

public static void Initialize ()
{
graphics = new GraphicsContext();
ImageRect rectScreen = graphics.Screen.Rectangle;

texture = new Texture2D("/Application/resources/Player.png", false);

Width = texture.Width;
Height = texture.Height;

vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3

indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;

//												vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

unitScreenMatrix = new Matrix4(
Width*2.0f/rectScreen.Width,	0.0f,	    0.0f, 0.0f,
0.0f,   Height*(-2.0f)/rectScreen.Height,	0.0f, 0.0f,
0.0f,   0.0f, 1.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f
);

}

public static void Update ()
{

}

public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
}
```

Texture coordinates are an interesting concept. They define how a 2d image is mapped onto a 3d plane. It's a concept that is used heavily in both 2d and 3d rendering and is a very useful bit of foundation knowledge.

If you want to apply a texture to a plane you need to be able to define how it maps to that plane. As such, each vertex requires an extra 2D set of coordinates which define what part of an image to apply to the plane. UV coordinate space extends from 0 to 1. On the X-axis of an image file, 0 is the left side of the image and 1 is the right-most pixel of the image (equal to the width of the image). The concept is the same for the Y-axis. So if we wanted to show a whole image on a square plane we would have the following coordinates:

• Top left vertex
• X = 0
• Y = 0
• Bottom left vertex
• X = 0
• Y = 1
• Top right vertex
• X = 1
• Y = 0
• Bottom right vertex
• X = 1
• Y = 1

Now if you wanted to show only the top half of an image on a plane you would have the following coordinates:

• Top left vertex
• X = 0
• Y = 0
• Bottom left vertex
• X = 0
• Y = 0.5
• Top right vertex
• X = 1
• Y = 0
• Bottom right vertex
• X = 1
• Y = 0.5
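The conversion from a pixel-space sub-rectangle to UVs is just a division by the texture size. Here is a small Python sketch of that arithmetic (illustrative only; the function name and the 64x64 texture size are made up for the example):

```python
def pixel_rect_to_uv(x, y, w, h, tex_w, tex_h):
    """Convert a pixel-space sub-rectangle of a texture into the four
    UV pairs (top left, bottom left, top right, bottom right)."""
    u0, v0 = x / tex_w, y / tex_h
    u1, v1 = (x + w) / tex_w, (y + h) / tex_h
    return [u0, v0,  u0, v1,  u1, v0,  u1, v1]

# Top half of a 64x64 image, matching the second bullet list above:
print(pixel_rect_to_uv(0, 0, 64, 32, 64, 64))
# [0.0, 0.0, 0.0, 0.5, 1.0, 0.0, 1.0, 0.5]
```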

Now, nothing helps more than a visual example, so here is an image that describes the two situations we have just outlined. Notice that displaying half of a square image on a square plane results in stretching distortion. Now let's take a look at this in code.

```csharp
static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};
```

Here we have a float array declaration with UV coordinates for each vertex of the quad we wish to draw. The order matches the vertex array: top left, bottom left, top right, bottom right. If you compare these declarations to the earlier example in this tutorial, you will most likely have deduced that a quad with these coordinates displays the entire image.

Before I end this section I would like you to think about some of the uses of displaying a subsection of an image.

• You can use this for animations. Have all your animated frames of a character on one image and “slide” the coordinates each frame over a portion of the image. This is a technique used very frequently for animation.
• You can have effects like moving water by moving a sub rectangle over a large image of water.
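The animation trick in the first bullet is just recomputing UVs each frame. A minimal Python sketch of "sliding" across a horizontal strip of frames (illustrative only; the function name and frame layout are assumptions, not PSM API):

```python
def frame_uvs(frame, frame_count):
    """UVs selecting one frame of a horizontal sprite-sheet strip.
    Order: top left, bottom left, top right, bottom right."""
    u0 = frame / frame_count
    u1 = (frame + 1) / frame_count
    return [u0, 0.0,  u0, 1.0,  u1, 0.0,  u1, 1.0]

print(frame_uvs(1, 4))  # second frame of a 4-frame strip
# [0.25, 0.0, 0.25, 1.0, 0.5, 0.0, 0.5, 1.0]
```

Each animation tick you would write the new UVs into the texcoords buffer rather than changing the geometry.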

One last thing I haven't addressed: it's entirely possible for your texture coordinates to range outside of the 0-1 coordinate space. The effect this has differs depending on your renderer and settings. One very common usage is tiling. If you have a plane whose coordinates extend from 0 to 10, the image will repeat 10 times across the plane. This is used a lot in 3d rendering for tiling textures like grass or brick.

That’s all there really is to UV coordinates. Next up we will focus on Vertex Buffers.

PSM Tutorial #2-2 : Vertices, Indices and Vertex Colors

2013/01/05 2:57:21 AM

In the last tutorial we went in depth over what a Shader Program is and how it relates to graphics hardware in the Vita (although many of these concepts carry over to other platforms).

Now we will look at the next set of declarations in the code. For easy reference as always I will start by including the full code sample.

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

public static void Main (string[] args)
{
Initialize ();

while (true) {
SystemEvents.CheckEvents ();
Update ();
Render ();
}
}

public static void Initialize ()
{
graphics = new GraphicsContext();
ImageRect rectScreen = graphics.Screen.Rectangle;

texture = new Texture2D("/Application/resources/Player.png", false);

Width = texture.Width;
Height = texture.Height;

vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3

indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;

//												vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

unitScreenMatrix = new Matrix4(
Width*2.0f/rectScreen.Width,	0.0f,	    0.0f, 0.0f,
0.0f,   Height*(-2.0f)/rectScreen.Height,	0.0f, 0.0f,
0.0f,   0.0f, 1.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f
);

}

public static void Update ()
{

}

public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
}
```

Now let's focus on the next set of declarations: the vertex array, index array and color array. Ignore the "texcoords" array for now; it will be covered in the next tutorial.

```csharp
static float[] vertices = new float[12];

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

```

First up we have a declaration of floats that represent our vertices:

```csharp
static float[] vertices = new float[12];
```

If we want to draw an image on the screen we need to take the image and draw it onto a piece of geometry. In the case of drawing an image you would want to draw onto a plane whose width and height have the same relative proportions as the image. So if you have a 100 by 200 image you would want a 100 by 200 sized plane. You could also have a 50 by 100 sized plane or a 200 by 400 sized plane, as long as the proportions match. If the proportions don't match, the image will be stretched or squashed. This is sometimes desirable. One example I can think of is a Mario-like game where you jump on an enemy's head: you could squish the height of the plane to make the enemy look squished.
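Keeping the proportions is a one-line ratio calculation. A small Python sketch (illustrative only; the function name is made up):

```python
def plane_size(tex_w, tex_h, target_w=None, target_h=None):
    """Pick a plane size with the same proportions as the image.
    Give either a target width or a target height; the other is derived."""
    if target_w is not None:
        return target_w, target_w * tex_h / tex_w
    if target_h is not None:
        return target_h * tex_w / tex_h, target_h
    return tex_w, tex_h  # default: one pixel per unit

# The 100x200 image from the text, drawn at half size:
print(plane_size(100, 200, target_w=50))  # (50, 100.0)
```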

To draw a plane we need an array of 12 floats to store the 4 vertices that make up the plane. Each vertex needs 3 floats to describe its X, Y and Z coordinates in space. Here is the code that declares the array:

```csharp
static float[] vertices = new float[12];
```

Here is the code used later to define the array.

```csharp
vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3
```

Here is an example image to help you understand how the vertices of the plane fit into the array.

Now that we have our vertices we need to explain to OpenGL how to use those vertices to make a plane. The majority of modern graphics hardware looks at everything as triangles, so what we need to do is define 2 triangles to make ourselves a plane (also referred to as a quad).

Here we define the indices of the triangle.

```csharp
indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;
```

This is a way to tell OpenGL which vertices are used to make triangles. In this case we are defining 4 indices. There are multiple ways to define indices depending on how you tell OpenGL to render.

If we do each triangle separately then we would need 6 indices to define 2 triangles. The first triangle would be indices 0,1,2 and the second triangle would be indices 1,2,3.

In this case you may notice only 4 indices are defined. That is because in this example the triangles are rendered in triangle strip mode. What this essentially means is that you define the first triangle with 3 indices, then every subsequent triangle in the array uses the previous 2 indices plus one new index. So in the array we have defined here, the first triangle uses indices 0,1,2 and the second uses 1,2,3. If you try to visualize this you will see that it forms a strip of triangles.
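The "previous 2 indices plus one new index" rule is easy to express as a sliding window. A Python sketch (conceptual only; real renderers also flip the winding of every other triangle in a strip, a detail omitted here):

```python
def strip_to_triangles(strip):
    """Expand triangle-strip indices into separate per-triangle index
    tuples: each new index reuses the previous two."""
    return [(strip[i], strip[i + 1], strip[i + 2])
            for i in range(len(strip) - 2)]

# The 4 strip indices from the code above yield the 2 triangles of a quad:
print(strip_to_triangles([0, 1, 2, 3]))
# [(0, 1, 2), (1, 2, 3)]
```

This also shows the saving: a strip of n triangles needs n + 2 indices instead of 3n.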

Personally, when I write game code I use the method where every triangle's indices are defined separately, with no triangle-stripping optimization. The reason for this is simplicity. If I were to write a 3d model importer it's much easier to parse common 3d model formats for each triangle's indices. Few formats have strips defined natively, so you would have to write some fairly elaborate code to identify triangle strips. Also, in my personal experience I've found little performance benefit to using strips over basic indexing.

Next up we have our vertex colors defined.

```
static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};
```

Vertex colors define how a vertex is colored. These colors are linearly interpolated across the face of a triangle to make a smooth blend of color. Each vertex needs 4 floats for its color, representing Red, Green, Blue and Alpha (RGBA). Here is a screenshot of a triangle with red on the top vertex, blue on the left and green on the right. Notice how the color blends moving across the face from one vertex to the next? A textured triangle would be shaded with those colors. If you were making a 2d game and didn't want any added color you would set these colors to white, or maybe not use them at all. They are useful though. One very useful application is a faux-lighting system for your game: set everything to 0.5f to give the sprite half brightness, then increase the color value depending on each vertex's distance from light sources. The linear interpolation of the lighting values will help make the lighting look natural.

In the next tutorial we will discuss texture coordinates. Then we will discuss vertex buffers. A very important part of OpenGL and PSM. After that we will go over the rendering code which at this point should be familiar to you. Then we are ready to move on to making some games!

PSM Tutorial #2-1 : Shader Programs

2012/12/05 2:36:44 AM

We’ll start out by breaking down Sony’s example project for drawing an image on screen. Once we understand what’s going on we can move on to making our own sprite rendering system, which will be a lot more flexible.

Let’s take a look at the code, then go through it section by section like the previous tutorial. You can also open this code up and run it; it’s located in the “Tutorial\Sample02_01” folder of your PSM samples. Here is the code:

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

public static void Main (string[] args)
{
Initialize ();

while (true) {
SystemEvents.CheckEvents ();
Update ();
Render ();
}
}

public static void Initialize ()
{
graphics = new GraphicsContext();
ImageRect rectScreen = graphics.Screen.Rectangle;

texture = new Texture2D("/Application/resources/Player.png", false);

Width = texture.Width;
Height = texture.Height;

vertices[0]=0.0f;	// x0
vertices[1]=0.0f;	// y0
vertices[2]=0.0f;	// z0

vertices[3]=0.0f;	// x1
vertices[4]=1.0f;	// y1
vertices[5]=0.0f;	// z1

vertices[6]=1.0f;	// x2
vertices[7]=0.0f;	// y2
vertices[8]=0.0f;	// z2

vertices[9]=1.0f;	// x3
vertices[10]=1.0f;	// y3
vertices[11]=0.0f;	// z3

indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;

//												vertex pos,               texture,       color
vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

vertexBuffer.SetVertices(0, vertices);
vertexBuffer.SetVertices(1, texcoords);
vertexBuffer.SetVertices(2, colors);

vertexBuffer.SetIndices(indices);
graphics.SetVertexBuffer(0, vertexBuffer);

unitScreenMatrix = new Matrix4(
Width*2.0f/rectScreen.Width,	0.0f,	    0.0f, 0.0f,
0.0f,   Height*(-2.0f)/rectScreen.Height,	0.0f, 0.0f,
0.0f,   0.0f, 1.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f
);

}

public static void Update ()
{

}

public static void Render ()
{
graphics.Clear();

graphics.SetTexture(0, texture);

graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

graphics.SwapBuffers();
}
}
```

Here is what it looks like when you run the code. There is a lot more to look at in this tutorial; we’ll be examining what OpenGL is really doing. There are a lot of concepts I’ll touch on that run quite deep, so I will provide some supplementary reading for those who want to further their knowledge of the various topics.

Let’s start by looking over the declarations:

```csharp
public class AppMain
{
static protected GraphicsContext graphics;
static Texture2D texture;

static float[] vertices = new float[12];

static float[] texcoords = {
0.0f, 0.0f,	// 0 top left.
0.0f, 1.0f,	// 1 bottom left.
1.0f, 0.0f,	// 2 top right.
1.0f, 1.0f,	// 3 bottom right.
};

static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;

static VertexBuffer vertexBuffer;

// Width of texture.
static float Width;

// Height of texture.
static float Height;

static Matrix4 unitScreenMatrix;

```

We already know what a graphics context is from the last tutorial so let’s move on to the ShaderProgram class:

```csharp
static ShaderProgram shaderProgram;
```

A shader program is a class that encapsulates shader code. A shader is a set of code that instructs the GPU on how to handle the graphics data you send it from your program. First you prepare your data in your program code, then you send it to the vertex processing unit of the GPU. The vertex shader program is run once for every vertex you send to the vertex processing unit. This is where the geometric functions are performed and data is prepared to be handed to the next step of the process, the fragment unit.

The fragment unit takes the vertex data and then draws every fragment onto the screen. A fragment is a pixel on a piece of geometry. This is where you perform pixel operations. If you have ever seen the cool filters and color-changing effects in an image editing program such as Photoshop or GIMP, this is where you would perform similar effects.

The vertex step of the process can be used for many useful things, one example being transforming all your vertex data. For instance, one of the most common uses: you store a large world in an array of vertices, send it to the vertex unit along with mathematical information describing where a camera is inside your world, and the vertex processing unit uses this camera data to move the entire world so that, when rendered, the world is positioned according to the camera. Essentially this means the actual geometric math required to have a camera is performed on the vertex processing unit.

Here is a very very simple vertex shader.

```
void main(float4 in a_Position    : POSITION,
float4 out v_Position   : POSITION,
uniform float4x4 u_WorldMatrix)
{
v_Position = mul(a_Position, u_WorldMatrix);
}
```

What we have here in the main function’s parameters is an in variable which represents the current vertex that has been sent to the vertex processing unit.

The word “in” marks the data as incoming from the program code. Next we have an out variable which will hold the transformed vertex data that is sent to the next stage of the GPU process (the fragment shader). The word “out” says that this data will be sent onward. In the case of vertex data, the out parameter doesn’t directly send the data to the fragment shader; there are internal processes which determine whether the data reaches the fragment shader, by calculating its depth from the camera and whether it is actually in the space viewable by the camera. We will discuss this more in depth in a later tutorial when we deal with more advanced fragment operations. For the most part, other data such as colors and texture coordinates are sent directly to the fragment shader.

Finally we have a uniform matrix variable. A uniform is a piece of data set from our programming code that stays constant for the whole draw call; uniforms are how we pass the information we need from the game code into our vertex shader. In this case you can think of the uniform matrix as the camera we discussed earlier. That brings us to the line:

```	v_Position = mul(a_Position, u_WorldMatrix);
```

What this does is perform a matrix multiplication to calculate the new position of the vertex based on the input camera data. Matrix and vector operations are built into the shader language, so luckily we do not need full knowledge of how they work internally. Essentially, if you multiply the vertex by the “camera transformation matrix”, you get a vertex that is now in the correct position relative to the camera.
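Here is roughly what that multiplication does, sketched in plain Python (not shader code; the matrix is a made-up translation). Note that Cg's mul(v, M) treats the position as a row vector, so the translation lives in the last row:

```python
# Sketch: what mul(a_Position, u_WorldMatrix) computes.

def mat_vec_mul(m, v):
    # Row-vector convention: v' = v * M, matching Cg's mul(v, M).
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

# A world matrix that translates by (-5, 0, 0): a camera sitting at x=5.
view = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [-5, 0, 0, 1],
]

# A vertex at the camera's position lands at the origin of camera space.
print(mat_vec_mul(view, [5, 0, 0, 1]))  # [0, 0, 0, 1]
print(mat_vec_mul(view, [6, 1, 0, 1]))  # [1, 1, 0, 1]
```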

Some other examples of cool things you can do on the vertex processing unit include skeletal character animation and model deformations. While it’s true you could do some of these operations in your programming code, it’s almost always advisable to do them on the vertex processing unit: it is a specialized processor made specifically for geometric operations, so you will almost always get much better performance doing them on the GPU rather than the CPU.
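To give a taste of the kind of geometric work this covers, here is a minimal linear-blend-skinning sketch, the math behind basic skeletal animation, in plain Python with made-up bone data (illustrative only, not the PSM API):

```python
# Sketch: linear blend skinning. Each vertex is moved by a weighted
# sum of its influencing bone matrices.

def mat_vec_mul(m, v):
    # Row-vector convention: v' = v * M.
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
move_up  = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 2, 0, 1]]  # +2 in y

def skin(vertex, bones, weights):
    # Accumulate the vertex transformed by each bone, scaled by its weight.
    out = [0.0, 0.0, 0.0, 0.0]
    for bone, w in zip(bones, weights):
        tv = mat_vec_mul(bone, vertex)
        out = [o + w * t for o, t in zip(out, tv)]
    return out

# A vertex influenced half by a static bone, half by a bone raised 2 units,
# ends up raised 1 unit.
print(skin([1, 0, 0, 1], [identity, move_up], [0.5, 0.5]))
# [1.0, 1.0, 0.0, 1.0]
```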

Once the vertex processing unit is finished, its output is sent to the fragment unit (also known as the pixel shader). As stated before, a fragment is a pixel on a piece of geometry, and the fragment program is run once for every fragment. At its most basic level you receive an incoming fragment and send it back to the screen with a color. In reality, depending on what you are doing, you might be sampling a texture for a 3D model or applying a filter to give the game an artistic impressionist effect or toon style. Here is an example of a very simple fragment shader.

```void main(float4 out color      : COLOR)
{
	color = float4(0, 1.0, 0, 1.0);
}
```

As you can see, here we have one out parameter, which is the color of the pixel we will render to the screen. Usually we would have some information coming in from the vertex shader, but for simplicity we will just color the incoming fragment green and paint it to the screen. The coloring happens on this line:

```		color =  float4(0, 1.0, 0, 1.0);
```

The color out variable is assigned a float4. A float4 is a vector of four floating-point values; in this case we interpret it as a color in RGBA format, which stands for Red, Green, Blue, Alpha. The RGB portion should be fairly self-explanatory. The alpha represents how transparent the pixel is: 0 is fully transparent and 1 is fully opaque. So in our case we have R=0, G=1.0, B=0 and A=1.0, which is why we get a fragment that is green and fully opaque.
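A quick sketch of how those float components relate to the familiar 0–255 byte values, and how alpha blends a fragment over a background color (plain Python, illustrative only):

```python
# Sketch: interpreting a float4 color. Components are RGBA, each 0.0-1.0.
green = (0.0, 1.0, 0.0, 1.0)

# Map 0.0-1.0 floats onto the familiar 0-255 byte range.
as_bytes = tuple(round(c * 255) for c in green)
print(as_bytes)  # (0, 255, 0, 255)

# Standard "over" alpha blending: source drawn over destination.
def blend(src, dst):
    a = src[3]
    return tuple(s * a + d * (1 - a) for s, d in zip(src[:3], dst[:3]))

# Fully opaque green (a=1.0) completely replaces a red background.
print(blend(green, (1.0, 0.0, 0.0, 1.0)))  # (0.0, 1.0, 0.0)
```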

So, to reiterate how graphics get drawn to the screen, I’ve written this small schematic to reinforce the concept. In the next part of this tutorial we will look at the shaders included with this example, which are slightly more complex; it will be a shorter part. Hopefully you now have a solid picture of how the graphics system works. If you have any feedback, questions or comments, leave a comment on this post and I will do my best to address them.

I would also like to thank Bruce Sutherland for reviewing this tutorial and making some corrections regarding how vertex out parameters are not sent directly to the fragment shader. Here is his explanation of what happens to a vertex out parameter after the vertex processing unit:

The output of the vertex shader is still a vertex in 3D space.

In OpenGL and DirectX the vertices are mapped to what’s called the canonical view volume. In OpenGL that’s a cube which goes from -1, -1, -1 to 1, 1, 1.

You then have another stage in the GPU which carries out clipping and then screen mapping. These stages are carried out in Geometry Shaders on newer versions of DirectX and OpenGL on the desktop but haven’t made it to mobile GPUs yet.

Some people can get confused by vertex shaders when they don’t realise what the output they are trying to generate actually is.

Extra vertex data like texture co-ordinates would mostly go untouched, except for clipped vertices, where the texture co-ord sent to the fragment shader would probably be whatever the GPU calculates the tex coord to be at the point of intersection.
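The interpolation Bruce describes is essentially a linear blend. A minimal sketch in plain Python, with made-up numbers: if clipping cuts an edge 25% of the way between two vertices, the texture coordinate at the new vertex is the same 25% blend.

```python
# Sketch: linearly interpolating a texture coordinate along a clipped edge.
def lerp(a, b, t):
    # Blend t of the way from a to b, componentwise.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

uv0 = (0.0, 0.0)  # tex coord at the vertex inside the view volume
uv1 = (1.0, 0.0)  # tex coord at the vertex that got clipped away

# Clipping cut the edge 25% of the way from the first vertex:
print(lerp(uv0, uv1, 0.25))  # (0.25, 0.0)
```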

Bruce also has a blog with tutorials and topics on programming, including Android development, if you are interested in that. You can visit his site here:

http://brucesutherland.blogspot.com.au/