Alex McGilvray

Writing

Portfolio

Misc

New PSM/Vita tutorial. Focus on UV coordinates

January 22nd, 2013

Just put up a new PSM/Vita tutorial with a focus on UV coordinates. You can see it here:

See tutorial

PSM Tutorial #2-3 : Texture Coordinates

January 22nd, 2013

In the last tutorial we went over the concepts of vertices, indices and vertex colors. Now we will go over an important concept: texture coordinates.

Now we will look at the next set of declarations in the code. As always I will start by including the full code sample.

```
public class AppMain
{
    static protected GraphicsContext graphics;
    static Texture2D texture;

    static float[] vertices = new float[12];

    static float[] texcoords = {
        0.0f, 0.0f,  // 0 top left.
        0.0f, 1.0f,  // 1 bottom left.
        1.0f, 0.0f,  // 2 top right.
        1.0f, 1.0f,  // 3 bottom right.
    };

    static float[] colors = {
        1.0f, 1.0f, 1.0f, 1.0f,  // 0 top left.
        1.0f, 1.0f, 1.0f, 1.0f,  // 1 bottom left.
        1.0f, 1.0f, 1.0f, 1.0f,  // 2 top right.
        1.0f, 1.0f, 1.0f, 1.0f,  // 3 bottom right.
    };

    const int indexSize = 4;
    static ushort[] indices;

    static VertexBuffer vertexBuffer;

    // Width of texture.
    static float Width;

    // Height of texture.
    static float Height;

    static Matrix4 unitScreenMatrix;

    public static void Main (string[] args)
    {
        Initialize ();

        while (true) {
            SystemEvents.CheckEvents ();
            Update ();
            Render ();
        }
    }

    public static void Initialize ()
    {
        graphics = new GraphicsContext();
        ImageRect rectScreen = graphics.Screen.Rectangle;

        texture = new Texture2D("/Application/resources/Player.png", false);

        Width = texture.Width;
        Height = texture.Height;

        vertices[0] = 0.0f;   // x0
        vertices[1] = 0.0f;   // y0
        vertices[2] = 0.0f;   // z0

        vertices[3] = 0.0f;   // x1
        vertices[4] = 1.0f;   // y1
        vertices[5] = 0.0f;   // z1

        vertices[6] = 1.0f;   // x2
        vertices[7] = 0.0f;   // y2
        vertices[8] = 0.0f;   // z2

        vertices[9] = 1.0f;   // x3
        vertices[10] = 1.0f;  // y3
        vertices[11] = 0.0f;  // z3

        indices = new ushort[indexSize];
        indices[0] = 0;
        indices[1] = 1;
        indices[2] = 2;
        indices[3] = 3;

        //                                            vertex pos,          texture,             color
        vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

        vertexBuffer.SetVertices(0, vertices);
        vertexBuffer.SetVertices(1, texcoords);
        vertexBuffer.SetVertices(2, colors);

        vertexBuffer.SetIndices(indices);
        graphics.SetVertexBuffer(0, vertexBuffer);

        unitScreenMatrix = new Matrix4(
            Width * 2.0f / rectScreen.Width, 0.0f, 0.0f, 0.0f,
            0.0f, Height * (-2.0f) / rectScreen.Height, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            -1.0f, 1.0f, 0.0f, 1.0f
        );
    }

    public static void Update ()
    {
    }

    public static void Render ()
    {
        graphics.Clear();

        graphics.SetTexture(0, texture);

        graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

        graphics.SwapBuffers();
    }
}
```

Texture coordinates are an interesting concept. They define how a 2D image is mapped onto a 3D plane. It's a concept used heavily in both 2D and 3D rendering and a very useful bit of foundation knowledge.

If you want to apply a texture to a plane you need to be able to define how it maps to that plane. As such, each vertex requires an extra set of 2D coordinates which define what part of an image to apply to the plane. UV coordinate space extends from 0 to 1. On the X axis of an image file, 0 is the left side of the image and 1 is the right-most pixel (the full width of the image). The concept is the same for the Y axis. So if we wanted to show a whole image on a square plane we would have the following coordinates:

• Top left vertex
  • X = 0
  • Y = 0
• Bottom left vertex
  • X = 0
  • Y = 1
• Top right vertex
  • X = 1
  • Y = 0
• Bottom right vertex
  • X = 1
  • Y = 1

Now if you wanted to show only the top half of an image on a plane you would have the following coordinates:

• Top left vertex
  • X = 0
  • Y = 0
• Bottom left vertex
  • X = 0
  • Y = 0.5
• Top right vertex
  • X = 1
  • Y = 0
• Bottom right vertex
  • X = 1
  • Y = 0.5
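To make the two cases above concrete in code, here is a small helper that builds the UV array for any sub-rectangle of an image, given in normalized 0-1 coordinates. This is just a sketch for illustration; `SubImageUV` is a name I made up, not part of the PSM SDK.

```
// Builds the 8-float UV array (top left, bottom left, top right,
// bottom right) for a sub-rectangle of an image.
// "SubImageUV" is a hypothetical helper name, not a PSM SDK call.
static float[] SubImageUV(float u0, float v0, float u1, float v1)
{
    return new float[] {
        u0, v0,  // top left
        u0, v1,  // bottom left
        u1, v0,  // top right
        u1, v1,  // bottom right
    };
}

// Whole image:       SubImageUV(0.0f, 0.0f, 1.0f, 1.0f)
// Top half of image: SubImageUV(0.0f, 0.0f, 1.0f, 0.5f)
```

Passing the whole-image result to `vertexBuffer.SetVertices(1, ...)` would reproduce the `texcoords` array from the sample.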

Nothing helps more than a visual example, so here is an image that illustrates the two situations we have just outlined. Notice that displaying half of a square image on a square plane results in stretching distortion.

Now let's take a look at this in code.

```
static float[] texcoords = {
    0.0f, 0.0f,  // 0 top left.
    0.0f, 1.0f,  // 1 bottom left.
    1.0f, 0.0f,  // 2 top right.
    1.0f, 1.0f,  // 3 bottom right.
};
```

Here we have a float array declaration with UV coordinates for each vertex on the quad we wish to draw. The coordinates are listed in the same order as the vertex array: top left, bottom left, top right, bottom right. If you compare these declarations to the coordinate lists above, you can see that a quad with these coordinates will display an entire image.

Before I end this section I would like you to think about some of the uses of displaying a subsection of an image.

• You can use this for animations. Put all the animation frames of a character on one image and "slide" the coordinates over a different portion of the image each frame. This technique is used very frequently for animation.
• You can create effects like moving water by moving a sub-rectangle over a large image of water.
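The animation idea in the first bullet can be sketched as a helper that computes the UVs for one frame of a horizontal sprite sheet. Again, `FrameUV` is a hypothetical name for illustration, not a PSM SDK call.

```
// Sketch: UVs for frame 'n' of a horizontal strip of 'frameCount'
// equally sized animation frames. Hypothetical helper, not PSM API.
static float[] FrameUV(int n, int frameCount)
{
    float w = 1.0f / frameCount;  // width of one frame in UV space
    float u0 = n * w;             // left edge of frame n
    float u1 = u0 + w;            // right edge of frame n
    return new float[] {
        u0, 0.0f,  // top left
        u0, 1.0f,  // bottom left
        u1, 0.0f,  // top right
        u1, 1.0f,  // bottom right
    };
}
// e.g. frame 2 of 4 spans u = 0.5 to 0.75.
```

Each frame you would upload the new coordinates with `vertexBuffer.SetVertices(1, ...)`, just as the sample code does for the static `texcoords` array.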

One last thing I haven't addressed: it's entirely possible for your texture coordinates to range outside of the 0-1 coordinate space. The effect this has differs depending on your renderer and its settings. One very common usage of this is tiling. If you have a plane whose coordinates extend from 0 to 10 then the image will repeat 10 times across the plane. This is used a lot in 3D rendering for tiling textures like grass or brick.
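As a sketch, the texture coordinates for the 0-to-10 tiling case above might look like this. Whether the image actually repeats depends on the texture's wrap/repeat setting, which varies by renderer; check your SDK's texture documentation for the exact call.

```
// Tiling sketch: UVs outside the 0-1 range. With the texture's wrap
// mode set to repeat, these coordinates tile the image 10 times
// across the quad in each direction.
static float[] tiledTexcoords = {
     0.0f,  0.0f,   // top left
     0.0f, 10.0f,   // bottom left
    10.0f,  0.0f,   // top right
    10.0f, 10.0f,   // bottom right
};
```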

That’s all there really is to UV coordinates. Next up we will focus on Vertex Buffers.

Preview #01 of my new PSM/Vita game Neko Rush

January 13th, 2013

Here are the first screenshots of my new game NekoRush. It's my first game for the PS Vita / PSM. It's a continuous runner where you are a cat running through the neighborhood and must recruit other cats while avoiding obstacles. The two main cats you play as are my own cats, who are currently staying with my family in Canada while I'm in Japan.

Cat artwork is done by my talented SO when she has time.

This is also using the custom 2D engine I developed for PSM. I’ve been doing a lot of reworking on it over the weekend and hope to eventually be able to release it to the public for anyone to use.

lil Commando gets some press!

January 6th, 2013

lil Commando got some surprise press. I’m extremely happy to see this. Did not expect it at all.

I was covered on indiegames.com

http://indiegames.com/2012/12/browser_game_pick_lil_commando.html

I was also covered on PCWorld

PSM Tutorial #2-2 : Vertices, Indices and Vertex Colors

January 5th, 2013

Just finished a new PSM tutorial. Apologies for the month delay between the tutorials. I have more time this month to work on the tutorials and you should see them once every 10 days or so now.

Here is a link to the new tutorial.

Tutorial #2 Part #2 : Drawing Something on the Screen : Vertices, Indices and Vertex Colors

PSM Tutorial #2-2 : Vertices, Indices and Vertex Colors

January 5th, 2013

In the last tutorial we went in depth into what a shader program is and how it relates to the graphics hardware in the Vita (although many of these concepts carry over to other platforms).

Now we will look at the next set of declarations in the code. For easy reference as always I will start by including the full code sample.

```
public class AppMain
{
    static protected GraphicsContext graphics;
    static Texture2D texture;

    static float[] vertices = new float[12];

    static float[] texcoords = {
        0.0f, 0.0f,  // 0 top left.
        0.0f, 1.0f,  // 1 bottom left.
        1.0f, 0.0f,  // 2 top right.
        1.0f, 1.0f,  // 3 bottom right.
    };

    static float[] colors = {
        1.0f, 1.0f, 1.0f, 1.0f,  // 0 top left.
        1.0f, 1.0f, 1.0f, 1.0f,  // 1 bottom left.
        1.0f, 1.0f, 1.0f, 1.0f,  // 2 top right.
        1.0f, 1.0f, 1.0f, 1.0f,  // 3 bottom right.
    };

    const int indexSize = 4;
    static ushort[] indices;

    static VertexBuffer vertexBuffer;

    // Width of texture.
    static float Width;

    // Height of texture.
    static float Height;

    static Matrix4 unitScreenMatrix;

    public static void Main (string[] args)
    {
        Initialize ();

        while (true) {
            SystemEvents.CheckEvents ();
            Update ();
            Render ();
        }
    }

    public static void Initialize ()
    {
        graphics = new GraphicsContext();
        ImageRect rectScreen = graphics.Screen.Rectangle;

        texture = new Texture2D("/Application/resources/Player.png", false);

        Width = texture.Width;
        Height = texture.Height;

        vertices[0] = 0.0f;   // x0
        vertices[1] = 0.0f;   // y0
        vertices[2] = 0.0f;   // z0

        vertices[3] = 0.0f;   // x1
        vertices[4] = 1.0f;   // y1
        vertices[5] = 0.0f;   // z1

        vertices[6] = 1.0f;   // x2
        vertices[7] = 0.0f;   // y2
        vertices[8] = 0.0f;   // z2

        vertices[9] = 1.0f;   // x3
        vertices[10] = 1.0f;  // y3
        vertices[11] = 0.0f;  // z3

        indices = new ushort[indexSize];
        indices[0] = 0;
        indices[1] = 1;
        indices[2] = 2;
        indices[3] = 3;

        //                                            vertex pos,          texture,             color
        vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

        vertexBuffer.SetVertices(0, vertices);
        vertexBuffer.SetVertices(1, texcoords);
        vertexBuffer.SetVertices(2, colors);

        vertexBuffer.SetIndices(indices);
        graphics.SetVertexBuffer(0, vertexBuffer);

        unitScreenMatrix = new Matrix4(
            Width * 2.0f / rectScreen.Width, 0.0f, 0.0f, 0.0f,
            0.0f, Height * (-2.0f) / rectScreen.Height, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            -1.0f, 1.0f, 0.0f, 1.0f
        );
    }

    public static void Update ()
    {
    }

    public static void Render ()
    {
        graphics.Clear();

        graphics.SetTexture(0, texture);

        graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

        graphics.SwapBuffers();
    }
}
```

Now let's focus on the next set of declarations: the vertex array, the index array and the color array. Ignore the "texcoords" array for now; it will be covered in the next tutorial.

```
static float[] vertices = new float[12];

static float[] colors = {
    1.0f, 1.0f, 1.0f, 1.0f,  // 0 top left.
    1.0f, 1.0f, 1.0f, 1.0f,  // 1 bottom left.
    1.0f, 1.0f, 1.0f, 1.0f,  // 2 top right.
    1.0f, 1.0f, 1.0f, 1.0f,  // 3 bottom right.
};

const int indexSize = 4;
static ushort[] indices;
```

First up we have a declaration of floats that represent our vertices:

```
static float[] vertices = new float[12];
```

If we want to draw an image on the screen we need to draw it onto a piece of geometry. In the case of drawing an image you would want a plane with a width and height in the same relative proportions as the image. So if you have a 100 by 200 image you would want a 100 by 200 sized plane. You could also have a 50 by 100 plane or a 200 by 400 plane, as long as the proportions match. If the proportions don't match then the image will look stretched or squashed. This is sometimes desirable. One example: in a Mario-like game where you jump on an enemy's head, you could squash the height of the plane to make the enemy look squished.
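The proportion rule can be sketched as a tiny helper: any uniform scale of the image's dimensions keeps the aspect ratio intact. `PlaneSize` is my own illustrative name, not part of the tutorial's code.

```
// Sketch: keep a quad's proportions matched to its image.
// Any uniform scale preserves the aspect ratio.
static void PlaneSize(float imageWidth, float imageHeight, float scale,
                      out float planeWidth, out float planeHeight)
{
    planeWidth  = imageWidth  * scale;  // e.g. 100 * 0.5f = 50
    planeHeight = imageHeight * scale;  // e.g. 200 * 0.5f = 100
}
// A deliberate "squash" effect breaks this on purpose, e.g. scaling
// the height by 0.5f while leaving the width alone.
```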

To draw a plane we need an array of 12 floats to store its 4 vertices. Each vertex needs 3 floats to describe its X, Y and Z coordinates in space. Here is the code to declare the array:

```
static float[] vertices = new float[12];
```

Here is the code used later to fill the array:

```
vertices[0] = 0.0f;   // x0
vertices[1] = 0.0f;   // y0
vertices[2] = 0.0f;   // z0

vertices[3] = 0.0f;   // x1
vertices[4] = 1.0f;   // y1
vertices[5] = 0.0f;   // z1

vertices[6] = 1.0f;   // x2
vertices[7] = 0.0f;   // y2
vertices[8] = 0.0f;   // z2

vertices[9] = 1.0f;   // x3
vertices[10] = 1.0f;  // y3
vertices[11] = 0.0f;  // z3
```

Here is an example image to help you understand how the vertices of the plane fit into the array:

Now that we have our vertices we need to explain to OpenGL how to use them to make a plane. Most modern graphics hardware sees everything as triangles, so we need to define 2 triangles to make ourselves a plane (also referred to as a quad).

Here we define the indices of the triangle.

```
indices = new ushort[indexSize];
indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
indices[3] = 3;
```

This is a way to tell OpenGL which vertices are used to make triangles. In this case we are defining 4 indices. There are multiple ways to define indices depending on how you tell OpenGL to render.

If we do each triangle separately then we would need 6 indices to define 2 triangles. The first triangle would be indices 0,1,2 and the second triangle would be indices 1,2,3.

In this case you may notice only 4 indices are defined. That is because in this example the triangles are rendered in triangle strip mode. What this essentially means is that you define the first triangle with 3 indices, then every subsequent triangle reuses the previous 2 indices plus one new index. So with the array defined here, the first triangle uses indices 0,1,2 and triangle two uses 1,2,3. If you try to visualize this you will see that it forms a strip of triangles.
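One way to see the relationship between the two indexing styles is a helper that expands a triangle strip into a plain triangle list. This is an illustrative sketch of my own (note that a real expansion would also flip the winding of every second triangle, which I omit here for clarity):

```
// Sketch: expanding triangle-strip indices into separate triangles.
// The strip {0, 1, 2, 3} becomes triangles (0,1,2) and (1,2,3),
// i.e. the index list {0, 1, 2, 1, 2, 3}.
static ushort[] StripToTriangles(ushort[] strip)
{
    int triangleCount = strip.Length - 2;
    ushort[] list = new ushort[triangleCount * 3];
    for (int i = 0; i < triangleCount; i++) {
        list[i * 3 + 0] = strip[i];
        list[i * 3 + 1] = strip[i + 1];
        list[i * 3 + 2] = strip[i + 2];
    }
    return list;
}
```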

Personally, when I write game code I define every triangle's indices separately, with no triangle-strip optimization. The reason is simplicity: if I were to write a 3D model importer, it's much easier to parse common 3D model formats triangle by triangle. Few formats define strips natively, so you would have to write some fairly elaborate code to identify triangle strips. Also, in my personal experience I've found little performance benefit from using strips over basic indexing.

Next up we have our vertex colors defined.

```
static float[] colors = {
1.0f,	1.0f,	1.0f,	1.0f,	// 0 top left.
1.0f,	1.0f,	1.0f,	1.0f,	// 1 bottom left.
1.0f,	1.0f,	1.0f,	1.0f,	// 2 top right.
1.0f,	1.0f,	1.0f,	1.0f,	// 3 bottom right.
};
```

Vertex colors define how a vertex is colored. These colors are linearly interpolated across the face of a triangle to make a smooth blend of color. Each vertex needs 4 floats for its color, representing Red, Green, Blue and Alpha (RGBA). Here is a screenshot of a triangle with red on the top vertex, blue on the left and green on the right. Notice how the color blends across the face from one vert to the next?

If you had a textured triangle, the texture would be shaded with those colors. If you were making a 2D game and didn't want any added color you would set these colors to white, or perhaps not use them at all. They are useful though. One very useful application is a faux-lighting system: set everything to 0.5f to give the sprite half brightness, then increase the color value depending on each vertex's distance from light sources. The linear interpolation of the lighting values helps make the lighting look natural.
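Here is a minimal sketch of that faux-lighting idea, assuming a simple linear falloff. The formula is my own example, not from the tutorial, and it assumes `using System;` for `Math`.

```
// Sketch of faux lighting: per-vertex brightness that starts at a
// 0.5 base and rises toward 1.0 as the vertex nears a light source.
// The linear falloff is an arbitrary illustrative choice.
static float VertexBrightness(float vx, float vy,
                              float lightX, float lightY,
                              float lightRadius)
{
    float dx = vx - lightX;
    float dy = vy - lightY;
    float distance = (float)Math.Sqrt(dx * dx + dy * dy);
    float falloff = Math.Max(0.0f, 1.0f - distance / lightRadius);
    return Math.Min(1.0f, 0.5f + 0.5f * falloff); // 0.5 base + boost
}
```

The returned value would be written into the R, G and B slots of that vertex's entry in the `colors` array.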

In the next tutorial we will discuss texture coordinates. Then we will discuss vertex buffers, a very important part of OpenGL and PSM. After that we will go over the rendering code, which at this point should be familiar to you. Then we are ready to move on to making some games!

Japan Week #16 New Years in Japan

January 3rd, 2013

Apologies for the out-of-order Japan posts (I have some earlier posts still in draft mode), but I'm fresh out of New Year's and have this post ready to go.

Anyways, this New Year's I went to my SO's family to celebrate New Year's, which in Japan is the major holiday. Compared to North America, where Christmas is the big family holiday, Christmas in Japan is not celebrated in the same way. It's mostly a couples' dating holiday, and in my case it was a regular working day. My vacation period runs across the New Year's week.

My SO is from Hiroshima so that's where we went to visit her family. One of the first sights we went to see was the Hiroshima memorial site, which the city chose to preserve in a vote. Here are some photos of the site.

It was brutally cold, though. Despite that we continued on to Hiroshima city's famed light show along the main downtown arcade. The city puts on a very intricate light show which is actually solar powered! They gather energy from the sun, then use it to power the lights!

Finally we returned home for the night to start New Year's, when the New Year's meal is prepared. For this occasion I was introduced to the preparation of Onigiri. Onigiri is a triangular rice ball which is typically wrapped in a patch of seaweed called nori.

Here is an example of Onigiri. Some of these are terrible (the ones I made) and some are good (not the ones I made). The main idea is that you take a ball of rice, flatten it, put some filling in the center, then wrap the rice over the filling and perform a difficult (for me at least) repeated rolling and shaping technique to create the Onigiri triangle.

After that was done we moved on to the osechi box, which is traditional in Japanese culture. As it has been explained to me, the matriarch of the household would prepare food non-stop for one or two days, but the food prepared could be preserved for days at a time. The idea was that the food would be made in advance so that the matriarch, as well as the rest of the family, would not have to cook for days afterward. The family would go through each tray of food one by one, with enough food to last for days if not weeks.

This is how it was explained to me by local people; if I am incorrect in my information please let me know in the comments.

These days it's also very popular to buy the osechi from a company that prepares it in advance. The average price I've seen when I checked was between $200-400 in American currency.

Our dinner was a combination of the two forms. My SO's mother was very generous in providing both home-cooked and osechi meals.

One other thing to note is that the osechi foods all carry a form of symbolism. I'm not adept at identifying these symbols but I can identify a few. The shrimp/prawn represents the will to live until the point your back arches like that of a shrimp. The first picture in the following set has a photo of a lotus root, which you can imagine being used as a lens to see your future; it's towards the lower right of the photo.

Finally, after the huge meal, the tradition is to have a soba noodle dish. Here is a photo of the dish. In my case, I love heat in my food, so I added a lot of ichi-mi (Japanese hot spice) to my bowl. Possibly too much.

New Year's Day included even more food. I'm a lucky man. This time we visited the entire family and had a huge meal. Here is a sampling of the food. First up is sushi. While many people in North America might consider sushi to be the rolled fish/rice/nori food, it's actually the vinegared rice itself. The sushi we are used to in North America is sushi maki (rolled sushi).

My SO's father is an accomplished fisherman and caught the fish we enjoyed as sashimi for our New Year's Day meal. The fish was hage.

Here’s some more dishes from the new years day celebration.

After the meal we had an afternoon of relaxation. In the case of my SO's older relatives, that meant Japanese Mah-Jong. I've been trying to learn the game but I'm not good enough to join in, just good enough to take a photograph.

Finally we finished the whole affair by ringing the bell at a local shrine and receiving our fortunes for the next year. Here is the bell we rang. As for the fortune, you will just have to be in the dark on that one.

It was certainly a fun New Year's celebration!