Alex McGilvray


June 25th, 2016

I’ve been taking some time off work and every now and then I’ve been working on a small arcade flight shooter game I currently call Lune.


I don’t have any hard deadlines or anything so I’ve been able to do things like completely scrap and re-implement the input and play control systems 3 times. It’s been interesting seeing how the controls feel when I try different things, like having the analog sticks control the player’s ship directly, or having the analog sticks control the player’s reticle in screen space and have the ship try to “catch up” to the reticle.

There is no clear superior method from my tests so far. Each method has both positive and negative qualities. Right now I’m directly controlling the ship and then emitting 2 points along a ray originating from the player ship’s transform. I then convert the 2 points’ coordinates from world space to screen space and use those coordinates to determine where I should draw the aiming reticles on the screen.
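The projection step can be sketched outside the engine. This is a minimal stand-alone version of the idea, not the game’s actual code: the `Vec3`/`Vec2` types and the fixed camera (at the origin, looking down +Z, 90° horizontal FOV) are simplifying assumptions.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in types -- not Unreal's FVector/FVector2D.
struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Point a distance t along a ray from `origin` in (unit) direction `dir`.
Vec3 PointAlongRay(const Vec3& origin, const Vec3& dir, double t) {
    return { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
}

// Simple pinhole world-to-screen: camera at the world origin looking down +Z,
// 90-degree horizontal FOV, screen of `width` x `height` pixels.
Vec2 WorldToScreen(const Vec3& p, double width, double height) {
    double ndcX = p.x / p.z;                 // perspective divide
    double ndcY = p.y / p.z;
    return { (ndcX * 0.5 + 0.5) * width,
             (0.5 - ndcY * 0.5) * height };  // screen y grows downward
}
```

Two points emitted along the ship’s forward ray and run through `WorldToScreen` give the two reticle positions.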

Here is a youtube video of some of the gameplay.

LevelViz 004 : First version now available, creating a plugin and making a new UI

May 3rd, 2016

I’ve now hit a major milestone for LevelViz. All the general features I wanted at conception are now implemented. There’s still a lot that can be done though. Mobile support will need some work for the plan view due to platform support for post-processes. The tools could use a second pass and be cleaned up. Finally, the UI can always benefit from a little more time and love 🙂

Here is a video showing how to author a new ArchViz scene followed by a demonstration of the resulting built application.

Conversion to plugin and Github source

Until recently I had been developing LevelViz as a standard Unreal project, which means it can’t easily be used in other projects. I spent about half a day converting the project to a plugin so that it can be used as intended. Implementing plugins for different engines and frameworks can sometimes be quite difficult or frustrating. I’m happy to say that overall the process is quite easy with Unreal. If you’ve already been properly setting up your code modules then you have actually already done most of the work.

The plugin source is now on github. There are some UI assets recommended for use with the plugin which I don’t have on the repository yet. I’m still undecided on whether I will distribute them in a separate download, similar to how Epic distributes the Unreal engine, or update the github repo to contain an entire driver project. In the coming weeks I will have my solution up. If anyone attempting to use the plugin in the meantime needs the assets, they can email me directly and I can either send them or prioritize getting the assets up on source control.

The github repo can be found here :

New UI

As you might notice I also did a complete redo of the UI. The previous version used render targets to display thumbnails of the connected vantage points you could transition to. It seemed like it would work in my head, but in reality the render targets are so small on the screen that they are too hard to read. There is also a fairly significant performance issue with having so many render targets.

Instead of using the render targets I decided to author some UI icons which I place on the screen roughly where the next vantage point is (done using some world-to-screen coordinate conversions). This method works quite well but fails when a connected vantage point is not visible because it’s outside of the camera’s frustum. Projecting from world to screen coordinates for objects not visible in the camera’s frustum is not a good idea, so I made a bit of a custom solution to deal with vantage points that aren’t in view.

Here is the old UI


In a nutshell, any offscreen vantage point uses the standard vantage point UI element with an arrow attached that points roughly in the direction of that vantage point. I say roughly because the arrow points within the UI’s flat 2D coordinate space at an object in 3D space.


To do this I do a series of steps. First, for each vantage point, I test to see if it’s within the camera’s frustum; if it’s not in the frustum then I continue to the next step. At this point I consider the current vantage point the center of a clock where its direction on the XY axis is 12 o’clock. From here I take the target vantage point and calculate a direction vector relative to the current vantage point. Finally I rotate the resulting direction so it’s relative to the source vantage point’s orientation.

FVector2D UHelperBlueprints::GetClockRotationOfActorRelativeToOtherActor(AActor * const TargetActor, AActor * const Source)
{
   FVector TargetLocation = TargetActor->GetActorLocation();
   FVector SourceLocation = Source->GetActorLocation();
   // Direction from the target back to the source vantage point.
   FVector FinalDirection = SourceLocation - TargetLocation;
   FVector SourceForward = Source->GetActorForwardVector();
   // Rotate around Z so the direction is relative to the source's facing.
   FinalDirection = FinalDirection.RotateAngleAxis(FMath::RadiansToDegrees(FMath::Atan2(SourceForward.X, SourceForward.Y)), FVector(0, 0, 1));
   return FVector2D(FinalDirection.X, FinalDirection.Y);
}

Now the reason I did it this way is that ultimately I’m projecting this vector onto the UI facing the correct direction. Imagine you are looking down on the actor in question from a bird’s-eye view where the actor’s forward direction vector is always north, and we draw an arrow from that actor to the vantage point in question. We then project this onto the UI and get an arrow that roughly points at the offscreen vantage point.

Of course this is just a direction vector at this point. We need to scale it so that it gets close to the edge of the screen without being cut off. I did most of this math logic in blueprints, which I think might have been a bit of a mistake. The initial iteration and setup was nice but the constant re-wiring as I iterated was pretty tedious and slow compared to writing code. Going forward I plan to do any sizable math formulas in code and then expose them as blueprint nodes. Essentially, if I’m doing lots of basic arithmetic in blueprints then that chunk of logic is a good candidate to be done in code.
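The scaling step can be sketched in plain C++ rather than blueprints. This is an illustrative stand-alone version (the `Vec2` type and margin value are assumptions, not the project’s code): clamp the direction so the arrow lands a fixed margin inside the screen bounds.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec2 { double x, y; };

// Scale a 2D direction from the screen centre so the arrow lands just
// inside the screen bounds, `margin` pixels from the edge.
Vec2 ScaleToScreenEdge(Vec2 dir, double width, double height, double margin) {
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (len == 0.0) return { width * 0.5, height * 0.5 };
    dir.x /= len; dir.y /= len;
    double halfW = width * 0.5 - margin;
    double halfH = height * 0.5 - margin;
    // Largest t such that t*dir stays within the (halfW, halfH) box.
    double tx = dir.x != 0.0 ? halfW / std::abs(dir.x) : 1e30;
    double ty = dir.y != 0.0 ? halfH / std::abs(dir.y) : 1e30;
    double t = std::min(tx, ty);
    return { width * 0.5 + dir.x * t, height * 0.5 + dir.y * t };
}
```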

UT Mapcore phase 1 submission

May 1st, 2016

Just finished the phase 1 greyblock submission for the Mapcore UT mapping contest. Here are some shots from the submission.







Unreal Tournament Level Design contest

April 30th, 2016

I decided to take it easy this weekend and do an entry for the first phase of the Mapcore Unreal Tournament level design competition. I haven’t played with Unreal’s brush editing tools that much so I thought this would be a good excuse to get up to speed with them. The brush tools are better than I expected but I’ve already discovered about 5 or 6 things I would change. Working on improved brush tools for the Unreal engine would definitely be an interesting and fun project to do.

Here’s an image showing a day’s progress from the initial sketch to a playable greyblock. It took about 5-6 hours.


LevelViz 003 : Adding a plan view and a custom editor

April 26th, 2016

For the next major feature of LevelViz I wanted to add the ability to seamlessly transition into a plan view so people could see a topdown view of the scene (as well as possibly side and front views). This is very similar to the orthographic projections of the top, front and side views of a scene in most 3d authoring packages. The idea is essentially to be able to generate nice drawings of the scene with little to no work required from the scene author.

Here is a preview of what the functionality looks like right now :

To accomplish this I had to do a few things. First I needed to get a decent looking outline effect. Second I needed to create a small editor for selecting the geometry that will contribute to the plan view. Finally I needed to figure out how to manipulate the camera so that it can seamlessly transition into a nice framing of the plan view. I’ll cover each of these 3 major steps over the course of this development log post.

Outline effect

Essentially what I want is clean lines around the bounds of a static mesh for each mesh we choose to be part of the plan view.

I considered a few options on how to get the look for the plan view. At first I considered rendering as a wireframe, but the issue with the default method of rendering as a wireframe is that by the time the mesh is processed on the GPU it has been triangulated, which is not desirable for this type of drawing. Here’s what it looks like :


One way I could possibly have fixed this is by not rendering any edge whose two attached faces are co-planar. It’s possible this could have worked, but I was worried about undesirable edges whose adjacent faces were slightly non-co-planar rendering by accident. I could somewhat reduce this issue by introducing a co-planarity threshold, but I started to worry about other edge cases I hadn’t anticipated. While considering this method I also came upon another method that accomplishes the same effect with less room for edge case errors.
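The co-planarity threshold idea amounts to comparing adjacent face normals. A minimal stand-alone sketch (the `Vec3` type and threshold value are illustrative assumptions):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;

double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// An edge is "soft" (skipped when drawing the wireframe) if its two adjacent
// face normals are within `angleThresholdDeg` of each other.
// Assumes n1 and n2 are unit length.
bool IsSoftEdge(const Vec3& n1, const Vec3& n2, double angleThresholdDeg) {
    double cosAngle = Dot(n1, n2);
    return cosAngle >= std::cos(angleThresholdDeg * kPi / 180.0);
}
```

The worry described above is exactly the threshold parameter: too tight and nearly co-planar faces still draw stray edges, too loose and real corners disappear.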

What I did was render all the objects of interest to a depth buffer, then use edge detection to give them a rendered outline. I used a well known edge detection technique which luckily has ample documentation on how to implement using the Unreal engine and its material system. Here’s what it looks like :
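The depth-buffer edge detection can be sketched on the CPU. This is a toy stand-alone version of the concept (the real effect runs in a post-process material on the GPU): a pixel is an edge where a Laplacian-style kernel over the depth buffer finds a discontinuity.

```cpp
#include <cassert>
#include <vector>
#include <cmath>

// Depth-discontinuity edge detection over a depth buffer stored row-major.
// A pixel is an edge when the Laplacian over its 4 neighbours exceeds
// `threshold`.
std::vector<bool> DetectDepthEdges(const std::vector<double>& depth,
                                   int w, int h, double threshold) {
    std::vector<bool> edges(depth.size(), false);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            double c = depth[y * w + x];
            double lap = depth[y * w + x - 1] + depth[y * w + x + 1]
                       + depth[(y - 1) * w + x] + depth[(y + 1) * w + x]
                       - 4.0 * c;
            edges[y * w + x] = std::abs(lap) > threshold;
        }
    }
    return edges;
}
```

Interior pixels of a flat surface have a near-zero Laplacian, so only silhouettes against the background (or against other depth levels) produce outline pixels.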


Adding a custom editor

Next I wanted to be able to select various static meshes to be flagged as meshes which contribute to the drawing of the plan view.

The core of this plugin relies on placing a “ViewManager” actor into a scene. The view manager does a lot of management of various objects relevant to the plugin, such as tracking vantage points as well as storing the array of meshes which will contribute to the plan view. Now I could have just exposed the plan view mesh array and added meshes one by one with the eyedropper, but that is tedious. I wanted a way to select a group of meshes and add them to the plan view. The problem is that if I select a bunch of meshes then I’m no longer selecting the view manager, and as such I can’t add them to the view manager’s array.


To solve this I decided to make a small floating window which persists even when the view manager is not selected. This was a good opportunity to start digging deeper into what it takes to augment the editor beyond simple details customization. I studied up on Michael Noland’s essential “Extending the Editor” youtube video, which is by far my favorite training video Epic has produced: rather than going over every detail to the point of being tedious, he shows the major ways you can extend the editor and then points the audience to various types and keywords to look up in the engine source code to see how things are done. One of the great things about a video like this is that it’s very resistant to obsolescence when new versions of the Unreal engine are released. Implementation details change frequently but the overall concepts stay the same.

You can view the video here : Extending the Unreal editor

After studying how an editor is implemented and doing some small experiments adding my own commands, I concluded that my use case doesn’t justify a new full blown editor, just a new window. You launch the window from the view manager’s details panel and it persists until you close it. Here’s how you would launch it (the view manager is selected in this image).


Setting this up is fairly easy. First I have to set up a new button for launching the editor. This is straightforward: I do a details customization on the view manager where I add the button.

void FViewManagerDetailsCustomization::CustomizeDetails(IDetailLayoutBuilder& DetailBuilder)
{
	DetailBuilder.GetObjectsBeingCustomized(/*out*/ ObjectsBeingCustomized);
	IDetailCategoryBuilder& ViewManagerCategory = DetailBuilder.EditCategory("Arch Viz Debugging", FText::GetEmpty());

	ViewManagerCategory.AddCustomRow(LOCTEXT("Plan View", "Plan View"))
	.WholeRowContent()
	[
		SNew(SHorizontalBox)
		+ SHorizontalBox::Slot()
		.Padding(2.0f, 0.0f)
		[
			SNew(SButton)
			.Text(LOCTEXT("Launch Editor", "Launch Editor"))
			.ToolTipText(LOCTEXT("LaunchEditor_ToolTip", "Launches the ViewManager standalone editor."))
			.OnClicked(this, &FViewManagerDetailsCustomization::LaunchViewManagerEditor)
		]
	];
}
From here, when the button is clicked, the callback requests a reference to my editor module, which has a data member of my editor’s type, and launches the editor from there. This is how I request the module.

	static FName ArchVizEditorModuleName("ArchVizViewerEditor");
	FArchVizViewerEditorModule& ArchVizEditorModule = FModuleManager::GetModuleChecked<FArchVizViewerEditorModule>(ArchVizEditorModuleName);

As an aside I found some really interesting/weird behavior. I can actually remove the view manager editor data member from my module and directly load it like this :

	static FName ArchVizEditorModuleName("ArchVizViewerEditor");
	FViewManagerEditor& ArchVizEditorModule = FModuleManager::GetModuleChecked<FViewManagerEditor>(ArchVizEditorModuleName);

This is weird for a number of reasons. First, it doesn’t implement IModuleInterface, which I had suspected was required. Second, it somehow creates an instance of FViewManagerEditor, which I would have thought would involve calling the default constructor. As a test I made the default ctor private and added a specialized ctor, and it STILL managed to create a valid instance of the class. I did some light digging through the source code for the module manager singleton and couldn’t find any template magic that was somehow able to create a valid instance of my editor without a default ctor.

There is one issue however. The editor will crash on exit trying to find a module. I suspect that while this works, it’s a side effect rather than intended usage, and it’s probably dying on module de-initialization. Requesting my actual module, with the editor as a data member, avoids the crash.

Anyways, from here I simply add 4 buttons for the actions I wish to perform. When a button performs an action that requires a reference to the view manager, I do a heavy query of all the objects in the editor’s world for the view manager and cache that reference. With the view manager reference available to my editor, adding and removing objects from the plan view is easy. It’s merely a matter of getting an array of all selected static mesh objects and adding them to the view manager’s array of plan meshes.

Setting up the transition

Next I wanted to be able to do a nice transition to plan view from any arbitrary vantage point. There’s a few challenges to this.

First is the camera projection. The camera uses a perspective projection matrix, which is not ideal for a top-down plan view: it would make the plans look like they are jumping out at you rather than like a top-down drawing. This requires switching the camera to an orthographic projection matrix. That is very easy to do in Unreal; it’s a matter of setting a single boolean plus the typical values that define an orthographic projection, such as the width and height of the projection volume. What Unreal doesn’t provide is a way to smoothly lerp from a perspective to an orthographic projection matrix. This is something I’ll be tackling in the near future.
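One naive approach to that missing blend is an element-wise lerp between the two matrices. This stand-alone sketch uses simplified matrix conventions, not Unreal’s FMatrix, and the element-wise blend is a known shortcut that is not perceptually perfect mid-transition:

```cpp
#include <cassert>
#include <array>
#include <cmath>

using Mat4 = std::array<double, 16>;  // row-major, zero-initialized below

// Symmetric perspective projection (just enough structure for the blend demo).
Mat4 Perspective(double fovYRad, double aspect, double zn, double zf) {
    double f = 1.0 / std::tan(fovYRad * 0.5);
    Mat4 m{};
    m[0] = f / aspect; m[5] = f;
    m[10] = zf / (zn - zf); m[11] = -1.0;
    m[14] = zn * zf / (zn - zf);
    return m;
}

Mat4 Orthographic(double w, double h, double zn, double zf) {
    Mat4 m{};
    m[0] = 2.0 / w; m[5] = 2.0 / h;
    m[10] = 1.0 / (zn - zf); m[14] = zn / (zn - zf);
    m[15] = 1.0;
    return m;
}

// Element-wise blend; t=0 gives perspective, t=1 gives orthographic.
Mat4 LerpProjection(const Mat4& a, const Mat4& b, double t) {
    Mat4 m{};
    for (int i = 0; i < 16; ++i) m[i] = a[i] * (1.0 - t) + b[i] * t;
    return m;
}
```

An alternative with better visual behavior is animating the FOV toward a very small angle while dollying the camera back, then snapping to true orthographic at the end.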

Next is generating a transform for the camera position to look down on the scene. To do this I added a post-process component to the view manager which the user can set up to define the region for the plan view. I need the post-process component for the outline rendering anyway, so this essentially kills two birds with one stone. This way I make the transform conform to the rotation of the post-process volume and also use it to derive the parameters for setting up the orthographic projection matrix.

One thing I still need to do is add an editor-only arrow component so people can know which direction is up on the plan when it’s generated.

Here’s what the volume looks like in the editor.


And here is the topdown plan view generated from this volume when viewing ingame (sorry about the little test icon in there, that’s something I’m currently working on that’s unfinished).


If the view is a little confusing here is a roughly aligned view in the editor of what the plan view is displaying.



If this blog post was a little much to take in, I apologize. I treat these development blogs like a stream of consciousness and make a point of keeping the writing time under an hour so I can actually focus on development 🙂

LevelViz 002 Vantage point system first version review

March 20th, 2016

I’ve just gotten the first version of LevelViz done. It consists of 2 parts. LevelViz itself and a small environment I’m doing in Unreal to help show off how it works. Here is a screenshot from yesterday showing the kitchen area of the environment I’m working on.


It introduces the concept of vantage points which are essentially static cameras placed around the level. The author then links these vantage points together and you end up with a graph of camera connections. From this graph the UI is generated with navigation widgets.

Here is a video I put up of me authoring some vantage points, linking them together, and then viewing the level and using the generated UI to navigate around the environment.

Here is how the overall process works.

For example here are 3 vantage points that are linked.


The red lines between the cameras are editor-only lines I draw between cameras so you can see their relations. Any 2 cameras joined by a red line can be transitioned between. If you were on a camera with 2 red lines leading to other cameras, the UI would generate 2 buttons on the screen with thumbnails showing the other cameras’ views. If you press one, your camera transitions to the camera at the other end of the red line.

The UI is pretty simple right now. It has a series of custom image widgets that have render target textures. There is some functionality to tell the UI which image widgets are bound to which vantage points. When that binding happens, the image widget’s render target is drawn to by the vantage point. Right now it’s fully rendering to the target all the time, but my plan is to change that to render only once when the level loads so that I can get a performance benefit. This project is mainly for static scenes and I want it usable on touch devices, so having a lot of render targets actively drawing all the time is not a good idea for performance, especially when the work can logically be offloaded to the loading phase rather than the ingame phase.

Here is an ingame shot of what it would look like if you were at a vantage point with 2 links to other vantage points. Clicking either thumbnail would trigger a transition to the linked vantage point.


The vantage points are implemented solely in C++ and I’ve begun to do some work to make editing vantage points easier for the user. There are 2 parts to this.

The first part is relatively simple. I override PostEditChangeProperty(FPropertyChangedEvent & PropertyChangedEvent) from AActor so that when I select a vantage point and link it to another vantage point, the linking is bi-directional. I do this all in code. It removes a significant amount of tedium for the user setting up links between vantage points on both sides.
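The mirroring logic can be sketched with stand-in types (this is illustrative, not the actual actor code; `VantagePoint` and `MirrorLinks` are hypothetical names): after one side’s link array changes, make sure every link appears on the far end too.

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Stand-in for the vantage point actor class.
struct VantagePoint {
    std::vector<VantagePoint*> Links;
};

bool Contains(const std::vector<VantagePoint*>& v, VantagePoint* p) {
    return std::find(v.begin(), v.end(), p) != v.end();
}

// Call after the user edits a's link array (e.g. from a
// PostEditChangeProperty-style hook): mirror every link on the far end.
void MirrorLinks(VantagePoint& a) {
    for (VantagePoint* other : a.Links) {
        if (other && !Contains(other->Links, &a)) {
            other->Links.push_back(&a);
        }
    }
}
```

The containment check makes the operation idempotent, so it is safe to run on every property-change event.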

The second part involves customizing the details panel for my vantage point actor type. What I want to do is implement a custom editor in the details panel for vantage points so things like blend time and curves can be edited, rather than just the default array view the Unreal engine provides. I’m currently in the process of figuring this out but I have a basic custom editor set up that currently only displays a button per object. Here is an image to hopefully better explain what I’m trying to do.


Just to get the custom details up takes a fair amount of work and understanding of some core concepts.

The first step is to create a new code module. By default a new code project in Unreal generates a runtime module; what we need to modify the editor is an editor module. The Unreal module system takes a little while to get acquainted with, but once you figure it out it’s a great pattern for making modular, composable applications and editor extensions with the Unreal engine.

I’ve been quite surprised at how reasonable the build times for Unreal engine C++ projects are. I suspect their module system helps a lot to reduce build times by simply being able to better determine when not to build something. Anyone serious about programming in the Unreal engine environment should put the module documentation on their reading list.

The second step is to make a class that inherits from IDetailCustomization. I should warn 4.10+ users: I found that the typenames in the 4.9 documentation no longer exist in the codebase. I had to do a fair amount of grepping through engine code to see how details customization was implemented in 4.10. It’s generally the same, but a lot of the typenames have changed, so keep that in mind if you are having trouble with details customization in 4.10.

Once you inherit from IDetailCustomization you can override a method called CustomizeDetails(IDetailLayoutBuilder& DetailBuilder), which passes you a builder object. With this you can modify how the details panel is built and inject your own UI code.

Finally the last step involves drawing the UI using Slate which is the Unreal engine’s GUI drawing framework. Slate is really interesting. It’s essentially a declarative UI focused DSL that is built from liberal operator overloading. This means that while it looks different, it’s actually valid C++ and as such benefits from a lot of compile time error checking. It’s quite cool but also carries a bit of a learning curve. Here’s a snippet that simply draws the 2 buttons in the screenshot above. This is run once per button (the code is inside a loop).

VantagePointCategory.AddCustomRow(LOCTEXT("MergeSearchText", "Merge"))
.WholeRowContent()
[
	SNew(SHorizontalBox)
	+ SHorizontalBox::Slot()
	.Padding(2.0f, 0.0f)
	[ SNew(SButton) ] // button contents omitted in the original snippet
];

I wasn’t particularly happy with the first version of my UI. So while I will be working on making a much nicer UI I will also be focusing on making the UMG content asset abstract enough that if a user wanted to create their own UI they would only need to make a new UMG widget and inherit from my custom UMG class. They could then have all the data they need from the plugin to create their own UI if they choose to do so.

Essentially I provide the model, controller and an optional view and the user can choose to create a new view if they like.

That covers the broad picture of what has been done for this update but I wanted to touch on one more thing. The ocean and sky system. This is part of a large ongoing unreal engine community project which you can use in your unreal projects. The group of people involved in this have been doing amazing work. If you want to learn more about the ocean and sky system the home of the project is on this unreal forums thread :

Here’s a screenshot of the systems in my scene.


LevelViz 001 Introducing LevelViz

March 16th, 2016

My next project for the Unreal engine will not be a game. It will be a special set of tools which artists can use to set up an interactive viewing environment to display their art in-engine. Users will set up various camera vantage points in their scene and link them together. The project will then use this to make a UI for end-users to be able to navigate and view the environment without having to use the complex control schemes you typically see in video games. The idea is that this UI will be ideal for both PC and touch device usage.

I’ll also be working on a new environment because doing environment art is relaxing for me 🙂

After completing my previous unreal engine project for the Tower Jam game jam I’m very comfortable with the game systems in the engine I hadn’t previously worked with such as AI, animation, UI and setting up a custom character. I decided for the next step I should pick a project that digs a lot deeper into the internals of the Unreal engine.

The major components to this project will be the following

  • Unreal editor modification for specific data types
  • In-editor custom visualization
  • Camera systems
  • Advanced Native/Blueprint communication
  • Advanced UMG
  • Slate UI framework
  • Authoring a plugin for the Unreal engine

I don’t intend for this to be a particularly large project ultimately. I’m anticipating a lot of difficulty learning how to build and deploy a clean plugin for the Unreal engine but I’ve been surprised at how easy things have been in the past so maybe I’ll get lucky.

Here is my current set of milestones. Anything that is crossed out is complete already so as you can see I’m currently finishing up the second milestone.

Milestone 1

  • Feature
    • Implement first version of a vantage point and vantage point manager. The vantage point represents the view at a specific position while the manager handles transitioning between vantage points.
    • Get vantage point transitions working. Use existing unreal camera blending logic if possible.
  • UI
    • Setup basic UI with thumbnails that represent a vantage point view.
  • Art
    • Setup bare bones scene for testing.

Milestone 2

  • Feature
    • User can select the first vantage point to use when starting up the game, rather than the first camera in the vantage point array.
      • Make sure that we also modify the player start to be near to the vantage point so that the game doesn’t start up with a crazy camera blend involving the camera moving across the entire level.
  • UI
    • Setup basic UI with thumbnails that represent a vantage point view. Use render targets so that the thumbnails show what they represent.
    • Animate buttons in and out of the scene when transitions occur.
      • Have the buttons slide off the left side of the screen when a transition occurs.
      • Have a small button on the bottom left of the screen to show/hide thumbnails.
  • Art
    • Spend a small amount of time working on the test scene.

Milestone 3 

  • Editor
    • In editor representation of the cameras transition relations to each other. See image below.
      • 03
    • Customize details panel for editing vantage points and transition relationships. See image.
      • 04


For human interest here are the first 2 pages I wrote up for the initial design of this project before moving over to Evernote.




Next here is some evolution of my test scene. I wanted to make a nice modern beach house hanging off a cliff. Sort of like a lot of the really fancy houses you might think of on the coastline of California. Here is what it first looked like.



Turns out I didn’t like the tall protrusion in the middle of the scene so I removed it for a flatter layout. I also adjusted the lighting to roughly match the direction I’m intending to have for the final project. I want it to be like a sun setting towards the deck of the house.



Finally I started some work on the back portion of the deck. This is where the open layout of the main area of the house is. It transitions into the deck with the idea that almost anywhere in the house always has a nice view of the ocean. The separated portion is the master bedroom.



TowerJam 005 Final entry and AI systems

February 23rd, 2016

TowerJam 2016 is complete.

At the time I decided to do the jam my schedule was a lot more free. Unfortunately life happened: my workload increased and I had a bout with the worst sickness I recall ever having in my lifetime. I could have easily given up and not submitted to the jam, but I decided instead to massively scope down to get something complete and submitted. I consider this a big victory for myself.

It’s submitted and available on GameJolt here

The game is not particularly fun; in fact I would recommend watching a youtube video of the game being played rather than downloading and playing it. You can do that right here!

Here’s some screenshots of the final game.





One of the development goals for this jam was to really dig into the aspects of the Unreal engine I’m unfamiliar with. This includes but is not limited to :

  • Character animation state machines
  • AI – Blackboards
  • AI – Tree editor
  • Matinee
  • The C# build system
  • Input abstraction
  • Blendspaces
  • UMG

I can now say I’m far more familiar with these systems. I’m so glad I dived into AI and character animation state machines. I previously feared working with them due to unfamiliarity with those types of systems, but now that I understand them I feel I’m really able to use them effectively. In fact I’m now wondering why I was so fearful of learning them in the first place.

I think after this project, future projects in the Unreal engine will be more about making the game rather than spending most of my time learning how to do something in the engine itself. Unreal has a pretty steep learning curve if you want a holistic education, but it’s well worth the time. I’ve used other engines in the past where getting something functional is quite easy, but this comes at the cost of a project being tough to maintain as it grows. In some cases the issues at scale are so bad that ultimately not using an engine would have been less time consuming. Unreal seems to handle this well, and the wealth of large scale games that have already been released on the engine over the past decade is further proof it’s a capable engine.

The final bit of work I had to do to get my game complete was to implement an enemy, which meant learning how AI works in Unreal. Luckily, from a conceptual standpoint the methods Unreal uses to implement AI are very standard in both the game and AI world. I’d been exposed to the concept of blackboards in previous non-game related jobs, and I’ve seen behavior trees used in other engines I’ve worked with. That said, I still had to learn how to use them in the context of Unreal.

Now I’m far from a professional AI programmer so take my opinion with a grain of salt but I think Unreal has done a very good job implementing tooling for AI via their blackboard and behavior tree editors.

A blackboard is essentially a set of key/value pairs where the value is a variant. In some implementations I’ve used, the type of the value is unknown, so you have to remember what the type is and apply appropriate casts when getting and setting values from the blackboard.

Here’s what a blackboard looks like. As you can see it’s just a group of data that we wish to track. Unlike other blackboards I’ve used I can deduce the type of a value to some degree.


In Unreal, implementing a blackboard was dead simple. I merely define the data that I’m interested in tracking and then periodically update the data in my enemy controller blueprint. From there I bind a blackboard to a behavior tree and use the data in the blackboard to make decisions in the behavior tree.
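The key/value-with-variant idea can be sketched in plain C++. This is a toy stand-in, not Unreal’s UBlackboardComponent API, with hypothetical key names:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>

// A blackboard in miniature: string keys mapping to a typed variant, so
// readers get a type-checked value back instead of applying a raw cast.
using BlackboardValue = std::variant<bool, int, float, std::string>;

class Blackboard {
public:
    void Set(const std::string& key, BlackboardValue v) {
        data_[key] = std::move(v);
    }

    template <typename T>
    T Get(const std::string& key) const {
        // Throws if the key is missing or the stored type doesn't match T,
        // surfacing the kind of mistake a raw-cast blackboard would hide.
        return std::get<T>(data_.at(key));
    }

private:
    std::map<std::string, BlackboardValue> data_;
};
```

A controller would periodically `Set` values like a target reference or distance, and the behavior tree side would `Get` them when evaluating decorators.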

Here’s the relevant section of my controller where I update and set values in my blackboard. What I’m doing here is updating the “Target” value every update along the bottom line of execution.


The behavior tree editor was easy enough to use after I got past some roadblocks. I have to admit for a few hours I was quite stuck. This is partially due to some lacking documentation and partially due to my inexperience with the editor. My main issue was I wasn’t able to determine how to set up various conditional tests in a behavior tree node. It was only after looking around the Unreal forums at screenshots of other people’s trees that I realized a tree node can be modified with decorators that execute the conditional logic determining whether a node should be traversed. For instance, I have a node that tells the enemy to follow a target if it is within range of that target.

The other gotcha with the behavior tree is that task nodes, such as an "approach target" task, only show up as available to add to the tree if the appropriate type exists in the accompanying blackboard. So for the "approach target" node to appear, I need an actor reference entry in my blackboard. This makes sense, but it was initially confusing because I was simply looking for the task while playing with the editor. Now that I know this restriction, the next time I look for a pre-defined task I'll make sure the data I believe it needs is already in my blackboard.

Here is what my enemy behavior tree looked like at the end of the project. The decorator value that determines whether or not the enemy should approach is a blackboard value set from the controller blueprint. I suspect I could have gotten rid of that bool and replaced it with a validity check on the target actor reference. I will definitely look into doing that in the future (the less state the better).


You can learn more about blackboards and behavior trees here:


TowerJam 004 Setting the character up

February 6th, 2016

I've been working on getting the character set up for movement and a simple attack combo system. This is my progress on the character at the time of writing this post.

This post was written in 3 parts as I progressed through the process of setting the character up.

Part 1

To start off, I've added 2 new action mappings to the project for light and heavy attacks. I'm very impressed with the action mapping system. It's a nice way to add new actions to characters without locking yourself in too much at the code level. By that I mean when you want to define a new action mapping (i.e. an input abstraction), you go to the project settings, choose the action mapping type and then give it a name. The name you give the mapping is what you use across the editor and the codebase to refer to it. I'm not usually a fan of stringly typed code or using strings to reference data, but in this case I think it's actually quite an elegant solution: it allows easy extensibility, and you can add new mappings without having to recompile the code portion of the project.

For instance, to bind a light attack named "AttackLight" I would use this line of code in my character's .cpp file.

InputComponent->BindAction("AttackLight", IE_Pressed, this, &ATowerJamCharacter::AttackLight);

Here I'm referencing the action name I defined in the project settings as well as a callback for when the action event occurs. Inside the callback I set a bool called IsLightAttacking to true and later let the animation system use it to determine which animation plays.
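As a rough illustration of why string-named bindings are so extensible, here's a standalone C++ sketch of the same idea. This mimics the concept behind BindAction only; it is not Unreal's input API.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// A toy version of string-named action bindings: actions are registered
// by name and dispatched when input events arrive, so new mappings can
// be added without touching a compiled enum or recompiling callers.
class InputBindings {
public:
    void BindAction(const std::string& name, std::function<void()> callback) {
        bindings_[name] = std::move(callback);
    }

    // Called by the input layer when the named action fires;
    // unbound names are silently ignored.
    void Dispatch(const std::string& name) {
        auto it = bindings_.find(name);
        if (it != bindings_.end()) it->second();
    }

private:
    std::unordered_map<std::string, std::function<void()>> bindings_;
};
```

The trade-off is the usual stringly-typed one: a typo in the name fails silently at runtime rather than at compile time, which is why I'm normally wary of this pattern.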

The animation system is a little complicated at first, but I'm now at the point where I can see the value in such a complex solution. It consists of many different components: code, blueprint event graphs, special state machine graphs and state transition graphs. I've had some experience setting up characters in the Source and Quake engines, but this was quite different from an implementation standpoint. It took me about 3 hours of studying until I got comfortable with the tool. Compared to other tools I've used in the past, that is about the fastest ramp-up time I've ever experienced.

Here is the starting point for learning about how character animation works in Unreal.

Persona animation system documentation

Part 2

Once I got the light and heavy animations in and working, I started looking at how I could do Devil May Cry style combo attacks, where one attack animation can be cancelled into another attack animation. I started out by purchasing a package of sword animations from the Unreal marketplace. I had a $30 credit from when Unreal 4 went free, so I figured this was easily worth the money. The animations I used are available here on the Unreal marketplace.

My first attempt, aimed at getting something on screen as fast as possible, was both naive and ugly: if I was in the first attack of a combo, I allowed switching to the next animation in the combo whenever the player pressed the attack button and the current animation was more than 50% complete. This caused the animation to change far too abruptly.

Now, I do want the animation to change somewhat abruptly; the technique is literally called cancelling, so I figured I had the right overall idea but was approaching it too bluntly. I went back to the Unreal AnswerHub with specific questions, which didn't get answered directly but did link to the relevant documentation.

After reading the docs, I determined that the next step to improving the animation transitions was to pick the transition points with more care than simply allowing them after the animation was more than 50% complete. For this I used animation notify events. Here's a screenshot of them set up in the timeline. As you can see, I have 4 custom events. The CanCancel and ResetCancel regions define the portion of the animation during which the player can cancel into the next attack.
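A rough sketch of the cancel-window logic those two events drive might look like this in plain C++. This is purely illustrative; in Unreal the notify events fire from the animation timeline and the logic lives in the animation blueprint, not in code like this.

```cpp
// A sketch of the cancel-window idea: instead of "allow cancelling after
// 50% of the animation", two notify events (CanCancel / ResetCancel)
// open and close a window during which an attack input is honored.
struct AttackState {
    bool inCancelWindow = false;
    int comboIndex = 0;

    // Fired by the animation's notify events.
    void OnCanCancel()   { inCancelWindow = true; }
    void OnResetCancel() { inCancelWindow = false; }

    // Called when the player presses attack; returns true if we
    // cancel the current animation into the next combo attack.
    bool TryComboAttack() {
        if (!inCancelWindow) return false;
        ++comboIndex;
        inCancelWindow = false;  // the next animation reopens its own window
        return true;
    }
};
```

The point of the window is exactly what the notify placement controls: attack presses outside it do nothing, so the cancel only happens at hand-picked moments in the animation.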



Here is a screenshot of where these events are used. This is inside the animation blueprints event graph.


At first I thought my animation notifies weren't working correctly or were inaccurate. This was exacerbated by the fact that animation notify events have a property called "montage tick type", which seemed to indicate the timing accuracy with which the event fires. I chose the more accurate but more computationally expensive "Branching Point" value. It turned out the biggest issue was that I don't have an eye for animation and had simply placed the animation notify at a badly chosen location in the animation timeline. I moved the notify events to earlier points in the timeline than my instincts initially suggested, and everything started looking good.

Once I sorted out how to cancel an animation based on an animation notify event, things were massively improved, but I had accidentally set the blend logic transition property to "Custom" while playing with options and ran into a bug: the character model would T-pose for the duration of the animation transition. The temporary workaround was to set the transition duration to 0.0 seconds; by removing the transition I removed the T-pose error. Ideally, though, I'd like a slight transition between animations. It might turn out to look bad, but I'd at least like to see it before deciding whether or not to use transition blending. Once I set the Blend Logic property back to "Standard Blend", everything was solved.

Once the blend was in I was BLOWN AWAY by how good it looked. In my head it seemed like animation blending couldn't possibly yield such great results, but I was dead wrong. This first attempt uses the linear blending mode. Here are the first combo results; the first animation cancels into the next one.



Part 3

Finally, to wrap it up, I decided to add 2 major "hub states". That is to say, I have 2 idle poses: a standard idle pose with the character's weapon sheathed, and an attack stance which the player must be in to execute attacks. When in the attack stance, the player's movement speed is reduced.
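The stance idea boils down to a tiny state machine. Here's a hedged sketch in plain C++; the speed values are made up for illustration, and in the actual project this lives in the animation state machine and character blueprint rather than a class like this.

```cpp
// A sketch of the two "hub state" idea: a sheathed idle and an attack
// stance, where the stance governs both whether attacks are allowed
// and how fast the character moves.
enum class Stance { Sheathed, Attack };

struct CharacterStance {
    Stance stance = Stance::Sheathed;

    // Movement is slower while in the attack stance.
    float MoveSpeed() const {
        return stance == Stance::Attack ? 300.0f : 600.0f;
    }

    // Attacks are only allowed from the attack stance.
    bool CanAttack() const { return stance == Stance::Attack; }

    void ToggleStance() {
        stance = (stance == Stance::Sheathed) ? Stance::Attack
                                              : Stance::Sheathed;
    }
};
```

Keeping both the attack gating and the speed penalty derived from the single stance value means there's only one piece of state to keep consistent, in the same "less state the better" spirit as the behavior tree cleanup above.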

Here is the state machine graph that shows a general overview of the flow between all the animations and animation states.




TowerJam 003 Adding the forest area

January 23rd, 2016

Did some work this morning on finishing up the base of the bridge area and adding in the forest area. Now there is a complete path from the starting point to the tower. As you move towards the core of the forest, I shift the tint towards a washed-out greenish color.

I also took a second to step back and focus on what to do next.

  • Add attack animations and logic to the character
  • Finish the graph paper floor plan for the first floor of the tower
  • Setup the streaming level system to allow me to transition into the first floor of the tower

Here are some screenshots of the forest and bridge areas.