After almost three years the blog is back, and I've ditched WordPress for a static site generation library that I authored. There are many reasons I chose to write my own generator rather than use an existing solution. Not good reasons, but reasons! Here they are:
Solves my exact use case
Since the target user is me, I get software that solves my exact use case. It's rare to be able to get software so precisely tailored to your needs.
Managing a static website package doesn't sound nearly as fun as writing my own static site generator! I got a chance to play with a lot of different .NET libraries.
I had a great time using the Managed Extensibility Framework (MEF), for reasons I'll explain in the next section. I enjoyed using the Razor templating engine in ASP.NET MVC, so I pulled it out rather than write my own. I used System.Reactive very lightly, and it was really interesting to play with for the problems it solved.
Learning and practice
Besides getting better acquainted with a lot of prominent .NET libraries, I was able to learn a lot.
At work you usually have to take a fairly conservative approach to designing software, but at home I'm free to try all sorts of crazy program designs. Since I have no deadline, I'm also free to heavily refactor and restructure my code repeatedly.
MEF was really useful here. I basically used it as a way to reduce friction when heavily restructuring my program. When my dependencies change, I just need to edit a class's importing constructor; I don't need to deal with all the tedium of fixing up constructor calls everywhere.
After writing the program, I can also now see how I would pull out components into a plugin system if I wanted to make this generator applicable to other websites I'd like to build, such as a level design blog or a recipe database. The way to design the plugin API is a lot clearer after writing the program. I imagine that if I do take the time to write a plugin API, it will be a lot stronger than if I had designed it before writing a static site generator.
I’ve been taking some time off work and every now and then I’ve been working on a small arcade flight shooter game I currently call Lune.
I don’t have any harsh timelines or anything, so I’ve been able to do things like completely scrap and re-implement the input and play control systems three times. It’s been interesting seeing how good the controls feel when I try different things, like having the analog sticks control the player’s ship directly, or having the analog sticks control the player’s reticle in screen space and having the ship try to “catch up” to the reticle.
There is no clearly superior method from my tests so far; each has both positive and negative qualities. Right now I’m directly controlling the ship and then emitting two points along a ray originating from the player ship’s transform. I then convert the two points’ coordinates from world space to screen space and use those coordinates to determine where to draw the aiming reticles on the screen.
LevelViz 004: First version now available, creating a plugin and making a new UI
2016-05-03 12:16 PM
I’ve now hit a major milestone for LevelViz. All the general features I wanted at conception are now implemented. There’s still a lot that can be done, though. Mobile support will need some work for the plan view due to platform support of post-processes, the tools could use a second pass and some cleanup, and the UI can always benefit from a little more time and love.
Here is a video showing how to author a new ArchViz scene, followed by a demonstration of the resulting built application.
Conversion to plugin and GitHub source
Until recently I have been developing LevelViz as a standard Unreal project, which means it can’t easily be used in other projects. I spent about half a day converting the project to a plugin so that it can be used as intended. Implementing plugins for different engines and frameworks can sometimes be quite difficult or frustrating; I’m happy to say that overall the process is quite easy with Unreal. If you’ve already been properly setting up your code modules, then you have actually already done most of the work.
The plugin source is now on GitHub. There are some UI assets recommended for use with the plugin which I don’t have on the repository yet. I’m still undecided on whether I will distribute them in a separate download, similar to how Epic does with the Unreal engine, or whether I will update the GitHub repo to contain an entire driver project. In the coming weeks I will have my solution up. If anyone is attempting to use the plugin in the meantime and needs the assets, they can email me directly and I can either send them or prioritize getting the assets up on source control.
New UI
As you might notice, I also did a complete redo of the UI. The previous version used render targets to display thumbnails of the connected vantage points you could transition to. It seemed like it would work in my head, but in reality the render targets are so small on screen that they are too hard to read. There is also a fairly significant performance cost to having so many render targets.
Instead of using render targets, I decided to author some UI icons which I place on the screen roughly where the next vantage point is (done using some world-to-screen coordinate conversions). This method works quite well, but it breaks down when a connected vantage point is outside of the camera’s frustum. Projecting from world to screen coordinates for objects outside the camera’s frustum is not a good idea, so I made a bit of a custom solution to deal with vantage points that aren’t in view.
Here is the old UI
In a nutshell, any offscreen vantage point uses the standard vantage point UI element with an arrow attached that points roughly in the direction of the vantage point. I say roughly because the arrow is pointing, in the UI’s flat 2D coordinate space, at an object in 3D space.
To do this I follow a series of steps. First, for each vantage point, I test whether it’s within the camera’s frustum; if it’s not, I continue to the next step. At this point I consider the current vantage point the center of a clock, where its forward direction on the XY axis is 12 o’clock. From here I take the target vantage point and calculate a direction vector relative to the current vantage point. Finally, I rotate the resulting direction so it’s relative to the source vantage point’s orientation.
```cpp
FVector2D UHelperBlueprints::GetClockRotationOfActorRelativeToOtherActor(AActor* const TargetActor, AActor* const Source)
{
	// Direction between the two vantage points on the XY plane.
	FVector TargetLocation = TargetActor->GetActorLocation();
	FVector SourceLocation = Source->GetActorLocation();
	FVector FinalDirection = SourceLocation - TargetLocation;

	// Rotate the direction around the Z axis so it is expressed relative to
	// the source actor's forward vector (i.e. forward is "12 o'clock").
	FVector SourceForward = Source->GetActorForwardVector();
	FinalDirection = FinalDirection.RotateAngleAxis(FMath::RadiansToDegrees(FMath::Atan2(SourceForward.X, SourceForward.Y)), FVector(0, 0, 1));

	// Only the XY components matter for the flat UI arrow.
	return FVector2D(FinalDirection.X, FinalDirection.Y);
}
```

Now, the reason I did it this way is that ultimately I’m projecting this vector onto the UI facing the correct direction. Imagine you are looking down on the actor in question from a bird’s-eye view, where the actor’s forward direction vector is always north, and we then draw an arrow from that actor to the vantage point in question. We then project this onto the UI and get an arrow that roughly points at the offscreen vantage point.
Of course, this is just a direction vector at this point. We need to scale it so that it gets close to the edge of the screen without being cut off. I did most of this math logic in Blueprints, which I think might have been a bit of a mistake. The initial iteration and setup was nice, but the constant re-wiring as I iterated was tedious and slower than writing code. Going forward I plan to do any sizable math formulas in code and then expose them as Blueprint nodes. Essentially, if I’m doing lots of basic arithmetic in Blueprints, that chunk of logic is a good candidate to be done in code.