
Archive for September, 2016

Fog of War

Motivation

In previous posts I’ve described my guiding design principle as “the nostalgia of authenticity”.  The physical board game does not visually represent fog of war, but I decided fog of war is a good idea (and also good experience to implement).  Fog of war is in line with the general idea of making the game very similar to the physical board game, but it’s an enhancement that makes sense for a video game version.  I think of it as part of the user interface – a way of communicating information about the game state to the player.

Unlike other games that have fog of war (such as the classic RTS games Warcraft and StarCraft), HeroQuest’s fog of war only fogs out unvisited squares on the board.  There’s no need to fog out miniatures (furniture, monsters/heroes), because once a square or miniature is revealed, it stays revealed (until we get to the next quest).

Idea – Draw Batching

My first idea was to just reuse actor boxes like the ones we use for highlighting movement and action squares.  Here’s a screenshot of my low-effort gray-out-with-actors implementation.

[Screenshot: gray-out-with-actors fog of war]

Unfortunately it dropped the FPS to a crawl, which is kind of sad for drawing fewer than 26×19 simple one-color boxes.  I learned that UE4 (as of 4.12) apparently does not do automatic draw call batching – ie for each material, combine vertices from different actors into a single vertex buffer so that they can be drawn in a single draw call.  One issue with implementing automatic draw call batching is that when you aggregate vertex buffers, you have to consider culling – you may not want to just blindly put all vertices from the same material in the same vertex buffer, because the vertex locations for a particular material are not necessarily close to each other in world space.

UE4 has an instancing system, but apparently this doesn’t actually reduce draw calls – it just sorts the draw calls so that actors with the same material are drawn in order.  See ( https://answers.unrealengine.com/questions/127435/using-instanced-meshes-doesnt-reduce-draw-calls.html ).

Unity can do draw call batching ( https://docs.unity3d.com/Manual/DrawCallBatching.html ) with some caveats.  In addition to culling, transparency must be considered.

It would be nice to have the option to give each actor a group label for automatic draw call batching – for my particular use case, putting my fog of war actors into a single vertex buffer makes sense.  Culling is unnecessary for 26×19 simple box actors, and there’s no transparency.  So dynamically generating a fog of war vertex buffer in C++ code was an option – see the sketch below.
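I never actually wrote this version, but here’s a rough sketch of what it could look like using UE4’s UProceduralMeshComponent (function and variable names are mine, for illustration): build one quad per fogged square into a single mesh section, so the entire fog layer is one draw call with one material.

```cpp
// Hypothetical sketch -- not code I actually shipped. Builds every fogged
// square into one mesh section so the whole fog layer is a single draw call.
// Assumes a row-major 26x19 bool grid of fogged squares.
#include "ProceduralMeshComponent.h"

void RebuildFogMesh(UProceduralMeshComponent* FogMesh,
                    const TArray<bool>& Fogged, float SquareSize)
{
    TArray<FVector> Vertices;
    TArray<int32> Triangles;

    for (int32 Y = 0; Y < 19; ++Y)
    {
        for (int32 X = 0; X < 26; ++X)
        {
            if (!Fogged[Y * 26 + X])
            {
                continue;
            }

            const int32 Base = Vertices.Num();
            const float X0 = X * SquareSize;
            const float Y0 = Y * SquareSize;

            // One flat quad per fogged square, floating just above the board.
            Vertices.Add(FVector(X0,              Y0,              1.0f));
            Vertices.Add(FVector(X0 + SquareSize, Y0,              1.0f));
            Vertices.Add(FVector(X0 + SquareSize, Y0 + SquareSize, 1.0f));
            Vertices.Add(FVector(X0,              Y0 + SquareSize, 1.0f));

            // Two triangles per quad (flip the winding if it faces the wrong way).
            Triangles.Add(Base);  Triangles.Add(Base + 1);  Triangles.Add(Base + 2);
            Triangles.Add(Base);  Triangles.Add(Base + 2);  Triangles.Add(Base + 3);
        }
    }

    // Normals/UVs/colors/tangents left empty: it's one opaque flat-color
    // material, and culling is pointless for 26x19 boxes.
    FogMesh->CreateMeshSection(0, Vertices, Triangles,
        TArray<FVector>(), TArray<FVector2D>(), TArray<FColor>(),
        TArray<FProcMeshTangent>(), /*bCreateCollision=*/false);
}
```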

Idea – Dynamic Texture

However, once I started thinking about optimization, I realized a dynamic texture is also a good option.  One idea was to just dynamically edit the 2048×2048 board texture.  However, this apparently does not work with mip maps as of UE4 4.12.  It’s possible that I’m doing it wrong, but when I tried editing a dynamic texture in C++ code, it crashed when the texture had mip maps, and I read comments (not official docs) saying it’s not supported.

So I decided to instead let the 2048×2048 texture keep its mip maps, and create a second texture that’s 32×32 without mip maps.  Each pixel in the 26×19 subset of that texture marks a board square as either fogged out or not.  A simple material can blend the two textures.
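Here’s a minimal sketch of that second-texture idea in UE4 C++ (the helper function names are mine, for illustration): create a 32×32 transient texture with no mip maps, then rewrite a pixel whenever a square’s fog state changes.  Calling UpdateResource() re-uploads the whole texture, which would be wasteful in general but is trivial at 32×32.

```cpp
// Minimal sketch of the 32x32 fog texture (hypothetical helper names).
// Each pixel in the 26x19 subset marks a board square as fogged or revealed;
// the material blends this with the 2048x2048 board texture.
#include "Engine/Texture2D.h"

UTexture2D* CreateFogTexture()
{
    // Transient textures have no mip maps, which avoids the crash I hit
    // when editing a texture that has them.
    UTexture2D* Tex = UTexture2D::CreateTransient(32, 32, PF_B8G8R8A8);
    Tex->SRGB = false;
    Tex->Filter = TF_Nearest;   // hard square edges, no bilinear bleeding
    Tex->UpdateResource();
    return Tex;
}

void SetFogPixel(UTexture2D* Tex, int32 X, int32 Y, bool bFogged)
{
    FTexture2DMipMap& Mip = Tex->PlatformData->Mips[0];
    FColor* Pixels = static_cast<FColor*>(Mip.BulkData.Lock(LOCK_READ_WRITE));
    Pixels[Y * 32 + X] = bFogged ? FColor::Black : FColor::White;
    Mip.BulkData.Unlock();
    Tex->UpdateResource();      // re-upload to the GPU (cheap at 32x32)
}
```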

I considered doing this with math in the material shader – converting the board static mesh’s input UV coordinates from the 2048×2048 texture’s space to the 32×32 texture’s space.  However, I later decided to instead just have two sets of texture coordinates in my board mesh – one for the 2048×2048 texture, and one for the 32×32 texture.

Creating a second UV channel in Maya

I had created my board.obj file by hand-editing the text file.  However, Wavefront OBJ does not appear to support two UV channels per vertex, so I figured out how to add a second UV channel in Maya.  To do it in Maya, I did the following.

Import my board.obj.  Open a second window with UV –> UV Editor.  Put my main editor window and UV Editor window side by side.

Set the main editor to let me select individual vertices via the main editor toolbar –> Select By Component Type –> Select parm points components.  In the UV Editor, I used the Tweak UV Tool.

I didn’t want to edit my existing UV set – rather, I wanted to create a second one.  So I did UV Editor –> Polygons –> Copy UVs to UV Set –> Copy into New UV Set.  Then UV Set –> select my second UV set to make it active.

Next I used my side-by-side window setup to manually select vertices in the main editor window, and then enter values in the UV Editor window for those vertices.  In the following screenshot, I entered values for the selected vertices.

image

When I did File –> Save Scene As –> Maya ASCII, I was able to see the UV values in my board.ma text file.  I was also able to see the values in my FBX file when I exported FBX as a text file.  Both files were more complex than my OBJ text file.

In UE4, I had to do a bit of tweaking in the board mesh settings: Import Rotation 90, 180, 180; Destination Lightmap Index = 2; Lightmap Coordinate Index = 2.  The two lightmap settings are important since my mesh has two UV channels (plus the lightmap generated by UE4).  After doing some math and then some testing to tweak the values, I came up with a UV range of (-0.009, 1.007) to (0.822, 0.342).

Creating a third UV channel in Maya for the lightmap

I ended up getting an error related to the autogenerated lightmaps (in 4.12) – “object has wrapping uvs”.  So I made a third UV channel in Maya.  For this third UV channel, I did Polygons –> Unfold, then Layout –> Prescale to Object, Spacing Presets 2048.

Back in UE4, just doing material editor –> Asset –> Reimport was not enough (even if I unchecked Generate Lightmap UVs).  So I had to re-import my FBX file from UE4 Content, and uncheck Generate Lightmap UVs.  After the import, I set Lightmap Coordinate Index = 2.

Result

Here’s the result using a test/debug 32×32 texture blended with the board:

[Screenshots: test/debug 32×32 texture blended with the board]

Here’s fog of war with a subtle gray fog:

[Screenshots: subtle gray fog]

Here’s fog of war with an extreme black fog with specular = 0:

[Screenshot: extreme black fog with specular = 0]

Here’s full black fog with specular at its default of 0.5:

[Screenshot: full black fog with default specular]

I’m leaning towards the more subtle gray fog as the default – I’m going for a good balance between keeping the look close to the physical board game and effectively communicating the information (which rooms and corridor squares have been seen).

Another balance is communicating the information effectively without making it visually loud in a way that strays (without good reason) from the goal of making it look and feel like the physical board game.  Later I may add a non-default option for pitch-black fog (and/or a sliding scale for varying degrees of fog of war darkness).

Here’s a video showing the effect:

Pathfinding with A* (A Star)

Labor Day weekend gave me an opportunity to implement pathfinding.  Here are notes on some of what I read.  BFS (breadth-first search) explores equally in all directions.  Dijkstra favors lower-cost paths.  A* is a modified Dijkstra optimized for a single destination (it uses a heuristic to search towards the destination).  A* is the standard pathfinding algorithm for games (I also used it in college game projects ~12-14 years ago), so it’s easy to find pseudocode, articles, tutorials, and videos about it.  A* can be further optimized in a number of ways – more efficient data structures, bidirectional search, and jump point search, which can optimize uniform-cost grids (like HeroQuest’s).

My HeroQuest map is an array of 26×19 square spaces, but pathfinding works on nodes in a graph.  So before I implemented pathfinding, I added C++ code to generate the graph.  Each space has a list of 0 to 4 neighbor nodes that are valid moves.  The graph is for monster AI, so a hero is a blocker.  When a space is revealed for the first time, we generate neighbors for that space.  When a hero moves to a new space, we update the neighbor lists for the src node, the dst node, the src’s neighbors, and the dst’s neighbors (a sketch of the idea follows below).  Once I got that working, I moved on to actual pathfinding.
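Here’s a plain-C++ sketch of the neighbor-generation idea (all names are hypothetical – the real version lives in my UE4 game code and consults quest data for walls, doors, and piece positions):

```cpp
// Sketch of the neighbor-list idea: each revealed space keeps up to 4
// orthogonal neighbors that are currently valid moves for monsters.
#include <vector>

constexpr int W = 26, H = 19;

struct Node
{
    int x = 0, y = 0;
    std::vector<Node*> neighbors;   // 0 to 4 entries
};

// Placeholder queries -- the game consults quest data and piece positions.
bool isRevealed(int x, int y);                      // square seen at least once?
bool isBlocked(int x, int y);                       // eg a hero stands here
bool wallBetween(int x0, int y0, int x1, int y1);   // wall or closed door

void rebuildNeighbors(Node& node, std::vector<Node>& grid /* W*H, row-major */)
{
    static const int dx[] = { 1, -1, 0, 0 };
    static const int dy[] = { 0, 0, 1, -1 };

    node.neighbors.clear();
    for (int i = 0; i < 4; ++i)
    {
        const int nx = node.x + dx[i];
        const int ny = node.y + dy[i];
        if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
        if (!isRevealed(nx, ny) || isBlocked(nx, ny)) continue;
        if (wallBetween(node.x, node.y, nx, ny)) continue;
        node.neighbors.push_back(&grid[ny * W + nx]);
    }
}

// When a hero moves, re-run rebuildNeighbors on the src node, the dst node,
// and each of their orthogonal neighbors.
```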

To keep things simple, I just implemented A*.  Two references I used were Vincent Cogne’s 2010 blog post ( http://bit.ly/2c6vTnF ) and wikipedia pseudocode ( http://bit.ly/2cimKrr ).

A* uses an open list and a closed list (the open list holds nodes we will visit later; the closed list holds nodes we’ve already visited).  When a node is visited, its neighbors are added to the open list (unless they’re already in the open or closed list).  Each node has an F, G, and H value, where F = G + H.  The way A* searches towards the goal is that on each iteration, we visit the node in the open list with the lowest F value.  G is the cost of the cheapest path we’ve found so far to the node.  When we visit a node, we update its G value and the parent node used to get that G value, unless the node already has a lower G value via a different parent (the parent node is aka the “came from” node).  H is a heuristic function that estimates the remaining cost to the goal.  For HeroQuest, I used the Manhattan distance as my heuristic.  The heuristic needs to be equal to or less than the actual cost, and since HeroQuest only allows orthogonal movement, the Manhattan distance never overestimates.
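To make that concrete, here’s a stand-alone simplified C++ version of A* on a 26×19 uniform-cost grid with the Manhattan heuristic.  It’s a sketch of the algorithm as described above, not my actual game code – in particular, the game ends the search when the monster is within attack range of the goal, which is omitted here for brevity.

```cpp
#include <array>
#include <climits>
#include <cstdlib>
#include <queue>
#include <vector>

constexpr int W = 26, H = 19;

// Manhattan distance -- admissible because only orthogonal moves are allowed.
int heuristic(int idx, int goal)
{
    return std::abs(idx % W - goal % W) + std::abs(idx / W - goal / W);
}

struct OpenEntry { int f, g, idx; };
struct OpenCmp
{
    bool operator()(const OpenEntry& a, const OpenEntry& b) const { return a.f > b.f; }
};

// Returns the path as node indices from goal back to start, or empty if no
// path exists. blocked[] covers walls and heroes.
std::vector<int> findPath(const std::array<bool, W * H>& blocked, int start, int goal)
{
    std::priority_queue<OpenEntry, std::vector<OpenEntry>, OpenCmp> open;
    std::array<int, W * H> gScore;    gScore.fill(INT_MAX);
    std::array<int, W * H> cameFrom;  cameFrom.fill(-1);
    std::array<bool, W * H> closed{};

    gScore[start] = 0;
    open.push({ heuristic(start, goal), 0, start });

    while (!open.empty())
    {
        const OpenEntry cur = open.top();
        open.pop();
        if (closed[cur.idx]) continue;     // skip stale duplicate entries
        closed[cur.idx] = true;

        if (cur.idx == goal)               // reconstruct path: goal -> start
        {
            std::vector<int> path;
            for (int n = goal; n != -1; n = cameFrom[n]) path.push_back(n);
            return path;
        }

        const int x = cur.idx % W, y = cur.idx / W;
        static const int dx[] = { 1, -1, 0, 0 };
        static const int dy[] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; ++i)
        {
            const int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            const int n = ny * W + nx;
            if (blocked[n] || closed[n]) continue;

            const int g = cur.g + 1;       // uniform cost per step
            if (g < gScore[n])             // found a cheaper path to n
            {
                gScore[n] = g;
                cameFrom[n] = cur.idx;
                open.push({ g + heuristic(n, goal), g, n });
            }
        }
    }
    return {};                             // open list exhausted: no path
}
```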

The following is a simple example that I used as a test case.  The numbers are x, y, and the order in which A* visited each node (as the current node).  Overall it was pretty straightforward.

[Screenshot: A* test case showing x, y, and visit order for each node]

In the above example, the Orc searches for the hero with the lowest Body (eg the Wizard), then runs A* towards the Wizard.  However, the Wizard is actually a blocking node, so A* never visits the Wizard’s node.  Instead, we end the search when the Orc is in range to attack the Wizard (ie the Orc is orthogonally adjacent to the Wizard without a wall blocking his attack).

Here’s a screenshot:

[Screenshot: monster pathfinding in game]

And here’s an example where no path was found:

[Screenshot: example where no path was found]

Aside – I added the “Back” button to the UI as part of an effort to make the interface work without a keyboard (eg Android tablet mode).  I also enabled finger tapping to move or attack, eg on an Android tablet.

Pathfinding is a fundamental building block for a HeroQuest Zargon AI, which will be required to enable a single-player campaign mode.

I’ll close with a video that shows some examples of monster pathfinding to reach the Wizard:

Update: here’s a photo of pathfinding on my Android (technically Kindle Fire) tablet:

[Photo: pathfinding running on the Kindle Fire tablet]