
Archive for the 'cross-platform SDL demos' Category

Fix minor camera bug (with GLM camera)

My previous post’s Release 001 had a camera bug: turning right from the initial state was broken.  You can see it by running the previous post’s WebGL (Emscripten) Release 001 demo – just press “E” to turn right a little.

The bug was caused by mixing up degrees and radians.  Basically what happened was that I used a simple camera from this tutorial ( http://bit.ly/1M3FLfL ) (by Tom Dalling) to generate my viewProj matrix.  The tutorial is dated 2013/01/21 and uses GLM ( http://glm.g-truc.net ).  But GLM 0.9.6.0, released 2014/11/30, switched GLM from degrees to radians, which broke the camera.  See the GLM release note “Transition from degrees to radians compatibility break and GLM 0.9.5.4 help” here ( http://glm.g-truc.net/0.9.5/updates.html ).
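To sketch the fix (the helper name here is mine, purely illustrative): GLM 0.9.6+ expects angles in radians, so any field-of-view value previously passed in degrees has to be converted explicitly, e.g. with glm::radians – this standalone function shows the same conversion:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the degrees/radians fix. With GLM >= 0.9.6 you
// would pass glm::radians(fovDegrees) to glm::perspective instead of the
// raw degree value; this helper performs the identical conversion.
float DegreesToRadians(float degrees) {
    return degrees * 3.14159265358979323846f / 180.0f;
}
```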

I fixed this tiny bug (and I started rewriting/refactoring the camera).  I also added keyboard controls “R” to look up and “F” to look down.  Here’s a WebGL (Emscripten) Release 002 with the fix – Release 002

Emscripten build target (JavaScript asm.js WebGL)

I’ve posted a WebGL version of my recent “Tile Map Geometry” post.  It was built using Emscripten, which is just one of the build targets for my (C++, SDL2, GLES2, CMake) project.  For editing the web code (HTML, CSS, JavaScript) not generated by Emscripten, I tried out Brackets (brackets.io).

Rather than embed the JavaScript directly in this WordPress post, I uploaded it to a separate area labeled release “001”.  Click the following screen shot – it’s a link.  For future versions I’ll do the same thing – post a release at “..\pemproj\grfxdemossdl2015\” and link to it from a WordPress post.

Controls: WASD moves, QE turns, ZX flies (up, down).

image

Tile Map Geometry

Today I implemented tile map geometry.

Basically the way it works is I have a class TileFloorData that stores a 2D array of Tiles.  For now, each struct Tile contains a height.  In the future, Tile will also have one or more texture IDs.  For rendering, TiledFloor is a renderable Entity that generates geometry based on its TileFloorData.

To start with, I just did the geometry (no textures yet).  The geometry is designed such that the TileFloorData can be drawn with a single draw call – so it’s a single list of vertex positions, a single list of vertex colors, and a single list of indices.  The geometry is also designed to work with a texture atlas.  So each tile has 5 faces (we ignore the bottom face) and 4 verts per face (so 20 verts per tile).  Vertices are not shared between faces – because we want independent texture coordinates for texture atlas lookup.  Each tile face also has 6 indices per face (so 30 indices per tile).
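The 4-verts/6-indices-per-face layout can be sketched like this (the function name and the particular triangle split are mine, not necessarily what my code uses):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: one tile face = 4 unshared verts, 6 indices (two triangles).
// The (0,1,2)(2,1,3) split is illustrative; actual winding may differ.
std::vector<uint16_t> QuadFaceIndices(uint16_t baseVert) {
    return { baseVert,
             static_cast<uint16_t>(baseVert + 1),
             static_cast<uint16_t>(baseVert + 2),
             static_cast<uint16_t>(baseVert + 2),
             static_cast<uint16_t>(baseVert + 1),
             static_cast<uint16_t>(baseVert + 3) };
}
```

With 5 faces per tile, calling this once per face (baseVert advancing by 4) yields the 20 verts / 30 indices per tile described above.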

For GLES2 core profile, glDrawElements() can only use GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT (ie GL_UNSIGNED_INT is not allowed), so that’s a max of 65536 vertices.  With 20 verts per tile, that’s a max of 3276 tiles, which is only 57×57 tiles.  If we want more tiles than that, then we can break it up into multiple draw calls (such as four draw calls for 114×114 tiles).  Or we could use glDrawArrays() in which case we’d need 30 verts per tile (instead of 20).
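The batching arithmetic above can be written down as compile-time checks (helper names are mine, not from the project):

```cpp
#include <cassert>

// With GL_UNSIGNED_SHORT indices, one glDrawElements() call can address
// at most 65536 vertices. At 20 verts per tile that's 3276 tiles per batch.
constexpr int kVertsPerTile = 20;

constexpr int MaxTilesPerBatch(int maxVerts = 65536) {
    return maxVerts / kVertsPerTile;
}

// Ceiling division: how many draw calls a given tile count needs.
constexpr int NumBatches(int numTiles) {
    return (numTiles + MaxTilesPerBatch() - 1) / MaxTilesPerBatch();
}
```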

Just for fun…  A single vertex has 3 coordinates per position and 3 values (red, green, blue) for color.  Each of these values is a GLfloat.  So a single vertex has (sizeof(GLfloat) is 4 bytes) * (3 + 3 is 6) = 24 bytes.  Each tile has 20 verts, so that’s 480 bytes per tile.  57×57 tiles * 480 bytes per tile would be 1,559,520 bytes or a little under 1.5 MB.  Using GLushort, indices are only 2 bytes each, so 2 bytes * 30 indices per tile = 60 bytes per tile.  57×57 tiles * 60 bytes per tile would be 194,940 bytes.  For both vertices + indices, 57*57*(480+60) = 1,754,460 bytes or a little over 1.67 MB.
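The same byte math, as constants mirroring the text’s assumptions (GLfloat positions/colors, GLushort indices):

```cpp
#include <cassert>

// Sizes from the text: 4-byte GLfloats, 3 position + 3 color values per
// vertex, 20 verts and 30 two-byte GLushort indices per tile.
constexpr int kBytesPerVertex     = 4 * (3 + 3);            // 24
constexpr int kVertexBytesPerTile = 20 * kBytesPerVertex;   // 480
constexpr int kIndexBytesPerTile  = 30 * 2;                 // 60
constexpr int kBytesPerTile = kVertexBytesPerTile + kIndexBytesPerTile;
```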

Here are some screenshots – 10×10 stair-step (incrementing) heights, 10×10 random heights, 50×50 random heights.  For the 50×50 example, I had to increase my zFar plane from 100 to 200.  The fourth screenshot shows that with 58×58 we lost some tiles.  This is because GLushort indices wrap around on overflow, so any index value of 65536 or greater causes us to redraw tiles we already drew (ie 65536 becomes 0, 65537 becomes 1, etc).
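The wraparound is just 16-bit unsigned overflow, easy to demonstrate:

```cpp
#include <cassert>
#include <cstdint>

// A GLushort index is 16 bits, so values are taken modulo 65536:
// 65536 becomes 0, 65537 becomes 1, and those tiles get redrawn.
uint16_t WrapIndex(int i) {
    return static_cast<uint16_t>(i);
}
```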

The final screen shot shows Z-fighting.  This happens because adjacent tiles share bottom verts and their top verts are on the same plane, so we have overlapping triangles (tile faces) on the same plane.  However, we only see this from the bottom side of the tile map.  In the final game, these won’t be visible to the player, so they will all be back face culled.  The top side doesn’t have this problem, because you only see the tallest part of tile sides (everything else gets drawn over).

 image image image image image

C++, SDL2, OpenGL ES 2 (GLES2), CMake, Git

Background, CMake

I’ve posted about two C++ OpenGL side projects in the last two years.  One uses Qt + OpenGL (Qt Creator with qmake .pro build files) for an STL model viewer on Windows, Mac OS X, and Linux.  The other uses SDL2 + OpenGL for cross-platform demos such as a bookshelf-style grid view and a shadow mapping demo (using assimp for models).  The build system for the latter project was a bit messy because I started from SDL2 sample projects for Windows (Visual Studio), Linux (make files), Android (nmake), OSX (XCode), and iOS (XCode).  This made sense at the time (my real job keeps me busy, so I wanted a faster short-term path to see my cross-platform code running).  However, I decided I wanted a cleaner long-term build solution, so I gave this SDL2 project a complete “reboot”.

I started a new (but similar) project from scratch using CMake.  This was a great way to get some experience using CMake.  At my job, we use Visual Studio for Windows and Makefiles for Linux (with GCC).  For this side project there’s a lot more native build systems, so it makes more sense to use a tool like CMake (or SCons etc).

GLES2

To make the cross-platform aspect simpler at this stage, I’m doing all the OpenGL work against OpenGL ES 2 (obviously in the long term I could have multiple rendering paths).  The current list of platforms is Windows (ANGLE), Linux, OS X, Android, iOS, and Emscripten.  I kind of wanted to use GLES3 for new features like vertex array objects and texture arrays.  However, cross-platform support is currently better for GLES2.  In particular, Emscripten’s GLES3 path requires WebGL 2, which browsers don’t yet support well.

ANGLE is also currently lacking in GLES3 support.  Wikipedia says OpenGL 4.3 provides full compatibility with OpenGL ES 3.0, but I’m not sure whether it’s 100% true.  Plus I’m eventually planning to use ANGLE for Windows Mobile 10 support (or Universal Windows Platform) too.

OSX is lacking in terms of GLES2 support.  GL_ARB_ES2_compatibility does not provide full compatibility with GLES2.  I found that even simple GLES2 code requires changes to work on OSX – for example, a VAO (vertex array object) must be bound for OSX, while GLES2 core does not support VAOs.  GL_ARB_ES2_compatibility Overview states “will ease the process of porting applications from OpenGL ES 2.0 to OpenGL”, so it’s not full ES2 support.

So for OSX I will need some conditionals (or an alternate rendering path).  Or I could wait for ANGLE to add OSX support.  Or I could try MetalGL.  With the release of OSX 10.11 (just a few days ago), OSX now supports Metal, and there is a project, MetalGL, that maps GLES2 to Metal.  The day OSX 10.11 came out (9/30), I was able to run MetalGL’s demo (DrawLoadDemo) on OSX.  I was able to connect my own project to it (using CMake), but there is some compatibility issue with SDL2.  I’m not yet sure how easy it will be to get MetalGL working with SDL2 on OSX.  TBD on this.

Qt Creator

I’m using Qt Creator as my primary editor and IDE (when possible).  Qt Creator has support for CMake.

Git

I’m using Git (instead of Subversion).  For basic use, the main difference is that Git is not designed around a centralized server, so after a git commit I also do a git push to send my local commit to the central online server.  Another difference I noticed: checking the log with TortoiseSVN (svn log) takes a long time because it has to query the central server (unless you use something to cache it), whereas git log is local.

Modern C++

An interesting aside is that Linus Torvalds (who initially designed and developed Git) likes C better than C++.  I actually have some sympathy for that view – C is simpler than C++ (so it’s less likely to result in pointlessly complex code) and it’s lower level (so it’s easier to know what the compiler will do).  Plus my job (GPU modeling) relates to systems software, and systems software programmers (eg computer engineers) tend to like lower-layer code (ie closer to the hardware).

For my job we haven’t upgraded our project yet from Visual Studio 2010.

For this side project, I’m making a conscious effort to use more modern C++ features (eg C++11) when useful.

Screen Shots

I’ll figure out later how best to post the Emscripten version (it uses JavaScript and WebGL).  For now I’ll just share a screenshot.  I’m also sharing a photo of the same demo running on a Kindle Fire HD 6.  This simple demo also runs on Linux, Windows, iOS, and OSX.  A lot of the changes (ie the rewrite) I describe here relate to the cross-platform support (eg CMake usage), so I’m also including a screenshot of CMake in Qt Creator and a screenshot of SourceTree for Git.

image image image  image

Next Steps

I had a great time and learned a lot building this cross-platform infrastructure from the ground up.  I’m tempted to just focus more on that aspect.  I could make it an open source project, focus on making it more robust and better designed as a generic engine base.  Make it simpler and more streamlined for others to use for their projects.  Take feedback and contributions.  Add features and general-use functionality that isn’t specific to a particular application.  Integrate additional libraries (make some parts optional).  It might become more like a game engine or similar to Marmalade ( www.madewithmarmalade.com ).

Another path I’d love to pursue is writing additional graphics demos (eg beyond the basic shadow map demo seen in an earlier post).

However, the path I’ve decided to focus on next is to make a playable game.  I’ve been playing “Clash of Clans”, so I’ve decided to write a cross-platform tower defense game similar to CoC.  Here’s a random screen shot of CoC to give an idea – it’s an isometric tile-based tower defense strategy game.  I’m making it 3D so the rendering may look more like “Galaxy Control”.

image image

To keep things simple, I’m planning to focus on the rendering, basic UI, and a basic single-player demo.  This means things like art, sound, networking, content, and game design details are lower priority (at least initially).  I suspect that if my goal were to release a commercial game as a solo developer, then using a pre-existing game engine (like Unity, or at least Marmalade) would make more sense, and going all crazy with the ground-up cross-platform support wouldn’t be necessary.  But I also have a busy real job (GPU model development) and this project is just for fun and learning…  So we’ll see how far I get :-).  However far that is, I expect at least to gain some more great experience and learning 🙂

Adding some 3D models with Open Asset Import Library (assimp)

The build setup for assimp turned out to be a little more effort than I’d expected.  My goal was cross-platform support, most likely by building assimp from source, but I was hoping to get a proof of concept working first using a Windows binary release.  Unfortunately, assimp-3.1.1 (released 2014-06-14) was not as easy to set up as assimp-3.0 (released 2012-07-10), so I ended up using assimp-3.0 for a first pass.  The assimp-3.1.1 Windows release binaries only worked with older Visual Studio versions (not 2012), the assimp-3.1.1 source release did not include a vcxproj, and running cmake gave me an error.  assimp-3.0 came with a vcxproj file, so I started from that.

I imported this vcxproj into my solution.  To build assimp as a static library, I also needed to download boost.  Using the static library caused link errors related to the C++ standard library, see ( http://assimp.sourceforge.net/lib_html/install.html ).  So I built assimp as a dll (instead of a lib).  (“There’s no STL used in the binary DLL/SO interface, so it doesn’t care whether your application uses the same STL settings or not”).  I also added ASSIMP_BUILD_DLL_EXPORT under assimp -> Properties -> Configuration -> C/C++ -> Preprocessor.  This enables the __declspec(dllexport)’s needed when building the dll.  Without ASSIMP_BUILD_DLL_EXPORT, I was able to build assimp.dll but there was no corresponding assimp.lib (which is needed to implicitly link, aka statically load, assimp.dll).

image image image

Once I got the build working, the real fun began.  I skimmed a few assimp tutorials and some of the official assimp documentation, but I found this one ( http://nickthecoder.wordpress.com/2013/01/20/mesh-loading-with-assimp/ ) to be a great starting point.  I used that as a guide along with my existing shadow map objects (class Entity extends HelloWorldBox).  I started with a Cube.obj model file for debugging.  I started by printing (SDL_Log()) the values that I read (using assimp) from Cube.obj file.  From there, it was easy to use those values (position, normal) in the same manner.  Once I got Cube.obj working, I verified an obj model with more verts would work (utah teapot) (wt_teapot.obj).  Then I created a scene with a few different imported models.  The model files I used had normals, but my current shadow mapping example has shadows without lighting (eg no Phong Illumination), so I used the normals as colors.

image image image image image

In modern graphics APIs and GPUs, each vertex has a position and 0 to N other parameters, and is drawn using shaders, textures, primitive assembly, and other graphics modes.  So a 3D model file format at least needs some way to specify vertex data.  Additional data a model may need includes textures, mesh hierarchies, animations, and deformations.

One simple plain text file format is Wavefront OBJ (.obj), see ( https://en.wikipedia.org/wiki/Wavefront_.obj_file ).  OBJ specifies vertex positions, specific types of attributes (texture UVs, normals), and primitives (eg faces).  OBJ only specifies geometry, but separate files can cover other things like textures and materials (Material Template Library).  More details on OBJ here ( http://www.martinreddy.net/gfx/3d/OBJ.spec ).
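A minimal sketch of reading OBJ vertex positions (just the “v x y z” lines; a real loader like assimp handles normals, texcoords, faces, materials, and much more):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Toy OBJ reader: collects "v" position lines, ignores everything else
// (normals "vn", texcoords "vt", faces "f", comments "#", ...).
std::vector<Vec3> ParseObjPositions(const std::string& text) {
    std::vector<Vec3> positions;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v") {
            Vec3 v{};
            ls >> v.x >> v.y >> v.z;
            positions.push_back(v);
        }
    }
    return positions;
}
```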

For some applications (certain games), it might make more sense to design a custom file format (or choose one that matches your needs), then write an export plugin (such as for Blender), and an importer (for your application).  OBJ might be a good choice for an educational project, since it’s standard and simple and plain text.  But for a commercial project, you probably want to use a more optimized format.  For some applications (such as tools for 3d modeling), there is a genuine need to load lots of different 3D model file formats – which is why assimp supports a list ( http://assimp.sourceforge.net/main_features_formats.html ).

Shadow Mapping

As described in this blog category’s previous posts (cross-platform SDL demos), I did the initial development on this project in Nov/Dec 2013.  After the holiday break, I was back to being busy with my day job and doing other learning on the side (such as more general computer science algorithms – trees and tries, for example).  Then this weekend, I resurrected my “cross-platform SDL demos” project and implemented shadow mapping.  (Aside – I wrote this on my laptop because in the background I was replacing my desktop computer’s motherboard and reinstalling the OS etc.)

In other blog posts (CG intro Harvard), I’ve mentioned the 2008 Harvard extension school introduction to computer graphics video lectures ( https://itunes.apple.com/us/itunes-u/csci-e-234-introduction-to/id429428034?mt=10 ) ( http://dev.miroguide.com/items/3985688 ).  Lecture 7 gives a great overview of shadow mapping.

The screenshot from 55m 36s shows the idea of doing two draw calls – one from the perspective of the light (drawing to the shadow map), and one from the perspective of the camera (drawing to the screen).  The screenshot from 54m shows that an object’s position (in the vertex or pixel shader) has the same object space and world space regardless of whether it is then transformed into light space or eye space.  So if you want to know whether a particular pixel (fragment) in camera space is in shadow, you check the shadow map using that pixel’s position in light space.

vlcsnap-2014-08-24-16h04m12s215 vlcsnap-2014-08-24-16h02m49s33

Another of my favorite learning resources, Essential Mathematics for Games & Interactive Applications – A Programmer’s Guide (by James M. Van Verth, Lars M. Bishop), doesn’t go into shadow mapping. Section 8.12 (second edition page 367) (in the Lighting chapter) has a one page overview that explains how shadows are different than lighting. The book explains – “Shadowing is generally a multipass technique over the entire scene”.  Also – “Since the real core of shadowing methods lie in the structure of the two-pass algorithms rather than in the mathematics of lighting, the details of this shadowing algorithms are beyond the scope of this book”.

I mention this because one interesting thing about my shadow mapping implementation is that I do not implement lighting (ie no Phong illumination, no bump mapping).  Shadow mapping does an extra off-screen draw call (from the perspective of the light) to create the shadow map. Then each of your scene’s on-screen draw calls (from the perspective of the camera) need to check the shadow map in their shader code, to determine whether a given pixel fragment is in shadow.  It’s optional whether you also do other things in your shader (eg texturing, Phong lighting, bump mapping).
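The per-fragment check can be sketched on the CPU (the bias constant and function name are illustrative; the fragment shader does the same compare against the sampled shadow-map texel):

```cpp
#include <cassert>

// A fragment is in shadow when its depth in light space is farther away
// (beyond a small bias, to avoid acne) than the depth the light "saw"
// at the corresponding shadow-map texel.
bool InShadow(float shadowMapDepth, float fragmentDepthInLightSpace,
              float bias = 0.005f) {
    return fragmentDepthInLightSpace - bias > shadowMapDepth;
}
```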

To implement shadow mapping in my SDL project, I looked at this tutorial ( http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-16-shadow-mapping/ ).  One quirk I noticed with the sample code is that its depthModelMatrix and ModelMatrix both happen to be the identity matrix, so it is okay to compute (depthMVP = depthProjectionMatrix * depthViewMatrix * depthModelMatrix), then (depthBiasMVP = biasMatrix * depthMVP).  For my project, however, I use each object’s own model matrix in the second draw call.

Shadow mapping screen shots with light rays perpendicular to the boxes.  Also shows rendering the shadow map (depth values drawn from the light’s perspective) to the screen:

image image image image

When I moved the light to a different angle, I got horrible shadow acne:

image image image

One way to avoid the shadow acne is to use a depth bias (polygon offset) when rendering the shadow map.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(2.0f, 1.0f);  // (slope-scaled factor, constant units)

image image

There’s a good article here ( http://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx ) to give more ideas on how to improve a shadow map implementation to reduce artifacts related to precision, aliasing, shadow acne, shimmering edges, peter panning.

Simple texture example

I loaded three simple bmp’s and blended them with a GLSL fragment shader:
out_Color = texture(sam0, ex_TexCoord)/3.0 + texture(sam1, ex_TexCoord)/3.0 + texture(sam2, ex_TexCoord)/3.0;

This was just to make sure I had a cross-platform texture example for SDL2 with OpenGL 4 (OpenGL ES 2 or 3) on (Windows, Linux, OS X, iOS, Android).  I loaded pixels from a bmp file with this simple code ( http://stackoverflow.com/questions/12518111/how-to-load-a-bmp-on-glut-to-use-it-as-a-texture ).  SDL2 doesn’t include any image loading functionality; its FAQ ( http://wiki.libsdl.org/FAQDevelopment ) references SDL_image.  The uncompressed source code for SDL2_image-2.0.0 is 27.7 MB (31.3 MB on disk).  I may later integrate SDL2_image (or some other image loading library), but for now I am just using bmp’s.  Another option would be the KTX file format (an official Khronos OpenGL & OpenGL ES texture file format) or DDS; this link ( http://stackoverflow.com/questions/18058669/ktx-vs-dds-images-in-opengl ) says the NVIDIA OpenGL driver supports GPU acceleration for compressed textures with DDS but not KTX (as of Aug 14 2013).  Or I could even write code that converts from BMP (etc) to a custom format, and code that loads textures from that custom format.

image image

Two other relevant texture topics are the “texture atlas” and the “array texture”.  They are relevant to the thumbnail viewer example, because a folder could contain an arbitrary number of image files (png, bmp, etc).

My AMD Radeon HD 6520G (HP laptop) reports a GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS value of 32 (checked on Windows 7).  According to gfxbench.com, the Asus Nexus 7 reports 16.  A naïve approach of one texture unit per thumbnail would limit you to GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS thumbnails loaded at a time.  To render more thumbnails than that, you would need state changes (rebinding textures between draw calls).
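The cost of the naive one-unit-per-thumbnail approach is easy to quantify (the helper name is mine):

```cpp
#include <cassert>

// With one texture unit per thumbnail, rendering more thumbnails than
// GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS forces texture rebinds between
// batches. Ceiling division gives the batch (state-change) count.
int NumDrawBatches(int numThumbnails, int maxTextureUnits) {
    return (numThumbnails + maxTextureUnits - 1) / maxTextureUnits;
}
```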

A saner approach would be to fit all the thumbnails on one texture with a texture atlas, or to use hardware support for array textures.  My AMD Radeon HD 6520G (HP laptop) reports 8192 for GL_MAX_ARRAY_TEXTURE_LAYERS.  According to ( http://www.opengl.org/wiki/Array_Texture ), array textures (EXT_texture_array) have been core since OpenGL 3.0.  OpenGL ES 3.0 lists array textures as a new feature.  Unfortunately, OpenGL ES 2.0 does not appear to support array textures, except as an extension – see ( http://stackoverflow.com/questions/16147700/opengl-es-using-tegra-specific-extensions-gl-ext-texture-array ).

Linux with modern OpenGL

I didn’t have a great experience getting modern OpenGL (3, 4) support in virtual machines such as VMware Player with a Linux guest, and I didn’t want the hassle of dual-booting.  So I built a (good but not super high end) desktop computer from parts, with a recent (but not high end) PCIe GPU.  I installed Ubuntu 12.04 LTS, installed updates (including the “Additional Drivers” AMD proprietary driver), and got my cross-platform C++ OpenGL SDL2 program to compile.  (Aside – NoMachine on my LAN works great for remote desktop from Windows to Linux with modern OpenGL.)  To compile my source code, I needed a few small edits to the build file, and to fix some linker errors I had to install some libraries with “sudo apt-get install”.

Screenshot from 2013-12-21 01_53_38

So my cross-platform C++ OpenGL SDL2 codebase, upgraded to modern OpenGL style code (OpenGL 3 & 4, OpenGL ES 2 & 3), now builds & runs on five platforms (Windows, OS X, Linux eg Ubuntu, Android, iOS).  Ideas for what I might do next…  Further refactor / clean up / streamline the code & build process.  Some less trivial but still simple graphics demos (eg lighting, textures, bump mapping, shadow mapping).  An OpenGL-based GUI (eg expand the thumbnail viewer).  Another platform (the most obvious choice is Windows Phone, which would probably mean adding a DirectX code path for Windows & Windows Phone).

Grid view prototype (such as for thumbnails)

Here’s an Android (DroidAtScreen) screen shot and a Windows screen shot of a quick grid view prototype (such as for a thumbnail viewer).  Cross-platform OpenGL C++ SDL2.  It uses an orthographic projection (not perspective).  The camera in my previous post did not have an ortho mode, so I added one (this was easy because although the camera did not have ortho, GLM does).
image image

Here’s what the same program looks like with a perspective projection:
image image image image

Ideas.  A texture on each thumbnail.  Resize thumbnails with pinch-to-zoom.  Logic to organize the boxes based on the thumbnail size and the near plane dimensions…  Keep thumbnails in view horizontally, scroll down to see vertically off-screen thumbnails.  Fill screen top-left to bottom-right.  Horizontal spacing between thumbnails could either be constant, or a range (used to fill the row).

Edit: SDL2 made it super-easy to add a prototype pinch-zoom (tested on Android).  For this example, “zoom” just means to walk forward (or backward):
case SDL_MULTIGESTURE:
{
    const float fZoomSpeed = 10.0f;
    g_Camera.offsetPosition(fZoomSpeed * event.mgesture.dDist * g_Camera.forward());
    break;
}
image image

Edit: I added resizing to allow any number of boxes and different widths.  This could be used with vertical-only scrolling.
image image image

Edit: I added the ability (quick prototype) to dynamically resize the grid based on a change in window width – here it is with orthographic projection.  For ortho, I use:
xMin = -m_state->window_w / m_fScaleOrtho + m_HalfModelW + xGap;
xMax = m_state->window_w / m_fScaleOrtho - m_HalfModelW - xGap;
image image image image image image image

Camera integration

Summary

Basically what I did was just to integrate the camera from this tutorial/project (and GLM) into my cross-platform SDL2 OpenGL project.  This is also a good tutorial series to review basics (concepts, code) using modern OpenGL 3 & 4 style:

http://tomdalling.com/blog/modern-opengl/04-cameras-vectors-and-input/

http://glm.g-truc.net

Screen shots on Windows 7 show…  Window resize (notice the cubes are no longer stretched / distorted).  Keyboard controls to walk around (WASD, QE turn left/right):
image image image image

Here are screen shots on Android (Nexus 7, DroidAtScreen) in portrait and landscape modes:

image image

For Android screen rotation to work properly, I added this to AndroidManifest.xml –> <manifest> –> <application> –> <activity>:
android:configChanges="keyboard|keyboardHidden|orientation|screenSize"

The WASD camera controls are time independent because I use Simulate(Uint32 dt) as described in an earlier post:
const Uint8 *state = SDL_GetKeyboardState(NULL);
const float moveSpeed = 0.01f;
if (state[SDL_SCANCODE_W]) {
    g_Camera.offsetPosition(dt * moveSpeed * g_Camera.forward());
}

Once I got my code to work on Windows and Android, it also worked on OS X and iOS:
image image image

I haven’t tried this on Linux yet because I’m planning to build a separate desktop computer to build & test for Linux support (my experience with Ubuntu Linux on VMWare Player was that OpenGL 2 support worked but not OpenGL 4).
