UDK and graphics engines for indie game development
What is an indie game?
The path to follow when creating a videogame doesn't only require powerful graphics; in fact, the main thing needed in the creation of a videogame is probably one of its essentials: the capacity to bring joy and to entertain. Basically, an indie videogame is a product made by a small group of people with a micro budget. A huge part of indie videogames are made as a hobby, not as a business with profit as the objective. One of the main characteristics of indie videogames is that they're strongly creative. This can be explained by the total freedom during the process: developers have no creative limitations. They bring the audience something special and different, something that will never be produced by an industry that only follows market trends. Indie videogames are products that look for innovation, convey a message and, in most cases, build an interactive story between the user and the application. Playing an indie game is also described as a visual experience rather than just playing a videogame, or as a return to the origins of games. Indie videogames are difficult to distribute because of their low budgets, and this directly affects their publicity campaigns. Some of the most famous and successful indie games are Minecraft, Amnesia, Super Meat Boy, Braid, Trine and Limbo. Three game engines control most of the indie videogame market: Unity, UDK and CryEngine. All three have very up-to-date technical features and are in constant development.
What is a game engine?
A game engine is basically a system of programs (normally large amounts of code, C++ accompanied by purpose-built tools) that drives everything inside the game: what you see on the screen, what you hear, and even the actions that can take place in the game's virtual world. Modern engines are distributed with specific editors for materials, vehicles, rivers and paths, terrain, vegetation and atmospheric effects. Some of them even have their own scripting language. A game engine can create worlds from a set of rules: physical laws, sounds, animations, effects and everything necessary to build a virtual world.
Creating a game engine, with its graphics and physics engines, is a difficult and expensive process. Once the engine exists, it allows you to create games faster, without starting from zero. Many videogame engines have a single, exclusive task, probably the most spectacular one: real-time rendering and animation. These are the graphics engines, or so-called 3D engines, whose main task is moving what you see on the screen while playing. The first videogames were programmed from zero because of their lack of complexity; graphics engines were later driven forward by first-person shooters. There are two ways to approach the graphics engine at the development stage of a videogame: 1) Creating a graphics engine from zero, which is usually the choice of large development teams with very ambitious objectives. Generally, though, studios can't face the amount of time and work that building a graphics engine entails. 2) Using a graphics engine that already exists. The market is full of graphics engines that can be used for free or under paid licenses. Using an existing tool forces the creative team to learn its way of working, but in return the team gets a tested, technologically current product that is updated periodically and can still be modified. Obviously, the second option is the most common, and that's probably why lots of games look alike or share a similar appearance: the engine handles lighting and post-production effects the same way for everyone. As stated before, the game engines most used by indie developers are Unity, UDK and CryEngine.
Unity
Unity is one of the most popular engines in the industry because of its simplicity. It's owned by Unity Technologies, who recently released the fourth version. One strong point of this engine is how easily it adapts to any kind of platform. Unity Technologies was founded in 2004 by David Helgason. The following year they released the first version of Unity at the Apple Worldwide Developers Conference (WWDC), but it wasn't until 2009 that they released a free version of their product. Unity currently has two versions of its engine: the free version and the Pro version, which has an initial cost of $1,500. The free version offers a large part of the characteristics of the whole engine, but the most advanced ones are missing and can only be found in the Pro version. Also, to be able to publish on systems like Android, iOS or Flash, you need to acquire extra modules for the Pro version. Another advantage of the Pro version is that there's only a single payment: no recurring license fees and no royalties on commercialization revenue. Unity's graphics engine uses Direct3D (Windows), OpenGL (Mac, Linux), OpenGL ES (Android, iOS) and proprietary APIs (Wii). It offers support for bump mapping, reflection mapping, parallax mapping, screen space ambient occlusion (SSAO), dynamic shadows using shadow maps, render-to-texture and full-screen post-processing effects.
CryEngine
CryEngine is a graphics engine developed in 2006 by Crytek, creators of the videogame Crysis. It is currently on its third version and is the most powerful of the three engines discussed in this dissertation. Like Unity, it offers different types of licenses depending on the magnitude of the project. The use of the engine and its tools is totally free for independent developers; the only payment due is 20% of the revenue our product generates once it has been released. For those who want access to the source code, there is also the possibility of licensing the full engine.
Unreal Engine
Unreal Engine is one of the engines most used in AAA projects. It has been developed by Epic Games, the creators behind Gears of War and Unreal. Two kinds of license exist for using the Unreal engine. The first one is UDK, the Unreal Development Kit, which allows developing full projects using all the characteristics of the third version of the engine, although without modifying its source code; it is free for non-commercial projects. For commercial projects we would have to buy a $99 license, without paying anything more unless the revenue of our product passes the $50,000 threshold. If it does, we have to pay 25% of the revenue to Epic Games. The second option Epic Games provides is a full license of the third version of the engine, with access to the source code and the option to change it to fit our needs.
UDK
History
In the late nineties, Epic Games created the king of the graphics engines: the Unreal Engine. This engine was designed to create the Unreal and Unreal Tournament videogames. Its main characteristic is the one that made it famous: the possibility of creating mods. The second version of the Unreal Engine was released in 2002. Most of the source code was rewritten, and the Karma physics engine was included, allowing rigid-body collisions. The engine was improved and updated to version 2.5 before the release of Unreal Tournament 2004; this update added vehicle physics, a new particle editor and 64-bit support. In 2006, Unreal Engine 3 was released. As previously said, it is one of the most advanced graphics engines of the current console generation. UE3 was designed to create videogames for PC, Xbox 360, PS3 and also Wii U. Unreal Engine 3 offers a 64-bit HDR rendering pipeline. Gamma correction provides impeccable color precision, and the engine supports a wide range of post-processing effects such as motion blur, depth of field, bloom, ambient occlusion and materials defined by the artist. UE3 is also compatible with current per-pixel lighting and rendering techniques, including normal mapping, lighting with Phong parameters, anisotropic effects, displacement maps, light attenuation functions, precomputed shadow masks and directional light maps. The engine also provides volumetric ambient effects which integrate perfectly into any kind of environment. Unreal Engine 3 has been licensed by numerous developers to use its technology in their own projects. In 2009, the Unreal Development Kit (UDK) was released so that independent developers could create games based on UE3. This development kit contains a series of tools covering design, modeling, animation, programming and many other aspects of videogame creation. The use of UDK for non-commercial goals is free. If you use UDK for a commercial goal or for something not specifically authorized in the end user license agreement (EULA), you'll have to buy a $99 license, and after passing $50,000 in revenue you'll have to pay 25% of the product's income as royalties.
Program
Navigation (menus)
UDK:
Left Mouse Button (LMB): pan; right/left/forward/back movements
Right Mouse Button (RMB): rotate; look around
LMB + RMB: up/down
WASD Navigation: Click and hold the Right Mouse Button. While holding RMB, use the W, A, S and D keyboard keys to move around as you would when playing a First Person Shooter game. WASD movement is great if you are familiar with Hammer Source mapping.
Maya users: hold down the U key
U + LMB: rotate; look around
U + RMB: forward/back movements
U + MMB: right/left/up/down movements
The Editor
At first an empty scene is displayed, which the user can fill with the assets included in the installation or with their own. Holding the right mouse button and using the WASD keys, we can move through this world. Alternatively, we can drag the mouse to go forward or back, or click both buttons and drag to pan our view. Objects are selected with one click, and with the space bar we can cycle the transformation tool between movement, rotation and scale. If any of these changes is done with the Alt key pressed, a copy of the object is created at the start of the movement. The green Play button must be pressed to start playing the default game in a window; we press Esc to exit. UDK manages all game assets with an internal database. Assets such as Static Meshes or Skeletal Meshes are imported and then saved in packages, together with their creation data. These packages must be saved before exiting.
After opening the Content Browser, we search for Static Meshes in the package list on the left. Once we find one, we drag it from the library into our world, where it will stay as a Static Mesh. Each time we apply a change to that asset, the lighting has to be recalculated. This is done with the Build All button located in the toolbar, but it's recommended to use it only once all our modifications are done.
Scale and Coordinates in UDK
When working with 3D programs it's important to match the scale of our models; otherwise, unifying a level with characters and props of varied sizes is imprecise and difficult to work with. The ActorX plugin transforms Maya units into Unreal units in the editor; that is to say, if in Maya we work in meters, one unit in the grid represents a meter. In the Unreal editor, a unit is a little bigger than a centimeter, but it depends on the game. Epic's website documentation recommends modeling characters for Unreal Tournament at less than a hundred units tall; for Gears of War, a character should measure around 180 units. These standards depend on the modus operandi of each game, since custom models must fit alongside the models already in the game. For example, if we define that a unit in Maya and in the editor represents one centimeter, and our character design should measure 1.40 m, our model in Maya will be 140 units tall, and that scale will be maintained when rendering. When working with our own models there are no problems, but if we mixed in a model included in Unreal Tournament, our custom models would look gigantic. On the other hand, if we plan to work using the grid we should use a scale based on powers of 2. So, e.g., 256x256 wall segments will be created, ensuring that they line up in the editor and are easier to use thanks to the grid. The pivot of a model in the Unreal Editor corresponds to the origin point (0, 0, 0) of the 3D application. It's recommended to have the snapping option activated in the Unreal Editor.
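To make the arithmetic above concrete, here is a minimal Python sketch, assuming the convention from the example (1 Maya/editor unit = 1 centimeter); the ratio is a per-project decision, not a fixed Unreal rule:

CM_PER_UNIT = 1.0  # assumed convention taken from the example above

def height_in_units(height_m):
    # Real-world height in meters -> model height in grid units.
    return height_m * 100.0 / CM_PER_UNIT

print(height_in_units(1.40))       # 140.0 units, as in the example
# Powers-of-2 segments snap cleanly to the grid:
print(256 * CM_PER_UNIT / 100.0)   # a 256x256 wall segment spans 2.56 m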
Coordinate System
Maya's coordinate system is Y-up, which means the Y axis is the one that points upwards. Remember that in three dimensions there are three axes, X, Y and Z, each pointing in one of those three directions: X defines the width, Y the height and Z the depth. Some programs, like 3ds Max and the Unreal Editor, work with a Z-up coordinate system, where the Z and Y axes are swapped. For static models there's no need for any conversion, but when importing a skeletal model (Skeletal Mesh) the AssumeMayaCoordinates option must be checked. It will make our character appear upright instead of lying on the floor. If this isn't done while importing, we can modify it in the AnimSetViewer of the UnrealEd.
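As an illustration of the axis swap described above, here is a hedged sketch; real importers like ActorX also handle handedness and rotation, so this only shows the up-axis exchange:

def y_up_to_z_up(x, y, z):
    # Maya point (x=width, y=height, z=depth) -> Z-up point:
    # the height moves to the Z slot and the depth to the Y slot.
    return (x, z, y)

# A point 140 units above the origin in Maya...
print(y_up_to_z_up(0.0, 140.0, 0.0))  # (0.0, 0.0, 140.0): 'up' is now Z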
Textures
Texture detail vs. geometry
As said before, the number of polygons in a model for videogames is limited. This means that many details of an object won't be represented as geometry. That is exactly where textures become even more important, because they help simulate detail where there is none.

Baked lighting
Lighting is a complex system in real life. Up to now, lighting in game engines has been simulated in a simplified form. There are lighting characteristics that can't yet be simulated efficiently in a real-time engine, but they can be faked. In videogame texturing, a shading layer is normally added to the color texture to add depth to the basic illumination that engines usually provide. Phenomena such as Ambient Occlusion or Subsurface Scattering are commonly baked into the texture.
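As a small sketch of that baking idea, the following Python snippet multiplies an ambient occlusion layer over a diffuse map, the same Multiply blend a texture artist would use in Photoshop; the file names are placeholders:

import numpy as np
from PIL import Image

diffuse = np.asarray(Image.open('wall_diffuse.png').convert('RGB'), dtype=float) / 255.0
ao = np.asarray(Image.open('wall_ao.png').convert('L'), dtype=float) / 255.0

# Multiply blend: crevices in the AO layer permanently darken the color map.
baked = diffuse * ao[..., None]
Image.fromarray((baked * 255).astype(np.uint8)).save('wall_diffuse_baked.png')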
Creation of materials in UDK
A texture is only one part of what gives a model its material appearance. The other part is the material, or shader: a series of instructions that defines how the video card will represent the surface of a model. UDK allows creating materials intuitively through a network of connected nodes.
Creating Materials
Right click on any empty space of the Content Browser and select Create New Material. After that, we choose the name of the package where it's going to be saved and a name for our new material, and then we press OK. A new Unreal Material Editor window will appear: on its left we'll find a preview of our material applied to a primitive, on the right side there is the working space for the node network, and below is the properties panel. The working space has a frame with lots of inputs. By dragging the mouse we can move around that space, and with the mouse wheel we can zoom in and out. The greyed-out inputs are the ones that are not compatible with the current Blend Mode of the material. In the Unreal material system, the nodes are conceived as mathematical operations, so they can be combined with others to create visual changes. Now, right click on the empty space and select Constants -> New Constant. A constant is a number that won't change. A black node will be created with a 0 above it, next to a little red frame. In the properties panel below there is a line simply called R where a number can be introduced. First of all, the Constant output is connected to the Diffuse input (if the node is really far away, Ctrl + click lets us drag it there). There isn't any visual change in the preview, because a constant node with value 0 (which equals black) is being connected to the Diffuse. We select the node and, in its properties, change the 0 to a 1. Now the preview shows us a white primitive, because 1 equals white and 0.5 grey. We can assign any value between 0 and 1 to change our primitive's color. What happens if we introduce a higher number? If we write 10, this number is still valid; what we'd see in the preview is that our object starts to shine. This is because Unreal works with HDR values; that is to say, values can go beyond the normal range a display unit can render, which is from 0 to 1, and still carry real data. The way to show that an object's color is more intense than absolute white is through bloom. Everything mentioned above is fine for grayscale, but how do we add color? After deleting our node, we create a Constant3Vector. A black node with the numbers 0, 0, 0 above it will be generated. This is a vector, which essentially is a combination of three numbers. These three numbers represent three different channels in our material editor context: in the properties below, we can see that the three numbers have the letters RGB assigned. When connecting it to the Diffuse, we assign a color, e.g. 1, 1, 0 (yellow).
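A tiny sketch of the HDR point made above: the display clamps to the 0-1 range, but the material keeps the real value, which the bloom pass can then detect:

def display(value):
    # What the monitor can actually show: values clamped to [0, 1].
    return min(max(value, 0.0), 1.0)

constant = 10.0
print(display(constant))  # 1.0: on screen it is just white...
print(constant > 1.0)     # True: ...but the engine still knows it is
                          # brighter than white, and renders bloom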
One of the advantages of having numeric nodes is that we can apply mathematical functions to them. A new Constant is created, this time with the value 0.5. Now the connection between the Constant3Vector and the Diffuse is cut, and a new node is created (Math -> New Add). This node has two inputs. The Constant3Vector and the Constant are connected to the inputs, and the output is connected to the Diffuse. If we add 0.5 to the value 1, 1, 0, the result is 1.5, 1.5, 0.5, which visually translates into a more intense yellow. Let's add some textures. Right click on the Content Browser and then Import. We choose a texture we like and import it with its default settings. Now the texture is selected again in the Content Browser and we come back to our Material Editor, where a new node is created (Texture -> New Texture Sample). If we had our texture selected in the Content Browser, it'd appear automatically applied; if not, our node will remain empty, but we can assign the texture by selecting it and clicking the green arrow in the node's properties. The texture node is connected directly to the Diffuse so we can see what we're doing. If we want a material with a simple texture, this is everything needed. A great thing about the Material Editor is that it allows operations between nodes: for example, connecting a Math -> Multiply node to a texture node and a Constant3Vector of any color. As we can see, there's an extensive amount of options and operations.
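The node names above are UDK's, but the math they perform is plain per-channel arithmetic, as this hedged sketch shows:

def add_node(color, scalar):
    # Add node: a scalar Constant added to each RGB channel.
    return tuple(c + scalar for c in color)

def multiply_node(color, tint):
    # Multiply node: per-channel product, e.g. tinting a texture sample.
    return tuple(c * t for c, t in zip(color, tint))

yellow = (1.0, 1.0, 0.0)                       # the Constant3Vector above
print(add_node(yellow, 0.5))                   # (1.5, 1.5, 0.5): brighter, HDR
print(multiply_node((0.8, 0.8, 0.8), yellow))  # grey sample tinted yellow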
Modular Modeling
Why Modular Modeling?
There are many reasons why it is useful to reuse models, some of them very obvious. When facing a development, it's far more useful to have a library of reusable assets than to model unique objects. If we can create a beam and reuse it for the rest of the beams of the same building, we won't model each new beam separately. Another reason is performance and space saving. More models means more information stored in memory. Despite the high technology of today's consoles, they're very limited in memory terms, so any saving counts. A repeated model weighs less in memory than a unique version of every object.

Instances
A model repeated several times weighs less because it's only loaded into memory once. Videogame engines also have optimizations that save video card resources: an instanced model can be repeated thousands of times with little impact on memory. UDK is designed with this in mind, encouraging the use of repeated assets.
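A minimal sketch of why instancing is cheap, with illustrative class names (this is not an actual UDK API): the heavy vertex data exists once, and each instance is only a reference plus a transform:

class Mesh:
    def __init__(self, vertices):
        self.vertices = vertices  # heavy data: stored once

class Instance:
    def __init__(self, mesh, position):
        self.mesh = mesh          # cheap: a reference, not a copy
        self.position = position

beam = Mesh([(float(i), 0.0, 0.0) for i in range(10000)])
beams = [Instance(beam, (x * 256.0, 0.0, 0.0)) for x in range(1000)]

# 1000 beams placed in the level, but the vertex data exists exactly once:
print(all(b.mesh is beam for b in beams))  # True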
Modularity
A modular model has to be designed specifically for this use. In many games that use UDK, such as Hawken, the buildings' façades and the robots' pieces are divided into repeatable parts. Pipes, doors, dumpsters, sidewalks and almost everything else is repeated. In a normal development, to save polygons, the faces of a model that can't be seen are deleted; but if we want an asset to be multipurpose, it must be totally solid. The idea is to find creative uses for models already done. In Unreal Tournament 3, a wall model can be used as a floor, or a decorative object as a structure. "Many parts of the levels are created by combining re-usable building blocks into unique structures, which can allow new levels to come together quickly." (Jon Kreuzer, Technical Lead of Hawken; source: Create Digital Motion)
Tiling
Another creative way to save resources is repeating textures, as games like Dark Souls do. By using small textures repeated in a creative way, we can have more detail using fewer resources; the levels of Dishonored use this technique, among others. It's very common to repeat a texture in a mirrored form to make the repetition less obvious and avoid visible seams; borders are usually hidden behind other objects. With a few repeatable textures we can bring a whole scene to life.
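The repeat tricks above boil down to simple UV arithmetic; here is a hedged sketch of plain tiling versus mirrored tiling:

def tile(u):
    # Plain repeat: a UV of 2.3 samples the texture at 0.3.
    return u % 1.0

def mirror_tile(u):
    # Mirrored repeat: every other copy is flipped, so edges always match
    # and the repetition is less obvious.
    t = u % 2.0
    return t if t <= 1.0 else 2.0 - t

print(tile(2.3))         # 0.3
print(mirror_tile(1.3))  # 0.7: the flipped copy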
Characters
Finally, a game's characters can themselves be repeated to save resources. In this case, several modular characters are built from interchangeable pieces: different heads, different clothes or different accessories. By multiplying these interchangeable elements we can obtain a great number of variations with little impact on production. In conclusion, it's always better to work with modular models so we can combine them in creative ways. But we mustn't forget to break the repetition with extra trim pieces or with different textures and normal maps.
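The gain from interchangeable pieces described above is just a product of the option counts; the piece counts below are made up for illustration:

from itertools import product

heads, outfits, accessories = 3, 4, 2
variants = list(product(range(heads), range(outfits), range(accessories)))
print(len(variants))  # 24 distinct characters from only 9 modeled pieces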
Import in UDK
In the UDK software, open the Content Browser and click Import. After finding the file with the model, open it. In the options, fill in the Package, Groups and Name blanks with whatever you want, but be sure to name the package with the definitive name of the asset: that way you can place a cube in the level as a placeholder and keep working on the definitive model, with all the instances updating automatically. The model is dragged from the Content Browser to the viewport floor, next to the default blank cube, and its size is examined to decide whether it's correct or not. To check how it looks from a first-person view, we just click Play In Editor. The model will probably turn out too small or too big. To fix it, we go back to the 3D package used and adjust the scale of the cube:
1 3ds Max/Maya unit = 1 Unreal unit
Then we export it as an FBX with the same name as the previous file. Back in UDK, in the Content Browser, we right click on the asset and select Reimport. The asset and the instances placed in the map will be automatically updated (a script-side sketch of this loop appears after the list below).

Next-gen assets for Unreal Engine 3
What follows is one of the methods used to make UE3 models with normal maps in Maya. The steps are:
1. Base model
2. HighPoly modeling
3. LowPoly modeling
4. Extraction of normal maps
5. Deriving cavity maps
6. Base model texturing
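As promised, here is a hedged Maya Python sketch of the export/reimport loop; the object name and path are placeholders, and the fbxmaya plug-in (shipped with Maya) must be loadable:

import maya.cmds as cmds

cmds.loadPlugin('fbxmaya', quiet=True)   # make the FBX exporter available

# Export the selected object under the asset's definitive name; reusing
# the exact same file name is what lets UDK's Reimport update every
# placed instance at once.
cmds.select('placeholder_cube')          # assumed object name
cmds.file('C:/project/SM_Beam.fbx', force=True, exportSelected=True,
          type='FBX export')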
I'm going to proceed with a very basic model of some stones. First, we need some reference images to see the distribution of the objects. I've created a Base Model to sculpt from in ZBrush, so I only worry about the volume, the adequate form and keeping the faces quadrangular to facilitate deformation and subdivision in ZBrush. I had to add a bevel on the faces that needed hard edges, because when subdividing in ZBrush these faces shrink due to the Catmull-Clark algorithm. I ended up with something like this:
For this particular case I exported all the pieces combined into a single .OBJ, because the geometry is very simple. If it were more complex, I would export the pieces separately.
HighPoly Modeling
After bringing the model into ZBrush, I started adding definition and details. The finest details can be added later directly to the normal map in Photoshop. Again, I used some reference images to know what kind of details to sculpt. Once finished, it looks like this:
It's exported once more as .OBJ, and then we go back to Maya.
LowPoly Modeling
The next step is creating the definitive LowPoly model. After importing the HighPoly into Maya (Live mode), the LowPoly modeling is done using the imported mesh as the base: when moving the LowPoly model's points, they snap to the surface of the HighPoly. In this model it is not important to keep the polygons quadrangular; triangles can be added to obtain more definition. My advice is to give priority to the definition of the silhouette: normal maps will do the rest. As an alternative, we can create the LowPoly model in an application specific to the retopology process, like Topogun. As before, the LowPoly model is exported as .OBJ, and now let's move on to the extraction of normal maps.
The LowPoly model's UVs
I created the UVs with Blender because of its simplicity of use. I won't do any overlapping, so that no shell sits on top of another, to avoid errors when generating the normal maps.
Extraction of normal maps
xNormal is, in my opinion, the best tool for extracting normal maps. In the High definition meshes section, the HighPoly model is loaded (right click on the row -> Browse meshes). For the HighPoly coming from ZBrush, we make sure that the Smooth normals box is activated (ZBrush exports everything faceted, with hard edges). The LowPoly model is loaded in the Low definition meshes section.

Now we'll look at a few interesting options. If we left any hard edges to add definition, we'll have to activate Use exported normals. On the other hand, if we want soft edges in the whole model and we're not sure it's been done, we should select Smooth normals, just in case. The Use cage option lets us load a separate model as a cage, to define the direction and limit of the rays that travel from the LowPoly model to the HighPoly one. Generally, the first maps generated by xNormal are good; but if we notice that the map has mistakes, we'll have to model a cage from the LowPoly model in which none of the faces intersect, so we won't have any problems. A few columns to the right is the External cage file button: right click -> Browse meshes.

A file name for the map and an output size (a power of 2) are set in the Baking options section. The bigger the map, the longer the render time our computer will need. The background color can be left black, to isolate the useful area easily later. The Closest hit if ray fails and Discard back-faces hits boxes remain activated, as this gives the optimal result. If for any reason we need the normal map upside down, we mark the Flip vertically option. Edge padding is left at 16; that's the number of pixels each UV shell is extended, useful for avoiding seams in our model. Bucket size is left at its default; if something looks wrong later, we can change it to see if there's any difference. The Antialiasing minimum value is 1x, which gives pretty good results; if we need more, we can change it to 2x or 4x, but the generation of the map will take longer.

Everything's ready now, so we can press the giant Generate Maps button and wait; depending on the computer used, it will take more or less time. If our model is divided in parts, we repeat the process for each HighPoly/LowPoly pair. The generated maps are saved into the project folder and then combined in Photoshop. The LowPoly model with the normal map applied (in High Quality mode in Maya, to activate the pixel shader) looks like this:
Important: in order for the normal map to display correctly in UDK/Unreal Engine 3, we must invert the green channel in Photoshop before importing it.
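That channel flip is easy to automate outside Photoshop; here is a minimal sketch using Pillow, with placeholder file names:

from PIL import Image, ImageChops

img = Image.open('stones_normal.png').convert('RGB')
r, g, b = img.split()
g = ImageChops.invert(g)  # flip the map's green (Y) direction for UDK
Image.merge('RGB', (r, g, b)).save('stones_normal_udk.png')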
Deriving cavity maps
Obviously, when texturing we need certain references to know where to paint erosion, moisture stains, etc. The cavity map is a shading map of the cavities of the HighPoly model. Besides using it as a reference, we can place it as a layer in Multiply or Color Burn mode over the rest of the texture to give it more depth. To derive it from the normal map in xNormal, we go to Tools, choose Tangent-Space Normal to Cavity Map and click Generate. Brightness and contrast can be adjusted in the options below. Then we save it and use it to create the texture.
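xNormal's exact algorithm isn't documented here, but a common approximation derives cavities from the divergence of the normal field (crevices are where normals converge); this sketch shows that idea, with the sign and the contrast factor as knobs that may need tuning per map:

import numpy as np
from PIL import Image

n = np.asarray(Image.open('stones_normal.png').convert('RGB'), dtype=float)
nx = n[..., 0] / 127.5 - 1.0   # decode red channel to [-1, 1]
ny = n[..., 1] / 127.5 - 1.0   # decode green channel to [-1, 1]

# Concave areas have negative divergence; push them below mid-grey.
divergence = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
cavity = np.clip(0.5 + 4.0 * divergence, 0.0, 1.0)  # 4.0: contrast knob

Image.fromarray((cavity * 255).astype(np.uint8)).save('stones_cavity.png')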
How to import a Static Mesh in UDK
A Static Mesh is a static model in UDK, in the sense that it cannot be deformed by bones. Static Meshes serve to decorate the level, and they can range from a couple of stones placed on the floor to a highly detailed mountain. In Unreal, Static Meshes are cheap to render, which is why they are used so much. The ideal approach is to work with lots of little Static Meshes and repeat them creatively wherever possible; as mentioned before, these repeated instances don't occupy any extra memory. Also, small modular models let the engine render only the models visible on camera: if a Static Mesh is really big, e.g. an entire floor, it is never entirely out of view, so it is always rendered. To import into UDK, the model is prepared in the 3D package, making sure there's only one object in the scene. With the model textured, we verify that it has a second UV channel for the lightmap. Once everything's ready, we select File -> Export and export it as an FBX. In the options we deactivate animation, because we don't need it. Triangulate can be activated if the model hasn't been triangulated manually (UDK triangulates automatically on import anyway). In UDK, we open the Content Browser and press Import, look for the FBX file and proceed to import it. If the file has no bones associated, it'll be imported as a Static Mesh. If the default options are OK, we press Import. The textures are imported in the same way and applied to a material, which is later applied to our model once it has been dragged into the map. If a second UV channel is assigned to the model, we press Build All and the lighting on the model will be processed.
Drawbacks
The engine provides excellent texture quality, but a pop-in effect on the textures often appears. This is caused by the engine displaying the geometry first and streaming the full-resolution textures onto the models afterwards, in order to reduce loading time. Unfortunately, this is a common artifact right after the engine finishes loading.
Conclusions
After testing UDK's material creation, navigation and importing systems, I've noticed that I don't really like it much. Unity will let me achieve everything I've been looking for in this project because of its interface's similarity to Maya and because it easily imports .FBX files with assigned materials. Unity also has an outliner, similar to Maya's, to control all the objects. Besides, Unity is less graphically demanding than UDK, so my computer will bear the high polygon load that the whole city I'm creating will cause. Unity is also free and easier to handle for somebody whose main interests are modeling and texturing; its learning curve is gentler than UDK's. What I miss in Unity is a lighting and shadowing system similar to UDK's, a spectacular system that gives incredible results, though it's also true that it consumes a lot of resources. UDK also has better normal and displacement mapping than Unity, where these effects are flatter and the diffuse map carries most of the work. For an experienced indie team, UDK would allow new levels of quality because of its power and because it renders better graphics than Unity. On the other hand, for individuals or inexperienced teams who are just starting to learn, like me, the current best option is Unity, because of its ease of use and because it doesn't need powerful computers. Unity's workflow with Maya is easier than UDK's. Another reason that made me choose Unity over UDK is the huge amount of information and free tutorials for learning it, while for UDK you have to pay for them. Since Unity's requirements are lower, a larger number of people will be able to see my work and try it on their own computers without needing powerful graphics cards. Definitely, this dissertation allowed me to realize that UDK, which was my first option because of its visual quality, might not be the best option for my project. For me it's better to use Unity for its ease, its robustness and the wider reach of my work, in exchange for more modest graphics. Also, an external module will allow me to export the project to tablets.