<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://wiki.polycount.com/w/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://wiki.polycount.com/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cman2k</id>
		<title>polycount - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://wiki.polycount.com/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cman2k"/>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Special:Contributions/Cman2k"/>
		<updated>2026-04-04T21:45:18Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.2</generator>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:31:38Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Blending Normal Maps Together */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps come in two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space; they all mean the same thing. World-space is basically the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly-blue colors. Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader as it can with tangent-space maps. Also, the three color channels contain very different data, which compresses poorly and creates many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
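The packing above can be sketched in a few lines of Python (an illustration of the usual convention described here, not any particular tool's code): each component of a unit normal in the -1 to 1 range maps to a byte in the 0 to 255 range.

```python
def encode_normal(n):
    """Pack a unit normal (x, y, z) into an 8-bit RGB triple."""
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_normal(rgb):
    """Unpack an 8-bit RGB triple back into a normal vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# A normal pointing straight out of the surface, (0, 0, 1),
# becomes the familiar neutral "normal map blue".
print(encode_normal((0.0, 0.0, 1.0)))   # (128, 128, 255)
```

This is also why an unbent surface reads as flat blue: X and Y sit at the 128 midpoint while Z is at full 255.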
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using a tangent-space normal map, the normal map shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this, either change the shader or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
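As a minimal sketch (plain Python over pixel tuples, not a specific image editor's API), inverting a channel is just 255 minus each value:

```python
def invert_green(pixels):
    """Invert the green channel of a list of (r, g, b) pixel tuples,
    flipping the Y handedness of a tangent-space normal map."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

# A normal tilted "up" in a +Y map reads as tilted "down" in a -Y map.
print(invert_green([(128, 200, 255)]))  # [(128, 55, 255)]
```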
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
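A hedged sketch of that DXT5_nm-style re-arrangement (the exact layout varies by engine, so treat this as an illustration only): X moves into the alpha channel, Y stays in green, and red and blue are left empty.

```python
def swizzle_dxt5nm(pixels):
    """Rearrange (r, g, b) normal-map pixels into (r, g, b, a) tuples
    for a DXT5_nm-style layout: X in alpha, Y in green, red/blue empty."""
    return [(0, g, 0, r) for (r, g, b) in pixels]

print(swizzle_dxt5nm([(128, 200, 255)]))  # [(0, 200, 0, 128)]
```

The shader then reconstructs Z from X and Y, which is why the blue channel can be discarded.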
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates, except that it provides directionality across the surface; it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
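The world-space-to-tangent-space conversion amounts to projecting the light vector onto the three basis axes. A minimal sketch in Python (hypothetical helper names; real shaders do this per pixel on the GPU):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, bitangent, normal):
    """Project a world-space vector onto the tangent basis axes."""
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# Axis-aligned basis for a surface facing +Z in world space.
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
light_dir = (0.0, 0.0, 1.0)          # light shining straight along the normal
light_ts = to_tangent_space(light_dir, T, B, N)
sampled_normal = (0.0, 0.0, 1.0)     # decoded from a neutral-blue texel
lit = max(0.0, dot(light_ts, sampled_normal))
print(lit)  # 1.0 -- fully lit
```

The dot product at the end is the comparison between light ray and stored normal that decides how brightly each pixel is lit.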
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use xNormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the V texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in the tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
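How a baker might derive the tangent and bitangent for one triangle can be sketched as follows, after Eric Lengyel's widely cited method (an illustration only: real tools average the results per vertex, re-orthogonalize against the vertex normal, and differ over which UV axis the tangent follows):

```python
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Derive the tangent and bitangent of one triangle from its
    positions and UVs, by solving the edge/UV-delta equations."""
    e1 = [b - a for a, b in zip(p0, p1)]            # position edges
    e2 = [b - a for a, b in zip(p0, p2)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]     # UV deltas
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    tangent   = [(dv2 * a - dv1 * b) * r for a, b in zip(e1, e2)]
    bitangent = [(du1 * b - du2 * a) * r for a, b in zip(e1, e2)]
    return tangent, bitangent

# Triangle mapped axis-aligned in UV space: tangent follows U, bitangent V.
t, b = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
print(t, b)  # [1.0, 0.0, 0.0] [0.0, 1.0, 0.0]
```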
&lt;br /&gt;
When a triangle's vertex normals are pointing straight out, and a pixel in the normal map is neutral blue (128,128,255), that pixel's normal will point straight out from the surface of the low-poly mesh. When the pixel normal is tilted towards the left or the right in the tangent coordinate space, it gets either more or less red color, depending on whether the normal map stores the X axis as a positive or a negative value. The same goes for when the normal is tilted up or down in tangent space: it gets either more or less green color. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined together to create the final per-pixel surface normals.&lt;br /&gt;
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh including a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can collapse edge loops or cut in new edges to add/remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square, any UV bits outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, this will likely cause artifacts in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
&lt;br /&gt;
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The Render To Texture tool will always bake whatever UVs are highest along the W axis. However, using W can be messy: it's generally hidden unless you purposefully look for it (bad for teamwork), it doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details, and save UV space, which allows more detail in the normal map since the texture pixels are smaller on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
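Step 4 works because tiled texture lookups wrap on whole units, so moving a UV shell by exactly 1 (or any whole number) lands it on the same texels. A small sketch, assuming plain (u, v) tuples:

```python
def offset_shell(uvs, du=1.0, dv=0.0):
    """Move a UV shell by whole units so it leaves the 0-1 bake square."""
    return [(u + du, v + dv) for (u, v) in uvs]

def wrapped(uv):
    """Where a tiling sampler actually reads from."""
    return (uv[0] % 1.0, uv[1] % 1.0)

shell = [(0.25, 0.5)]
moved = offset_shell(shell)     # now outside 0-1, ignored by the baker
print(wrapped(moved[0]))        # (0.25, 0.5) -- same texel as before
```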
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. This is a mistake, however, because the vertex normals along the hole will often bend towards the hole a bit; there are no faces on the other side to average the normals with. This creates a strong lighting seam in the normal map. It's typically best to use the complete mirrored model to bake the normal map, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
In Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3), symmetrical models commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However it only works along horizontal or vertical UV seams, not across any angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals are used to control the direction a triangle will be lit from: if the normal is facing the light, the triangle will be fully lit; if it's facing away from the light, the triangle won't be lit. &lt;br /&gt;
&lt;br /&gt;
Each vertex however can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
&lt;br /&gt;
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading of the outside of the car body. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; you must experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps, the vertex normal problem goes away: an object-space normal map completely ignores the crude vertex normals of the mesh. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However, bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble with long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, the shading can be improved by editing the vertex normals so the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, other artists prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice, though; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geo, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the highpoly, sometimes you'll need to make them more rounded than in real life to &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader is edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts projecting a certain numerical distance out from the low-poly mesh, and sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the mesh's surface normal and saves it in the normal map.&lt;br /&gt;
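The projection step can be sketched in a few lines of code. The following is an illustrative sketch only, not any particular baker's implementation: it casts a single ray from an offset point back toward one high-poly triangle, using the standard Möller–Trumbore intersection test. All positions, distances, and names here are made up for the example.&lt;br /&gt;

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2):
    # Moller-Trumbore ray/triangle intersection; returns the hit
    # distance along the ray, or None for a miss.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < 1e-8:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > 0.0 else None

# One texel on the low-poly surface (z = 0) with vertex normal +Z.
texel_pos = np.array([0.25, 0.25, 0.0])
vertex_normal = np.array([0.0, 0.0, 1.0])
ray_distance = 0.5                       # the baker's projection distance

# Step outward along the normal, then cast back inward.
origin = texel_pos + vertex_normal * ray_distance
direction = -vertex_normal

# A high-poly triangle floating slightly above the low-poly plane.
v0 = np.array([0.0, 0.0, 0.1])
v1 = np.array([1.0, 0.0, 0.1])
v2 = np.array([0.0, 1.0, 0.1])

t = ray_triangle(origin, direction, v0, v1, v2)
hit_point = origin + direction * t       # sample the high-poly normal here
```

A real baker repeats this for every texel against every nearby high-poly triangle, keeps the closest hit, and writes that surface normal into the map.&lt;br /&gt;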
&lt;br /&gt;
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix any jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this tends to render much slower and takes more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at 2x the intended image size, then scale the normal map down to 1/2 size in a paint program like Photoshop. The pixel resampling during the reduction adds anti-aliasing in a very quick process. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map may cause pixel artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
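The downscale-then-renormalize step can be sketched with NumPy. This is an illustrative sketch under assumed conventions (the example vectors are made up, and the common unsigned decode c / 255 * 2 - 1 is assumed), not any specific tool's code:&lt;br /&gt;

```python
import numpy as np

# A tiny 4x4 "2x supersampled" bake, already decoded to unit vectors
# (for an 8-bit map, decode first with n = c / 255 * 2 - 1).
left  = np.array([ 0.6, 0.0, 0.8])   # unit-length normal tilted +X
right = np.array([-0.6, 0.0, 0.8])   # unit-length normal tilted -X
vecs = np.empty((4, 4, 3))
vecs[:, :2] = left
vecs[:, 2:] = right
vecs[0, 1] = vecs[1, 1] = right      # mix one 2x2 block

# Downscale: average each 2x2 block of vectors.
lo = vecs.reshape(2, 2, 2, 2, 3).mean(axis=(1, 3))

# Averaging shortens mixed normals, so re-normalize each one to length 1.
lo /= np.linalg.norm(lo, axis=-1, keepdims=True)

# Re-encode for an 8-bit map: c = (n + 1) / 2 * 255.
lo8 = np.round((lo + 1.0) * 0.5 * 255.0).astype(np.uint8)
```

The mixed block averages to a shortened vector, and the re-normalize step restores it to unit length, which is what keeps specular highlights clean after the reduction.&lt;br /&gt;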
&lt;br /&gt;
3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding and re-do the padding later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, either use a specific lighting setup or use a special material shader.&lt;br /&gt;
* The lighting setup is described in these tutorials:&lt;br /&gt;
** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
* The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke].&lt;br /&gt;
** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding |Edge Padding]], this will create shading seams on the UV borders.&lt;br /&gt;
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like in between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting the ray distance too small will cause problems at other places on the mesh where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, often the differences between the low-poly mesh and the high-poly mesh will create a wavy edge in the normal map. There are several ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions the rays will be cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. The same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example, to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can be layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of the quad are moved around in certain shapes, the software's algorithm for polygon models tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape. It does this by switching the internal edge between its triangles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outwards along each vertex normal, then at the distance you set a ray is cast back inwards. Wherever that ray intersects the high-poly mesh, it will sample the normals from it. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inwards from each vertex. This ballooned-out mesh is the cage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;br&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all it is just a texture, so you can clone, blur, copy, blend all you want... as long as it looks good of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way in helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you only convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. Often the best results are obtained by baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
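As a rough sketch of what these converters do internally, a heightmap can be turned into tangent-space normals from its gradients. This is a minimal, hedged example: real tools use better filters (Sobel, 5x5, and so on), and the sign conventions for the red and green channels vary between engines, so the signs below are an assumption.&lt;br /&gt;

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    # Convert a heightmap (2D array, black = low, white = high) to
    # tangent-space normals via finite-difference gradients.
    gy, gx = np.gradient(height.astype(np.float64))
    # Sign convention assumed here: surface tilts away from rising height.
    n = np.dstack((-gx * strength, -gy * strength, np.ones_like(gx)))
    # Normalize each pixel's vector to unit length.
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n  # encode with c = (n + 1) / 2 * 255 for an 8-bit map

# A flat heightmap gives the flat normal (0, 0, 1) everywhere.
flat = height_to_normal(np.zeros((4, 4)))
```

A larger `strength` exaggerates the slopes, which is roughly what the "scale" or "intensity" sliders in the tools below control.&lt;br /&gt;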
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters or significant environment assets the high-poly route is best, but for less significant environment surfaces, working from a heightmap-based texture will provide a good enough result for a much smaller commitment of time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflection component. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
In a purely logical way, 127 seems like it would be the halfway point between 0 and 255. However 128 is the color that actually works in practice. When a test is done comparing (127,127,255) versus (128,128,255) it becomes obvious that 127 creates a slightly bent normal, and 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
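A quick bit of arithmetic shows the problem, assuming the common unsigned decode n = c / 255 * 2 - 1 (decode conventions vary by engine, so this is an illustration, not every pipeline's math):&lt;br /&gt;

```python
def decode(c):
    # Common unsigned decode: map an 8-bit channel [0..255] to [-1, 1].
    return c / 255.0 * 2.0 - 1.0

# True zero falls at 127.5, which no 8-bit value can store exactly.
print(127, decode(127))   # slightly negative
print(128, decode(128))   # slightly positive
```

Neither value decodes to exactly zero, so which one counts as flat is a pipeline convention; per the thread above, 128 is the value that behaves as flat in most game pipelines.&lt;br /&gt;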
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail like wrinkles, cracks, and the like. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below. &lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples the default values were used for CrazyBump (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below was [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
&lt;br /&gt;
[[Image:nrmlmap_blending_methods_Maps.png|none|The blended normal maps.&amp;lt;BR&amp;gt;image by [http://www.ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This does probably the best job at preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normalmaps using multiple layers (Note: to work with the Overlay blend mode each layer's Output Level should be 128 instead of 255, you can use the Levels tool for this).&lt;br /&gt;
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
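The layer-mode tricks above approximate vector math that can also be done directly on decoded normals. Here is a hedged sketch of one such vector-space approach (a &amp;quot;whiteout&amp;quot;-style blend: sum the X/Y tilts, multiply the Z components, re-normalize); it is not the exact algorithm of any tool listed above.&lt;br /&gt;

```python
import numpy as np

def whiteout_blend(base, detail):
    # Blend two decoded normal maps (arrays of unit vectors, shape HxWx3):
    # add the X/Y tilts, multiply the Z components, then re-normalize.
    out = np.empty_like(base)
    out[..., 0] = base[..., 0] + detail[..., 0]
    out[..., 1] = base[..., 1] + detail[..., 1]
    out[..., 2] = base[..., 2] * detail[..., 2]
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

# Blending a detail layer over a perfectly flat base returns the detail.
flat = np.zeros((2, 2, 3)); flat[..., 2] = 1.0
tilt = np.zeros((2, 2, 3)); tilt[..., 0] = 0.6; tilt[..., 2] = 0.8
blended = whiteout_blend(flat, tilt)
```

Working on decoded vectors like this avoids the Output Level 128 workaround the Overlay-based Photoshop methods need.&lt;br /&gt;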
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;br&amp;gt; How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;br&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However if you edit normal maps by hand or if you blend multiple normal maps together this can cause those lengths to change. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc. To help you avoid this NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[:Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map, then it also occludes diffuse lighting, which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, first adjust the levels of the AO layer so the darks go no lower than 128 gray, then set the AO layer to Darken mode. This shortens the normals in the normal map, causing the surface to receive less light in the darker areas. &lt;br /&gt;
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
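The Levels-plus-Darken recipe above can be sketched per pixel (a hypothetical sketch; Darken is a per-channel minimum):&lt;br /&gt;

```python
def bake_ao_darken(normal_rgb, ao_gray):
    # Levels step: remap the AO so its darkest value is 128 gray.
    ao = 128 + (ao_gray * 127) // 255   # 0..255 -> 128..255
    # Darken blend: per-channel minimum of AO gray and the normal map pixel.
    return tuple(min(c, ao) for c in normal_rgb)
```

A fully occluded pixel pulls a neutral normal down to (128, 128, 128), leaving it almost no length to catch light, while unoccluded pixels pass through unchanged.&lt;br /&gt;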
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;BacklightingExample&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
[[Image:tree_front.jpg|thumb|Tree simulating subsurface scattering. &amp;lt;BR&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick].]]&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps: one uses a regular tangent-space normal map, the other uses the same normal map with the blue channel inverted. The diffuse map using the regular normal map only gets lit on the side facing the light (front view), while the diffuse map using the inverted normal map only gets lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, leaves in shadow do not receive direct lighting, so their backsides do not show the inverted normal map; the fake subsurface scatter effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched the distant trees to a model that uses a cheaper shader.&lt;br /&gt;
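The two-lobe idea can be sketched with simple Lambert terms (a hypothetical sketch; the function names are made up for illustration):&lt;br /&gt;

```python
def lambert(n, l):
    # Clamped Lambert term: max(0, N . L).
    return max(0.0, sum(a*b for a, b in zip(n, l)))

def leaf_lighting(normal, light_dir, front_diffuse, back_diffuse):
    # Front lobe: regular tangent-space normal.
    front = lambert(normal, light_dir)
    # Back lobe: blue (Z) channel inverted, so it faces the far side.
    flipped = (normal[0], normal[1], -normal[2])
    back = lambert(flipped, light_dir)
    # Sum the two diffuse contributions; only one lobe is lit per light angle.
    return tuple(f*front + b*back for f, b in zip(front_diffuse, back_diffuse))
```

With the light in front of the leaf only the front diffuse map contributes; flip the light behind it and only the back diffuse map lights up.&lt;br /&gt;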
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;ShadersAndSeams&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
xNormal generates accurate normals when the maps are displayed in its own viewer, and its SDK includes a method to write a custom tangent space generator for the tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
[[Image:max2010_normalmap_workarounds.png|thumb|Viewport methods in 3ds Max 2010. &amp;lt;BR&amp;gt; image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but not in the realtime viewport with the 3ds Max shaders, because Max uses a different [[#TangentBasis|tangent basis]] for each. This is readily apparent with non-organic hard-surface normal maps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport by baking a tangent-space map in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map, use [[#CBS|NSpace]] to convert it into a tangent-space map, then load that into a DirectX material with the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to correctly generate normals for realtime viewing, with the correct [[#TangentBasis|tangent basis]] and far fewer smoothing errors than 3ds Max. &lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]). '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager; then you can create them the same way you create a Lambert, Phong, etc. Switch OFF high-quality rendering in the viewports to see them correctly.&lt;br /&gt;
* If you want to render offline, use mental ray instead of Maya's software renderer, because mental ray correctly interprets tangent-space normal maps; the Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot; the slider still works if you don't mind a uniform gloss value, and spec maps work fine. Use the same setup as you would for viewport rendering, but save your textures as TGAs or similar for mental ray. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;span id=&amp;quot;NormalMapCompression&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
See [[Normal Map Compression]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[:Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[:Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;&amp;lt;BR&amp;gt;&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated!&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[:Category:Texturing]] [[:Category:TextureTypes]] [[:Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:29:56Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Back Lighting Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps can be made in either of two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space; they mean the same thing. World-space is basically the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly-blue colors. Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader the way it can with tangent-space maps. Also, the three color channels contain very different data, which compresses poorly and creates many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using a tangent-space normal map, the normal map shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this either change the shader, or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
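Both fixes are simple channel operations (a hypothetical sketch; the DXT5_nm layout follows the description above):&lt;br /&gt;

```python
def invert_channel(pixel, index):
    # Flip a channel's direction: black becomes white and vice versa.
    c = list(pixel)
    c[index] = 255 - c[index]
    return tuple(c)

def swizzle_dxt5nm(r, g, b):
    # DXT5_nm layout: X moves into alpha, Y stays in green,
    # red and blue are left empty (Z is rebuilt in the shader).
    return (0, g, 0, r)  # (R, G, B, A)
```

Inverting the green channel of (128, 200, 255) gives (128, 55, 255), flipping the Y direction of that pixel's normal.&lt;br /&gt;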
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates except it provides directionality across the surface; it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
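The world-to-tangent conversion is just three dot products against the per-vertex basis vectors (a minimal sketch, assuming an orthonormal basis):&lt;br /&gt;

```python
def world_to_tangent(v, tangent, bitangent, normal):
    # Project the world-space vector onto the tangent, bitangent, and normal
    # axes (the rows of the TBN matrix) to express it in tangent space.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))
```

With an identity basis the vector passes through unchanged; a rotated basis rotates the light into the surface's local frame.&lt;br /&gt;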
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use Xnormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the V texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in the tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
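One common way to build that per-vertex axis (implementations differ, which is exactly why bakers and shaders disagree) is to orthogonalize the tangent against the vertex normal and derive the bitangent with a cross product:&lt;br /&gt;

```python
def make_tangent_basis(normal, tangent):
    # Gram-Schmidt: remove the tangent's component along the vertex normal,
    # then renormalize it.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d = dot(normal, tangent)
    t = tuple(tc - d * nc for tc, nc in zip(tangent, normal))
    length = sum(c * c for c in t) ** 0.5
    t = tuple(c / length for c in t)
    # Bitangent (binormal) comes from the cross product of normal and tangent.
    b = (normal[1]*t[2] - normal[2]*t[1],
         normal[2]*t[0] - normal[0]*t[2],
         normal[0]*t[1] - normal[1]*t[0])
    return t, b
```

For a vertex normal pointing straight out at (0, 0, 1), a skewed tangent like (1, 0, 0.5) snaps back to (1, 0, 0) and the bitangent comes out as (0, 1, 0).&lt;br /&gt;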
&lt;br /&gt;
When a triangle's vertex normals are pointing straight out, and a pixel in the normal map is neutral blue (128,128,255) this means that pixel's normal will be pointing straight out from the surface of the low-poly mesh. When that pixel normal is tilted towards the left or the right in the tangent coordinate space, it will get either more or less red color, depending on whether the normal map is set to store the X axis as either a positive or a negative value. Same goes for when the normal is tilted up or down in tangent space, it will either get more or less green color. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined together to create the final per-pixel surface normals.&lt;br /&gt;
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh with a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can collapse edge loops or cut in new edges to add or remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square; any UVs outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, this will likely cause artifacts to appear in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
&lt;br /&gt;
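Why whole-number offsets are harmless can be sketched with wrap-style texture addressing (a minimal Python sketch, assuming repeat/wrap sampling, which most game engines use):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of why whole-unit UV offsets don't change the mapping with wrap
# (repeat) texture addressing: the sampler only uses the fractional part of
# the coordinate, so u and u + 1 (or u - 3) fetch the same texel.

def wrap_uv(u, v):
    # wrap both coordinates back into the 0-1 square
    return (u % 1.0, v % 1.0)

assert wrap_uv(0.25, 0.75) == wrap_uv(1.25, 0.75)   # offset 1 unit in U
assert wrap_uv(0.25, 0.75) == wrap_uv(0.25, -2.25)  # any whole number works
```
&lt;br /&gt;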
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The Render To Texture tool will always bake whatever UVs are the highest along the W axis. However, using W can be messy... it's generally hidden unless you purposefully look for it (bad for teamwork), doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details and to save UV space, which allows more detail in the normal map since each texture pixel covers a smaller area on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. &lt;br /&gt;
&lt;br /&gt;
This is a mistake, however, because the vertex normals along the hole will often bend towards it a bit; there are no faces on the other side to average the normals with. This will create a strong lighting seam in the normal map. &lt;br /&gt;
&lt;br /&gt;
It's typically best to use the complete mirrored model to bake the normal map, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
In Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3) their symmetrical models commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=51088 tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However, this only works along horizontal or vertical UV seams, not across angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals are used to control the direction a triangle will be lit from; if the normal faces the light the triangle will be fully lit, and if it faces away from the light the triangle won't be lit. &lt;br /&gt;
&lt;br /&gt;
However, each vertex can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, while Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
&lt;br /&gt;
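The split-versus-averaged behavior can be sketched in Python (hypothetical corner data, for illustration only):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of how hard and soft edges come from vertex normals. With a soft
# edge the shared vertex gets one averaged normal; with a hard edge the
# vertex is duplicated and each face keeps its own normal, which breaks
# the shading there.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Two faces meeting at a 90-degree corner: one faces +Z, the other +X.
face_a_normal = (0.0, 0.0, 1.0)
face_b_normal = (1.0, 0.0, 0.0)

# Soft edge (one smoothing group): average the face normals, renormalize.
soft = normalize(tuple(a + b for a, b in zip(face_a_normal, face_b_normal)))

# Hard edge (split normals): two distinct normals at the same position.
hard = [face_a_normal, face_b_normal]

print(soft)   # halfway between the faces: about (0.707, 0.0, 0.707)
print(hard)   # two different normals -> a visible shading seam
```
&lt;br /&gt;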
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading for the outside of the car body. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps the vertex normal problem goes away, because an object-space normal map completely ignores vertex normals. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However, bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble with long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, you can improve the shading by editing the vertex normals so the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, other artists prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice, though; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geo, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the high-poly, sometimes you'll need to make them more rounded than in real life to &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader were edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts a certain numerical distance out from the low-poly mesh, and sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the surface normal and saves it in the normal map.&lt;br /&gt;
&lt;br /&gt;
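The projection can be sketched in miniature (a toy Python example with made-up distances and a flat stand-in for the high-poly surface, not any baker's actual code):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy sketch of the baking projection: from a point on the low-poly surface,
# step outward by the ray distance, then cast back inward along the negated
# normal; the first high-poly hit is where the baker samples the normal that
# gets written into the map.

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Return t where origin + t*direction crosses the plane, or None."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = sum((p - o) * n for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    return t if t >= 0 else None

# Low-poly surface point at z=0 facing +Z; the high-poly detail sits at z=0.1.
low_point, low_normal = (0.5, 0.5, 0.0), (0.0, 0.0, 1.0)
ray_distance = 0.5  # made-up cage/projection distance

# Start pushed out along the normal, then cast back inward.
start = tuple(p + ray_distance * n for p, n in zip(low_point, low_normal))
inward = tuple(-n for n in low_normal)

t = ray_plane_hit(start, inward, (0.0, 0.0, 0.1), (0.0, 0.0, 1.0))
assert t is not None and t <= 2 * ray_distance  # hit within the search range
# The baker would now record the high-poly surface normal at the hit point.
```
&lt;br /&gt;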
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix any jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this tends to render much more slowly, and takes more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at 2x the intended image size, then scale the normal map down to 1/2 size in a paint program like Photoshop. The pixel resampling during the reduction adds anti-aliasing for you very quickly. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map may cause pixelated artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
&lt;br /&gt;
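The decode-renormalize-encode step can be sketched in Python (illustrative only; the texel values are made up, and this is not NVIDIA's actual filter code):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of re-normalizing a normal-map texel after a 50% downscale. The
# resample averages neighboring normals, which shortens them below unit
# length; decoding, renormalizing, and re-encoding puts them back on the
# unit sphere.
import math

def decode(rgb):
    # 0..255 channel -> -1..1 vector component
    return tuple(c / 127.5 - 1.0 for c in rgb)

def encode(n):
    return tuple(round((c + 1.0) * 127.5) for c in n)

def renormalize(rgb):
    n = decode(rgb)
    length = math.sqrt(sum(c * c for c in n))
    return encode(tuple(c / length for c in n))

# Averaging two opposing-slope texels yields a too-short vector...
averaged = tuple((a + b) // 2 for a, b in zip((200, 128, 220), (56, 128, 220)))
print(renormalize(averaged))  # ...renormalizing restores a unit-length normal
```
&lt;br /&gt;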
3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding and re-do the padding later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, either use a specific lighting setup or use a special material shader:&lt;br /&gt;
* The lighting setup is described in these tutorials:&lt;br /&gt;
** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
* The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke]&lt;br /&gt;
** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding|Edge Padding]], this will create shading seams on the UV borders.&lt;br /&gt;
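The idea behind edge padding can be sketched on a single row of texels (a toy Python example, not how any particular tool implements it):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of edge padding on a 1-D row of texels: empty (None) texels next to
# a UV shell get copies of the nearest shell texel, so bilinear filtering and
# mipmapping don't blend background color into the shell at its borders.

def pad_row(row, passes=2):
    row = list(row)
    for _ in range(passes):
        src = list(row)  # read from a snapshot so each pass dilates by 1 texel
        for i, texel in enumerate(src):
            if texel is None:
                neighbors = [src[j] for j in (i - 1, i + 1)
                             if 0 <= j < len(src) and src[j] is not None]
                if neighbors:
                    row[i] = neighbors[0]
    return row

# 'A' texels belong to a UV shell; None is empty background.
print(pad_row([None, None, 'A', 'A', None, None]))
```
&lt;br /&gt;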
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like in between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting it too small will lead to problems at other places on the mesh where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, often the differences between the low-poly mesh and the high-poly mesh will create a wavy edge in the normal map. There are several ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions the rays will be cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. The same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of a quad are moved into certain shapes, the software tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape by switching the internal edge between its triangles.&lt;br /&gt;
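The effect of the diagonal choice can be shown numerically. Below is a minimal Python sketch (an illustration written for this page, not code from any modeling package; the quad coordinates are made up) that triangulates a non-planar quad along each of its two possible diagonals and compares the resulting face normals:&lt;br /&gt;

```python
# Demonstrates how the choice of internal diagonal changes the
# face normals (and therefore the shading) of a non-planar quad.

def sub(a, b):
    return tuple(a[i] - b[i] for i in range(3))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def face_normal(p0, p1, p2):
    n = cross(sub(p1, p0), sub(p2, p0))
    length = sum(c*c for c in n) ** 0.5
    return tuple(c / length for c in n)

# A quad with one corner lifted, so it is not planar.
q = [(0, 0, 0), (1, 0, 0), (1, 1, 0.3), (0, 1, 0)]

# Diagonal 0-2 splits the quad into triangles (0,1,2) and (0,2,3).
diag_a = (face_normal(q[0], q[1], q[2]), face_normal(q[0], q[2], q[3]))
# Diagonal 1-3 splits it into triangles (0,1,3) and (1,2,3).
diag_b = (face_normal(q[0], q[1], q[3]), face_normal(q[1], q[2], q[3]))

print(diag_a)
print(diag_b)
```

The two triangulations produce different pairs of face normals, which is why a re-triangulated mesh can shade differently than the one that was baked.&lt;br /&gt;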
&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outwards along each vertex normal, then at the distance you set a ray is cast back inwards. Wherever that ray intersects the high-poly mesh, it samples the normals from it.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inwards from each vertex. This ballooned-out mesh is the cage.&lt;br /&gt;
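In rough terms, the cage setup only changes where the bake rays start. The Python sketch below illustrates the idea for a single vertex (an illustration only, not any particular baker's implementation; the positions, normal, and push distance are made-up values):&lt;br /&gt;

```python
# Sketch of cage-based ray casting for one vertex of the low-poly mesh.
# The cage vertex sits along the averaged vertex normal; the bake ray
# is then cast back inwards from the cage toward the surface.

def scale(v, s):
    return tuple(c * s for c in v)

def add(a, b):
    return tuple(a[i] + b[i] for i in range(3))

vertex = (1.0, 0.0, 0.0)          # low-poly vertex position
normal = (1.0, 0.0, 0.0)          # averaged vertex normal (unit length)
push_distance = 0.25              # how far the cage is inflated

# Cage vertex: the low-poly vertex pushed out along its normal.
cage_vertex = add(vertex, scale(normal, push_distance))

# Bake ray: starts at the cage and points back toward the low-poly
# surface. Wherever it first hits the high-poly mesh, the baker
# samples the high-poly normal for this texel.
ray_origin = cage_vertex
ray_direction = scale(normal, -1.0)

print(ray_origin, ray_direction)
```

Because the ray origins come from a single welded cage surface rather than from split per-face normals, neighboring texels along a hard edge trace from the same place, which avoids the gaps a distance-based raycast produces.&lt;br /&gt;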
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all, it is just a texture, so you can clone, blur, copy, and blend all you want... as long as it looks good, of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way in helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you simply convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. Often the best results are obtained by baking the large and mid-level details from a high-poly mesh, then combining that bake with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters and significant environment assets the high-poly route is best, but for less significant environment surfaces a heightmap-based texture will provide a good-enough result for a much smaller commitment of time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflection ingredient. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
In purely logical terms, 127 seems like it would be the halfway point between 0 and 255, but 128 is the value that actually works in practice. A test comparing (127,127,255) against (128,128,255) makes it obvious: 127 creates a slightly bent normal, while 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
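A quick Python check makes the 127-versus-128 issue concrete. This sketch assumes the common unsigned convention for storing normals; the two decode variants shown are generic examples, since different shaders decode slightly differently:&lt;br /&gt;

```python
# Encode a unit normal into an 8-bit unsigned normal-map color, using
# the common mapping n -> round((n * 0.5 + 0.5) * 255). Two typical
# decode variants are shown for comparison.

def encode(n):
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_simple(color):
    # n = c / 255 * 2 - 1: 127 lands slightly negative, 128 slightly positive.
    return tuple(c / 255.0 * 2.0 - 1.0 for c in color)

def decode_offset(color):
    # n = (c - 128) / 127: 128 decodes to exactly zero.
    return tuple((c - 128) / 127.0 for c in color)

flat = (0.0, 0.0, 1.0)
print(encode(flat))                      # encoders write (128, 128, 255)
print(decode_simple((127, 127, 255)))    # slightly bent normal
print(decode_offset((128, 128, 255)))    # exactly flat
```

Whichever decode a given engine uses, the encoder writes 128 for a flat normal, so painting with (127,127,255) introduces a small tilt that shows up at mirror seams.&lt;br /&gt;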
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail like wrinkles, cracks, and the like. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below.&lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples the default values were used for CrazyBump (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below was [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
| [[Image:nrmlmap_blending_methods_Maps.png]]&lt;br /&gt;
|-&lt;br /&gt;
| The blended normal maps.&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]&amp;lt;/span&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This does probably the best job at preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normal maps using multiple layers. (Note: to work with the Overlay blend mode, each layer's Output Level should be 128 instead of 255; you can use the Levels tool for this.)&lt;br /&gt;
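For reference, the Overlay-based methods above follow Photoshop's standard Overlay formula. The Python sketch below shows the Overlay-red/green, Multiply-blue variant on a single pixel, followed by re-normalization (the helper names and pixel values are made up for illustration; this is a sketch of the math, not a tool's code):&lt;br /&gt;

```python
# Photoshop-style Overlay blend for one normal-map pixel, followed by
# re-normalization. Channel values are 0-255; the base layer is the
# baked normal map and the blend layer is the painted "details" map.

def overlay(base, blend):
    b, s = base / 255.0, blend / 255.0
    out = 2 * b * s if b < 0.5 else 1 - 2 * (1 - b) * (1 - s)
    return out * 255.0

def blend_pixel(base_px, detail_px):
    # Overlay red and green; multiply blue.
    r = overlay(base_px[0], detail_px[0])
    g = overlay(base_px[1], detail_px[1])
    b = base_px[2] * detail_px[2] / 255.0
    # Re-normalize: decode to a vector, reset its length to 1, re-encode.
    v = [c / 255.0 * 2.0 - 1.0 for c in (r, g, b)]
    length = sum(c * c for c in v) ** 0.5
    v = [c / length for c in v]
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in v)

# Blend a flat detail pixel over a baked pixel; the result is unit length.
print(blend_pixel((150, 120, 240), (128, 128, 255)))
```

The final re-normalize step matters: blend modes operate on channels independently, so the blended vector rarely has a length of exactly 1 until it is normalized.&lt;br /&gt;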
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;br&amp;gt;How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;br&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However, editing normal maps by hand or blending multiple normal maps together can change those lengths. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc. To help you avoid this NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
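For clarity, here is what re-normalizing one pixel amounts to, sketched in Python (a generic illustration of the operation the Photoshop filters perform, assuming the common unsigned encoding; the sample color is made up):&lt;br /&gt;

```python
# Re-normalize one normal-map pixel: decode the 0-255 color to a
# vector, reset its length to 1, and encode it back.

def renormalize(color):
    v = [c / 255.0 * 2.0 - 1.0 for c in color]           # decode
    length = sum(c * c for c in v) ** 0.5
    v = [c / length for c in v]                           # length -> 1
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in v)

# A hand-edited pixel whose normal has drifted away from unit length:
print(renormalize((180, 140, 200)))
```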
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[:Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map then it also occludes diffuse lighting, which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, adjust the levels of the AO layer first so the darks only go as low as 128 gray, then set the AO layer to Darken mode. This will shorten the normals in the normalmap, causing the surface to receive less light in the darker areas. &lt;br /&gt;
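The Levels-plus-Darken step above amounts to a per-channel minimum against the remapped AO value. A rough Python sketch (an illustration of the layer math, not Photoshop's internals; the sample values are made up):&lt;br /&gt;

```python
# Bake AO into a normal map: remap the AO value so its darkest point
# is 128, then apply Photoshop's Darken mode (per-channel minimum).

def remap_ao(ao):
    # Levels adjustment: 0-255 AO compressed into the 128-255 range.
    return 128 + ao * 127 // 255

def darken(normal_px, ao):
    a = remap_ao(ao)
    return tuple(min(c, a) for c in normal_px)

# A flat normal in a fully occluded crevice: the blue channel drops,
# shortening the normal so the pixel receives less light. Unoccluded
# pixels pass through unchanged.
print(darken((128, 128, 255), 0))
print(darken((128, 128, 255), 255))
```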
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;BacklightingExample&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
[[Image:tree_front.jpg|thumb|Tree simulating subsurface scattering. &amp;lt;BR&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick].]]&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps, one using a regular tangent-space normal map, the other using the same normal map but with the blue channel inverted. This causes the diffuse map using the regular normal map to only get lit on the side facing the light (front view), while the diffuse map using the inverted normal map only gets lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, leaves in shadow receive no direct lighting, so their backsides do not show the inverted normal map; the fake subsurface-scattering effect only appears where light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched the distant trees to a model that uses a cheaper shader.&lt;br /&gt;
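Inverting the blue channel is a one-line edit, and the two-lobe effect can be checked with a generic clamped N-dot-L diffuse term. The Python sketch below is an illustration of the idea, not the exact shader used for the tree:&lt;br /&gt;

```python
# Flip a tangent-space normal to face the opposite side of the surface
# by inverting the blue channel, then show that a light behind the
# surface lights the flipped normal instead of the original.

def decode(color):
    return tuple(c / 255.0 * 2.0 - 1.0 for c in color)

def invert_blue(color):
    return (color[0], color[1], 255 - color[2])

def lambert(n, l):
    d = sum(n[i] * l[i] for i in range(3))
    return max(d, 0.0)   # clamped N-dot-L diffuse term

front = (128, 128, 255)           # flat tangent-space normal
back = invert_blue(front)         # points to the far side: (128, 128, 0)

light_behind = (0.0, 0.0, -1.0)   # light shining from behind the surface
print(lambert(decode(front), light_behind))  # front lobe gets nothing
print(lambert(decode(back), light_behind))   # back lobe is lit
```

Adding the two lobes' diffuse contributions together is what lets the leaf appear lit from either side.&lt;br /&gt;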
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;ShadersAndSeams&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
Xnormal generates normal maps that display accurately in its own viewer, and the SDK includes a method to write your own custom tangent-space generator for the tool.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
[[Image:max2010_normalmap_workarounds.png|thumb|Viewport methods in 3ds Max 2010. &amp;lt;BR&amp;gt; image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but do not render correctly in the realtime viewport with the 3ds Max shaders. Max is using a different [[#TangentBasis|tangent basis]] for each. This is readily apparent when creating non-organic hard surface normalmaps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport using a tangent-space map baked in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map then use [[#CBS|Nspace]] to convert it into a tangent-space map then load that in a DirectX material and use the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to generate normals correctly for realtime viewing, with the correct [[#TangentBasis|tangent basis]] and far fewer smoothing errors than 3ds Max.&lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]) '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager, then you can create them in the same way you create a Lambert, Phong etc. Switch OFF high quality rendering in the viewports to see them correctly too.&lt;br /&gt;
* If you want to use the software renderer, use mental ray instead of Maya's software renderer, because mental ray correctly interprets tangent-space normals. The Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot; the slider still works, if you don't mind having a uniform value for gloss. Spec maps work fine. Just use the same setup as you would for viewport rendering. You'll need to have your textures saved as TGAs or similar for mental ray to work. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;span id=&amp;quot;NormalMapCompression&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
See [[Normal Map Compression]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[:Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[:Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;&amp;lt;BR&amp;gt;&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated! [http://wiki.polycount.net {{http://boards.polycount.net/images/smilies/pcount/icons/smokin.gif}}]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[:Category:Texturing]] [[:Category:TextureTypes]] [[:Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:27:57Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* 3ds Max Shaders */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps can be made in either of two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space; the terms are interchangeable. World-space is essentially the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly-blue colors. Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader like with tangent-space maps. Also the three color channels contain very different data which doesn't compress well, creating many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
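Decoding a tangent-space pixel back into a normal vector is just a remapping from the 0-255 color range into the -1 to +1 vector range. A minimal Python sketch, assuming 8-bit channels and the common +X/+Y convention described above:&lt;br /&gt;
&lt;br /&gt;
```python
# Decode an 8-bit tangent-space normal map pixel (R, G, B) into a unit normal.
# Assumes the common convention: R = X (right), G = Y (up), B = Z (out).
def decode_normal(r, g, b):
    # Remap each channel from 0..255 into -1..+1
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    z = (b / 255.0) * 2.0 - 1.0
    # Renormalize to correct for 8-bit quantization error
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

# Neutral blue (128, 128, 255) decodes to a normal pointing almost
# exactly straight out of the surface (+Z).
print(decode_normal(128, 128, 255))
```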
&lt;br /&gt;
If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using a tangent-space normal map, the normal map shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this either change the shader, or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
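Inverting a channel is a simple value flip: each byte becomes 255 minus itself, which is exactly what an image editor's Invert command does per channel. A minimal sketch in plain Python:&lt;br /&gt;
&lt;br /&gt;
```python
# Invert one channel of a normal map to flip that axis's direction,
# e.g. converting a +Y (green up) map to -Y (green down) or vice versa.
def invert_channel(values):
    # Black pixels (0) become white (255) and white pixels become black.
    return [255 - v for v in values]

green = [0, 64, 128, 192, 255]
print(invert_channel(green))  # prints [255, 191, 127, 63, 0]
```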
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
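A swizzle for the DXT5_nm layout described above might look like the following sketch, assuming RGBA pixels; note that the exact filler values written into the unused red and blue channels vary by tool:&lt;br /&gt;
&lt;br /&gt;
```python
# Rearrange a tangent-space normal pixel into the DXT5_nm channel layout:
# the X axis moves into alpha, the Y axis stays in green, and the red and
# blue channels are left as filler (the filler values are tool-dependent).
def to_dxt5nm(r, g, b, a=255):
    # (R, G, B, A) input -> (filler, Y, filler, X) output
    return (255, g, 0, r)

print(to_dxt5nm(100, 180, 255))  # prints (255, 180, 0, 100)
```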
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates, except that it provides directionality across the surface; it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
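The world-to-tangent conversion the shader performs is a small matrix multiply built from the tangent (T), bitangent (B), and normal (N) vectors. A simplified sketch, assuming the basis is orthonormal so the inverse is just three dot products:&lt;br /&gt;
&lt;br /&gt;
```python
# Transform a world-space direction into tangent space using the
# per-vertex tangent (t), bitangent (b), and normal (n) vectors.
# Assumes t, b, n are orthonormal, so each tangent-space component
# is simply the dot product of the vector with one basis vector.
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def world_to_tangent(v, t, b, n):
    return (dot(v, t), dot(v, b), dot(v, n))

# With an identity basis (surface aligned to world axes), the light
# direction passes through unchanged.
t, b, n = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(world_to_tangent((0.0, 0.6, 0.8), t, b, n))  # prints (0.0, 0.6, 0.8)
```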
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use xNormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the V texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in the tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
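As a rough illustration of how a baker derives the tangent from the texture coordinates, here is a per-triangle version of the standard derivation (a sketch along the lines of the Eric Lengyel method linked above; real implementations also average these per vertex and orthogonalize them against the vertex normal):&lt;br /&gt;
&lt;br /&gt;
```python
# Compute a triangle's tangent direction from its positions and UVs.
# p0..p2 are 3D positions, uv0..uv2 are 2D texture coordinates.
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    # Position and UV deltas along two edges of the triangle
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    # Solve for the 3D direction in which U increases across the surface
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    return [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]

# For a triangle whose U axis runs along world +X, the tangent is +X.
tan = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0), (1, 0), (0, 1))
print(tan)  # prints [1.0, 0.0, 0.0]
```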
&lt;br /&gt;
When a triangle's vertex normals are pointing straight out, and a pixel in the normal map is neutral blue (128,128,255) this means that pixel's normal will be pointing straight out from the surface of the low-poly mesh. When that pixel normal is tilted towards the left or the right in the tangent coordinate space, it will get either more or less red color, depending on whether the normal map is set to store the X axis as either a positive or a negative value. Same goes for when the normal is tilted up or down in tangent space, it will either get more or less green color. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined together to create the final per-pixel surface normals.&lt;br /&gt;
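The normal-to-color mapping described above can be sketched in a few lines (assuming 8-bit channels and positive X/Y storage):&lt;br /&gt;
&lt;br /&gt;
```python
# Encode a unit tangent-space normal into an 8-bit RGB pixel.
# A normal pointing straight out of the surface maps to neutral blue.
def encode_normal(x, y, z):
    # Remap each component from -1..+1 into 0..255
    def to_byte(c):
        return int(round((c * 0.5 + 0.5) * 255.0))
    return (to_byte(x), to_byte(y), to_byte(z))

print(encode_normal(0.0, 0.0, 1.0))  # prints (128, 128, 255), neutral blue
```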
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh including a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can just collapse edge loops or cut in new edges to add/remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square, any UV bits outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, this will likely cause artifacts to appear in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
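The reason whole-unit offsets are safe is that texture sampling wraps: with repeat addressing, only the fractional part of a UV coordinate selects the texel. A minimal sketch:&lt;br /&gt;
&lt;br /&gt;
```python
# With wrap (repeat) texture addressing, a UV coordinate and the same
# coordinate offset by any whole number sample the identical texel.
def wrap(u):
    return u % 1.0  # keep only the fractional part

# 0.25, 1.25, and -0.75 all land on the same texel column.
print(wrap(0.25), wrap(1.25), wrap(-0.75))  # prints 0.25 0.25 0.25
```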
&lt;br /&gt;
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The tool Render To Texture will always bake whatever UVs are the highest along the W axis. However using W can be messy... it's generally hidden unless you purposefully look for it (bad for team work), doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details, and save UV space, which allows more detail in the normal map since the texture pixels are smaller on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. &lt;br /&gt;
&lt;br /&gt;
However, this is a mistake, because the vertex normals along the open edge will often bend towards the hole a bit; there are no faces on the other side to average the normals with. This will create a strong lighting seam in the normal map. &lt;br /&gt;
&lt;br /&gt;
It's typically best to use the complete mirrored model to bake the normal map, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
In Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3) their symmetrical models commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However it only works along horizontal or vertical UV seams, not across any angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals are used to control the direction a triangle will be lit from; if the normal is facing the light the triangle will be fully lit, if facing away from the light the triangle won't be lit. &lt;br /&gt;
&lt;br /&gt;
Each vertex however can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
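In code terms, a soft edge shares one averaged normal between the faces that meet at a vertex, while a hard edge keeps each face's own normal and splits the vertex. A simplified sketch using pre-computed face normals:&lt;br /&gt;
&lt;br /&gt;
```python
# A soft edge averages the normals of all faces meeting at a vertex;
# a hard edge would instead keep one separate normal per face.
def soft_vertex_normal(face_normals):
    # Sum the face normals component-wise, then renormalize the result
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    length = (s[0] ** 2 + s[1] ** 2 + s[2] ** 2) ** 0.5
    return [c / length for c in s]

# Two faces meeting at 90 degrees: the soft normal splits the difference,
# pointing 45 degrees between +Z and +X.
print(soft_vertex_normal([(0, 0, 1), (1, 0, 0)]))
```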
&lt;br /&gt;
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading for the outside of the car body. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; you must experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps the vertex normal problem goes away since you're no longer relying on the crude vertex normals of the mesh. An object-space normal map completely ignores vertex normals. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble rendering long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, the shading can be improved by editing the vertex normals so the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, others prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geometry, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the high-poly model, sometimes you'll need to make them more rounded than in real life to &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader is edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts by projecting a set distance out from the low-poly mesh, then sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the surface normal at that point and saves it in the normal map.&lt;br /&gt;
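The ray-casting step above can be sketched in a few lines of Python. This is an illustrative toy, not any particular baker's algorithm; the `bake_texel` helper and its single-texel interface are assumptions for the sketch, with the standard Möller–Trumbore routine used for the ray/triangle test:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns the hit distance t, or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direc, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return None                      # ray is parallel to the triangle
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direc, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

def bake_texel(point, normal, high_tris, distance):
    """Push out along the vertex normal, cast a ray back inward, and return
    the face normal of the nearest high-poly triangle the ray hits."""
    orig = tuple(p + n * distance for p, n in zip(point, normal))
    direc = tuple(-n for n in normal)
    best = None
    for v0, v1, v2 in high_tris:
        t = ray_triangle(orig, direc, v0, v1, v2)
        if t is not None and (best is None or t < best[0]):
            best = (t, normalize(cross(sub(v1, v0), sub(v2, v0))))
    return best[1] if best else None     # None = ray miss (background color)

# A flat low-poly texel under a slightly tilted high-poly triangle:
tri = ((-2, -2, 0.2), (2, -2, 0.2), (0, 2, 0.6))
n = bake_texel((0, 0, 0), (0, 0, 1), [tri], distance=1.0)
```

The sampled normal is the high-poly triangle's tilt, not the low-poly surface's; a texel whose ray misses every triangle returns None, which is where the background color and the need for edge padding come from.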
&lt;br /&gt;
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this tends to render much slower and takes more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at 2x the intended image size, then scale the normal map down to 1/2 size in a paint program like Photoshop. The reduction's pixel resampling adds the anti-aliasing for you in a very quick process. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map may cause pixelated artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
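The downscale-then-renormalize step is plain pixel math. A minimal sketch, assuming the common unsigned encoding where a channel value c maps to c/127.5 - 1; the helper names are made up for illustration:

```python
import math

def decode(c):
    """Map an 8-bit channel (0-255) to a normal component in [-1, 1]."""
    return c / 127.5 - 1.0

def encode(x):
    """Map a normal component in [-1, 1] back to an 8-bit channel."""
    return max(0, min(255, round((x + 1.0) * 127.5)))

def renormalize(v):
    l = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / l for x in v)

def average_2x2(block):
    """Average a 2x2 block of encoded pixels (what a 1/2 downscale does),
    then renormalize so the stored normal is unit length again."""
    avg = [sum(decode(px[i]) for px in block) / len(block) for i in range(3)]
    return tuple(encode(x) for x in renormalize(avg))

# Averaging two different slopes shortens the normal; renormalizing fixes it.
block = [(255, 128, 128), (255, 128, 128), (128, 128, 255), (128, 128, 255)]
pixel = average_2x2(block)
length = math.sqrt(sum(decode(c) ** 2 for c in pixel))
```

Without the renormalize step, the averaged pixel's decoded length would be about 0.71 rather than 1, which is exactly the kind of un-normalized data that speckles specular highlights.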
&lt;br /&gt;
3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding and re-do the padding later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, use either a specific lighting setup or a special material shader:&lt;br /&gt;
** The lighting setup is described in these tutorials:&lt;br /&gt;
*** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
*** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
** The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
*** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke].&lt;br /&gt;
*** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding |Edge Padding]], this will create shading seams on the UV borders.&lt;br /&gt;
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting it too small will cause problems at other places on the mesh where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, the differences between the low-poly mesh and the high-poly mesh will often create a wavy edge in the normal map. There are several ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions in which the rays are cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. The same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example, to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can be layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of the quad are moved around in certain shapes, the software's algorithm for polygon models tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape. It does this by switching the internal edge between its triangles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outwards along each vertex normal, then at the distance you set, a ray is cast back inwards. Wherever that ray intersects the high-poly mesh, it samples the normals from it. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inwards from each vertex. This ballooned-out mesh is the cage.&lt;br /&gt;
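The inflation itself amounts to pushing each vertex out along its averaged normal. A minimal sketch under that assumption (the function name and the uniform push distance are illustrative; real bakers also let you drag cage vertices individually):

```python
import math

def normalize(v):
    l = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / l for x in v)

def build_cage(verts, vert_normals, distance):
    """Inflate a copy of the low-poly mesh along its averaged vertex normals.
    Rays are then cast inward from each cage vertex toward the high-poly mesh,
    so the cage controls how far out the ray-casting starts."""
    return [tuple(p + n * distance for p, n in zip(v, normalize(nrm)))
            for v, nrm in zip(verts, vert_normals)]

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
normals = [(0.0, 0.0, 1.0), (0.0, 0.0, 2.0)]   # normalized inside build_cage
cage = build_cage(verts, normals, distance=0.25)
```

Because the cage uses one averaged normal per vertex, it stays continuous across hard edges, which is exactly why a cage-based raycast avoids the ray misses that split vertex normals cause.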
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all, it is just a texture, so you can clone, blur, copy, and blend all you want... as long as it looks good, of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way in helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you only convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. Often the best results are obtained by baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
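The heightmap-to-normal conversion those tools perform is essentially a slope calculation: difference the neighboring pixels and build a normal from the result. A minimal sketch under that assumption (the function names and the strength parameter are illustrative, not any specific tool's API):

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """Convert one heightmap texel to a tangent-space normal using central
    differences. Assumes black = low, white = high, values in 0..1."""
    dx = (height[y][x + 1] - height[y][x - 1]) * strength   # slope along x
    dy = (height[y + 1][x] - height[y - 1][x]) * strength   # slope along y
    n = (-dx, -dy, 1.0)                                     # steeper = more tilt
    l = math.sqrt(sum(c * c for c in n))
    return tuple(c / l for c in n)

def encode(n):
    """Pack a unit normal into the usual unsigned RGB range."""
    return tuple(round((c + 1.0) * 127.5) for c in n)

flat = [[0.5] * 3 for _ in range(3)]         # constant height -> flat normal
ramp = [[0.0, 0.5, 1.0] for _ in range(3)]   # rising toward +x -> tilted normal
```

This also shows why painted diffuse textures convert badly: the filter reads every brightness change as a slope, so shadows and color variation become bumps that were never in the surface.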
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters or significant environment assets, the high-poly route is the best one, but for less significant environment surfaces, working from a heightmap-based texture will provide a good-enough result for a much smaller commitment of time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflection ingredient. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
Logically, 127 seems like it would be the halfway point between 0 and 255. However, 128 is the value that actually works in practice. When a test is done comparing (127,127,255) versus (128,128,255), it becomes obvious that 127 creates a slightly bent normal, while 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
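A little arithmetic shows the difference. Decode conventions vary by engine, but two common unsigned mappings are c/127.5 - 1 and (c - 128)/127; this comparison of the two is a sketch, not any particular engine's shader code:

```python
def decode_midpoint(c):
    """Common unsigned mapping: 0..255 -> [-1, 1], true midpoint at 127.5."""
    return c / 127.5 - 1.0

def decode_128(c):
    """Alternate mapping used by some pipelines: 128 decodes to exactly flat."""
    return (c - 128) / 127.0

# 127 always lands on the negative (bent) side. 128 is exactly flat under
# the second convention, and only 1/255 away from flat under the first.
samples = {c: (decode_midpoint(c), decode_128(c)) for c in (127, 128)}
```

So under either convention 128 is at least as close to a flat normal as 127, and under the (c - 128)/127 style mapping it is exactly flat, which matches the seam test in the image above.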
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail like wrinkles, cracks, and the like. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below. &lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples the default values were used for CrazyBump (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below was [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
| [[Image:NormalMap$nrmlmap_blending_methods_Maps.png]]&lt;br /&gt;
|-&lt;br /&gt;
| The blended normal maps.&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.ericchadwick.com Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This probably does the best job of preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normalmaps using multiple layers (Note: to work with the Overlay blend mode each layer's Output Level should be 128 instead of 255, you can use the Levels tool for this).&lt;br /&gt;
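Method 3 above is easy to express as channel math: Photoshop's Overlay for the red and green channels, Multiply for blue, then re-normalize. A per-pixel sketch of that arithmetic, with channels as 0..1 floats (the helper names are made up for illustration):

```python
import math

def overlay(a, b):
    """Photoshop Overlay blend for one channel (values in 0..1)."""
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

def blend_normal_pixels(base, detail):
    """Overlay the red/green channels, multiply the blue channel, then
    renormalize the decoded vector (Paul Tosca's blending method)."""
    r = overlay(base[0], detail[0])
    g = overlay(base[1], detail[1])
    b = base[2] * detail[2]
    v = (r * 2 - 1, g * 2 - 1, b * 2 - 1)          # decode to [-1, 1]
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    v = tuple(c / l for c in v)
    return tuple((c + 1) / 2 for c in v)           # encode back to 0..1

flat_detail = (0.5, 0.5, 1.0)   # a flat detail layer leaves the base alone
base = (0.5, 0.5, 1.0)
```

A useful sanity check falls out of the Overlay formula: blending with a flat (0.5, 0.5, 1.0) detail layer returns the base pixel unchanged, while any detail channel away from 0.5 tilts the result in that direction.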
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt; How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However, if you edit normal maps by hand or blend multiple normal maps together, those lengths can change. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc. To help you avoid this NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
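This on-the-fly reconstruction can be sketched as follows (illustrative pseudo-shader math in Python, not engine code); note how it forces the normal to unit length, discarding any custom lengths:&lt;br /&gt;

```python
import math

def reconstruct_z(x, y):
    """Recover the discarded blue-channel component from X and Y (-1..1 floats).

    Assumes the normal is unit length, which is exactly why any custom
    (shortened) normal lengths baked into the map are lost.
    """
    z_squared = 1.0 - x * x - y * y
    return math.sqrt(max(z_squared, 0.0))  # clamp guards against bad input
```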
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map then it also occludes diffuse lighting which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, adjust the levels of the AO layer first so the darks only go as low as 128 gray, then set the AO layer to Darken mode. This will shorten the normals in the normalmap, causing the surface to receive less light in the darker areas. &lt;br /&gt;
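Per pixel, the levels-then-Darken step works out like this (a hypothetical sketch, assuming 8-bit values; Photoshop's Darken mode is a per-channel minimum):&lt;br /&gt;

```python
def bake_ao_into_normal(normal_rgb, ao_value):
    """Shorten a normal-map pixel by an AO value (both in 0-255).

    First remap the AO so its darkest value bottoms out at 128 gray
    (Levels with Output Level 128), then apply Darken mode: the
    per-channel minimum of the AO layer and the normal map.
    """
    ao_remapped = 128 + ao_value * 127 // 255  # darks only go as low as 128
    return tuple(min(c, ao_remapped) for c in normal_rgb)
```

As the surrounding text notes, this only has an effect when the shader does not re-normalize the map.&lt;br /&gt;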
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;BacklightingExample&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$tree_front.jpg]] &lt;br /&gt;
|-&lt;br /&gt;
| Tree simulating subsurface scattering (front view).&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps: one uses a regular tangent-space normal map, the other uses the same normal map with its blue channel inverted. The diffuse map using the regular normal map is only lit on the side facing the light (front view), while the diffuse map using the inverted normal map is only lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, leaves in shadow receive no direct lighting, so their backsides do not show the inverted normal map; the fake subsurface-scattering effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched the distant trees to a model that uses a cheaper shader.&lt;br /&gt;
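The core of the trick can be sketched as two Lambert terms, one of them using a Z-flipped normal (illustrative Python, not the actual shader; the function and parameter names are made up):&lt;br /&gt;

```python
def backlit_diffuse(normal, light_dir, front_color, back_color):
    """Combine two diffuse terms: one lit by the normal as baked,
    one lit by the same normal with its Z (blue channel) flipped.
    All vectors are unit-length (x, y, z) tuples; colors are RGB tuples."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    front = max(nx * lx + ny * ly + nz * lz, 0.0)  # regular normal map
    back = max(nx * lx + ny * ly - nz * lz, 0.0)   # blue channel inverted
    return tuple(f * front + b * back
                 for f, b in zip(front_color, back_color))
```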
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;ShadersAndSeams&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
xNormal generates normals that display accurately in its own viewer, and the SDK includes a method to write your own custom tangent-space generator for the tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
[[Image:max2010_normalmap_workarounds.png|thumb|Viewport methods in 3ds Max 2010. &amp;lt;BR&amp;gt; image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but do not render correctly in the realtime viewport with the 3ds Max shaders. Max is using a different [[#TangentBasis|tangent basis]] for each. This is readily apparent when creating non-organic hard surface normalmaps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport by baking a tangent-space map in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map, convert it into a tangent-space map with [[#CBS|NSpace]], then load the result in a DirectX material using the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to generate normals correctly for realtime viewing, using the correct [[#TangentBasis|tangent basis]], with far fewer smoothing errors than 3ds Max. &lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]) '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager, then you can create them in the same way you create a Lambert, Phong etc. Switch OFF high quality rendering in the viewports to see them correctly too.&lt;br /&gt;
* If you want to use the software renderer, use mental ray instead of Maya's software renderer, because mental ray correctly interprets tangent-space normals; the Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot; the slider still works if you don't mind a uniform gloss value, and spec maps work fine. Just use the same setup as you would for viewport rendering. You'll need your textures saved as TGAs or similar for mental ray to work. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;span id=&amp;quot;NormalMapCompression&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
See [[Normal Map Compression]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;br&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;br&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated!&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[:Category:Texturing]] [[:Category:TextureTypes]] [[:Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:24:48Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: fixed category links at bottom&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps can be made in either of two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space. World-space is essentially the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly-blue colors. Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader like with tangent-space maps. Also the three color channels contain very different data which doesn't compress well, creating many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using a tangent-space normal map, the shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this, either change the shader, or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
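Inverting a channel is just 255 minus each value; a minimal sketch (assuming 8-bit RGB tuples):&lt;br /&gt;

```python
def invert_channel(pixels, channel):
    """Invert one channel (0=R, 1=G, 2=B) of a list of RGB tuples,
    e.g. flip green when a map was baked with the opposite Y convention."""
    out = []
    for px in pixels:
        px = list(px)
        px[channel] = 255 - px[channel]
        out.append(tuple(px))
    return out
```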
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
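The swizzle itself is trivial; a hypothetical sketch of the DXT5_nm layout described above (exact conventions vary by engine, so treat the empty-channel values as an assumption):&lt;br /&gt;

```python
def to_dxt5nm(r, g, b):
    """Rearrange a tangent-space normal-map pixel into a DXT5_nm-style
    RGBA layout: X moves into alpha, Y stays in green, red and blue are
    left empty. Z is discarded and rebuilt in the shader."""
    return (0, g, 0, r)  # (R, G, B, A) with X in A, Y in G
```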
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates except it provides directionality across the surface; together these vectors form a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
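The world-to-tangent conversion is just three dot products against the tangent-basis vectors; a simplified per-pixel sketch (illustrative Python, assuming unit-length vectors and ignoring interpolation across the triangle):&lt;br /&gt;

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def light_pixel(world_light, tangent, bitangent, normal, map_normal):
    """Transform a world-space light direction into tangent space using
    the per-vertex TBN basis, then compare it with the map's normal."""
    light_ts = (dot(world_light, tangent),
                dot(world_light, bitangent),
                dot(world_light, normal))
    return max(dot(light_ts, map_normal), 0.0)  # Lambert diffuse term
```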
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent-basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use xNormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the V texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in the tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
&lt;br /&gt;
When a triangle's vertex normals are pointing straight out, and a pixel in the normal map is neutral blue (128,128,255) this means that pixel's normal will be pointing straight out from the surface of the low-poly mesh. When that pixel normal is tilted towards the left or the right in the tangent coordinate space, it will get either more or less red color, depending on whether the normal map is set to store the X axis as either a positive or a negative value. Same goes for when the normal is tilted up or down in tangent space, it will either get more or less green color. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined together to create the final per-pixel surface normals.&lt;br /&gt;
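The color-to-direction mapping described above can be sketched as a simple encode step (assuming the common +X/+Y convention, where 0 maps to -1 and 255 to +1):&lt;br /&gt;

```python
def encode_normal(x, y, z):
    """Encode a unit tangent-space vector into RGB: tilting the normal
    left/right shifts red, tilting it up/down shifts green."""
    return tuple(round((c + 1.0) * 127.5) for c in (x, y, z))
```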
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh with a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can collapse edge loops or cut in new edges to add/remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square; any UV bits outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, artifacts will likely appear in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
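&lt;br /&gt;
Why a whole-number offset is harmless: tiling texture samplers effectively use only the fractional part of a UV coordinate, so shifting a shell by exactly 1 unit changes nothing about how it's mapped. A small illustration (the helper names are made up for this sketch):&lt;br /&gt;
&lt;br /&gt;
```python
def offset_shell(uvs, du=1.0, dv=0.0):
    # Move a whole UV shell; use whole-number offsets before baking.
    return [(u + du, v + dv) for (u, v) in uvs]

def sample_coord(u, v):
    # What a tiling (wrap-mode) sampler effectively does with the coordinate.
    return (u % 1.0, v % 1.0)

shell = [(0.25, 0.5), (0.75, 0.5)]
moved = offset_shell(shell)  # mirrored shell shifted exactly 1 unit along U
```
&lt;br /&gt;
After the shift, every moved coordinate still samples the same texel as before, which is why the shells can stay offset after the bake.&lt;br /&gt;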
&lt;br /&gt;
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The Render To Texture tool will always bake whatever UVs are highest along the W axis. However, using W can be messy... it's generally hidden unless you purposefully look for it (bad for teamwork), doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details and save UV space, which allows more detail in the normal map since each texture pixel covers a smaller area on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. However, this is a mistake: the vertex normals along the open edge will often bend towards the hole a bit, because there are no faces on the other side to average the normals with. This creates a strong lighting seam in the normal map. &lt;br /&gt;
&lt;br /&gt;
It's typically best to bake the normal map from the complete mirrored model, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
Symmetrical models in Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3) commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However, it only works along horizontal or vertical UV seams, not across angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals control the direction a triangle is lit from: if the normal faces the light the triangle is fully lit, and if it faces away the triangle isn't lit. &lt;br /&gt;
&lt;br /&gt;
However, each vertex can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, while Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
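&lt;br /&gt;
The splitting behavior can be sketched as follows: faces meeting at a vertex that share a smoothing group average into one soft normal, while each distinct group gets its own normal, producing a hard edge. This is illustrative code, not any particular app's implementation:&lt;br /&gt;
&lt;br /&gt;
```python
from collections import defaultdict

def normals_at_vertex(faces):
    """faces: (smoothing_group, face_normal) pairs meeting at one vertex.
    Returns one averaged, unit-length normal per smoothing group."""
    groups = defaultdict(list)
    for group, normal in faces:
        groups[group].append(normal)
    result = {}
    for group, normals in groups.items():
        # Sum the face normals in this group, then renormalize the sum.
        x = sum(n[0] for n in normals)
        y = sum(n[1] for n in normals)
        z = sum(n[2] for n in normals)
        length = (x * x + y * y + z * z) ** 0.5
        result[group] = (x / length, y / length, z / length)
    return result
```
&lt;br /&gt;
Two faces in the same group yield one shared (soft) normal at the vertex; putting them in different groups yields two normals, and the renderer duplicates the vertex to store them, which is why hard edges raise the vertex count.&lt;br /&gt;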
&lt;br /&gt;
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading of the car body's exterior. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps the vertex normal problem goes away since you're no longer relying on the crude vertex normals of the mesh. An object-space normal map completely ignores vertex normals. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However, bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble with long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, the shading can be improved by editing the vertex normals so the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, other artists prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geo, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the high-poly, sometimes you'll need to make them more rounded than in real life to &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader is edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts a certain numerical distance out from the low-poly mesh, and casts rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records that surface's normal and saves it in the normal map.&lt;br /&gt;
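&lt;br /&gt;
As a toy illustration of the idea (not any real baker's code), consider baking a flat low-poly plane against a high-poly surface represented by a 1D height function: each texel samples the high-res slope at its UV position and encodes the resulting normal as RGB, assuming the +X/+Y convention and the usual 8-bit encoding:&lt;br /&gt;
&lt;br /&gt;
```python
import math

def bake_strip(height, n_texels, eps=1e-3):
    """Bake one row of tangent-space normals from a height function
    standing in for the high-poly surface."""
    pixels = []
    for i in range(n_texels):
        u = (i + 0.5) / n_texels            # texel center in 0-1 UV space
        # Central-difference slope of the "high-poly" surface at this texel.
        slope = (height(u + eps) - height(u - eps)) / (2 * eps)
        length = math.hypot(slope, 1.0)     # normalize the vector (-slope, 1)
        x, z = -slope / length, 1.0 / length
        # Encode [-1, 1] components into 8-bit channels; Y stays neutral here.
        pixels.append((int(round((x * 0.5 + 0.5) * 255)), 128,
                       int(round((z * 0.5 + 0.5) * 255))))
    return pixels

# A perfectly flat high-poly surface bakes to neutral blue everywhere:
flat = bake_strip(lambda u: 0.0, 4)
```
&lt;br /&gt;
Real bakers do this with ray-casts against triangles in 3D rather than a height function, but the core loop is the same: for each texel, find the high-poly surface, take its normal, encode it as a color.&lt;br /&gt;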
&lt;br /&gt;
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix any jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this renders much more slowly, and takes more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at 2x the intended image size, then scale the normal map down 50% in a paint program like Photoshop. The pixel resampling during the reduction adds anti-aliasing in a very quick process. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map can cause pixelated artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
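&lt;br /&gt;
Downscaling averages neighboring vectors, which leaves them shorter than unit length; re-normalizing simply rescales each pixel's vector back to length 1 and re-encodes it. A sketch of that step (illustrative only, assuming the standard 8-bit encoding):&lt;br /&gt;
&lt;br /&gt;
```python
def renormalize(pixels):
    out = []
    for r, g, b in pixels:
        # Decode 8-bit channels to [-1, 1] components.
        x, y, z = (r / 255 * 2 - 1, g / 255 * 2 - 1, b / 255 * 2 - 1)
        length = (x * x + y * y + z * z) ** 0.5 or 1.0  # guard zero vectors
        # Rescale to unit length, then re-encode to 8 bits.
        out.append(tuple(int(round((c / length + 1) / 2 * 255))
                         for c in (x, y, z)))
    return out
```
&lt;br /&gt;
For example, a pixel that averaging has darkened to (128,128,192) restores to neutral blue, since only its length changed, not its direction.&lt;br /&gt;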
&lt;br /&gt;
3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding for the supersampled bake and re-do the padding later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, either use a specific lighting setup or use a special material shader:&lt;br /&gt;
* The lighting setup is described in these tutorials:&lt;br /&gt;
** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
* The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke].&lt;br /&gt;
** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding |Edge Padding]], this will create shading seams on the UV borders.&lt;br /&gt;
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like in between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting the ray distance too small will lead to problems at other places on the mesh where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, the differences between the low-poly mesh and the high-poly mesh will often create a wavy edge in the normal map. There are a couple of ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions the rays will be cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. The same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example, to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can be layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of the quad are moved around in certain shapes, the software's algorithm for polygon models tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape. It does this by switching the internal edge between its triangles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outwards along each vertex normal, then at the distance you set a ray is cast back inwards. Wherever that ray intersects the high-poly mesh, it will sample the normals from it. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inwards from each vertex. This ballooned-out mesh is the cage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
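The difference can be sketched with a little vector math. The helper functions below are illustrative only (hypothetical names, not from any baking tool's SDK), assuming the Max and Maya behaviors described above:&lt;br /&gt;

```python
# Illustrative sketch of how a cage affects raycasting; the function
# names are hypothetical, not from any baking tool's SDK.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def max_style_ray(low_vertex, cage_vertex):
    # Max style: the cage controls origin AND direction.
    # The ray starts at the cage vertex, aimed back at the low-poly vertex.
    origin = cage_vertex
    direction = normalize(tuple(l - c for l, c in zip(low_vertex, cage_vertex)))
    return origin, direction

def maya_style_ray(low_vertex, vertex_normal, cage_vertex):
    # Maya style: the cage only sets how far out the ray starts;
    # the direction is always the inverted vertex normal.
    dist = sum((c - l) ** 2 for l, c in zip(low_vertex, cage_vertex)) ** 0.5
    origin = tuple(l + n * dist for l, n in zip(low_vertex, vertex_normal))
    direction = tuple(-n for n in vertex_normal)
    return origin, direction
```

Moving a cage vertex sideways changes the ray direction in the Max-style version, but only the start distance in the Maya-style version.&lt;br /&gt;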
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all, it is just a texture, so you can clone, blur, copy, and blend all you want... as long as it looks good, of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way in helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you only convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. Often the best results are obtained by baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters or significant environment assets the high-poly route is usually the best one, but for less significant environment surfaces working from a heightmap-based texture will provide a good-enough result for a much smaller commitment of time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
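As a rough sketch of what these converters do under the hood (assuming the heightmap convention above, and using simple central differences rather than the Sobel kernels and extra options real tools typically offer):&lt;br /&gt;

```python
# Minimal sketch of a height-to-normals conversion (illustrative only).
# height: 2D list of values in 0..1, where black = low and white = high.

def height_to_normal(height, strength=1.0):
    h = len(height)
    w = len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope from neighboring samples (clamped at the borders).
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            # Pack the signed unit vector into unsigned 0..255 RGB.
            row.append(tuple(int(round((c / length * 0.5 + 0.5) * 255))
                             for c in (nx, ny, nz)))
        normals.append(row)
    return normals
```

A perfectly flat heightmap comes out as the flat normal color (128,128,255), and a slope in +X pushes the red channel away from 128, which is why value ranges in the source image matter so much.&lt;br /&gt;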
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflective component. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
In a purely logical way, 127 seems like it would be the halfway point between 0 and 255. However 128 is the color that actually works in practice. When a test is done comparing (127,127,255) versus (128,128,255) it becomes obvious that 127 creates a slightly bent normal, and 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
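The unsigned round trip can be sketched in a few lines of Python (assuming the common c/255*2-1 decode; individual engines may differ in the exact mapping):&lt;br /&gt;

```python
# Sketch of the usual unsigned normal map encode/decode round trip.

def decode(c):
    # Unsigned 0..255 channel to a signed -1..1 component.
    return c / 255.0 * 2.0 - 1.0

def encode(n):
    # Signed component back to an unsigned channel, as a baker would.
    return int(round((n * 0.5 + 0.5) * 255.0))
```

A perfectly flat component (0.0) encodes to channel value 128, and decode(127) comes out slightly negative while decode(128) comes out slightly positive, so filling a mirrored half with 127 flips the sign of the normal relative to the 128 the baker wrote, producing a visible seam.&lt;br /&gt;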
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail such as wrinkles and cracks. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below. &lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples the default values were used for CrazyBump (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below was [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
| [[Image:NormalMap$nrmlmap_blending_methods_Maps.png]]&lt;br /&gt;
|-&lt;br /&gt;
| The blended normal maps.&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.ericchadwick.com Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This does probably the best job at preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normalmaps using multiple layers (Note: to work with the Overlay blend mode each layer's Output Level should be 128 instead of 255, you can use the Levels tool for this).&lt;br /&gt;
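For illustration, method 3 above (Overlay on the red and green channels, Multiply on blue) can be sketched per pixel. This is a simplified model of the Photoshop blend-mode math, assuming 8-bit 0..255 channels:&lt;br /&gt;

```python
# Per-pixel sketch of blending method 3: Overlay on red and green,
# Multiply on blue (simplified Photoshop blend-mode math, 0..255).

def overlay(base, detail):
    b, d = base / 255.0, detail / 255.0
    if b < 0.5:
        out = 2.0 * b * d
    else:
        out = 1.0 - 2.0 * (1.0 - b) * (1.0 - d)
    return out * 255.0

def multiply(base, detail):
    return base * detail / 255.0

def blend_pixel(base_rgb, detail_rgb):
    r = overlay(base_rgb[0], detail_rgb[0])
    g = overlay(base_rgb[1], detail_rgb[1])
    b = multiply(base_rgb[2], detail_rgb[2])
    return tuple(int(round(c)) for c in (r, g, b))
```

A flat detail pixel (128,128,255) leaves the base essentially unchanged, which is why flat areas of the details layer are harmless; after blending a whole map this way, re-normalize it as described in the Re-normalizing section.&lt;br /&gt;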
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt; How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However if you edit normal maps by hand or if you blend multiple normal maps together this can cause those lengths to change. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc. To help you avoid this NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
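Per pixel, re-normalizing amounts to decoding the color to a vector, rescaling it to length 1, and re-encoding it. A minimal sketch (assuming the common unsigned c/255*2-1 decode; this is not the actual filter code):&lt;br /&gt;

```python
# Sketch of what a per-pixel normalize filter does (illustrative only).

def renormalize_pixel(rgb):
    # Decode unsigned 0..255 channels to a signed vector.
    v = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = sum(c * c for c in v) ** 0.5
    if length == 0.0:
        v = [0.0, 0.0, 1.0]  # degenerate pixel: point it straight up
    else:
        v = [c / length for c in v]
    # Re-encode to unsigned channels.
    return tuple(int(round((c * 0.5 + 0.5) * 255.0)) for c in v)
```

A flat pixel (128,128,255) survives unchanged, while a pixel whose normal was shortened by editing (say, a darkened blue channel) gets its length restored to 1.&lt;br /&gt;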
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[:Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map, then it also occludes diffuse lighting, which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, adjust the levels of the AO layer first so the darks only go as low as 128 gray, then set the AO layer to Darken mode. This will shorten the normals in the normalmap, causing the surface to receive less light in the darker areas. &lt;br /&gt;
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
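The Darken-mode trick can be sketched per pixel as follows (a simplified model of the Photoshop layer math, assuming 8-bit channels):&lt;br /&gt;

```python
# Sketch of the AO-into-normal-map trick: level the AO so black
# becomes 128 gray, then apply it in Darken mode (per-channel min).

def level_ao(ao):
    # Remap AO from 0..255 into 128..255 so it can only shorten normals.
    return 128 + ao * 127 // 255

def darken(normal_rgb, ao):
    a = level_ao(ao)
    # Darken mode keeps the smaller of the two values per channel.
    return tuple(min(c, a) for c in normal_rgb)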
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;&amp;lt;Anchor([[BacklightingExample]])&amp;gt;&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$tree_front.jpg]] &lt;br /&gt;
|-&lt;br /&gt;
| Tree simulating subsurface scattering (front view).&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps, one using a regular tangent-space normal map, the other using the same normal map but with the blue channel inverted. This causes the diffuse map using the regular normal map to only get lit on the side facing the light (front view), while the diffuse map using the inverted normal map only gets lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, the leaves in shadow do not receive direct lighting, which means their backsides do not show the inverted normal map, so the fake subsurface scatter effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched the distant trees to a model that uses a cheaper shader.&lt;br /&gt;
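For reference, the blue-channel flip itself is trivial, sketched here per pixel (assuming an 8-bit tangent-space map):&lt;br /&gt;

```python
# Flipping the blue channel points the normals out the back of the
# surface, for the fake backlighting / subsurface scattering pass.

def invert_blue(rgb):
    r, g, b = rgb
    return (r, g, 255 - b)

def invert_blue_map(pixels):
    # pixels: 2D list of (r, g, b) tuples.
    return [[invert_blue(p) for p in row] for row in pixels]
```

A flat pixel (128,128,255) becomes (128,128,0), i.e. its decoded Z component flips sign, so the pixel lights up when the light is behind the surface.&lt;br /&gt;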
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;&amp;lt;Anchor([[ShadersAndSeams]])&amp;gt;&amp;gt;&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
Maps baked in Xnormal display accurately in Xnormal's own viewer, and the SDK includes a method to write your own custom tangent space generator for the tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but do not render correctly in the realtime viewport with the 3ds Max shaders. Max is using a different [[#TangentBasis|tangent basis]] for each. This is readily apparent when creating non-organic hard surface normalmaps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport using a tangent-space map baked in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map then use [[#CBS|Nspace]] to convert it into a tangent-space map then load that in a DirectX material and use the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$max2010_normalmap_workarounds_thumb.png]] &lt;br /&gt;
|-&lt;br /&gt;
| Viewport methods in 3ds Max 2010.&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;[[attachment:max2010_normalmap_workarounds.png|Actual size]]&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to generate correct normals for realtime viewing, using the correct [[#TangentBasis|tangent basis]], with far fewer smoothing errors than 3ds Max. &lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]) '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager, then you can create them in the same way you create a Lambert, Phong etc. Switch OFF high quality rendering in the viewports to see them correctly too.&lt;br /&gt;
* If you want to use the software renderer, use mental ray instead of Maya's software renderer because mental ray correctly interprets tangent space normals. The Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot. The slider still works though, if you don't mind having a uniform value for gloss. Spec maps work fine though. Just use the same set up as you would for viewport rendering. You'll need to have your textures saved as TGAs or similar for mental ray to work though. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;&amp;lt;Anchor([[NormalMapCompression]])&amp;gt;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
See [[Normal Map Compression]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;&amp;lt;BR&amp;gt;&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated! [http://wiki.polycount.net {{http://boards.polycount.net/images/smilies/pcount/icons/smokin.gif}}]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[:Category:Texturing]] [[:Category:TextureTypes]] [[:Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_Map_Compression</id>
		<title>Normal Map Compression</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_Map_Compression"/>
				<updated>2014-11-29T08:24:24Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Normal maps can take up a lot of memory. Compression can reduce the size of a map to 1/4 of its uncompressed size, which means you can either increase the resolution or use more maps.&lt;br /&gt;
&lt;br /&gt;
Usually the compression method is to throw away the Blue channel, because it can be re-computed at minimal cost in the shader code. Then the bitmap only has to store two color channels, instead of four (red, green, blue, and alpha).&lt;br /&gt;
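The reconstruction the shader performs can be sketched in a few lines. Because a normal is unit length, Z follows directly from X and Y. This is a hypothetical Python helper, assuming 8-bit channels that each store an axis mapped from the -1 to 1 range:

```python
import math

def reconstruct_z(r, g):
    """Rebuild the blue (Z) channel from the stored X/Y channels.

    r, g are 8-bit values (0-255) encoding X and Y in [-1, 1].
    Returns the 8-bit blue value. Assumes a unit-length normal.
    """
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    # Tangent-space Z always points away from the surface, so it's never negative.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return int((z * 0.5 + 0.5) * 255.0 + 0.5)

# A neutral pixel (128, 128) decodes back to a normal pointing straight out:
# reconstruct_z(128, 128) → 255
```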
&lt;br /&gt;
* The article [http://developer.download.nvidia.com/whitepapers/2008/real-time-normal-map-dxt-compression.pdf Real-Time Normal Map DXT Compression] (PDF) from [http://www.idsoftware.com/ id Software] and [http://developer.nvidia.com NVIDIA] is an excellent introduction to compression.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DXT5nm Compression ==&lt;br /&gt;
DXT5nm is the same file format as DXT5 except before compression the red channel is moved into the alpha channel, the green channel is left as-is, and the red and blue channels are blanked with the same solid color. This re-arranging of the normal map axes is called ''swizzling''.&lt;br /&gt;
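The swizzle itself is a simple per-pixel re-arrangement. Here is a hypothetical sketch (the choice of white as the blanking color is an assumption; the key point is that red and blue get the same solid value):

```python
def swizzle_dxt5nm(pixel):
    """Re-arrange an (R, G, B, A) normal-map pixel for DXT5nm.

    X (red) moves into alpha, Y (green) stays put, and red/blue are
    blanked with the same solid color so the RGB block compresses cleanly.
    """
    r, g, b, a = pixel
    return (255, g, 255, r)  # R and B blanked, Y kept in green, X stored in alpha

# swizzle_dxt5nm((200, 100, 50, 255)) → (255, 100, 255, 200)
```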
&lt;br /&gt;
The Green and Alpha channels are used because in the DXT format they are compressed using somewhat higher bit depths than the Red and Blue channels. Red and Blue have to be filled with the same solid color because DXT uses a compression system that compares differences between the three color channels. If you try to store some kind of texture in Red and/or Blue (specular power, height map, etc.) then the compressor will create more compression artifacts because it has to compare all three channels.&lt;br /&gt;
&lt;br /&gt;
There are some options in the NVIDIA DXT compressor that help reduce the artifacts if you want to add texture to the Red or Blue channels. The artifacts will be greater than if you keep Red and Blue empty, but it might be a tradeoff worth making. Some notes about this are on the [http://developer.nvidia.com/forums/index.php?showtopic=1366 NVIDIA Developer Forums].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DXT1 Compression ==&lt;br /&gt;
DXT1 is also used sometimes for tangent-space normal maps, because it is half the size of a DXT5. The downside, though, is that it causes many more compression artifacts; so many that most people end up not using it. &lt;br /&gt;
&lt;br /&gt;
* The blog post [http://realtimecollisiondetection.net/blog/?p=28#more-28 I like spilled beans!] by [http://realtimecollisiondetection.net/blog/?page_id=2 Christer Ericson] has a section about Capcom's clever use of DXT1 and DXT5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 3Dc Compression ==&lt;br /&gt;
3Dc compression is also known as BC5 in DirectX 10. It works similarly to DXT5nm, because it only stores the X and Y channels. The difference is that it stores both the same way as the DXT5 Alpha channel, which uses a slightly higher bit depth than DXT5nm's Green channel. 3Dc yields the best results of any listed algorithm for tangent-space normal map compression, and requires no extra processing time or unique hardware. See [[3Dc]] for more information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[A8L8]] Compression ==&lt;br /&gt;
The DDS format A8L8 isn't actually compressed; it's just two 8-bit grayscale channels (256 grays each). It does save you from having to store all three color channels, though your shader has to recompute the blue channel for it to work. However, A8L8 does not actually save any space in texture memory; it is typically converted to a four-channel 32-bit texture when it's sent to the card. This format really only helps save disk space.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Normal Map]] [[:Category:Texturing]] [[:Category:TextureTypes]] [[:Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:23:10Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Normal Map Compression */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps can be made in either of two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space; they all mean the same thing. World-space is basically the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly blue colors. Objects can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader like it can with tangent-space maps. Also, the three color channels contain very different data, which doesn't compress well, creating many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
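The mapping between a normal vector and its RGB color can be sketched as a simple range remap from -1..1 to 0..255 per axis. This is a hypothetical Python helper, not any particular engine's implementation:

```python
def encode_normal(n):
    """Map a unit normal (x, y, z), each axis in [-1, 1], to an 8-bit RGB triple."""
    return tuple(int((c * 0.5 + 0.5) * 255.0 + 0.5) for c in n)

def decode_normal(rgb):
    """Map an 8-bit RGB triple back to per-axis values in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# A normal pointing straight out of the surface, (0, 0, 1),
# encodes to the familiar neutral blue:
# encode_normal((0.0, 0.0, 1.0)) → (128, 128, 255)
```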
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you see lighting coming from the wrong angle when you're looking at your normal-mapped model, and the model is using a tangent-space normal map, the normal map shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this either change the shader, or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
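Inverting a channel in an image editor amounts to flipping each 8-bit value around its midpoint, which reverses that axis of every stored normal. A minimal sketch:

```python
def invert_channel(channel):
    """Flip a normal map channel: black pixels become white and vice versa,
    reversing the direction of that axis for every stored normal."""
    return bytes(255 - v for v in channel)

# invert_channel(bytes([0, 128, 255])) → bytes([255, 127, 0])
```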
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates except it provides directionality across the surface, it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
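The world-to-tangent conversion amounts to projecting the light direction onto the three tangent-basis axes (often called a TBN transform). This sketch is a simplified illustration of the idea, assuming orthonormal basis vectors:

```python
def to_tangent_space(light_dir, tangent, bitangent, normal):
    """Transform a world-space light direction into tangent space by
    projecting it onto the tangent, bitangent, and normal axes."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(light_dir, tangent),
            dot(light_dir, bitangent),
            dot(light_dir, normal))

# With an axis-aligned basis the transform is the identity:
# to_tangent_space((0, 0, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)) → (0, 0, 1)
```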
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use xNormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the V texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in the tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
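The "derived in code" step for the bitangent is usually a cross product of the other two vectors. A hypothetical sketch (the `sign` parameter for mirrored UVs is an illustrative convention, and exact formulas vary between the tangent-basis implementations listed above):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def bitangent(normal, tangent, sign=1.0):
    """Derive the bitangent (binormal) from the vertex normal and tangent.
    `sign` flips the handedness when the UVs are mirrored."""
    return tuple(sign * c for c in cross(normal, tangent))

# bitangent((0, 0, 1), (1, 0, 0)) → (0, 1, 0)
```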
&lt;br /&gt;
When a triangle's vertex normals are pointing straight out, and a pixel in the normal map is neutral blue (128,128,255) this means that pixel's normal will be pointing straight out from the surface of the low-poly mesh. When that pixel normal is tilted towards the left or the right in the tangent coordinate space, it will get either more or less red color, depending on whether the normal map is set to store the X axis as either a positive or a negative value. Same goes for when the normal is tilted up or down in tangent space, it will either get more or less green color. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined together to create the final per-pixel surface normals.&lt;br /&gt;
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh including a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can just collapse edge loops or cut in new edges to add/remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square; any UVs outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, this will likely cause artifacts to appear in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
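The reason a whole-number offset is harmless: with wrapped texture addressing, only the fractional part of a UV coordinate matters when sampling. A hypothetical sketch:

```python
def wrap_uv(u, v):
    """Wrapped texture addressing keeps only the fractional part of each
    coordinate, so offsetting a UV shell by any whole number leaves its
    mapping unchanged."""
    return (u % 1.0, v % 1.0)

# A shell moved 1 unit right still samples the same texels:
# wrap_uv(1.25, -0.75) → (0.25, 0.25)
```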
&lt;br /&gt;
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The Render To Texture tool will always bake whatever UVs are the highest along the W axis. However, using W can be messy... it's generally hidden unless you purposefully look for it (bad for teamwork), doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details, and save UV space, which allows more detail in the normal map since the texture pixels are smaller on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. &lt;br /&gt;
&lt;br /&gt;
This is a mistake, however, because the vertex normals along the hole will often bend toward the hole a bit; there are no faces on the other side to average the normals with. This will create a strong lighting seam in the normal map. &lt;br /&gt;
&lt;br /&gt;
It's typically best to use the complete mirrored model to bake the normal map, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
In Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3) their symmetrical models commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However it only works along horizontal or vertical UV seams, not across any angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals are used to control the direction a triangle will be lit from; if the normal is facing the light the triangle will be fully lit, if facing away from the light the triangle won't be lit. &lt;br /&gt;
&lt;br /&gt;
Each vertex however can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
&lt;br /&gt;
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading for the outside of the car body. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps the vertex normal problem goes away since you're no longer relying on the crude vertex normals of the mesh. An object-space normal map completely ignores vertex normals. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble rendering long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, the shading can be improved by editing the vertex normals so that the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, other artists prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice, though; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geometry, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the high-poly, sometimes you'll need to make them more rounded than in real life, so they &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader is edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts projecting a certain numerical distance out from the low-poly mesh, and sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the mesh's surface normal and saves it in the normal map.&lt;br /&gt;
&lt;br /&gt;
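The projection step described above can be sketched in code. This is a hedged, minimal sketch of a single bake ray against a single high-poly triangle; all names here are illustrative, not taken from any real baking tool:&lt;br /&gt;

```python
# Minimal sketch of one bake ray: start a set distance out from the
# low-poly surface point, cast back inward, and record the normal of
# the high-poly triangle the ray hits (Moller-Trumbore intersection).
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2):
    """Return the hit distance t, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < 1e-9:          # ray is parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > 0 else None

def bake_sample(surface_point, vertex_normal, ray_distance, hi_tri):
    """One texel's worth of baking: push out, cast back in, grab the normal."""
    origin = surface_point + vertex_normal * ray_distance
    direction = -vertex_normal
    t = ray_triangle_hit(origin, direction, *hi_tri)
    if t is None or t > 2 * ray_distance:   # miss: nothing within range
        return None
    v0, v1, v2 = hi_tri
    n = np.cross(v1 - v0, v2 - v0)          # face normal of the hit triangle
    return n / np.linalg.norm(n)

# A flat high-poly triangle in the XY plane; bake from a point just below it.
tri = (np.array([0., 0., 1.]), np.array([4., 0., 1.]), np.array([0., 4., 1.]))
normal = bake_sample(np.array([1., 1., 0.]), np.array([0., 0., 1.]), 5.0, tri)
print(normal)  # points up: [0. 0. 1.]
```

A real baker repeats this for every texel, tests every high-poly triangle, and converts the sampled normal into tangent space before writing it to the map.&lt;br /&gt;
&lt;br /&gt;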
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this tends to render much slower and use more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at 2x the intended image size, then scale the normal map down 50% in a paint program like Photoshop. The pixel resampling during the reduction adds anti-aliasing for you in a very quick process. After scaling, make sure to re-normalize the map if your game doesn't do that already, because the un-normalized pixels in your normal map may cause pixelly artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
&lt;br /&gt;
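The downscale-then-renormalize step can be sketched with NumPy. This is an illustrative sketch assuming an unsigned RGB normal map where (128,128,255) is flat, not any particular tool's implementation:&lt;br /&gt;

```python
# Scale a baked normal map down 2x by averaging 2x2 blocks, then
# re-normalize each pixel so the averaged vectors are unit length again.
import numpy as np

def downscale_and_renormalize(rgb):
    """rgb: (H, W, 3) uint8 normal map with even H and W."""
    n = rgb.astype(np.float64) / 255.0 * 2.0 - 1.0   # decode to [-1, 1]
    # Average each 2x2 block (this is the "scale to 1/2" resample).
    h, w, _ = n.shape
    n = n.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
    # Averaging shortens the vectors; re-normalize them to unit length.
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip((n * 0.5 + 0.5) * 255.0, 0, 255).round().astype(np.uint8)

# A 2x2 patch of mixed slopes averages down to one re-normalized pixel.
patch = np.array([[[255, 128, 128], [128, 255, 128]],
                  [[128, 128, 255], [128, 128, 255]]], dtype=np.uint8)
print(downscale_and_renormalize(patch).shape)  # (1, 1, 3)
```

The same decode/renormalize/encode round trip is what the NVIDIA filter's normalize option does for you inside Photoshop.&lt;br /&gt;
&lt;br /&gt;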
3ds Max's supersampling doesn't work nicely with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding and re-do the padding later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, use either a specific lighting setup or a special material shader:&lt;br /&gt;
** The lighting setup is described in these tutorials:&lt;br /&gt;
*** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
*** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
** The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
*** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke]&lt;br /&gt;
*** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding |Edge Padding]], this will create shading seams on the UV borders.&lt;br /&gt;
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting the ray distance too small will lead to problems at other places on the mesh where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, often the differences between the low-poly mesh and the high-poly mesh will create a wavy edge in the normal map. There are several ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions the rays will be cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. Same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of the quad are moved around in certain shapes, the software's algorithm for polygon models tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape. It does this by switching the internal edge between its triangles.&lt;br /&gt;
&lt;br /&gt;
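A quick way to see why the diagonal matters: interpolate the center of a non-planar quad across each of its two possible diagonals. The numbers here are purely illustrative:&lt;br /&gt;

```python
# A non-planar quad has two possible triangulations. Interpolating the
# same surface point across each diagonal gives different results, which
# is why a flipped internal edge changes the baked or rendered shading.
a, b, c, d = (0, 0, 0.0), (1, 0, 1.0), (1, 1, 0.0), (0, 1, 1.0)  # x, y, z

# The quad's center (0.5, 0.5) lies exactly on either diagonal, so its
# interpolated height is just the midpoint of that diagonal's z values.
center_via_ac = (a[2] + c[2]) / 2   # diagonal a-c gives 0.0
center_via_bd = (b[2] + d[2]) / 2   # diagonal b-d gives 1.0
print(center_via_ac, center_via_bd)  # 0.0 1.0
```

The surface normals interpolate the same way as the heights, so a swapped diagonal shifts the shading across the whole quad, not just along the edge.&lt;br /&gt;
&lt;br /&gt;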
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outward along each vertex normal; at the distance you set, a ray is cast back inward. Wherever that ray intersects the high-poly mesh, it samples the normals from it. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inward from each vertex. This ballooned-out mesh is the cage.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;br&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all, it is just a texture, so you can clone, blur, copy, and blend all you want... as long as it looks good, of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way toward helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you simply convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. Often the best results are obtained by baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters or significant environment assets, baking is the best route, but for less significant environment surfaces, working from a heightmap-based texture will provide a good-enough result for a much smaller commitment in time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflection component. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
In a purely logical way, 127 seems like it would be the halfway point between 0 and 255. However, 128 is the color that actually works in practice. A test comparing (127,127,255) against (128,128,255) makes it obvious that 127 creates a slightly bent normal, while 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
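The 127-versus-128 question can be checked numerically. Below is a minimal sketch (not from the article) comparing two decode conventions commonly used for unsigned normal maps; which one applies depends on the engine and hardware.&lt;br /&gt;

```python
# Two common ways a shader expands an unsigned 8-bit channel to [-1, 1].

def decode_scale_bias(c):
    # n = c / 255 * 2 - 1  (plain scale-and-bias)
    return c / 255.0 * 2.0 - 1.0

def decode_center_128(c):
    # n = (c - 128) / 127  (treats 128 as exactly flat)
    return (c - 128) / 127.0

for c in (127, 128):
    print(c, round(decode_scale_bias(c), 4), round(decode_center_128(c), 4))
```

Under neither convention does 127 decode to a flat normal, while 128 is exactly flat under the second convention, which is what baking tools assume when they write (128,128,255).&lt;br /&gt;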
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail such as wrinkles and cracks. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below. &lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples CrazyBump's default values were used (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below was [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
| [[Image:NormalMap$nrmlmap_blending_methods_Maps.png]]&lt;br /&gt;
|-&lt;br /&gt;
| The blended normal maps.&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.ericchadwick.com Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This does probably the best job at preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normalmaps using multiple layers. (Note: to work with the Overlay blend mode, each layer's Output Level should be 128 instead of 255; you can use the Levels tool for this.)&lt;br /&gt;
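The channel-blend recipes above can also be expressed directly as vector math. The sketch below is a simplified, hypothetical per-pixel blend (add the detail map's X/Y slopes onto the base normal, keep the base Z, then renormalize); the tools listed above each use their own more sophisticated variations.&lt;br /&gt;

```python
import math

def decode(c):   # 8-bit channel -> [-1, 1]
    return c / 255.0 * 2.0 - 1.0

def encode(n):   # [-1, 1] -> 8-bit channel
    return max(0, min(255, round((n + 1.0) * 0.5 * 255.0)))

def blend_pixels(base_rgb, detail_rgb):
    """Add the detail map's x/y slopes onto the base normal, keep the
    base z, then renormalize. A simplified per-pixel sketch of the
    'detail on top of bake' blends described above; real tools differ."""
    bx, by, bz = (decode(c) for c in base_rgb)
    dx, dy, _ = (decode(c) for c in detail_rgb)
    x, y, z = bx + dx, by + dy, bz
    length = math.sqrt(x*x + y*y + z*z) or 1.0
    return tuple(encode(v / length) for v in (x, y, z))

# A flat detail pixel leaves the base normal (almost) unchanged:
print(blend_pixels((128, 128, 255), (128, 128, 255)))
```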
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt; How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However, if you edit normal maps by hand or blend multiple normal maps together, those lengths can change. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface: the specular highlight may speckle wildly, the surface may get patches of odd shadowing, etc. To help you avoid this, NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
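For reference, re-normalizing is simple to express in code. This is a minimal sketch assuming the common unsigned 0-255 encoding, not the actual NVIDIA or Xnormal implementation:&lt;br /&gt;

```python
import math

def renormalize_pixel(rgb):
    """Decode an unsigned normal-map pixel, reset its vector length
    to 1, and re-encode. Sketch of what a 'Normalize Only' pass does."""
    x, y, z = (c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(x*x + y*y + z*z)
    if length == 0.0:
        return (128, 128, 255)  # degenerate pixel: point it straight up
    return tuple(max(0, min(255, round((v / length + 1.0) * 0.5 * 255.0)))
                 for v in (x, y, z))

# A normal that was shortened by editing gets its length reset to 1:
print(renormalize_pixel((128, 128, 192)))   # short +Z normal -> (128, 128, 255)
```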
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[:Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map then it also occludes diffuse lighting, which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, adjust the levels of the AO layer first so the darks only go as low as 128 gray, then set the AO layer to Darken mode. This will shorten the normals in the normalmap, causing the surface to receive less light in the darker areas. &lt;br /&gt;
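In code, the Levels-then-Darken trick amounts to scaling each normal's length by a remapped occlusion term. The sketch below is one way to model the effect (Photoshop's Darken mode is a per-channel approximation of this); the function name and encoding are illustrative.&lt;br /&gt;

```python
def bake_ao_into_normal(normal_rgb, ao_gray):
    """Scale the normal's length by the AO term, after remapping AO
    from [0, 255] to [0.5, 1.0] (the 'darks no lower than 128 gray'
    Levels step). Only works with shaders that do NOT renormalize."""
    scale = 0.5 + 0.5 * (ao_gray / 255.0)      # 0 -> 0.5, 255 -> 1.0
    x, y, z = (c / 255.0 * 2.0 - 1.0 for c in normal_rgb)
    return tuple(max(0, min(255, round((v * scale + 1.0) * 0.5 * 255.0)))
                 for v in (x, y, z))

# Fully occluded pixel: the flat normal's length is halved -> darker shading.
print(bake_ao_into_normal((128, 128, 255), 0))
```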
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;&amp;lt;Anchor([[BacklightingExample]])&amp;gt;&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$tree_front.jpg]] &lt;br /&gt;
|-&lt;br /&gt;
| Tree simulating subsurface scattering (front view).&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps, one using a regular tangent-space normal map, the other using the same normal map but with the blue channel inverted. This causes the diffuse map using the regular normal map to be lit only on the side facing the light (front view), while the diffuse map using the inverted normal map is lit only on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, leaves in shadow do not receive direct lighting; their backsides do not show the inverted normal map, so the fake subsurface-scatter effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched the distant trees to a model that uses a cheaper shader.&lt;br /&gt;
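The two-diffuse idea can be sketched as lighting math. This is a hypothetical, simplified version (the names, signature, and colors are illustrative; the actual shader combines textured diffuse maps, shadowing, etc.):&lt;br /&gt;

```python
def lambert(n, l):
    """Clamped Lambert term for unit vectors n and l."""
    d = sum(a * b for a, b in zip(n, l))
    return max(d, 0.0)

def backlit_diffuse(n, light_dir, front_color, back_color):
    """Sum two diffuse terms: one lit by the normal, one lit by the
    normal with its Z (the blue channel) flipped. Sketch of the
    leaf-shader idea described above."""
    n_flipped = (n[0], n[1], -n[2])
    f = lambert(n, light_dir)
    b = lambert(n_flipped, light_dir)
    return tuple(f * fc + b * bc for fc, bc in zip(front_color, back_color))

# Light hitting the front face lights only the front diffuse term:
print(backlit_diffuse((0, 0, 1), (0, 0, 1), (0.2, 0.8, 0.2), (0.6, 0.9, 0.4)))
```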
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;&amp;lt;Anchor([[ShadersAndSeams]])&amp;gt;&amp;gt;&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
Maps baked in Xnormal display accurately in Xnormal's own viewer, and the SDK includes a method to write your own custom tangent space generator for the tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but do not render correctly in the realtime viewport with the 3ds Max shaders. Max is using a different [[#TangentBasis|tangent basis]] for each. This is readily apparent when creating non-organic hard surface normalmaps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport by baking a tangent-space map in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map, convert it into a tangent-space map with [[#CBS|Nspace]], then load that in a DirectX material using the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$max2010_normalmap_workarounds_thumb.png]] &lt;br /&gt;
|-&lt;br /&gt;
| Viewport methods in 3ds Max 2010.&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;[[attachment:max2010_normalmap_workarounds.png|Actual size]]&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to correctly generate normals for realtime viewing, with the correct [[#TangentBasis|tangent basis]], showing far fewer smoothing errors than 3ds Max. &lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]) '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager, then you can create them in the same way you create a Lambert, Phong etc. Switch OFF high quality rendering in the viewports to see them correctly too.&lt;br /&gt;
* If you want to use the software renderer, use mental ray instead of Maya's software renderer, because mental ray correctly interprets tangent-space normals. The Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot; the slider still works if you don't mind a uniform value for gloss. Spec maps work fine. Just use the same setup as you would for viewport rendering. Note that your textures need to be saved as TGAs or similar for mental ray to work. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;&amp;lt;Anchor([[NormalMapCompression]])&amp;gt;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
See [[Normal Map Compression]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[:Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[:Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;&amp;lt;BR&amp;gt;&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated! [http://wiki.polycount.net {{http://boards.polycount.net/images/smilies/pcount/icons/smokin.gif}}]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Texturing]] [[Category:TextureTypes]] [[Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_Map_Compression</id>
		<title>Normal Map Compression</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_Map_Compression"/>
				<updated>2014-11-29T08:22:15Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Created page with &amp;quot;__NOTOC__ Normal maps can take up a lot of memory. Compression can reduce the size of a map to 1/4 of what it was uncompressed, which means you can either increase the resolut...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
Normal maps can take up a lot of memory. Compression can reduce the size of a map to 1/4 of its uncompressed size, which means you can either increase the resolution or use more maps.&lt;br /&gt;
&lt;br /&gt;
Usually the compression method is to throw away the blue channel, because it can be re-computed at minimal cost in the shader code. The bitmap then only has to store two color channels, instead of four (red, green, blue, and alpha).&lt;br /&gt;
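Recomputing the blue channel works because a unit-length normal satisfies x^2 + y^2 + z^2 = 1, and a tangent-space Z is never negative. A minimal sketch of the reconstruction, assuming the common unsigned 0-255 encoding:&lt;br /&gt;

```python
import math

def reconstruct_blue(r, g):
    """Rebuild the blue channel from red and green, assuming the
    stored normal is unit length: z = sqrt(1 - x^2 - y^2)."""
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x*x - y*y))
    return round((z + 1.0) * 0.5 * 255.0)

# A flat pixel (x and y near 0) reconstructs to a fully +Z blue channel:
print(reconstruct_blue(128, 128))  # -> 255
```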
&lt;br /&gt;
* The article [http://developer.download.nvidia.com/whitepapers/2008/real-time-normal-map-dxt-compression.pdf Real-Time Normal Map DXT Compression] (PDF) from [http://www.idsoftware.com/ id software] and [http://developer.nvidia.com NVIDIA] is an excellent introduction to compression.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DXT5nm Compression ==&lt;br /&gt;
DXT5nm is the same file format as DXT5 except before compression the red channel is moved into the alpha channel, the green channel is left as-is, and the red and blue channels are blanked with the same solid color. This re-arranging of the normal map axes is called ''swizzling''.&lt;br /&gt;
&lt;br /&gt;
The Green and Alpha channels are used because in the DXT format they are compressed using somewhat higher bit depths than the Red and Blue channels. Red and Blue have to be filled with the same solid color because DXT uses a compression system that compares differences between the three color channels. If you try to store some kind of texture in Red and/or Blue (specular power, height map, etc.) then the compressor will create more compression artifacts because it has to compare all three channels.&lt;br /&gt;
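The swizzle itself is just a channel shuffle. A minimal sketch (the fill color is an assumption; tools vary, as long as red and blue share one solid color):&lt;br /&gt;

```python
def swizzle_dxt5nm(r, g, b, a=255):
    """Re-arrange an RGBA normal-map pixel for DXT5nm: X (red) moves
    into alpha, Y (green) stays put, and red/blue are filled with one
    solid color so the color-block compression stays clean."""
    FILL = 255                 # any constant works as long as R == B
    return (FILL, g, FILL, r)  # (R, G, B, A) after swizzling

print(swizzle_dxt5nm(200, 90, 255))  # -> (255, 90, 255, 200)
```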
&lt;br /&gt;
There are some options in the NVIDIA DXT compressor that help reduce the artifacts if you want to add texture to the Red or Blue channels. The artifacts will be greater than if you keep Red and Blue empty, but it might be a tradeoff worth making. Some notes about this on the [http://developer.nvidia.com/forums/index.php?showtopic=1366 NVIDIA Developer Forums].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== DXT1 Compression ==&lt;br /&gt;
DXT1 is also used sometimes for tangent-space normal maps, because it is half the size of a DXT5. The downside though is that it causes many more compression artifacts, so much so that most people end up not using it. &lt;br /&gt;
&lt;br /&gt;
* The blog post [http://realtimecollisiondetection.net/blog/?p=28#more-28 I like spilled beans!] by [http://realtimecollisiondetection.net/blog/?page_id=2 Christer Ericson] has a section about Capcom's clever use of DXT1 and DXT5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 3Dc Compression ==&lt;br /&gt;
3Dc compression is also known as BC5 in DirectX 10. It works similarly to DXT5nm, because it only stores the X and Y channels. The difference is that it stores both the same way as the DXT5 alpha channel, which uses a slightly higher bit depth than DXT5nm's green channel. 3Dc yields the best results of any listed algorithm for tangent-space normal map compression, and requires no extra processing time or unique hardware. See [[3Dc]] for more information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== [[A8L8]] Compression ==&lt;br /&gt;
The DDS format A8L8 isn't actually compressed, it's just two 8-bit grayscale channels (256 grays each). It does save you from having to store all three color channels; your shader has to recompute the blue channel for it to work. However, A8L8 does not actually save any space in texture memory, because it is typically converted to a four-channel 32-bit texture when it's sent to the card. This format really only helps save disk space.&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Normal_map</id>
		<title>Normal map</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Normal_map"/>
				<updated>2014-11-29T08:20:14Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- ## page was renamed from Normal Map --&amp;gt;&lt;br /&gt;
= Normal Map =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WhatIsANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;WIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== What is a Normal Map? ==&lt;br /&gt;
A Normal Map is usually used to fake high-res geometry detail when it's mapped onto a low-res mesh. The pixels of the normal map each store a ''normal'', a vector that describes the surface slope of the original high-res mesh at that point. The red, green, and blue channels of the normal map are used to control the direction of each pixel's normal. &lt;br /&gt;
&lt;br /&gt;
When a normal map is applied to a low-poly mesh, the texture pixels control the direction each of the pixels on the low-poly mesh will be facing in 3D space, creating the illusion of more surface detail or better curvature. However, the silhouette of the model doesn't change. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed&amp;quot;&amp;gt;&lt;br /&gt;
Whatif_normalmap_mapped2.jpg|A model with a normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_low.jpg|The model without its normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
Whatif_normalmap_high.jpg|The high-resolution model used to create the normal map.&amp;lt;br&amp;gt;Image by [http://www.jameskuart.com/ James Ku].&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Tangent-Space vs. Object-Space==&lt;br /&gt;
&lt;br /&gt;
Normal maps come in either of two basic flavors: tangent-space or object-space. Object-space is also called local-space or model-space. World-space is basically the same as object-space, except it requires the model to remain in its original orientation, neither rotating nor deforming, so it's almost never used.&lt;br /&gt;
&lt;br /&gt;
===Tangent-space normal map===&lt;br /&gt;
[[image:normalmap_tangentspace.jpg|frame|none|A tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Predominantly-blue colors. Object can rotate and deform. Good for deforming meshes, like characters, animals, flags, etc.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Maps can be reused easily, like on differently-shaped meshes.&lt;br /&gt;
* Maps can be tiled and mirrored easily, though some games might not support mirroring very well.&lt;br /&gt;
* Easier to overlay painted details.&lt;br /&gt;
* Easier to use image compression.&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* More difficult to avoid smoothing problems from the low-poly vertex normals (see Smoothing Groups and Hard Edges).&lt;br /&gt;
* Slightly slower performance than an object-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
===Object-space normal map===&lt;br /&gt;
[[image:normalmap_worldspace.jpg|frame|none|An object-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
Rainbow colors. Objects can rotate, but usually shouldn't be deformed, unless the shader has been modified to support deformation.&lt;br /&gt;
&lt;br /&gt;
Pros:&lt;br /&gt;
* Easier to generate high-quality curvature because it completely ignores the crude smoothing of the low-poly vertex normals.&lt;br /&gt;
* Slightly better performance than a tangent-space map (but not by much).&lt;br /&gt;
&lt;br /&gt;
Cons:&lt;br /&gt;
* Can't easily reuse maps, different mesh shapes require unique maps.&lt;br /&gt;
* Difficult to tile properly, and mirroring requires specific shader support.&lt;br /&gt;
* Harder to overlay painted details because the base colors vary across the surface of the mesh. Painted details must be converted into Object Space to be combined properly with the OS map.&lt;br /&gt;
* They don't compress very well, since the blue channel can't be recreated in the shader like with tangent-space maps. Also, the three color channels contain very different data, which compresses poorly and creates many artifacts. Using a half-resolution object-space map is one option. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Converting Between Spaces ===&lt;br /&gt;
Normal maps can be converted between tangent space and object space, in order to use them with different blending tools and shaders, which require one type or the other.&lt;br /&gt;
&lt;br /&gt;
[http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] created a tool called [http://boards.polycount.net/showthread.php?p=1072599#post1072599 NSpace] that converts an object-space normal map into a tangent-space map, which then works seamlessly in the Max viewport. He converts the map by using the same tangent basis that 3ds Max uses for its hardware shader. To see the results, load the converted map via the ''Normal Bump'' map and enable &amp;quot;Show Hardware Map in Viewport&amp;quot;. [http://gameartist.nl/ Osman &amp;quot;osman&amp;quot; Tsjardiwal] created a GUI for NSpace, you can [http://boards.polycount.net/showthread.php?p=1075143#post1075143 download it here], just put it in the same folder as the NSpace exe and run it. Diogo has further [http://boards.polycount.net/showthread.php?p=1074160#post1074160 plans for the tool] as well.&lt;br /&gt;
&lt;br /&gt;
[[File:NSpace_Gui_osman.png|frame|none|NSpace interface. &amp;lt;br&amp;gt;Image by [http://diogo.codingcorner.net Diogo &amp;quot;fozi&amp;quot; Teixeira] and [http://gameartist.nl Osman &amp;quot;osman&amp;quot; Tsjardiwal]]]&lt;br /&gt;
&lt;br /&gt;
[http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson] said: &amp;quot;[8Monkey Labs has] a tool that lets you load up your reference mesh and object space map. Then load up your tangent normals, and adjust some sliders for things like tile and amount. We need to load up a mesh to know how to correctly orient the tangent normals or else things will come out upside down or reverse etc. It mostly works, but it tends to &amp;quot;bend&amp;quot; the resulting normals, so you gotta split the mesh up into some smoothing groups before you run it, and then I usually will just composite this &amp;quot;combo&amp;quot; texture over my orig map in Photoshop.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RGBC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;RGBChannels&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RGB Channels ==&lt;br /&gt;
Shaders can use different techniques to render tangent-space normal maps, but the normal map directions are usually consistent within a game. Usually the red channel of a tangent-space normal map stores the X axis (pointing the normals predominantly leftwards or rightwards), the green channel stores the Y axis (pointing the normals predominantly upwards or downwards), and the blue channel stores the Z axis (pointing the normals outwards away from the surface).&lt;br /&gt;
&lt;br /&gt;
[[image:tangentspace_rgb.jpg|frame|none|The red, green, and blue channels of a tangent-space normal map. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you see lighting coming from the wrong angle on your normal-mapped model, and the model is using a tangent-space normal map, the shader might be expecting the red or green channel (or both) to point in the opposite direction. To fix this, either change the shader or simply invert the appropriate color channels in an image editor, so that the black pixels become white and the white pixels become black.&lt;br /&gt;
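As a rough illustration of that channel flip (a minimal pure-Python sketch with hypothetical helper names, not tied to any particular tool), inverting a channel just means subtracting each 8-bit value from 255:&lt;br /&gt;

```python
# Minimal sketch: flipping the green (Y) channel of a tangent-space
# normal map pixel, for a shader that expects the opposite Y direction.
# Pixels are assumed to be 8-bit (r, g, b) tuples in the 0-255 range.

def invert_channel(value):
    """Invert one 8-bit channel: black becomes white and vice versa."""
    return 255 - value

def flip_green(pixel):
    """Return the pixel with its Y (green) direction reversed."""
    r, g, b = pixel
    return (r, invert_channel(g), b)
```

An image editor's Invert command on the green channel does the same thing to every pixel at once.&lt;br /&gt;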
&lt;br /&gt;
Some shaders expect the color channels to be swapped or re-arranged to work with a particular [[#NormalMapCompression|compression format]]. For example the DXT5_nm format usually expects the X axis to be in the alpha channel, the Y axis to be in the green channel, and the red and blue channels to be empty.&lt;br /&gt;
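As an illustration of that re-arrangement (a hedged sketch; the exact swizzle and filler values vary by engine and compressor, and both function names here are made up), the DXT5_nm-style storage and the shader-side Z reconstruction look roughly like this:&lt;br /&gt;

```python
import math

def to_dxt5nm(r, g, b):
    """Rearrange a normal map pixel for DXT5_nm-style storage:
    X moves to the alpha channel, Y stays in green, and the red and
    blue channels are left empty (zeroed here; some tools use other
    filler values)."""
    return (0, g, 0, r)  # (red, green, blue, alpha)

def reconstruct_z(x, y):
    """Rebuild the Z axis in the shader from X and Y, assuming a
    unit-length normal: z = sqrt(1 - x^2 - y^2). Inputs are in the
    -1..1 range, already expanded from 0..255."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))
```

Dropping Z this way is what lets the format spend its two high-quality compression channels (green and alpha) on the X and Y axes.&lt;br /&gt;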
&lt;br /&gt;
== Tangent Basis ==&lt;br /&gt;
[[#TangentSpaceVsObjectSpace|Tangent-space]] normal maps use a special kind of vertex data called the ''tangent basis''. This is similar to UV coordinates, except that it provides directionality across the surface; it forms a surface-relative coordinate system for the per-pixel normals stored in the normal map. &lt;br /&gt;
&lt;br /&gt;
Light rays are in world space, but the normals stored in the normal map are in tangent space. When a normal-mapped model is being rendered, the light rays must be converted from world space into tangent space, using the tangent basis to get there. At that point the incoming light rays are compared against the directions of the normals in the normal map, and this determines how much each pixel of the mesh is going to be lit. Alternatively, instead of converting the light rays some shaders will convert the normals in the normal map from tangent space into world space. Then those world-space normals are compared against the light rays, and the model is lit appropriately. The method depends on who wrote the shader, but the end result is the same.&lt;br /&gt;
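The world-to-tangent conversion described above boils down to projecting the light direction onto the three tangent-basis axes. A minimal sketch, assuming unit-length, mutually perpendicular basis vectors (real shaders do the same thing with a 3x3 TBN matrix):&lt;br /&gt;

```python
# Transform a world-space light direction into tangent space by
# projecting it onto the vertex's tangent-basis axes.

def dot(a, b):
    """Dot product of two 3D vectors stored as tuples."""
    return sum(x * y for x, y in zip(a, b))

def world_to_tangent(light_dir, tangent, bitangent, normal):
    """Each tangent-space component is the world-space light
    direction projected onto one basis axis."""
    return (dot(light_dir, tangent),
            dot(light_dir, bitangent),
            dot(light_dir, normal))
```

For a flat, upward-facing surface whose basis axes line up with the world axes, the light direction passes through unchanged; on a twisted UV shell the basis axes differ, and the same world-space light comes out rotated.&lt;br /&gt;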
&lt;br /&gt;
Unfortunately for artists, there are many different ways to calculate the tangent basis: [http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping 3ds Max], [http://download.autodesk.com/us/maya/2011help/index.html?url=./files/Appendix_A_Tangent_and_binormal_vectors.htm,topicNumber=d0e227193 Maya], [http://www.codesampler.com/dx9src/dx9src_4.htm#dx9_dot3_bump_mapping DirectX 9], [http://developer.nvidia.com/object/NVMeshMender.html NVMeshMender], [http://www.terathon.com/code/tangent.html Eric Lengyel], a custom solution, etc. This means a normal map baked in one application probably won't shade correctly in another. Artists must do some testing with different [[#T|baking tools]] to find which works best with their output. When the renderer (or game engine) renders your game model, [[#ShadersAndSeams|the shader]] must use the same tangent basis as the normal map baker, otherwise you'll get incorrect lighting, especially across the seams between UV shells.&lt;br /&gt;
&lt;br /&gt;
The [http://www.xnormal.net/ xNormal] SDK supports custom tangent basis methods. When a programmer uses it to implement their renderer's own tangent basis, artists can then use xNormal to bake normal maps that will match their renderer perfectly.&lt;br /&gt;
&lt;br /&gt;
The [[#UVC|UVs]] and the [[#SGAHE|vertex normals]] on the low-res mesh directly influence the coloring of a [[#TSNM|tangent-space]] normal map when it is baked. Each tangent basis vertex is a combination of three things: the mesh vertex's normal (influenced by smoothing), the vertex's tangent (usually derived from the U texture coordinate), and the vertex's bitangent (derived in code, also called the binormal). These three vectors create an axis for each vertex, giving it a specific orientation in tangent space. These axes are used to properly transform the incoming lighting from world space into tangent space, so your normal-mapped model will be lit correctly.&lt;br /&gt;
&lt;br /&gt;
When a triangle's vertex normals point straight out and a pixel in the normal map is neutral blue (128,128,255), that pixel's normal points straight out from the surface of the low-poly mesh. When the pixel normal tilts left or right in tangent space, it gets more or less red, depending on whether the map stores the X axis as a positive or a negative value. Likewise, when the normal tilts up or down in tangent space, it gets more or less green. If the vertex normals aren't exactly perpendicular to the triangle, the normal map pixels will be tinted away from neutral blue as well. The vertex normals and the pixel normals in the normal map are combined to create the final per-pixel surface normals.&lt;br /&gt;
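The color-to-normal mapping described above can be sketched in a few lines, assuming the common convention that each 8-bit channel maps 0..255 onto the -1..1 range (note that 128 is only approximately neutral in this scheme, which is why some pipelines use 127 or 127.5 as the midpoint):&lt;br /&gt;

```python
def decode_normal(pixel):
    """Expand an 8-bit (r, g, b) normal map pixel into a normal
    vector, mapping each channel from 0..255 onto -1..1. With this
    convention, neutral blue (128, 128, 255) decodes to
    approximately (0, 0, 1): pointing straight out of the surface."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in pixel)
```

A pixel with extra red, such as (200, 128, 230), decodes to a normal tilted along the X axis, exactly as described above.&lt;br /&gt;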
&lt;br /&gt;
[[#SAS|Shaders]] are written to use a particular direction or &amp;quot;handedness&amp;quot; for the X and Y axes in a normal map. Most apps tend to prefer +X (red facing right) and +Y (green facing up), while others like 3ds Max prefer +X and -Y. This is why you often need to invert the green channel of a normal map to get it to render correctly in this or that app... the shader is expecting a particular handedness.&lt;br /&gt;
&lt;br /&gt;
[[image:tangentseams.jpg|frame|none|When shared edges are at different angles in UV space, different colors will show up&lt;br /&gt;
along the seam. The tangent basis uses these colors to light the model properly. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
When you look at a tangent-space normal map for a character, you typically see different colors along the UV seams. This is because the UV shells are often oriented at different angles on the mesh, a necessary evil when translating the 3D mesh into 2D textures. The body might be mapped with a vertical shell, and the arm mapped with a horizontal one. This requires the normals in the normal map to be twisted for the different orientations of those UV shells. The UVs are twisted, so the normals must be twisted in order to compensate. The tangent basis helps reorient (twist) the lighting as it comes into the surface's local space, so the lighting will then look uniform across the normal mapped mesh.&lt;br /&gt;
&lt;br /&gt;
When an artist tiles a tangent-space normal map across an arbitrary mesh, like a landscape, this tends to shade correctly because the mesh has a uniform direction in tangent space. If the mesh has discontinuous UV coordinates (UV seams), or the normal map has large directional gradients across it, the tangent space won't be uniform anymore so the surface will probably have shading seams.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTLPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling the Low-Poly Mesh ==&lt;br /&gt;
The in-game mesh usually needs to be carefully optimized to create a good silhouette, define edge-loops for better deformation, and minimize extreme changes between the vertex normals for better shading (see [[#SmoothingGroupsAndHardEdges|Smoothing Groups &amp;amp; Hard Edges]]).&lt;br /&gt;
&lt;br /&gt;
In order to create an optimized in-game mesh including a good silhouette and loops for deforming in animation, you can start with the 2nd subdivision level of your [[DigitalSculpting|digital sculpt]], or in some cases with the base mesh itself. Then you can collapse edge loops or cut in new edges to add/remove detail as necessary. Or you can [[DigitalSculpting#OART|re-topologize]] from scratch if that works better for you.&lt;br /&gt;
&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts] on the Polycount forum.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UVC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;UVCoordinates&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== UV Coordinates ===&lt;br /&gt;
Normal map baking tools only capture normals within the 0-1 UV square; any UV bits outside this area are ignored. &lt;br /&gt;
&lt;br /&gt;
Only one copy of the forward-facing UVs should remain in the 0-1 UV square at baking time. If the mesh uses overlapping UVs, this will likely cause artifacts to appear in the baked map, since the baker will try to render each UV shell into the map. Before baking, it's best to move all the overlaps and mirrored bits outside the 0-1 square. &lt;br /&gt;
&lt;br /&gt;
[[image:Normalmap_uvcoord_offset.jpg|frame|none|The mirrored UVs (in red) are offset 1 unit before baking. &amp;lt;br&amp;gt;Image by [http://ericchadwick.com Eric Chadwick].]]&lt;br /&gt;
&lt;br /&gt;
If you move all the overlaps and mirrored bits exactly 1 UV unit (any whole number will do), then you can leave them there after the bake and they will still be mapped correctly. You can move them back if you want, it doesn't matter to most game engines. Be aware that ZBrush does use UV offsets to manage mesh visibility, however this usually doesn't matter because the ZBrush cage mesh is often a different mesh than the in-game mesh used for baking.&lt;br /&gt;
&lt;br /&gt;
You should avoid changing the UVs after baking the normal map, because rotating or mirroring UVs after baking will cause the normal map not to match the [[#TB|tangent basis]] anymore, which will likely cause lighting problems. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, W is a third texture coordinate. It's used for 3D procedural textures and for storing vertex color in UV channels (you need 3 axes for RGB, so UVW can store vertex color). Bake problems can be avoided by moving any overlapping UVs to -1 on the W axis, with the same results as moving them 1 unit on the U or V axes. The Render To Texture tool will always bake whatever UVs are highest along the W axis. However, using W can be messy... it's generally hidden unless you purposefully look for it (bad for teamwork), it doesn't get preserved on export to other apps, and high W values can prevent selecting and/or welding UVs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;M&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Mirroring&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mirroring ===&lt;br /&gt;
Normal maps can be mirrored across a model to create symmetrical details, and save UV space, which allows more detail in the normal map since the texture pixels are smaller on the model. &lt;br /&gt;
&lt;br /&gt;
With [[#OSNM|object-space]] maps, mirroring requires [http://boards.polycount.net/showthread.php?t=53986 specific shader support]. For [[#TSNM|tangent-space]] maps, mirroring typically creates a shading seam, but this can be reduced or hidden altogether, depending on the method used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TMW&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Typical Mirroring Workflow ====&lt;br /&gt;
# Delete the mesh half that will be mirrored. &lt;br /&gt;
# Arrange the UVs for the remaining model, filling the UV square.&lt;br /&gt;
# Mirror the model to create a &amp;quot;whole&amp;quot; mesh, welding the mesh vertices along the seam. &lt;br /&gt;
# Move the mirrored UVs exactly 1 unit (or any whole number) out of the 0-1 UV square.&lt;br /&gt;
# Bake the normal map.&lt;br /&gt;
&lt;br /&gt;
Sometimes an artist will decide to delete half of a symmetrical model before baking. &lt;br /&gt;
&lt;br /&gt;
This is a mistake, however, because the vertex normals along the hole will often bend towards the hole a bit; there are no faces on the other side to average the normals with. This creates a strong lighting seam in the normal map. &lt;br /&gt;
&lt;br /&gt;
It's typically best to use the complete mirrored model to bake the normal map, not just the unique half. &lt;br /&gt;
&lt;br /&gt;
To prevent the mirrored UVs from causing overlaps or baking errors, move the mirrored [[#UVC|UVs]] out of the 0-1 UV space, so only one copy of the non-mirrored UVs is left within the 0-1 square.&lt;br /&gt;
&lt;br /&gt;
To avoid texel &amp;quot;leaks&amp;quot; between the UV shells, make sure there's enough [[#Edge_padding|Edge Padding]] around each shell, including along the edges of the normal map. None of the UV shells should be touching the edge of the 0-1 UV square, unless they're meant to tile with the other side of the map.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;CM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Center Mirroring ====&lt;br /&gt;
If the mirror seam runs along the surface of a continuous mesh, like down the center of a human face for example, then it will probably create a lighting seam. &lt;br /&gt;
&lt;br /&gt;
In Epic Games' [http://www.unrealtechnology.com/technology.php Unreal Engine 3] (UE3) their symmetrical models commonly use centered mirroring. Epic uses materials that mix a [[DetailMap]] with the normal maps; these seem to scatter the diffuse/specular lighting and help minimize the obviousness of the mirror seams. For their [[Light Map]]ped models they use [http://udn.epicgames.com/Three/LightMapUnwrapping.html a technique] that can almost completely hide the mirror seam.&lt;br /&gt;
&lt;br /&gt;
[[image:Epic_MirroringCicada.jpg|frame|none| In UE3 a center mirror seam is reduced by using a detail normal map. &amp;lt;br&amp;gt; Image by &amp;quot;[http://epicgames.com Epic Games]&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
'''''[http://www.zbrushcentral.com/showpost.php?p=573108&amp;amp;postcount=28 GOW2 normal map seams], [http://utforums.epicgames.com/showthread.php?p=27166791#post27166791 UDK normal map seams]'''''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Offset Mirroring ====&lt;br /&gt;
Offset mirroring is a method where you move the mirror seam off to one side of the model, so the seam doesn't run exactly down the center. For example with a character's head, the UV seam can go down along the side of the head in front of the ear. The UV shell for the nearest ear can then be mirrored to use the area on the other side of the head. &lt;br /&gt;
&lt;br /&gt;
This avoids the &amp;quot;Rorschach&amp;quot; effect and allows non-symmetrical details, but it still saves texture space because the two sides of the head can be mirrored (they're never seen at the same time anyhow).&lt;br /&gt;
&lt;br /&gt;
Offset mirroring doesn't get rid of the seam, but it does move it off to a place where it can either be less obvious, or where it can be hidden in a natural seam on the model.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;FCM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Flat Color Mirroring ====&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] solves seams by painting a flat set of normals along the seam, using neutral blue (128,128,255). However it only works along horizontal or vertical UV seams, not across any angled UVs. It also removes any details along the mirror seam, creating blank areas. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Element Mirroring ====&lt;br /&gt;
The mirror seam can be avoided completely when it doesn't run directly through any mesh. For example if there's a detached mesh element that runs down the center of the model, this can be uniquely mapped, while the meshes on either side can be mirrors of each other. Whenever the mirrored parts don't share any vertex normals with the non-mirrored parts, there won't be any seams. &lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_mirrored-binocs-racer445.jpg|frame|none|The middle part (highlighted in red) uses unique non-mirrored UVs, allowing the mesh on the right to be mirrored without any seams. &amp;lt;br&amp;gt;Image by [http://racer445.com/ &amp;quot;racer445&amp;quot;]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SGAHE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Smoothing Groups &amp;amp; Hard Edges ===&lt;br /&gt;
Each vertex in a mesh has at least one vertex normal. Vertex normals control the direction a triangle will be lit from: if the normal faces the light, the triangle will be fully lit; if it faces away from the light, the triangle won't be lit. &lt;br /&gt;
&lt;br /&gt;
Each vertex however can have more than one vertex normal. When two triangles have different vertex normals along their shared edge, this creates a shading seam, called a ''hard edge'' in most modeling tools. 3ds Max uses ''Smoothing Groups'' to create hard/soft edges, Maya uses ''Harden Edge'' and ''Soften Edge''. These tools create hard and soft edges by splitting and combining the vertex normals.&lt;br /&gt;
&lt;br /&gt;
[[image:BenMathis_SmoothingGroups_Excerpt.gif|frame|none|Hard edges occur where the vertices have multiple normals. &amp;lt;br&amp;gt;Image by [http://poopinmymouth.com Ben 'poopinmymouth' Mathis] ([http://poopinmymouth.com/process/tips/smoothing_groups.jpg tutorial here])]]&lt;br /&gt;
&lt;br /&gt;
When a mesh uses all soft normals (a single smoothing group) the lighting has to be interpolated across the extreme differences between the vertex normals. If your renderer doesn't support the same [[#TangentBasis|tangent basis]] that the baker uses, this can produce extreme shading differences across the model, which creates shading artifacts. It is generally best to reduce these extremes when you can because a mismatched renderer can only do so much to counteract it.&lt;br /&gt;
&lt;br /&gt;
Hard edges are usually best where the model already has a natural seam. For example, you can add a hard edge along the rim of a car's wheel well, to prevent the inside of the wheel well from distorting the shading for the outside of the car body. Mechanical models usually need hard edges wherever the surface bends more than about 45 degrees. &lt;br /&gt;
&lt;br /&gt;
For most meshes, the best results usually come from adding hard edges wherever there are UV seams. There are no hard rules, however; you must experiment with different approaches to find what works best in your game.&lt;br /&gt;
&lt;br /&gt;
When you use object-space normal maps the vertex normal problem goes away since you're no longer relying on the crude vertex normals of the mesh. An object-space normal map completely ignores vertex normals. Object-space mapping allows you to use all soft edges and no bevels on the low-res mesh, without showing lighting errors.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;HEDAT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Hard Edge Discussions &amp;amp; Tutorials ====&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?p=2090450#post2090450 Maya MEL Script help needed (UV border edges)]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=107196 You're making me hard. Making sense of hard edges, uvs, normal maps and vertex counts]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73593 Normal Maps: Can Somone Explain This &amp;quot;Black Edge&amp;quot; issue]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=73566 Normal Maps: Can someone explain normals, tangents and split UVs?]&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68173 Why you should NOT trust 3ds Max's viewport normal-map display!]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/10503-xsi-normal-mapped-cube-looks-bad.html XSI - normal mapped cube looks bad]&lt;br /&gt;
* [http://www.game-artist.net/forums/support-tech-discussion/11924-weird-maya-normal-map-seam-artifact-problem-am-i-making-simple-mistake.html Weird Maya normal map seam/artifact problem]&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?p=1080600 Seams in Normals when Creating Tiling Environment Trims and other Tiles]&lt;br /&gt;
* The tutorial [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing can affect the normal map.&lt;br /&gt;
* The tutorial: [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] shows how smoothing affects raycasting.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses the breaking of normals and smoothing groups in general terms.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in the game, not the triangle count.&lt;br /&gt;
* The Crysis documentation [http://doc.crymod.com/AssetCreation/PolyBumpReference.html PolyBump Reference] has a section towards the bottom that shows how smoothing affects their baked normal maps.&lt;br /&gt;
* The polycount thread [http://boards.polycount.net/showthread.php?t=60694 Toying around with normal map approaches] has a great discussion of how best to use smoothing groups and bevels for better shading.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;UB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Using Bevels ====&lt;br /&gt;
Bevels/chamfers generally improve the silhouette of the model, and can also help reflect specular highlights better. &lt;br /&gt;
&lt;br /&gt;
However bevels tend to produce long thin triangles, which slow down the in-game rendering of your model. Real-time renderers have trouble rendering long thin triangles because they create a lot of sub-pixel areas to render. &lt;br /&gt;
&lt;br /&gt;
Bevels also balloon the vertex count, which can increase the transform cost and memory usage. Hard edges increase the vertex count too, but not when the edge also shares a seam in UV space. For a good explanation of the vertex count issue, see [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly].&lt;br /&gt;
&lt;br /&gt;
Using hard edges with matching UV shells tends to give better performance and better cosmetic results than using bevels. However there are differing opinions on this, see the Polycount thread &amp;quot;[http://boards.polycount.net/showthread.php?t=71760 Maya transfer maps help]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EVN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Edited Vertex Normals ====&lt;br /&gt;
If you use bevels, the shading can be improved by editing the vertex normals so that the larger flat surfaces have perpendicular normals. The vertex normals are then forced to blend across the smaller bevel faces, instead of across the larger faces. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66139 Superspecular soft edges tutorial chapter 1].&lt;br /&gt;
&lt;br /&gt;
[[image:oliverio_bevel_normals.gif|frame|none|Bending normals on bevelled models. &amp;lt;br&amp;gt;From the tutorial [http://deadlineproof.com/model-shading-techniques-soft-edge-superspecular/ Shading techniques Superspecular soft edges]&amp;lt;br&amp;gt;Image by [http://deadlineproof.com/ Paolo Oliverio]]]&lt;br /&gt;
&lt;br /&gt;
== Level of Detail Models ==&lt;br /&gt;
See [http://www.polycount.com/forum/showthread.php?p=1216945#post1216945 Problem if you're using 3point-style normals with an LOD].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MTHPM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Modeling The High-Poly Mesh ==&lt;br /&gt;
[[Subdivision Surface Modeling]] and [[DigitalSculpting]] are the techniques most often used for modeling a normal map. &lt;br /&gt;
&lt;br /&gt;
Some artists prefer to model the in-game mesh first, others prefer to model the high-res mesh first, and others start somewhere in the middle. The modeling order is ultimately a personal choice, though; all three methods can produce excellent results:&lt;br /&gt;
* Build the in-game model, then up-res it and sculpt it.&lt;br /&gt;
* Build and sculpt a high resolution model, then build a new in-game model around that.&lt;br /&gt;
* Build a basemesh model, up-res and sculpt it, then step down a few levels of detail and use that as a base for building a better in-game mesh.&lt;br /&gt;
If the in-game mesh is started from one of the subdivision levels of the basemesh sculpt, various edge loops can be collapsed or new edges can be cut to add/remove detail as necessary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Sloped Extrusions ===&lt;br /&gt;
[[image:normal_slopes_hatred.jpg|frame|none|Extrusions on the high-poly model should be sloped to make a better normal map. &amp;lt;br&amp;gt;Image by [http://www.hatred.gameartisans.org/ Krzysztof &amp;quot;Hatred&amp;quot; Dolas].]]&lt;br /&gt;
&lt;br /&gt;
=== Floating Geometry ===&lt;br /&gt;
[[image:FloatingGeo.jpg|frame|none|A normal map stores the direction the surface is facing rather than real depth information, which lets you save time by using floating geometry. &amp;lt;br&amp;gt;To correctly bake AO with floating geo, make it a separate object and turn off its shadow casting. &amp;lt;br&amp;gt;Image by [http://artisaverb.info/ Andrew &amp;quot;d1ver&amp;quot; Maximov].]]&lt;br /&gt;
&lt;br /&gt;
See also [[3DTutorials/Modeling High-Low Poly Models for Next Gen Games|Modeling High/Low Poly Models for Next Gen Games]] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;ET&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Thickness ===&lt;br /&gt;
[[image:normal_edge_thickness.jpg|frame|none|When creating edges on the high-poly model, sometimes you'll need to make them more rounded than in real life to &amp;lt;br&amp;gt;work better at the size they will be seen.&amp;lt;br&amp;gt;Image by [http://racer445.com/ Evan &amp;quot;racer445&amp;quot; Herbert]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MRF&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;MRRCB&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== mental ray Round Corners Bump ===&lt;br /&gt;
The mental ray renderer offers an automatic bevel rendering effect called Round Corners Bump that can be baked into a normal map. This is available in 3ds Max, Maya, and XSI. See [http://boards.polycount.net/showthread.php?t=71995 Zero Effort Beveling for normal maps] - by [http://boards.polycount.net/member.php?u=31662 Robert &amp;quot;r_fletch_r&amp;quot; Fletcher].&lt;br /&gt;
&lt;br /&gt;
[http://jeffpatton.net/ Jeff Patton] posted about [http://jeffpatton.cgsociety.org/blog/archive/2007/10/ how to expose Round Corners Bump] in 3ds Max so you can use it in other materials.&lt;br /&gt;
&lt;br /&gt;
[http://cryrid.com/art/ Michael &amp;quot;cryrid&amp;quot; Taylor] posted a tutorial about how to use [http://cryrid.com/images/temp/XSI/zeroeffort_bevels.jpg Round Corners in XSI].&lt;br /&gt;
&lt;br /&gt;
XSI is able to bake a good normal map with it, but 3ds Max seems to bake it incorrectly, and Maya isn't able to bake the effect at all. Max might be able to bake it correctly if the .mi shader is edited to use the correct coordinate space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;Baking&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;B&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Baking ==&lt;br /&gt;
The process of transferring normals from the high-res model to the in-game model is often called baking. The baking tool usually starts projecting a certain numerical distance out from the low-poly mesh, and sends rays inwards towards the high-poly mesh. When a ray intersects the high-poly mesh, it records the mesh's surface normal and saves it in the normal map.&lt;br /&gt;
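The ray-casting idea can be illustrated with a toy 1D sketch in Python (this is only an illustration, not any particular baker's algorithm): treat the high-poly surface as a height profile and the low-poly surface as a flat line; each texel casts a ray, and a hit records the high-poly slope as a normal.&lt;br /&gt;

```python
import math

def bake_profile(low_y, high, ray_dist=1.0, samples=8):
    """Toy 1D 'bake': from a flat low-poly line at height low_y,
    cast vertical rays to a high-poly height profile high(x) and
    record the high-poly surface normal at each hit."""
    normals = []
    eps = 1e-4
    for i in range(samples):
        x = (i + 0.5) / samples          # texel center in 0..1
        if abs(high(x) - low_y) > ray_dist:
            normals.append(None)         # ray miss -> background color
            continue
        # slope of the heightfield via central differences
        slope = (high(x + eps) - high(x - eps)) / (2 * eps)
        nx, ny = -slope, 1.0             # unnormalized 1D surface normal
        length = math.hypot(nx, ny)
        normals.append((nx / length, ny / length))
    return normals
```

A real baker works in 3D, casts along interpolated vertex normals or a cage rather than straight down, and discards hits beyond the ray distance, which is what the distance setting controls.&lt;br /&gt;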
&lt;br /&gt;
To get an understanding of how all the options affect your normal map, do some test bakes on simple meshes like boxes. They generate quickly so you can experiment with [[#UVCoordinates|UV mirroring]], [[#SGAHE|smoothing groups]], etc. This helps you learn the settings that really matter.&lt;br /&gt;
* The tutorial [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] has more examples of ray-casting, plus how to get better results from the bake.&lt;br /&gt;
&lt;br /&gt;
Baking sub-sections:&lt;br /&gt;
# [[#Anti-Aliasing|Anti-Aliasing]]&lt;br /&gt;
# [[#Baking_Transparency|Baking Transparency]]&lt;br /&gt;
# [[#Edge_Padding|Edge Padding]]&lt;br /&gt;
# [[#High_Poly_Materials|High Poly Materials]]&lt;br /&gt;
# [[#Reset_Transforms|Reset Transforms]]&lt;br /&gt;
# [[#Solving_Intersections|Solving Intersections]]&lt;br /&gt;
# [[#Solving_Pixel_Artifacts|Solving Pixel Artifacts]]&lt;br /&gt;
# [[#Solving_Wavy_Lines|Solving Wavy Lines]]&lt;br /&gt;
# [[#Triangulating|Triangulating]]&lt;br /&gt;
# [[#Working_with_Cages|Working with Cages]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Anti-Aliasing ===&lt;br /&gt;
Turning on super-sampling or anti-aliasing (or whatever multi-ray casting is called in your normal map baking tool) will help fix jagged edges where the high-res model overlaps itself within the UV borders of the low-poly mesh, or wherever the background shows through holes in the mesh. Unfortunately this renders much more slowly and uses more memory.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_aliasing_knak47.jpg|frame|none|A bake without anti-aliasing shows artifacts where the high-poly mesh has overlaps. &amp;lt;br&amp;gt;Image by [http://www.polycount.com/forum/member.php?u=35938 'knak47']]]&lt;br /&gt;
&lt;br /&gt;
One trick to speed this up is to render at twice the intended size, then scale the normal map down 50% in a paint program like Photoshop. The pixel resampling during reduction adds anti-aliasing very quickly. After scaling, make sure to re-normalize the map if your game doesn't do that already, because un-normalized pixels in your normal map may cause pixel artifacts in your specular highlights. Re-normalizing can be done with [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA's normal map filter] for Photoshop.&lt;br /&gt;
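The downscale-then-renormalize step can be sketched in Python (a minimal 2x2 box-filter reduction on a small grid of (r,g,b) tuples, assuming the common unsigned encoding where 0..255 maps to -1..1):&lt;br /&gt;

```python
import math

def decode(c):                   # unsigned 8-bit channel -> -1..1
    return c / 255.0 * 2.0 - 1.0

def encode(f):                   # -1..1 -> unsigned 8-bit channel
    return round((f * 0.5 + 0.5) * 255.0)

def downsample_renormalize(img):
    """Average 2x2 blocks of an RGB normal map (rows of (r,g,b) tuples),
    then re-normalize each averaged normal to unit length."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            block = [img[y + dy][x + dx] for dy in (0, 1) for dx in (0, 1)]
            n = [sum(decode(p[i]) for p in block) / 4.0 for i in range(3)]
            length = math.sqrt(sum(v * v for v in n)) or 1.0
            row.append(tuple(encode(v / length) for v in n))
        out.append(row)
    return out
```

A paint program uses better resampling filters than a box average, but the re-normalize step afterwards is the same idea.&lt;br /&gt;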
&lt;br /&gt;
3ds Max's supersampling doesn't work well with edge padding; it produces dark streaks in the padded pixels. If this happens, turn off padding and re-add it later, either by re-baking without supersampling or by using a Photoshop filter like the one that comes with [[#3DTools|Xnormal]].&lt;br /&gt;
&lt;br /&gt;
=== Baking Transparency ===&lt;br /&gt;
Sometimes you need to bake a normal map from an object that uses opacity maps, like a branch with opacity-mapped leaves. Unfortunately baking apps often completely ignore any transparency mapping on your high-poly mesh.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[image:JoeWilson_ivynormals_error.jpg]] &lt;br /&gt;
|[[image:JoeWilson_ivynormals_rendered.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|3ds Max's RTT baker causes transparency errors.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|The lighting method bakes perfect transparency.&amp;lt;br&amp;gt;image by [http://www.linkedin.com/in/earthquake Joe &amp;quot;EarthQuake&amp;quot; Wilson]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
To solve this, render a Top view of the mesh. This only works if you're using a planar UV projection for your low-poly mesh and you're baking a tangent-space normal map.&lt;br /&gt;
&lt;br /&gt;
* Make sure the Top view matches the dimensions of the planar UV projection used by the low-poly mesh. It helps to use an orthographic camera for precise placement.&lt;br /&gt;
* On the high-poly mesh, either use a specific lighting setup or use a special material shader:&lt;br /&gt;
** The lighting setup is described in these tutorials:&lt;br /&gt;
*** [http://www.bencloward.com/tutorials_normal_maps11.shtml Creating A Normal Map Right In Your 3D App] by [http://www.bencloward.com/ Ben Cloward]&lt;br /&gt;
*** [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy], Graphics Techniques Consultant, Xbox Content and Design Team&lt;br /&gt;
** The material shader does the same thing, but doesn't require lights:&lt;br /&gt;
*** [http://www.scriptspot.com/3ds-max/normaltexmap NormalTexMap] scripted map for 3ds Max by [http://www.scriptspot.com/users/dave-locke Dave Locke]&lt;br /&gt;
*** [http://www.footools.com/3dsmax_plugins.html InfoTexture] map plugin for 3ds Max by [http://www.footools.com John Burnett]&lt;br /&gt;
&lt;br /&gt;
[[image:BenCloward_NormalMapLighting.gif|frame|none|The lighting setup for top-down rendering. &amp;lt;br&amp;gt;Image by [http://www.bencloward.com Ben Cloward]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;EP&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Edge Padding ===&lt;br /&gt;
If a normal map doesn't have enough [[Edge_padding|edge padding]], shading seams will appear on the UV borders.&lt;br /&gt;
&lt;br /&gt;
=== High Poly Materials ===&lt;br /&gt;
3ds Max will not bake a normal map properly if the high-res model has a mental ray Arch &amp;amp; Design material applied. If your normal map comes out mostly blank, either use a Standard material or none at all. For an example see the Polycount thread [http://www.polycount.com/forum/showthread.php?t=74792 Render to Texture &amp;gt;:O].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Reset Transforms ===&lt;br /&gt;
Before baking, make sure your low-poly model's transforms have been reset. '''''This is very important!''''' Often during the modeling process a model will be rotated and scaled, but these compounded transforms can create a messy local &amp;quot;space&amp;quot; for the model, which in turn often creates rendering errors for normal maps. &lt;br /&gt;
&lt;br /&gt;
In 3ds Max, use the Reset Xforms utility then Collapse the Modifier Stack. In Maya use Freeze Transformation. In XSI use the Freeze button.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Intersections ===&lt;br /&gt;
The projection process often causes problems like ray misses, overlaps, or intersections. It can be difficult to generate a clean normal map in areas where the high-poly mesh intersects or nearly intersects itself, like between the fingers of a hand. Setting the ray distance too large will make the baker pick the other finger as the source normal, while setting it too small will cause problems elsewhere on the mesh, where the distances between the in-game mesh and the high-poly mesh are greater.&lt;br /&gt;
&lt;br /&gt;
Fortunately there are several methods for solving these problems.&lt;br /&gt;
&lt;br /&gt;
# Change the shape of the cage. Manually edit points on the projection cage to help solve tight bits like the gaps between fingers.&lt;br /&gt;
# Limit the projection to matching materials, or matching UVs.&lt;br /&gt;
# Explode the meshes. See the polycount thread [http://boards.polycount.net/showthread.php?t=62921 Explode script needed (for baking purposes)].&lt;br /&gt;
# Bake two or more times using different cage sizes, and combine them in Photoshop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SPA&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Pixel Artifacts ===&lt;br /&gt;
[[image:filterMaps_artifact.jpg|frame|none|Random pixel artifacts in the bake. &amp;lt;br&amp;gt;Image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
If you are using 3ds Max's ''Render To Texture'' to bake from one UV layout to another, you may see stray pixels scattered across the bake. This only happens if you are using a copy of the original mesh in the Projection, and that mesh is using a different UV channel than the original mesh.&lt;br /&gt;
&lt;br /&gt;
There are two solutions for this:&lt;br /&gt;
&lt;br /&gt;
* Add a Push modifier to the copied mesh, and set it to a low value like 0.01.&lt;br /&gt;
- or -&lt;br /&gt;
&lt;br /&gt;
* Turn off ''Filter Maps'' in the render settings (Rendering menu &amp;gt; Render Setup &amp;gt; Renderer tab &amp;gt; uncheck Filter Maps). To prevent aliasing you may want to enable the Global Supersampler in Render Setup.&lt;br /&gt;
&lt;br /&gt;
See also [[#Anti-Aliasing]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SWL&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Solving Wavy Lines ===&lt;br /&gt;
When capturing from a cylindrical shape, often the differences between the low-poly mesh and the high-poly mesh will create a wavy edge in the normal map. There are a couple ways to avoid this:&lt;br /&gt;
&lt;br /&gt;
# The best way... create your lowpoly model with better supporting edges. See the Polycount threads [http://www.polycount.com/forum/showthread.php?t=81154 Understanding averaged normals and ray projection/Who put waviness in my normal map?], [http://boards.polycount.net/showthread.php?t=55754 approach to techy stuff], [http://www.polycount.com/forum/showthread.php?t=72713 Any tips for normal mapping curved surface?].&lt;br /&gt;
# Adjust the shape of the cage to influence the directions the rays will be cast. Beware... this work will have to be re-done every time you edit the lowpoly mesh, as the cage will be invalidated. At the bottom of [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm this page of his normal map tutorial], [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to do this in 3ds Max. The same method can be seen in the image below.&lt;br /&gt;
# Subdivide the low-res mesh so it more closely matches the high-res mesh. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] has a [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa video tutorial] that shows how to do this in Maya.&lt;br /&gt;
# Paint out the wavy line.  Beware... this work will have to be re-done every time you re-bake the normal map. The [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
# Use a separate planar-projected mesh for the details that wrap around the barrel area, so the ray-casting is more even. Beware... this will cause the normal map not to match your lowpoly vertex normals, probably causing shading errors. For example to add tread around a tire, the tread can be baked from a tread model that is laid out flat, then that bake can be layered onto the bake from the cylindrical tire mesh in a paint program.&lt;br /&gt;
&lt;br /&gt;
[[image:timothy_evison_normalmap_projections.jpg|frame|none|Adjusting the shape of the cage to remove distortion. &amp;lt;br&amp;gt;Image by [http://users.cybercity.dk/~dsl11905/resume/resume.html Timothy &amp;quot;tpe&amp;quot; Evison]]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;TRI&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Triangulating ===&lt;br /&gt;
Before baking, it is usually best to triangulate the low-poly model, converting it from polygons into pure triangles. This prevents the vertex normals from being changed later on, which can create specular artifacts.&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_modo_ohare.jpg|frame|none| When quads are triangulated in [http://www.luxology.com/modo/ Modo], the internal edges are sometimes flipped, which causes shading differences.&amp;lt;br&amp;gt;Image by [http://www.farfarer.com/ James &amp;quot;Talon&amp;quot; O'Hare]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Sometimes a baking tool or a mesh exporter/importer will re-triangulate the polygons. A quad polygon is actually treated as two triangles, and the internal edge between them is often switched diagonally during modeling operations. When the vertices of the quad are moved around in certain shapes, the software's algorithm for polygon models tries to keep the quad surface in a &amp;quot;rational&amp;quot; non-overlapping shape. It does this by switching the internal edge between its triangles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:triangulation_spec_tychovii.jpg|frame|none| The specular highlight is affected by triangulation. Flip edges to fix skewing. See the Polycount thread [http://boards.polycount.net/showthread.php?t=66651 Skewed Specular Highlight?] for pictures and more info.&amp;lt;br&amp;gt; Image by [http://robertkreps.com Robert &amp;quot;TychoVII&amp;quot; Kreps]]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;WWC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Cages ===&lt;br /&gt;
''Cage'' has two meanings in the normal-mapping process: a low-poly base for [[subdivision surface modeling]] (usually called the [[DigitalSculpting#BM|basemesh]]), or a ray-casting mesh used for normal map baking. This section covers the ray-casting cage.&lt;br /&gt;
&lt;br /&gt;
Most normal map baking tools allow you to use a distance-based raycast. A ray is sent outwards along each vertex normal; then, at the distance you set, a ray is cast back inwards. Wherever that ray intersects the high-poly mesh, it samples the normals from it. &lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|[[Image:Normalmap_raycasting_1.jpg]] &lt;br /&gt;
|[[Image:Normalmap_raycasting_2.jpg]]&lt;br /&gt;
|-&lt;br /&gt;
|Hard edges and a distance-based raycast (gray areas) cause ray misses (yellow) and ray overlaps (cyan).&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño]&lt;br /&gt;
|The gray area shows that using all soft edges (or hard edges and a cage-based raycast) will avoid ray-casting errors from split normals.&amp;lt;br&amp;gt; Image by [http://www.mankua.com/ Diego Castaño] &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Unfortunately with a distance-based raycast, [[#SGAHE|split vertex normals]] will cause the bake to miss parts of the high-res mesh, causing errors and seams. &lt;br /&gt;
&lt;br /&gt;
Some software allows you to use a ''cage mesh'' option instead, which basically inflates a copy of the low-poly mesh, then raycasts inwards from each vertex. This ballooned-out copy is the cage.&lt;br /&gt;
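Building such a cage amounts to pushing each vertex outward along its vertex normal; a minimal sketch in Python (the vertex normals are assumed to be unified/averaged per vertex, so the cage stays welded at hard edges):&lt;br /&gt;

```python
def inflate_cage(verts, normals, distance):
    """Build a simple cage by pushing each low-poly vertex outward along
    its unit-length vertex normal. Normals are assumed averaged (unified)
    per vertex, so the cage does not split at hard edges."""
    return [tuple(v + n * distance for v, n in zip(vert, nrm))
            for vert, nrm in zip(verts, normals)]
```

Baking tools then let you push individual cage vertices further, which is how the intersection and waviness fixes above are done.&lt;br /&gt;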
&lt;br /&gt;
&lt;br /&gt;
In 3ds Max the cage controls both the distance and the direction of the raycasting. &lt;br /&gt;
&lt;br /&gt;
In Maya the cage only controls the distance; the ray direction matches the vertex normals (inverted).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;text-decoration: line-through&amp;quot;&amp;gt; This may have been fixed in the latest release...&amp;lt;br&amp;gt;&lt;br /&gt;
In Xnormal the cage is split everywhere the model has [[#SGAHE|hard edges]], causing ray misses in the bake. You can fix the hard edge split problem but it involves an overly complex workflow. You must also repeat the whole process any time you change your mesh:&amp;lt;/span&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Load the 3d viewer.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Turn on the cage editing tools.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Select all of the vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Weld all vertices.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Expand the cage as you normally would.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Save out your mesh using the Xnormal format.&amp;lt;/s&amp;gt;&lt;br /&gt;
# &amp;lt;s&amp;gt; Make sure Xnormal is loading the correct mesh.&amp;lt;/s&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Painting&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Painting ==&lt;br /&gt;
Don't be afraid to edit normal maps in Photoshop. After all it is just a texture, so you can clone, blur, copy, blend all you want... as long as it looks good of course. Some understanding of [[#RGBChannels|the way colors work]] in normal maps will go a long way in helping you paint effectively.&lt;br /&gt;
&lt;br /&gt;
A normal map sampled from a high-poly mesh will nearly always be better than one sampled from a texture, since you're actually grabbing &amp;quot;proper&amp;quot; normals from an accurate, highly detailed surface. That means your normal map's pixels will basically be recreating the surface angles of your high-poly mesh, resulting in a very believable look.&lt;br /&gt;
&lt;br /&gt;
If you only convert an image into a normal map, it can look very flat, and in some cases it can be completely wrong unless you're very careful about your value ranges. Most image conversion tools assume the input is a heightmap, where black is low and white is high. If you try to convert a diffuse texture that you've painted, the results are often very poor. The best results usually come from baking the large and mid-level details from a high-poly mesh, then combining them with photo-sourced &amp;quot;fine detail&amp;quot; normals for surface details such as fabric weave, scratches, and grain.&lt;br /&gt;
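The heightmap assumption can be made concrete with a small Python sketch of the conversion these tools roughly perform (central differences on a grayscale grid; real tools vary in filter kernel, strength scaling, and green-channel sign, so treat this as an illustration):&lt;br /&gt;

```python
import math

def height_to_normals(height, strength=1.0):
    """Convert a grayscale heightmap (0..1 floats, black=low, white=high)
    into encoded tangent-space normals via central differences."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # clamp at the borders instead of wrapping
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            n = (-dx * strength, -dy * strength, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(round((c / length * 0.5 + 0.5) * 255) for c in n))
        out.append(row)
    return out
```

This is why a diffuse texture converts so poorly: its dark and light values encode color and shadow, not height, so the derivatives produce normals that have nothing to do with the real surface.&lt;br /&gt;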
&lt;br /&gt;
Sometimes creating a high-poly surface takes more time than your budget allows. For characters or significant environment assets it is the best route, but for less significant environment surfaces, working from a heightmap-based texture provides a good enough result for a much smaller investment of time.&lt;br /&gt;
&lt;br /&gt;
* [http://crazybump.com/ CrazyBump] is a commercial normal map converter.&lt;br /&gt;
* [http://www.renderingsystems.com/support/showthread.php?tid=3 ShaderMap] is a commercial normal map converter.&lt;br /&gt;
* [http://www.pixplant.com/ PixPlant] is a commercial normal map converter.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=68860 NJob] is a free normal map converter.&lt;br /&gt;
* [http://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop NVIDIA normalmap filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://xnormal.net Xnormal height-to-normals filter for Photoshop] is a free normal map converter.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_3.htm Normal map process tutorial] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] includes an example of painting out wavy lines in a baked normal map.&lt;br /&gt;
&lt;br /&gt;
=== Flat Color ===&lt;br /&gt;
The color (128,128,255) creates normals that are completely perpendicular to the polygon, as long as the vertex normals are also perpendicular. Remember a normal map's per-pixel normals create ''offsets'' from the vertex normals. If you want an area in the normal map to be flat, so it creates no offsets from the vertex normals, then use the color (128,128,255). &lt;br /&gt;
&lt;br /&gt;
This becomes especially obvious when [[#Mirroring|mirroring a normal map]] and using a shader with a reflection component. Reflection tends to accentuate the angles between the normals, so any errors become much more apparent.&lt;br /&gt;
&lt;br /&gt;
[[image:normalmap_127seam.jpg|thumb|600px|none| Mirrored normal maps show a seam when (127,127,255) is used for the flat color; 128 is better.&amp;lt;br&amp;gt;Image by [http://www.ericchadwick.com Eric Chadwick]]]&lt;br /&gt;
&lt;br /&gt;
In a purely logical way, 127 seems like it would be the halfway point between 0 and 255. However 128 is the color that actually works in practice. When a test is done comparing (127,127,255) versus (128,128,255) it becomes obvious that 127 creates a slightly bent normal, and 128 creates a flat one.&lt;br /&gt;
&lt;br /&gt;
This is because most game pipelines use ''unsigned'' normal maps. For details see the Polycount forum thread [http://www.polycount.com/forum/showpost.php?p=771360&amp;amp;postcount=22 tutorial: fixing mirrored normal map seams].&lt;br /&gt;
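A small sketch of the common unsigned decode shows why the midpoint is awkward (note that under this particular mapping neither 127 nor 128 decodes to exactly zero; 128 is simply the value shaders and mirroring conventions treat as flat, matching the test above):&lt;br /&gt;

```python
def decode_unsigned(c):
    """The common unsigned decode: an 8-bit channel 0..255 -> -1..1."""
    return c / 255.0 * 2.0 - 1.0

# The true midpoint (127.5) is not representable in 8 bits:
# 127 decodes to roughly -0.0039 and 128 to roughly +0.0039.
```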
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BNMT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Blending Normal Maps Together ===&lt;br /&gt;
Blending normal maps together is a quick way to add high-frequency detail like wrinkles, cracks, and the like. Fine details can be painted as a height map, then converted into a normal map using one of the normal map tools. This &amp;quot;details&amp;quot; normal map can then be blended with a geometry-derived normal map using one of the methods below. &lt;br /&gt;
&lt;br /&gt;
Here is a comparison of four of the blending methods. Note that in these examples the default values were used for CrazyBump (Intensity 50, Strength 33, Strength 33), but the tool allows each layer's strength to be adjusted individually for stronger or milder results. Each of the normal maps below were [[#Renormalizing|re-normalized]] after blending.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
| [[Image:NormalMap$nrmlmap_blending_methods_Maps.png]]&lt;br /&gt;
|-&lt;br /&gt;
| The blended normal maps.&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.ericchadwick.com Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The four blending methods used above:&lt;br /&gt;
# [http://www.crazybump.com CrazyBump] by Ryan Clark blends normal maps together using calculations in 3D space rather than just in 2D. This does probably the best job at preserving details, and each layer's strength settings can be tweaked individually. &lt;br /&gt;
# [http://www.rodgreen.com/?p=4 Combining Normal Maps in Photoshop] by Rod Green blends normal maps together using Linear Dodge mode for the positive values and Difference mode for the negative values, along with a Photoshop Action to simplify the process. It's free, but the results may be less accurate than CrazyBump.&lt;br /&gt;
# [http://www.paultosca.com/makingofvarga.html Making of Varga] by [http://www.paultosca.com/ Paul &amp;quot;paultosca&amp;quot; Tosca] blends normal maps together using Overlay mode for the red and green channels and Multiply mode for the blue channel. This gives a slightly stronger bump than the Overlay-only method. [http://www.leocov.com/ Leo &amp;quot;chronic&amp;quot; Covarrubias] has a step-by-step tutorial for this method in [http://www.cgbootcamp.com/tutorials/2009/12/9/photoshop-combine-normal-maps.html CG Bootcamp Combine Normal Maps].&lt;br /&gt;
# [[3DTutorials/Normal Map Deepening|Normal Map Deepening]] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to blend normal maps together using Overlay mode. [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap CGTextures tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] also shows how to create normalmaps using multiple layers (Note: to work with the Overlay blend mode each layer's Output Level should be 128 instead of 255, you can use the Levels tool for this).&lt;br /&gt;
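Method 3 above (Overlay on red/green, Multiply on blue) can be sketched per-pixel in Python; channel values are assumed to be 0..255, and as noted above the result should still be [[#Renormalizing|re-normalized]] afterwards:&lt;br /&gt;

```python
def overlay(a, b):
    """Photoshop-style Overlay blend for channels in 0..1."""
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

def blend_normal_maps(base, detail):
    """Blend lists of normal-map pixels ((r,g,b) in 0..255):
    Overlay on red/green, Multiply on blue."""
    out = []
    for (r1, g1, b1), (r2, g2, b2) in zip(base, detail):
        r = overlay(r1 / 255.0, r2 / 255.0)
        g = overlay(g1 / 255.0, g2 / 255.0)
        b = (b1 / 255.0) * (b2 / 255.0)
        out.append(tuple(round(c * 255.0) for c in (r, g, b)))
    return out
```

Because this works in 2D channel space rather than rotating the detail normals onto the base surface, it is an approximation; that is why tools like CrazyBump, which blend in 3D, preserve detail better.&lt;br /&gt;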
&lt;br /&gt;
The [http://boards.polycount.net/showthread.php?t=69615 Getting good height from Nvidia-filter normalizing grayscale height] thread on the Polycount forum has a discussion of different painting/blending options. Also see the [[#2DT|2D Tools]] section for painting and conversion tools.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;PCT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Pre-Created Templates ===&lt;br /&gt;
A library of shapes can be developed and stored for later use, to save creation time for future normal maps. Things like screws, ports, pipes, and other doo-dads. These shapes can be stored as bitmaps with transparency so they can be layered into baked normal maps.&lt;br /&gt;
&lt;br /&gt;
* [http://www.beautifulrobot.com/?p=69 Creating &amp;amp; Using NormalMap &amp;quot;Widgets&amp;quot;] - by ''[http://www.beautifulrobot.com Steev &amp;quot;kobra&amp;quot; Kelly]''&amp;lt;br&amp;gt;How to set up and render template objects.&lt;br /&gt;
* [http://www.akramparvez.com/portfolio/scripts/normalmap-widget-for-3ds-max/ NormalMap Widget for 3ds Max] - by ''[http://www.akramparvez.com Akram Parvez]''&amp;lt;br&amp;gt;A script to automate the setup and rendering process.&lt;br /&gt;
* See the section [[#BT|Baking Transparency]] for more template-rendering tools and tutorials.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;RN&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Renormalizing&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Re-normalizing ===&lt;br /&gt;
Re-normalizing means resetting the length of each normal in the map to 1.&lt;br /&gt;
&lt;br /&gt;
A normal mapping shader takes the three color channels of a normal map and combines them to create the direction and length of each pixel's normal. These normals are then used to apply the scene lighting to the mesh. However if you edit normal maps by hand or if you blend multiple normal maps together this can cause those lengths to change. Most shaders expect the length of the normals to always be 1 (normalized), but some are written to re-normalize the normal map dynamically (for example, 3ds Max's Hardware Shaders do re-normalize).&lt;br /&gt;
&lt;br /&gt;
If the normals in your normal map are not normalized, and your shader doesn't re-normalize them either, then you may see artifacts on the shaded surface... the specular highlight may speckle like crazy, the surface may get patches of odd shadowing, etc. To help you avoid this NVIDIA's normal map filter for Photoshop provides an easy way to re-normalize a map after editing; just use the '''Normalize Only''' option. [http://xnormal.net Xnormal] also comes with a Normalize filter for Photoshop.&lt;br /&gt;
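What the Normalize filter does can be sketched per-texel in Python (assuming the common unsigned encoding where 0..255 maps to -1..1):&lt;br /&gt;

```python
import math

def renormalize(pixel):
    """Rescale one normal-map texel ((r,g,b) in 0..255, unsigned
    encoding) so the decoded vector has unit length again."""
    n = [c / 255.0 * 2.0 - 1.0 for c in pixel]
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return tuple(round((c / length * 0.5 + 0.5) * 255.0) for c in n)
```

A shortened normal such as (128,128,191) snaps back to full length, while an already-normalized texel passes through unchanged.&lt;br /&gt;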
&lt;br /&gt;
Some shaders use [[#NormalMapCompression|compressed normal maps]]. Usually this means the blue channel is thrown away completely, so it's recalculated on-the-fly in the shader. However the shader has to re-normalize in order to recreate that data, so any custom normal lengths that were edited into the map will be ignored completely. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;AOIANM&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;AmbientOcclusionIntoANormalMap&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Ambient Occlusion into a Normal Map ===&lt;br /&gt;
If the shader doesn't re-normalize the normal map, an [[Ambient Occlusion Map]] can actually be baked into the normal map. This will shorten the normals in the crevices of the surface, causing the surface to receive less light there. This works with both diffuse and specular, or any other pass that uses the normal map, like reflection.&lt;br /&gt;
&lt;br /&gt;
However it's usually best to keep the AO as a separate map (or in an alpha channel) and multiply it against the ambient lighting only. This is usually done with a custom [[:Category:Shaders|shader]]. If you multiply it against the diffuse map or normal map then it also occludes diffuse lighting, which can make the model look dirty. Ambient occlusion is best when it occludes ambient lighting only, for example a [[DiffuselyConvolvedCubeMap|diffusely convolved cubemap]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To bake the AO into a normal map, adjust the levels of the AO layer first so the darks only go as low as 128 gray, then set the AO layer to Darken mode. This will shorten the normals in the normalmap, causing the surface to receive less light in the darker areas. &lt;br /&gt;
&lt;br /&gt;
This trick doesn't work with any shaders that re-normalize, like 3ds Max's Hardware Shaders. The shader must be altered to actually use the lengths of your custom normals; most shaders just assume all normals are 1 in length because this makes the shader code simpler. Also this trick will not work with most of the common [[#NormalMapCompression|normal map compression formats]], which often discard the blue channel and recalculate it in the shader, which requires re-normalization.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;BLE&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;BacklightingExample&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Back Lighting Example ===&lt;br /&gt;
You can customize normal maps for some interesting effects. If you invert the blue channel of a tangent-space map, the normals will be pointing to the opposite side of the surface, which can simulate backlighting.&lt;br /&gt;
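The channel flip itself is trivial (assuming an unsigned 8-bit map):&lt;br /&gt;

```python
def invert_blue(pixel):
    """Flip a tangent-space normal texel to point to the back side of
    the surface by inverting the blue channel (unsigned 0..255 map)."""
    r, g, b = pixel
    return (r, g, 255 - b)
```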
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$tree_front.jpg]] &lt;br /&gt;
|-&lt;br /&gt;
| Tree simulating subsurface scattering (front view).&amp;lt;br&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The tree leaves use a shader that adds together two diffuse maps: one uses a regular tangent-space normal map, the other uses the same normal map with the blue channel inverted. The diffuse map using the regular normal map only gets lit on the side facing the light (front view), while the diffuse map using the inverted normal map only gets lit on the opposite side of the leaves (back view). The leaf geometry is 2-sided but uses the same shader on both sides, so the effect works no matter the lighting angle. As an added bonus, because the tree is self-shadowing, leaves in shadow receive no direct lighting, so their backsides do not show the inverted normal map; the fake subsurface-scatter effect only appears where the light directly hits the leaves. This wouldn't work for a whole forest because of the computational cost of self-shadowing and double normal maps, but it could be useful for a single &amp;quot;star&amp;quot; asset, or if LODs switched distant trees to a model with a cheaper shader.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;SAS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;ShadersAndSeams&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Shaders and Seams ==&lt;br /&gt;
You need to use the right kind of shader to avoid seeing seams where UV breaks occur. It must be written to use the same [[#TangentBasis|tangent basis]] that was used during baking. If the shader doesn't, the lighting will either be inconsistent across UV borders or it will show smoothing errors from the low-poly vertex normals.&lt;br /&gt;
&lt;br /&gt;
xNormal generates normal maps that display accurately in its own viewer, and the SDK includes a way to write your own custom tangent-space generator for the tool. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Shaders ===&lt;br /&gt;
The &amp;quot;Render To Texture&amp;quot; tool in 3ds Max 2011 and older generates [[#TSNM|tangent-space]] normal maps that render correctly in the offline renderer (scanline) but do not render correctly in the realtime viewport with the 3ds Max shaders. Max is using a different [[#TangentBasis|tangent basis]] for each. This is readily apparent when creating non-organic hard surface normalmaps; smoothing errors appear in the viewport that do not appear when rendered. &lt;br /&gt;
&lt;br /&gt;
The errors can be fixed by using &amp;quot;Render To Texture&amp;quot; to bake a [[#TSNM|tangent-space]] or [[#OSNM|object-space]] map, and using the free [http://www.3pointstudios.com/3pointshader_about.shtml &amp;quot;3Point Shader&amp;quot;] by Christoph '[[CrazyButcher]]' Kubisch and Per 'perna' Abrahamsen. The shader uses the same tangent basis as the baking tool, so it produces nearly flawless results. It also works with old bakes.&lt;br /&gt;
&lt;br /&gt;
You can get OK results in the Max viewport by baking a tangent-space map in Maya, loading it in a Standard material, and enabling &amp;quot;Show Hardware Map in Viewport&amp;quot;. Another method is to use Render To Texture to bake an [[#OSNM|object-space]] map, use [[#CBS|Nspace]] to convert it into a tangent-space map, then load that in a DirectX material with the RTTNormalMap.fx shader. &lt;br /&gt;
&lt;br /&gt;
Autodesk is aware of these issues, and plans to address them in an upcoming release. See these links for more information:&lt;br /&gt;
* Christoph &amp;quot;[[CrazyButcher]]&amp;quot; Kubisch and Per &amp;quot;perna&amp;quot; Abrahamsen designed a shader/modifier combination approach that fixes the viewport problem, see the Polycount forum post [http://boards.polycount.net/showthread.php?t=72861 3Point Shader Lite - Shader material editor and Quality Mode normalmaps for 3ds Max].&lt;br /&gt;
* Jean-Francois &amp;quot;jfyelle&amp;quot; Yelle, Autodesk Media &amp;amp; Entertainment Technical Product Manager, has [http://boards.polycount.net/showthread.php?p=1115812#post1115812 this post]. &lt;br /&gt;
* Ben Cloward posted [http://boards.polycount.net/showthread.php?p=1100270#post1100270 workarounds and FX code].&lt;br /&gt;
* Christopher &amp;quot;cdiggins&amp;quot; Diggins, SDK writer for 3ds Max, shares some of the SDK code in his blog posts &amp;quot;[http://area.autodesk.com/blogs/chris/how_the_3ds_max_scanline_renderer_computes_tangent_and_binormal_vectors_for_normal_mapping How the 3ds Max Scanline Renderer Computes Tangent and Binormal Vectors for Normal Mapping]&amp;quot; and &amp;quot;[http://area.autodesk.com/blogs/chris/3ds_max_normal_map_baking_and_face_angle_weighting_the_plot_thickens 3ds Max Normal Map Baking and Face Angle Weighting: The Plot Thickens]&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|  [[Image:NormalMap$max2010_normalmap_workarounds_thumb.png]] &lt;br /&gt;
|-&lt;br /&gt;
| Viewport methods in 3ds Max 2010.&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;[[attachment:max2010_normalmap_workarounds.png|Actual size]]&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;&amp;lt;span style=&amp;quot;font-size: smaller&amp;quot;&amp;gt;image by [http://www.linkedin.com/in/ericchadwick Eric Chadwick]&amp;lt;/span&amp;gt; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3MENT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3ds Max Edit Normals Trick ===&lt;br /&gt;
After baking, if you add an Edit Normals modifier to your low-poly normalmapped model, this seems to &amp;quot;relax&amp;quot; the vertex normals for more accurate viewport shading. The modifier can be collapsed if desired.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;MS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Maya Shaders ===&lt;br /&gt;
Maya seems to generate correct normals for realtime viewing, with the correct [[#TangentBasis|tangent basis]] and far fewer smoothing errors than 3ds Max. &lt;br /&gt;
* [http://www.mentalwarp.com/~brice/shader.php BRDF shader] by [http://www.mentalwarp.com/~brice/ Brice Vandemoortele] and [http://www.kjapi.com/ Cedric Caillaud] (more info in [http://boards.polycount.net/showthread.php?t=49920 this Polycount thread]) '''Update:''' [http://boards.polycount.net/showthread.php?p=821862#post821862 New version here] with many updates, including object-space normal maps, relief mapping, self-shadowing, etc. Make sure you enable cgFX shaders in the Maya plugin manager, then you can create them in the same way you create a Lambert, Phong etc. Switch OFF high quality rendering in the viewports to see them correctly too.&lt;br /&gt;
* If you want to use the software renderer, use mental ray instead of Maya's software renderer because mental ray correctly interprets tangent space normals. The Maya renderer treats the normal map as a grayscale bump map, giving nasty results. Mental ray supports Maya's Phong shader just fine (amongst others), although it won't recognise a gloss map plugged into the &amp;quot;cosine power&amp;quot; slot. The slider still works though, if you don't mind having a uniform value for gloss. Spec maps work fine though. Just use the same set up as you would for viewport rendering. You'll need to have your textures saved as TGAs or similar for mental ray to work though. - from [http://boards.polycount.net/member.php?u=14235 CheeseOnToast]&lt;br /&gt;
&amp;lt;span id=&amp;quot;NormalMapCompression&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;NMC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Normal Map Compression ==&lt;br /&gt;
[[Normal Map Compression]]&lt;br /&gt;
Normal maps can take up a lot of memory. Compression can reduce a map to 1/4 of its uncompressed size, which means you can either increase the resolution or use more maps.&lt;br /&gt;
&lt;br /&gt;
Usually the compression method is to throw away the Blue channel, because it can be re-computed at minimal cost in the shader code. Then the bitmap only has to store two color channels, instead of four (red, green, blue, and alpha).&lt;br /&gt;
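That reconstruction can be sketched for one pixel like this (plain Python; the function name and the usual 0..255 to -1..1 channel mapping are assumptions of this sketch):&lt;br /&gt;

```python
import math

def reconstruct_blue(r8, g8):
    # Decode the stored X and Y from 8-bit channel values.
    x = r8 / 127.5 - 1.0
    y = g8 / 127.5 - 1.0
    # The normal is unit length, so z follows from x*x + y*y + z*z = 1.
    # max() guards against negative values introduced by compression error.
    z = math.sqrt(max(0.0, 1.0 - x*x - y*y))
    return int(round((z + 1.0) * 127.5))

# A flat "straight up" normal stored as (128, 128) decodes to a blue of 255:
blue = reconstruct_blue(128, 128)  # 255
```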
&lt;br /&gt;
* The article [http://developer.download.nvidia.com/whitepapers/2008/real-time-normal-map-dxt-compression.pdf Real-Time Normal Map DXT Compression] (PDF) from [http://www.idsoftware.com/ id software] and [http://developer.nvidia.com NVIDIA] is an excellent introduction to compression.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;DXT5C&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== DXT5nm Compression ===&lt;br /&gt;
DXT5nm is the same file format as DXT5 except before compression the red channel is moved into the alpha channel, the green channel is left as-is, and the red and blue channels are blanked with the same solid color. This re-arranging of the normal map axes is called ''swizzling''.&lt;br /&gt;
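For a single RGBA pixel, the swizzle can be sketched like this (plain Python; the tuple layout and function name are assumptions of this sketch):&lt;br /&gt;

```python
def swizzle_dxt5nm(r, g, b):
    # X (red) moves into alpha, Y (green) stays put, and red and blue
    # are blanked with the same solid value so DXT's color endpoints
    # compress the remaining data cleanly.
    blank = 0
    return (blank, g, blank, r)  # (R, G, B, A) after swizzling

swizzled = swizzle_dxt5nm(200, 100, 50)  # (0, 100, 0, 200)
```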
&lt;br /&gt;
The Green and Alpha channels are used because in the DXT format they are compressed using somewhat higher bit depths than the Red and Blue channels. Red and Blue have to be filled with the same solid color because DXT uses a compression system that compares differences between the three color channels. If you try to store some kind of texture in Red and/or Blue (specular power, height map, etc.) then the compressor will create more compression artifacts because it has to compare all three channels.&lt;br /&gt;
&lt;br /&gt;
There are some options in the NVIDIA DXT compressor that help reduce the artifacts if you want to add texture to the Red or Blue channels. The artifacts will be greater than if you keep Red and Blue empty, but it might be a tradeoff worth making. Some notes about this on the [http://developer.nvidia.com/forums/index.php?showtopic=1366 NVIDIA Developer Forums].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;DXT1C&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== DXT1 Compression ===&lt;br /&gt;
DXT1 is also used sometimes for tangent-space normal maps, because it is half the size of a DXT5. The downside though is that it causes many more compression artifacts, so much so that most people end up not using it. &lt;br /&gt;
&lt;br /&gt;
* The blog post [http://realtimecollisiondetection.net/blog/?p=28#more-28 I like spilled beans!] by [http://realtimecollisiondetection.net/blog/?page_id=2 Christer Ericson] has a section about Capcom's clever use of DXT1 and DXT5.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DCC&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3Dc Compression ===&lt;br /&gt;
3Dc compression is also known as BC5 in DirectX 10. It works similarly to DXT5nm, in that it only stores the X and Y channels. The difference is that it stores both channels the same way DXT5 stores its Alpha channel, at a slightly higher bit depth than DXT5nm's Green channel. 3Dc yields the best results of any listed algorithm for tangent-space normal map compression, and requires no extra processing time or unique hardware. See [[3Dc]] for more information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;A8L8C&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== [[A8L8]] Compression ===&lt;br /&gt;
The DDS format A8L8 isn't actually compressed; it's just two 8bit grayscale channels (256 grays each). It does save you from having to store all three color channels, though your shader has to recompute the blue channel for it to work. However, A8L8 does not actually save any space in texture memory; it is typically converted to a four-channel 32bit texture when it's sent to the card. This format really only helps save disk space.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;L&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
=== Related Pages ===&lt;br /&gt;
* [[Curvature map]]&lt;br /&gt;
* [[DuDv map]]&lt;br /&gt;
* [[Flow map]]&lt;br /&gt;
* [[Normal map]]&lt;br /&gt;
* [[Radiosity normal map]]&lt;br /&gt;
* [[Vector displacement map]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;3DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;Tools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span id=&amp;quot;3DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 3D Tools ===&lt;br /&gt;
See [[Category:Tools#A3D_Normal_Map_Software|Category:Tools#3D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;2DT&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;2DTools&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== 2D Tools ===&lt;br /&gt;
See [[Category:Tools#A2D_Normal_Map_Software|Category:Tools#2D_Normal_Map_Software]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;T&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Tutorials&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Tutorials ===&lt;br /&gt;
* [http://area.autodesk.com/userdata/fckdata/239955/The%20Generation%20and%20Display%20of%20Normal%20Maps%20in%203ds%20Max.pdf The Generation and Display of Normal Maps in 3ds Max] (500kb PDF) &amp;lt;&amp;lt;BR&amp;gt;&amp;gt; Excellent whitepaper from Autodesk about normal mapping in 3ds Max and other apps.&lt;br /&gt;
* [http://www.katsbits.com/htm/tutorials/blender-baking-normal-maps-from-models.htm Renderbump and baking normal maps from high poly models using Blender 3D] by ''[http://www.katsbits.com/htm/about.htm &amp;quot;katsbits&amp;quot;]''&amp;lt;&amp;lt;BR&amp;gt;&amp;gt;Baking normal maps in Blender.&lt;br /&gt;
* [http://udn.epicgames.com/Three/CreatingNormalMaps.html Techniques for Creating Normal Maps] in the Unreal Developer Network's [http://udn.epicgames.com/Three/WebHome.html Unreal Engine 3 section] contains advice from [http://www.epicgames.com/ Epic Games] artists on creating normal maps for UE3. The [http://udn.epicgames.com/Three/DesignWorkflow.html#Creating%20normal%20maps%20from%20meshes Design Workflow page] has a summary.&lt;br /&gt;
* [http://www.iddevnet.com/quake4/ArtReference_CreatingModels#head-3400c230e92ff7d57424b2a68f6e0ea75dee4afa Creating Models in Quake 4] by [http://www.ravensoft.com/ Raven Software] is a comprehensive guide to creating Quake 4 characters.&lt;br /&gt;
* [http://www.svartberg.com/tutorials/article_normalmaps/normalmaps.html Normalmaps for the Technical Game Modeler] by [http://www.svartberg.com Ariel Chai] shows how low-poly smoothing and UVs can affect normal maps in Doom 3.&lt;br /&gt;
* [http://wiki.polycount.net/3D_Tutorials/Modeling_High-Low_Poly_Models_for_Next_Gen_Games Modeling High/Low Poly Models for Next Gen Games] by [http://www.acetylenegames.com/artbymasa/ João &amp;quot;Masakari&amp;quot; Costa] is an overview of modeling for normal maps.&lt;br /&gt;
* The [http://tech-artists.org/wiki/Beveling Beveling section on the Tech-Artists.Org Wiki] discusses how smoothing groups and bevels affect the topology of the low-poly model.&lt;br /&gt;
* The two-part article [http://www.ericchadwick.com/examples/provost/byf2.html#wts Beautiful, Yet Friendly] by [http://www.linkedin.com/in/gprovost Guillaume Provost] explains how smoothing groups and other mesh attributes cause vertices to be duplicated in the game. The vertex count is actually what matters in-game, not the triangle or poly count.&lt;br /&gt;
* [http://www.poopinmymouth.com/tutorial/normal_workflow_2.htm Normal map workflow] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] demonstrates his normal mapping workflow in 3ds Max and Photoshop.&lt;br /&gt;
* [http://dodownload.filefront.com/9086954//72f71c0147df53765045a22253c18361a29a6d532425842007ead644d39cbb85d0794ab560365cfa This video tutorial] by [http://www.custom-airbrush.com/ Jeff &amp;quot;airbrush&amp;quot; Ross] shows in Maya how to subdivide the low-poly mesh so it more closely matches the high-poly mesh, to help solve wavy lines in the bake.&lt;br /&gt;
* [http://www.bencloward.com/tutorials_normal_maps1.shtml Normal Mapping Tutorial] by [http://www.bencloward.com/ Ben Cloward] is a comprehensive tutorial about the entire normal map creation process.&lt;br /&gt;
* [http://www.pinwire.com/articles/26/1/Generating-High-Fidelity-Normal-Maps-with-3-D-Software.html Generating High Fidelity Normal Maps with 3-D Software] by [http://www.linkedin.com/pub/0/277/4AB Dave McCoy] shows how to use a special lighting setup to render normal maps (instead of baking them).&lt;br /&gt;
* [http://cgtextures.com/content.php?action=tutorial&amp;amp;name=normalmap Tutorial for the NVIDIA Photoshop filter] by [http://hirezstudios.com/ Scott Warren] shows how to create deep normal maps using multiple layers. Note: to use Overlay blend mode properly, make sure to change each layer's Levels ''Output Level'' to 128 instead of 255.&lt;br /&gt;
* [http://www.poopinmymouth.com/process/tips/normalmap_deepening.jpg Normalmap Deepening] by [http://www.poopinmymouth.com/ Ben &amp;quot;poopinmymouth&amp;quot; Mathis] shows how to adjust normal maps, and how to layer together painted and baked normal maps.&lt;br /&gt;
* [http://boards.polycount.net/showthread.php?t=51088 Tutorial for painting out seams on mirrored tangent-space normal maps] by [http://www.warbeast.de/ warby] helps to solve seams along horizontal or vertical UV edges, but not across angled UVs.&lt;br /&gt;
* [http://planetpixelemporium.com/tutorialpages/normal.html Cinema 4D and Normal Maps For Games] by [http://planetpixelemporium.com/index.php James Hastings-Trew] describes normal maps in plain language, with tips on creating them in Cinema 4D.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=39&amp;amp;t=359082 3ds Max normal mapping overview] by [http://www.alan-noon.com/ Alan Noon] is a great thread on CGTalk about the normal mapping process.&lt;br /&gt;
* [http://forums.cgsociety.org/showthread.php?f=46&amp;amp;t=373024 Hard Surface Texture Painting] by [http://stefan-morrell.cgsociety.org/gallery/ Stefan Morrell] is a good introduction to painting textures for metal surfaces.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;D&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span id=&amp;quot;Discussion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Discussion ===&lt;br /&gt;
[http://boards.polycount.net/showthread.php?p=820218 Discuss this page on the Polycount forums]. Suggestions welcome.&lt;br /&gt;
&lt;br /&gt;
Even though only one person has been editing this page so far, the information here was gathered from many different sources. We wish to thank all the contributors for their hard-earned knowledge. It is much appreciated! [http://wiki.polycount.net {{http://boards.polycount.net/images/smilies/pcount/icons/smokin.gif}}]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Texturing]] [[Category:TextureTypes]] [[Category:Bump map]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Lit_Sphere</id>
		<title>Lit Sphere</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Lit_Sphere"/>
				<updated>2014-11-26T19:50:19Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{stub}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Template:Stub</id>
		<title>Template:Stub</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Template:Stub"/>
				<updated>2014-11-26T19:50:12Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;pre&amp;gt;This page is a stub.  YOU can help the Polycount Community by expanding it!&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Template:Stub</id>
		<title>Template:Stub</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Template:Stub"/>
				<updated>2014-11-26T19:49:58Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;pre&amp;gt;This page is a stub.  '''YOU''' can help the Polycount Community by expanding it!&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Template:Stub</id>
		<title>Template:Stub</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Template:Stub"/>
				<updated>2014-11-26T19:49:33Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;pre&amp;gt;This page is a stub.  '''YOU''' can help the Polycount Community by expanding the information here!&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Template:Stub</id>
		<title>Template:Stub</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Template:Stub"/>
				<updated>2014-11-26T19:35:15Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Created page with &amp;quot;This page is a stub.  '''YOU''' can help the Polycount Community by expanding the information here!&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is a stub.  '''YOU''' can help the Polycount Community by expanding the information here!&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Lit_Sphere</id>
		<title>Lit Sphere</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Lit_Sphere"/>
				<updated>2014-11-26T19:32:38Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Lerp</id>
		<title>Lerp</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Lerp"/>
				<updated>2014-11-26T19:30:34Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LERP is an abbreviation for Linear Interpolation. &lt;br /&gt;
&lt;br /&gt;
When you set a Photoshop layer to Normal blending mode and it has transparency, then it is &amp;quot;lerping&amp;quot; with the layers below it.&lt;br /&gt;
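A minimal sketch of the math (plain Python), where t plays the role of the layer's opacity:&lt;br /&gt;

```python
def lerp(a, b, t):
    # Linear interpolation: blend a toward b by fraction t (0 to 1).
    return a * (1.0 - t) + b * t

# A top-layer value of 10 at 25% opacity over a bottom value of 0:
blended = lerp(0.0, 10.0, 0.25)  # 2.5
```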
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Lerp</id>
		<title>Lerp</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Lerp"/>
				<updated>2014-11-26T19:29:57Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;LERP is an abbreviation for Linear Interpolation. &lt;br /&gt;
&lt;br /&gt;
When you set a Photoshop layer to Normal blending mode and it has transparency, then it is &amp;quot;lerping&amp;quot; with the layers below it.&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Transparency</id>
		<title>Transparency</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Transparency"/>
				<updated>2014-11-26T09:51:10Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Additive Transparency =&lt;br /&gt;
The flames on this burning bed are using additive transparency to keep the colors &amp;quot;hot.&amp;quot; &lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
Image:Transparency.gif|in-engine&lt;br /&gt;
Image:flames.gif|map&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [[additive color model]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alpha Transparency =&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
Image:crowd.gif|A texture using alpha transparency in RT3D.&lt;br /&gt;
Image:crowd_rgb.gif|The RGB part of the texture file.&lt;br /&gt;
alpha.gif|The alpha channel of the texture file, in 8bit (256 colors).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Alpha Bit Depths =&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
alpha_8bit.gif|A closeup of the 8bit (256 colors) alpha channel. This is the highest bit depth used for alpha channels, because you can get a full range of grays with 256 colors. If we had a higher bit depth like 16bit (65536 colors), you would see the alpha looking a little bit smoother, but because texture filtering is so common now, it ends up softening your 8bit alpha anyway, and it looks fine.&lt;br /&gt;
alpha_4bit.gif|A closeup of a 4bit (16 colors) version of the alpha channel. Still a lot of detail, but starting to break up some around the edges. This is a much smaller file than the 8bit alpha, which is good because it takes up much less memory. A good trade off.&lt;br /&gt;
alpha_1bit.gif|A closeup of a 1bit (2 colors) version of the alpha channel. 1bit means only black and white, so there's no anti-aliasing. This is a very small file-- the visual quality suffers, but it saves a lot of memory. Not worth the degradation unless you really need the memory.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Subtractive Transparency =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
subtractiveT.gif|In Engine&lt;br /&gt;
xray_hand.gif|Map&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The x-rays on this light-table use subtractive transparency to make things under them darker, the way real x-rays do. The subtractive method isn't used all that often, so if you need it you should ask your programmer(s) if they can add it as a specific feature of the engine. &lt;br /&gt;
&lt;br /&gt;
See [[Subtractive Color Model]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Subtractive_Color_Model</id>
		<title>Subtractive Color Model</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Subtractive_Color_Model"/>
				<updated>2014-11-26T09:49:24Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:subtractive.gif|thumb|Subtractive Color Model]]&lt;br /&gt;
= Subtractive Color Model =&lt;br /&gt;
In the subtractive color model, magenta, yellow, cyan, and black are the primary colors. Together they are called CMYK; the K stands for the &amp;quot;key&amp;quot; plate, which prints black (and conveniently avoids confusion with the B in RGB). Mixing cyan, yellow, and magenta together creates a dark muddy brown, which is why black was added as the fourth primary color, to get clean blacks. &lt;br /&gt;
You subtract to get white. To get a lighter color use less of each color, or to get a darker color use more of each color. Subtractive is the color model used for working with pigments, as in painting and color printing. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the subtractive color model governs how colors are blended together, like with [[transparency]] and [[TextureBlending|texture blending]]. Since CMYK are the primary colors, they help describe what subtractive means, but you can use any colors. In fact, since RT3D engines display on a computer screen, you are really in the end just using the additive color model. The subtractive color model can only be simulated in RT3D, to get certain effects. &lt;br /&gt;
&lt;br /&gt;
See [[Color Models|color models]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T09:48:39Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page ColorModels to Color Models&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{:Additive_color_model}}&lt;br /&gt;
&lt;br /&gt;
{{:Subtractive_Color_Model}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/ColorModels</id>
		<title>ColorModels</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/ColorModels"/>
				<updated>2014-11-26T09:48:39Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page ColorModels to Color Models&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Color Models]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T09:48:09Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{:Additive_color_model}}&lt;br /&gt;
&lt;br /&gt;
{{:Subtractive_Color_Model}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Additive_color_model</id>
		<title>Additive color model</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Additive_color_model"/>
				<updated>2014-11-26T09:47:32Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:additive.jpg|thumb|Additive Color Model]]&lt;br /&gt;
= Additive Color Model =&lt;br /&gt;
In the additive color model, red, green, and blue ([[RGB]]) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light. You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space. &lt;br /&gt;
&lt;br /&gt;
See also [[SubtractiveColorModel|subtractive color model]], [[TextureBlending|texture blending]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T09:45:35Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:additive.jpg|thumb|Additive Color Model]]&lt;br /&gt;
= Additive Color Model =&lt;br /&gt;
In the additive color model, red, green, and blue (RGB) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light.&lt;br /&gt;
You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space.&lt;br /&gt;
&lt;br /&gt;
See also [[TextureBlending|Texture Blending]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{:Subtractive_Color_Model}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T09:44:22Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:additive.jpg|thumb|Additive Color Model]]&lt;br /&gt;
= Additive Color Model =&lt;br /&gt;
In the additive color model, red, green, and blue (RGB) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light.&lt;br /&gt;
You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space.&lt;br /&gt;
&lt;br /&gt;
See also [[TextureBlending|Texture Blending]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{User:SubtractiveColorModel}}&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Subtractive_Color_Model</id>
		<title>Subtractive Color Model</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Subtractive_Color_Model"/>
				<updated>2014-11-26T09:44:08Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:subtractive.gif|thumb|Subtractive Color Model]]&lt;br /&gt;
= Subtractive Color Model =&lt;br /&gt;
In the subtractive color model, cyan, magenta, yellow, and black are the primary colors. They are also called CMYK, with K standing for black because the letter B is already taken by blue in RGB. Mixing cyan, magenta, and yellow together creates a dark muddy brown, which is why black is added as the fourth primary color, to get clean blacks. &lt;br /&gt;
You subtract to get white: to get a lighter color, use less of each color; to get a darker color, use more. Subtractive is the color model used when working with pigments, as in painting and color printing. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the subtractive color model governs how colors are blended together, like with [[transparency]] and [[TextureBlending|texture blending]]. Since CMYK are the primary colors, they help describe what subtractive means, but you can use any colors. In fact, since RT3D engines display on a computer screen, you are really in the end just using the additive color model. The subtractive color model can only be simulated in RT3D, to get certain effects. &lt;br /&gt;
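One common way engines approximate subtractive behavior on an additive display is a multiply blend, where each channel filters the channel beneath it, so colors can only get darker. A rough sketch in plain Python (hypothetical values, not any engine's API):

```python
# Multiply blend: each channel scales the other, so results can only
# get darker -- a common way to fake subtractive mixing in RT3D.

def multiply_blend(base, tint):
    """Blend two 8-bit RGB colors multiplicatively."""
    return tuple((cb * ct) // 255 for cb, ct in zip(base, tint))

white = (255, 255, 255)
yellow = (255, 255, 0)
cyan = (0, 255, 255)

# Yellow filters out blue, cyan filters out red: green remains.
print(multiply_blend(multiply_blend(white, yellow), cyan))  # (0, 255, 0)
```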
&lt;br /&gt;
See [[ColorModels|color models]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Subtractive_Color_Model</id>
		<title>Subtractive Color Model</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Subtractive_Color_Model"/>
				<updated>2014-11-26T09:43:37Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page SubtractiveColorModel to Subtractive Color Model&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
= Subtractive Color Model =&lt;br /&gt;
&lt;br /&gt;
See [[ColorModels|color models]].&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/SubtractiveColorModel</id>
		<title>SubtractiveColorModel</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/SubtractiveColorModel"/>
				<updated>2014-11-26T09:43:37Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page SubtractiveColorModel to Subtractive Color Model&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Subtractive Color Model]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Glossary</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Glossary"/>
				<updated>2014-11-26T09:38:50Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Additive Color Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== 2.5D ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Two and a half-D&amp;quot; is an optimization trick for [[RT3D]] that tricks the [[Viewer|viewer]] into thinking they are seeing true 3D graphics. A whole scene (or an object in a scene) is made of 2-dimensional graphics that are scaled and drawn in perspective to look like [[Polygon|polygonal]] graphics, but there are no polygons involved. This is done by projecting displacement information onto a single [[Face|plane]]. Accordingly, this technique is limited to two-point perspective, not allowing any [[Pitch|Y-axis rotation]]. The groundbreaking game Doom is built entirely on this concept. [[Billboard|Billboards]] are 2.5D, but [[Voxel|voxels]] can be either 2.5D or 3D.&lt;br /&gt;
&lt;br /&gt;
Other effects classified as 2.5D can include [[Parallax_Map|parallax]] scrolling effects or orthographic 3/4 perspective (popularized by many real-time strategy and role-playing games).&lt;br /&gt;
&lt;br /&gt;
= '''A''' =&lt;br /&gt;
&lt;br /&gt;
== Additive Blending ==&lt;br /&gt;
&lt;br /&gt;
A [[TextureBlending|texture blending]] method that uses the [[AdditiveColorModel|Additive Color Model]]. The [[Pixel|pixels]] of a [[BaseMap|base map]] and a [[Light map|light map]] are blended together to make a brighter texture.&lt;br /&gt;
&lt;br /&gt;
See also [[AdditiveColorModel|additive color model]], [[AdditiveTransparency|additive transparency]], [[AverageBlending|average blending]], [[BlendInvert|invert blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Additive Transparency ==&lt;br /&gt;
&lt;br /&gt;
A way to calculate the color behind a transparent object, using the [[AdditiveColorModel|Additive Color Model]]. In general, wherever the object is more [[Opacity|opaque]], the brighter the background. If you use an alpha channel to change the transparency, the white areas get brighter, and the black areas have no effect on the background. This works well for flames, explosions, lens flares, etc., because it makes them look cleaner and hotter. If you use [[Material|material]] opacity, it makes the whole background behind the object the same brightness level: the more opaque the object, the brighter the background, and the lower the opacity, the less bright the background. This works well for things like water or prisms or holograms. See also [[AverageTransparency|average transparency]], [[SubtractiveTransparency|subtractive transparency]].&lt;br /&gt;
&lt;br /&gt;
== AI ==&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence is a set of computer instructions or algorithms designed to simulate the actions of an intelligent being, to the extent necessary to meet the design requirements of the game. Unlike AI as a computer-science field, AI in games is much less dependent on accuracy, and uses a variety of tricks and hacks to reduce [[Memory|memory]] use and better serve the design of the game.&lt;br /&gt;
&lt;br /&gt;
== AL ==&lt;br /&gt;
&lt;br /&gt;
Artificial Life. In a nutshell, AL is the antithesis of AI. While AI seeks to simulate real-world behaviour by following a complex series of rules, AL starts with very simple rules for a system and enables complex behaviour to emerge from them. Galapagos from [http://www.anark.com/ Anark] is the first commercial game to use AL.&lt;br /&gt;
&lt;br /&gt;
== Aliasing ==&lt;br /&gt;
&lt;br /&gt;
When edges look jagged instead of smooth, and moiré patterns develop in fine parallel lines. The problem is most prevalent in diagonal lines. Aliasing happens when the engine tries to display an image on a portion of the screen where the resolution is too low to display its details correctly. This is solved with [[Anti-Aliasing|anti-aliasing]], [[Mip Mapping|mip mapping]], or [[Texture Filtering|texture filtering]].&lt;br /&gt;
&lt;br /&gt;
== Alpha Channel ==&lt;br /&gt;
&lt;br /&gt;
An optional channel in the texture file that usually defines the [[Transparency map|transparency]] of the texture’s [[Pixel|pixels]]. &lt;br /&gt;
&lt;br /&gt;
It can also be used for other things like a [[Height map|height map]] or a grey-scale [[:Category:Specular map|specular map]]. &lt;br /&gt;
&lt;br /&gt;
The alpha is usually anywhere from 1bit up to 8bits, depending on the amount of detail you need. Generally, the lower the [[BitDepth|bit depth]] you use, the more [[Memory|memory]] you save, but the less image quality you get. See [[Transparency map]] for more information, and some visual examples can be found on the Glossary page [[Transparency#Alpha_Bit_Depths]].&lt;br /&gt;
&lt;br /&gt;
Different image channels can be extracted by a [[:Category:Shaders|shader]] to use for particular effects: alpha for physics info, red for glow, green for specular, blue for sound types, etc. In this way, a bitmap can store more information than just RGB color.&lt;br /&gt;
&lt;br /&gt;
== Ancestor ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that is above the current node.&lt;br /&gt;
&lt;br /&gt;
See also [[Child|child]], [[Parent|parent]], [[Node|node]]&lt;br /&gt;
&lt;br /&gt;
== Animatic ==&lt;br /&gt;
&lt;br /&gt;
An animated [[Storyboard|storyboard]]. This is used to refine timing, cameras, composition, etc., when a static storyboard just isn't enough. The animatic is kind of like a slide-show, with zooms and pans added to the storyboard panels to flesh out timing and composition. Where needed, low-resolution animated 3d scenes are used, intermixed with the remaining 2D storyboard panels. Sometimes called a &amp;quot;story reel.&amp;quot; Also called a &amp;quot;Leica reel,&amp;quot; (pronounced LIKE-uh) a term sometimes still used by gray-haired animators.&lt;br /&gt;
&lt;br /&gt;
== Anisotropic Filtering ==&lt;br /&gt;
&lt;br /&gt;
A [[Texture_filtering|texture filtering method]], specifically for non-square filtering, usually of textures shown in radical perspective (such as a pavement texture as seen from a camera lying on the road). More generally, anisotropic filtering improves the clarity of images with severely unequal [[AspectRatio|aspect ratios]]. Anisotropic filtering is an improvement on isotropic mip-mapping, but because it must take many samples, it can be very [[Memory|memory]] intensive.&lt;br /&gt;
&lt;br /&gt;
== Anti-Aliasing ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Anti-aliasing removes the stair-stepping or jaggies which occur at the edges of [[Polygon|polygons]] or between the [[Texel|texels]] on the polygons. It works by [[Interpolation|interpolating]] the [[Pixel|pixels]] at the edges to make the difference between two color areas less dramatic. See also [[Aliasing|aliasing]], [[Mip_Mapping|MIP mapping]], [[Texture_filtering|texture filtering]].&lt;br /&gt;
&lt;br /&gt;
== ASCII ==&lt;br /&gt;
&lt;br /&gt;
The American Standard Code for Information Interchange (pronounced &amp;quot;ASS-key&amp;quot;) is an encoding scheme for text, based on the English alphabet. A more universal encoding is Unicode, which allows for the display of many different characters.&lt;br /&gt;
&lt;br /&gt;
== Aspect Ratio ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A number that describes the shape of a rectangular texture, whether it's tall or wide. To get the aspect ratio, either divide the width by the height, or write it out as width:height. Aspect ratio helps you decide what kinds of changes need to be done to an image to get it to display correctly, like when you have to scale the image. Aspect ratio gets kind of complex when you have to deal with non-square pixels and other oddities-- not very important to the artist. See also [[AnisotropicFiltering|anisotropic filtering]].&lt;br /&gt;
&lt;br /&gt;
== Average Blending ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], average is when the colors of the [[BaseMap|base map]] and the [[Light map|light map]] are blended together evenly. This helps when you don't want the light map to brighten or darken the base map, like when you are placing a decal of a crack on a wall to make it looked cracked. See also [[AdditiveBlending|additive blending]], [[AverageTransparency|average transparency]], [[BlendInvert|invert blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Average Transparency ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Colors are mixed together evenly to create a new color. Sometimes called filter transparency. See also [[AdditiveTransparency|additive transparency]], [[AverageBlending|average blending]], [[SubtractiveTransparency|subtractive transparency]].&lt;br /&gt;
&lt;br /&gt;
== Azimuth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the two rotational axes used by astronomers, and also by [[RT3D]] programmers. Azimuth is similar to [[Yaw|yaw]]-- if you shake your head &amp;quot;no,&amp;quot; you are rotating in azimuth. Most [[Engine|engines]] count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise all the way around, and ending pointed straight forward again. Straight forward is both 0 and 360 degrees, and straight backwards is 180 degrees. Mathematicians sometimes measure angles in radians instead: one degree equals Pi/180 radians, so a full circle is 2 Pi radians, but most people use degrees.&lt;br /&gt;
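The degree/radian conversion works out like this (a generic Python sketch, not tied to any engine):

```python
# Degree/radian conversion: one degree is pi/180 radians,
# so a full 360-degree turn of azimuth is 2*pi radians.
import math

def deg_to_rad(degrees):
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    return radians * 180.0 / math.pi

print(deg_to_rad(180))  # 3.141592653589793 (i.e. pi)
print(round(rad_to_deg(math.pi), 1))  # 180.0
```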
&lt;br /&gt;
Astronomers use only two axes (the other axis is [[Declination|declination]]), because they do not use [[Roll|roll]]. This is similar to the way most [[FPS|FPS]] games work. It is preferred in some RT3D engines because it keeps the viewer level no matter where they point. But this system has a drawback: it suffers from gimbal lock whenever the [[Viewer|viewer]] points either straight up or straight down. The Quake games, for instance, avoid [[GimbalLock|gimbal lock]] by not allowing the viewer to point all the way up or down.&lt;br /&gt;
&lt;br /&gt;
= '''B''' =&lt;br /&gt;
&lt;br /&gt;
== B-Spline ==&lt;br /&gt;
&lt;br /&gt;
A way to make a curved line with very few points. It has control points with equal weights to adjust the shape of the curve. The control points rarely reside on the curve itself, because the curve is an average of the points. For instance, if you make four control points in the shape of a square, the resulting curve will approximate a circle inside that square, because the curve is pulled inward as it tries to average out the weights of all four points. Different from [[BezierSpline|bezier splines]], which use control points that always touch the curve, and handles that help you adjust the curve.&lt;br /&gt;
&lt;br /&gt;
== Backface Culling ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The process of removing the unseen polygons that face away from the [[Viewer|viewer]]. This can dramatically speed up the [[Rendering|rendering]] of a [[Polygon|polygonal]] scene, since those polygons would otherwise take up valuable processing power. Also called Backface Removal or Back Culling. It is one of the methods of [[HiddenSurfaceRemoval|Hidden Surface Removal]].&lt;br /&gt;
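The usual test behind backface culling compares the face normal with the view direction: a positive dot product means the face points away from the camera. A simplified sketch in plain Python (the vectors are made-up examples):

```python
# Backface test sketch: a face whose normal points away from the
# camera (positive dot product with the view vector) can be culled.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backface(normal, view_dir):
    """view_dir points from the camera toward the face."""
    return dot(normal, view_dir) > 0

view = (0, 0, 1)                      # camera looking down +Z
print(is_backface((0, 0, 1), view))   # True  -- normal points away
print(is_backface((0, 0, -1), view))  # False -- faces the camera
```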
&lt;br /&gt;
== Base Map ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], this is the main [[Texture_types|texture]] used on the [[Polygon|polygon]]. One or more additional textures are blended with the base map to create a new texture. See also [[DarkMap|dark map]], [[LightMap|light map]].&lt;br /&gt;
&lt;br /&gt;
== Bézier Spline ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another way to make a curved line with very few points. Named after the French mathematician Pierre Bézier (pronounced BEZ-ee-ay), these curves employ at least three points to define a curve. The two endpoints of the curve are called anchor points. The other points, which define the shape of the curve, are called handles, tangent points, or nodes. Attached to each handle are two control points. By moving the handles and the control points, you have a lot of control over the shape of the curve. Different from [[B-Spline|b-splines]], which use control points that don't necessarily touch the curve.&lt;br /&gt;
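To make the idea concrete, a cubic Bézier can be evaluated by repeatedly interpolating between its points (de Casteljau's algorithm). A small Python sketch with made-up control points:

```python
# Cubic Bezier evaluation by de Casteljau's algorithm: repeatedly
# interpolate between neighboring points until one point remains.

def lerp(p, q, t):
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def bezier(points, t):
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Two anchors at (0,0) and (3,0), two handles pulling the curve upward.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 2.0), (3.0, 0.0)]
print(bezier(ctrl, 0.0))  # (0.0, 0.0) -- curve touches the first anchor
print(bezier(ctrl, 0.5))  # (1.5, 1.5) -- pulled toward the handles
print(bezier(ctrl, 1.0))  # (3.0, 0.0) -- and the last anchor
```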
&lt;br /&gt;
== Bilinear Filtering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A method of [[Mip_Mapping|MIP mapping]]. Since the [[Texel|texels]] are almost always larger or smaller than the screen [[Pixel|pixels]], it tries to find a MIP-map with texels that are closest in size to the screen pixels. Then it [[Interpolation|interpolates]] the four texels that are the nearest to each screen pixel in order to render each new screen pixel. See also [[TrilinearFiltering|trilinear filtering]].&lt;br /&gt;
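The four-texel interpolation can be sketched like this in plain Python (tex here is a hypothetical grid of grayscale texels; real hardware does this per color channel):

```python
# Bilinear sample sketch: blend the four texels nearest to a
# fractional (u, v) position, weighted by distance.

def bilinear(tex, u, v):
    x0, y0 = int(u), int(v)     # top-left of the four texels
    fx, fy = u - x0, v - y0     # fractional position inside that texel
    x1, y1 = x0 + 1, y0 + 1
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0, 100],
       [100, 200]]              # tiny 2x2 grayscale "texture"
print(bilinear(tex, 0.5, 0.5))  # 100.0 -- the average of all four texels
```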
&lt;br /&gt;
== Billboard ==&lt;br /&gt;
&lt;br /&gt;
Billboard is a term commonly used in games to describe a camera-facing plane. This is a polygon with a texture using [[Transparency map|transparency]], that rotates on various axes to always face the viewer. &lt;br /&gt;
&lt;br /&gt;
Billboards are commonly used for particles, far away [[:Category:EnvironmentFoliage|trees]], [[GrassTechnique|grass]], clouds, [[:Category:UserInterface|UI elements]], etc. This is a common device for getting more detail into objects without using a lot of polygons. However, billboards are flat and may look strange when you move around them. &lt;br /&gt;
&lt;br /&gt;
Billboards can be set up to rotate on only one [[RotationalAxis|axis]] or on combinations of axes. For example, a tree might be set to rotate only on its vertical axis, so it always appears grounded. However a puff of smoke usually looks better if it always stays upright and flat to the view. &lt;br /&gt;
&lt;br /&gt;
One cool trick is to use a curved billboard, which helps give more depth to a single-axis billboard, because like a tree billboard, you can look down on it, so the extra depth helps get rid of that flat look. &lt;br /&gt;
&lt;br /&gt;
Also called [[Sprite]], Imposter, Flag, Kite, Stamp.&lt;br /&gt;
&lt;br /&gt;
== Bit Depth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bitdepth is used to denote how many colors a game or application needs to function properly, which is the number of colors plus other channels like [[AlphaChannel|alpha]]. The more colors, the smoother the image. 1-bit is two colors-- black and white. 2-bit is four colors, 4-bit is sixteen, 8-bit is 256 colors, etc. &lt;br /&gt;
The number of colors at any bit depth = 2^bitdepth. For instance, 8-bit = 2^8 = 2x2x2x2x2x2x2x2 = 256 colors.&lt;br /&gt;
If a game uses alpha or [[HeightMap|heightmaps]], those are additional channels added on top of the bitdepth number. 16-bit is 65536 colors without any additional channels, but if you want to add an alpha channel, then the 16 bits are divided between [[RGB]] and alpha, which can be done a number of ways. For example, if you use 4 bits each for Red, Green, and Blue, that leaves 4 bits for the alpha, giving you 16 levels of alpha. Or else 5 bits for each of RGB leaves 1 bit for alpha, which is only 2 levels, but gives you more RGB colors to work with. It's a trade-off.&lt;br /&gt;
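The 4-bits-per-channel trade-off can be shown with simple arithmetic. A Python sketch (written with multiplication and divmod rather than bit-shifts; the color values are made up):

```python
# Packing an RGBA color into 16 bits at 4 bits per channel
# (16 levels each). Arithmetic form of the usual bit-shifting.

def pack_rgba4444(r, g, b, a):
    """Each input is 0-15; the result fits in 16 bits."""
    return ((r * 16 + g) * 16 + b) * 16 + a

def unpack_rgba4444(v):
    v, a = divmod(v, 16)
    v, b = divmod(v, 16)
    r, g = divmod(v, 16)
    return r, g, b, a

packed = pack_rgba4444(15, 0, 15, 8)   # magenta, half-transparent
print(packed)                          # 61688
print(unpack_rgba4444(packed))         # (15, 0, 15, 8)
```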
&lt;br /&gt;
== Bitmap ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another name for a [[Texture_types|texture]].&lt;br /&gt;
&lt;br /&gt;
== Blending (Inverted) ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], invert is when the light map inverts the [[RGB]] color of the base map. See also [[AdditiveBlending|additive blending]], [[AverageBlending|average blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Blending (Subtractive) ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], subtractive is when the colors of the [[DarkMap|dark map]] are subtracted from the colors of the [[BaseMap|base map]] to make a new darker texture. See also [[AdditiveBlending|additive blending]], [[AverageBlending|average blending]], [[BlendInvert|invert blending]].&lt;br /&gt;
&lt;br /&gt;
== Bounding Box ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A simplified approximation of the volume of a model, used most commonly for [[CollisionDetection|collision detection]]. Although it is commonly a box, it can be any shape. If not a box, it is more properly called a bounding volume.&lt;br /&gt;
&lt;br /&gt;
== BPC ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bits Per Channel is another term for [[BitDepth|bitdepth]].&lt;br /&gt;
&lt;br /&gt;
== BPP ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bits Per Pixel is almost the same as [[BitDepth|bitdepth]], but BPP counts only the [[RGB]] component. It tells us how many colors are used in the RGB part of the image. For example, 16 BPP = 2^16 = 65536 colors. The term bitdepth is more commonly used to denote what a game or application needs to function properly, which is the number of colors plus other channels like alpha. For example, a 32-bit bitdepth doesn't mean 2^32 colors; instead, it means a 24-bit RGB space + an 8-bit alpha. To make things clearer, instead of the term bitdepth people are starting to use the term BPC, or Bits Per Channel.&lt;br /&gt;
&lt;br /&gt;
== BSP ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Binary Space Partition. A BSP tree subdivides 3D space with 2D planes to help speed up [[Sorting|sorting]]. It is sometimes used for additional purposes like [[CollisionDetection|collision detection]].&lt;br /&gt;
&lt;br /&gt;
= '''C''' =&lt;br /&gt;
&lt;br /&gt;
== Camera ==&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
In [[RT3D]], this is another word for the [[Viewer|viewer's]] view into the scene.&lt;br /&gt;
&lt;br /&gt;
== Channel Packing ==&lt;br /&gt;
&lt;br /&gt;
See [[ChannelPacking|Channel Packing]]&lt;br /&gt;
&lt;br /&gt;
== Child ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that is attached to another node.&lt;br /&gt;
&lt;br /&gt;
== Chroma Key ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A [[BitDepth|1-bit]] transparency. In the film &amp;amp; video worlds, chromakey means to &amp;quot;key&amp;quot; to a particular color and make it transparent, like with bluescreening. In [[RT3D]], it means to make a particular [[RGB]] color in a texture 100% transparent, and all other colors 100% opaque. It's a cheap way to get transparency, since you don't need an [[AlphaChannel|alpha channel]]. However, it means the transparency has a rough, pixelized edge. A good color to designate for chromakey is one that you won't be using elsewhere in your textures, like magenta (1,0,1). Whatever color you decide on, all textures in the engine will use that same color for chromakey. See also [[Sprite|sprites]].&lt;br /&gt;
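A sketch of the idea in plain Python (the key color and pixel values are made-up examples):

```python
# Chroma key sketch: one designated RGB color becomes fully
# transparent (alpha 0); everything else is fully opaque (alpha 255).

KEY = (255, 0, 255)   # magenta, a color unlikely to appear in game art

def apply_chroma_key(pixels, key=KEY):
    """pixels: list of (r, g, b); returns (r, g, b, a) with 1-bit alpha."""
    return [(r, g, b, 0 if (r, g, b) == key else 255)
            for r, g, b in pixels]

row = [(255, 0, 255), (10, 20, 30)]
print(apply_chroma_key(row))
# [(255, 0, 255, 0), (10, 20, 30, 255)]
```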
&lt;br /&gt;
== Clipping Plane ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A clipping plane throws away [[Polygon|polygons]] on the other side of it. This can dramatically speed up the [[Rendering|rendering]] of a polygonal scene, since unneeded polygons can take up valuable processing power. See also [[BackfaceCulling|backface culling]], [[Frustrum|frustum]], [[HiddenSurfaceRemoval|hidden surface removal]].&lt;br /&gt;
&lt;br /&gt;
== Collision Detection ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A constant drain on a 3D game engine, the systematic check to see if any intersections are occurring between significant polygons, such as my player's sword and your player's neck. See also [[BoundingBox|bounding box]].&lt;br /&gt;
&lt;br /&gt;
== Color Depth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The number of colors in an image, same as [[BitDepth|bit depth]].&lt;br /&gt;
&lt;br /&gt;
== Color Models ==&lt;br /&gt;
&lt;br /&gt;
See [[ColorModels|Color Models]]&lt;br /&gt;
&lt;br /&gt;
== Compile ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Programmers have to convert source code, written in a high-level language such as C, into object code (machine code), so that a microprocessor can run the program. This is similar to the process artists go through in order to create game art-- I create textures in Photoshop, saving them in PSD format, then I convert them down to the appropriate bitmap format for use in the game. However, it can take a long time for programmers to compile their code, like overnight or a couple of days, depending on the complexity.&lt;br /&gt;
&lt;br /&gt;
== Concave ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A certain kind of shape, the opposite of convex. If any two points inside the shape can be connected by a straight line that passes outside the shape, the shape is concave. The letter C is a concave shape. An easy way to remember concave is to think of it as a hill with a cave in it. If you put a straight line between a gold nugget in the floor and a diamond in the ceiling, the line is not inside the hill anymore (it is in the air, not in the earth). The word concave means &amp;quot;with a cavity.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Convex ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A certain kind of shape, without indentations, the opposite of concave. If any two points inside the shape can be connected by a straight line that goes outside the shape, the shape is not convex. The letter O is a convex shape. An uncut watermelon is a convex shape-- anytime you draw a straight line between two seeds, you're still inside the watermelon.&lt;br /&gt;
&lt;br /&gt;
== Coordinate ==&lt;br /&gt;
&lt;br /&gt;
See [[Coordinate]]&lt;br /&gt;
&lt;br /&gt;
== Coplanar ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When two or more things are on the same 2D plane. If two pieces of paper are sitting side by side flat on your desk, they're coplanar.&lt;br /&gt;
&lt;br /&gt;
= '''D''' =&lt;br /&gt;
&lt;br /&gt;
== Dark Map ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The texture used to [[Blend|blend]] with the [[BaseMap|base map]] to create a new, darker texture. See also [[Light map|light map]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== DDS ==&lt;br /&gt;
&lt;br /&gt;
See [[DDS]]&lt;br /&gt;
&lt;br /&gt;
== Decal ==&lt;br /&gt;
&lt;br /&gt;
See [[Decal]]&lt;br /&gt;
&lt;br /&gt;
== Declination ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the two rotational axes used by astronomers, and also by [[RT3D]] programmers. Declination is similar to [[Pitch|pitch]]-- if you nod your head &amp;quot;yes,&amp;quot; you are rotating in declination. Most [[Engine|engines]] count from +90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. Straight forward is 0 degrees. Mathematicians sometimes measure angles in radians instead: one degree equals Pi/180 radians, so a full circle is 2 Pi radians, but most people use degrees.&lt;br /&gt;
&lt;br /&gt;
Astronomers use only two axes (the other axis is [[Azimuth|azimuth]]), because they do not use [[Roll|roll]]. This is similar to the way most [[FPS|FPS]] games work. It is preferred in some RT3D engines because it keeps the [[Viewer|viewer]] level no matter where it points. But this system has a drawback because it suffers from [[GimbalLock|gimbal lock]] whenever the viewer points either straight up or straight down.&lt;br /&gt;
&lt;br /&gt;
== Displacement map ==&lt;br /&gt;
&lt;br /&gt;
See [[Displacement map|displacement map]]&lt;br /&gt;
&lt;br /&gt;
== Draw Order ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The back-to-front order of [[Polygon|polygons]] drawing on top of one another, with the last to draw being the one that appears in front of the others. Usually, [[AlphaChannel|alpha channeled]] polygons should draw later, and closer polygons should draw later. Some programs allow you to place polygons under a [[Node|node]] so the [[Engine|engine]] will draw them in a certain order, left to right, and ensure that alpha polys draw last.&lt;br /&gt;
&lt;br /&gt;
== Draw Rate ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Same as [[FillRate|fill rate]].&lt;br /&gt;
&lt;br /&gt;
== Dynamic Lighting ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Lighting, but updated every frame, as you'd want for moving lights and for characters that move amongst various light sources. Perhaps the lighting itself animates color and intensity.&lt;br /&gt;
&lt;br /&gt;
= '''E''' =&lt;br /&gt;
&lt;br /&gt;
== Engine ==&lt;br /&gt;
&lt;br /&gt;
The heart of the [[RT3D]] program, the engine is what converts all the game assets into a display on your computer screen. It is a software program written by one or more programmers, and has several components that need to work in harmony to display the game at a nice [[FrameRate|frame rate]]. [what are the components?]&lt;br /&gt;
&lt;br /&gt;
== Environment Map ==&lt;br /&gt;
&lt;br /&gt;
A method of [[Texturing|texture]] mapping that simulates the look of reflections in a shiny surface, like chrome or glass. The texture is usually painted to look like an all-encompassing world. In this example, the sky texture on the far right has been used on the donut. This can also be used to fake the look of specular highlights on an object, by painting only the light sources into the environment texture. &lt;br /&gt;
Also called reflection mapping.&lt;br /&gt;
&lt;br /&gt;
A common method is a [[Cube map]] which maps six textures of a panorama to the faces of a cube.&lt;br /&gt;
&lt;br /&gt;
== Expression ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A formula used to create procedural animation, often using mathematical elements like sine and cosine. Expressions can automate and greatly speed up repetitive animation tasks. Also sometimes called scripting.&lt;br /&gt;
&lt;br /&gt;
= '''F''' =&lt;br /&gt;
&lt;br /&gt;
== Face ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another name for a [[Polygon|polygon]]. This term can be used specifically for a single [[Triangle|triangle]] or more generally for a multi-triangle polygon.&lt;br /&gt;
&lt;br /&gt;
== Fan ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the ways that [[Triangle|triangles]] can be created to reuse [[Vertex|vertex]] [[Transforms|transforms]], and thus save [[Memory|memory]] and also [[Render|render]] time. Once you have drawn one triangle, the next triangle only needs to load the [[Coordinate|coordinate]] of one additional vertex in order to draw itself, because it re-uses the vertex transforms that were already performed on its neighbor triangle. But your [[Engine|engine]] must specifically support fans for it to work. See also [[Strip|strips]].&lt;br /&gt;
&lt;br /&gt;
== Fill Rate ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The number of [[Texture_types|textured]] and [[Shading|shaded]] [[Pixel|pixels]] that an [[Engine|engine]] can [[Render|render]] over a given time period, usually measured in millions of pixels per second (MPPS). [How does this affect RT3D artists? Need to elaborate]&lt;br /&gt;
&lt;br /&gt;
See also [[FrameRate|frame rate]]&lt;br /&gt;
&lt;br /&gt;
== Fog ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Objects in the [[RT3D]] scene become more and more the same color as they recede into the distance. This is similar to real fog, except that RT3D fog is a perfect gradient, whereas real fog usually has some wispy unevenness to it. Heavy fogging in RT3D is used to disguise the far [[ClippingPlane|clipping plane]], as shown at right in the game Turok. In fact, you can see the polygons being clipped, just behind the monster's head. This fog isn't done well-- it should completely hide the clipping plane. Besides, heavy fogging is generally looked down upon, because it shortens the distance at which you can see enemies. &lt;br /&gt;
&lt;br /&gt;
In some of the newer [[Engine|engines]], volume fog and ground fog are supported, where the fog is localized to a specific area, which is closer to the behavior of real fog.&lt;br /&gt;
&lt;br /&gt;
== Forward Kinematics ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
FK for short. A method of manipulating objects in a [[Hierarchy|hierarchy]] where the animator positions an object in the hierarchy, and the program calculates the positions and orientations of the objects below it in the hierarchy. See also [[InverseKinematics|inverse kinematics]].&lt;br /&gt;
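The idea can be sketched in a few lines of Python. This is an illustrative 2D joint chain (the names fk_chain, angles, and lengths are invented for the example): each joint adds its own rotation to the accumulated rotation of its parents, then extends along its bone.&lt;br /&gt;

```python
import math

def fk_chain(angles, lengths):
    # Forward kinematics for a 2D joint chain: each joint inherits the
    # accumulated rotation of its parents, adds its own, then extends
    # along its bone length to place the next joint.
    x, y, total_angle = 0.0, 0.0, 0.0
    joints = [(x, y)]
    for angle, length in zip(angles, lengths):
        total_angle = total_angle + angle
        x = x + length * math.cos(total_angle)
        y = y + length * math.sin(total_angle)
        joints.append((x, y))
    return joints
```

Rotating only the first joint moves every joint below it, which is the defining behavior of FK.&lt;br /&gt;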
&lt;br /&gt;
== FOV ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Field Of View is a traditional photography term, meaning the area that the [[Viewer|viewer]] can see in the RT3D scene. The FOV is usually defined by its width in degrees. A typical FOV in [[RT3D]] games is around 45 degrees. A really wide FOV like 90 degrees allows the viewer a panoramic view, but can seriously lower the frame rate because so many polygons are visible. It also produces distortions in the scene, where objects in the center of the view appear to be farther away than they are, and objects at the periphery appear to be really close. The trick in RT3D is to lower the FOV as much as possible without impairing too much of the view. See also [[Frustrum|frustum]].&lt;br /&gt;
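As a rough sketch of why a wide FOV shows so much more of the scene: the visible width at a given distance grows with the tangent of half the FOV. The function name visible_width is made up for this example.&lt;br /&gt;

```python
import math

def visible_width(fov_degrees, distance):
    # Half the FOV angle, the distance, and basic trigonometry give the
    # half-width of the scene visible at that distance; double it for the
    # full width.
    half = math.radians(fov_degrees) / 2.0
    return 2.0 * distance * math.tan(half)
```

At 90 degrees the visible width equals twice the distance, so far more polygons fall inside the view than at 45 degrees.&lt;br /&gt;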
&lt;br /&gt;
== FPS ==&lt;br /&gt;
&lt;br /&gt;
See [[FPS]]&lt;br /&gt;
&lt;br /&gt;
== Frame Buffer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The frame buffer is what a video board uses to store the images it [[Render|renders]], while it is rendering them. When it is done rendering, it sends the completed frame to your monitor and starts building the next frame. The amount of frame buffer [[Memory|memory]] a video board has directly impacts which resolutions it can support-- the more memory you've got the higher resolutions your board will support and at higher [[BitDepth|bit depths]]. &lt;br /&gt;
&lt;br /&gt;
The frame buffer usually stores 2 frames: one is being calculated by the 3D accelerator while the other one is being sent to the monitor. This is called double buffering, and it delivers smooth animation. For 640x480 resolution with 16 bit color we require 640x480x16x2 = 9830400 bits of memory, or about 1.2 MB of frame buffer memory.&lt;br /&gt;
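The arithmetic above can be written out directly (frame_buffer_bits is a name invented for this sketch):&lt;br /&gt;

```python
def frame_buffer_bits(width, height, bit_depth, buffers=2):
    # Each buffered frame stores one bit_depth value per pixel;
    # double buffering means two frames are stored at once.
    return width * height * bit_depth * buffers

bits = frame_buffer_bits(640, 480, 16)   # 9830400 bits, as in the text
megabytes = bits / 8 / (1024 * 1024)     # roughly 1.2 MB
```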
&lt;br /&gt;
== Frame Rate ==&lt;br /&gt;
&lt;br /&gt;
The frame rate measures how fast the game is being rendered. &lt;br /&gt;
&lt;br /&gt;
Generally, if the frame rate is lower than 30 frames per second, the game will seem &amp;quot;choppy&amp;quot; and unresponsive.&lt;br /&gt;
&lt;br /&gt;
See [[PolygonCount#Articles_About_Performance]].&lt;br /&gt;
&lt;br /&gt;
== Frustrum ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A term from traditional photography, it is the shape that the [[Viewer|viewer's]] [[FOV]] creates when projected into the scene. This pyramidal shape defines the limits of what the viewer can see, so it is used to calculate [[ClippingPlane|clipping]], [[CollisionDetection|collisions]], fog, etc. &lt;br /&gt;
&lt;br /&gt;
Imagine your eyeball stuck to the top of a clear pyramid, looking straight down into it. The top-most point that is scratching against your eyeball is the near end of the frustum. You cannot see the side faces of the pyramid, because they are edge-on to your view, but you can see everything within the pyramid. In photography, this pyramid is infinitely tall, because in theory you can see to infinity. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the frustum is not infinitely long. It has near and far clipping planes to help reduce polygon counts. The near clipping plane cuts off the very top of the pyramid, while the far clipping plane defines the base of the pyramid.&lt;br /&gt;
&lt;br /&gt;
== FUBAR ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Fucked Up Beyond All Repair, or more politely Fouled Up Beyond All Recognition.&lt;br /&gt;
&lt;br /&gt;
== Full Bright ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When a color or a [[Texture_types|texture]] is drawn at its full intensity, unaffected by real-time lighting. Some [[RT3D]] [[Engine|engines]] do not have lighting or [[DynamicLighting|dynamic lighting]], so all textures are drawn full-bright. Sometimes also called fully-emissive, or self-illuminated.&lt;br /&gt;
&lt;br /&gt;
= '''G''' =&lt;br /&gt;
&lt;br /&gt;
== Gamma ==&lt;br /&gt;
&lt;br /&gt;
See [[Gamma]]&lt;br /&gt;
&lt;br /&gt;
== Geometry ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
This term is commonly used to define all [[Polygon|polygonal]] objects in a game. Also called mesh.&lt;br /&gt;
&lt;br /&gt;
== Gimbal Lock ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The rotation of an object seems to &amp;quot;stick,&amp;quot; when it is rotated all the way down or all the way up. This can also happen with the [[Camera|camera]] itself. It happens because the mathematics in rotational systems like Euler angles cannot make a consistent rotation solution when they point straight up or straight down. The Quake games avoid gimbal lock by not allowing the viewer to look straight up or straight down.&lt;br /&gt;
&lt;br /&gt;
To solve gimbal lock, either avoid looking directly down or directly up, or else use a different rotational system, such as quaternions.&lt;br /&gt;
&lt;br /&gt;
== Gouraud Shading ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A [[Shading|shading]] method, named after the French computer scientist Henri Gouraud (pronounced on-REE grrr-ROW). Each [[Triangle|triangle's]] color is created by [[Interpolation|interpolating]] the [[Vertex|vertex]] colors that are located at each corner of the triangle. In other words, the interior of each triangle is a smooth gradient between the colors of the three vertices. The vertex colors are usually created dynamically from the lighting in the scene, although the artist can instead assign specific colors to each vertex. &lt;br /&gt;
&lt;br /&gt;
Gouraud shading has a smooth look, but can look strange when using polygons with solid colors on them.&lt;br /&gt;
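The per-pixel interpolation can be sketched with barycentric weights; gouraud_color and its parameters are names invented for this example, with each color as an (R,G,B) tuple on the 0 to 1 scale.&lt;br /&gt;

```python
def gouraud_color(bary, vertex_colors):
    # Weight each corner's RGB color by the pixel's barycentric
    # coordinate and sum: the triangle interior becomes a smooth
    # gradient between the three vertex colors.
    return tuple(
        sum(w * c[i] for w, c in zip(bary, vertex_colors))
        for i in range(3)
    )
```

A pixel sitting exactly on a vertex gets that vertex's color; a pixel halfway along an edge gets an even mix of the two endpoint colors.&lt;br /&gt;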
&lt;br /&gt;
= '''H''' =&lt;br /&gt;
&lt;br /&gt;
== Height Map ==&lt;br /&gt;
&lt;br /&gt;
A grayscale texture used as a displacement map to define the topography of the polygons. Usually the brighter pixels make higher elevations, and the darker pixels make lower elevations, and 50% gray pixels make no change. Often the [[AlphaChannel|alpha channel]] of an object's texture is used for the height map. [[Voxel|Voxel]] landscapes often use height maps.&lt;br /&gt;
&lt;br /&gt;
== Hidden Surface Removal ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
See also [[BackfaceCulling|backface culling]], [[ClippingPlane|clipping plane]].&lt;br /&gt;
&lt;br /&gt;
== Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A list of things that are linked together in a certain order. A family tree is used as an analogy to help describe the parts of the hierarchy. However, the hierarchy tree traditionally hangs upside down, to make it easier to read and to use. At the top of the tree is the root, and all things are attached to it. Every object in the hierarchy is called a node. The connections between the nodes are called links. A node linked to another is called a child, and the node the child is linked to is called a parent. A parent can have multiple children, but in this tree each child can have only one parent. The nodes that have no children are called leaves, because they're at the ends of the tree. &lt;br /&gt;
&lt;br /&gt;
Each node can trace its lineage up through the tree, back through parents to the root. If you choose any parent, then you can call its collection of children a branch of the tree.&lt;br /&gt;
&lt;br /&gt;
= '''I'''=&lt;br /&gt;
&lt;br /&gt;
== Interpolation ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The process of determining, from two or more known values, what the &amp;quot;in-between&amp;quot; values should be. Interpolation is used with animation and with [[Texture_types|textures]], particularly [[Texture_filtering|texture filtering]].&lt;br /&gt;
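The simplest case is linear interpolation between two values, often called lerp:&lt;br /&gt;

```python
def lerp(a, b, t):
    # t is 0 at a and 1 at b; values in between blend the two endpoints.
    return a + (b - a) * t
```

The same one-liner drives animation in-betweens (blending two keyframe positions) and texture filtering (blending neighboring texel colors).&lt;br /&gt;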
&lt;br /&gt;
== Inverse Kinematics ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
IK for short. A method of manipulating [[Hierarchy|hierarchies]] where the animator positions objects at the end of the hierarchy and the program calculates the positions and orientations of all other objects in the hierarchy. With properly set up IK, you can quickly animate complex motions. For instance, the bones in the arm of a character are linked in a hierarchy, limits are set for the rotations of the bones, and then the animator can move the hand, and the IK will figure out what the rest of the arm needs to do.&lt;br /&gt;
&lt;br /&gt;
IK is used in RT3D to allow characters to interact with the environment in a more realistic manner, like when a player directs a character to pick an object off the floor. See also [[ForwardKinematics|forward kinematics]].&lt;br /&gt;
&lt;br /&gt;
= '''L'''=&lt;br /&gt;
&lt;br /&gt;
== Light Map ==&lt;br /&gt;
&lt;br /&gt;
See [[Light Map]]&lt;br /&gt;
&lt;br /&gt;
== LOD ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Stands for [[LevelOfDetail|Level Of Detail]]. This is used primarily to reduce polygon count in a scene, especially when you have multiple characters like in a sports game. A lower polygon count model of the character is used when the character is far away, and when the character gets closer, it switches to a different model with a higher polygon count. This example is only two LODs, but you can have multiple LODs to help you reduce the &amp;quot;popping&amp;quot; effect that makes it obvious the character is switching from a low-count to a high-count model. You want to have the fewest polygons on screen at a time, so there are other factors besides distance: how fast the object is moving, where the viewer's focus is anticipated to be, and how important the character is.&lt;br /&gt;
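A distance-driven LOD switch can be sketched like this (pick_lod, the threshold values, and the model names are all made up for the example):&lt;br /&gt;

```python
import bisect

def pick_lod(distance, thresholds, models):
    # thresholds lists the switch distances in ascending order;
    # models has one more entry than thresholds, highest detail first.
    # bisect finds which distance band the object falls into.
    return models[bisect.bisect_left(thresholds, distance)]
```

For example, with thresholds [10, 50] and models ["high", "medium", "low"], an object 5 units away renders the high-detail mesh and one 100 units away renders the low-detail mesh. A real engine would fold in the other factors the text mentions, such as movement speed and on-screen importance.&lt;br /&gt;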
&lt;br /&gt;
= '''M''' =&lt;br /&gt;
&lt;br /&gt;
== Map ==&lt;br /&gt;
&lt;br /&gt;
== Mapping ==&lt;br /&gt;
&lt;br /&gt;
== Material ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A set of parameters that determine the color, shininess, smoothness, etc. of a surface. Usually a material is used to assign a [[Texture_types|texture]] to a [[Face|face]].&lt;br /&gt;
&lt;br /&gt;
== Memory ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the most fundamental [[RT3D]] concepts, memory is the amount of quickly-retrievable space available for assets currently being used by the game. This includes textures, geometry, geometry animation, interface artwork, AI, etc. RT3D applications use a number of different types of memory, but RT3D artists are mostly concerned with [[RAM]]. The biggest question for the artist is how much RAM is available for textures. (Need to cross-reference RAM, Video mem, Texture Mem, etc.).&lt;br /&gt;
&lt;br /&gt;
== Mesh ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another word for [[Geometry|geometry]]. See also [[Polygon|polygon]].&lt;br /&gt;
&lt;br /&gt;
== Mip Mapping ==&lt;br /&gt;
&lt;br /&gt;
See [[Mip Mapping]]&lt;br /&gt;
&lt;br /&gt;
== Morph ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
An animated 2D or 3D effect that makes one [[Texture_types|texture]] or [[Geometry|geometry]] smoothly transform into another. Often used to do 3D facial animation. Comes from the word metamorphosis.&lt;br /&gt;
&lt;br /&gt;
== Multi-Texture ==&lt;br /&gt;
&lt;br /&gt;
See [[MultiTexture]]&lt;br /&gt;
&lt;br /&gt;
= '''N''' =&lt;br /&gt;
&lt;br /&gt;
== N-gon ==&lt;br /&gt;
&lt;br /&gt;
Another word for a [[Polygon|polygon]]. The letter N stands for any whole number. In other words, any polygon with &amp;quot;n&amp;quot; number of sides.&lt;br /&gt;
&lt;br /&gt;
== Nadir ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The point directly below you in space. The opposite of [[Zenith|zenith]].&lt;br /&gt;
&lt;br /&gt;
== NDO ==&lt;br /&gt;
&lt;br /&gt;
See [[NDO]]&lt;br /&gt;
&lt;br /&gt;
== Node ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Each single object in a [[Hierarchy|hierarchy]] is called a node. Without nodes there can be no hierarchy-- there's nothing to link together. Also called a bead.&lt;br /&gt;
&lt;br /&gt;
== NURBS ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Short for Non-Uniform Rational B-Spline, a mathematical representation of a 3-dimensional object. Most CAD/CAM applications support NURBS, which can be used to represent analytic shapes, such as cones, as well as free-form shapes, such as car bodies. NURBS are not used much for [[RT3D]] because they create a large number of [[Polygon|polygons]], but it is an available modeling method in some artist 3D packages.&lt;br /&gt;
&lt;br /&gt;
= '''O''' =&lt;br /&gt;
&lt;br /&gt;
== Opacity ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Opacity (opaque) is the opposite of transparency (transparent). Opacity can also describe partial transparency, because you can change an object's opacity using a setting in its [[Material|material]]. But the word opaque usually means totally non-transparent.&lt;br /&gt;
&lt;br /&gt;
== Overdraw ==&lt;br /&gt;
&lt;br /&gt;
Overdraw means a screen pixel is being drawn more than once. Overdraw increases the [[FillRate|fill rate]] needed to render each frame, which can slow down the [[FrameRate|frame rate]]. Re-rendering each pixel more than once is usually a waste of processing time. &lt;br /&gt;
&lt;br /&gt;
Overdraw should be avoided whenever possible; however, it is required if triangles are partially [[Transparency map|transparent]], because the surfaces must be mixed together to create the final screen pixel.&lt;br /&gt;
&lt;br /&gt;
Overdraw is usually caused by multiple triangles being drawn over each other, for example with particle effects or tree foliage. &lt;br /&gt;
&lt;br /&gt;
= '''P''' =&lt;br /&gt;
&lt;br /&gt;
== Parent ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that has another node attached to it.&lt;br /&gt;
&lt;br /&gt;
== PBR ==&lt;br /&gt;
&lt;br /&gt;
See [[PBR]]&lt;br /&gt;
&lt;br /&gt;
== Phong ==&lt;br /&gt;
&lt;br /&gt;
See [[Phong]]&lt;br /&gt;
&lt;br /&gt;
== Pitch ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from +90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. Straight forward is 0 degrees. See also [[Roll|roll]], [[Yaw|yaw]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
&lt;br /&gt;
== Pixel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Short for picture element. There are two common meanings: the pixels that [[Texture_types|textures]], or [[Bitmap|bitmaps]] are made of, and the pixels that are [[Render|rendered]] onto your computer screen by the [[Engine|engine]]. &lt;br /&gt;
&lt;br /&gt;
You tell each pixel where to go and what color to be by giving it two sets of values. For position, it needs [[TextureCoordinates|coordinates]] (coords), called X and Y, usually written as (X,Y). The X coord is horizontal, the Y coord is vertical, and the numbers usually start at (0,0) in the upper-left corner. For the pixel's color it needs the three [[RGB]] color values, written as (R,G,B). These go from 0 (no color) to 1 (full-on color). For instance, green is (0,1,0). Most paint programs use 0 to 255, but 0 to 1 is easier for [[RT3D]] programmers to use. See also [[Texel|texel]].&lt;br /&gt;
&lt;br /&gt;
== Polygon ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A series of [[Vertex|vertices]] that define a plane in 3D space. Most [[RT3D]] [[Engine|engines]] use polygons to make the surfaces of their objects. A polygon can be made up of 1 or more [[Triangle|triangles]], like a [[Quad|quad]] is made of two triangles, a pentagon is made of three triangles, etc. Some engines support multiple polygon types, but triangles are the most common. Some people use the term polygon to specify a quad, others use it when talking about triangles. Polygons are also called polys, or sometimes [[N-gon|n-gons]]. See also [[Face|faces]], [[Fan|fans]], [[Quad|quads]], [[Strip|strips]].&lt;br /&gt;
&lt;br /&gt;
== Polygons ==&lt;br /&gt;
&lt;br /&gt;
See [[Polygons]]&lt;br /&gt;
&lt;br /&gt;
== Power of 2 ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The numbers 2, 4, 8, 16, 32, 64, 128, 256, 512, etc. Usually this is used to describe the [[Texture_types|texture]] sizes that an [[Engine|engine]] requires in order to make good use of video [[Memory|memory]]. An example texture size would be 32x64. Textures that are not in powers of 2, like 33x24, would probably cause the engine to run slower or maybe crash.&lt;br /&gt;
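A quick way to check a texture dimension, sketched here with an invented helper name (is_power_of_two), relies on the fact that a power of two has exactly one 1 bit in binary:&lt;br /&gt;

```python
def is_power_of_two(n):
    # Assumes n is a positive integer. A power of two has exactly one
    # 1 bit in binary, e.g. 64 is 1000000 while 33 is 100001.
    return bin(n).count("1") == 1
```

So a 32x64 texture passes on both axes, while 33x24 fails and would need to be resized or padded.&lt;br /&gt;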
&lt;br /&gt;
= '''Q''' =&lt;br /&gt;
&lt;br /&gt;
== Quad ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A four-sided polygon. Some [[RT3D]] engines use quads instead of triangular polygons because it saves [[Vertex|vertex]] [[Transforms|transforms]]: a square polygon stored as a quad stores only four vertices, whereas a square created with two separate triangles means transforming six vertices instead of only four. The artist should keep the vertices of the quad [[Coplanar|coplanar]], or rendering weirdness can happen, because the quad is divided into two triangles at render time. The internal edge between the two tris is determined arbitrarily, so it could look like a ridge or a valley.&lt;br /&gt;
&lt;br /&gt;
= '''R''' =&lt;br /&gt;
&lt;br /&gt;
== RAM ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Random Access Memory is the place to store the game assets that are used most often, because RAM can be read really quickly. See also [[Memory|memory]].&lt;br /&gt;
&lt;br /&gt;
== Ray Cast ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Projecting an imaginary line from each screen pixel into the RT3D scene, bouncing from the surface it meets up to the light source...(needs to be fleshed out)&lt;br /&gt;
&lt;br /&gt;
== Raytrace ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A way of rendering a 3D image which follows the path of every ray of light. Non-interactive, it works best for rendering images which have many reflective surfaces, like steel balls.&lt;br /&gt;
&lt;br /&gt;
== Real-time ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When events happen at a rate consistent with events in the outside world. Specifically for [[RT3D]] artists, if the [[Engine|engine]] [[Render|renders]] a scene at a slow rate, the illusion of movement can be lost. To retain an interactive, immersive experience, the engine must react to your input and present you with new updated images immediately. If you are getting smooth feedback, it is real-time. [[FPS|Frames per second]] is the measurement of how fast the frames are being rendered. &lt;br /&gt;
&lt;br /&gt;
The engine must perform many complex operations, and the effect of that effort is the amount of time needed to draw each frame. By necessity, we must take shortcuts in the image quality to speed up the rendering. However, no single image remains visible for very long. If you carefully choose speedup techniques so that the errors are small, then they will not be noticed during the moment when the picture is visible.&lt;br /&gt;
&lt;br /&gt;
== Rendering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The transformation of 3D data by the [[Engine|engine]] into 2D frames for display on your computer screen (or TV). For [[RT3D]] artists, this specifically refers to [[RealTime|real-time]] rendering, where the individual frames must be drawn as fast as possible.&lt;br /&gt;
&lt;br /&gt;
== RGB Colorspace ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Red, Green, and Blue are the primary colors used to display [[RT3D]] on your computer screen. All the colors you see are combinations of those three. RGB space is the place where any transformations are made to colors, whether reducing the [[BitDepth|bit depth]], [[TextureBlending|texture blending]], [[Render|rendering]], etc. &lt;br /&gt;
&lt;br /&gt;
In RT3D, we use numerical RGB values to describe the colors in each [[Texture_types|texture]]. These numbers can be a drag to use, but they give you more control of the medium, especially when you want to tweak something like texture blending. &lt;br /&gt;
&lt;br /&gt;
In texture programs like Photoshop, the RGB values for texture colors are on an [[BitDepth|8-bit]] scale, which is usually 0 to 255. But RT3D programmers prefer a simpler scale, representing all colors with the values 0 to 1. For instance, red is (1,0,0), white is (1,1,1), black is (0,0,0), brown is (.4,.21,0), etc. The decimal places can go out as far as the programmer decides they need to, but usually just two decimal places (.00) is precise enough. The fewer decimal places, the smaller the file sizes will be, which conserves precious [[Memory|memory]]. This was the root of the Y2K problem, but we don't need to get into that... heh heh. &lt;br /&gt;
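Converting between the two scales is a single division; to_unit_rgb is a name invented for this sketch, rounding to the two decimal places the text suggests:&lt;br /&gt;

```python
def to_unit_rgb(r, g, b):
    # Converts Photoshop-style 0..255 channels to the programmer's
    # 0..1 scale, rounded to two decimal places.
    return tuple(round(c / 255.0, 2) for c in (r, g, b))
```

For instance, Photoshop's (102, 54, 0) becomes the brown (.4, .21, 0) from the examples above.&lt;br /&gt;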
&lt;br /&gt;
See also [[AdditiveColorModel|additive color model]].&lt;br /&gt;
&lt;br /&gt;
== Roll ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you tilt your head to read the spine of a book, this is roll in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from -180 to 180 degrees, starting out rolled upside-down all the way to the left, turning clockwise and ending up rolled upside-down all the way to the right. Straight forward with no roll is 0 degrees. See also [[Pitch|pitch]], [[Yaw|yaw]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
&lt;br /&gt;
== Rotational Axis ==&lt;br /&gt;
&lt;br /&gt;
See [[Rotational Axis|Rotational Axis]]&lt;br /&gt;
&lt;br /&gt;
== RT3D ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Real-Time 3-Dimensional graphics. Artwork that is rendered in real-time on a computer, usually with interactive input from the [[Viewer|viewer]].&lt;br /&gt;
&lt;br /&gt;
= '''S''' =&lt;br /&gt;
&lt;br /&gt;
== Shading ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The process of assigning values to the surfaces of objects, which control the way the surface interacts with light in the scene to create the object's color, specularity (highlights), reflective qualities, transparency, and refraction. Shading mimics the material that an object is supposed to be made of-- wood, plastic, metal, etc. The art of shading is understanding how the range of parameters will interact to create realistic or else imaginative effects. Also sometimes called surfacing. &lt;br /&gt;
&lt;br /&gt;
See also [[GouraudShading|Gouraud shading]], [[Phong|Phong shading]].&lt;br /&gt;
&lt;br /&gt;
== Smoothing Groups ==&lt;br /&gt;
&lt;br /&gt;
See [[Smoothing Groups]]&lt;br /&gt;
&lt;br /&gt;
== Sorting ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Keeps track of which onscreen elements can be viewed and which are hidden behind other objects. &lt;br /&gt;
&lt;br /&gt;
See also [[Z-Buffer|z-buffering]].&lt;br /&gt;
&lt;br /&gt;
== Spline ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A curved line, defined by mathematical functions. &lt;br /&gt;
&lt;br /&gt;
See also [[B-Spline|b-spline]], [[BezierSpline|Bezier spline]].&lt;br /&gt;
&lt;br /&gt;
== Sprite ==&lt;br /&gt;
&lt;br /&gt;
See [[Sprite]]&lt;br /&gt;
&lt;br /&gt;
== Storyboard ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A visualization of the animation that breaks it down into a sequence of sketches that illustrate the key movements.&lt;br /&gt;
&lt;br /&gt;
== Strip ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the ways that [[Triangle|triangles]] can be created to reuse [[Vertex|vertex]] [[Transforms|transforms]], and thus save [[Memory|memory]] and also [[Render|render]] time. Once you have drawn one triangle, the next triangle only needs to load the [[Coordinate|coordinate]] of one additional vertex in order to draw itself, because it re-uses the vertex transforms that were already performed on its neighbor triangle. But your [[Engine|engine]] must specifically support strips for it to work. Sometimes also called tri-strips. See also [[Fan|fans]].&lt;br /&gt;
&lt;br /&gt;
== Substance Designer ==&lt;br /&gt;
&lt;br /&gt;
See [[Substance Designer]]&lt;br /&gt;
&lt;br /&gt;
= '''T''' =&lt;br /&gt;
&lt;br /&gt;
== Texel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Each [[Pixel|pixel]] of a [[Texture_types|texture]] during the time the texture is being processed by the [[RT3D]] [[Engine|engine]]. After the engine performs calculations to project the texture onto [[Polygon|polygons]], the texture pixels are transformed into texels. Then the engine [[Render|renders]] the scene, and at that point it transforms those texels into screen pixels.&lt;br /&gt;
&lt;br /&gt;
The distinction between texels and pixels is important in defining how the engine transforms textures. First they're texture pixels, then they're texels, then they're finally screen pixels.&lt;br /&gt;
&lt;br /&gt;
== Texture Atlas ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Atlas]]&lt;br /&gt;
&lt;br /&gt;
== Texture Compression ==&lt;br /&gt;
&lt;br /&gt;
A technique used by 3D accelerator cards to fit larger [[Texture_types|textures]] into the same amount of texture [[Memory|memory]] and graphics bus bandwidth. With texture compression, you can often use textures as big as 2048x2048 in your real-time scenes. Two examples of texture compression methods are S3TC and VQTC.&lt;br /&gt;
&lt;br /&gt;
== Texture Coordinates ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Coordinates]]&lt;br /&gt;
&lt;br /&gt;
== Texture Filtering ==&lt;br /&gt;
&lt;br /&gt;
This term is used whenever a texture is altered by the [[Engine|engine]]: to eliminate jagged edges and shimmering [[Pixel|pixels]] whenever [[Texel|texels]] are larger or smaller than screen pixels (see [[Aliasing|aliasing]]), or to perform [[TextureBlending|texture blending]] to blend two textures together.&lt;br /&gt;
&lt;br /&gt;
See also [[AnisotropicFiltering|anisotropic filtering]], [[MipMap|MIP mapping]].&lt;br /&gt;
&lt;br /&gt;
== Texture Types ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Types]]&lt;br /&gt;
&lt;br /&gt;
== Texture Blending ==&lt;br /&gt;
&lt;br /&gt;
See [[TextureBlending]]&lt;br /&gt;
&lt;br /&gt;
== Tiling ==&lt;br /&gt;
&lt;br /&gt;
See [[Tiling]]&lt;br /&gt;
&lt;br /&gt;
== Transforms ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
What the [[Engine|engine]] does to objects and [[Vertex|vertices]] in order to place them in the [[RT3D]] scene. The transforms are position, rotation, and scale.&lt;br /&gt;
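The three transforms can be sketched on a single 2D vertex; transform_vertex and its parameters are names invented for the example, applying scale, then rotation, then position in the usual order:&lt;br /&gt;

```python
import math

def transform_vertex(v, scale, angle, translation):
    # Scale first, then rotate about the origin, then translate
    # into position in the scene.
    x, y = v[0] * scale, v[1] * scale
    c, s = math.cos(angle), math.sin(angle)
    rx, ry = x * c - y * s, x * s + y * c
    return (rx + translation[0], ry + translation[1])
```

Engines apply the same idea per vertex with 4x4 matrices, which let all three transforms be combined into one multiplication.&lt;br /&gt;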
&lt;br /&gt;
== Transparency ==&lt;br /&gt;
&lt;br /&gt;
See [[Transparency]]&lt;br /&gt;
&lt;br /&gt;
== Triangle ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A triangular [[Polygon|polygon]]. Often shortened to tri or tris.&lt;br /&gt;
&lt;br /&gt;
== Trilinear Filtering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A method of [[MipMap|MIP mapping]]. Since the [[Texel|texels]] are almost always larger or smaller than the screen [[Pixel|pixels]], it finds two MIP-maps whose texels are closest in size to the screen pixels: one with larger texels, and the other with smaller texels. For each of the two MIP-maps, it then [[Interpolation|interpolates]] the four texels that are the nearest to each screen pixel. In the final step it averages between the two MIP results to render the final screen pixel. &lt;br /&gt;
&lt;br /&gt;
Trilinear mip-mapping requires more than twice the computational cost of [[BilinearFiltering|bilinear filtering]], but the textures are filtered very nicely, with a clean result.&lt;br /&gt;
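The two-stage blend described above can be sketched in miniature; bilinear and trilinear are illustrative helper names here, with each texel as a single grayscale value for brevity:&lt;br /&gt;

```python
def bilinear(c00, c10, c01, c11, fx, fy):
    # Blend the four nearest texels by the pixel's fractional
    # position within the texel grid.
    top = c00 * (1 - fx) + c10 * fx
    bottom = c01 * (1 - fx) + c11 * fx
    return top * (1 - fy) + bottom * fy

def trilinear(mip_a_sample, mip_b_sample, t):
    # Final step: blend the two per-MIP bilinear results, where t
    # reflects how close the screen pixel size is to each MIP level.
    return mip_a_sample * (1 - t) + mip_b_sample * t
```

Two bilinear samples (four texels each) plus the final blend is why the cost is more than double that of a single bilinear lookup.&lt;br /&gt;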
&lt;br /&gt;
= '''U''' =&lt;br /&gt;
&lt;br /&gt;
== UV Coordinates ==&lt;br /&gt;
&lt;br /&gt;
Texture coordinates, also called UVs, are pairs of numbers stored in the vertices of a mesh. These are often used to stretch a 2D texture onto a 3D mesh. &lt;br /&gt;
&lt;br /&gt;
See [[TextureCoordinates]] for more information.&lt;br /&gt;
&lt;br /&gt;
= '''V''' =&lt;br /&gt;
&lt;br /&gt;
== Value correction ==&lt;br /&gt;
&lt;br /&gt;
See [[Value Correction]]&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A point in 3D space that doesn't really do anything unless it is connected to a [[Polygon|polygon]] or a line. For more than one vertex, you call 'em vertices.&lt;br /&gt;
&lt;br /&gt;
== Viewer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
You, the user of the [[RT3D]] application, looking at the 3D scene through your computer screen. The viewer looks into the RT3D world from a vantage point, which acts something like a camera, to frame your view. Most of the engine's calculations are tailored to make the world look great from that particular view. See also [[Frustrum|frustum]].&lt;br /&gt;
&lt;br /&gt;
Viewer may also refer to a version of the RT3D engine that is used to preview artwork while it is being created.&lt;br /&gt;
&lt;br /&gt;
== Voxel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Shorthand for volume [[Pixel|pixel]]. Voxels have traditionally been used to create 3D renderings of complex volumes, like meteorological cloud formations or scanned human tissues.&lt;br /&gt;
&lt;br /&gt;
In these visualizations, the voxels are used much like grains of sand in a sand castle-- the volume is dense with thousands of tiny voxels, and each is in the shape of a little cube or tetrahedron. Each voxel is assigned an [[Opacity|opacity]] percentage, and often a color, which makes it easier to examine the underlying structure of the volume. This kind of voxel is usually called a &amp;quot;true&amp;quot; 3D voxel. These voxels require a lot of [[Memory|memory]] and computational time, so they are usually pre-rendered, or else displayed at a relatively slow [[FrameRate|frame rate]].&lt;br /&gt;
&lt;br /&gt;
In games, voxels have been optimized to run in real-time, most often by using [[Billboard|billboards]] instead of cubes, and by only displaying the voxels on the surfaces of objects. This optimization is called a [[2.5D]] voxel. Typically these voxels have no transparency, and are made large enough to always overlap one another, which usually gives a slightly rough look to the surface. [http://www.novalogic.com/ NovaLogic's] game Comanche Maximum Overkill was the first to use this technique, creating landscapes that were remarkably detailed at the time. In their latest incarnation of the franchise, they've been able to greatly increase the number of voxels, thereby reducing the jaggedness of the landscape surface.&lt;br /&gt;
&lt;br /&gt;
Using voxels, whether 2.5D or 3D, an object can be displayed with a great amount of detail, independent of the complexity of the object and dependent instead on the number of voxels used to represent it.&lt;br /&gt;
&lt;br /&gt;
= '''W''' =&lt;br /&gt;
&lt;br /&gt;
== Wavelet ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A mathematical formula often used for image and video compression. [http://www.radgametools.com/ Bink] uses wavelets, along with other compression techniques, as an update to its popular Smacker video codec.&lt;br /&gt;
&lt;br /&gt;
= '''Y''' =&lt;br /&gt;
&lt;br /&gt;
== Yaw ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forward again. Straight forward is both 0 and 360 degrees. See also [[Pitch|pitch]], [[Roll|roll]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
&lt;br /&gt;
= '''Z''' =&lt;br /&gt;
&lt;br /&gt;
== Z-Buffer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
An algorithm used in 3-D graphics to determine which objects, or parts of objects, are visible and which are hidden behind other objects. With Z-buffering, the graphics processor stores the Z-axis value of each pixel in a special area of memory called the Z-buffer. Different objects can have the same x- and y-coordinate values, but with different z-coordinate values. The object with the lowest z-coordinate value is in front of the other objects, and therefore that's the one that's displayed. &lt;br /&gt;
&lt;br /&gt;
An alternate algorithm for hiding objects behind other objects is called Z-sorting. The Z-sorting algorithm simply displays all objects serially, starting with those objects furthest back (with the largest Z-axis values). The Z-sorting algorithm does not require a Z-buffer, but it is slow and does not render intersecting objects correctly.&lt;br /&gt;
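The difference between the two hidden-surface approaches above can be sketched in a few lines of Python. This is a minimal illustration, not engine code; the function names and the fragment layout `(x, y, depth, color)` are hypothetical, and real hardware does the depth test per-pixel in parallel.&lt;br /&gt;

```python
# Minimal z-buffer sketch: keep the nearest depth (and its color) per pixel.
# Fragments are (x, y, depth, color); a lower depth means closer to the camera.

def zbuffer_render(fragments, width, height):
    far = float("inf")
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        # A fragment wins only if it is strictly nearer than what is stored.
        if min(z, depth[y][x]) == z and z != depth[y][x]:
            depth[y][x] = z
            color[y][x] = c
    return color

def zsort_render(fragments, width, height):
    # Z-sorting (painter's algorithm): draw back-to-front, so nearer
    # fragments simply overwrite farther ones. No depth buffer needed,
    # but intersecting surfaces come out wrong.
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in sorted(fragments, key=lambda f: -f[2]):
        color[y][x] = c
    return color
```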
&lt;br /&gt;
== Z-Fighting ==&lt;br /&gt;
&lt;br /&gt;
See [[Z-Fighting]]&lt;br /&gt;
&lt;br /&gt;
== Zenith ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The point directly above you in space. The opposite of [[Nadir|nadir]].&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Additive_color_model</id>
		<title>Additive color model</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Additive_color_model"/>
				<updated>2014-11-26T09:37:55Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Created page with &amp;quot;In the additive color model, red, green, and blue (RGB) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In the additive color model, red, green, and blue ([[RGB]]) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light. You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space. &lt;br /&gt;
&lt;br /&gt;
See also [[SubtractiveColorModel|subtractive color model]], [[TextureBlending|texture blending]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Transparency</id>
		<title>Transparency</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Transparency"/>
				<updated>2014-11-26T09:35:33Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: image and formatting fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Additive Transparency =&lt;br /&gt;
The flames on this burning bed are using additive transparency to keep the colors &amp;quot;hot.&amp;quot; &lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
Image:Transparency.gif|in-engine&lt;br /&gt;
Image:flames.gif|map&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See [[additive color model]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Alpha Transparency =&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
Image:crowd.gif|A texture using alpha transparency in RT3D.&lt;br /&gt;
Image:crowd_rgb.gif|The RGB part of the texture file.&lt;br /&gt;
alpha.gif|The alpha channel of the texture file, in 8bit (256 colors).&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Alpha Bit Depths =&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
alpha_8bit.gif|A closeup of the 8bit (256 colors) alpha channel. This is the highest bit depth used for alpha channels, because you can get a full range of grays with 256 colors. If we had a higher bit depth like 16bit (65536 colors), you would see the alpha looking a little bit smoother, but because texture filtering is so common now, it ends up softening your 8bit alpha anyway, and it looks fine.&lt;br /&gt;
alpha_4bit.gif|A closeup of a 4bit (16 colors) version of the alpha channel. Still a lot of detail, but starting to break up some around the edges. This is a much smaller file than the 8bit alpha, which is good because it takes up much less memory. A good trade off.&lt;br /&gt;
alpha_1bit.gif|A closeup of a 1bit (2 colors) version of the alpha channel. 1bit means only black and white, so there's no anti-aliasing. This is a very small file-- the visual quality suffers, but it saves a lot of memory. Not worth the degradation unless you really need the memory.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
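The bit-depth trade-off shown in the gallery above amounts to a requantization step. A minimal Python sketch (the helper name is hypothetical, not a tool the wiki describes):&lt;br /&gt;

```python
# Requantize an 8-bit alpha value (0-255) down to fewer bits, then expand
# it back to the 0-255 range so the coarser steps are visible.

def quantize_alpha(value, bits):
    levels = 2 ** bits - 1          # e.g. 15 steps for 4-bit, 1 for 1-bit
    step = round(value / 255 * levels)
    return round(step * 255 / levels)
```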
&lt;br /&gt;
= Subtractive Transparency =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;traditional&amp;quot;&amp;gt;&lt;br /&gt;
subtractiveT.gif|In Engine&lt;br /&gt;
xray_hand.gif|Map&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The x-rays on this light-table use subtractive transparency to make things under them darker, the way real x-rays do. The subtractive method isn't used all that often, so if you need it you should ask your programmer(s) if they can add it as a specific feature of the engine. &lt;br /&gt;
&lt;br /&gt;
See [[subtractive color model]].&lt;br /&gt;
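The darkening effect described above can be sketched per channel in Python (a hypothetical helper, assuming 8-bit channels; engines implement this as a blend mode):&lt;br /&gt;

```python
# Subtractive blend sketch: the "x-ray" darkens whatever is behind it.
# All values are 0-255 per channel; results clamp at 0.

def subtractive_blend(dst, src):
    return tuple(max(d - s, 0) for d, s in zip(dst, src))
```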
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Transparency</id>
		<title>Transparency</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Transparency"/>
				<updated>2014-11-26T09:17:52Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Additive Transparency =&lt;br /&gt;
&lt;br /&gt;
[[Image:Transparency.gif]]&lt;br /&gt;
&lt;br /&gt;
The flames on this burning bed are using additive transparency to keep the colors &amp;quot;hot.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
See [[additive color model]].&lt;br /&gt;
&lt;br /&gt;
= Alpha Transparency =&lt;br /&gt;
&lt;br /&gt;
= Alpha Bit Depths =&lt;br /&gt;
&lt;br /&gt;
= Subtractive Transparency =&lt;br /&gt;
&lt;br /&gt;
[[Image:subtractiveT.gif]]&lt;br /&gt;
&lt;br /&gt;
The x-rays on this light-table use subtractive transparency to make things under them darker, the way real x-rays do. The subtractive method isn't used all that often, so if you need it you should ask your programmer(s) if they can add it as a specific feature of the engine. &lt;br /&gt;
&lt;br /&gt;
See [[subtractive color model]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Z-Fighting</id>
		<title>Z-Fighting</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Z-Fighting"/>
				<updated>2014-11-26T09:15:02Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: image and formatting fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:Z-fighting.png|thumb|Z-fighting between two coplanar models. &amp;lt;BR&amp;gt; Image by [http://en.wikipedia.org/wiki/User:Mhoskins mhoskins] ]]&lt;br /&gt;
Also called flickering, duplicate geometry, coplanar meshes, shimmering.&lt;br /&gt;
&lt;br /&gt;
Z-fighting is a term used in 3D games to describe two (or more) polygons which are coplanar, or very close to it. Tiny rounding errors mean the geometry appears to flicker, as pixels from one piece or the other appear seemingly at random. &lt;br /&gt;
&lt;br /&gt;
A good example would be a billboard on a brick wall - the player can never see behind the billboard, but the artist might still be tempted to leave the brick wall there for speed, and simply position the billboard quad over the brick wall with a tiny offset. Depending on the z-buffer setup and the hardware it's running on, you might see a strange billboard/brick interference pattern as you move about the scene.&lt;br /&gt;
&lt;br /&gt;
= Solutions =&lt;br /&gt;
Figure out what the minimum distance is for your game:&lt;br /&gt;
# Make a plane primitive and snap it to a large ground model. &lt;br /&gt;
# Move the pivot slightly below the plane, and re-snap. &lt;br /&gt;
# Export to the game and spin the camera. &lt;br /&gt;
# Rinse and repeat until you find the minimum safe distance that you can &amp;quot;float&amp;quot; a plane above another surface without flickering/z-fighting. You can then clone this mesh to create all future &amp;quot;floaters&amp;quot; (posters, litter, decals, etc.)&lt;br /&gt;
&lt;br /&gt;
Some game tools will allow you to manually offset the Z-test values that a model will use. So the model can be coplanar with another model, but it is forced to use slightly different depth values when it is rendered.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Rotational_Axis</id>
		<title>Rotational Axis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Rotational_Axis"/>
				<updated>2014-11-26T09:11:51Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These axis describe movement in space within an [[RT3D]] engine. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Pitch, Yaw, and Roll =&lt;br /&gt;
Pitch is red, yaw is green, and roll is blue.&lt;br /&gt;
&lt;br /&gt;
[[image:pitchyawroll.gif|thumb|Rotational Axis &amp;lt;BR&amp;gt; Image by [http://wiki.polycount.com/EricChadwick Eric Chadwick] ]]&lt;br /&gt;
&lt;br /&gt;
'''Pitch'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. Straight forward is 0 degrees. Also called declination.&lt;br /&gt;
&lt;br /&gt;
'''Yaw'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. Straight forward is both 0 and 360 degrees. Also called azimuth.&lt;br /&gt;
&lt;br /&gt;
'''Roll'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from -180 to 180 degrees, starting out twisted upside-down all the way to the left, turning clockwise and ending twisted upside-down all the way to the right. If you tilt your head to read the spine of a book, this is roll in action. Straight forward with no roll is 0 degrees.&lt;br /&gt;
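The wrap-around conventions above can be sketched as simple normalization helpers in Python (hypothetical names; actual engine conventions vary, as the text notes):&lt;br /&gt;

```python
# Normalize a yaw angle into [0, 360) and a roll angle into [-180, 180),
# matching the degree ranges described above.

def wrap_yaw(degrees):
    return degrees % 360

def wrap_roll(degrees):
    return (degrees + 180) % 360 - 180
```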
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Rotational_Axis</id>
		<title>Rotational Axis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Rotational_Axis"/>
				<updated>2014-11-26T09:11:29Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These axis describe movement in space within an [RT3D] engine. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Pitch, Yaw, and Roll =&lt;br /&gt;
Pitch is red, yaw is green, and roll is blue.&lt;br /&gt;
&lt;br /&gt;
[[image:pitchyawroll.gif|thumb|Rotational Axis &amp;lt;BR&amp;gt; Image by [http://wiki.polycount.com/EricChadwick Eric Chadwick] ]]&lt;br /&gt;
&lt;br /&gt;
'''Pitch'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. Straight forward is 0 degrees. Also called declination.&lt;br /&gt;
&lt;br /&gt;
'''Yaw'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. Straight forward is both 0 and 360 degrees. Also called azimuth.&lt;br /&gt;
&lt;br /&gt;
'''Roll'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from -180 to 180 degrees, starting out twisted upside-down all the way to the left, turning clockwise and ending twisted upside-down all the way to the right. If you tilt your head to read the spine of a book, this is roll in action. Straight forward with no roll is 0 degrees.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Glossary</id>
		<title>Glossary</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Glossary"/>
				<updated>2014-11-26T08:58:29Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Rotational Axis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== 2.5D ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Two and a half-D&amp;quot; is an optimization trick for [[RT3D]] that fools the [[Viewer|viewer]] into thinking they are seeing true 3D graphics. A whole scene (or an object in a scene) is made of 2-dimensional graphics that are scaled and drawn in perspective to look like [[Polygon|polygonal]] graphics, but there are no polygons involved. This is done by projecting displacement information onto a single [[Face|plane]]. Accordingly, this technique is limited to two-point perspective, not allowing any [[Pitch|Y-axis rotation]]. The groundbreaking game Doom is built entirely on this concept. [[Billboard|Billboards]] are 2.5D, but [[Voxel|voxels]] can be either 2.5D or 3D.&lt;br /&gt;
&lt;br /&gt;
Other effects classified as 2.5D can include [[Parallax_Map|parallax]] scrolling effects or orthographic 3/4 perspective (popularized by many real-time strategy and role-playing games).&lt;br /&gt;
&lt;br /&gt;
= '''A''' =&lt;br /&gt;
&lt;br /&gt;
== Additive Blending ==&lt;br /&gt;
&lt;br /&gt;
A [[TextureBlending|texture blending]] method that uses the [[AdditiveColorModel|Additive Color Model]]. The [[Pixel|pixels]] of a [[BaseMap|base map]] and a [[Light map|light map]] are blended together to make a brighter texture.&lt;br /&gt;
&lt;br /&gt;
See also [[AdditiveColorModel|additive color model]], [[AdditiveTransparency|additive transparency]], [[AverageBlending|average blending]], [[BlendInvert|invert blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Additive Color Model ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
In the additive color model, red, green, and blue ([[RGB]]) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light. You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space. See also [[SubtractiveColorModel|subtractive color model]], [[TextureBlending|texture blending]].&lt;br /&gt;
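The spotlight example above can be sketched as per-channel addition in Python (a hypothetical helper, assuming 8-bit channels clamped at white):&lt;br /&gt;

```python
# Additive mixing sketch: overlapping colored lights sum per channel,
# clamped to the 0-255 display range. Red + green + blue = white.

def add_lights(*colors):
    return tuple(min(sum(ch), 255) for ch in zip(*colors))
```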
&lt;br /&gt;
== Additive Transparency ==&lt;br /&gt;
&lt;br /&gt;
A way to calculate the color behind a transparent object, using the [[AdditiveColorModel|Additive Color Model]]. In general, wherever the object is more [[Opacity|opaque]], the brighter the background. If you use an alpha channel to change the transparency, the white areas get brighter, and the black areas have no effect on the background. This works well for flames, explosions, lens flares, etc., because it makes them look cleaner and hotter. If you use [[Material|material]] opacity, it makes the whole background behind the object the same brightness level: the more opaque the object, the brighter the background, and the lower the opacity, the less bright the background. This works well for things like water or prisms or holograms. See also [[AverageTransparency|average transparency]], [[SubtractiveTransparency|subtractive transparency]].&lt;br /&gt;
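The alpha-channel behavior described above (white brightens the background, black leaves it alone) can be sketched in Python. This is a minimal illustration with a hypothetical function name, assuming 8-bit channels and alpha:&lt;br /&gt;

```python
# Additive transparency sketch: the source brightens the background in
# proportion to its alpha; black source pixels leave it unchanged.

def additive_over(dst, src, alpha):
    # alpha is 0-255; 255 adds the full source color on top of dst.
    return tuple(min(d + s * alpha // 255, 255) for d, s in zip(dst, src))
```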
&lt;br /&gt;
== AI ==&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence is a set of computer instructions or algorithms designed to simulate the actions of an intelligent being, to the extent necessary to meet the design requirements of the game. Unlike the AI computer science field, AI in games is much less dependent on accuracy, and uses a variety of tricks and hacks to reduce [[Memory|memory]] use and better serve the design of the game.&lt;br /&gt;
&lt;br /&gt;
== AL ==&lt;br /&gt;
&lt;br /&gt;
Artificial Life. In a nutshell, AL is the antithesis of AI. While AI seeks to simulate real-world behaviour by following a complex series of rules, AL starts with very simple rules for a system and enables complex behaviour to emerge from them. Galapagos from [http://www.anark.com/ Anark] was the first commercial game to use AL.&lt;br /&gt;
&lt;br /&gt;
== Aliasing ==&lt;br /&gt;
&lt;br /&gt;
When edges look jagged instead of smooth, and moiré patterns develop in fine parallel lines. The problem is most prevalent in diagonal lines. Aliasing happens when the engine tries to display an image on a portion of the screen where the resolution is too low to display its details correctly. This is solved with [[Anti-Aliasing|anti-aliasing]], [[Mip Mapping|mip mapping]], or [[Texture Filtering|texture filtering]].&lt;br /&gt;
&lt;br /&gt;
== Alpha Channel ==&lt;br /&gt;
&lt;br /&gt;
An optional channel in the texture file that usually defines the [[Transparency map|transparency]] of the texture’s [[Pixel|pixels]]. &lt;br /&gt;
&lt;br /&gt;
It can also be used for other things, like a [[Height map|height map]] or a grey-scale [[:Category:Specular map|specular map]]. &lt;br /&gt;
&lt;br /&gt;
The alpha is usually anywhere from 1bit up to 8bits, depending on the amount of detail you need. Generally, the lower the [[BitDepth|bit depth]] you use, the more [[Memory|memory]] you save, but the less image quality you get. See [[Transparency map]] for more information, and some visual examples can be found on the Glossary page [[Transparency#Alpha_Bit_Depths]].&lt;br /&gt;
&lt;br /&gt;
Different image channels can be extracted by a [[:Category:Shaders|shader]] to use for particular effects: alpha for physics info, the red channel for glow, the green channel for specular, the blue channel for sound types, etc. In this way, a bitmap can store more information than just RGB color.&lt;br /&gt;
&lt;br /&gt;
== Ancestor ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that is above the current node.&lt;br /&gt;
&lt;br /&gt;
See also [[Child|child]], [[Parent|parent]], [[Node|node]]&lt;br /&gt;
&lt;br /&gt;
== Animatic ==&lt;br /&gt;
&lt;br /&gt;
An animated [[Storyboard|storyboard]]. This is used to refine timing, cameras, composition, etc., when a static storyboard just isn't enough. The animatic is kind of like a slide-show, with zooms and pans added to the storyboard panels to flesh out timing and composition. Where needed, low-resolution animated 3d scenes are used, intermixed with the remaining 2D storyboard panels. Sometimes called a &amp;quot;story reel.&amp;quot; Also called a &amp;quot;Leica reel,&amp;quot; (pronounced LIKE-uh) a term sometimes still used by gray-haired animators.&lt;br /&gt;
&lt;br /&gt;
== Anisotropic Filtering ==&lt;br /&gt;
&lt;br /&gt;
A [[Texture_filtering|texture filtering method]], specifically for non-square filtering, usually of textures shown in radical perspective (such as a pavement texture as seen from a camera lying on the road). More generally, anisotropic filtering improves the clarity of images with severely unequal [[AspectRatio|aspect ratios]]. Anisotropic filtering is an improvement over isotropic mip-mapping, but because it must take many samples, it can be very [[Memory|memory]] intensive.&lt;br /&gt;
&lt;br /&gt;
== Anti-Aliasing ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Anti-aliasing removes the stair-stepping or jaggies which occur at the edges of [[Polygon|polygons]] or between the [[Texel|texels]] on the polygons. It works by [[Interpolation|interpolating]] the [[Pixel|pixels]] at the edges to make the difference between two color areas less dramatic. See also [[Aliasing|aliasing]], [[Mip_Mapping|MIP mapping]], [[Texture_filtering|texture filtering]].&lt;br /&gt;
&lt;br /&gt;
== ASCII ==&lt;br /&gt;
&lt;br /&gt;
The American Standard Code for Information Interchange (pronounced &amp;quot;ASS-key&amp;quot;) is an encoding for text, based on the English alphabet. A more universal encoding is Unicode, which allows for the display of many more characters.&lt;br /&gt;
&lt;br /&gt;
== Aspect Ratio ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A number that describes the shape of a rectangular texture, whether it's tall or wide. To get the aspect ratio, either divide the width by the height, or write it out as width:height. Aspect ratio helps you decide what kinds of changes need to be done to an image to get it to display correctly, like when you have to scale the image. Aspect ratio gets kind of complex when you have to deal with non-square pixels and other oddities-- not very important to the artist. See also [[AnisotropicFiltering|anisotropic filtering]].&lt;br /&gt;
&lt;br /&gt;
== Average Blending ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], average is when the colors of the [[BaseMap|base map]] and the [[Light map|light map]] are blended together evenly. This helps when you don't want the light map to brighten or darken the base map, like when you are placing a decal of a crack on a wall to make it looked cracked. See also [[AdditiveBlending|additive blending]], [[AverageTransparency|average transparency]], [[BlendInvert|invert blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Average Transparency ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Colors are mixed together evenly to create a new color. Sometimes called filter transparency. See also [[AdditiveTransparency|additive transparency]], [[AverageBlending|average blending]], [[SubtractiveTransparency|subtractive transparency]].&lt;br /&gt;
&lt;br /&gt;
== Azimuth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the two rotational axes used by astronomers, and also by [[RT3D]] programmers. Azimuth is similar to [[Yaw|yaw]]-- if you shake your head &amp;quot;no,&amp;quot; you are rotating in azimuth. Most [[Engine|engines]] count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise all the way around, and ending pointed straight forward again. Straight forward is both 0 and 360 degrees, and straight backwards is 180 degrees. Sometimes mathematicians measure it in radians, a unit that comes from the number Pi: one degree equals Pi/180 radians (a radian is about 57.3 degrees), but most people use degrees instead of radians.&lt;br /&gt;
&lt;br /&gt;
Astronomers use only two axes (the other axis is [[Declination|declination]]), because they do not use [[Roll|roll]]. This is similar to the way most [[FPS|FPS]] games work. It is preferred in some RT3D engines because it keeps the viewer level no matter where they point. But this system has a drawback: it suffers from gimbal lock whenever the [[Viewer|viewer]] points either straight up or straight down. The Quake games, for instance, avoid [[GimbalLock|gimbal lock]] by not allowing the viewer to point all the way up or down.&lt;br /&gt;
&lt;br /&gt;
= '''B''' =&lt;br /&gt;
&lt;br /&gt;
== B-Spline ==&lt;br /&gt;
&lt;br /&gt;
A way to make a curved line with very few points. It has control points with equal weights to adjust the shape of the curve. The control points rarely reside on the curve itself, because the curve is an average of the points. For instance, if you make four control points in the shape of a square, the resulting curve will be roughly circular, inscribed within that square, because the curve is pulled inward as it tries to average out the weights of all four points. Different from [[BezierSpline|bezier splines]], which use control points that always touch the curve, and handles that help you adjust the curve.&lt;br /&gt;
&lt;br /&gt;
== Backface Culling ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The process of removing the unseen polygons that face away from the [[Viewer&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;|viewer]]. This can dramatically speed up the [[Rendering|rendering]] of a [[Polygon|polygonal]] scene, since they take up valuable processing power. Also called Backface Removal or Back Culling. It is one of the methods of [[HiddenSurfaceRemoval|Hidden Surface Removal]].&lt;br /&gt;
&lt;br /&gt;
== Base Map ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], this is the main [[Texture_types|texture]] used on the [[Polygon|polygon]]. One or more additional textures are blended with the base map to create a new texture. See also [[DarkMap|dark map]], [[LightMap|light map]].&lt;br /&gt;
&lt;br /&gt;
== Bézier Spline ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another way to make a curved line with very few points. Named after the French mathematician Pierre Bézier (pronounced BEZ-ee-ay), these curves employ at least three points to define a curve. The two endpoints of the curve are called anchor points. The other points, which define the shape of the curve, are called handles, tangent points, or nodes. Attached to each handle are two control points. By moving the handles and the control points, you end up having a lot of control over the shape of the curve. Different from [[B-Spline|b-splines]], which use control points that don't necessarily touch the curve.&lt;br /&gt;
&lt;br /&gt;
== Bilinear Filtering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A method of [[Mip_Mapping|MIP mapping]]. Since the [[Texel|texels]] are almost always larger or smaller than the screen [[Pixel|pixels]], it tries to find a MIP-map with texels that are closest in size to the screen pixels. Then it [[Interpolation|interpolates]] the four texels that are the nearest to each screen pixel in order to render each new screen pixel. See also [[TrilinearFiltering|trilinear filtering]].&lt;br /&gt;
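The four-texel interpolation described above can be sketched in Python. This is a minimal, hypothetical helper for a single channel; real filtering hardware also selects the MIP level first:&lt;br /&gt;

```python
# Bilinear filtering sketch: blend the four texels nearest the sample
# point, weighted by the fractional position (fx, fy) between them.
# t00/t10 are the top pair of texels, t01/t11 the bottom pair.

def bilinear(t00, t10, t01, t11, fx, fy):
    top = t00 * (1 - fx) + t10 * fx
    bottom = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bottom * fy
```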
&lt;br /&gt;
== Billboard ==&lt;br /&gt;
&lt;br /&gt;
Billboard is a term commonly used in games to describe a camera-facing plane. This is a polygon with a texture using [[Transparency map|transparency]], that rotates on various axes to always face the viewer. &lt;br /&gt;
&lt;br /&gt;
Billboards are commonly used for particles, far-away [[:Category:EnvironmentFoliage|trees]], [[GrassTechnique|grass]], clouds, [[:Category:UserInterface|UI elements]], etc. This is a common device to get more detail in objects without using a lot of polygons. However, they are flat and may look strange when you move around them. &lt;br /&gt;
&lt;br /&gt;
Billboards can be set up to rotate on only one [[RotationalAxis|axis]] or on combinations of axes. For example, a tree might be set to rotate only on its vertical axis, so it always appears grounded. However a puff of smoke usually looks better if it always stays upright and flat to the view. &lt;br /&gt;
&lt;br /&gt;
One cool trick is to use a curved billboard. This helps give more depth to a single-axis billboard such as a tree: because you can look down on it, the extra depth helps get rid of that flat look. &lt;br /&gt;
&lt;br /&gt;
Also called [[Sprite]], Imposter, Flag, Kite, Stamp.&lt;br /&gt;
&lt;br /&gt;
== Bit Depth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bitdepth is used to denote how many colors a game or application needs to function properly, which is the number of colors plus other channels like [[AlphaChannel|alpha]]. The more colors, the smoother the image. 1-bit is two colors-- black and white. 2-bit is four colors, 4-bit is sixteen, 8-bit is 256 colors, etc. &lt;br /&gt;
The number of colors at any bit depth = 2^(bit depth). For instance, 8-bit = 2^8 = 2x2x2x2x2x2x2x2 = 256 colors.&lt;br /&gt;
If a game uses alpha or [[HeightMap|heightmaps]], then they are additional channels that add to the bit depth number. 16-bit is 65536 colors without any additional channels, but if you want to add an alpha channel, then those 16 bits are divided between [[RGB]] and alpha, which can be done a number of ways. For example, if you use 4 bits each for red, green, and blue, that leaves 4 bits for the alpha, giving you 16 levels of alpha. Or else 5 bits for each of RGB leaves 1 bit for alpha, which is only 2 levels, but gives you more RGB colors to work with. It's a trade-off.&lt;br /&gt;
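The arithmetic above can be sketched directly in Python (hypothetical names; the 4/4/4/4 and 5/5/5/1 splits are the examples from the text):&lt;br /&gt;

```python
# Bit-depth arithmetic sketch: colors at a given depth, and how a 16-bit
# pixel format can trade RGB precision for alpha levels.

def color_count(bits):
    return 2 ** bits

# 16 bits split as RGBA 4/4/4/4: 16 alpha levels, 16 shades per RGB channel.
rgba4444_alpha_levels = color_count(4)
# 16 bits split as RGBA 5/5/5/1: only 2 alpha levels, but more RGB shades.
rgba5551_alpha_levels = color_count(1)
```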
&lt;br /&gt;
== Bitmap ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another name for a [[Texture_types|texture]].&lt;br /&gt;
&lt;br /&gt;
== Blending (Inverted) ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], invert is when the light map inverts the [[RGB]] color of the base map. See also [[AdditiveBlending|additive blending]], [[AverageBlending|average blending]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== Blending (Subtractive) ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When using [[TextureBlending|texture blending]], subtractive is when the colors of the [[DarkMap|dark map]] are subtracted from the colors of the [[BaseMap|base map]] to make a new darker texture. See also [[AdditiveBlending|additive blending]], [[AverageBlending|average blending]], [[BlendInvert|invert blending]].&lt;br /&gt;
&lt;br /&gt;
== Bounding Box ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A simplified approximation of the volume of a model, used most commonly for [[CollisionDetection|collision detection]]. Although it is commonly a box, it can be any shape. If not a box, it is more properly called a bounding volume.&lt;br /&gt;
&lt;br /&gt;
== BPC ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bits Per Channel is another term for [[BitDepth|bitdepth]].&lt;br /&gt;
&lt;br /&gt;
== BPP ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Bits Per Pixel is almost the same as [[BitDepth|bitdepth]], but BPP counts only the [[RGB]] component. It tells us how many colors are used in the RGB part of the image. For example, 16BPP = 2^16 = 65536 colors. The term bitdepth is more commonly used to denote what a game or application needs to function properly, which is the number of colors plus other channels like alpha. For example, a 32bit bitdepth doesn't mean 2^32 colors; instead, it means a 24bit RGB space + an 8bit alpha. To make things clearer, instead of the term bitdepth people are starting to use the term BPC, or Bits Per Channel.&lt;br /&gt;
&lt;br /&gt;
== BSP ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Binary Space Partition. A BSP tree subdivides 3D space with 2D planes to help speed up [[Sorting|sorting]]. It is sometimes used for additional purposes like [[CollisionDetection|collision detection]].&lt;br /&gt;
&lt;br /&gt;
= '''C''' =&lt;br /&gt;
&lt;br /&gt;
== Camera ==&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
In [[RT3D]], this is another word for the [[Viewer|viewer's]] view into the scene.&lt;br /&gt;
&lt;br /&gt;
== Channel Packing ==&lt;br /&gt;
&lt;br /&gt;
See [[ChannelPacking|Channel Packing]]&lt;br /&gt;
&lt;br /&gt;
== Child ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that is attached to another node.&lt;br /&gt;
&lt;br /&gt;
== Chroma Key ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A [[BitDepth|1-bit]] transparency. In the film &amp;amp; video worlds, chromakey means to &amp;quot;key&amp;quot; to a particular color and make it transparent, like with bluescreening. In [[RT3D]], it means to make a particular [[RGB]] color in a texture 100% transparent, and all other colors 100% opaque. It's a cheap way to get transparency, since you don't need an [[AlphaChannel|alpha channel]]. However it means the transparency has a rough pixelized edge. A good color to designate for chromakey is one that you won't be using elsewhere in your textures, like magenta (1,0,1). Whatever the color you decide, all textures in the engine will use that same color for chromakey. See also [[Sprite|sprites]].&lt;br /&gt;
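&lt;br /&gt;
A hedged Python sketch of the idea, using magenta as the designated key color (the choice of color, and the flat pixel-list format, are illustrative only):&lt;br /&gt;

```python
MAGENTA = (255, 0, 255)  # designated key color (an assumption for this sketch)

def apply_chroma_key(pixels, key=MAGENTA):
    """1-bit transparency: every pixel matching the key color becomes fully
    transparent (alpha 0); every other pixel becomes fully opaque (alpha 255)."""
    return [(r, g, b, 0 if (r, g, b) == key else 255) for (r, g, b) in pixels]
```

Because each pixel is either fully opaque or fully transparent, the edge of the cut-out is hard and pixelized, exactly as described above.&lt;br /&gt;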
&lt;br /&gt;
== Clipping Plane ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A clipping plane throws away [[Polygon|polygons]] on the other side of it. This can dramatically speed up the [[Rendering|rendering]] of a polygonal scene, since unneeded polygons can take up valuable processing power. See also [[BackfaceCulling|backface culling]], [[Frustrum|frustum]], [[HiddenSurfaceRemoval|hidden surface removal]].&lt;br /&gt;
&lt;br /&gt;
== Collision Detection ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A constant drain on a 3D game engine: the systematic check to see if any intersections are occurring between significant polygons, such as my player's sword and your player's neck. See also [[BoundingBox|bounding box]].&lt;br /&gt;
&lt;br /&gt;
== Color Depth ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The number of colors in an image, same as [[BitDepth|bit depth]].&lt;br /&gt;
&lt;br /&gt;
== Color Models ==&lt;br /&gt;
&lt;br /&gt;
See [[ColorModels|Color Models]]&lt;br /&gt;
&lt;br /&gt;
== Compile ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Programmers have to convert source code, written in a high-level language such as C, into object code so that a microprocessor can run the program. This is similar to the process artists go through in order to create game art-- I create textures in Photoshop, saving them in PSD format, then I convert them down to the appropriate bitmap format for use in the game. However, it can take a long time for programmers to compile their code-- overnight or even a couple of days, depending on the complexity.&lt;br /&gt;
&lt;br /&gt;
== Concave ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A certain kind of shape, the opposite of convex. If any two points inside the shape can be connected by a line that goes outside the shape, the shape is concave. The letter C is a concave shape. An easy way to remember concave is to think of a hill with a cave in it. If you draw a straight line between a gold nugget in the floor and a diamond in the ceiling, the line is no longer inside the hill (it is in the air, not in the earth). The word concave means &amp;quot;with a cavity.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Convex ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A certain kind of shape, without indentations, opposite of concave. If any two points can be connected by a line that goes outside the shape, the shape is not convex. The letter O is a convex shape. An uncut watermelon is a convex shape-- anytime you draw a straight line between two seeds, you're still inside the watermelon.&lt;br /&gt;
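&lt;br /&gt;
For 2D polygons, convexity can be tested by checking that every turn between consecutive edges bends the same way; a Python sketch (function and variable names are illustrative):&lt;br /&gt;

```python
def is_convex(polygon):
    """True if a simple 2D polygon (list of (x, y) vertices in winding
    order) is convex. Convex means every turn between consecutive edges
    has the same cross-product sign; a sign flip means an indentation."""
    n = len(polygon)
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        # z component of the cross product of edge AB and edge BC
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction flipped: concave
    return True
```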
&lt;br /&gt;
== Coordinate ==&lt;br /&gt;
&lt;br /&gt;
See [[Coordinate]]&lt;br /&gt;
&lt;br /&gt;
== Coplanar ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When two or more things are on the same 2D plane. If two pieces of paper are sitting side by side flat on your desk, they're coplanar.&lt;br /&gt;
&lt;br /&gt;
= '''D''' =&lt;br /&gt;
&lt;br /&gt;
== Dark Map ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The texture used to [[Blend|blend]] with the [[BaseMap|base map]] to create a new, darker texture. See also [[Light map|light map]], [[BlendSubtract|subtractive blending]].&lt;br /&gt;
&lt;br /&gt;
== DDS ==&lt;br /&gt;
&lt;br /&gt;
See [[DDS]]&lt;br /&gt;
&lt;br /&gt;
== Decal ==&lt;br /&gt;
&lt;br /&gt;
See [[Decal]]&lt;br /&gt;
&lt;br /&gt;
== Declination ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the two rotational axes used by astronomers, and also by [[RT3D]] programmers. Declination is similar to [[Pitch|pitch]]-- if you nod your head &amp;quot;yes,&amp;quot; you are rotating in declination. Most [[Engine|engines]] count from +90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. Straight forward is 0 degrees. Mathematicians often measure angles in radians instead, a unit that comes from the number Pi: one degree equals Pi/180 radians (so one radian is about 57.3 degrees), but most people use degrees instead of radians.&lt;br /&gt;
&lt;br /&gt;
Astronomers use only two axes (the other axis is [[Azimuth|azimuth]]), because they do not use [[Roll|roll]]. This is similar to the way most [[FPS|FPS]] games work. It is preferred in some RT3D engines because it keeps the [[Viewer|viewer]] level no matter where it points. But this system has a drawback because it suffers from [[GimbalLock|gimbal lock]] whenever the viewer points either straight up or straight down.&lt;br /&gt;
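&lt;br /&gt;
The degree/radian conversion and the +90 to -90 clamp described above can be sketched in Python:&lt;br /&gt;

```python
import math

def degrees_to_radians(deg):
    # One degree equals pi/180 radians; a full circle is 2*pi radians.
    return deg * math.pi / 180

def clamp_declination(deg):
    # Keep declination in the usual +90 (straight up) to
    # -90 (straight down) range; straight forward is 0.
    return max(-90.0, min(90.0, deg))
```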
&lt;br /&gt;
== Displacement map ==&lt;br /&gt;
&lt;br /&gt;
See [[Displacement Map]]&lt;br /&gt;
&lt;br /&gt;
== Draw Order ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The back-to-front order in which [[Polygon|polygons]] are drawn on top of one another, with the last polygon to draw appearing in front of the others. Usually closer polygons should draw later, and [[AlphaChannel|alpha channeled]] polygons should draw last. Some programs allow you to place polygons under a [[Node|node]] so the [[Engine|engine]] will draw them in a specific order and assure that alpha polys draw last.&lt;br /&gt;
&lt;br /&gt;
== Draw Rate ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Same as [[FillRate|fill rate]].&lt;br /&gt;
&lt;br /&gt;
== Dynamic Lighting ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Lighting, but updated every frame, as you'd want for moving lights and for characters that move amongst various light sources. Perhaps the lighting itself animates color and intensity.&lt;br /&gt;
&lt;br /&gt;
= '''E''' =&lt;br /&gt;
&lt;br /&gt;
== Engine ==&lt;br /&gt;
&lt;br /&gt;
The heart of the [[RT3D]] program, the engine is what converts all the game assets into a display on your computer screen. It is a software program written by one or more programmers, and has several components that need to work in harmony to display the game at a nice [[FrameRate|frame rate]]. [what are the components?]&lt;br /&gt;
&lt;br /&gt;
== Environment Map ==&lt;br /&gt;
&lt;br /&gt;
A method of [[Texturing|texture]] mapping that simulates the look of reflections in a shiny surface, like chrome or glass. The texture is usually painted to look like an all-encompassing world-- for example, a sky texture mapped onto a model to make it appear reflective. This can also be used to fake the look of specular highlights on an object, by painting only the light sources into the environment texture. &lt;br /&gt;
Also called reflection mapping.&lt;br /&gt;
&lt;br /&gt;
A common method is a [[Cube map]] which maps six textures of a panorama to the faces of a cube.&lt;br /&gt;
&lt;br /&gt;
== Expression ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A formula used to create procedural animation, often using mathematical elements like sine and cosine. Expressions can automate and greatly speed up repetitive animation tasks. Also sometimes called scripting.&lt;br /&gt;
&lt;br /&gt;
= '''F''' =&lt;br /&gt;
&lt;br /&gt;
== Face ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another name for a [[Polygon|polygon]]. This term can be used specifically for a single [[Triangle|triangle]] or more generally for a multi-triangle polygon.&lt;br /&gt;
&lt;br /&gt;
== Fan ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the ways that [[Triangle|triangles]] can be created to reuse [[Vertex|vertex]] [[Transforms|transforms]], and thus save [[Memory|memory]] and also [[Render|render]] time. Once you have drawn one triangle, the next triangle only needs to load the [[Coordinate|coordinate]] of one additional vertex in order to draw itself, because it re-uses the vertex transforms that were already performed on its neighbor triangle. But your [[Engine|engine]] must specifically support fans for it to work. See also [[Strip|strips]].&lt;br /&gt;
&lt;br /&gt;
== Fill Rate ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The number of [[Texture_types|textured]] and [[Shading|shaded]] [[Pixel|pixels]] that an [[Engine|engine]] can [[Render|render]] over a given time period, usually measured in millions of pixels per second (MPPS). [How does this affect RT3D artists? Need to elaborate]&lt;br /&gt;
&lt;br /&gt;
See also [[FrameRate|frame rate]]&lt;br /&gt;
&lt;br /&gt;
== Fog ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Objects in the [[RT3D]] scene become more and more the same color as they recede into the distance. This is similar to real fog, except that RT3D fog is a perfect gradient, whereas real fog usually has some wispy unevenness to it. Heavy fogging in RT3D is used to disguise the far [[ClippingPlane|clipping plane]]. The game Turok is a well-known example, and not a good one-- you can see the polygons being clipped just behind the monsters' heads, because the fog doesn't completely hide the clipping plane. Besides, heavy fogging is generally looked down upon, because it shortens the distance at which you can see enemies. &lt;br /&gt;
&lt;br /&gt;
In some of the newer [[Engine|engines]], volume fog and ground fog are supported, where the fog is localized to a specific area, which is closer to the behavior of real fog.&lt;br /&gt;
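&lt;br /&gt;
The basic &amp;quot;same color with distance&amp;quot; gradient is just a linear blend; a Python sketch (the start/end distances are illustrative parameters, not from any particular engine):&lt;br /&gt;

```python
def linear_fog(surface_rgb, fog_rgb, distance, fog_start, fog_end):
    """Blend a surface color toward the fog color as distance grows.
    The fog factor f is 0 at fog_start (no fog) and 1 at fog_end
    (fully fogged), clamped outside that range."""
    f = max(0.0, min(1.0, (distance - fog_start) / (fog_end - fog_start)))
    return tuple(s * (1 - f) + g * f for s, g in zip(surface_rgb, fog_rgb))
```

Setting fog_end at the far clipping plane makes clipped geometry fade out instead of popping.&lt;br /&gt;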
&lt;br /&gt;
== Forward Kinematics ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
FK for short. A method of manipulating objects in a [[Hierarchy|hierarchy]] where the animator positions objects in the hierarchy and the program calculates the positions and orientations of the objects below it in the hierarchy. See also [[InverseKinematics|inverse kinematics]].&lt;br /&gt;
&lt;br /&gt;
== FOV ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Field Of View is a traditional photography term, meaning the area that the [[Viewer|viewer]] can see in the [[RT3D]] scene. The FOV is usually defined by its width in degrees. A typical FOV in RT3D games is around 45 degrees. A very wide FOV like 90 degrees gives the viewer a panoramic view, but can seriously lower the frame rate because so many polygons are visible. It also produces distortions in the scene, where objects in the center of the view appear to be farther away than they are, and objects at the periphery appear to be very close. The trick in RT3D is to lower the FOV as much as possible without impairing too much of the view. See also [[Frustrum|frustum]].&lt;br /&gt;
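&lt;br /&gt;
The relationship between FOV and how much of the scene is visible is simple trigonometry; a Python sketch:&lt;br /&gt;

```python
import math

def visible_width(fov_degrees, distance):
    """Width of the scene visible at a given distance for a horizontal FOV.
    The view spreads out by tan(fov/2) on each side of the center line,
    so a wider FOV pulls more geometry (and more polygons) into view."""
    half_angle = math.radians(fov_degrees / 2)
    return 2 * distance * math.tan(half_angle)
```

At 90 degrees the visible width equals twice the distance, which is why such a wide view puts so many polygons on screen.&lt;br /&gt;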
&lt;br /&gt;
== FPS ==&lt;br /&gt;
&lt;br /&gt;
See [[FPS]]&lt;br /&gt;
&lt;br /&gt;
== Frame Buffer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The frame buffer is what a video board uses to store the images it [[Render|renders]], while it is rendering them. When it is done rendering, it sends the completed frame to your monitor and starts building the next frame. The amount of frame buffer [[Memory|memory]] a video board has directly impacts which resolutions it can support-- the more memory you've got the higher resolutions your board will support and at higher [[BitDepth|bit depths]]. &lt;br /&gt;
&lt;br /&gt;
The frame buffer usually stores 2 frames: one is being calculated by the 3D accelerator while the other is being sent to the monitor. This is called double buffering, and it delivers smooth animation. For 640x480 resolution with 16bit color we require 640x480x16x2 = 9830400 bits of memory, or about 1.2 MB of frame buffer memory.&lt;br /&gt;
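&lt;br /&gt;
The memory arithmetic above can be sketched in Python:&lt;br /&gt;

```python
def frame_buffer_bytes(width, height, bits_per_pixel, buffers=2):
    """Frame-buffer memory needed for double buffering (two full frames):
    one frame per buffer, bits_per_pixel bits for each screen pixel."""
    bits = width * height * bits_per_pixel * buffers
    return bits // 8  # eight bits per byte

# 640x480 at 16-bit color, double buffered:
# 640*480*16*2 = 9,830,400 bits = 1,228,800 bytes (about 1.2 MB)
```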
&lt;br /&gt;
== Frame Rate ==&lt;br /&gt;
&lt;br /&gt;
The framerate measures how fast the game is being rendered. &lt;br /&gt;
&lt;br /&gt;
Generally if the framerate is lower than 30 frames per second, then the game will seem &amp;quot;choppy&amp;quot; and unresponsive.&lt;br /&gt;
&lt;br /&gt;
See [[PolygonCount#Articles_About_Performance]].&lt;br /&gt;
&lt;br /&gt;
== Frustrum ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A term from traditional photography, it is the shape that the [[Viewer|viewer's]] [[FOV]] creates when projected into the scene. This pyramidal shape defines the limits of what the viewer can see, so it is used to calculate [[ClippingPlane|clipping]], [[CollisionDetection|collisions]], fog, etc. &lt;br /&gt;
&lt;br /&gt;
Imagine your eyeball stuck to the top of a clear pyramid, looking straight down into it. The top-most point that is scratching against your eyeball is the near end of the frustum. You cannot see the side faces of the pyramid, because they are perpendicular to your eyeball, but you can see everything within the pyramid. In photography, this pyramid is infinitely tall, because in theory you can see to infinity. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the frustum is not infinitely long. It has near and far clipping planes to help reduce polygon counts. The near clipping plane cuts off the very top of the pyramid, while the far clipping plane defines the base of the pyramid.&lt;br /&gt;
&lt;br /&gt;
== FUBAR ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Fucked Up Beyond All Repair, or more politely, Fouled Up Beyond All Recognition.&lt;br /&gt;
&lt;br /&gt;
== Full Bright ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When a color or a [[Texture_types|texture]] is drawn at its full intensity, unaffected by real-time lighting. Some [[RT3D]] [[Engine|engines]] do not have lighting or [[DynamicLighting|dynamic lighting]], so all textures are drawn full-bright. Sometimes also called fully-emissive, or self-illuminated.&lt;br /&gt;
&lt;br /&gt;
= '''G''' =&lt;br /&gt;
&lt;br /&gt;
== Gamma ==&lt;br /&gt;
&lt;br /&gt;
See [[Gamma]]&lt;br /&gt;
&lt;br /&gt;
== Geometry ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
This term is commonly used to define all [[Polygon|polygonal]] objects in a game. Also called mesh.&lt;br /&gt;
&lt;br /&gt;
== Gimbal Lock ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The rotation of an object seems to &amp;quot;stick,&amp;quot; when it is rotated all the way down or all the way up. This can also happen with the [[Camera|camera]] itself. It happens because the mathematics in rotational systems like Euler angles cannot make a consistent rotation solution when they point straight up or straight down. The Quake games avoid gimbal lock by not allowing the viewer to look straight up or straight down.&lt;br /&gt;
&lt;br /&gt;
To solve gimbal lock, either avoid looking directly down or directly up, or else use a different rotational system.&lt;br /&gt;
&lt;br /&gt;
== Gouraud Shading ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A [[Shading|shading]] method, named after the French mathematician Henri Gouraud (pronounced on-REE grrr-ROW). Each [[Triangle|triangle's]] color is created by [[Interpolation|interpolating]] the [[Vertex|vertex]] colors that are located at each corner of the triangle. In other words, the interior of each triangle is a smooth gradient between the colors of the three vertices. The vertex colors are usually created dynamically from the lighting in the scene, although the artist can instead assign specific colors to each vertex. &lt;br /&gt;
&lt;br /&gt;
Gouraud shading has a smooth look, but can look strange when using polygons with solid colors on them.&lt;br /&gt;
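&lt;br /&gt;
The per-pixel gradient can be sketched with barycentric weights, a standard way of describing a point inside a triangle as a mix of its three corners (this Python helper is illustrative, not any engine's actual rasterizer):&lt;br /&gt;

```python
def gouraud_color(bary, c0, c1, c2):
    """Interpolate three vertex colors across a triangle.

    bary is the (w0, w1, w2) barycentric weight of the pixel: how close
    it is to each corner. The weights sum to 1 inside the triangle, so
    the result is a smooth gradient between the three vertex colors.
    """
    w0, w1, w2 = bary
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))
```

A pixel exactly on an edge gets weight 0 for the opposite vertex, so its color blends only the edge's two endpoints.&lt;br /&gt;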
&lt;br /&gt;
= '''H''' =&lt;br /&gt;
&lt;br /&gt;
== Height Map ==&lt;br /&gt;
&lt;br /&gt;
A grayscale texture used as a displacement map to define the topography of the polygons. Usually the brighter pixels make higher elevations, and the darker pixels make lower elevations, and 50% gray pixels make no change. Often the [[AlphaChannel|alpha channel]] of an object's texture is used for the height map. [[Voxel|Voxel]] landscapes often use height maps.&lt;br /&gt;
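&lt;br /&gt;
A hedged Python sketch of the grayscale-to-elevation mapping (the exact scale and midpoint vary by engine; this normalizes an 8-bit value so black lowers, white raises, and 50% gray is roughly no change):&lt;br /&gt;

```python
def height_from_gray(gray, scale=1.0):
    """Map an 8-bit grayscale value (0-255) to a signed elevation offset.
    0 -> -scale (lowest), 255 -> +scale (highest), mid-gray -> about 0."""
    return (gray / 255.0 * 2.0 - 1.0) * scale
```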
&lt;br /&gt;
== Hidden Surface Removal ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Techniques for avoiding the drawing of surfaces that are hidden behind other surfaces, since rendering them would waste processing time. See also [[BackfaceCulling|backface culling]], [[ClippingPlane|clipping plane]].&lt;br /&gt;
&lt;br /&gt;
== Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A list of things that are linked together in a certain order. A family tree is used as an analogy to help describe the parts of the hierarchy. However, the hierarchy tree traditionally hangs upside down, to make it easier to read and to use. At the top of the tree is the root, and all things are attached to it. Every object in the hierarchy is called a node. The connections between the nodes are called links. A node linked to another is called a child, and the node the child is linked to is called a parent. A parent can have multiple children, but in this tree each child can have only one parent. The nodes that have no children are called leaves, because they're at the ends of the tree. &lt;br /&gt;
&lt;br /&gt;
Each node can trace its lineage up through the tree, back though parents to the root. If you choose any parent, then you can call its collection of children a branch of the tree.&lt;br /&gt;
&lt;br /&gt;
= '''I'''=&lt;br /&gt;
&lt;br /&gt;
== Interpolation ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
It is the process of determining from two or more values what the &amp;quot;in-between&amp;quot; values should be. Interpolation is used with animation and with [[Texture_types|textures]], particularly [[Texture_filtering|texture filtering]].&lt;br /&gt;
&lt;br /&gt;
== Inverse Kinematics ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
IK for short. A method of manipulating [[Hierarchy|hierarchies]] where the animator positions objects at the end of the hierarchy and the program calculates the positions and orientations of all other objects in the hierarchy. With a properly set-up IK rig, you can quickly animate complex motions. For instance, the bones in the arm of a character are linked in a hierarchy, limits are set for the rotations of the bones, and then the animator can move the hand and the IK will figure out what the rest of the arm needs to do.&lt;br /&gt;
&lt;br /&gt;
IK is used in RT3D to allow characters to interact with the environment in a more realistic manner, like when a player directs a character to pick an object off the floor. See also [[ForwardKinematics|forward kinematics]].&lt;br /&gt;
&lt;br /&gt;
= '''L'''=&lt;br /&gt;
&lt;br /&gt;
== Light Map ==&lt;br /&gt;
&lt;br /&gt;
See [[Light Map]]&lt;br /&gt;
&lt;br /&gt;
== LOD ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Stands for [[LevelOfDetail|Level Of Detail]]. This is used primarily to reduce polygon count in a scene, especially when you have multiple characters, like in a sports game. A lower polygon count model of the character is used when the character is far away, and when the character gets closer, it switches to a different model with a higher polygon count. That example uses only two LODs, but you can have multiple LODs to help reduce the &amp;quot;popping&amp;quot; effect that makes it obvious the character is switching from a low-count to a high-count model. You want the fewest polygons on screen at a time, so there are other factors besides distance: how fast the object is moving, the viewer's anticipated focus, and the importance of the character.&lt;br /&gt;
&lt;br /&gt;
= '''M''' =&lt;br /&gt;
&lt;br /&gt;
== Map ==&lt;br /&gt;
&lt;br /&gt;
== Mapping ==&lt;br /&gt;
&lt;br /&gt;
== Material ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A set of parameters that determine the color, shininess, smoothness, etc. of a surface. Usually a material is used to assign a [[Texture_types|texture]] to a [[Face|face]].&lt;br /&gt;
&lt;br /&gt;
== Memory ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the most fundamental [[RT3D]] concepts, memory is the amount of quickly-retrievable space available for assets currently being used by the game. This includes textures, geometry, geometry animation, interface artwork, AI, etc. RT3D applications use a number of different types of memory, but RT3D artists are mostly concerned with [[RAM]]. The biggest question for the artist is how much RAM is available for textures. (Need to cross-reference RAM, Video mem, Texture Mem, etc.).&lt;br /&gt;
&lt;br /&gt;
== Mesh ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Another word for [[Geometry|geometry]]. See also [[Polygon|polygon]].&lt;br /&gt;
&lt;br /&gt;
== Mip Mapping ==&lt;br /&gt;
&lt;br /&gt;
See [[Mip Mapping]]&lt;br /&gt;
&lt;br /&gt;
== Morph ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
An animated 2D or 3D effect that makes one [[Texture_types|texture]] or [[Geometry|geometry]] smoothly transform into another. Often used to do 3D facial animation. Comes from the word metamorphosis.&lt;br /&gt;
&lt;br /&gt;
== Multi-Texture ==&lt;br /&gt;
&lt;br /&gt;
See [[MultiTexture]]&lt;br /&gt;
&lt;br /&gt;
= '''N''' =&lt;br /&gt;
&lt;br /&gt;
== N-gon ==&lt;br /&gt;
&lt;br /&gt;
Another word for a [[Polygon|polygon]]. The letter N stands for any whole number. In other words, any polygon with &amp;quot;n&amp;quot; number of sides.&lt;br /&gt;
&lt;br /&gt;
== Nadir ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The point directly below you in space. The opposite of [[Zenith|zenith]].&lt;br /&gt;
&lt;br /&gt;
== NDO ==&lt;br /&gt;
&lt;br /&gt;
See [[NDO]]&lt;br /&gt;
&lt;br /&gt;
== Node ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Each single object in a [[Hierarchy|hierarchy]] is called a node. Without nodes there can be no hierarchy-- there's nothing to link together. Also called a bead.&lt;br /&gt;
&lt;br /&gt;
== NURBS ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Short for Non-Uniform Rational B-Spline, a mathematical representation of a 3-dimensional object. Most CAD/CAM applications support NURBS, which can be used to represent analytic shapes, such as cones, as well as free-form shapes, such as car bodies. NURBS are not used much for [[RT3D]] because they create a large number of [[Polygon|polygons]], but it is an available modeling method in some artist 3D packages.&lt;br /&gt;
&lt;br /&gt;
= '''O''' =&lt;br /&gt;
&lt;br /&gt;
== Opacity ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Opacity (opaque) is the opposite of transparency (transparent). Opacity can mean that something is partially transparent-- because you can change an object's opacity using a setting in its [[Material|material]]. But the word opaque usually means totally non-transparent.&lt;br /&gt;
&lt;br /&gt;
== Overdraw ==&lt;br /&gt;
&lt;br /&gt;
Overdraw means a screen pixel is drawn more than once. Overdraw eats into the [[FillRate|fill rate]] (how many pixels the hardware can render per second), slowing down the [[FrameRate|frame rate]]. Re-rendering a pixel more than once is usually a waste of processing time. &lt;br /&gt;
&lt;br /&gt;
Overdraw should be avoided whenever possible, however it is required if triangles are partially [[Transparency map|transparent]], because the surfaces must be mixed together to create the final screen pixel.&lt;br /&gt;
&lt;br /&gt;
Overdraw is usually caused by multiple triangles being drawn over each other, for example with particle effects or tree foliage. &lt;br /&gt;
&lt;br /&gt;
= '''P''' =&lt;br /&gt;
&lt;br /&gt;
== Parent ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Any [[Node|node]] in a [[Hierarchy|hierarchy]] that has another node attached to it.&lt;br /&gt;
&lt;br /&gt;
== PBR ==&lt;br /&gt;
&lt;br /&gt;
See [[PBR]]&lt;br /&gt;
&lt;br /&gt;
== Phong ==&lt;br /&gt;
&lt;br /&gt;
See [[Phong]]&lt;br /&gt;
&lt;br /&gt;
== Pitch ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from +90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. Straight forward is 0 degrees. See also [[Roll|roll]], [[Yaw|yaw]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
&lt;br /&gt;
== Pixel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Short for picture element. There are two common meanings: the pixels that [[Texture_types|textures]], or [[Bitmap|bitmaps]] are made of, and the pixels that are [[Render|rendered]] onto your computer screen by the [[Engine|engine]]. &lt;br /&gt;
&lt;br /&gt;
You tell each pixel where to go and what color to be by giving it two sets of values. For position, it needs [[Coordinate|coordinates]] (coords), called X and Y, usually written as (X,Y). The X coord is horizontal, the Y coord is vertical, and the numbers usually start at (0,0) in the upper-left corner. For the pixel's color it needs the three [[RGB]] color values, written as (R,G,B). These go from 0 (no color) to 1 (full-on color). For instance, green is (0,1,0). Most paint programs use 0 to 255, but 0 to 1 is easier for [[RT3D]] programmers to use. See also [[Texel|texel]].&lt;br /&gt;
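&lt;br /&gt;
Converting between the paint-program 0-255 scale and the programmer's 0-1 scale is simple division; a Python sketch:&lt;br /&gt;

```python
def to_unit_rgb(r, g, b):
    """Convert 0-255 channel values (paint-program scale) to 0-1."""
    return (r / 255.0, g / 255.0, b / 255.0)

def to_byte_rgb(r, g, b):
    """Convert 0-1 channel values back to 0-255 integers."""
    return (round(r * 255), round(g * 255), round(b * 255))
```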
&lt;br /&gt;
== Polygon ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A series of [[Vertex|vertices]] that define a plane in 3D space. Most [[RT3D]] [[Engine|engines]] use polygons to make the surfaces of their objects. A polygon can be made up of 1 or more [[Triangle|triangles]], like a [[Quad|quad]] is made of two triangles, a pentagon is made of three triangles, etc. Some engines support multiple polygon types, but triangles are the most common. Some people use the term polygon to specify a quad, others use it when talking about triangles. Polygons are also called polys, or sometimes [[N-gon|n-gons]]. See also [[Face|faces]], [[Fan|fans]], [[Quad|quads]], [[Strip|strips]].&lt;br /&gt;
&lt;br /&gt;
== Polygons ==&lt;br /&gt;
&lt;br /&gt;
See [[Polygons]]&lt;br /&gt;
&lt;br /&gt;
== Power of 2 ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The numbers 2, 4, 8, 16, 32, 64, 128, 256, 512, etc. Usually this is used to describe the [[Texture_types|texture]] sizes that an [[Engine|engine]] requires in order to make good use of video [[Memory|memory]]. An example texture size would be 32x64. Textures that are not in powers of 2, like 33x24, would probably cause the engine to run slower or maybe crash.&lt;br /&gt;
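&lt;br /&gt;
A quick way to validate texture sizes before handing them to an engine, sketched in Python:&lt;br /&gt;

```python
def is_power_of_two(n):
    """A positive integer is a power of two when it has exactly one 1 bit,
    so ANDing it with n - 1 clears that bit and leaves zero."""
    return n > 0 and (n & (n - 1)) == 0

def valid_texture_size(width, height):
    # Width and height may differ (e.g. 32x64), but each must be a power of 2.
    return is_power_of_two(width) and is_power_of_two(height)
```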
&lt;br /&gt;
= '''Q''' =&lt;br /&gt;
&lt;br /&gt;
== Quad ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A four-sided polygon. Some [[RT3D]] engines use quads instead of triangular polygons because it saves vertex [[Transforms|transforms]]: a square stored as a quad has only four vertices, whereas a square built from two triangles means transforming six vertices instead of four. The artist should keep the vertices of the quad [[Coplanar|coplanar]], or rendering weirdness can happen, because the quad is divided into two triangles at render time. The internal edge between the two tris is determined arbitrarily, so it could look like a ridge or a valley.&lt;br /&gt;
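&lt;br /&gt;
Coplanarity of a quad's four vertices can be checked with a scalar triple product; a Python sketch (names and tolerance are illustrative):&lt;br /&gt;

```python
def is_coplanar(p0, p1, p2, p3, tolerance=1e-9):
    """Four 3D points are coplanar when the scalar triple product of the
    three edge vectors leaving p0 is (near) zero -- i.e. the box they
    span has no volume."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    u, v, w = sub(p1, p0), sub(p2, p0), sub(p3, p0)
    cross = (v[1] * w[2] - v[2] * w[1],   # v x w
             v[2] * w[0] - v[0] * w[2],
             v[0] * w[1] - v[1] * w[0])
    triple = u[0] * cross[0] + u[1] * cross[1] + u[2] * cross[2]  # u . (v x w)
    return abs(triple) <= tolerance
```

A quad that fails this test will shade as a visible ridge or valley once it is split into triangles.&lt;br /&gt;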
&lt;br /&gt;
= '''R''' =&lt;br /&gt;
&lt;br /&gt;
== RAM ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Random Access Memory is the place to store game assets that are used most often, because RAM can be read very quickly. See also [[Memory|memory]].&lt;br /&gt;
&lt;br /&gt;
== Ray Cast ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Projecting an imaginary line from each screen pixel into the [[RT3D]] scene to find the first surface it hits; the properties of that surface, and the lights that reach it, determine the color of the pixel.&lt;br /&gt;
&lt;br /&gt;
== Raytrace ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A way of rendering a 3D image which follows the path of every ray of light. Non-interactive, it works best for rendering images which have many reflective surfaces, like steel balls.&lt;br /&gt;
&lt;br /&gt;
== Real-time ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
When events happen at a rate consistent with events in the outside world. Specifically for [[RT3D]] artists, if the [[Engine|engine]] [[Render|renders]] a scene at a slow rate, the illusion of movement can be lost. To retain an interactive, immersive experience, the engine must react to your input and present you with new updated images immediately. If you are getting smooth feedback, it is real-time. [[FPS|Frames per second]] is the measurement of how fast the frames are being rendered. &lt;br /&gt;
&lt;br /&gt;
The engine must perform many complex operations, and the effect of that effort is the amount of time needed to draw each frame. By necessity, we must take shortcuts in the image quality to speed up the rendering. However, no single image remains visible for very long. If you carefully choose speedup techniques so that the errors are small, then they will not be noticed during the moment when the picture is visible.&lt;br /&gt;
&lt;br /&gt;
== Rendering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The transformation of 3D data by the [[Engine|engine]] into 2D frames for display on your computer screen (or TV). For [[RT3D]] artists, this specifically refers to [[RealTime|real-time]] rendering, where the individual frames must be drawn as fast as possible.&lt;br /&gt;
&lt;br /&gt;
== RGB Colorspace ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Red, Green, and Blue are the primary colors used to display [[RT3D]] on your computer screen. All the colors you see are combinations of those three. RGB space is the place where any transformations are made to colors, whether reducing the [[BitDepth|bit depth]], [[TextureBlending|texture blending]], [[Render|rendering]], etc. &lt;br /&gt;
&lt;br /&gt;
In RT3D, we use numerical RGB values to describe the colors in each [[Texture_types|texture]]. These numbers can be a drag to use, but they give you more control of the medium, especially when you want to tweak something like texture blending. &lt;br /&gt;
&lt;br /&gt;
In texture programs like Photoshop, the RGB values for texture colors are on an [[BitDepth|8-bit]] scale, usually 0 to 255. But RT3D programmers prefer a simpler scale, representing all colors with the values 0 to 1. For instance, red is (1,0,0), white is (1,1,1), black is (0,0,0), brown is (.4,.21,0), etc. The decimal places can go out as far as the programmer decides they need to, but usually two decimal places (.00) is precise enough. The fewer decimals, the smaller the file sizes will be, which conserves precious [[Memory|memory]]. This was the root of the Y2K problem, but we don't need to get into that... heh heh. &lt;br /&gt;
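&lt;br /&gt;
As a minimal sketch, the conversion from the 8-bit 0-255 scale to the normalized 0-1 scale described above can be written like this (the helper name is our own, not from any particular engine):&lt;br /&gt;

```python
# Convert 8-bit RGB channel values (0-255) to the normalized
# 0.0-1.0 scale that RT3D programmers prefer, rounded to two
# decimal places as described above.
def normalize_rgb(r, g, b, places=2):
    return tuple(round(c / 255.0, places) for c in (r, g, b))

brown = normalize_rgb(102, 54, 0)     # roughly the (.4, .21, 0) brown above
white = normalize_rgb(255, 255, 255)  # (1.0, 1.0, 1.0)
```

Note that rounding to fewer decimal places trades precision for the smaller storage the entry mentions.&lt;br /&gt;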
&lt;br /&gt;
See also [[AdditiveColorModel|additive color model]].&lt;br /&gt;
&lt;br /&gt;
== Roll ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you tilt your head to read the spine of a book, this is roll in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from -180 to 180 degrees, starting out rolled upside-down all the way to the left, turning clockwise and ending up rolled upside-down all the way to the right. Straight forward with no roll is 0 degrees. See also [[Pitch|pitch]], [[Yaw|yaw]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
&lt;br /&gt;
== Rotational Axis ==&lt;br /&gt;
&lt;br /&gt;
See [[Rotational Axis]]&lt;br /&gt;
&lt;br /&gt;
== RT3D ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Real-Time 3-Dimensional graphics. Artwork that is rendered in real-time on a computer, usually with interactive input from the [[Viewer|viewer]].&lt;br /&gt;
&lt;br /&gt;
= '''S''' =&lt;br /&gt;
&lt;br /&gt;
== Shading ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The process of assigning values to the surfaces of objects, which control the way the surface interacts with light in the scene to create the object's color, specularity (highlights), reflective qualities, transparency, and refraction. Shading mimics the material that an object is supposed to be made of-- wood, plastic, metal, etc. The art of shading is understanding how the range of parameters will interact to create realistic or else imaginative effects. Also sometimes called surfacing. &lt;br /&gt;
&lt;br /&gt;
See also [[GouraudShading|Gouraud shading]], [[Phong|Phong shading]].&lt;br /&gt;
&lt;br /&gt;
== Smoothing Groups ==&lt;br /&gt;
&lt;br /&gt;
See [[Smoothing Groups]]&lt;br /&gt;
&lt;br /&gt;
== Sorting ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Keeps track of which onscreen elements can be viewed and which are hidden behind other objects. &lt;br /&gt;
&lt;br /&gt;
See also [[Z-Buffer|z-buffering]].&lt;br /&gt;
&lt;br /&gt;
== Spline ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A curved line, defined by mathematical functions. &lt;br /&gt;
&lt;br /&gt;
See also [[B-Spline|b-spline]], [[BezierSpline|bezier spline]].&lt;br /&gt;
&lt;br /&gt;
== Sprite ==&lt;br /&gt;
&lt;br /&gt;
See [[Sprite]]&lt;br /&gt;
&lt;br /&gt;
== Storyboard ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A visualization of the animation that breaks it down into a sequence of sketches that illustrate the key movements.&lt;br /&gt;
&lt;br /&gt;
== Strip ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
One of the ways that [[Triangle|triangles]] can be created to reuse [[Vertex|vertex]] [[Transforms|transforms]], and thus save [[Memory|memory]] and also [[Render|render]] time. Once you have drawn one triangle, the next triangle only needs to load the [[Coordinate|coordinate]] of one additional vertex in order to draw itself, because it re-uses the vertex transforms that were already performed on its neighbor triangle. But your [[Engine|engine]] must specifically support strips for it to work. Sometimes also called tri-strips. See also [[Fan|fans]].&lt;br /&gt;
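&lt;br /&gt;
The vertex savings can be sketched with simple arithmetic (hypothetical helper names, assuming the engine supports strips as noted above):&lt;br /&gt;

```python
# Vertex transforms needed to draw N triangles, with and without
# strips. A strip shares two vertices with the previous triangle,
# so after the first triangle each new one costs a single vertex.
def separate_tris(n):
    return 3 * n   # every triangle loads 3 vertices on its own

def strip_tris(n):
    return n + 2   # first triangle costs 3, each one after costs 1

# 100 triangles: 300 transforms as separate tris, 102 as one strip.
```

The longer the strip, the closer the cost gets to one vertex transform per triangle.&lt;br /&gt;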
&lt;br /&gt;
== Substance Designer ==&lt;br /&gt;
&lt;br /&gt;
See [[Substance Designer]]&lt;br /&gt;
&lt;br /&gt;
= '''T''' =&lt;br /&gt;
&lt;br /&gt;
== Texel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Each [[Pixel|pixel]] of a [[Texture_types|texture]] during the time the texture is being processed by the [[RT3D]] [[Engine|engine]]. After the engine performs calculations to project the texture onto [[Polygon|polygons]], the texture pixels are transformed into texels. Then the engine [[Render|renders]] the scene, and at that point it transforms those texels into screen pixels.&lt;br /&gt;
&lt;br /&gt;
The distinction between texels and pixels is important in defining how the engine transforms textures. First they're texture pixels, then they're texels, then finally they're screen pixels.&lt;br /&gt;
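&lt;br /&gt;
A minimal sketch of the first step of that projection, mapping a UV coordinate to a texel address with nearest-neighbor lookup (the function name is our own; real engines also filter, see [[Texture Filtering]] below):&lt;br /&gt;

```python
# Nearest-neighbor lookup: map a UV coordinate into a texel of a
# width x height texture. The modulo wraps coordinates that tile
# past 1.0 back around to the start of the texture.
def uv_to_texel(u, v, width, height):
    x = int(u * width) % width
    y = int(v * height) % height
    return x, y
```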
&lt;br /&gt;
== Texture Atlas ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Atlas]]&lt;br /&gt;
&lt;br /&gt;
== Texture Compression ==&lt;br /&gt;
&lt;br /&gt;
A technique used by 3D accelerator cards to fit larger [[Texture_types|textures]] into the same amount of texture [[Memory|memory]] and graphics bus bandwidth. With texture compression, you can often use textures as large as 2048x2048 in your real-time scenes. Two examples of texture compression methods are S3TC and VQTC.&lt;br /&gt;
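&lt;br /&gt;
To see why the savings matter, here is the arithmetic for a 2048x2048 texture, comparing uncompressed 32-bit RGBA against S3TC's DXT1 mode, which packs each 4x4 block of texels into 8 bytes (helper names are our own):&lt;br /&gt;

```python
# Memory footprint of a texture, uncompressed versus S3TC/DXT1.
def rgba8_bytes(w, h):
    return w * h * 4              # 4 bytes (32 bits) per texel

def dxt1_bytes(w, h):
    return (w // 4) * (h // 4) * 8  # 8 bytes per 4x4 texel block

uncompressed = rgba8_bytes(2048, 2048)  # 16 MiB
compressed = dxt1_bytes(2048, 2048)     # 2 MiB, an 8:1 saving
```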
&lt;br /&gt;
== Texture Coordinates ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Coordinates]]&lt;br /&gt;
&lt;br /&gt;
== Texture Filtering ==&lt;br /&gt;
&lt;br /&gt;
This term is used whenever a texture is altered by the [[Engine|engine]]: to eliminate jagged edges and shimmering [[Pixel|pixels]] whenever [[Texel|texels]] are larger or smaller than screen pixels (see [[Aliasing|aliasing]]), or to perform [[TextureBlending|texture blending]] to blend two textures together.&lt;br /&gt;
&lt;br /&gt;
See also [[AnisotropicFiltering|anisotropic filtering]], [[MipMap|MIP mapping]].&lt;br /&gt;
&lt;br /&gt;
== Texture Types ==&lt;br /&gt;
&lt;br /&gt;
See [[Texture Types]]&lt;br /&gt;
&lt;br /&gt;
== Texture Blending ==&lt;br /&gt;
&lt;br /&gt;
See [[TextureBlending]]&lt;br /&gt;
&lt;br /&gt;
== Tiling ==&lt;br /&gt;
&lt;br /&gt;
See [[Tiling]]&lt;br /&gt;
&lt;br /&gt;
== Transforms ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
What the [[Engine|engine]] does to objects and [[Vertex|vertices]] in order to place them in the [[RT3D]] scene. The transforms are position, rotation, and scale.&lt;br /&gt;
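&lt;br /&gt;
The three transforms can be sketched on a single vertex, applied in the usual scale, then rotate, then position order (a simplified sketch with rotation about one axis only; engines do this with 4x4 matrices on every vertex):&lt;br /&gt;

```python
import math

# Apply scale, rotation (about the Z axis), and position to one
# vertex given as an (x, y, z) tuple.
def transform(vertex, scale, angle_deg, position):
    x, y, z = (c * scale for c in vertex)
    a = math.radians(angle_deg)
    x, y = x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a)
    px, py, pz = position
    return x + px, y + py, z + pz
```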
&lt;br /&gt;
== Transparency ==&lt;br /&gt;
&lt;br /&gt;
See [[Transparency]]&lt;br /&gt;
&lt;br /&gt;
== Triangle ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A triangular [[Polygon|polygon]]. Often shortened to tri or tris.&lt;br /&gt;
&lt;br /&gt;
== Trilinear Filtering ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A method of [[MipMap|MIP mapping]]. Since the [[Texel|texels]] are almost always larger or smaller than the screen [[Pixel|pixels]], it finds two MIP-maps whose texels are closest in size to the screen pixels: one with larger texels, and the other with smaller texels. For each of the two MIP-maps, it then [[Interpolation|interpolates]] the four texels that are the nearest to each screen pixel. In the final step it averages between the two MIP results to render the final screen pixel. &lt;br /&gt;
&lt;br /&gt;
Trilinear mip-mapping requires more than twice the computational cost of [[BilinearFiltering|bilinear filtering]], but the textures are filtered very nicely, with a clean result.&lt;br /&gt;
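&lt;br /&gt;
The final averaging step can be sketched like this, assuming the two bilinear results are already in hand (a one-pixel sketch with hypothetical names; hardware repeats this for every screen pixel):&lt;br /&gt;

```python
# Blend the bilinear sample from the finer MIP level (smaller
# texels) with the one from the coarser level (larger texels),
# weighted by the fractional level of detail between them.
def lerp(a, b, t):
    return a + (b - a) * t

def trilinear(sample_fine, sample_coarse, lod):
    t = lod - int(lod)   # fractional distance between the two MIPs
    return lerp(sample_fine, sample_coarse, t)
```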
&lt;br /&gt;
= '''U''' =&lt;br /&gt;
&lt;br /&gt;
== UV Coordinates ==&lt;br /&gt;
&lt;br /&gt;
Texture coordinates, also called UVs, are pairs of numbers stored in the vertices of a mesh. These are often used to stretch a 2D texture onto a 3D mesh. &lt;br /&gt;
&lt;br /&gt;
See [[TextureCoordinates]] for more information.&lt;br /&gt;
&lt;br /&gt;
= '''V''' =&lt;br /&gt;
&lt;br /&gt;
== Value correction ==&lt;br /&gt;
&lt;br /&gt;
See [[Value Correction]]&lt;br /&gt;
&lt;br /&gt;
== Vertex ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A point in 3D space that doesn't really do anything unless it is connected to a [[Polygon|polygon]] or a line. For more than one vertex, you call 'em vertices.&lt;br /&gt;
&lt;br /&gt;
== Viewer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
You, the user of the [[RT3D]] application, looking at the 3D scene through your computer screen. The viewer looks into the RT3D world from a vantage point, which acts something like a camera, to frame your view. Most of the engine's calculations are tailored to make the world look great from that particular view. See also [[Frustrum|frustum]].&lt;br /&gt;
&lt;br /&gt;
Viewer may also refer to a version of the RT3D engine that is used to preview artwork while it is being created.&lt;br /&gt;
&lt;br /&gt;
== Voxel ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
Shorthand for volume [[Pixel|pixel]]. Voxels have traditionally been used to create 3D renderings of complex volumes, like meteorological cloud formations or scanned human tissues.&lt;br /&gt;
&lt;br /&gt;
In these visualizations, the voxels are used similarly to the way grains of sand are used to make a sand castle-- the volume is dense with thousands of tiny voxels, and each is in the shape of a little cube or tetrahedron. Each voxel is assigned an [[Opacity|opacity]] percentage, and often a color, which makes it easier to examine the underlying structure of the volume. This kind of voxel is usually called a &amp;quot;true&amp;quot; 3D voxel. These voxels require a lot of [[Memory|memory]] and computational time, so they are usually pre-rendered, or else displayed at a relatively slow [[FrameRate|frame rate]].&lt;br /&gt;
&lt;br /&gt;
In games, voxels have been optimized to run in real-time, most often by using [[Billboard|billboards]] instead of cubes, and by only displaying the voxels on the surfaces of objects. This optimization is called a [[2.5D]] voxel. Typically these voxels have no transparency, and are made large enough to always overlap one another, which usually gives a slightly rough look to the surface. [http://www.novalogic.com/ NovaLogic's] game Comanche Maximum Overkill was the first to use this technique, creating landscapes that were remarkably detailed at the time. In their latest incarnation of the franchise, they've been able to greatly increase the number of voxels, thereby reducing the jaggedness of the landscape surface.&lt;br /&gt;
&lt;br /&gt;
Using voxels, whether 2.5D or 3D, an object can be displayed with a great amount of detail, independent of the complexity of the object, and dependent instead on the number of voxels used to represent it.&lt;br /&gt;
&lt;br /&gt;
= '''W''' =&lt;br /&gt;
&lt;br /&gt;
== Wavelet ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
A mathematical formula often used for image and video compression. [http://www.radgametools.com/ Bink] uses wavelets, along with other compression techniques, as RAD Game Tools' update to their popular Smacker video codec.&lt;br /&gt;
&lt;br /&gt;
= '''Y''' =&lt;br /&gt;
&lt;br /&gt;
== Yaw ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. One of the three rotational axes commonly used to describe the rotation of an object. The term comes from aviation. Most [[RT3D]] [[Engine|engines]] count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. Straight forward is both 0 and 360 degrees. See also [[Pitch|pitch]], [[Roll|roll]], [[RotationalAxis|rotational axis]].&lt;br /&gt;
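&lt;br /&gt;
Keeping a yaw angle inside the 0-360 range described above is a one-line modulo, sketched here with a hypothetical helper name:&lt;br /&gt;

```python
# Wrap a yaw angle into the 0-360 degree range, so turning past
# 360 comes back around to straight forward, and negative turns
# wrap to the equivalent clockwise heading.
def wrap_yaw(degrees):
    return degrees % 360.0

# wrap_yaw(370.0) gives 10.0; wrap_yaw(-90.0) gives 270.0
```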
&lt;br /&gt;
= '''Z''' =&lt;br /&gt;
&lt;br /&gt;
== Z-Buffer ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
An algorithm used in 3-D graphics to determine which objects, or parts of objects, are visible and which are hidden behind other objects. With Z-buffering, the graphics processor stores the Z-axis value of each pixel in a special area of memory called the Z-buffer. Different objects can have the same x- and y-coordinate values, but with different z-coordinate values. The object with the lowest z-coordinate value is in front of the other objects, and therefore that's the one that's displayed. &lt;br /&gt;
&lt;br /&gt;
An alternate algorithm for hiding objects behind other objects is called Z-sorting. The Z-sorting algorithm simply displays all objects serially, starting with those objects furthest back (with the largest Z-axis values). The Z-sorting algorithm does not require a Z-buffer, but it is slow and does not render intersecting objects correctly.&lt;br /&gt;
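&lt;br /&gt;
The per-pixel decision a Z-buffer makes can be sketched in a few lines: of all the fragments that land on one pixel, keep the one with the smallest z value, nearest the viewer (hypothetical helper; real hardware does this incrementally, fragment by fragment):&lt;br /&gt;

```python
# Resolve one screen pixel: fragments is a list of (z, color)
# pairs covering that pixel; the fragment with the lowest z wins.
def resolve_pixel(fragments):
    return min(fragments, key=lambda f: f[0])[1]

# resolve_pixel([(5.0, 'red'), (2.0, 'blue')]) gives 'blue'
```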
&lt;br /&gt;
== Z-Fighting ==&lt;br /&gt;
&lt;br /&gt;
See [[Z-Fighting]]&lt;br /&gt;
&lt;br /&gt;
== Zenith ==&lt;br /&gt;
&lt;br /&gt;
{{:[[OutOfDate]]}}&lt;br /&gt;
&lt;br /&gt;
The point directly above you in space. The opposite of [[Nadir|nadir]].&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/RotationalAxis</id>
		<title>RotationalAxis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/RotationalAxis"/>
				<updated>2014-11-26T08:58:01Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page RotationalAxis to Rotational Axis&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Rotational Axis]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Rotational_Axis</id>
		<title>Rotational Axis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Rotational_Axis"/>
				<updated>2014-11-26T08:58:00Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: Cman2k moved page RotationalAxis to Rotational Axis&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These axes describe rotation in space within an RT3D engine. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Pitch, Yaw, and Roll =&lt;br /&gt;
Pitch is red, yaw is green, and roll is blue.&lt;br /&gt;
&lt;br /&gt;
[[image:pitchyawroll.gif|thumb|Rotational Axis &amp;lt;BR&amp;gt; Image by [http://wiki.polycount.com/EricChadwick Eric Chadwick] ]]&lt;br /&gt;
&lt;br /&gt;
'''Pitch'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. Straight forward is 0 degrees. Also called declination.&lt;br /&gt;
&lt;br /&gt;
'''Yaw'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. Straight forward is both 0 and 360 degrees. Also called azimuth.&lt;br /&gt;
&lt;br /&gt;
'''Roll'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from -180 to 180 degrees, starting out twisted upside-down all the way to the left, turning clockwise and ending twisted upside-down all the way to the right. If you tilt your head to read the spine of a book, this is roll in action. Straight forward with no roll is 0 degrees.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Rotational_Axis</id>
		<title>Rotational Axis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Rotational_Axis"/>
				<updated>2014-11-26T08:56:10Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: image and formatting fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These axes describe rotation in space within an RT3D engine. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Pitch, Yaw, and Roll =&lt;br /&gt;
Pitch is red, yaw is green, and roll is blue.&lt;br /&gt;
&lt;br /&gt;
[[image:pitchyawroll.gif|thumb|Rotational Axis &amp;lt;BR&amp;gt; Image by [http://wiki.polycount.com/EricChadwick Eric Chadwick] ]]&lt;br /&gt;
&lt;br /&gt;
'''Pitch'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. Straight forward is 0 degrees. Also called declination.&lt;br /&gt;
&lt;br /&gt;
'''Yaw'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. Straight forward is both 0 and 360 degrees. Also called azimuth.&lt;br /&gt;
&lt;br /&gt;
'''Roll'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from -180 to 180 degrees, starting out twisted upside-down all the way to the left, turning clockwise and ending twisted upside-down all the way to the right. If you tilt your head to read the spine of a book, this is roll in action. Straight forward with no roll is 0 degrees.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Rotational_Axis</id>
		<title>Rotational Axis</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Rotational_Axis"/>
				<updated>2014-11-26T08:54:44Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These axes describe rotation in space within an RT3D engine. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Pitch, Yaw, and Roll =&lt;br /&gt;
Pitch is red, yaw is green, and roll is blue.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Pitch'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 90 to -90 degrees, starting out pointed straight up, turning downwards and ending pointed straight down. If you nod your head &amp;quot;yes,&amp;quot; this is pitch in action. Straight forward is 0 degrees. Also called declination.&lt;br /&gt;
&lt;br /&gt;
'''Yaw'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from 0 to 360 degrees, starting out pointed straight forward, turning clockwise and ending pointed straight forwards again. If you shake your head &amp;quot;no,&amp;quot; this is yaw in action. Straight forward is both 0 and 360 degrees. Also called azimuth.&lt;br /&gt;
&lt;br /&gt;
'''Roll'''&lt;br /&gt;
&lt;br /&gt;
Most RT3D engines count from -180 to 180 degrees, starting out twisted upside-down all the way to the left, turning clockwise and ending twisted upside-down all the way to the right. If you tilt your head to read the spine of a book, this is roll in action. Straight forward with no roll is 0 degrees.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Phong</id>
		<title>Phong</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Phong"/>
				<updated>2014-11-26T08:54:11Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Phong Shading =&lt;br /&gt;
A method of [[Shading|shading]] that applies the Phong lighting model not to every [[Polygon|polygon]], but to every [[Pixel|pixel]] of every polygon. Even SGI's Reality Engine can't do Phong shading, so unless you're ready to spend a few million dollars on your next game machine (and write all the games yourself), don't expect to see Phong shading anytime soon.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Phong Lighting =&lt;br /&gt;
A method of lighting a 3D world, the Phong lighting model applies three different types of lighting to the [[Vertex|vertex]] of every polygon. Phong lighting works by performing operations based on the normal of the polygon, the &amp;quot;normal&amp;quot; being an imaginary line drawn orthogonal to (straight up from) the face of the polygon. &lt;br /&gt;
The first of the three lighting types is ambient light - light which is just there because god (in this case the programmer) said it was. It affects every polygon equally. &lt;br /&gt;
&lt;br /&gt;
Diffuse lighting is the second type. It assumes that there is no reflection from the objects it is lighting (clay is an example of a nearly perfect diffuse surface), but it does take into consideration the angle that the light hits the surface. If it hits it fully, it will be 100% illuminated, if the object is turned slightly, it will be less illuminated, etc. &lt;br /&gt;
&lt;br /&gt;
The third aspect is called specular highlighting, which takes into account the angle between the light-source and the &amp;quot;eye&amp;quot; of the viewer, so that if the light bounces off a particular spot on the object straight into the &amp;quot;camera&amp;quot; it will be illuminated 100%, and less so if it misses the camera. &lt;br /&gt;
&lt;br /&gt;
The Phong lighting model is fairly realistic for games, but it fails to account for the fact that in real life, reflections off of steel or other metals change color depending on the angle they're viewed from, while specular highlighting always gives a reflection of the same color. Phong lighting works only on the vertices of a polygon (using Gouraud shading to color the rest of the polygon), so if a highlight happens to fall in the middle of a polygon, it will be missed. This requires programmers to &amp;quot;tessellate&amp;quot; or break up large polygons into many small ones to be sure of &amp;quot;catching&amp;quot; highlights at vertices. However, Phong lighting is very fast and doesn't require much processor power.&lt;br /&gt;
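&lt;br /&gt;
The three terms can be sketched for a single vertex (a minimal grayscale sketch with our own helper names; vectors are assumed normalized):&lt;br /&gt;

```python
# Per-vertex Phong lighting: ambient plus diffuse plus specular,
# each driven by the vertex normal as described above.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, to_light, to_eye, ambient=0.1, shininess=16):
    d = dot(normal, to_light)
    diffuse = max(0.0, d)  # facing the light fully gives 100%
    # reflect the light direction about the normal for the highlight
    reflected = tuple(2.0 * d * n - l for n, l in zip(normal, to_light))
    specular = max(0.0, dot(reflected, to_eye)) ** shininess
    return min(1.0, ambient + diffuse + specular)
```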
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/FPS</id>
		<title>FPS</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/FPS"/>
				<updated>2014-11-26T08:53:30Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Frames Per Second =&lt;br /&gt;
How many animation frames the computer displays each second. Also called frame rate. The NTSC television standard used in the United States displays its frames at around 30 fps, whereas the PAL standard used in Europe displays at 25 fps, and a motion picture displays at 24 fps. See also [[Rendering|rendering]].&lt;br /&gt;
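&lt;br /&gt;
The frame rate also sets the time budget an engine has per frame, which is the figure artists and programmers usually work against (hypothetical helper name):&lt;br /&gt;

```python
# Milliseconds available to draw each frame at a given frame rate.
def frame_budget_ms(fps):
    return 1000.0 / fps

# frame_budget_ms(25) gives 40.0 (PAL); at 30 fps the budget is
# roughly 33 ms.
```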
&lt;br /&gt;
&lt;br /&gt;
= First Person Shooter =&lt;br /&gt;
A standard abbreviation for the videogame genre &amp;quot;First Person Shooter&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/ChannelPacking</id>
		<title>ChannelPacking</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/ChannelPacking"/>
				<updated>2014-11-26T08:52:26Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Channel packing is a game art term for a bitmap that stores a different grayscale image in each of its channels... Red, Green, and Blue. Alpha can also be used as a fourth channel. This saves [[Memory]], but increases [[Shaders|shader]] complexity.&lt;br /&gt;
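&lt;br /&gt;
The idea can be sketched in a few lines: three grayscale maps go in, one RGB bitmap comes out, and the shader later reads a single channel back (hypothetical helpers; real packing happens in an image editor or at texture export time):&lt;br /&gt;

```python
# Pack three grayscale maps (lists of 0-255 values) into one RGB
# bitmap, one map per channel, then pull a single channel back out.
def pack(gray_r, gray_g, gray_b):
    return [tuple(px) for px in zip(gray_r, gray_g, gray_b)]

def unpack(packed, channel):
    return [px[channel] for px in packed]
```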
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Compression Artifacts =&lt;br /&gt;
If you save a channel-packed texture using DXT compression, it will introduce blocky artifacts to your channels. For details, see [[Normal map#Normal_Map_Compression]].&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
[[image:SneferTileExplain.jpg|thumb|600px|left|Two channel-packed textures, which store a total of six unique textures, see [http://www.polycount.com/forum/showthread.php?t=89682 An exercise in modular textures - Scifi lab UDK] on the Polycount Forum. Image by [http://www.torfrick.com/ Tor 'Snefer' Frick].]]&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:DistingTileExplain.jpg|thumb|400px|left|A channel-packed texture used to texture an entire scene, see [http://www.polycount.com/forum/showthread.php?p=1588220#post1588220 [UDK] Oil Rig Observation Outpost] on the Polycount Forum. Image by [http://artbywiktor.com/ Wiktor 'Disting' Öhman].]]&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:Marcan-MassEffect3-swizzle.jpg|thumb|600px|left|A channel-packed texture used in Mass Effect 3, see [http://www.polycount.com/forum/showthread.php?p=1881634#post1881634 Mass Effect 3 art - Marc-Antoine Hamelin] on the Polycount Forum. Image by [http://www.marc-antoine.ca/ Marc-Antoine 'Marcan' Hamelin].]]&amp;lt;br clear=&amp;quot;all&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* For more examples and tutorials see [[Texture atlas]] and [[MultiTexture]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:TextureTechnique]] [[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T08:51:08Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: fixing images and formatting&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[image:additive.jpg|thumb|Additive Color Model]]&lt;br /&gt;
= Additive Color Model =&lt;br /&gt;
In the additive color model, red, green, and blue (RGB) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light.&lt;br /&gt;
You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space.&lt;br /&gt;
&lt;br /&gt;
See also [[TextureBlending|Texture Blending]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[image:subtractive.gif|thumb|Subtractive Color Model]]&lt;br /&gt;
= Subtractive Color Model =&lt;br /&gt;
In the subtractive color model, magenta, yellow, cyan, and black are the primary colors. They are also called CMYK, with K standing for black because the letter B is already taken by RGB. Mixing cyan, yellow, and magenta together creates a dark muddy brown, which is why black is added as the fourth primary color, to get clean blacks. &lt;br /&gt;
You subtract to get white. To get a lighter color use less of each color, or to get a darker color use more of each color. Subtractive is the color model used for working with pigments, as in painting and color printing. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the subtractive color model governs how colors are blended together, like with [[transparency]] and [[TextureBlending|texture blending]]. Since CMYK are the primary colors, they help describe what subtractive means, but you can use any colors. In fact, since RT3D engines display on a computer screen, you are really in the end just using the additive color model. The subtractive color model can only be simulated in RT3D, to get certain effects. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Color_Models</id>
		<title>Color Models</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Color_Models"/>
				<updated>2014-11-26T08:47:41Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Additive Color Model =&lt;br /&gt;
In the additive color model, red, green, and blue (RGB) are the primary colors, and mixing them together creates white. This is the way light blends together-- shine a red, a green, and a blue spotlight in the same place, and it will make white light.&lt;br /&gt;
You add to get white. To get a lighter color use more of each color, or to get a darker color use less of each color. Additive is the color model used to display graphics on your computer screen, where all the colors are just combinations of the colors red, green and blue. Also called RGB space.&lt;br /&gt;
&lt;br /&gt;
See also [[TextureBlending|Texture Blending]].&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Subtractive Color Model ==&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
In the subtractive color model, magenta, yellow, cyan, and black are the primary colors. They are also called CMYK, with K standing for black because the letter B is already taken by RGB. Mixing cyan, yellow, and magenta together creates a dark muddy brown, which is why black is added as the fourth primary color, to get clean blacks. &lt;br /&gt;
You subtract to get white. To get a lighter color use less of each color, or to get a darker color use more of each color. Subtractive is the color model used for working with pigments, as in painting and color printing. &lt;br /&gt;
&lt;br /&gt;
In RT3D, the subtractive color model governs how colors are blended together, like with [[transparency]] and [[TextureBlending|texture blending]]. Since CMYK are the primary colors, they help describe what subtractive means, but you can use any colors. In fact, since RT3D engines display on a computer screen, you are really in the end just using the additive color model. The subtractive color model can only be simulated in RT3D, to get certain effects. &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Glossary]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/PaintingAcrossSeams</id>
		<title>PaintingAcrossSeams</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/PaintingAcrossSeams"/>
				<updated>2014-11-25T07:02:44Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When a 2D texture is applied to a 3D model, the [[TextureCoordinates]] often need to be split into multiple UV islands or chunks, to minimize distortion. These splits cause seams in the texture, which need to be removed by an artist. &lt;br /&gt;
&lt;br /&gt;
Here are some common workflows for solving texture seams. It is often easier if the seams can be painted non-destructively, on a separate layer with transparency.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= 2D Paint =&lt;br /&gt;
# Paint &amp;amp; save in your 2D painting app (Photoshop, GIMP, etc.)&lt;br /&gt;
# Reload the texture in your 3D app (3ds Max, Maya, etc.) to examine&lt;br /&gt;
# Repeat until seams are solved&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Integrated 3D Paint =&lt;br /&gt;
Use a 3D paint tool or plugin inside your main 3D software to paint directly across the seams.&lt;br /&gt;
&lt;br /&gt;
=== 3ds Max Painting Tools ===&lt;br /&gt;
* [http://docs.autodesk.com/3DSMAX/13/ENU/Autodesk%203ds%20Max%202011%20Help/files/WS1a9193826455f5ff-6672057a11ce3b1f807-74cb.htm Viewport Canvas] for Max 2010 and later. Layers are only supported in Max 2011 and later. See the tutorial [http://www.shawnolson.net/a/1810/creating_a_tileable_texture_from_a_photo.html Creating a Tileable Texture from a Photo] by [http://www.shawnolson.net/u/1/shawn_olson.html Shawn Olson].&lt;br /&gt;
* [http://renderhjs.net/textools/ TexTools] is a free set of tools. Camera Map allows projection painting in conjunction with your 2D painting app, via the Windows clipboard. Does not isolate the painted details on a transparent layer.&lt;br /&gt;
* [http://www.polyboost.com/ PolyBoost] was the genesis of Viewport Canvas. It works in older versions of Max, but is not free. Does not isolate the painted details on a transparent layer.&lt;br /&gt;
* [http://www.texpaint3d.de/tutorialtexpaint3d.html TexPaint3D] is a free painting plugin, but does not isolate the painted details on a transparent layer.&lt;br /&gt;
&lt;br /&gt;
=== Maya Painting Tools ===&lt;br /&gt;
Maya has various painting methods available. See the [http://download.autodesk.com/us/maya/2011help/files/Paint_Effects_and_3D_Paint_Tool_overview_What_is_Painting_in_Maya_.htm Maya 2011 Help].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Dedicated 3D Paint =&lt;br /&gt;
You can use a dedicated 3D painting program to paint directly across the seams. See the [[Tools]] page for a list of 3D Paint software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Multiple UV Channels =&lt;br /&gt;
Use your 3D app's texture-baking tool and multiple UV channels:&lt;br /&gt;
# Apply a second UV layout that is set up to be seamless where the original seams were.&lt;br /&gt;
# Bake it out to a new texture. &lt;br /&gt;
# Fix the seam in your 2D painting app.&lt;br /&gt;
# Apply the new map in your 3D app, and bake it back into the original UV layout. &lt;br /&gt;
See the [http://www.gamasutra.com/features/20061019/kojesta_01.shtml 3ds Max Tutorial] for this process, by ''[http://www.peterkojesta.com Peter Kojesta]''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= More Information =&lt;br /&gt;
* [[:Category:TextureTechnique]] - Texturing techniques commonly used in game development. &lt;br /&gt;
* [[:Category:Concept]] - The basics of concept drawing and painting. &lt;br /&gt;
* [[TexturingTutorials]] - Tutorials for creating game textures.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:TextureTechnique]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Decal</id>
		<title>Decal</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Decal"/>
				<updated>2014-11-25T06:57:32Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A decal is a texture with [[Opacity map|transparency]] on a floating polygon, placed so it appears to be painted directly on the underlying mesh. When placing decals, care must be taken to avoid [[Z-Fighting]]. Decals are commonly used for effects such as bullet marks, graffiti, broken edges, and tire tracks. &lt;br /&gt;
&lt;br /&gt;
For hard-edged decals, use &amp;quot;[[Transparency map#Alpha_Test|alpha test]]&amp;quot; because it is usually cheaper to render. If the decal must be softer (e.g. tire tracks in a racing game), use [[Transparency map#Alpha_Blend|alpha blend]] instead. See also [[MultiTexture]].&lt;br /&gt;
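The difference between the two modes can be sketched per-pixel in plain Python (color and alpha values in 0..1; the 0.5 cutoff is just a common default, not a fixed rule):&lt;br /&gt;

```python
def alpha_test(src_rgb, src_alpha, dst_rgb, cutoff=0.5):
    # Alpha test: each decal pixel is either fully kept or discarded,
    # which is why it is usually the cheaper mode to render.
    if src_alpha >= cutoff:
        return src_rgb
    return dst_rgb

def alpha_blend(src_rgb, src_alpha, dst_rgb):
    # Alpha blend: the classic "over" operator; gives the soft edges
    # needed for decals like tire tracks, at a higher fill cost.
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))
```

Real engines do this in the rasterizer and blend hardware, not on the CPU; the sketch only shows the math.&lt;br /&gt;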
&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
[[Image:decals_crytek.png]] &amp;lt;BR&amp;gt;&lt;br /&gt;
[http://freesdk.crydev.net/display/SDKDOC3/Using+Decals+for+Destroyed+Structures Using Decals for Destroyed Structures] - from the [http://freesdk.crydev.net/display/SDKDOC3/Home CryENGINE Art Asset Creation Guide] - Using alpha blended decals for broken concrete edges.&lt;br /&gt;
&lt;br /&gt;
[[Image:decals_nyhlen.png]] &amp;lt;BR&amp;gt;&lt;br /&gt;
[http://www.polycount.com/forum/showthread.php?t=100867 Broken concrete for CE3 (Tutorial)] - by [http://vnyhlen.se/ Valdemar 'sltrOlsson' Nyhlén] - Video tutorials using Zbrush and Maya to generate tiled maps for broken concrete edges.&lt;br /&gt;
&lt;br /&gt;
[[Image:decals_schreibt.png]] &amp;lt;BR&amp;gt;&lt;br /&gt;
[http://simonschreibt.de/gat/fallout-3-edges/ Fallout 3 – Edges] from Simon Schreibt - Investigating decal usage in Fallout 3.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:TextureTechnique]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Subdivision_Surface_Modeling</id>
		<title>Subdivision Surface Modeling</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Subdivision_Surface_Modeling"/>
				<updated>2014-11-25T06:49:08Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: /* Tips &amp;amp; Tricks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
This is a modeling technique for making high-poly hard-surface models: you manipulate a lower-resolution &amp;quot;cage&amp;quot; model, and the software subdivides it into a smoother surface.&lt;br /&gt;
&lt;br /&gt;
For game artists, hard-surface usually means mechanical or constructed items. These high-poly models are baked into [[Normal Maps]] and other [[:Category:TextureTypes|types of textures]], to be used on lower-resolution game-friendly models.&lt;br /&gt;
&lt;br /&gt;
Also called sub-d, SDS, or smoothing; not to be confused with [[SmoothingGroups]]. The term &amp;quot;box modeling&amp;quot; is also used, though it applies only to the cage-modeling process, not to the subdivision itself.&lt;br /&gt;
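Because standard (Catmull-Clark style) subdivision splits every quad of the cage into four, polygon counts grow geometrically with each level. A quick sketch of that growth, assuming an all-quad cage:&lt;br /&gt;

```python
def subdivided_quads(cage_quads, levels):
    """Quad count after `levels` rounds of 1-to-4 subdivision."""
    return cage_quads * 4 ** levels

# A 500-quad cage reaches 32,000 quads after only three levels:
for level in range(4):
    print(level, subdivided_quads(500, level))
```
&lt;br /&gt;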
&lt;br /&gt;
&amp;lt;span id=&amp;quot;P&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
== Primers ==&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_subdsurfacesoverview.png]]&lt;br /&gt;
[http://www.youtube.com/watch?v=ckOTl2GcS-E Subdivision Surfaces: Overview] video from [http://www.guerrillacg.org/about The GuerrillaCG Project]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_subdtopologyartifacts.png]]&lt;br /&gt;
[http://www.youtube.com/watch?v=k_S1INdEmdI Subdivision Topology Artifacts] video from [http://www.guerrillacg.org/about The GuerrillaCG Project]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_inorganicfundamentals1.png]]&lt;br /&gt;
[http://vimeo.com/10941211 Hard Surface Fundamentals] for 3ds Max by [http://digitalapprentice.net/ Grant 'sathe' Warwick]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_tecnicasmodelado.png]]&lt;br /&gt;
[http://www.etereaestudios.com/training_img/subd_tips/index.htm Técnicas modelado por subdivisión] by [http://www.etereaestudios.com/ Cristóbal Vila] ([http://translate.googleusercontent.com/translate_c?hl=en&amp;amp;sl=es&amp;amp;tl=en&amp;amp;u=http://www.etereaestudios.com/training_img/subd_tips/index.htm English translation by Google]) &lt;br /&gt;
&lt;br /&gt;
[[Image:subd_subdmodelingprimer.png]]&lt;br /&gt;
[http://www.blendernewbies.com/tools/subdivisionmodeling/subd_PRIMER/page1.html Sub-Division Primer] from [http://www.subdivisionmodeling.com/ Subdivisionmodeling.com]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_gotwires.png]]&lt;br /&gt;
[http://gotwires.info/ gotwires.info] Got Wires is all about Subdivision Modeling: Video Tutorials, Sub-D Wires and Modeling Resources.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_wikipedia.png]]&lt;br /&gt;
[http://en.wikipedia.org/wiki/Subdivision_surface Subdivision surface - Wikipedia] has good technical info about sub-d.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_perna.png]]&lt;br /&gt;
[http://www.polycount.com/forum/showthread.php?t=134116 Shared: My Technical Talk content] by [http://www.3pointstudios.com Per 'Perna' Abrahamsen].&lt;br /&gt;
&lt;br /&gt;
== Hard Surfaces ==&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_howumodeldemshapes.png]]&lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=56014 FAQ: How u model dem shapes? Hands-on mini-tuts for mechanical sub-d AKA ADD MORE GEO] from the [http://boards.polycount.net/ Polycount Boards]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_selwy.png]]&lt;br /&gt;
[http://www.selwy.com/2012/hard-surface-sculpting/ Hard Surface Sculpting – ZBrush] by Selwy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Media:subd_floatingpanelinglines.jpg|Making paneling lines as a floater]] for 3ds Max by [http://boyluya.blogspot.com/ Ralphie 'boyluya' Agenar]. From the Polycount Forum thread [http://www.polycount.com/forum/showthread.php?t=115361 Firefall - Hard surface Art Dump].&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_probooleans.png]][http://www.polycount.com/forum/showthread.php?t=106272 Using ProBooleans in 3ds Max for sub-d modeling] Polycount forum thread.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_per1283dtutorials.png]]&lt;br /&gt;
[http://www.per128.com/pages_tutorials/index.html 3D Tutorials] from [http://www.per128.com/ Per Abrahamsen aka Per128]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_marcusaseth.png]][http://www.polycount.com/forum/showthread.php?t=86882 Minitutorials by Marcus Aseth] - by ''[http://www.polycount.com/forum/member.php?u=41030 Marcus Aseth]''. Modeling gun parts, etc. &lt;br /&gt;
&lt;br /&gt;
[[Image:subd_slipgatecentral.png]][http://www.youtube.com/watch?&amp;amp;v=epqUf2SI3kw hard surface modelling hints pt1] - by ''[http://slipgateworks.blogspot.com/ 'slipgatecentral']''. Creating floaters in Maya, short video tutorial. &lt;br /&gt;
&lt;br /&gt;
[[Image:subd_circularholes.png]]  &lt;br /&gt;
[http://www.etereaestudios.com/training_img/subd_tips/agujeros.htm How to create circular holes by subdivision] by [http://www.etereaestudios.com Eterea Estudios] ([http://translate.google.com/translate?hl=en&amp;amp;sl=es&amp;amp;tl=en&amp;amp;u=http://www.etereaestudios.com/training_img/subd_tips/agujeros.htm translated into English] by Google)&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_blaizer.png]]  &lt;br /&gt;
[http://blog.whiteblaizer.com/tutorials/ Subdivision Tips – Hard Surface Modelling] by [http://blog.whiteblaizer.com/ Alberto 'Blaizer' Lozano] ([http://translate.google.com/translate?hl=en&amp;amp;sl=es&amp;amp;tl=en&amp;amp;u=http://blog.whiteblaizer.com/tutorials translated into English] by Google)&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_racer445scifiprop.png]]&lt;br /&gt;
[http://cg.tutsplus.com/tutorials/autodesk-3d-studio-max/project-workflow-creating-a-next-gen-sci-fi-prop-day-1/ Creating a Next-Gen Sci-Fi Prop] video tutorial by [http://racer445.com/ Evan 'racer445' Herbert]. Shows the &amp;quot;double smooth&amp;quot; modeling trick for 3ds Max: use smoothing groups to define hard edges, add a [[TurboSmooth]] modifier set to preserve smoothing groups, then add another TurboSmooth on top without that option.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_doublesmooth.png]]&lt;br /&gt;
[http://www.polycount.com/forum/showthread.php?t=117488 Double Smooth] - by [http://www.poopinmymouth.com Ben 'poopinmymouth' Mathis]. A video tutorial demonstrating the double-smooth technique for fast sub-d modeling.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_firehydrant.png]]&lt;br /&gt;
[http://cg.tutsplus.com/tutorials/3d-art/model-a-detailed-high-poly-fire-hydrant-in-3ds-max/ Model a Detailed High Poly Fire Hydrant in 3ds Max] - by ''[http://www.bentateonline.com Ben Tate]''&lt;br /&gt;
&lt;br /&gt;
[[Image:sculpt_philipkmetal.png]]  [http://www.philipk.net/tutorials/materials/metalmatte/metalmatte.html Matte Metal Tutorial] - by [http://www.philipk.net Philip 'Philipk' Klevestav]. Modeling and texturing a sci-fi metal plate wall using 3ds Max and Photoshop. More modeling and texturing tutorials at http://www.philipk.net/tutorials.html.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_modelingbathroomtiles.png]] &lt;br /&gt;
[[ModelingBathroomTiles]] - by [http://www.valent.us okkun]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_modelinghipolyweaponsispainful.png]]  &lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=37457 modeling hi poly weapons is painful, any tips?] from the [http://boards.polycount.net/ Polycount Boards]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_technicalhighpolyworkflow.png]]  &lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=38222 Technical highpoly workflow tutorial and scripts] from the [http://boards.polycount.net/ Polycount Boards]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_chainmailneox.png]]  [http://boards.polycount.net/showpost.php?p=1115091&amp;amp;postcount=156 Modeling chainmail in 3ds Max] - by ''[http://polyphobia.de Steffen &amp;quot;Neox&amp;quot; Unger]''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;OS&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Organic Surfaces ==&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_makingofmoff.png]]&lt;br /&gt;
[http://www.daz-art.com/hi_poly_tut.htm The Making of Moff - High Polygon Realistic Character Creation] - by ''[http://www.daz-art.com/ Darren &amp;quot;Daz&amp;quot; Pattenden]''&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_shoelaces.png]]  &lt;br /&gt;
[http://boards.polycount.net/showthread.php?t=71189 Modeling shoe laces, boot laces... etc.] from the [http://boards.polycount.net/ Polycount Boards]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_digitalsculpturetechniques.png]][http://www.theminters.com/misc/articles/derived-surfaces/index.htm Digital Sculpture Techniques] by [http://cube.phlatt.net/home/spiraloid/ Bay Raitt] and [http://www.izware.com/ Greg Minter]&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_thepole.png]][http://web.archive.org/web/20101231124113/http://www.subdivisionmodeling.com/forums/showthread.php?t=907 The Pole] from the [http://www.subdivisionmodeling.com/ Subdivisionmodeling.com] forum. Saved here: [[Media:SubdivisionModelingDotCom_The-Pole.pdf]] (10MB PDF)&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_thepolerevised.png]][http://web.archive.org/web/20110101013544/http://www.subdivisionmodeling.com/forums/showthread.php?t=8000 The Pole - Revised] from the [http://www.subdivisionmodeling.com/ Subdivisionmodeling.com] forum&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_thebody.png]][[Media:SubdivisionModelingDotCom_Topology-Body.pdf]] (1MB PDF) from the [http://www.subdivisionmodeling.com/ Subdivisionmodeling.com] forum.&lt;br /&gt;
&lt;br /&gt;
[[Image:subd_thehead.png]][[Media:SubdivisionModelingDotCom_Topology-Head.pdf]] (8MB PDF) from the [http://www.subdivisionmodeling.com/ Subdivisionmodeling.com] forum.&lt;br /&gt;
&lt;br /&gt;
== Tips &amp;amp; Tricks ==&lt;br /&gt;
[[Image:subdiv_stepdownguide.png|512px]]&lt;br /&gt;
[[Image:subdiv_roundinsets.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_tubes-oblastradiuso.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_tesselation.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_transition.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-ngon.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-cap.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-extrusion.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-extrusion2.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-extrusion3.png|512px]]&lt;br /&gt;
[[Image:subdiv_cylinder-extrusion4.gif|512px]]&lt;br /&gt;
[[Image:subdiv_rims-alecmoody.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_gunstock-racer445.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_leica-joewilson.jpg|512px]]&lt;br /&gt;
[[Image:subdiv_bolt-insets.png|512px]]&lt;br /&gt;
&lt;br /&gt;
== More Information ==&lt;br /&gt;
[http://www.polycount.com/forum/showthread.php?t=56014 FAQ: How u model dem shapes? Hands-on mini-tuts for mechanical sub-d AKA ADD MORE GEO] Polycount forum thread&lt;br /&gt;
[[BaseMesh]]&lt;br /&gt;
[[:Category:Topology]]&lt;br /&gt;
[[CharacterSculpting]]&lt;br /&gt;
[[PolygonCount]]&lt;br /&gt;
[[ReTopologyModeling]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Character]] [[Category:CharacterModeling]] [[Category:Environment]] [[Category:EnvironmentModeling]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	<entry>
		<id>http://wiki.polycount.com/wiki/Category:SpecialEffects</id>
		<title>Category:SpecialEffects</title>
		<link rel="alternate" type="text/html" href="http://wiki.polycount.com/wiki/Category:SpecialEffects"/>
				<updated>2014-11-25T06:22:30Z</updated>
		
		<summary type="html">&lt;p&gt;Cman2k: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Special Effects''', also called '''FX''', '''SFX''', '''Visual Effects''', '''VFX''', and '''Real-Time VFX''', are an essential part of gaming. Even the small puff of dust kicked up by a footstep on dusty ground adds immersion, helping the player feel they are actually moving the avatar through that virtual world.&lt;br /&gt;
&lt;br /&gt;
Real-Time VFX can generally be broken down into three major categories: '''Particles''', '''Shaders''', and '''Simulations'''.&lt;br /&gt;
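As a taste of the particle category: the core of most particle systems is a simple per-frame integration step. A minimal Python sketch with illustrative gravity and lifetime values (real systems run this in the engine or on the GPU):&lt;br /&gt;

```python
class Particle:
    def __init__(self, pos, vel, lifetime):
        self.pos = list(pos)      # world-space position
        self.vel = list(vel)      # units per second
        self.age = 0.0
        self.lifetime = lifetime  # seconds until the particle dies

def update(particles, dt, gravity=(0.0, -9.8, 0.0)):
    """Advance one frame; return only the particles still alive."""
    alive = []
    for p in particles:
        for i in range(3):
            p.vel[i] += gravity[i] * dt  # Euler integration
            p.pos[i] += p.vel[i] * dt
        p.age += dt
        if p.age >= p.lifetime:
            continue                     # particle expired this frame
        alive.append(p)
    return alive
```
&lt;br /&gt;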
&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://www.crytek.com/download/Crytek_Aug_2012_Siggraph_SH_VFX_for_Games_Particle_Effects_Workshop.zip VFX for Games - Particle Effects] (84MB zip) presentation delivered at SIGGRAPH 2012 by Crytek’s Sascha Herfort.&lt;br /&gt;
* [http://tech-artists.org/forum/forumdisplay.php?35-VFX-Artists VFX Artists subforum] on Tech-Artists.Org&lt;br /&gt;
* [http://www.imbuefx.com/ imbueFX : VFX Training] ($) training site for visual effects in games&lt;br /&gt;
* [http://www.polycount.com/forum/showthread.php?t=78003 FX Artist resources?]&lt;br /&gt;
* [http://blogs.battlefield.com/2012/05/inside-dice-close-quarters-vfx/ Inside DICE: Adapting the Battlefield visuals for Close Quarters combat] &lt;br /&gt;
* [http://www.slideshare.net/alexvlachos/gdc-2010-left4dead2-wounds Rendering Wounds in Left 4 Dead 2] by [http://alex.vlachos.com/ Alex Vlachos] of Valve Software, at GDC 2010&lt;br /&gt;
* [http://stephenjameson.com/tutorials/faking-volumetric-effects/ Faking Volumetric Effects] in UDK by [http://stephenjameson.com Stephen Jameson]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
[[Category:Discipline]]&lt;/div&gt;</summary>
		<author><name>Cman2k</name></author>	</entry>

	</feed>