One more tiny explanation: Actually, in practice, object space and world space normal maps should not differ, so I also don't think that one can select "world space" anywhere.
As for tangent space maps: they should (and still do) work too, but what I was unaware of is that there are a lot of different formats out there in the wild. For example, Blender exports the z (blue) component at full precision (0..255), whereas most others only use the value range (128..255). The signs of r, g and b can also be flipped, depending on where the normal map was exported from. :/
The same btw goes for the newly supported object space maps. So the current implementation matches the Blender one, with export settings +X,+Y,+Z. UE4 (I think), for example, uses +X,+Y,-Z by default.
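To make the format zoo above concrete, here is a rough sketch of decoding a normal map texel under these conventions. The function name and defaults are made up for illustration; only the value ranges and per-channel sign flips come from the description above.

```python
# Illustrative sketch only: decode an 8-bit RGB texel into a unit normal,
# parameterized by the export conventions discussed above.
def decode_normal(r, g, b, signs=(1, 1, 1), full_range_z=True):
    """signs: per-channel sign flips, e.g. (1, 1, -1) for a -Z export.
    full_range_z=True  -> z stored in 0..255 (Blender-style, -1..1)
    full_range_z=False -> z stored in 128..255 (non-negative z only)
    """
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    if full_range_z:
        z = (b / 255.0) * 2.0 - 1.0
    else:
        z = (b - 128) / 127.0
    x, y, z = signs[0] * x, signs[1] * y, signs[2] * z
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

# The neutral tangent-space texel (128, 128, 255) decodes to roughly (0, 0, 1).
print(decode_normal(128, 128, 255))
```

With `signs=(1, 1, -1)` the same texel would decode with its z pointing the other way, which is exactly the kind of mismatch that makes one exporter's maps look wrong in another engine.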
And one more: both object and tangent space normal maps DO work with animated/rigid objects/primitives, BUT object space normal maps do NOT work with the primitive frame animations (i.e. where one blends between successive frames of the same mesh; this seems to be rarely used, I've only noticed it so far with the dancing cactus animation on Cactus Jack's).
This is not directed at Toxie, as I would imagine you know all this very well, but is rather an opportunity to try to describe it in a way both artists and engineers can understand, so they don't run afoul of each other.
The way I understand normal maps, and specifically tangent-space normals, is as follows:
The normal of the polygon surface (the face normal) gets tweaked: it can be tilted in any direction, which gives the appearance of height detail. So here's how that works:
When you are aligned with the face normal and not changing a thing, you use the neutral values: Red .5 (128), Green .5 (128), Blue 1 (255).
Because the vector of the normal, and consequently the perceived depth, can tilt down as well as up, and a value range of 0-255 would only allow "up", you split the value range in half: 128 is neutral, and anything less tilts negative, with "R" covering U space and "G" covering V space (UV coordinate space). Blue is essentially an intensity value that generally gets overused, and is therefore compressed or dropped altogether by the engine (it can be reconstructed from R and G, since the normal has unit length). There are "bent normals" that make real use of the extra information; I know they're superior in some ways, but I don't have a clear understanding of how the blue channel affects the surface normal, or exactly why it's superior.
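The "split the range in half" idea can be sketched in a couple of lines (purely illustrative, the function name is invented):

```python
# Map an 8-bit channel value c to a signed component in [-1, 1],
# so that 128 lands (almost exactly) on neutral zero.
def channel_to_component(c):
    return (c / 255.0) * 2.0 - 1.0

for c in (0, 128, 255):
    # 0 maps to -1, 128 maps to roughly 0, 255 maps to +1
    print(c, round(channel_to_component(c), 3))
```

(The "roughly" is because 255 is odd: 128 maps to about 0.004, not exactly 0, which is a small quirk every 8-bit normal encoding lives with.)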
So now that we understand that R and G are responsible, and that to allow positive and negative values we start at .5 (128), let's move on to rendering systems. OpenGL and DirectX use two different conventions for their matrices: OpenGL is column-major, DirectX is row-major. What does that mean? Well, if I give you six values
1,2,3,4,5,6
in OpenGL, if you transferred this to a matrix you'd expect:
1,4
2,5
3,6
In DirectX you'd get:
1,2,3
4,5,6
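A tiny sketch of how the same six values index differently under the two layouts (plain Python, no graphics API implied; the helper is made up for illustration):

```python
# The same flat list of six values, read under two storage conventions.
values = [1, 2, 3, 4, 5, 6]

def element(values, row, col, rows, cols, column_major):
    # column-major: values stored column by column (OpenGL convention)
    # row-major:    values stored row by row (DirectX convention)
    if column_major:
        return values[col * rows + row]
    return values[row * cols + col]

# Row 0, column 1 of each layout above:
print(element(values, 0, 1, rows=3, cols=2, column_major=True))   # 4
print(element(values, 0, 1, rows=2, cols=3, column_major=False))  # 2
```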
So when you play battleship and say you want "A,2", you'd get different results in OpenGL than in DirectX: in OpenGL you'd get 4, and in DirectX you'd get 2. So how does that affect normal maps? Well, the green channel ends up inverted between the two conventions (strictly speaking, the flip comes from the texture-coordinate convention: OpenGL's V axis points up while DirectX's points down, but the result is the same). That's why if you use OpenGL normals in a DirectX renderer you get something that may appear correct at first, but then you move the light and you see that the depth is inverted.
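Converting a tangent-space normal map between the two conventions is therefore just a green-channel inversion per texel; a minimal sketch (the function name is invented):

```python
# Flip a texel between OpenGL-style (Y+) and DirectX-style (Y-) conventions
# by inverting the green channel; 8-bit values assumed.
def flip_green(r, g, b):
    return (r, 255 - g, b)

print(flip_green(128, 200, 255))  # (128, 55, 255)
```

Note the operation is its own inverse: flipping twice gives the original texel back, which is why batch-converting a map in either direction is safe.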
OK, now that we have all that: each engine and authoring tool has its quirks and theories about how to represent its depth axis. Most see X-Y as a piece of paper (a plane) with Z being depth, but are you holding the piece of paper on the wall like a picture (Maya), or laying it on the floor with depth pointing up into the sky (Max/Blender)?

The way tangent space normals are encoded does not depend on world space coordinates, because they live in tangent space (on the UVs, in the object's coordinates). World space normals, by contrast, store each normal as an absolute vector in that world, so a value like R 0, G 1, B 0 should in theory point straight along the green "Y" axis. That's great for anything where neither the world nor the object changes (no animation), as your straight-up value is still true; but if the position or orientation of the surface changes through deformation or animation, we have issues. Also, if you encode world space normals for OpenGL and run them in DirectX, you could still get the right results, as long as you don't offset the object and freeze its rotation. Because of these caveats, most people don't encode world space normal maps: the dependency on where and how they were generated makes them less portable from platform to platform.
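A toy sketch of why animation breaks world-space maps (pure Python, names invented for illustration): a texel baked as "straight up" keeps saying "up" even after the surface itself has rotated.

```python
import math

# Decode an 8-bit texel into a signed vector, same mapping as before.
def decode(r, g, b):
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# Rotate a vector around the X axis by the given angle in degrees.
def rotate_x(v, degrees):
    a = math.radians(degrees)
    x, y, z = v
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

baked = decode(128, 255, 128)      # roughly (0, 1, 0): "straight up", as baked
true_normal = rotate_x(baked, 90)  # where the surface actually points after animating
print(baked, true_normal)          # the map still says "up"; the surface says "forward"
```

The baked texel is frozen at bake time, so after the 90-degree rotation the shading would light the surface as if it still faced up, which is the mismatch described above.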
Also, each renderer can use a left-handed or a right-handed coordinate system, and that dictates how the positive directions of the axes are described. Hold up your left hand and use your thumb, index, and middle finger to point in three different directions (X, Y, Z), with the thumb up, the index forward, and the middle finger across: you get a left-handed coordinate system, where each axis's positive direction points along your finger. Do that with your right hand and you'll notice you're pointing mostly in the same directions, but the "across" axis is now inverted. This is easy to account for (just invert the across axis), and it seems fairly harmless as far as position is concerned, but it becomes a tangle of math once you add rotations.
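The finger test above has a numeric equivalent: the cross product. A small sketch (helper written out by hand so it's self-contained):

```python
# Cross product of two 3D vectors, written out component by component.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x, y = (1, 0, 0), (0, 1, 0)
print(cross(x, y))            # (0, 0, 1): X cross Y gives +Z, a right-handed basis
x_flipped = (-1, 0, 0)        # invert the "across" axis, as described above
print(cross(x_flipped, y))    # (0, 0, -1): the basis is now left-handed
```

Flipping any single axis flips the handedness, which is exactly why a one-axis inversion is enough to convert between the two systems (until rotations get involved).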
Edited by DCrosby, 04 July 2019 - 01:58 PM.