Texture Color Formats and Srgb conversions
(The following is adapted from conversations on the wgpu users Matrix channel (mainly thanks to cwfitzgerald), and saved here as a useful resource for future wgpu developers.)
Digital color can be an extremely confusing topic, and the resources in common circulation aren't always great. Modern low-level graphics APIs (such as those under the hood of wgpu) give us powerful tools to manipulate color data, though this comes at the cost of having to understand several low-level principles of how digital color works.
The best place to start is to have an understanding of the problems and history of gamma correction. This wiki won't go deeply into the topic, but some useful links can be found here:
- What every coder should know about gamma.
- Hitchhikers Guide to Digital Color
- Learn OpenGL - Gamma Correction
There are many different texture formats you can specify for your color data. This can be image data you upload from the CPU, shaded geometry rendered to intermediate textures (what we used to call frame buffers in OpenGL), compute shader targets, or geometry shaded to the window surface itself.
We won't go into all the different formats here, but it's useful to talk about two of the most commonly used formats and some misconceptions about how they work and what they do (and don't do).
Some of the most common formats you will see used for texture data, render targets, etc. are Rgba8Unorm and Rgba8UnormSrgb. So what are they, and how do they relate to the issues of gamma correction, perceptually linear 'srgb' space and physically accurate 'linear' space (as described in detail in the links above)?
First of all, it will be useful to clarify some terminology, as the words 'linear' and 'sRGB' are overloaded and can lead to confusion:
- sRGB space is a set of color primaries - this defines what red, green, and blue are
In this sense (and perhaps counter-intuitively), TextureFormats ending in "Unorm" and "UnormSrgb" are BOTH in the sRGB color space, and encode their data in the same way without losing precision (as might be implied by some descriptions of 'linear' vs 'srgb'); more on this later.
Here is some useful terminology to help avoid 'linear' and 'srgb' confusion:
- What we traditionally called "linear space" is more accurately called "scene referred color"
- What we traditionally called "srgb space" is more accurately called "monitor referred color"
- The conversion from scene referred color to monitor referred color is done by the sRGB OETF (opto-electrical transfer function [light -> bits])
- The conversion from monitor referred color to scene referred color is done by the sRGB EOTF (electro-optical transfer function [bits -> light])
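Under the hood, the OETF and EOTF are just two piecewise per-channel functions that are inverses of each other. As an illustrative sketch (the standard sRGB curves, not actual wgpu or driver code), they look like:

```rust
/// OETF: scene referred ("linear") -> monitor referred (light -> bits).
/// This is the standard piecewise sRGB encoding curve.
fn srgb_oetf(x: f32) -> f32 {
    if x <= 0.003_130_8 {
        12.92 * x
    } else {
        1.055 * x.powf(1.0 / 2.4) - 0.055
    }
}

/// EOTF: monitor referred -> scene referred (bits -> light).
/// The inverse of the OETF.
fn srgb_eotf(x: f32) -> f32 {
    if x <= 0.04045 {
        x / 12.92
    } else {
        ((x + 0.055) / 1.055).powf(2.4)
    }
}

fn main() {
    // Scene referred mid grey encodes to roughly 0.735 monitor referred,
    // and decoding it recovers the original value.
    let mid_grey = 0.5_f32;
    let encoded = srgb_oetf(mid_grey);
    let decoded = srgb_eotf(encoded);
    assert!((decoded - mid_grey).abs() < 1e-5);
    println!("0.5 scene referred encodes to {encoded:.3} monitor referred");
}
```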
Let's walk our way up the color pipeline. The OS assumes that the bits you are sending to the screen are monitor referred and in the sRGB space (unless otherwise specified). It will interpret the bits this way no matter what.
- If you write / shade to a texture in a Unorm format, the GPU will take the output of your shader, do a float -> int conversion, and that's it (Unorm floats are stored as ints under the hood, though you don't need to worry about this).
This means that if you're using this Unorm texture as a surface, the floats you are writing from your shader need to be monitor referred (i.e. numbers that the OS can safely assume are already in sRGB space).
- If you write / shade to a texture in a UnormSrgb format, the GPU will take the output of your shader, apply the sRGB OETF to it, and then do the float -> int conversion to store the data.
This means that if you're using this UnormSrgb texture as a surface, the floats you are writing from your shader need to be scene referred, as the GPU will do the conversion to monitor referred as part of the write!
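As a concrete sketch of the write path (illustrative Rust, not the actual wgpu or driver code), here is what ends up in an 8-bit channel when a shader outputs 0.5 to each kind of texture:

```rust
/// sRGB OETF: scene referred -> monitor referred.
fn srgb_oetf(x: f32) -> f32 {
    if x <= 0.003_130_8 {
        12.92 * x
    } else {
        1.055 * x.powf(1.0 / 2.4) - 0.055
    }
}

/// Unorm write: just the float -> int conversion, nothing else.
fn store_unorm(x: f32) -> u8 {
    (x.clamp(0.0, 1.0) * 255.0).round() as u8
}

/// UnormSrgb write: apply the OETF first, then the same float -> int conversion.
fn store_unorm_srgb(x: f32) -> u8 {
    store_unorm(srgb_oetf(x))
}

fn main() {
    let shader_output = 0.5_f32;
    // Unorm stores the float as-is: 128.
    println!("Unorm stores:     {}", store_unorm(shader_output));
    // UnormSrgb encodes it to monitor referred first: 188.
    println!("UnormSrgb stores: {}", store_unorm_srgb(shader_output));
}
```

Both formats store an 8-bit integer; the only difference is the conversion step applied on the way in.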
The inverse happens when you read from a texture:
- a Unorm texture just does int -> float on read.
- a UnormSrgb texture does the int -> float conversion and then applies the sRGB EOTF to go from monitor referred to scene referred.
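The read path in the same illustrative sketch: the stored byte 188 (roughly what the OETF maps scene referred 0.5 to in 8 bits) reads back as approximately 0.5 scene referred, up to 8-bit quantization error:

```rust
/// sRGB EOTF: monitor referred -> scene referred.
fn srgb_eotf(x: f32) -> f32 {
    if x <= 0.04045 {
        x / 12.92
    } else {
        ((x + 0.055) / 1.055).powf(2.4)
    }
}

/// Unorm read: just the int -> float conversion.
fn load_unorm(v: u8) -> f32 {
    v as f32 / 255.0
}

/// UnormSrgb read: int -> float, then the EOTF.
fn load_unorm_srgb(v: u8) -> f32 {
    srgb_eotf(load_unorm(v))
}

fn main() {
    // A UnormSrgb write of scene referred 0.5 stores the byte 188,
    // so reading that byte back should land near 0.5 again.
    let scene = load_unorm_srgb(188);
    assert!((scene - 0.5).abs() < 0.01);
    println!("byte 188 reads back as {scene:.3} scene referred");
}
```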
Q: So Unorm textures are in 'linear' space and UnormSrgb textures are in 'srgb' space?
A: No; despite the naming, you are responsible for whether the outputs of your shader represent 'scene' or 'monitor' referred values. The 'Srgb' tag on the texture format just tells the GPU that you want it to do a conversion from sRGB to linear (or more specifically 'monitor' to 'scene') when you sample the texture, or the inverse when you write to it.
Q: Any general tips for working with these formats?
A:
- Think of TextureFormats as a combination of data resolution and data conversion functions. For example: Rgba8UnormSrgb = Rgba8 data + UnormSrgb conversion step.
- Let the GPU do the conversion work for you. Historically, in WebGL for example, if you wanted to do physically accurate lighting you would need to convert between sRGB and linear manually (often with approximations like pow(col, 2.2) to decode and pow(col, 1.0 / 2.2) to encode). By being aware of the textures you are sampling from and writing to, you should be able to avoid manual conversions.
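For the curious, here is a quick illustrative comparison of the old gamma-2.2 shortcut against the exact piecewise sRGB EOTF, showing that the approximation is close but not exact (another reason to let the GPU's fixed-function conversion do the work):

```rust
/// The exact piecewise sRGB EOTF.
fn srgb_eotf(x: f32) -> f32 {
    if x <= 0.04045 {
        x / 12.92
    } else {
        ((x + 0.055) / 1.055).powf(2.4)
    }
}

/// The common shader shortcut for decoding sRGB.
fn gamma_22(x: f32) -> f32 {
    x.powf(2.2)
}

fn main() {
    // Sample every 8-bit input and find the worst disagreement
    // between the shortcut and the real curve.
    let max_err = (0..=255u32)
        .map(|i| {
            let x = i as f32 / 255.0;
            (srgb_eotf(x) - gamma_22(x)).abs()
        })
        .fold(0.0_f32, f32::max);
    // The approximation is decent but measurably off (worst case under 2%).
    assert!(max_err < 0.02);
    println!("max |exact - approx| over 8-bit inputs: {max_err:.4}");
}
```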