Making a True Detective look! Controlling the Blending Stage in your Shader

Once you’ve got comfortable writing shaders in Unity, you start to ask yourself how the graphics pipeline works under the hood, and whether you can access it.

In this post, I’m going to cover an important part of the graphics pipeline: the Blending Stage. The Blending Stage is responsible for how the fragment color from our fragment shader blends with the others. We’re going to check out terms such as framebuffer, depth buffer, order-independent transparency… Exciting? I hope so, let’s start!

What happens to the output from our fragment function?

The first thing we should know is that the fragment we get from our fragment function is a candidate to become a pixel on our screen, but it is not a pixel yet. By candidate I mean that this fragment has to pass a couple of tests and operations before becoming a pixel. Sometimes it’s not the final pixel we see, but just a part of it.

But why does our fragment have to pass all those tests and operations? A simple example: our fragment could be occluded by another object which is nearer to the camera. That object will have its own material, wrapping a shader with another fragment function that returns a new fragment. So, in this case, we will have two fragment candidates to become a real pixel. How can we decide which of these is the one? By applying per-fragment operations.

These operations decide how much each fragment coming from the fragment functions contributes to the final color. In the example above (one object occluding another), the depth test sorts out the issue.

The depth test is used to render opaque primitives, comparing the depth of a fragment to the depth of the frontmost previously rendered fragment, which is stored in the depth buffer.

The Blending Stage only makes sense when rendering semi-transparent primitives (glass, fire, flares, etc.). The reason is simple: if the object is opaque, why blend at all?
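
To make the distinction concrete, here is a minimal sketch of the render states behind the opaque case, written in ShaderLab (the values shown are Unity’s usual defaults, so you rarely have to type them):

Pass {
   ZTest LEqual // depth test: keep the fragment only if it is at least as close as the stored depth
   ZWrite On    // record its depth in the depth buffer so it can occlude later fragments
   Blend Off    // no blending: the fragment color simply replaces the frame buffer value
   // ...
}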

How Unity deals with the Blending Stage

As we now know, the Blending Stage is the part that mixes the current fragment’s output color with the color already stored in the frame buffer.

Unity deals with this stage through its ShaderLab syntax, with this line:

Blend {code for SrcFactor} {code for DstFactor}

As I said, it’s ShaderLab syntax; if you want to use it, you have to write it exactly as Unity’s ShaderLab expects.

Notice the two slots in the line above: the code for SrcFactor and the code for DstFactor. This is not code you actually write yourself; instead, you pick one of the keywords from the “Code” column of the table below.

Code               Resulting Factor (SrcFactor or DstFactor)
One                float4(1.0, 1.0, 1.0, 1.0)
Zero               float4(0.0, 0.0, 0.0, 0.0)
SrcColor           fragment_output
SrcAlpha           fragment_output.aaaa
DstColor           pixel_color
DstAlpha           pixel_color.aaaa
OneMinusSrcColor   float4(1.0, 1.0, 1.0, 1.0) - fragment_output
OneMinusSrcAlpha   float4(1.0, 1.0, 1.0, 1.0) - fragment_output.aaaa
OneMinusDstColor   float4(1.0, 1.0, 1.0, 1.0) - pixel_color
OneMinusDstAlpha   float4(1.0, 1.0, 1.0, 1.0) - pixel_color.aaaa

The code for the SrcFactor deals with the color from our fragment function, and the code for the DstFactor deals with the color already in the frame buffer.

We can think of this operation like this:

float4 result = SrcFactor * fragment_output + DstFactor * pixel_color;

The variable result is the color that will be written into the frame buffer, replacing the old value. fragment_output is the output color from our fragment function, and pixel_color is the color currently in the frame buffer.
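
The blending hardware is fixed-function, so this equation never appears in your shader code; still, we can emulate it in plain Cg to see exactly what the factors do. A sketch for illustration only (blend_emulate and its parameters are names I made up):

float4 blend_emulate(float4 srcFactor, float4 dstFactor,
                     float4 fragment_output, float4 pixel_color)
{
   // exactly the fixed-function formula: a factor-weighted sum of the two colors
   return srcFactor * fragment_output + dstFactor * pixel_color;
}

// Blend One Zero (no blending): result = fragment_output, a plain overwrite.
// Blend One One (additive):     result = fragment_output + pixel_color.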

Implementing an Alpha Blending Shader

Let’s make a very common alpha blending shader. As the name suggests (alpha blending), we’re going to use the alpha value of the fragment output to weight both colors (output fragment and frame buffer) and mix them into one final color.

Shader "Cg shader using blending" {
   SubShader {
      Tags { "Queue" = "Transparent" } 
         // draw after all opaque geometry has been drawn
      Pass {
         ZWrite Off // don't write to depth buffer 
            // in order not to occlude other objects

         // Note: we write the Blend instruction inside the Pass, but outside the CG section
         Blend SrcAlpha OneMinusSrcAlpha // use alpha blending

         CGPROGRAM 
 
         #pragma vertex vert 
         #pragma fragment frag
 
         float4 vert(float4 vertexPos : POSITION) : SV_POSITION 
         {
            return mul(UNITY_MATRIX_MVP, vertexPos);
         }
 
         float4 frag(void) : COLOR 
         {
            return float4(0.0, 1.0, 0.0, 0.3); 
               // the fourth component (alpha) is important: 
               // this is semitransparent green
         }
 
         ENDCG  
      }
   }
}

I’ve taken the source code of this shader from this wikibook, where you can find a deeper treatment of everything I’m talking about.

The instruction for the blending stage is set inside the Pass block, but outside the CG code.

What we have done above is apply the same blend mode as Photoshop’s normal blend mode: the more opacity, the more weight the fragment adds to the final color:

This instruction

 Blend SrcAlpha OneMinusSrcAlpha

means:

float4 finalColor = fragment_output.aaaa * fragment_output + (float4(1.0, 1.0, 1.0, 1.0) - fragment_output.aaaa) * pixel_color;
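
Plugging in the semitransparent green returned by the fragment function above, and assuming (just for the example) a mid-grey color already sitting in the frame buffer:

// fragment_output = float4(0.0, 1.0, 0.0, 0.3); pixel_color = float4(0.5, 0.5, 0.5, 1.0) (assumed)
// finalColor = 0.3 * float4(0.0, 1.0, 0.0, 0.3)
//            + 0.7 * float4(0.5, 0.5, 0.5, 1.0)
//            = float4(0.35, 0.65, 0.35, 0.79)   // 30% green over 70% background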

Let’s take advantage of the Blending Stage

One of the most useful things about being able to change the properties of the blending stage is that we can achieve blend modes (multiply, additive…) in the cheapest way, without using any “grab pass”. Not all of the blend modes we are used to are available, though. You have access to the following ones:

Blend SrcAlpha OneMinusSrcAlpha // Traditional transparency
Blend One OneMinusSrcAlpha // Premultiplied transparency
Blend One One // Additive
Blend OneMinusDstColor One // Soft Additive
Blend DstColor Zero // Multiplicative
Blend DstColor SrcColor // 2x Multiplicative
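
By the way, since the Blend line lives outside the CG code, nothing stops you from exposing both factors as material properties and switching blend modes from the inspector without touching the shader. A sketch, assuming Unity 5 or later (the property names _SrcBlend and _DstBlend are my own choice):

Properties {
   [Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend ("Src Factor", Float) = 5  // SrcAlpha
   [Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Factor", Float) = 10 // OneMinusSrcAlpha
}

// ...and inside the Pass:
Blend [_SrcBlend] [_DstBlend]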

Being Creative: Achieving “True Detective” Style

Let’s do some non-traditional blending using the concepts we just learned! There’s a famous HBO series I like called “True Detective”. The series’ opening is well known, and it uses this kind of effect:

Taking a look at the images, we can say that there is an image in the background (some building environment) and a second layer, our “character”, which masks the background. But how does it mask? The brighter our mask or character is, the more of the background we can see through it; the darker our mask is, the less of the background shows.

Let’s Try!

Import an image into your Unity project; it will be the background:

First, add it to the scene as a sprite or a quad with a mesh renderer and an unlit material using that picture as its texture. Then, add the image that is going to be the mask:

Set the second image (the one with just the girl) as the texture of a material using the shader below:

Shader "Unlit/BlendShader_03"
{
	Properties
	{
		_MainTex ("Texture", 2D) = "white" {}
		_Color("Color", Color) = (1.0,1.0,1.0,1.0)
	}
	SubShader
	{
		Tags { "RenderType"="Transparent" }
		LOD 100

		Pass
		{
			Cull Off // render both faces of the quad

			ZWrite Off // don't write to depth buffer
			           // in order not to occlude other objects

			// Here is the blending directive!
			Blend OneMinusSrcColor SrcColor
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			// make fog work
			#pragma multi_compile_fog

			
			#include "UnityCG.cginc"


			struct appdata
			{
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct v2f
			{
				float2 uv : TEXCOORD0;
				UNITY_FOG_COORDS(1)
				float4 vertex : SV_POSITION;
			};

			sampler2D _MainTex;
			float4 _MainTex_ST;
			float4 _Color;

			v2f vert (appdata v)
			{
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				o.uv = TRANSFORM_TEX(v.uv, _MainTex);
				UNITY_TRANSFER_FOG(o,o.vertex);
				return o;
			}
			
			fixed4 frag (v2f i) : SV_Target
			{
				// sample the texture
				fixed4 col = tex2D(_MainTex, i.uv)  * _Color;
				// apply fog
				UNITY_APPLY_FOG(i.fogCoord, col);
				return col ;
			}
			ENDCG
		}
	}
}

What we are doing with this line:

         Blend OneMinusSrcColor SrcColor

is the following: weight the color of the “mask” (the texture of our shader) by one minus itself, weight the color of the background (the one already in the frame buffer) by the color of the mask, and add the two together.

So if the “mask” texture is white, the source factor (one minus white) is zero, the destination factor is one, and the result is exactly the background color: the background shows through at full strength. If the mask is black, both terms are zero and the result is black.

On the other hand, if the value of the mask is, for example, mid-grey, the background contributes at half strength, plus a small 0.25 contribution from the mask itself.
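
Written out with the blend equation from before, the three cases look like this (sample values only):

// finalColor = (float4(1,1,1,1) - fragment_output) * fragment_output + fragment_output * pixel_color
//
// mask = 1.0 (white):    finalColor = 0.0  + 1.0 * pixel_color  -> pure background
// mask = 0.0 (black):    finalColor = 0.0  + 0.0                -> black
// mask = 0.5 (mid-grey): finalColor = 0.25 + 0.5 * pixel_color  -> dimmed background plus grey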

It might sound a little confusing, but just play around swapping the textures and you will understand the behaviour better. I linked a video in case you just want to check out the result!
