Introduction to post-processing image effects in Unity

In this post, I’m going to give an overview of how Unity handles camera effects, also known as post-processing effects.

The idea is that once the whole camera view has been rendered and is ready to be displayed on your screen, we take this “screen image” and modify it to make it look the way we want (more contrast, brighter, a blur effect, a monochromatic effect…).

Unity provides this image through the MonoBehaviour event OnRenderImage. To make it work, we have to follow just two steps:

Step one

Create a script which implements the event and attach it to the camera of our scene.

using UnityEngine;

public class ImageRenderedEffect : MonoBehaviour
{
    public Material processingImageMaterial;

    void OnRenderImage(RenderTexture imageFromRenderedImage, RenderTexture imageDisplayedOnScreen)
    {
        if (processingImageMaterial != null)
        {
            Graphics.Blit(imageFromRenderedImage, imageDisplayedOnScreen, processingImageMaterial);
        }
        else
        {
            // No material set: pass the rendered image through unchanged,
            // otherwise the destination would be left undefined.
            Graphics.Blit(imageFromRenderedImage, imageDisplayedOnScreen);
        }
    }
}
OnRenderImage is called after all rendering is complete, just before the image is displayed on screen. You can read the documentation about this event here. The public field processingImageMaterial creates a slot in the Inspector where we can set the material we want to use to process the image.

The Graphics.Blit() function is the piece of magic that lets us run the rendered image through the material that will process it. The first parameter is the rendered image, the second is the output RenderTexture, and the third is the material we want to use.
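Conceptually, a blit just runs every source pixel through the material’s shader and writes the result to the destination. Here is a minimal plain-Python sketch of that idea (the function names are illustrative, not Unity API):

```python
def blit(source, material):
    """Conceptual sketch of Graphics.Blit: run every pixel of the source
    image through a per-pixel 'material' function and return the result."""
    return [[material(pixel) for pixel in row] for row in source]

# Example: an "invert" material applied to a tiny 2x2 grayscale image
image = [[0.0, 0.25],
         [0.5, 1.0]]
inverted = blit(image, lambda p: 1.0 - p)
print(inverted)  # [[1.0, 0.75], [0.5, 0.0]]
```

In Unity, of course, this per-pixel work happens on the GPU inside the material’s shader, which is why step two is about writing that shader.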

I know all the code above is a little tricky, but it’s something you have to do if you want to make a post-processing effect.


Step two

Create a shader and a material to wrap it. The shader is what will actually process the rendered image. In this case, I’m going to create a simple “black and white” effect.

One key concept is that the _MainTex property of the shader is set automatically by Graphics.Blit(). This is important to know because we work with _MainTex, which is what contains the rendered image. The UVs are also set to match our screen, so we can work as if the rendered image were a texture applied to a mesh plane that covers the whole screen (from corner to corner).

Shader "Unlit/B&WEffect"
{
	Properties
	{
		_MainTex ("Texture", 2D) = "white" {}
		_ScreenPartitionWidth ("ScreenPartitionWidth", Range (0.0, 1.0)) = 0.5
	}
	SubShader
	{
		Tags { "RenderType"="Opaque" }
		LOD 100

		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag

			#include "UnityCG.cginc"

			struct appdata
			{
				float4 vertex : POSITION;
				float2 uv : TEXCOORD0;
			};

			struct v2f
			{
				float2 uv : TEXCOORD0;
				float4 vertex : SV_POSITION;
			};

			sampler2D _MainTex;
			float4 _MainTex_ST;
			float _ScreenPartitionWidth;

			v2f vert (appdata v)
			{
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				o.uv = TRANSFORM_TEX(v.uv, _MainTex);
				return o;
			}

			fixed4 frag (v2f i) : SV_Target
			{
				// Sample the texture
				fixed4 col = tex2D(_MainTex, i.uv);

				// Apply the perceptual brightness proportion for each color channel
				float brightness = col.x * 0.3 + col.y * 0.59 + col.z * 0.11;

				// If the uv x coordinate is higher than _ScreenPartitionWidth we apply
				// the b&w effect; if not, we output the rendered image as it is.
				if (i.uv.x > _ScreenPartitionWidth)
				{
					// Draw a vertical black line as the frontier between the
					// processed image and the unmodified image.
					if (abs(i.uv.x - _ScreenPartitionWidth) < 0.005)
					{
						return fixed4(0.0, 0.0, 0.0, 1.0);
					}
					return fixed4(brightness, brightness, brightness, 1.0);
				}
				return col;
			}
			ENDCG
		}
	}
}
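To see the per-pixel logic of the fragment function without running Unity, it can be sketched in plain Python (the names here are illustrative, not Unity/Cg identifiers):

```python
def frag(color, uv_x, partition=0.5, line_half_width=0.005):
    """Plain-Python sketch of the fragment shader's per-pixel logic."""
    r, g, b = color
    # Perceptual brightness weights for each color channel
    brightness = r * 0.3 + g * 0.59 + b * 0.11
    if uv_x > partition:
        # Black vertical line marking the frontier between both halves
        if abs(uv_x - partition) < line_half_width:
            return (0.0, 0.0, 0.0)
        return (brightness, brightness, brightness)
    # Left of the partition: output the pixel unchanged
    return (r, g, b)

print(frag((1.0, 0.0, 0.0), 0.8))  # (0.3, 0.3, 0.3)
```

A pure red pixel on the right half comes out as a 30% gray, which matches the 0.3 weight the shader gives to the red channel.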

Here you can see a video applying the effect to a scene with a couple of 3D low-poly flowers downloaded from the Asset Store, created by Chlyang.
