
Pixelation has its uses in the 3D world. A common use of this shader is as a transition when the player dies and respawns, or when certain elements on the screen need hiding. This filter pixelates the screen down to a lower resolution. The full source code can be found here.

(Please note code comments are removed from these snippets)

Shader "My Shaders/Pixelation"
{
    Properties 
    {
        _MainTex ("Buffer", 2D) = "white" {}
        _PixelCountX ("Number of X", float) = 128
        _PixelCountY ("Number of Y", float) = 72
    }
    SubShader 
    {        
        Pass 
        {         

As with all shaders we start with the setup. First we define the name of the shader. In Unity this is the path through the dropdown menu the user will have to take when searching for shaders. We set the name of the shader to Pixelation, placing it in the My Shaders folder.

We then define the inputs of the shader, the Properties. The underscore prefix is an identifier for input variables. Alongside the screen buffer grabbed into _MainTex, we also define a couple of floating point values. These will be the X and Y axis resolutions of the final image.

The SubShader block contains the shader programs themselves. It is named SubShader because it gathers everything in one file: the vertex and fragment shaders, and multiple passes if necessary.

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag

            #include "UnityCG.cginc"

            uniform sampler2D _MainTex;

            float _PixelCountX;
            float _PixelCountY;

The CGPROGRAM directive tells the compiler to stop reading ShaderLab and instead start compiling Cg, Nvidia's shader language, until it reaches ENDCG.

Pragmas are used to define the names of the vertex and fragment shader functions. As a custom vertex shader isn't needed for this post-process effect, the built-in vert_img shader from UnityCG.cginc is used.

Next we redeclare the shader's inputs as uniforms so this specific pass can read them (again, only one pass is needed here). The 'uniform' keyword doesn't actually have to be written in Unity, but it is left in for consistency and readability of the code.

            float4 frag(v2f_img input) : COLOR
            {
                float pixelWidth = 1.0f / _PixelCountX;
                float pixelHeight = 1.0f / _PixelCountY;
                
                float2 texCoord = float2((int)(input.uv.x / pixelWidth) * pixelWidth, (int)(input.uv.y / pixelHeight) * pixelHeight);
                float4 colour = tex2D(_MainTex, texCoord);
            
                return colour;
            }
            ENDCG
          }
    }
}

First we divide 1.0 by our inputs to obtain the spacing between texture coordinates for each pixel at that resolution. This allows us to sample the texture map (buffer) at only one point for a whole range of texture coordinates, resulting in the square, solid colours that represent each pixel of the target resolution.
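As a quick sanity check of that arithmetic, here is the same calculation in plain Python rather than Cg (the snake_case names are illustrative, mirroring the shader's _PixelCountX and pixelWidth):

```python
# Cell size in UV space for the default property values.
# Mirrors pixelWidth = 1.0 / _PixelCountX in the fragment shader.
pixel_count_x = 128.0
pixel_count_y = 72.0

pixel_width = 1.0 / pixel_count_x   # 0.0078125
pixel_height = 1.0 / pixel_count_y  # roughly 0.0139

print(pixel_width, pixel_height)
```

Every block of the final image is one of these cells: 1/128th of the screen wide and 1/72nd tall.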

So now that we have our new texture coordinate 'offsets' we define a new float2, "texCoord", to replace the texture coordinates obtained from the vertex shader. These new coordinates are calculated by taking the original texture coordinates from the vertex shader, dividing them by the offset values, truncating the result to an integer with the (int) cast, then multiplying back by the offsets. For positive values this truncation is the equivalent of using floor() in GLSL, which returns the closest integer rounded down. Note that the division has to happen before the cast: texture coordinates range from 0.0 to 1.0, so truncating them directly would always return a value of 0.
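The snapping step can be sketched in plain Python (the snap function and its names are illustrative, not part of the shader):

```python
def snap(coord, cell):
    """Snap a UV coordinate to the lower edge of its cell, as the shader
    does with (int)(input.uv.x / pixelWidth) * pixelWidth."""
    return int(coord / cell) * cell

cell = 1.0 / 128  # pixelWidth for _PixelCountX = 128

# Every coordinate inside the same cell snaps to the same value...
print(snap(0.500, cell))  # 0.5
print(snap(0.503, cell))  # 0.5
# ...while a coordinate in the next cell snaps one step further along.
print(snap(0.508, cell))  # 0.5078125
```

Because all the coordinates within a cell collapse to one value, the texture is only ever sampled at one point per cell.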

Finally we sample the texture using these newly created texture coordinates and output the result, which gives us the image "scaled down" to the input resolution.
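Putting the whole fragment shader together, here is a hedged CPU sketch of the same sampling pattern in plain Python (the pixelate function and the list-of-lists "image" are illustrative stand-ins for the shader and the screen buffer). Every output pixel inside a cell reads the same source texel, which is what produces the solid blocks:

```python
def pixelate(image, count_x, count_y):
    """Resample a 2D list of values using the same UV snapping as the shader."""
    h, w = len(image), len(image[0])
    pw, ph = 1.0 / count_x, 1.0 / count_y
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # UV coordinate at the centre of this output pixel
            u, v = (x + 0.5) / w, (y + 0.5) / h
            # Snap to the lower corner of the containing cell
            su, sv = int(u / pw) * pw, int(v / ph) * ph
            # tex2D-style nearest lookup at the snapped coordinate
            row.append(image[int(sv * h)][int(su * w)])
        out.append(row)
    return out

# A 4x4 "image" pixelated down to 2x2 cells: each 2x2 block becomes uniform.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(pixelate(img, 2, 2))
```

This is only a model of the sampling pattern, not the GPU code, but it shows the effect: the 4x4 input collapses into four solid 2x2 blocks.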
