Each pixel of an image is usually represented using 4 bytes: three color channels (R, G, B) plus alpha (A), or just 3 bytes (R, G, B). As we know, 1 byte is 8 bits and can represent unsigned values from 0 to 255. Thus, in terms of RGB, a pixel can take 2^24 = 16,777,216 possible color combinations.

In some situations, storage matters more than image quality. Moreover, we are often not very sensitive to small changes in color resolution, especially when an image contains more drastic spatial variation. This is why we can reduce the color resolution of a pixel by representing each channel with fewer than 8 bits.

Below is an example of down-quantizing an image. It can be seen that the changes are not obvious when colors are represented with 5 bits.

**Input Image (0~255)**

**5 bits per color (0~31)**

**3 bits per color (0~7)**

**1 bit per color**

## Implementation

I didn’t actually change the number of bits stored per pixel in the bitmap file; instead, I grouped the colors into a small number of levels. The diagram below illustrates my method:

It can be seen that this is just a flooring of pixel values, so the implementation is as simple as integer-dividing the current pixel value by the down-quantization factor and then multiplying it back by the same factor. The integer division discards the remainder, and multiplying back produces the flooring effect on the pixel values.
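The divide-then-multiply flooring described above can be sketched as follows (a minimal Python illustration, not the author's actual source; the function name and the example pixel values are my own, and an 8-bit input channel is assumed):

```python
def down_quantize(value, bits):
    """Floor an 8-bit channel value down to `bits` bits of color resolution.

    The down-quantization factor is 2^(8 - bits); integer division floors
    the value to the nearest lower multiple of that factor, and multiplying
    back restores the original 0~255 scale.
    """
    factor = 2 ** (8 - bits)
    return (value // factor) * factor

pixel = [200, 37, 255]  # an example (R, G, B) pixel
print([down_quantize(v, 5) for v in pixel])  # [200, 32, 248]
print([down_quantize(v, 1) for v in pixel])  # [128, 0, 128]
```

Note that with 5 bits the values barely move (at most 7 levels out of 255), which is why the 5-bit image looks nearly identical to the input, while 1 bit collapses every channel to just two possible values.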

Here is the link to the source code of my implementation.