Understanding Digital Image Encoding

FS Ndzomga
7 min read · Jun 25, 2023

To fully understand deep learning applied to computer vision, I think it is important to first understand how images are encoded.

Digital image encoding is the process of converting a visual image into a form that a computer can understand and manipulate. This process involves several steps, including sampling, quantization, and encoding. Here’s a more detailed explanation of the process:

  1. Image Sampling: The first step in the digitization process is to divide the image into a grid of individual picture elements or pixels. This is called “sampling.” For instance, a 1920 x 1080 pixel image contains a grid of 1920 pixels across and 1080 pixels down, giving a total of 2,073,600 individual pixels.
  2. Color Representation: Each pixel in an image is represented by a combination of primary colors. The most common method uses Red, Green, and Blue (RGB). Each of these colors is assigned an intensity value, and their combination defines the pixel’s color. In a standard 24-bit color representation, each color gets 8 bits, resulting in 256 possible intensities per color, and more than 16.7 million possible color combinations per pixel.
  3. Quantization: Quantization is the process of reducing the number of distinct colors used in an image. This can help to reduce the amount of data needed to represent the image. For…
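The three steps above can be sketched with NumPy. This is a minimal illustration, not a production pipeline: the tiny 2 x 2 image and the choice of 4 quantization levels are assumptions made for the example.

```python
import numpy as np

# Sampling: an image is a grid of pixels, here height x width x 3 (RGB),
# stored as 8-bit unsigned integers. This tiny 2x2 image is a made-up example.
image = np.array([
    [[255, 0, 0],   [0, 255, 0]],        # red pixel, green pixel
    [[0, 0, 255],   [255, 255, 255]],    # blue pixel, white pixel
], dtype=np.uint8)

height, width, channels = image.shape
total_pixels = height * width            # 1920 * 1080 = 2,073,600 for Full HD

# Color representation: 8 bits per channel gives 256 intensities per color
# and 256**3 = 16,777,216 possible colors per pixel (24-bit color).
intensities_per_channel = 2 ** 8                       # 256
colors_per_pixel = intensities_per_channel ** 3        # 16,777,216

# Quantization: reduce 256 levels per channel to, say, 4 levels (2 bits),
# by snapping each value to the bottom of its bucket. Fewer distinct
# values means the image can be stored with fewer bits.
levels = 4
step = 256 // levels                     # bucket width: 64
quantized = (image // step) * step       # values become one of {0, 64, 128, 192}
```

After quantization each channel needs only 2 bits instead of 8, which is exactly the data reduction the step is for; the trade-off is visible color banding.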
