I'm trying to understand downscaling. I can see how interpolation algorithms such as bicubic and nearest neighbour can be used when upscaling, to "fill in the blanks" between the old, known points (pixels, in the case of images).
But downscaling? I can't see how any interpolation technique could be used there. There are no blanks to fill!
I've been stuck on this for far too long; give me a nudge in the right direction. How do you interpolate when you are, in fact, removing known data?
Edit: Let's assume we have a one-dimensional image, with one colour channel per point. A downscale algorithm scaling 6 points to 3 by averaging pixel values looks like this:
1,2,3,4,5,6 → (1+2)/2, (3+4)/2, (5+6)/2
Am I on the right track here? Is this interpolation in downscaling rather than just discarding data?
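In code, the pairwise averaging described above might look like this (a minimal sketch in plain Python; the function name is mine):

```python
def downscale_by_two(pixels):
    """Halve a 1-D image by averaging each adjacent pair of pixels.

    Each output pixel is the mean of two input pixels, so the known
    data is blended together rather than simply discarded.
    """
    return [(pixels[i] + pixels[i + 1]) / 2
            for i in range(0, len(pixels) - 1, 2)]

print(downscale_by_two([1, 2, 3, 4, 5, 6]))  # [1.5, 3.5, 5.5]
```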
Here you have the original image on top, then a naive removal algorithm in the middle, and an interpolating one at the bottom.
Consider a big spotlight. The light at the centre is the brightest, and the light at the edges becomes darker. When you shine it farther away, would you expect the beam to suddenly lose the darkness near the edges and become a solid outline of light?
No, and the same thing is happening here to the Stack Overflow logo. As you can see in the first downscaling, the picture has lost the softness in its edges and looks horrible. The second downscaling has kept the smoothness at the edges by averaging each pixel's surroundings.
A simple convolution filter for you to try: add up the values of the pixel and all the pixels surrounding it, take the average, and replace the pixel with that value. You can then discard the adjacent pixels, since their information has already been folded into the central pixel.
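That average-then-decimate idea might be sketched like this for a 2-D grayscale image stored as a list of lists (a box-filter sketch under my own assumptions; border pixels just average whatever neighbours exist):

```python
def box_downscale(img):
    """Downscale a 2-D grayscale image by 2 in each direction.

    Each kept pixel is replaced by the average of its 3x3
    neighbourhood (clamped at the borders), and every second
    row/column is then discarded.
    """
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            row.append(total / count)
        out.append(row)
    return out

# A flat image stays flat, but sharp transitions get smoothed:
print(box_downscale([[10, 10, 10, 10]] * 4))  # [[10.0, 10.0], [10.0, 10.0]]
```

Averaging before discarding is exactly what keeps the soft edges: each surviving pixel carries a blend of its neighbourhood instead of a single arbitrarily chosen sample.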