Luminance error in image scaling

Upscaled images lose contrast when they are scaled with smoothing algorithms such as bilinear interpolation. Here is what can be done about it. First, some results:

A source bitmap to be scaled (2x2 pixels)

Results of typical scaling process (upscaled by factor 16)

Results with corrected algorithm (upscaled by factor 16)

The lower image compensates for the inter-pixel blending introduced by the bilinear processing by enhancing the contrast of the pixels with an "edge enhance" type filter. As a result, the average luminance within each 16x16 region equals the luminance of the corresponding source pixel.
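
Because bilinear upscaling and block averaging are both linear operations, the exact compensation can be found by solving a small linear system. The sketch below illustrates the idea; it is not the article's implementation, and it assumes scipy.ndimage.zoom (order 1) as a stand-in bilinear upscaler and a made-up 2x2 source bitmap.

    import numpy as np
    from scipy.ndimage import zoom   # stand-in bilinear upscaler (order=1)

    def block_mean(img, k):
        """Average luminance of each k x k block of the upscaled image."""
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def precompensate(src, k):
        """Solve for a small image whose bilinear upscale block-averages back to src."""
        n = src.size
        A = np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0               # probe the linear operator with a unit impulse
            up = zoom(e.reshape(src.shape), k, order=1)
            A[:, i] = block_mean(up, k).ravel()
        return np.linalg.lstsq(A, src.ravel(), rcond=None)[0].reshape(src.shape)

    # Hypothetical 2x2 source values (the article's actual bitmap is not given here).
    src = np.array([[0.2, 0.9],
                    [0.8, 0.1]])
    k = 16
    plain     = zoom(src, k, order=1)
    corrected = zoom(precompensate(src, k), k, order=1)
    print(block_mean(plain, k))      # block averages drift toward the overall mean
    print(block_mean(corrected, k))  # block averages match src (values may leave [0, 1])

An explicit solve like this is only practical for tiny bitmaps; for real images the same effect is approximated with a local sharpening ("edge enhance") kernel, as described above.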

Unfortunately, a large number of bilinear samples must be taken before the average approaches the correct value with respect to the source pixel. Such contrast-enhancing algorithms are therefore only appropriate for large scale factors, where a large number of bilinear samples is guaranteed to be taken. Otherwise the "edge enhance" will just produce artifacts of its own.
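
The sketch below (again an assumption, not the article's code, using scipy.ndimage.zoom as the bilinear upscaler) makes that concrete: it measures how much of a single source pixel's luminance stays in its own block and how much leaks into the neighbouring block, for several scale factors. The weights only settle toward stable values as the factor grows, which is why a correction tuned for large factors misbehaves at small ones.

    import numpy as np
    from scipy.ndimage import zoom   # stand-in bilinear upscaler (order=1)

    def block_mean(img, k):
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    impulse = np.zeros((4, 4))
    impulse[1, 1] = 1.0              # a single lit source pixel in a black field

    for k in (2, 4, 8, 16, 64):
        means = block_mean(zoom(impulse, k, order=1), k)
        # weight kept in the pixel's own block vs. weight leaked into the block to the right
        print(k, round(means[1, 1], 3), round(means[1, 2], 3))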

Contrast enhancement is also quick to overflow: it is fairly easy to produce colors brighter than the maximum brightness or darker than black. For instance, a dark pixel surrounded by light pixels runs into the problematic concept of negative brightness, because it must be very dark indeed to still look "black enough" after bilinear blending with its bright neighbours. The problem could be mitigated by using a higher-order interpolation function that localizes more of a pixel's energy within itself.
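
As a rough numeric illustration of the overflow problem (a sketch under the same assumptions as above, with a hypothetical 3x3 bitmap), the exact compensation for a black pixel in a white field demands a value well below zero and pushes the surrounding pixels past full white:

    import numpy as np
    from scipy.ndimage import zoom   # stand-in bilinear upscaler (order=1)

    def block_mean(img, k):
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def precompensate(src, k):
        # Solve for the small image whose bilinear upscale block-averages back to src.
        n = src.size
        A = np.zeros((n, n))
        for i in range(n):
            e = np.zeros(n)
            e[i] = 1.0
            A[:, i] = block_mean(zoom(e.reshape(src.shape), k, order=1), k).ravel()
        return np.linalg.lstsq(A, src.ravel(), rcond=None)[0].reshape(src.shape)

    src = np.ones((3, 3))
    src[1, 1] = 0.0                  # one black pixel surrounded by white
    corrected = precompensate(src, 16)
    print(corrected[1, 1])           # negative: the pixel would need to be "blacker than black"
    print(corrected.max())           # above 1: neighbouring pixels must overshoot full white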