The myth of infinite detail: Bilinear vs. Bicubic

Have you ever noticed how, in movies and television, actors can take a crappy, grainy low-res traffic camera picture of a distant automobile and somehow "enhance" the image until they can read the license plate perfectly?


This is a companion discussion topic for the original blog entry at: http://www.codinghorror.com/blog/2005/08/the-myth-of-infinite-detail-bilinear-vs-bicubic.html

Stuff like 2xSaI will get much better results for drawings like Hello Kitty there.

BTW, for EXACTLY the type of image you show here - the low-res, low-color, sharp Hello Kitty - the emulator writers have created a bunch of very good heuristic interpolation algorithms, e.g. 2xSAI.

The example you gave is not a very good example of the astonishingly high-quality effects you get with bicubic and bilinear scaling.

Line drawings, such as the one you use, don’t scale well.

However, photographs scale extremely well and the difference between bicubic and bilinear becomes very apparent.

photographs scale extremely well

A photograph actually has a lot of pixels (a real one, that is)

I think both of you are referring to real photography with professional-grade pixel counts, e.g., many megapixels. I’m thinking of the low-res, grainy security and traffic camera images I frequently see “enhanced” to reveal detail on shows like television’s “24”.

But I agree that if you have a picture with megapixels of detail to start with, a bit of reasonable upscaling can be done.

very good heuristics interpolation algorithms, e.g. 2xSAI

That is very cool! More screenshots of it in action:

http://elektron.its.tudelft.nl/~dalikifa/#Screenshots

as well as a detail shot at the top of that page.

The problem of photographic enhancement is usually not one of “making pixels” (scaling up). A photograph actually has a lot of pixels (a real one, that is); the average size of a silver grain on film is about 1 micron. A typical 35mm negative is 24x36 mm (don’t ask). Assuming you scan the film at the full resolution of the grains, you would get 24M x 36M pixels, or 864M pixels. Even if the region of interest doesn’t fill the frame, you will generally have plenty of pixels for the subregion (e.g. license plate).

The problem is that the lens system probably wasn’t focused exactly on the subregion (license plate), so you have to sharpen the image to enhance it. There are many techniques for sharpening, most of which use a convolution kernel. This is a matrix operation in which a pixel’s value is changed by applying positive and negative offsets computed from nearby pixels. For example, the “unsharp mask” operation is a convolution kernel. It is amazing how much contrast and detail can be recovered from an out-of-focus image using this technique. Often you really can read the license plate.
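To make that concrete, here is a minimal sketch of both forms, assuming NumPy and SciPy and a grayscale image stored as a 2-D float array in [0, 1]; the function names are illustrative, not from any particular tool:

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Sharpen by adding back the detail a Gaussian blur removes."""
    blurred = ndimage.gaussian_filter(image, sigma=sigma)
    # The "mask" is the high-frequency detail the blur took out.
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

def sharpen_3x3(image):
    """A similar effect via an explicit convolution kernel:
    a positive center weight, negative offsets from the four neighbors."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    return np.clip(ndimage.convolve(image, kernel, mode="nearest"), 0.0, 1.0)
```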

Ole

P.S. In case you’re wondering, there are devices which can scan at micron or even submicron resolution, which is two orders of magnitude beyond your typical flatbed scanner. My company Aperio makes instruments called ScanScopes which are used for scanning microscope slides; the resulting images can have a resolution of .25 microns (approx. 100,000 dpi). A typical microscope slide has a tissue area of about 20mm x 15mm, so the resulting digital images are around 100M x 60M pixels. That’s a lot of pixels.

Sigh. This is what I get for commenting before I’ve finished my coffee. I used some Ms instead of Ks. A 35mm negative at 1 micron/pixel would have 24K x 36K pixels (which is 864M pixels, as noted). A 20mm x 15mm microscope slide at .25 microns/pixel would have 80K x 60K pixels (which is 4.8G pixels).
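For anyone who wants to double-check those figures, a quick back-of-the-envelope computation in plain Python, using only the dimensions quoted in the posts above:

```python
um_per_mm = 1000.0

# 35mm negative (24mm x 36mm) at ~1 micron per grain:
w, h = 24 * um_per_mm / 1.0, 36 * um_per_mm / 1.0
print(w, h, w * h / 1e6)   # 24000.0 36000.0 864.0 -> 24K x 36K = 864 megapixels

# 20mm x 15mm microscope slide at 0.25 microns/pixel:
w, h = 20 * um_per_mm / 0.25, 15 * um_per_mm / 0.25
print(w, h, w * h / 1e9)   # 80000.0 60000.0 4.8 -> 80K x 60K = 4.8 gigapixels
```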

Don’t you know that traffic cameras record everything using vector-based representation? That’s how they do it. And they have an instantaneous shutter speed and petabytes of storage capacity.

:wink:

However, photographs scale extremely well and the difference between bicubic and bilinear becomes very apparent.

I’m not so sure about this, particularly for the relatively low-res images I was referring to in the post. Here’s an example 640x428 photo image:

http://www.codinghorror.com/blog/images/woz_roth.jpg

I blew this up 300% using both bilinear and bicubic. Then I zoomed in to 100% and browsed the image. I noticed that the sharpening effect of bicubic makes the JPEG artifacts far more pronounced.

That said, bicubic and bilinear are essentially the same. You’re basically choosing between a SLIGHTLY sharper image (bicubic) or a SLIGHTLY blurrier one (bilinear). When upsizing an image, I think it’s somewhat dangerous to err on the side of sharpening, although I guess this depends on how good the source image is. And your decision might be different when you are downsizing the image…
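A rough way to reproduce this comparison, assuming Python with Pillow installed; the output filenames are just placeholders:

```python
from PIL import Image

src = Image.open("woz_roth.jpg")            # the 640x428 example linked above
size = (src.width * 3, src.height * 3)      # the 300% enlargement

src.resize(size, Image.BILINEAR).save("woz_bilinear_3x.png")
src.resize(size, Image.BICUBIC).save("woz_bicubic_3x.png")
```

Saving the results as PNG avoids stacking a second round of JPEG compression on top of the very artifacts being compared.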

I think the people pointing out that bicubic interpolation works best on photos weren’t trying to claim that this would enable you to expand 3 pixels into a sharp and legible number plate. I don’t think anyone’s taking issue with your basic claim that the magic zoom beloved of crime drama writers is impossible.

I think they were just pointing out that bicubic interpolation is often significantly better than bilinear interpolation on photographic images. What you wrote implies that bicubic interpolation is pointless because it offers no benefits over bilinear. That’s not actually true - it’s not a magic bullet, but it’s a definite improvement for certain scenarios.

(But it does interact unfortunately with artifacts on over-compressed JPEGs, as you observe…)

Hi,

And have you heard of GREYCstoration?
http://www.greyc.ensicaen.fr/~dtschump/greycstoration

It can be used for image resizing, among other things, and it does wonders.

See the Image Resizing section at http://www.greyc.ensicaen.fr/~dtschump/greycstoration/demonstration.html

eg.:
http://www.greyc.ensicaen.fr/~dtschump/greycstoration/img/res_rabbit.png

The problem with your Hello Kitty image isn’t that it’s not like a photograph. The problem is that many of the jaggies are double-pixel. If you look at the feet, you can see single-pixel jaggies; these turn into very impressive straight lines using bicubic. The double-pixel features hardly even get blurred by either algorithm, and that is the way you would want it to work.
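You can see this with a tiny experiment; a sketch assuming Pillow, where a hand-built one-pixel staircase edge stands in for the jaggies:

```python
from PIL import Image

img = Image.new("L", (8, 8), 0)         # 8x8 black square
for y in range(8):
    for x in range(8):
        if x > y:                       # everything right of the diagonal is white
            img.putpixel((x, y), 255)

# Upscale the staircase 4x with each filter and compare the edge.
for name, flt in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("bicubic", Image.BICUBIC)]:
    img.resize((32, 32), flt).save(f"staircase_{name}.png")
```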

Anyway, you’re second on a Google search for “bilinear vs bicubic,” so you have much honor and responsibility to fix this.

Though I wonder if maybe one reason you’re second is because you seem to “debunk” the advantage of bicubic.

If you want to see an example of an “impossible” increase in clarity of a blurry image, check out the sharpened video of the foam debris that struck the Space Shuttle Columbia’s wing, leading to its breakup on reentry. The original video that they had was taken from miles away, and showed about what you’d expect: a blurry mess. After some NASA magic, it clearly showed the debris hitting the wing. If you ask me, I’d say that it’s possible because I’ve seen it (but I don’t know how they did it!)

I saw this algorithm called hq2x/hq3x/hq4x (pick one; the number just describes the magnification). It seems fairly good for images like this.

This article has a problem: all its internal links are dead. The link to the older article redirects to the new blog’s homepage, while the links to the screenshots are 404.


10 years later, there is super-resolution with deep neural networks.


“A bit blurry, yes, but clearly superior to giant chunky pixels.” I actually prefer the sharp, blocky 300% version using pixel scaling (naive nearest neighbour) over the bilinear and bicubic filtering versions. To me those last two, with their blurry borders, are clearly not better at all. The reason is that limited-resolution, fixed-palette line art (2-, 4-, or 8-bit) should not be blurred by bilinear/bicubic filtering. That only works sort of okay on true-colour (24-bit) images / photographic material.