Better Image Resizing

In a previous post, I examined the difference between bilinear and bicubic image resizing techniques. Those are the two options available in most graphics programs for resizing an image.

This is a companion discussion topic for the original blog entry at:

I think you may have these backwards:

* When making an image smaller, use bicubic, which has a natural sharpening effect. You want to emphasize the data that remains in the new, smaller image after discarding all that extra detail from the original image.
* When making an image larger, use bilinear, which has a natural smoothing effect. You want to blend over the interpolated fake detail in the new, larger image that never existed in the original image. 

Bicubic definitely smooths more than bilinear. It interpolates from a 4x4 neighborhood (16 source pixels), whereas bilinear interpolates from a 2x2 neighborhood (4 pixels).
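To make the neighborhood sizes concrete, here is a minimal sketch of bilinear sampling with NumPy (the function name `bilinear_sample` is my own; it blends the 2x2 patch around a fractional coordinate):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional coordinates (x, y) from its 2x2 neighborhood."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(float)
    top = p[0, 0] * (1 - dx) + p[0, 1] * dx   # blend along x, top row
    bot = p[1, 0] * (1 - dx) + p[1, 1] * dx   # blend along x, bottom row
    return top * (1 - dy) + bot * dy          # blend along y

img = np.array([[0, 2], [4, 6]], dtype=float)
print(bilinear_sample(img, 0.5, 0.5))  # center of the 2x2 patch -> 3.0
```

A bicubic sampler does the same thing with a cubic weight function over a 4x4 patch, which is where both the extra smoothing and the extra cost come from.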

Regarding enlarging: the big problem is information density. By definition, interpolated pixels cannot add information to the image. The information entropy of the interpolated image (the minimum number of bits necessary to encode it) will always be at most that of the original. Things like Genuine Fractals do better (by visual inspection, or provably under certain assumptions, for a single class of images) by making strong inferences about the new pixels based on local geometry.
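The information-density point can be sanity-checked with a toy experiment of my own, using zlib's compressed size as a crude stand-in for entropy: a naively upscaled image has many times the raw bytes, but compresses far below that ratio because no information was added.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Naive 4x "upscale" by pixel replication: 16x the raw bytes, no new information.
upscaled = original.repeat(4, axis=0).repeat(4, axis=1)

c_orig = len(zlib.compress(original.tobytes(), 9))
c_up = len(zlib.compress(upscaled.tobytes(), 9))
print(c_orig, c_up)  # the upscaled copy compresses to far less than 16x c_orig
```

Smarter interpolators spread the redundancy around instead of repeating pixels verbatim, but the same bound applies: the decoder could always regenerate the big image from the small one plus the algorithm.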

It’d be more correct to say that you can’t magically synthesize any pixels, period.

(I’m being pedantic, but that’s only because I do research in this area. Oh, and someone might find my pedantry useful - at least, that’s what I tell myself.)

The two image resizing kernels most famous for being “soft” and “sharp” respectively are Gaussian and Lanczos.

Some common software, such as IrfanView, implements those; for some reason, Photoshop doesn’t.

You can use something like this to try out different options:
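For reference, the Lanczos kernel itself is simple. A minimal pure-Python version (a=3 is the common "Lanczos3" choice; the function name is mine):

```python
import math

def lanczos(x, a=3):
    """Lanczos windowed-sinc kernel: sinc(x) * sinc(x/a), supported on [-a, a]."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# The kernel is 1 at the source pixel and ~0 at every other integer offset,
# so it passes through the original samples; the negative lobes between
# integers are what give Lanczos its characteristic sharpness.
print(lanczos(0), lanczos(1), lanczos(0.5))
```

The Gaussian kernel, by contrast, is everywhere positive, so it can only average, which is why it reads as "soft".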

Just one thing: why would anyone want to upscale an image? By doing that you increase the file size (as well as the processing power required to display the image) and decrease the quality. I just don’t see the point.

Of course downscaling is much more useful. I always do that before publishing pictures I took.

There are some really excellent javascript mouseover comparisons on these two pages.

Enlargement algorithms compared:

Reducing algorithms compared:

Check out this page for some demonstrations of different interpolation methods. Both bilinear and bicubic basically suck, but have the benefit of being fast:

Here’s a nice explanation of the tradeoffs between aliasing, blurring, and edge halo in non-adaptive interpolation:


“Even the most advanced non-adaptive interpolators always have to increase or decrease one of the above artifacts [aliasing, blurring, edge halo] at the expense of the other two-- therefore at least one will be visible.”

What about GREYCstoration? It does pretty good inpainting - I even got rid of the flag in the Windows 95 logo once! (Easy, because it’s just clouds, which look GOOD when blurred.)

A little off-topic, but there’s a serious downside to attempting to smooth upscaled pixel art images: they become smooth. As a retrogamer, I just can’t live without being able to see every pixel exactly as intended. “Nearest neighbor” applied at multiples of exactly 100% is the only algorithm I ever use when I want to enlarge some sprites.
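Integer nearest-neighbor scaling is trivial to express. A sketch with NumPy, where each pixel simply becomes a solid block with no blending at all:

```python
import numpy as np

# A tiny 2x2 "sprite"; each value stands in for a palette index.
sprite = np.array([[1, 2],
                   [3, 4]])

# 3x nearest-neighbor upscale: every pixel becomes a 3x3 block.
scaled = sprite.repeat(3, axis=0).repeat(3, axis=1)
print(scaled.shape)  # (6, 6)
```

Because the scale factor is an integer, every output pixel maps to exactly one source pixel, which is why the result stays perfectly crisp.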

Yep, HQ?x is clearly the most sophisticated algorithm for upscaling low-res images, though it’s more suited for cartoon-like scenes with fewer colors - think Super Mario World instead of Donkey Kong Country.

Personally, I prefer good ol’ interpolation.


You’d want to upscale an image before printing. Most printers, even very good, industrial photographic and poster printers, don’t do a great job. Upscaling requires subjective judgment, which a computer can’t do. You know that a bit of a photograph is supposed to be a whisker, and you know what one is supposed to look like. You can tweak it until the upscaled whisker looks right, but the default interpolation a printer uses hasn’t got a clue.

My senior project in college (undergrad) was to implement fractal image compression and study the differences between it and other compression techniques. It was a great project that I really enjoyed - and taught me a lot about image compression in general. (That was the idea, I suppose.) Thanks for the post, Jeff! Now I’m going to have to go dig out my final paper and re-read it. :slight_smile:

Good summary, Jeff.

–Kevin Fairchild


Bicubic = sharpening, bilinear = smoothing just isn’t a good rule of thumb. It looks like the second kitty image in the post you linked to was interpolated using a bicubic interpolator with a Catmull-Rom kernel. If it had been a B-spline bicubic interpolator, for example, it would have been much smoother than the result from the bilinear. The Catmull-Rom kernel is actually sort of an odd duck as cubic kernels go, as it yields a C^1 continuous surface, making the result rather sharp, whereas most (maybe all) other popular cubic kernels yield a C^2 continuous surface.

It really comes down to the fact that weighted averaging over a 4x4 neighborhood (16 samples) will generally yield smoother results than weighted averaging over a 2x2 neighborhood (4 samples), except in special cases.

Photoshop has three different bicubic interpolators, and I would guess that Catmull-Rom is one of them. B-spline is probably another, and I have no idea what the third is. (They may even be using BC-splines, which are a generalization of many popular splines, with hand-picked assignments of B and C.) The thing is, which kernel is used for a “bicubic” interpolator varies from photo package to photo package, which is what really kills the rule of thumb.
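The BC-spline family (often called Mitchell-Netravali) makes this concrete: one formula yields very different kernels depending on B and C. Catmull-Rom is B=0, C=1/2; the cubic B-spline is B=1, C=0. A sketch:

```python
def bc_spline(x, B, C):
    """Mitchell-Netravali BC-spline kernel, supported on [-2, 2]."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6
    return 0.0

catmull_rom = lambda x: bc_spline(x, 0.0, 0.5)  # sharp, interpolating
b_spline = lambda x: bc_spline(x, 1.0, 0.0)     # soft, smoothing

# Catmull-Rom passes through the samples (1 at 0, 0 at +/-1), so it reads as
# sharp; the B-spline averages neighbors even at integer offsets, so it blurs.
print(catmull_rom(0), catmull_rom(1), b_spline(0), b_spline(1))
```

Two "bicubic" interpolators built on these kernels give visibly different results, which is exactly why the bicubic-equals-sharp rule of thumb breaks down across photo packages.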

I’ll dig around for a page that demonstrates the difference between Catmull-Rom bicubic interpolation and B-spline bicubic interpolation. If I can’t find one, maybe I’ll make one.

lord trousers, take a look at the images in this post-- it’s the first link in the above entry:

There is absolutely a sharpening effect for bicubic, and a smoothing effect for bilinear.

The real question I have is: why don’t Firefox and IE implement the better resizing algorithms? It’s sad that client-resized images on the web look terrible, even though they really don’t have to.

I agree with lord trousers - after resizing images up and down for nearly a decade, bicubic is usually best for up and bilinear for down, whether it’s Paint Shop Pro, Photoshop, or coding in .NET.

To answer Ivy Mike, I’m guessing the browsers go for speed.

I once read that expanding an image, using photoshop, but only increasing it by 10% at a time produces dramatically better results, and indeed it does.

In fact, I just tried it on the reference image, and I believe it does produce better results than either method listed above. And that is actually 500%, with a size of 2589 x 2589. You can grab the action I used here; just keep tapping F10 till you get to the desired size.

I have used this to blow up legal size documents to poster size for print, and it has always worked like a charm.
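The 10%-at-a-time trick is just a geometric schedule of intermediate sizes, with the editor doing a bicubic resize at each step. A sketch of the schedule (the function name and rounding choices are my own):

```python
def stepwise_sizes(start, target, step=1.10):
    """Widths visited when enlarging roughly 10% at a time up to target."""
    sizes = [start]
    w = start
    while round(w * step) < target:
        w = round(w * step)
        sizes.append(w)
    sizes.append(target)  # final step lands exactly on the target size
    return sizes

print(stepwise_sizes(518, 2589))  # roughly the 500% example above
```

The intuition usually offered for why this helps is that each small step keeps the interpolation weights close to the original pixel grid, so errors accumulate more gracefully than in one big jump.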

I guess you forgot to mention scale2x and hq3x for pixel art, these are the generally best accepted algorithms for upscaling pixel art imagery, as they don’t suffer from blurriness.
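Scale2x is small enough to sketch in full. A minimal pure-Python version, with edge pixels falling back to the center value; it expands each pixel into a 2x2 block, bending the block toward matching diagonal neighbors instead of blurring:

```python
def scale2x(img):
    """Scale2x (AdvMAME2x): 2x pixel-art upscale that preserves hard edges."""
    h, w = len(img), len(img[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            E = img[y][x]
            B = img[y - 1][x] if y > 0 else E      # above
            D = img[y][x - 1] if x > 0 else E      # left
            F = img[y][x + 1] if x < w - 1 else E  # right
            H = img[y + 1][x] if y < h - 1 else E  # below
            out[2*y][2*x]         = D if (D == B and B != F and D != H) else E
            out[2*y][2*x + 1]     = F if (B == F and B != D and F != H) else E
            out[2*y + 1][2*x]     = D if (D == H and D != B and H != F) else E
            out[2*y + 1][2*x + 1] = F if (H == F and D != H and B != F) else E
    return out

big = scale2x([[5, 5], [5, 5]])  # flat areas degrade to plain pixel doubling
print(len(big), len(big[0]))
```

Since only exact color matches trigger the edge rules, it works best on paletted sprites with few colors, which matches the comment above about cartoon-like art.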

For any other high-color image, it could be worth mentioning Lanczos interpolation, a great resampling algorithm that clearly surpasses bicubic for any given image.

To be nitpicky, it’s not always true that “Reducing images is a completely safe and rational operation”. When you are working at icon sizes, photographic images almost never work, and pixel art gets mangled slightly beyond recognition. It’s that pixel-grid issue from the ClearType discussions, again.