@codinghorror you should also check out the freeware app IrfanView (http://www.irfanview.com) and its plugins, which include pngout and others. I have been using it for nearly 10 years and it really crunches PNG images down by a lot!
You could try a tool like ImageOptim (https://imageoptim.com), which tries a bunch of compression tricks, including Zopfli, to see what gets the best results. It will even try sequences of different compressors. For most of the images in sites I build, it usually settles on PNGout + Zopfli.
Second the motion on ImageOptim, an excellent tool. For your default avatar images, you can also pngquant them to save more, because you’ve basically got two shades plus the intermediates for anti-aliasing. You can go to 16 colors with no problems, and nobody is going to notice if you go to 8 – you can barely see the difference when A/B comparing zoomed versions.
The pngquant 8 + ImageOptim version of your A avatar goes from 1542 to 1174 bytes.
One nice thing about pngquant is that it has a quality=100 setting that won’t change the image unless it can reduce the color space losslessly. So you can insert it into the chain and occasionally get a nice win.
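pngquant itself does the real work, but the quality=100 semantics are easy to sketch: attempt the palette reduction, and keep the result only if it round-trips losslessly. This is a toy illustration (the function name and list-of-pixels representation are made up, not pngquant's actual code):

```python
# Toy sketch of pngquant's quality=100 behaviour: reduce to an
# indexed palette only when the reduction is exactly lossless.
def lossless_quantize(pixels, max_colors=256):
    palette = sorted(set(pixels))
    if len(palette) > max_colors:
        return None  # reduction would be lossy; leave the image alone
    index = {color: i for i, color in enumerate(palette)}
    indexed = [index[p] for p in pixels]
    # The round trip must reproduce the input exactly.
    assert [palette[i] for i in indexed] == list(pixels)
    return palette, indexed

# A two-tone avatar with a few anti-aliasing shades fits easily:
avatar = [0, 32, 64, 255, 255, 64, 32, 0]
print(lossless_quantize(avatar, max_colors=8) is not None)      # True
# A full-range image does not, so it passes through untouched:
print(lossless_quantize(list(range(500)), max_colors=256) is None)  # True
```

That "do nothing unless it's free" property is what makes it safe to leave in the chain permanently.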
For example, eva2000’s avatar goes from 2958 to 2277 bytes using ImageOptim (PNGOUT + Zopfli), but to 1610 bytes with pngquant quality=100 run first.
pngnq-s9 is a modified version of pngnq, the neural network colour quantizer for png images.
Like pngnq, pngnq-s9 takes a full 32 bit RGBA png image, selects a palette of up to 256 colours, and then redraws the image in 8 bit indexed mode. The resulting image can be up to 70% smaller than the original.
pngnq-s9 adds several new options to pngnq including the ability to augment a user-supplied palette, the ability to quantize in the YUV colour space, and the ability to give more or less weight to specific colour components when quantizing. The program also includes a few bug fixes relative to the most recent version of pngnq.
Chances are the font will already be cached on the client. The CSS and markup portion of the above code amounts to a little over 300 bytes.
The image is currently sent at 240x240px but scaled down to 128x128 in the client. If there are scenarios in which the full 240px version is rendered, there’s an advantage to having only a single size for the resource, but sending it larger than anyone will ever see is a waste of bytes.
The problem with HTML/CSS and SVG avatars is that they completely fail in email and a bunch of other places where a PNG image works perfectly… they are also a hellscape of crazy, tweaky per-browser font alignment issues. You can see the discussion at
As for further reducing the color depth of the avatars, in my testing with ImageMagick, 128 colors worked best:
There was (almost) no difference in file size between 16, 32, and 64 colors. And even 64 colors isn’t enough gradations. It won’t cause dithering per se, but reducing to 64 colors (with virtually no file size savings) produces a worse avatar letter, since the edge gradations are strongly affected.
Reducing to 8 colors does bump the file size down a fair bit, but that’s an extreme. You would absolutely notice having only 6 gradations in color between the letter (one color) and the background (another color).
That’s why 128 colors was the sweet spot: big file size savings with zero impact on image quality.
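The trade-off is easy to reproduce with a toy experiment: quantize a smooth gradient (a stand-in for an anti-aliased letter edge, not the actual avatar) to n levels and see what DEFLATE does with it. This uses Python's zlib rather than a real PNG encoder, so the absolute numbers won't match the avatar tests, only the trend:

```python
import zlib

def quantize(data: bytes, levels: int) -> bytes:
    """Crush 8-bit values down to `levels` evenly spaced values."""
    step = 256 // levels
    return bytes((b // step) * step for b in data)

# A smooth 0..255 gradient row repeated 64 times, standing in for
# the anti-aliased edges of an avatar letter.
img = bytes(range(256)) * 64

sizes = {n: len(zlib.compress(quantize(img, n), 9)) for n in (8, 64, 128, 256)}
print(sizes)  # fewer gradations -> more repetition -> smaller DEFLATE output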
We do generate multiple resolutions for any given avatar, @Chris_JL, although given the prevalence of retina and other high-resolution devices, sometimes it’s better to use the higher-resolution image.
For some more fun, try my tool Precomp. Using it without additional parameters compresses the PBF image to 533,052 bytes. No, the resulting file is not a viewable PNG, but using “precomp -r” you get the original file back! Not only is the image content compressed losslessly, it also stores the information needed to restore the original file - now that’s lossless.
Essentially, the result file is a bzip2-compressed version of the image content together with additional information to restore the original compression. You can also use “precomp -cn” for a decompressed-only version you can feed to your favorite compression program, e.g. if you prefer 7-Zip. With compressors from the PAQ family like ZPAQ, the PBF image can be compressed down to 440 KB.
This works for many other filetypes that contain deflate streams like PDF and ZIP or even some Linux distribution images, and also handles GIF and JPG files using specialized routines.
It has the same catch you mentioned in your post - slow compression, fast decompression. Decompression speed is not the same as for the original PNG, but still very fast. But as a browser add-on or built-in, it could save even more bandwidth than Zopfli does.
There hasn’t been much progress on the project since 2012, but I’ll make it open source soon, and there is even an open source alternative on GitHub called antiz.
A good reminder is that DEFLATE streams tend to crop up all over the place - a common one is the venerable .zip file. Take the SysinternalsSuite.zip as an example: 15,160,701 bytes served directly from Microsoft. Taking about 3 minutes to recompress it with the excellent advzip utility reduces it to 14,597,826 bytes - more than 500K in savings!
Also worth noting: Zopfli lets you specify the “strength,” i.e. the number of iterations used in its search. Time tends to scale pretty linearly with iterations, which isn’t a good thing when you’re already talking minutes for basic compression. For many files, extra iterations don’t actually increase compression at all, or only by a very few bytes. Still something to consider for the hard-core nerd who doesn’t mind letting a CPU burn all night long for the ultimate in broadly compatible compression.
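Zopfli itself isn't in the Python stdlib, but the same diminishing-returns curve shows up across zlib's nine levels, which makes for a cheap illustration (the XML-ish test data is invented):

```python
import zlib

# Something mildly compressible, so the levels have work to do.
data = b"".join(b"<item id='%d'>value %d</item>\n" % (i, i * 7) for i in range(5000))

sizes = [len(zlib.compress(data, level)) for level in range(1, 10)]
print(sizes)
# The first few levels buy big savings; the last few buy almost nothing.
# Cranking up Zopfli iterations extends the same flat tail, just much slower.
```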
Just out of curiosity: have you tried JPEG2000 compression? I know, a PNG-to-JPEG2000 step is kind of a canonically inelegant way of doing things, but the results might be interesting. For one thing, it would give a benchmark for how well you can do with image compression on your test image. And lossy JPEG2000 compression works very well too.
I know that. As a matter of fact, in JPEG2000 lossy compression is a floating-point algorithm and lossless compression is an integer algorithm, so they are fundamentally different. I’m just saying that lossy compression in JPEG2000 works well and generally avoids those nasty artifacts.
The point is that with lossless compression there are zero artifacts.
It is true that reducing color depth with PNG is a very brute-force method of lossy compression; however, the only cases where I recommend color depth reduction are those where the image can be accurately represented with only (n) colors, as shown in the monochrome avatars example in my blog post.