Font Rendering: Respecting The Pixel Grid

Actually I like my Mac’s text rendering far better than that of any PC I’m using.
You make the whole issue sound much more dramatic than it is, I think.

Apple’s font rendering doesn’t scale, and that’s why I am quite happy in saying that Microsoft has got a design decision right, for once. If an OS has to remove a styling under a certain pixel size, then it just can’t be the right way to do things. I’m still keeping my brand new MacBook Pro, though.

Apple’s Viewpoint: Garbage in, Garbage out
Microsoft’s Viewpoint: Garbage in, (arbitrary deterministic magic on behalf of Microsoft), Something nice

LOL, it’s funny because it’s true…

You can see my samples at http://zajac.ca/fonts/

This is very cool, and I agree with your conclusions on that page. I don’t think a greyscale-only approach can truly compete with the 3x effective increase in horizontal resolution that RGB subpixel approaches deliver.
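To make that 3x figure concrete, here’s a toy sketch (my own illustration, not any vendor’s actual code): on an RGB-striped LCD, each pixel’s red, green, and blue stripes sit side by side, so glyph coverage sampled at three times the horizontal resolution can be mapped one sample per stripe.

```python
# Toy model of RGB subpixel rendering: coverage is sampled at 3x
# horizontal resolution (0.0 = background, 1.0 = ink), and each
# consecutive triple of samples becomes one pixel's (R, G, B) values.

def subpixel_pixels(coverage):
    """Map 3x-supersampled coverage to per-pixel RGB intensities.

    coverage: list of floats whose length is a multiple of 3.
    Returns a list of (r, g, b) tuples; 1.0 means that stripe is
    fully covered by ink.
    """
    assert len(coverage) % 3 == 0
    return [tuple(coverage[i:i + 3]) for i in range(0, len(coverage), 3)]

# A stem only 1/3 of a pixel wide lands on a single stripe instead
# of smearing a whole pixel into grey:
stem = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(subpixel_pixels(stem))  # [(0.0, 1.0, 0.0), (0.0, 0.0, 0.0)]
```

Real implementations like ClearType additionally filter across neighboring stripes to suppress color fringing, which this sketch deliberately omits.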

Not having actually used Safari on Windows yet, I can’t comment on how the Apple rendering works on Windows compared to my impression of it on the Mac.
While I can understand differences in taste and approach, there is one area in which Windows has not made an effort at all, and that is double-byte font rendering. I work in both English and Japanese on both Windows and Mac environments. The Japanese on Windows looks like something out of dot-matrix days–almost unbearably bad. Turning ClearType on only makes the western language fonts look better. It does nothing for Japanese (or other double-byte languages). I was actually looking forward to Vista in the hope that they would do something about that, but from all the Japanese Vista PCs I’ve seen so far, it appears they still don’t care. How hard can it possibly be? The underlying font technology is the same for both western and double-byte fonts. So now (and for the foreseeable future) I will continue whenever possible to do all of my Japanese computing tasks on my Mac, which renders the same with all languages and fonts and looks very nice, in my opinion.

Please stop ignoring the fact that the iPhone has a 160dpi screen, multiple levels of zoom on webpages in Safari (and maybe in other apps, like Mail), and will be in real customers’ hands in two weeks.

I use both a PC and a Mac, and I have to say I definitely prefer the font rendering on Windows over OS X, because it is SO much easier to read…

I’ll give one example… A friend of mine was working on a script on a BSD box from his Mac, and literally spent hours and hours trying to figure out why it wasn’t working. He asked me to look it over, and I found the problem in under a minute… It turned out that he had typed a - instead of a . in a domain name and on his Mac the two looked virtually identical – both gray blobs, so he couldn’t find the problem. From my PC I spotted the problem a mile away.

Not only that, but when you work in small fonts like I do (to maximize screen space) things become pretty much illegible below about 9 pt on OS X because it ends up as a garbled gray mess, but you can go to 7 pt (or sometimes lower) on Windows and still have readable text.

I agree that in software like page layout programs the Mac approach may be better, but how many of us are actually doing that? Shouldn’t readability be the primary goal for 99%+ of the population?

He asked me to look it over, and I found the problem in under a minute… It turned out that he had typed a - instead of a .

@Doug
At my previous job, a year or two ago, I was working (not willingly, though) in an OS X environment… and you have no idea how many hours I spent trying to figure out a problem, just to find out that it was a typo, similar to the one your mate had. It was hell on the eyes too… changing IDEs (native or Java-based) didn’t help, while turning on antialiasing would eat up the memory, which eventually would slow down the system… increasing the font size didn’t help either, as it made me pay more attention to the scrollbar than to the work.

nightmare…

@wingman:
if I am not mistaken, a 17" screen would give you a resolution of 1280x1024 (which is what I use), while it is a 15" that would give you the resolution you spoke of, 1024x768.

unless you’re using a widescreen monitor, or a higher resolution monitor. Personally, I thought my 17" widescreen was 1280x800, but it turns out that it’s 1440x900 (16:10 instead of 16:9). I’ve only been using widescreen monitors for a short time so I’m not as familiar with the resolutions, whereas 1024x768 and 1152x864 are resolutions I’ve used for years on 4:3 CRTs (and 1600x1200 for the last 5 years or so).

Unfortunately, none of it really matters all that much. I can’t run Safari on the system that has the CRT on it for a more direct comparison (XP and Vista only, supposedly), and the program is only useful for browsing the built-in bookmarks on my Vista machine with the LCD screen. Maybe I’ll try again when the program comes out of beta, although I still have issues with the use of the brushed metal interface on Vista. Of course, I also still think that a browser’s font rendering should be optimized for reading on the screen, rather than reading on a page I’ll never print.

Yikes! I’m still on a single 17" 1024x768 monitor, though my laptop screen is significantly better. :slight_smile:

Actually, Steve Jobs knows enough typography to know that we do not read letter-by-letter but rather one word-pattern at a time.

Keeping the typography correct is not a matter of “pleasing the (font) designer” - it is a matter of readability. Unless you do actually spell your way through letter-by-letter when you read something.

Since there are clearly different opinions on this ClearType/grid and Apple/typeface issue, Microsoft and Apple should offer a setting the user can change instead of just forcing it to look how they want it to, so the customer can decide.

The problem with designing a system that does two things at once is that it will suck at doing at least one of the things. Especially when that thing is font rendering, it seems.

The issues are not just limited to computer typography. They show up in all kinds of visual media where imaging technology is used. But let’s start with an instance where non-computer media and typography collide…

A long time ago, a company that licenses and sells subtitled videos used to boast that their English-subtitled Japanese videos were better than their competitor’s English-subtitled Japanese videos because they used italicized antialiased fonts that were optimized for on-screen readability on a standard TV set (and painted in a color called “optical yellow”). As strange as it is for a company to be trying to sell subtitling (as opposed to, say, the artistic qualities of the actual video), they were actually quite right: for quite some time they really were much easier to read than other subtitled videos.

Unfortunately, the company with the obscure but real technical advantage lost it when DVD replaced VHS and laserdisc. The custom subtitle overlays generated for the analog media failed to translate well to DVD’s limited pixel depth for subtitles, and this combined with the loss of the smooth horizontal filtering from the analog processes in the video tapes turned the subtitles into jagged, blocky, garishly colored horrors.

If you watch a lot of subtitled movies, the technical quality of the subtitles can be important. Movies imported from Hong Kong are infamous for exasperated English-speaking audiences calling out “Speak darker!” at showings where the plain white text just disappears into some white object in the lower foreground during a critical piece of dialogue. If neither you nor your date speak Cantonese, you’ll just have to guess what was said.

The VHS-then-DVD subtitlers designed for a system that had one pixel grid, then failed to redesign when the pixel grid changed (not just resolution, but interaction rules between adjacent pixels too). The Hong Kong subtitlers didn’t even bother dealing with trivial and obvious signal saturation issues, let alone pixel alignment.

Movies are routinely edited for their delivery media, because all delivery media have different properties. The one technique we’ve probably all seen is “pan and scan”, where about 1/3 of the image is simply sliced away to fit on a TV screen. Sometimes separate 5.1 DTS and stereo audio tracks are provided, instead of simply remixing the 5.1-channel to 2-channel in the DVD player. Entire scenes are sometimes reshot for VHS (IIRC there’s a night scene in one of the Starship Troopers movies which is dark blue on DVD, but was reshot in green and on a different set for the VHS version). Most of the work in DVD mastering involves stretching color response curves, performing spatial filtering, or tweaking more obscure MPEG encoder parameters because one scene or another is chewed up and spit out by the defaults. TV networks routinely use different graphics on analog carriers vs. digital ones.

So (getting back to the point now :wink:) it seems obvious to me that we should expect all kinds of nasty artifacts from trying to mix media and intent in arbitrary ways. There’s just no way to solve this problem by technical means alone (we could eliminate all the other font rendering systems, and all the other fonts, and standardize on one true display DPI, but those are political and economic problems, not technical ones).

The problem won’t go away with 300DPI displays either, at least not until everyone with a lower resolution display goes away. 75DPI web pages will look like crap on 300DPI web browsers, and 300DPI web pages will look like crap on 75DPI web browsers.

In Ubuntu you can change to whatever anti-aliasing style you want. It’s quite handy. Want a OS X look? Just use less hinting. Want a XP look? Just use more hinting.
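For reference, those Ubuntu preferences ultimately map to fontconfig options, so the same choice can be made in a per-user config file (standard fontconfig syntax; the particular values below are just one plausible combination, with `hintnone` approximating the OS X look and `hintfull` the classic XP/ClearType look):

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<!-- e.g. ~/.fonts.conf -->
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
    <edit name="hinting"   mode="assign"><bool>true</bool></edit>
    <!-- hintnone | hintslight | hintmedium | hintfull -->
    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
    <!-- subpixel order: rgb | bgr | vrgb | vbgr | none -->
    <edit name="rgba"      mode="assign"><const>rgb</const></edit>
  </match>
</fontconfig>
```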

I found that just changing the font in Safari’s options (to Calibri) fixed most of the problems I had with reading text in Safari.

Just a follow-up to my earlier comments on the lack of Japanese font smoothing for Windows.

I installed Safari on Windows and was pleased to see that the font rendering in that browser carries through to all languages. Japanese looks just as good as it does on the Mac. It’s too bad that Windows doesn’t want to bother doing it itself. But I think the reason they don’t is because of the adjustments to the pixel grid that have been discussed so much here. Japanese (and Chinese, of course) characters are often very complex. Adjusting lines to fit a pixel grid could potentially make a character look like an illegible blob–a crisp blob, but nonetheless a blob. The dot-matrix style they continue to use today (I wish I could post screenshots here to show you) actually cheats in the way it displays the characters, oftentimes leaving out less critical lines at lower point sizes.

Although I can’t side completely for either method with regards to western text, for double-byte languages Apple’s way of doing things is hands down the best (if for no other reason than because Microsoft doesn’t do anything at all). I can’t tell you how many times Japanese natives I work with who use only Windows come over to my desk and express near awe at how good a page of Japanese text looks on my Mac (the majority of whom have since bought Macs for personal use).

While I probably won’t use Safari on Windows for my main browser until its official release, I’ll definitely be using it for Japanese web browsing on that platform.

Why is this even a question, and not simply an option?

I can’t say more than that, because it honestly seems far too simple a concept to warrant all the brain power being spent debating this.

Way up near the start of this thread someone was commenting about Vista’s new scaling algorithm for monitors that are not 96dpi.

The reason that some apps look nasty is that Vista requires apps to explicitly declare that they are resolution-independent. If they don’t, they are given a bitmap display surface at 96dpi, and then that surface is scaled appropriately to make it fit in on an alternative-dpi display. Because this scaling is done during compositing, any text the app rendered will have been rasterized already at 96dpi, and so if ClearType is enabled it’s liable to look terrible. Microsoft really should have disabled ClearType for apps where this scaling is being applied, but whatever.

This is also why this only works if you have the fancy graphical effects turned on: the scaling is done by the compositor in hardware.
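For anyone curious how that declaration is actually made: on Vista an application opts out of the compositor’s bitmap scaling either by calling `SetProcessDPIAware()` at startup or, preferably, by marking itself DPI-aware in its application manifest. A manifest fragment along these lines does it (this is the standard Vista `dpiAware` element, not something specific to any one app):

```xml
<asmv3:application xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <asmv3:windowsSettings
      xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
    <!-- "true" promises the app will render at the real display DPI,
         so the DWM will not stretch its 96dpi bitmap -->
    <dpiAware>true</dpiAware>
  </asmv3:windowsSettings>
</asmv3:application>
```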

When I just started using Safari, I wholeheartedly agreed with your earlier observation, but after a few days I have to admit I have defected, especially after seeing Safari’s font rendering with fonts other than Arial and, to some extent, Verdana. Take Georgia: go to Wired.com and click on any article. Not only is Safari’s rendering superior for larger fonts (IE7 has a lot of jagged edges), but even at smaller sizes (the article text font) Safari is a delight to read, while IE7 just elongates the characters vertically a little bit, and the RGB noise is also visible to some extent. Or take Trebuchet, or any other font. I have to admit the consistency of Apple’s font rendering technology has put me in their camp. (Come to think of it, the copyright message on your blog is shouting RGB noise in IE7, but looks pretty clean and more readable in Safari.)

I much prefer OSX’s rendering, but I can see the Windows argument. I think it comes down to design purity vs. design practicality - with strong arguments for both sides. Take yer pick folks.

I personally think that one of the absolutely strongest reasons to use OS X instead of Windows is that it has WAY more beautiful typography (that’s what this is about). Especially on the web, this makes a lot of difference. I’ve always been fascinated by how anyone using Windows can do serious web design: after all, text is very central to web design, so designing for the web on Windows must be like designing with a blindfold. It’s a bit like trying to write a cook book when you only have foul ingredients to try the recipes with.

I can see why you would want to respect the grid more at small font sizes*, but at any larger size, it’s just silly and is almost disrespectful to both the font designer and the user. Legibility is not only about making the letters clear and distinct with high contrast; it is also (more, even) about making it easy to perceive the nuances in their shapes, which, in a well designed font, create words instead of just a sequence of letters. ClearType changes and ruins the shapes with cold mathematical algorithms; OS X preserves them pretty much the way they’re supposed to be. About 99% of the work any font designer has put into their font will be completely in vain when the font is used on Windows.

So, I actually expected Windows users to be happy to finally get a browser with more beautiful typography. But I’m not really expecting a Windows user to even notice that it actually does look better. :wink:

* This only applies to a certain range of font sizes where the lines of the letters are about 1 px wide. There, you want to make sure that you don’t get 2 px of grey blur instead of 1 px of black (as demonstrated by your example), but both above AND below this range, proper AA not respecting the grid is much better for legibility.
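That 1-px-black vs. 2-px-grey tradeoff is easy to demonstrate with a toy rasterizer (my own illustration, not either vendor’s algorithm): box-filter a stem exactly one pixel wide onto the pixel grid and see how its alignment changes the output.

```python
# Toy greyscale rasterizer: compute how much of each pixel a vertical
# stem spanning [left, left + 1) covers. Pixels are the unit intervals
# [0,1), [1,2), ... along the x axis.

def rasterize_stem(left, pixels=4):
    """Coverage (0.0..1.0) of each pixel by a 1-unit-wide stem."""
    right = left + 1.0
    return [max(0.0, min(i + 1, right) - max(i, left)) for i in range(pixels)]

print(rasterize_stem(1.0))  # [0.0, 1.0, 0.0, 0.0]  grid-aligned: 1 px of solid black
print(rasterize_stem(1.5))  # [0.0, 0.5, 0.5, 0.0]  misaligned: 2 px of 50% grey
```

Hinting (the Windows approach) effectively nudges `left` to the nearest integer before rasterizing; the OS X approach leaves `left` where the font designer put it and accepts the grey.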