Personally, I'm pretty darn happy that SQL Server ships with defaults that favor data integrity rather than hoping programmers know what to do. Maybe you'd prefer it the other way, but people would scream bloody murder if it were otherwise.
Automatic retries... you really don't want that, do you? Retry it yourself. If the system software retries for you, it almost certainly won't make the correct choice. Besides, should every layer between your app and the database retry? If n classes each wait rand(m) seconds between p retries, you press a button and the app goes away for five minutes, because a failure takes on the order of n * m * p seconds to get back to you. No thanks. Fail fast and let the application retry, if it can, at the appropriate level. As long as you have concurrency-detection mechanisms in place, there's nothing wrong with retrying the write.
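To make that concrete, here's a minimal sketch of retrying at exactly one level, with a bounded attempt count and jittered backoff. All the names (`retry`, `flaky_write`, the parameters) are invented for illustration; the point is that only this layer retries, and everything below it fails fast.

```python
import random
import time

def retry(operation, attempts=3, max_backoff=0.05):
    """Run operation(), retrying on failure with jittered backoff.

    Retries happen at this one level only; when attempts run out,
    the failure is surfaced to the caller instead of being swallowed.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: fail fast to the caller
            time.sleep(random.uniform(0, max_backoff))  # rand(m) between tries

# A write that fails twice, then succeeds (simulating a deadlock victim).
attempts_made = []
def flaky_write():
    attempts_made.append(1)
    if len(attempts_made) < 3:
        raise RuntimeError("chosen as deadlock victim")
    return "committed"

result = retry(flaky_write)
```

Because the bound is explicit, the worst-case delay is known at this layer instead of compounding across n layers.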
From an application standpoint, read-committed locks are only necessary if you plan to update the database using that data, because you must first see a consistent snapshot of the data you need to change, and then change it. If you're reading just to grab replies to a post, you should be able to use NOLOCK. Just make sure your code can handle the case where the stored count doesn't equal the number of rows actually returned.
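On the application side, handling that mismatch can be as simple as trusting the rows you actually fetched over the stored counter. This is a hypothetical helper (the names are made up); the T-SQL side would be the `WITH (NOLOCK)` hint on the SELECT.

```python
def display_reply_count(stored_count, replies):
    """Prefer the rows we actually fetched over a possibly-stale counter.

    Under a dirty read (NOLOCK), the stored count and the fetched rows
    can disagree; the fetched rows are what the user will actually see,
    so they win.
    """
    return len(replies) if len(replies) != stored_count else stored_count

shown = display_reply_count(5, ["first reply", "second reply", "third reply"])
```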
Also, to avoid locking, you could read from views, and you can partition the hot data out of the table (i.e., statistics, counts, etc.) so you're not locking pages on a heavily accessed table just to update a count that is, for most purposes, probably useless.
But is this a premature optimization? Is storing the count meant to save you time, while in effect it's causing the deadlocks and slowing you down? The statistics are pretty simple, so I have to ask: why aren't you using a view? That's what they're meant for; let SQL Server work it out.
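For instance, deriving the reply count through a view instead of maintaining a counter column. This sketch runs against in-memory SQLite purely so it's self-contained; the `CREATE VIEW` statement itself is plain SQL that SQL Server accepts too, and the table and view names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts   (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE replies (id INTEGER PRIMARY KEY, post_id INTEGER);

    -- The count is computed on demand, so inserting a reply never
    -- contends with an UPDATE to a counter column on the posts table.
    CREATE VIEW post_stats AS
        SELECT p.id AS post_id, COUNT(r.id) AS reply_count
        FROM posts p
        LEFT JOIN replies r ON r.post_id = p.id
        GROUP BY p.id;
""")
conn.execute("INSERT INTO posts (id, title) VALUES (1, 'hello')")
conn.executemany("INSERT INTO replies (post_id) VALUES (?)", [(1,), (1,), (1,)])
conn.commit()

reply_count = conn.execute(
    "SELECT reply_count FROM post_stats WHERE post_id = 1"
).fetchone()[0]
```

The count can never drift out of sync, because there is no second copy of it to keep in sync.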
Sounds like you also need some application-level caching. While it's good to optimize the lower layers, you shouldn't let every request hit the database without reason. Even a light cache is better than nothing. For example, if you're getting 1,000 hits a minute and the data isn't expected to change in that window, even a 30- or 60-second cache lifetime will save you roughly 500 to 1,000 round trips to the database per cache period. That's something to think about.
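A light cache really can be this small. This is an illustrative sketch, not production code (the class and names are invented); the clock is injectable so the expiry behavior is easy to demonstrate without real waiting.

```python
import time

class TTLCache:
    """A minimal time-to-live cache: one entry per key, fixed lifetime."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, expiry_time)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        if entry is not None and self.clock() < entry[1]:
            return entry[0]  # fresh: no database round trip
        value = loader()  # stale or missing: one round trip, then cache it
        self._store[key] = (value, self.clock() + self.ttl)
        return value

# Fake clock so the example doesn't need to actually sleep.
now = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])

db_calls = []
def load_replies():
    db_calls.append(1)  # stands in for a database round trip
    return ["first reply", "second reply"]

cache.get_or_load("post:1", load_replies)  # miss: hits the "database"
cache.get_or_load("post:1", load_replies)  # hit: served from cache
now[0] = 31.0                              # advance past the 30 s TTL
cache.get_or_load("post:1", load_replies)  # expired: hits the "database" again
```

Every hit served from the cache inside the TTL window is one round trip the database never sees.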
Writing software is easy. Writing good software isn't.