It’s as if one of the dining philosophers happened to glance over at another philosopher’s plate, and the other philosopher, seeing this, screamed “meal viewing deadlock!” and quickly covered his plate with his hands.
Love this! :slight_smile:

Although I’m only an amateur DBA (and an ex-amateur-DBA at that!), I’ve always been under the impression that databases suffered more from the Readers and Writers problem. This would seem to fit your starvation problem better than the Dining Philosophers’ Problem, Jeff, as the large number of reads you’re doing are locking out the write; it also mirrors ratboy666’s comment.


SQL server provides lots of opportunities for deadlocks. Whenever I encounter this scenario, I look at my Update/Insert/Delete statements and I add the WITH(ROWLOCK,UPDLOCK) query hints.
This tells SQL Server to first locate the rows to be updated (UPDLOCK) and then lock only the affected rows (ROWLOCK) rather than the entire page(s) that contain them.

Your new statement would look like the following:

UPDATE [Posts] WITH (ROWLOCK, UPDLOCK)
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0

It has worked for me. Good luck!


If you ever stuck with any big problem, you can always ask the community for hints and suggestions.


Too bad you don’t use Java:



This is an excellent post. Thanks a lot. I am grateful to you.


Would have been nice to see how many users you have hammering away to get this error. I’ve never seen this error from MSSQL or MySQL, and I’ve used them a lot in both read and write heavy environments.


The biggest problem I have seen in this scenario is when a statement causes lock escalation in the database. If your first query runs a table scan rather than an index lookup, it is conceivable that SQL Server will give up locking the row or page and lock the whole table instead. If this happens on your UPDATE statement, then all your subsequent transactions will be blocked until the update finishes, and you end up with the deadlock problems you are seeing. If it happens on a SELECT statement, all your UPDATE transactions will be blocked.

This problem is compounded if your data is very large and one row spans multiple data pages.

So before setting read committed snapshot, I would first check that the queries are using index lookups, and then check whether the data size of a row could be reduced.

Do you really need every column in the [posts] table?
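To act on the comment above, you first need to see whether the statement is actually scanning. A minimal sketch using SQL Server’s plan-display mode (the parameter names @p0–@p3 are the placeholders quoted elsewhere in this thread, not declared variables, so you would substitute real values when trying this):

```sql
-- Show the estimated plan instead of executing the statement.
SET SHOWPLAN_TEXT ON;
GO
UPDATE [Posts]
SET [AnswerCount] = 1, [LastActivityDate] = GETDATE(), [LastActivityUserId] = 1
WHERE [Id] = 42;
GO
SET SHOWPLAN_TEXT OFF;
GO
-- In the output, look for "Clustered Index Seek" on [Posts];
-- a "Table Scan" or "Index Scan" here is what invites lock escalation.
```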


I don’t know if I’m too much of a purist, but I avoid NOLOCK like the plague, even though it’s faster, because you could show wrong information to your users, and that could be really bad, for example, if you are dealing with money. I prefer running the query, copying the values into local variables, killing the recordset, and then consuming the variables in my next query.
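A rough sketch of the pattern this commenter describes, using the post’s [Posts] columns (the variable names are illustrative):

```sql
-- Read once into local variables, so no recordset stays open
-- holding shared locks on [Posts].
DECLARE @AnswerCount int,
        @LastActivityDate datetime;

SELECT @AnswerCount = [AnswerCount],
       @LastActivityDate = [LastActivityDate]
FROM [Posts]
WHERE [Id] = 42;

-- Subsequent statements consume @AnswerCount / @LastActivityDate
-- instead of re-touching the table.
```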


Interesting article. And while I agree that dirty reads can fit the bill, I think the point is that one makes an informed decision on whether a dirty read is OK. Sure, most website applications are mostly reads with few writes. No one cares if their profile is a few seconds out of date, or if they aren’t seeing Bob’s post from 1 ms after they submitted their own. But if your application, say, tracks an ordering system, it isn’t appropriate to count 15 orders when 3 of them are still completing transactions before they can truly be counted. The default locking mechanism is aimed at preserving data integrity, not minimizing user wait time.

I feel that if it’s important enough to be in a transaction in the first place, it’s important enough to fix the cause of the deadlock and not just duct-tape a dirty read into the query. Having said that, for non-critical data it’s fine; it just makes me worried when I see it. :slight_smile:



If, like most web applications, you are doing mostly reads, and few writes, why use a database at all? Most web apps are just doing the same query over and over again. Just generate and store the content of each page and serve that, regenerate on change. You can use whatever granularity of locking you want that matches how you store the content. I’ve never understood why so many web applications are just front ends to a big pile of the same SQL queries over and over again, it’s never made much sense to me but I guess that’s what everyone else does, so why not do it too?


UPDATE [Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0

I think the update statement is trying to convert a read lock into a write lock. I would look at putting a locking hint on the SELECT that first reads the post in the transaction, making it take out an update lock at that point. Otherwise, do the update in a separate transaction.
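A sketch of what that hint might look like, assuming the SELECT and UPDATE share one transaction (the SELECT column list is illustrative):

```sql
BEGIN TRANSACTION;

-- Take the update lock up front, so the later UPDATE never has to
-- convert a shared lock into an exclusive one (the classic
-- lock-conversion deadlock).
SELECT [AnswerCount], [LastActivityDate]
FROM [Posts] WITH (UPDLOCK)
WHERE [Id] = @p0;

UPDATE [Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0;

COMMIT TRANSACTION;
```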


Seems like a case of bad defaults… I was just reading up on what read committed snapshot is, and it sounds like how things should behave by default.

If we request a row that is being modified by some other transaction, our transaction will wait until the other is finished and fully committed.

Sounds like common sense… queueing database requests is hardly rocket science. :stuck_out_tongue:
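For reference, enabling the setting discussed above is a one-line database option (the database name here is a placeholder; the statement needs exclusive access, so it is typically run when no other connections are active):

```sql
-- Readers see the last committed version of a row instead of
-- blocking on writers.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
```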

Clearly, you should be using MySQL. :slight_smile:


Re: choice of SQL Server
He wants to use LINQ, so he must use SQL Server. LINQ doesn’t talk to anything else at the present time (there’s only one sql-translation provider).

Re: deadlock -
It’s not really about query complexity… it’s about multiple tables involved in a single transaction in a multiple user environment.

Are there any triggers that the update statement activates that reads other tables?

Is the select issued in its own transaction, or are other tables queried in the same transaction?


Someone has to explain to me sometime why, every time a problem is posed, people’s first response is to tell you that you should use a different platform/technology/brand.

Same thing happened in the comments to having issues with Windows and Wordpress, half the responses were use x blog engine instead, or use linux.

Here it’s use x db technology.

And ta for posting this Jeff, I don’t get the responses that expect you to always be an expert, and I don’t think I’d learn as much if you were an expert and didn’t need to pose the problems!


Maybe if you open sourced your webapp, you could gain the benefits of peer code review! :wink:

-Some- bugs are shallow with enough eyes…and maybe development would speed along, and then you can get back to writing more posts to enlighten the masses.

Why not open source at least some of the stackoverflow website?


MS-SQL is not a particularly strong database for online transaction processing, at least not compared to Oracle. In the mid-1990s, Oracle developed a form of multiversion concurrency control that pretty much devastated the competition: it’s sort of like nolock, but safer.

MySQL rocked the world by coming out with the blisteringly fast MyISAM table handler, which traded transaction support for performance. Not having transactions is a real loss, but many people have found MyISAM is reliable enough for what they do. MySQL also supports the InnoDB table handler, which uses MVCC. InnoDB is remarkably fast for ordinary operations and commits, but it can really burn you if you need to rollback a big transaction. I’ve heard of InnoDB wrecks where it took more than a week to recover from a system crash.


Evolv CS upgraded to SS 2005 and told us to add the nolock option to ALL of our custom queries. So maybe this is common practice.


We are writing an app that should work at least with Oracle and SQL Server, therefore using non standard hints like NOLOCK is not an option.
It’s OK to have proprietary extensions, but SQL Server should support the ANSI standards better. It’s incredible that the concatenation operator || is not available. I guess they haven’t changed the Sybase core much, which is unfortunate.
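To illustrate the portability gap this commenter is pointing at (the table and column names are made up for the example):

```sql
-- ANSI SQL string concatenation, accepted by Oracle and others:
SELECT FirstName || ' ' || LastName FROM Users;

-- SQL Server requires its own + operator instead:
SELECT FirstName + ' ' + LastName FROM Users;
```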


Great examples here of why comments need ratings so you can separate the rantings and insults from the answers. I did learn some things from reading the comments about the differences between database designs.

I’ve been writing database-backed applications for 12 years now, and I’ve never hit a deadlock on a read, even in SQL Server 2005. Of course, I generally follow the rules…


You could always try READ UNCOMMITTED.
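That is, the session-level equivalent of adding WITH (NOLOCK) to every query, with the same dirty-read caveats discussed above:

```sql
-- Every SELECT in this session now reads uncommitted data
-- instead of blocking on writers.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT [AnswerCount] FROM [Posts] WHERE [Id] = @p0;
```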