Deadlocked!

SQL Server isn’t acting up here; it’s just doing what LINQ to SQL told it to do. It sounds like READ COMMITTED SNAPSHOT will solve the deadlock issue, but it also sounds like a pretty inefficient workaround, because it makes copies of all data as it’s written.

I’d recommend looking at the LINQ to SQL usage. As I understand it, you’re using a static DataContext for the controller. From what I’ve read (and in my experience), it’s better to create a new DataContext for each data operation. That may sound inefficient, but it sounds a lot more efficient to me than using a database setting that alters the way data is physically stored in order to create multiple snapshots of the data as it’s written.

Too bad that you don’t seem to see why a SELECT takes part in a transaction. A transacted read makes sure you don’t get dirty reads. Your example about out-of-date data isn’t that important, since it’s the same situation you get with caching.

What if you have a rule on a table, like an order column that contains values that must be unique within a set? For example, a table that contains multiple menu lists, where each list item’s position is unique per list. Doing a dirty read on that table could leave multiple items with the same order value, because an update could have changed the data during the read.
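
To make that concrete, here is a minimal T-SQL sketch of the hazard (table and column names are hypothetical): two sessions computing the next position with a dirty read can both see the same maximum and insert duplicates.

```sql
DECLARE @Next int;

-- Dirty read: this can see another session's uncommitted (or soon to be
-- rolled back) rows, and two sessions can both compute the same @Next.
SELECT @Next = MAX(SortOrder) + 1
FROM dbo.MenuItems WITH (NOLOCK)
WHERE MenuListId = 7;

INSERT INTO dbo.MenuItems (MenuListId, SortOrder, Caption)
VALUES (7, @Next, N'New item');
-- Run this concurrently from two connections and both can insert the
-- same SortOrder value, breaking the uniqueness rule described above.
```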

This of course depends very much on the type of data you have, how (de)normalized the data is and how you manage transactions with methods like optimistic concurrency control.

The same applies when you read and then write. For example: sum all order lines, add VAT, and store the total on the order for quick reads. You do want to make sure that no order lines are altered during that read.
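
A rough T-SQL sketch of that pattern (table, column, and VAT rate are all assumptions): hold the read locks until the write commits so the cached total stays consistent. Note that REPEATABLE READ protects the rows already read; guarding against newly inserted lines as well would take SERIALIZABLE.

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

DECLARE @OrderId int, @Total money;
SET @OrderId = 42;

BEGIN TRANSACTION;

-- Shared locks taken here are held until COMMIT, so none of the order
-- lines we summed can be changed underneath us.
SELECT @Total = SUM(LineAmount) * 1.21   -- 21% VAT, rate assumed
FROM dbo.OrderLines
WHERE OrderId = @OrderId;

UPDATE dbo.Orders
SET CachedTotal = @Total
WHERE OrderId = @OrderId;

COMMIT TRANSACTION;
```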

When you know that certain data will never be updated, like logging data, then you can do dirty reads, as those will cause no harm.

When you get deadlocks, you are not using the appropriate locking and/or isolation levels, which is definitely an application design error. Just don’t tell people to simply perform dirty reads; that really makes me lose faith in your blog.

Still… databases provide integrity and features to circumvent/loosen that model if you know what you are doing will not break integrity. If you don’t know what you are doing then don’t play around with the transaction isolation levels. Just increase performance by doing less reads by introducing caching and add versioning to records to perform optimistic concurrency control without sacrificing integrity.
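
For example, a minimal sketch of the versioning approach using SQL Server’s rowversion column (table and column names are hypothetical):

```sql
DECLARE @OrderId int, @NewStatus varchar(20), @RowVer binary(8);
SET @OrderId = 42;
SET @NewStatus = 'Shipped';

-- Read the row along with its current version stamp.
SELECT @RowVer = RowVer
FROM dbo.Orders
WHERE OrderId = @OrderId;

-- The write succeeds only if nobody changed the row in the meantime;
-- no locks are held between the read and the write.
UPDATE dbo.Orders
SET Status = @NewStatus
WHERE OrderId = @OrderId
  AND RowVer = @RowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row changed by another transaction; reload and retry.', 16, 1);
```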

Actually, it’s quite easy to deadlock on Oracle: it happens quite frequently the second you open your wallet to pay for it and realize the price doesn’t include the guru you also need to pay just to set it up decently.

Being real, deadlocks don’t have anything to do with reads. They have to do with the acquisition order of locks, and they can happen in Oracle just as in any other DB.

If two transactions acquire locks on shared resources in different orders, then it becomes possible for each to grab a lock on a resource the other needs to finish, and both processes become deadlocked. That Oracle doesn’t lock reads simply reduces the likelihood of this happening. That it does MVCC by default also means the write locks are extremely short: just long enough to change the pointer from the old version to the new version, rather than locking during the write itself.

To commit a write, they still have to ultimately acquire a write lock on the current row, long enough to swap in the new version, and hold it until all other locks needed for the transaction to commit are successfully acquired. Multi-version concurrency control allows multiple writers as well, but ultimately only one will acquire the lock and succeed with the update; the other will be aborted. If those two transactions acquire their locks in different orders and each succeeds in locking a resource the other needs to finish, you have a deadlock.

Deadlocks are a logic problem, not a software problem. If you’re getting deadlocks, it’s because your code is flawed and your transactions are not acquiring locks in the same order, i.e. first table A, then table B, then table C. What the order is doesn’t matter, as long as they all do it the same.
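
A contrived two-session illustration (table names are hypothetical):

```sql
-- Session 1:
BEGIN TRANSACTION;
UPDATE dbo.TableA SET Val = 1 WHERE Id = 1;  -- locks a row in A
UPDATE dbo.TableB SET Val = 1 WHERE Id = 1;  -- then waits on B...
COMMIT TRANSACTION;

-- Session 2 (note the reversed order):
BEGIN TRANSACTION;
UPDATE dbo.TableB SET Val = 2 WHERE Id = 1;  -- locks the row in B
UPDATE dbo.TableA SET Val = 2 WHERE Id = 1;  -- ...waits on A: deadlock
COMMIT TRANSACTION;

-- The fix is pure logic: have Session 2 touch TableA before TableB, so
-- both sessions acquire locks in the same order; then one simply waits
-- its turn instead of deadlocking.
```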

MVCC just happens to shorten lock time so much, both by eliminating read locks and by allowing multiple writers with extremely short write locks, that many such flaws in logic are never exposed, because the likelihood of them occurring becomes so small. SQL Server locks during the write to disk by default, so the lock time is long; Oracle writes to disk first, all the new versions, then acquires locks just long enough to swap in the new versions.

Oracle chooses the correct approach and just accepts the performance penalty of always doing row locks and MVCC in order to achieve the correct behavior.

SQL Server 2005 or better can be made to behave correctly; it just isn’t the default behavior. MVCC is expensive, so it’s off by default, because they didn’t want 2005 to appear slower than previous versions. That was a mistake; it should be on by default, since correct behavior is more important than any appearance of speed.
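
For reference, the switch is per database (the database name here is hypothetical, and the first option needs a moment with no other active connections in the database to take effect):

```sql
-- Make plain READ COMMITTED use row versions instead of shared locks:
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally also permit full SNAPSHOT isolation per transaction:
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```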

Small Correction:

In Oracle and PostgreSQL, a read will never block, since they both use a form of multiversion concurrency control. This feature removes a major cause of database programming suckage.

I’ll admit I didn’t read all of the posts so this may have been covered. Can you not turn on the READ_COMMITTED_SNAPSHOT option on the database? This will further explain it:
http://blog.sublogic.com/2006/02/oracles-multiversioning-now-available-in-sql-server/

Jeff-

You might want to try here:

http://blogs.msdn.com/bartd/

http://blogs.msdn.com/bartd/archive/2006/09/13/751343.aspx

I don’t think you have a typical deadlock situation going on (A waiting on B, B waiting on A scenario).

The above links should help with troubleshooting and offer some insights. I think you might just need some query tuning or some index adjustment.

I don’t think SQL Server is broken as several have suggested. Looks like Stack Overflow needs a SQL Server DBA!

Jon Raynor

Assuming this is not an April Fool’s type joke blog entry: the author states “I’m no DBA” and then proceeds to give DB-related advice? Seriously? Even to the point of saying “But in practice adding nolock to queries that you absolutely know are simple, straightforward read-only affairs never seems to lead to problems.” Never SEEMS to lead to problems? Really? These are to be taken as trustworthy statements? And the Google search on SQL Server vs. MySQL deadlocks did not lead to the conclusion that at least twice as many companies use SQL Server as MySQL? (And that doesn’t even take into account the fact that MySQL has only supported transactions for a relatively short time compared to SQL Server.) And no mention of the increased TempDB activity trade-off when using snapshot isolation?

I am very glad that this guy doesn’t work for my bank. And, developers like this (no matter how good they are at application code and/or architecture) really do help keep DB folks like myself very well paid. Bless his heart ;-).

PS. with regards to one of the responses above (from wds): this is not an insult to the author’s intelligence but rather a response to the dismissive attitude that databases are so simple that anyone who is a good developer is automagically qualified to do data modeling, solve complex DB problems, and even give DB-specific advice.

Love the rant… just had a similar problem. It caused me to lose some sleep and some hair.

The problem I had was a simple insert stored procedure - when I say simple it was basically “insert into tablename values …”. The deadlock was between, you guessed it, two processes running this stored procedure.

Do you know why? The table was set up with a primary id that was an autonumber field (i.e. seed, increment). On insert, MS SQL has to lock the previous record; meanwhile another update is taking place concurrently… presto, deadlock.

I thought it was my so-called ‘lack of SQL knowledge’ in writing stored procedures. But it was a lack of knowledge in database design - not that I am a DBA (I am a software developer).

There are a few workarounds; having another table is one of them. But I chose the easiest option: make the column type ‘uniqueidentifier’. Problem partially solved; now it is time to inform my so-called SQL Admin Expert.
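
For what it’s worth, a sketch of that workaround (the table definition is hypothetical): the key is generated per row instead of from the table’s running counter, so concurrent inserts don’t compete over the last key value.

```sql
CREATE TABLE dbo.AuditLog
(
    LogId uniqueidentifier NOT NULL
        CONSTRAINT DF_AuditLog_LogId DEFAULT NEWID()
        CONSTRAINT PK_AuditLog PRIMARY KEY,
    Message nvarchar(400) NOT NULL,
    CreatedAt datetime NOT NULL
        CONSTRAINT DF_AuditLog_CreatedAt DEFAULT GETDATE()
);

-- Concurrent inserts no longer depend on the previous row's key value:
INSERT INTO dbo.AuditLog (Message) VALUES (N'order created');
```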

PS. This is SQL Server 2005; not sure if this is still a problem in 2008.

@codinghorror FYI, “SQL Server deadlocks” link goes to http://www.google.com/search?q= which seems wrong.

https://samsaffron.com/archive/2008/08/27/Deadlocked+ sums it up. Unless the table is guaranteed not to be written to, NOLOCK is a great tool to speed up a negligible amount of CPU ticks. :slight_smile:

set deadlock_priority low

You should use this instead. Put a try/catch around your query, and possibly re-loop x times until it succeeds or can’t run.
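
Something like this sketch (the UPDATE stands in for your real query; 1205 is SQL Server’s deadlock-victim error number):

```sql
SET DEADLOCK_PRIORITY LOW;   -- volunteer to be the victim, not the survivor

DECLARE @Retries int;
SET @Retries = 3;

WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
        COMMIT TRANSACTION;
        BREAK;   -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        SET @Retries = @Retries - 1;
        IF ERROR_NUMBER() <> 1205 OR @Retries = 0
        BEGIN
            -- Not a deadlock, or out of retries: surface the error.
            DECLARE @Msg nvarchar(2048);
            SET @Msg = ERROR_MESSAGE();
            RAISERROR(@Msg, 16, 1);
            BREAK;
        END
        -- Otherwise we were the deadlock victim: loop and try again.
    END CATCH
END
```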

There are rare circumstances where NOLOCK is good: a WHILE loop + WAITFOR DELAY for polling until a certain state in the system is set, like waiting while a SQL Server Agent job is running. But you do need to account for the dirty read: if the value was dirty or the job was still running, keep waiting.
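
A minimal sketch of that polling loop (table name and status values are hypothetical); a dirty or stale read here only costs an extra iteration, never a wrong final answer:

```sql
DECLARE @Status varchar(20);

WHILE 1 = 1
BEGIN
    -- Dirty read is acceptable: worst case we wait one extra cycle.
    SELECT @Status = Status
    FROM dbo.JobState WITH (NOLOCK)
    WHERE JobName = 'NightlyImport';

    IF @Status = 'Completed' BREAK;   -- done; anything else, keep waiting

    WAITFOR DELAY '00:00:05';         -- poll every five seconds
END
```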

If your query is long-running and out of control, look into the Resource Governor to lower its priority, even if it is doing reads across many databases. “set deadlock_priority low” should also be used.
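
A rough sketch of a Resource Governor setup (pool, group, and login names are hypothetical; the classifier function lives in master):

```sql
USE master;
GO
-- Cap the reporting workload's CPU share.
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- Route the long-running reporting login into the capped group.
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'reporting_user'
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```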
