To illustrate read uncommitted, if all your selects used read-uncommitted, you could have

employee ID 12 has a salary of $50

SELECT Salary FROM Employee WHERE ID = 12

phantom data is a possibility:
0s Process A - starts transaction - multiplies all salaries by 100
1s Process B - reads salary for user X (with nolock - read-uncommitted) … read value is $5000, or $50, or both
20s Process A - does heaps of other things and then rolls back transaction

whereas in snapshot mode
0s Process A - starts transaction - multiplies all salaries by 100
1s Process B - reads salary for user X … read value is $50
20s Process A - rolls back transaction or commits transaction

So there’s nothing wrong with the snapshot data: you only ever see committed data. Snapshot just prevents you from seeing data before it’s committed, but doesn’t make you wait. It’s as if the read were reordered to come before the write started. It’s a better version of NOLOCK, but it does come at a price. And maybe that reordering doesn’t make sense for your application.
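The two timelines above can be sketched with a toy model (Python purely for illustration; the Row class and its methods are invented here, not SQL Server’s actual mechanism). A dirty read returns whatever in-flight value exists, while a snapshot read returns only the last committed version:

```python
# Toy sketch contrasting a dirty read (READ UNCOMMITTED / NOLOCK)
# with a versioned read (SNAPSHOT / READ COMMITTED SNAPSHOT).
class Row:
    def __init__(self, committed):
        self.committed = committed      # last committed value
        self.uncommitted = None         # in-flight value from an open transaction

    def begin_update(self, new_value):  # writer changes the row inside a transaction
        self.uncommitted = new_value

    def rollback(self):
        self.uncommitted = None

    def commit(self):
        self.committed = self.uncommitted
        self.uncommitted = None

    def read_uncommitted(self):         # NOLOCK-style read: sees in-flight data
        return self.uncommitted if self.uncommitted is not None else self.committed

    def read_snapshot(self):            # versioned read: last committed value only
        return self.committed

salary = Row(50)
salary.begin_update(50 * 100)           # Process A multiplies the salary by 100
dirty = salary.read_uncommitted()       # Process B with NOLOCK sees 5000
snap = salary.read_snapshot()           # Process B under snapshot sees 50
salary.rollback()                       # Process A rolls back; 5000 never "existed"
print(dirty, snap, salary.read_snapshot())  # 5000 50 50
```

The dirty reader acted on a value that was later rolled back; the snapshot reader never saw it and never waited.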

So the only alternative is to block readers from reading data that is under change.

0s Process A - starts transaction - multiplies all salaries by 100
1s Process B - wants to read salary for user X - lock wait
20s Process A - rolls back transaction
21s Process B - reads salary for user X … read value is $50

And as pointed out earlier, if you block readers for rows that may change, and the reader is locking tables in a different order than writers, then you have deadlock.
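The mismatched-order deadlock can be sketched with two plain Python locks standing in for tables (a toy model, not database locks; the timeouts are only there so the script can observe the cycle instead of hanging forever). T1 holds A and waits for B; T2 holds B and waits for A; one side times out and backs off, much like the server choosing a deadlock victim:

```python
# Two workers acquire the same two locks in OPPOSITE order,
# creating the circular wait described above.
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()
outcome = {}

def t1():
    with lock_a:                 # T1 locks "table A" first
        time.sleep(0.2)          # ...while T2 is locking "table B"
        # T1 now wants B, but gives up if it can't get it in time
        outcome["t1_got_b"] = lock_b.acquire(timeout=0.5)
        if outcome["t1_got_b"]:
            lock_b.release()

def t2():
    with lock_b:                 # T2 locks "table B" first
        time.sleep(0.2)
        # T2 wants A; it succeeds only after T1 backs off and releases A
        outcome["t2_got_a"] = lock_a.acquire(timeout=5)
        if outcome["t2_got_a"]:
            lock_a.release()

a, b = threading.Thread(target=t1), threading.Thread(target=t2)
a.start(); b.start(); a.join(); b.join()
print(outcome)  # t1_got_b is False (victim), t2_got_a is True (survivor)
```

With blocking acquires and no timeout, both threads would wait on each other forever, which is exactly the cycle the database's deadlock detector breaks by killing one participant.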


Robin Day Wrote: If they were to justify reducing the level to READ COMMITTED SNAPSHOT, then they could in fairness reduce the default to READ UNCOMMITTED; how many people would that affect?

I think you aren’t understanding the difference between these two, and I think Jeff isn’t either, as he said: With nolock / read uncommitted / dirty reads, data may be out of date at the time you read it, but it’s never wrong or garbled or corrupted in a way that will crash you.

Read uncommitted doesn’t mean data is out of date; it means you are seeing data BEFORE it’s in date. It means you are seeing the absolute latest uncommitted data. Read Committed Snapshot, on the other hand, means that you ALWAYS see the latest committed data. Read Committed Snapshot allows for 100% ACIDity. Read Uncommitted allows for 0%.


Tim C:
Read uncommitted doesn’t mean data is out of date; it means you are seeing data BEFORE it’s in date. It means you are seeing the absolute latest uncommitted data. Read Committed Snapshot, on the other hand, means that you ALWAYS see the latest committed data. Read Committed Snapshot allows for 100% ACIDity. Read Uncommitted allows for 0%.

Almost, until that last sentence. The percentage is unknown a priori. The only time that UR data is wrong is when the uncommitted transaction is rolled back. In no other case is it wrong. Depending on the nature of the system (i.e. no programmatic Rollback, no Deadlock, no Timeout), it can be 100%.


Lastly, this is why database transactions should be deliberately designed rather than being the bastard step-child of the code’s business layer use-cases.
Robert C. Barth on August 25, 2008 10:56 AM

Hallelujah. Like object databases, this is another useless idea designed to make programmers think they can write code by clicking about a few times in an IDE. To design a working system like you want, you need to do it the old-fashioned, boring but reliable ways. LINQ is just like the Ruby programs that are finding they don’t scale, they’re nice ideas, but that’s all they are.

If you are designing something that will be putting food on your table, you want it to work. If you’re designing a playground for yourself, only then can you use whatever you like.


Simple solution: Change to Firebird


@Tim C:
How about this case?

This is a non-atomic transaction. What happens depends on the implementation. And in an OLTP system (really, any multi-process one), such a read will always be wrong, since there are multiple transactions (whether atomic or not) in flight at any point in time. The read will interleave some writes; therefore it is wrong. If you’ve set up your database (and it supports this) to read from buffers immediately (recognize commit on buffer update; don’t wait for flushes, table or log), you shouldn’t see the issue. But as others have mentioned, MVCC rather than locking is the answer.


LINQ is just like the Ruby programs that are finding they don’t scale, they’re nice ideas, but that’s all they are.

I’d agree when using it with SQL, but for in-memory objects like Arrays, Lists, loaded XML documents, etc., LINQ has been incredible.



I provide an explanation of why and how deadlocks can occur with only SELECT and UPDATE statements outside of transactions.


Funny thing:
During one project we found out that running parallel database queries didn’t improve speed over serialized database queries. (E.g. we had similar queries running one minute each. Running them in parallel, it took 6 minutes before all of them finished. So we didn’t win anything by parallel execution; even worse, the response time for all users became unacceptable.)

So the trouble about locking and concurrent operations and everything was pointless: batching all queries and running them one after the other didn’t slow down the system, and all the problems with locking and stuff were gone.

Sometimes I really wonder if today’s heavily optimized x86 CPUs really run faster than an overclocked 6502 CPU (CBM 64) running at 3 GHz :wink:


@Sam - insightful stuff. Another thing to keep me from sleeping at night.


Is it possible to somehow filter out suggestions like “Use MySQL” or “Switch to Linux” or “Oracle is the best!” or “Microsoft rules!” or “I love Macs!” or whatever? I don’t mean just from this blog’s comments, I mean from life in general.


One of the ‘underrated’ causes of deadlocks is the absence of covering indexes (i.e. non-clustered covering indexes in the SQL Server world). Grossly simplifying: reads (SELECTs) that can be satisfied wholly from a covering index will not require an extra lookup into the clustered index (which is predominantly where the locking for updates or inserts will be occurring).


You need to look at what else is happening in the transactions that are deadlocking; it’s the transactions that hold the locks, not the individual SQL statements.

As Jasmine said above, for a deadlock to occur one transaction must have locked resource A and then tried to lock resource B, then the second transaction must have locked B, and tried to lock A, causing the deadlock.

In the case of read-write locks, it’s also possible that both transactions have (non-exclusive) read locks on B and one has an (exclusive) write lock on A.

Once you have understood this, you can solve the problem using a technique called Structural Deadlock Avoidance. It’s pretty simple, but it requires a bit of hard work and discipline. It works like this:

  1. assign a unique number to each resource that you want to lock. E.g., resource A = 1, resource B = 2.

  2. when locking multiple resources, always lock the one with the lowest number first, then the next lowest, … E.g., lock A first, then lock B.

If you do this you can see that you’ll never deadlock, because you always lock resources in the same order.

This technique works with any kind of locking construct, object locks in Java or .NET, file locks, semaphores, or database locks.

In a database the resources you number are tables. E.g., TABLE A = 1, TABLE B = 2, … then you order your transactions like this:

Transaction 1: lock TABLE A, then lock TABLE B, do the work, commit.

Transaction 2: lock TABLE A, then lock TABLE B (the same order, even if it logically needs B first), do the work, commit.

I’ve been using this technique in a variety of environments for more than 15 years. As long as I don’t take shortcuts, I never have deadlocks. Hope it helps you.
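A minimal sketch of the numbering rule in Python (threads and threading.Lock stand in for transactions and table locks; the resource numbers are the hypothetical A = 1, B = 2 from above). Even though the second worker asks for B before A, sorting by resource number forces both to lock A first, so no cycle can form:

```python
# Structural Deadlock Avoidance: every worker acquires its locks
# in ascending resource-number order, regardless of the order it
# was asked for them.
import threading

locks = {1: threading.Lock(), 2: threading.Lock()}  # resource A = 1, B = 2
log = []

def transaction(name, needed):
    # Sort by resource number: the whole point of the technique.
    for n in sorted(needed):
        locks[n].acquire()
    log.append(name)                      # "do the work" while holding all locks
    for n in sorted(needed, reverse=True):
        locks[n].release()

# Without the sort, T1 (A then B) and T2 (B then A) could deadlock.
t1 = threading.Thread(target=transaction, args=("T1", [1, 2]))
t2 = threading.Thread(target=transaction, args=("T2", [2, 1]))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['T1', 'T2'] -- both finished, no deadlock
```

The same discipline applies whether the “resources” are tables, files, semaphores, or objects: a cycle requires two parties to wait on each other, and a global acquisition order makes that impossible.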


Holy shit! These comments (inc. Jeff’s original post) display the staggering amount of ignorance there is about SQL Server.

It’s truly scary - a good 60-70% of comments are clueless, particularly the ones just saying “use NOLOCK”. Oh dear. Makes me realize how lucky I am to have a good understanding of the product. Conversely it makes me realise why so many database applications are fcked!

If you actually understand how to program SQL Server, it’s one of the best-performing DBMSes in the world, not to mention the easiest to develop for.


I like Dave and his writings. But, for the life of me, I can not understand why anyone would want to run Windows SQL Server when they can just as easily install and run MySQL.

MySQL can run on a Windows Server. I am not sure if .NET/ASP.NET has APIs for MySQL, but if it does, switching to a better database is what I would recommend.

I run 4 sites that have massive reads and writes, and I’m running a very old MySQL version (3) and have never had any problems. Other than when the server crashes, for whatever reason, and even then all I have to do is run the repair tool to reconstruct any broken records that may have been corrupted.


There is a fundamental difference between database dead-lock and the ‘Dining Philosophers puzzle’ dead-lock you would see in concurrent programming using ‘lock’.

The deadlock described in ‘Philosophers’ is a state in which none of the processes can continue because of a circular dependency (that is, until you force one to give up). This happens when a process A has a lock on X and requires a lock on Y, while a process B has a lock on Y and wants a lock on X. There is an inherent resource inter-dependency conflict.

In the database, ‘dead-lock’ can happen when a rogue process locks the table - there is no lock on two resources. ‘select * from table’ may look innocuous, but if it does not release its lock on the table, any subsequent ‘update’ statement may cause a ‘dead-lock’. It really depends on the database which statement it decides to terminate. In DB2’s case it was the ‘update’ statement, but in MS-SQL it may be the ‘select *’.

‘select *’ is not innocent. If your program holds on to the result set and iterates over it very ‘slowly’, other operations that require a ‘write’ lock may be blocked behind the ‘select *’. It is just resource contention, with no circular dependency. This problem was very self-evident in my case with DB2. The developer was doing ‘select *’ from a very large table using DBArtisan. DBArtisan continued to hold on to the result set, so the table stayed locked, causing any subsequent update to fail.

Detecting ‘dead-lock’ is very difficult. Your first instinct (‘SELECT * cannot cause this’) may lead you down the wrong path. I hope you find your cause soon.

Good luck.


Based on the philosopher example, simple switching is all you need.


Jeff’s point in one of his followup posts is the critical one: the DBMS should, at the user’s option, be able to handle deadlock automatically, without user intervention. We are willing to accept some epsilon-level error on one side (dirty read, error in time synchronization, whatever) in exchange for not having our query barf on us for reasons created by the specific circumstances, i.e., someone else competing with us for scarce resources.

In situations where code must be of very high reliability (larger organizations like mine), deadlock failures are simply unacceptable, period. The fact that they can arise through no fault of one’s own query is a huge Achilles’ heel to basic SQL.

The other Achilles’ heel, btw, is the ability of a single client with a bad query to bring the server to its knees, also a simply 100% unacceptable event. I imagine both of these server weaknesses can be tuned out by a skilled DBA, but I certainly don’t know how, and we see these kinds of failures all the time.


Just a comment on deadlocks and the with (nolock) option - that basically says that you don’t care what state the data is in. It is blocked by default because it is dangerous. Information that is dirty might be about to be undone. You can’t seriously say that this doesn’t matter because it isn’t a banking app, because ALL information that is being read from the database is being read for a reason. Information is acted on, either by a computer or by a user (even if that just means displaying it to a user so they can make decisions). Even something as simple as the date of a post: “Oh - I won’t respond to that post because it was made last year” or “That concept is outdated… no - hang on, they just posted today.”

Effectively, you are allowing decisions to be made based on information that will be wrong some of the time. That doesn’t make sense. Seriously - why are we bringing back information from the database if no decisions are based on that information? Useless information would never need to be retrieved.

SOMETIMES I will use with (nolock) - especially when I know the information I am returning can’t be changed (one part of a row may be static, and another part may change). Also - if the decision being made by the information is allowed to be wrong some of the time (such as general monitoring processes), then I will also use it.

BUT - I agree with others that it is how you are trying to achieve your result that is the issue. A deadlock is a bit more complicated than one process trying to get access to a resource while another already has it. A deadlock is two processes trying to get at each other’s resources at the same time, but neither will let go of its own resources until it completes. So your problem isn’t with the Select; your problem will be that you have TWO updates. SQL Server only killed one of them, which is the one you had reported to you. Seriously - do a profile to determine what is running at the time the deadlock occurred, and it will be clear which processes are clashing (I use Lumigent Log Explorer, and it makes it very clear which resources are in conflict).

As for the select:

SELECT *
FROM [Posts]
WHERE [ParentId] = @p0

If you had not used an asterisk (which is a BIG NO NO) AND indexed the information you were after, you may have found that no block due to the lock would have occurred at all. That is, SQL Server 2005 onwards allows you to include data at the leaf page of the index. Once the index has been used to find the data, it can then return the data to you without reading from the actual table!!! If this data is not being edited, the index should not be locked. So performing this query:

SELECT [UserName], [Topic]
FROM [Posts]
WHERE [ParentId] = @p0

Setting the ParentId index to have the UserName and Topic columns included with the index (look up the T-SQL on this to see how it is created) would locate your information FROM THE INDEX and never need to touch the data, and never even know the row was locked (take that, MySQL).
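To make the index-only lookup concrete, here is a toy model in Python (a dict plays the role of the covering index, and the Posts rows are made up for illustration). The query for [UserName], [Topic] by [ParentId] is answered entirely from the index and never touches the base rows:

```python
# Toy model of a covering index: the index on ParentId also stores
# the included columns (UserName, Topic), so the query below is
# answered from the index alone -- no lookup into the (possibly
# locked) base table.
posts = [  # base table: (Id, ParentId, UserName, Topic, Body)
    (1, 0, "alice", "deadlocks", "..."),
    (2, 0, "bob", "nolock", "..."),
    (3, 1, "carol", "snapshot", "..."),
]

# Roughly "CREATE INDEX ... ON Posts(ParentId) INCLUDE (UserName, Topic)",
# modelled as a dict keyed on ParentId holding the included columns.
covering_index = {}
for _id, parent, user, topic, _body in posts:
    covering_index.setdefault(parent, []).append((user, topic))

# SELECT [UserName], [Topic] FROM [Posts] WHERE [ParentId] = 0
result = covering_index.get(0, [])
print(result)  # [('alice', 'deadlocks'), ('bob', 'nolock')]
```

A write that only touches Body in the base rows never needs to modify this index, which is why such reads can proceed without colliding with the row locks.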

As for database technology (others have commented on this):

I have done speed comparisons, and for small apps I agree that MySQL is OK. But for larger apps Microsoft SQL Server out-strips MySQL like a car vs a scooter. Then there is the power of Microsoft SQL Server. Sure, MySQL is getting better, but it has nowhere near the programming ability. So it is still horses for courses - and as long as the SQL is ISO-standard, a user can still upgrade from MySQL to SQL Server. However, my personal thoughts are that using SQL Server throughout is the preferred option due to scalability, power, and it being the tool our developers are most familiar with…


Using parameterised SQL statements (as in the shown code) can cause SQL to not use indexes, thus causing table scans, which can cause table locks.

On analysis, I have never found this to be true. At least not since SQL Server 2000… But this is a very key issue - you always need to profile your system. How things work depends on your hardware, your data, your data structures and indexes.