Using parameterised SQL statements (as in the code shown) can cause SQL Server not to use indexes, leading to table scans, which in turn can cause table locks.
@Fran, this might have been true before parameter sniffing was introduced (in SQL Server 7, IIRC), or if your statistics are way out of date.
My experience is that SQL Server is quite aggressive about using indexes for equals / LIKE 'x%' queries (as you would expect), but it is unlikely to use an index for a range scan, or when you issue SELECT *, which means both the index and the table rows need to be read.
As always, don't ASSUME anything. If you want to be sure, design a test case and PROVE it. (I did this on a Stack Overflow thread and surprised myself!)
I have seen something similar to this before. As another poster mentioned, if the select or the write requires a table scan, then it will take a table lock, and those can conflict. However, just adding an index may not help - and I would indeed assume that you already have one.
Using parameterised SQL statements (as in the code shown) can cause SQL Server not to use indexes, leading to table scans, which in turn can cause table locks.
So if you declare a variable and then use that variable in a SQL statement, the optimiser may choose not to use an index. I believe something similar is sometimes true for prepared statements.
Because the database is small and all in memory, you are unlikely to even notice that table scans are happening.
You need to use SQL Profiler with more detail, combined with Performance Monitor, to check the number of locks and the number of table scans. Then try looking at the estimated execution plan in Query Analyser - once using variables/prepared statements, and once substituting in literal values.
@Chris Not sure if you're coming back to read this, but anyway, I just wanted to give you a thumbs up for your comments. I'm glad to see that someone has a different perspective on things. You also inspired me to do a little rant on stupid questions (hope you don't mind me quoting you):
One has to be careful when using the (nolock) hint in SQL Server, because it is possible for the query not to return all the expected results!
I have spent many days trying to figure out why an SSRS report never returned consistent results, nor matched information out of Microsoft Dynamics AX (an ERP system with SQL Server as a back end). This was due entirely to the (nolock) hint causing SQL Server to skip records that had a lock on them (hence the same query wouldn't always return the same rows). You need to SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, or use the SNAPSHOT isolation level you mentioned in your post.
One thing of note with READ COMMITTED SNAPSHOT: it uses tempdb to store the row versions that make up the snapshot, so your tempdb files should be on an appropriate disk setup, as tempdb will be used more heavily.
The dining philosophers problem has been coined a classic synchronization problem neither because of its practical importance nor because computer scientists dislike philosophers…
We figured out our deadlock problem, finally. It had to do with Log4Net being called during a database operation, itself triggering another database operation and log event. This is normally OK, but the events were happening so close together that it resulted in a deadlock.
SQL Server still works fine. Programmers still blame their platform for their mistakes.
Jeff said - We figured out our deadlock problem, finally. It had to do with Log4Net being called during a database operation, itself triggering another database operation and log event. This is normally OK, but the events were happening so close together that it resulted in a deadlock.
@JacKU - being careful not to jump to conclusions when you don't understand the problem/software/library.
I agree. You can easily come to an incorrect conclusion even if you follow a very logical path. I find the forums really useful. How many problems/errors/issues do you come across where nobody in the world has ever come across that issue before in any form? Not very often. Being part of on-line forum communities is becoming essential, because 'whoops' problems like this just shouldn't happen if proper communication through forums is occurring.
We can't know everything, but everyone can know something.
The article as well as the discussion just confirms what I have encountered in my professional experience - many experienced application developers don't have a good understanding of the most basic DBMS properties: ACID, concurrency control, isolation levels… All of this I learnt at uni; I can't remember if it was second or third year.
The underlying logical and theoretical essentials should take no more than a dozen hours to comprehend. RDBMS vendor-specific features are a few hours on top. What's everyone's excuse for not investing such a relatively insignificant amount of time?
Jeff's having problems with a simple web app. The scary part is that there are much more demanding apps designed by similarly ignorant engineers.
Most DBAs don't know programming, so they don't answer these questions correctly (which is why I don't hire DBAs, because 95% are worthless). A few facts I'm surprised no one has mentioned:
Within the same database connection, you CANNOT block yourself. I've seen lots of people chase their tails over this.
Many developers use multiple database connections. This allows them to block themselves within the same logical set of code. If they simply reused connections, the blocking could go away.
Try to always read and update your tables in the same order. This is old school, but just about no developer or DBA enforces these rules anymore.
Using NOLOCK in ALL instances is just plain wrong. Use it when you know it doesn't matter… like all things, you've got to know what you are doing to do it well.
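The "same order" rule above can be sketched with plain threads. This is a hypothetical illustration (the lock names and transaction names are invented, not from the thread), but the principle is exactly the one the commenter describes:

```python
import threading

# Two locks standing in for two tables. Every transaction acquires them
# in the same fixed global order (posts, then users). If one code path
# took posts->users while another took users->posts, two threads could
# each hold one lock and wait forever for the other; a fixed order makes
# that cycle impossible.
lock_posts = threading.Lock()
lock_users = threading.Lock()
results = []

def update_both(txn_name):
    with lock_posts:      # always acquired first
        with lock_users:  # always acquired second
            results.append(txn_name)

threads = [threading.Thread(target=update_both, args=(f"txn{i}",))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 4 - all transactions completed, no deadlock
```

The same discipline applies at the SQL level: if every stored procedure touches Posts before Users, a cycle in the waits-for graph cannot form.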
I encountered the same problem last week. Whenever two threads ran the transaction simultaneously, a deadlock occurred. After reproducing the error and finding no solution, I placed a lock in the C# code so that no two threads execute that transaction at the same time. This eliminated the problem, but at the cost of reduced parallelism. The idea of retrying could have backfired, as there could have been an issue of thread starvation. Any comments or a better idea?
Jeff,
There isn't any no-lock option in Oracle, because readers don't block writers. (And Oracle does not support dirty reads, because there is no reason to do so.) Not all RDBMSs work the same way. This is a prime example of that.
Just because dirty reads don't exist doesn't mean that readers don't block writers. Dirty reads mean that readers aren't blocked by writers. Readers DO NOT block writers in SQL Server.
The one exception in SQL Server where readers block writers is when the reading is done within a transaction. That is - if a row is read in a transaction it is considered part of the transaction, but that is logical.
You wouldn't want to start a withdrawal process, read the balance, and proceed with the transaction only to find that the balance was updated during the final part of the transaction! Instead, a second process trying to reduce the balance should wait until this transaction finishes.
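The withdrawal scenario can be mimicked with a plain in-process lock; a small sketch (the account structure and names are invented for illustration) of why the read must stay "part of the transaction" until the write completes:

```python
import threading

balance = {"amount": 100}
account_lock = threading.Lock()
outcomes = []

def withdraw(amount):
    # Holding the lock across read -> check -> write is the in-process
    # analogue of keeping the row locked for the whole transaction:
    # nothing can change the balance between our read and our update,
    # so two withdrawals of 80 can never both succeed against 100.
    with account_lock:
        if balance["amount"] >= amount:
            balance["amount"] -= amount
            outcomes.append("ok")
        else:
            outcomes.append("insufficient")

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance["amount"], sorted(outcomes))  # 20 ['insufficient', 'ok']
```

Without the lock, both threads could read 100, both pass the check, and the account would go to -60 - the lost-update anomaly the comment above is warning about.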
We had a similar problem where deadlocks were happening, and we found that it all came down to the MaxDop (Maximum Degree Of Parallelism) setting.
We have 90 GB databases running super fast with no drop in performance after setting this value to 1. In fact, we recommend it now. We also get zero deadlocks as a result.
On one high-traffic site we are managing, we discovered an interesting issue. We had a bug with the view counter on posts.
Initially we read the 'post' row, which held the viewcount. Then in the application we computed viewcount = viewcount + 1. Finally we updated the DB with the new viewcount.
Issue: massive deadlocks.
We looked at the stats of the DB and discovered that many UPDATEs were trying to write the same viewcount value. The app does many reads very quickly and piles up the subsequent writes with the same value.
And that doesn't please PostgreSQL at all.
We removed all our issues by incrementing the viewcount directly in the SQL statement.
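The fix described above - letting the database do the arithmetic instead of read/increment/write in the application - can be sketched with SQLite standing in for PostgreSQL (a minimal illustration with an invented schema, not the site's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, viewcount INTEGER)")
conn.execute("INSERT INTO posts VALUES (1, 0)")

# The race-prone pattern is: SELECT viewcount, add 1 in the application,
# then UPDATE with the computed number. Two concurrent clients can both
# read n and both write n + 1, losing an increment and piling up
# conflicting writes. Doing the arithmetic inside the UPDATE itself makes
# each increment a single atomic statement the database can serialize:
for _ in range(5):
    conn.execute("UPDATE posts SET viewcount = viewcount + 1 WHERE id = 1")
conn.commit()

final = conn.execute("SELECT viewcount FROM posts WHERE id = 1").fetchone()[0]
print(final)  # 5
```

Each statement reads and writes the row under one row lock, so no two increments can interleave and no long read-then-write window exists for a deadlock to form in.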
You wrote:
UPDATE [Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
Have you tried this:
UPDATE [Posts]
SET [AnswerCount] = [AnswerCount] + 1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
I'd guess you used to do this:
Begin;
select AnswerCount from … ;
…increment AnswerCount in application logic
update … set AnswerCount = NewAnswerCount;
Commit;
The problem there is that the default isolation level in Postgres allows the first select to read data without taking any locks (it just gets an MVCC snapshot when it starts - what Jeff wants to be the default in SQL Server :-), so concurrent executions of that transaction block will read data before the other transactions have completed.
One fix is to run that transaction at the SERIALIZABLE isolation level (in which case PostgreSQL will abort the conflicting transaction and make you restart it).
Or in this case you could simply
select AnswerCount from … for update;
which will take the proper locks, so concurrent executions of that transaction will see a coherent snapshot of the data and will wait for other transactions to complete.
OK, but why doesn't SQL Server re-issue the transaction on my behalf, then? Automatic retries of some kind?
Jeff, would you expect any other API to retry a command if it failed? It would seem like this is a cleanly written API. I think of SQL as an API to SQL Server (or any other RDBMS). It should be up to the client to handle the error.
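A client-side retry handler of the kind this reply suggests might look like the following sketch. DeadlockError, run_transaction, and the backoff parameters are all invented for illustration; the real driver exception would wrap the server's deadlock-victim error (error 1205 in SQL Server):

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver exception raised when a transaction is
    chosen as a deadlock victim (e.g. SQL Server error 1205)."""

def run_transaction(work, max_retries=3):
    # The server has already rolled the victim back, so the client can
    # simply re-run it. Exponential backoff with jitter lowers the odds
    # that the same two transactions collide again immediately.
    for attempt in range(max_retries + 1):
        try:
            return work()
        except DeadlockError:
            if attempt == max_retries:
                raise  # give up and surface the error to the caller
            time.sleep(random.uniform(0, 0.01) * (2 ** attempt))

# Simulated workload: picked as the deadlock victim twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"

result = run_transaction(flaky)
print(result, calls["n"])  # committed 3
```

One caveat, echoing the concern raised earlier in the thread: the body passed to run_transaction must be idempotent to re-run (the whole transaction, not half of it), and a retry cap is what prevents the starvation scenario the C# commenter worried about.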