We have a website using ASP.NET MVC and SQL Server 2008, and we are using the default TransactionScope isolation level, which is Serializable. But it makes the application unusable whenever a transaction is open, because we have a table that is used by almost everything and is queried like this:
select * from table1 where id = 1
So I think it locks the whole table when the above is executed.
I have been reading about what the best option could be for a web application with a lot of transactions.
I am kind of sold on the Snapshot isolation level, which seems to give the best of everything:
- It protects the data by throwing an error if data you have read is changed by another transaction.
- It also allows reading data that is being modified in another transaction.
Which isolation level has worked best for a web application, in your experience?
Edit: The default isolation level comes from TransactionScope; just to clarify why I mentioned Serializable as the default.
I have read lots of blogs and answers here suggesting NOLOCK for selects, but that is kind of a hack if you ask me. In a real-life scenario it will probably be OK 99.99% of the time, and that may be fine for Facebook, Twitter, or Stack Overflow, where nobody cares much if the occasional read sees inconsistent data. But my take is that if we use transaction isolation, it must be a 100% guaranteed architecture. Otherwise don't use it at all and add some other data integrity checks with a trigger or something.
You should use read committed snapshot in the database. This isolation level gives the highest degree of concurrency, at the cost of maintaining a version store during updates. If your app is read-intensive rather than OLTP-intensive, this trade-off is worth it.
If you enable read committed snapshot in your database, then transactions running under read committed isolation will automatically use versioned (snapshot-style) reads instead of taking shared locks.
The serializable isolation level is usually way overkill and a true performance hog. Plain read committed (without snapshot) is good, but it can cause problems because reads block against writes.
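One caveat: TransactionScope defaults to Serializable, so even with read committed snapshot enabled in the database you still have to ask for ReadCommitted explicitly. A minimal sketch of what I mean, using the standard System.Transactions API (the connection string and query are just placeholders):

using System;
using System.Data.SqlClient;
using System.Transactions;

// Request ReadCommitted explicitly; with READ_COMMITTED_SNAPSHOT ON in the
// database, these reads use row versioning instead of shared locks.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30)
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var connection = new SqlConnection("<your connection string>"))
{
    connection.Open();
    using (var command = new SqlCommand("select * from table1 where id = @id", connection))
    {
        command.Parameters.AddWithValue("@id", 1);
        using (var reader = command.ExecuteReader())
        {
            // ... read the rows ...
        }
    }
    scope.Complete(); // commit; without this the transaction rolls back on dispose
}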
I suppose this question may be a better fit for serverfault.com; that said, here's my understanding of the topic:
First, since when did the default isolation level become Serializable? I thought it was Read Committed!
Second, Snapshot may not be a good idea, as it effectively uses tempdb (which for the most part sits in memory) to store concurrent versions of the data, so with any luck you will run out of RAM in no time.
Third, the isolation level is not an all-or-nothing endeavor; instead, you should look at each and every query and set it per query, using query hints or whatever. I would say that for your magic select statement that is used everywhere you may even want to go with a (NOLOCK) hint (assuming the underlying table is 99.99% read-only; by the way, if you notice yourself doing too much read-only work, that's an indication that you should look into caching, whether the native ASP.NET cache, Memcached, or whatever), while the rest may use read committed. Only on rare occasions (e.g. an automatically maintained lookup table) do you want anything higher than that.
Fourth, don't overuse pessimistic locking altogether. The much smarter choice is optimistic locking instead, e.g. insert hoping there's no duplicate and deal with a failing constraint afterward, and things like that. For updates, you can add a timestamp (rowversion) column and include it in your WHERE clause; if somebody else hijacked the update before you, the row count will be 0, as in the sketch below.
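To illustrate that rowversion idea, here is a rough sketch (the table and column names are made up; it assumes table1 has a rowversion column named RowVersion whose value you read along with the row):

using System.Data.SqlClient;

// Optimistic update: only succeeds if nobody changed the row since we read it.
static bool TryUpdateName(string connectionString, int id, string newName, byte[] originalRowVersion)
{
    const string sql =
        "update table1 set name = @name " +
        "where id = @id and RowVersion = @originalRowVersion";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@name", newName);
        command.Parameters.AddWithValue("@id", id);
        command.Parameters.AddWithValue("@originalRowVersion", originalRowVersion);

        connection.Open();
        int rowCount = command.ExecuteNonQuery();
        return rowCount == 1; // 0 means someone else updated the row before us
    }
}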
Hope this makes sense.
One of the best strategies is Row Versioning. You can set this at the database level like:
ALTER DATABASE YourDatabaseName SET READ_COMMITTED_SNAPSHOT ON;
This basically does nonblocking reads by default. See also Row Versioning-based Isolation Levels in the Database Engine.
A good RDBMS would only lock the row indicated by "id = 1", not the whole table, when you issue such a SELECT. The Serializable isolation level is just fine provided that the database provides reasonable row locking, and that all pending locks are released at the end of each web request; this generally means closing the database connection at the end of a request, or alternatively issuing a ROLLBACK or COMMIT if the connection is to be re-used for a subsequent request.
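For example, here is a rough sketch of keeping everything scoped to a single request with plain ADO.NET (the connection string and query are placeholders):

using System.Data.SqlClient;

// Open, use, and dispose the connection within one web request so any
// locks taken by the transaction are released when the request ends.
using (var connection = new SqlConnection("<your connection string>"))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    using (var command = new SqlCommand("select * from table1 where id = 1", connection, transaction))
    {
        var result = command.ExecuteScalar();
        transaction.Commit(); // or Rollback(); either way the locks are released
    }
} // disposing returns the connection to the pool with no open transaction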
You need to go with read committed and do some optimistic locking in your app. This seems to be more robust in most cases.
Also have a look at how the Stack Overflow team handled their locking issues.
I think the best isolation level is Snapshot, because it reads the last committed data in the table if another transaction has not yet completed, with no locking and no dirty reads. So in my view Snapshot is better than Read Committed.
You can do this in one of two ways:
1. ALTER DATABASE testing SET ALLOW_SNAPSHOT_ISOLATION ON;
2. ALTER DATABASE testing SET READ_COMMITTED_SNAPSHOT ON;
The first enables the SNAPSHOT isolation level, which each transaction then has to request explicitly; the second makes plain READ COMMITTED reads use row versioning automatically.
I hope this is useful for the reader.