Under what scenarios do we use the SERIALIZABLE isolation level? I have seen some answers on websites that say we usually go for it when we want a transaction to be completely isolated.
What I would like to know from your own experience is this: when you've used it in your projects, or seen it used in other projects, what was the specific requirement that was not being fulfilled by the other isolation levels?
SERIALIZABLE is useful when you build complex reports which require several queries. In READ COMMITTED or REPEATABLE READ, those queries could see changes committed between the moment the transaction started and the moment each query ran. pg_dump, the PostgreSQL dump utility, starts its job by issuing SET TRANSACTION ISOLATION LEVEL SERIALIZABLE to guarantee that the state of the database being dumped is that of the moment the utility was run, and that subsequent changes to the database cannot interfere with the dump.
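The report problem can be illustrated without a database at all. The following sketch (a plain in-memory simulation, with illustrative account names, not real SQL) shows how a two-query report gives a wrong total when a concurrent change lands between the queries:

```python
# Hypothetical in-memory illustration of why a multi-query report needs one
# consistent snapshot. The "accounts" dict stands in for a database table.
accounts = {"checking": 100, "savings": 200}

# Report under READ COMMITTED-like behavior: each query sees the latest data.
total_before = accounts["checking"]                # first report query
accounts["checking"] -= 50                         # a concurrent transfer commits...
accounts["savings"] += 50                          # ...between the two queries
report_total = total_before + accounts["savings"]  # second report query

print(report_total)  # 350 -- the report double-counts the transferred 50

# Under SERIALIZABLE (or any single-snapshot mode), both queries would read
# from the same snapshot and the total would be the correct 300.
```

The true total never changes (money only moved between accounts), yet the report says 350; a single consistent snapshot for both queries is what rules this out.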
SERIALIZABLE transactions are useful if you want a simple proof of correctness under concurrent loads. Since the definition of the serializable transaction isolation level in the SQL standard (since SQL-92) is that the behavior of any concurrent set of serializable transactions must be consistent with some serial (one-at-a-time) order of execution, any transaction which can be shown to do the right thing when run alone will do the right thing as part of any mix of serializable transactions.
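The classic counterexample at weaker levels is "write skew": two transactions that are each correct in isolation can jointly break an invariant when both run against the same snapshot. A minimal simulation (doctor names are purely illustrative), where the invariant is that at least one doctor stays on call:

```python
# Write skew: each transaction preserves the invariant when run alone, but a
# snapshot-based interleaving lets both commit and break it. SERIALIZABLE
# forbids any outcome not matching some serial order of the two transactions.
on_call = {"alice": True, "bob": True}

def go_off_call(doctor, snapshot):
    # Each transaction checks the invariant against its own snapshot...
    if sum(snapshot.values()) >= 2:
        on_call[doctor] = False  # ...then writes to the shared state.

snap = dict(on_call)   # both transactions start from the same snapshot
go_off_call("alice", snap)
go_off_call("bob", snap)

print(on_call)  # {'alice': False, 'bob': False} -- nobody is on call!
```

Run serially, the second transaction would see only one doctor on call and do nothing; a serializable database must produce one of those serial outcomes, so it would abort one of the two.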
There is a cost to this protection -- transactions must be blocked or rolled back for retry at times to ensure serializable transaction isolation, and information must be tracked to determine when it is necessary to take such actions. In some development environments, with a small number of transaction types and a small number of developers, it is often more cost-effective to use a less strict isolation level and manage race conditions explicitly in application code. Once you have dozens of programmers working against a schema with hundreds of tables and tens of thousands of transaction types, the cost of determining where race conditions exist can become overwhelming at less strict isolation levels and it generally becomes more cost-effective to use serializable transactions.
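Because serializable transactions can be rolled back by the database through no fault of their own, client code is normally written to retry the whole transaction. A minimal sketch of that pattern, using a stand-in exception class (in PostgreSQL the real signal is a serialization failure, SQLSTATE 40001):

```python
class SerializationFailure(Exception):
    """Stand-in for a database serialization error (e.g. SQLSTATE 40001)."""

def run_with_retry(txn, max_attempts=5):
    # The standard client-side pattern for SERIALIZABLE: if the database
    # rolls the transaction back, simply run the whole transaction again.
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise

# Demo: a transaction that fails once, then succeeds on the retry.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise SerializationFailure
    return "committed"

print(run_with_retry(flaky_txn))  # committed
```

The key design point is that the transaction body must be safe to re-execute from the top, which is why side effects outside the database (sending email, charging a card) are kept out of the retried function.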
Currently, the most frequently implemented method of providing serializable transactions is strict two-phase locking (S2PL), which relies on blocking locks held until the end of each transaction, plus deadlock detection with rollbacks to break deadlocks. In loads with very little write contention, optimistic concurrency control (OCC) can be used: it tracks a "read set" during the course of the transaction and rolls back if any other transaction modifies that read set. Some database products refer to snapshot isolation as serializable, although it does not actually provide the guarantees required by the SQL standard. A newer technique called Serializable Snapshot Isolation (SSI) was first described in an academic paper presented at the 2008 ACM SIGMOD conference and is used in PostgreSQL version 9.1 and later; it uses snapshot isolation plus tracking of read-write dependency patterns to determine when a transaction must be canceled. There are other techniques which are seen less often in production. Each of these has its own set of advantages and disadvantages, providing a different break-even point for a question like this.
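The OCC idea mentioned above can be sketched in a few lines. This is an illustrative toy, not a real engine: each key carries a version counter, a transaction records the versions it read, and commit validation fails if any of them changed:

```python
# Minimal sketch of optimistic concurrency control (OCC): track the versions
# read during a transaction and abort the commit if any of them changed.
class Store:
    def __init__(self):
        self.data = {}     # key -> value
        self.version = {}  # key -> version counter

    def write(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

class OccTxn:
    def __init__(self, store):
        self.store = store
        self.read_set = {}  # key -> version seen at read time
        self.writes = {}    # buffered writes, applied only on commit

    def read(self, key):
        self.read_set[key] = self.store.version.get(key, 0)
        return self.store.data.get(key)

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Validation: if another transaction changed anything we read, abort.
        for key, seen in self.read_set.items():
            if self.store.version.get(key, 0) != seen:
                return False
        for key, value in self.writes.items():
            self.store.write(key, value)
        return True

store = Store()
store.write("x", 1)

t = OccTxn(store)
t.read("x")
store.write("x", 2)  # a concurrent writer modifies the read set
print(t.commit())    # False -- the transaction must roll back and retry
```

Note the trade-off the answer describes: no blocking during the transaction, at the cost of wasted work whenever validation fails, which is why OCC suits loads with little write contention.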