Isolation level required for reliable de/increments on a single field

Imagine we have a table as follows,

+----+---------+--------+
| id | Name    | Bunnies|
+----+---------+--------+
|  1 | England |   1000 |
|  2 | Russia  |   1000 |
+----+---------+--------+

And we have multiple users taking bunnies for a specified period, such as 2 hours, and then returning them. (So minimum 0 bunnies, maximum 1000 bunnies; bunnies are only ever returned, never added, by users.)

I'm using two basic transactions:

BEGIN;
  UPDATE `BunnyTracker` SET `Bunnies`=`Bunnies`+1 where `id`=1;
COMMIT;

when someone returns a bunny, and

BEGIN;
  UPDATE `BunnyTracker` SET `Bunnies`=`Bunnies`-1 where `id`=1 AND `Bunnies` > 0;
COMMIT;

when someone attempts to take a bunny. I'm assuming those queries provide some sort of atomicity under the hood.

It's imperative that users cannot take more bunnies than each country has (i.e. no -23 bunnies if 23 users transact concurrently).

My issue is: how do I maintain ACID safety in this case, allowing concurrent increments and decrements of the Bunnies field while staying within the bounds (0-1000)? I could set the isolation level to SERIALIZABLE, but I'm worried that would kill performance.
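(For reference, forcing that in MySQL would look something like the following; SET TRANSACTION applies only to the next transaction started in the session.)

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN;
  UPDATE `BunnyTracker` SET `Bunnies`=`Bunnies`-1 where `id`=1 AND `Bunnies` > 0;
COMMIT;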

Any tips? Thanks in advance.


I believe you need to implement some additional logic to prevent concurrent increment and decrement transactions from both reading the same initial value.

As it stands, if Bunnies = 1, you could have simultaneous increment and decrement transactions that both read the initial value of 1. If the increment commits first (writing 2), its result is simply overwritten: the decrement has already read the initial value of 1, so it will write 0. Whichever of the two operations commits last effectively cancels the other.
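To make the race concrete, here is one hypothetical interleaving (assuming each session reads with a plain, non-locking SELECT and then writes back the value it computed):

-- Bunnies starts at 1; both sessions use non-locking reads.
-- Session A: BEGIN; SELECT `Bunnies` FROM `BunnyTracker` where `id`=1;  -- reads 1
-- Session B: BEGIN; SELECT `Bunnies` FROM `BunnyTracker` where `id`=1;  -- also reads 1
-- Session A: UPDATE `BunnyTracker` SET `Bunnies`=2 where `id`=1; COMMIT;
-- Session B: UPDATE `BunnyTracker` SET `Bunnies`=0 where `id`=1; COMMIT;
-- Final value: 0, so Session A's increment has been silently lost.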

To resolve this issue, you need to implement a locking read using SELECT ... FOR UPDATE (see the MySQL documentation on locking reads). For example:

BEGIN;
  -- The FOR UPDATE read locks the row, so any concurrent transaction that
  -- also does a locking read or an UPDATE must wait until this one commits.
  SELECT `Bunnies` FROM `BunnyTracker` where `id`=1 FOR UPDATE;
  UPDATE `BunnyTracker` SET `Bunnies`=`Bunnies`+1 where `id`=1;
COMMIT;
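The decrement side follows the same pattern; here is a sketch in which the application inspects the value returned by the locking read and backs out if no bunnies remain:

BEGIN;
  SELECT `Bunnies` FROM `BunnyTracker` where `id`=1 FOR UPDATE;
  -- If the value just read is 0, issue ROLLBACK here instead of updating.
  UPDATE `BunnyTracker` SET `Bunnies`=`Bunnies`-1 where `id`=1 AND `Bunnies` > 0;
COMMIT;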


Although it looks to users like multiple transactions occur simultaneously, within the DB they are actually sequential (e.g. entries get written to the redo/transaction log one at a time).

Would it therefore work for you to put a constraint on the table (Bunnies >= 0) and catch the failure of any transaction that attempts to breach it?
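If that appeals, here is a sketch (the constraint name is just an example; note that MySQL only enforces CHECK constraints from version 8.0.16, and on older versions an UNSIGNED column gives you the lower bound instead):

ALTER TABLE `BunnyTracker`
  ADD CONSTRAINT `bunnies_non_negative` CHECK (`Bunnies` >= 0);

-- An UPDATE that would take `Bunnies` below 0 now fails with
-- ER_CHECK_CONSTRAINT_VIOLATED; the application catches the error,
-- the transaction rolls back, and the user can be told no bunnies remain.
-- The upper bound could be enforced the same way, e.g. CHECK (`Bunnies` <= 1000).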
