
how to enforce sqlite select for update transaction behavior in sqlalchemy


Yesterday I was working with some sqlalchemy stuff that needed a "select ... for update" concept to avoid a race condition. Adding .with_lockmode('update') to the query works a treat on InnoDB and Postgres, but for sqlite I end up having to sneak in a

if session.bind.name == 'sqlite':
    session.execute('begin immediate transaction')

before doing the select.

This seems to work for now, but it feels like cheating. Is there a better way to do this?
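
For reference, a less ad hoc way to get the same effect is to hand transaction control over to SQLAlchemy event hooks, so that BEGIN IMMEDIATE is emitted for every SQLite transaction instead of being special-cased in the query code. A rough sketch, assuming SQLAlchemy 1.4+ with the stock pysqlite driver (the engine URL is just an example):

from sqlalchemy import create_engine, event

engine = create_engine("sqlite:///work.db")

@event.listens_for(engine, "connect")
def do_connect(dbapi_connection, connection_record):
    # stop pysqlite from issuing BEGIN on its own so we control it ourselves
    dbapi_connection.isolation_level = None

@event.listens_for(engine, "begin")
def do_begin(conn):
    # take the write lock up front, giving "select ... for update"-like
    # behavior for the whole transaction
    conn.exec_driver_sql("BEGIN IMMEDIATE")

With this in place every transaction on the engine starts with an immediate lock, so the per-dialect check before the select is no longer needed.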


SELECT ... FOR UPDATE OF ... is not supported. This is understandable considering the mechanics of SQLite, in that row locking is redundant as the entire database is locked when updating any bit of it. However, it would be good if a future version of SQLite supported it, for SQL interchangeability reasons if nothing else. The only functionality required is to ensure a "RESERVED" lock is placed on the database if not already there.

excerpt from https://www2.sqlite.org/cvstrac/wiki?p=UnsupportedSql

[EDIT] also see https://sqlite.org/isolation.html thanks @michauwilliam.
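
To see what the quoted text means in practice, here is a minimal sketch using the standard library sqlite3 module: BEGIN IMMEDIATE takes the database-wide write (RESERVED) lock straight away, so a second would-be writer gets "database is locked" instead of quietly proceeding. The file name is only illustrative.

import sqlite3

conn1 = sqlite3.connect("demo.db", isolation_level=None)
conn2 = sqlite3.connect("demo.db", isolation_level=None, timeout=0)

conn1.execute("CREATE TABLE IF NOT EXISTS t (x)")
conn1.execute("BEGIN IMMEDIATE")        # RESERVED lock acquired here

try:
    conn2.execute("BEGIN IMMEDIATE")    # second writer is refused
except sqlite3.OperationalError as exc:
    print(exc)                          # "database is locked"

conn1.execute("COMMIT")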

I think you have to synchronize access to the whole database. The normal synchronization mechanisms should apply here too: file locks, process synchronization, etc.
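
One way to read this suggestion is to serialize database access across processes with an OS-level file lock, taken before touching SQLite at all. A minimal sketch for POSIX systems; the lock-file path is only an example.

import fcntl
from contextlib import contextmanager

@contextmanager
def exclusive_access(lock_path="/tmp/work.db.lock"):
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)   # blocks until we own the lock
        try:
            yield
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)

# usage:
# with exclusive_access():
#     ... run the select/update against SQLite ...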


I think a SELECT FOR UPDATE is relevant for SQLite. There is no way to lock the database BEFORE I start to write. By then it's too late. Here is the scenario:

I have two servers and one database queue table. Each server looks for work, and when it picks up a job it updates the queue table with an "I got it" marker so the other server doesn't pick up the same work. I need to leave the record in the queue in case of recovery.

Server 1 reads the first unclaimed item and has it in memory. Server 2 reads the same record and now has it in memory too. Server 1 then locks the database, updates the record, and unlocks. Server 2 then locks the database, updates, and unlocks. The result is that both servers now work on the same job: the table shows Server 2 has it, and Server 1's update is lost.

I solved this by creating a lock table in the database. Server 1 begins a transaction and writes to the lock table, which locks the database for writing. Server 2 now tries to begin a transaction and write to the lock table, but is prevented. Server 1 then reads the first queue record, updates it with the "I got it" code, deletes the record it just wrote to the lock table, commits, and releases the lock. Now Server 2 is able to begin its transaction, write to the lock table, read the second queue record, update it with its own "I got it" code, delete its lock record, and commit, and the database is available for the next server looking for work.
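
Roughly in code, the lock-table pattern described above could look like the following sketch using the standard sqlite3 module. The table names (job_queue, write_lock), the claimed_by column, and the server_id parameter are made-up illustrations of the scheme, not the actual schema.

import sqlite3

def claim_next_job(db_path, server_id):
    # isolation_level=None disables implicit transactions, so the BEGIN/COMMIT
    # below are exactly what runs against the database
    conn = sqlite3.connect(db_path, isolation_level=None, timeout=30)
    try:
        conn.execute("BEGIN")
        # writing to the lock table first forces this connection to take the
        # database write lock before it ever reads the queue; a second server
        # doing the same INSERT waits here until we commit
        conn.execute("INSERT INTO write_lock (owner) VALUES (?)", (server_id,))
        row = conn.execute(
            "SELECT id FROM job_queue WHERE claimed_by IS NULL "
            "ORDER BY id LIMIT 1"
        ).fetchone()
        if row is not None:
            conn.execute(
                "UPDATE job_queue SET claimed_by = ? WHERE id = ?",
                (server_id, row[0]),
            )
        conn.execute("DELETE FROM write_lock WHERE owner = ?", (server_id,))
        conn.execute("COMMIT")
        return row[0] if row is not None else None
    except Exception:
        conn.execute("ROLLBACK")
        raise
    finally:
        conn.close()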

