I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency, as stated at http://www.sqlite.org/faq.html#q6 , but I am convinced that is not the problem.
All of the threads both read from and write to this database. Whenever I do a write, I have the following construct:
try:
    Cursor.execute(q, params)
    Connection.commit()
except sqlite3.IntegrityError:
    notify()  # placeholder: report the constraint violation
except sqlite3.OperationalError:
    print(sys.exc_info())
    print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
    time.sleep(3)
    retry()   # placeholder: re-run this block
On some runs I won't even hit this block, but when I do, it never comes out of it (it keeps retrying, and I keep getting the 'database is locked' error from exc_info). If I understand the reader/writer lock usage correctly, some amount of waiting should resolve the contention. What this sounds like is a deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is a simple one-off statement. Some threads, however, keep the same connection for their operations (a mix of SELECTs, INSERTs, and other modifiers).
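For clarity, the threads that reuse a connection do something roughly like this (a simplified sketch; the table, column, and function names are illustrative, not my real schema):

import sqlite3

def worker(db_path, items):
    conn = sqlite3.connect(db_path)   # one connection, kept for the thread's lifetime
    cur = conn.cursor()
    for item_id, value in items:
        cur.execute("SELECT value FROM data WHERE id = ?", (item_id,))
        cur.fetchone()
        cur.execute("INSERT INTO log (item_id, value) VALUES (?, ?)", (item_id, value))
        conn.commit()                 # committed immediately, no explicit transactions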
I would appreciate it if you could shed some light on this, and also suggest ways of fixing it (besides switching to a different database engine).
SQLite locks the entire database every time you try to write to it. Is there any chance one of your threads is constantly writing? Is only one thread hitting the database lock, or all but one of them?
Here is a not-so-elegant temporary fix: use an external exclusive lock around the writes rather than depending on SQLite's internal locking. The block in the question above is essentially wrapped with a system-wide lock that every thread has to acquire before writing. Since sqlite3 locks the entire database when writing anyway, I am hoping this doesn't add much overhead.
Reads, on the other hand, can proceed without acquiring the lock, which I think should work fine with the less restrictive shared (reader) lock SQLite needs. A minimal sketch follows.
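To make this concrete, here is a rough sketch of what I mean, assuming all of the writers live in the same process so a threading.Lock is enough (a cross-process file lock would be needed otherwise); write_lock, do_write, and do_read are illustrative names:

import sqlite3
import threading
import time

write_lock = threading.Lock()       # one process-wide lock shared by all writer threads

def do_write(connection, cursor, q, params):
    with write_lock:                # serialize all writes ourselves
        while True:
            try:
                cursor.execute(q, params)
                connection.commit()
                return
            except sqlite3.OperationalError:
                # the database can still be locked briefly by a reader finishing up
                time.sleep(3)

def do_read(cursor, q, params):
    # reads skip the lock entirely and rely on SQLite's shared read locks
    cursor.execute(q, params)
    return cursor.fetchall()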
I also suffered from this on a website that had ~200 users per day (which translated to maybe 1000 page views). Retries just didn't help (I eventually increased their number up to 100, with short sleeps in between). I don't remember which version of SQLite it was, but I learned the lesson that if you want reliable concurrent writes to an SQLite database, you are better off using another database such as MySQL or PostgreSQL.
This holds even if you work around the OperationalErrors, because eventually concurrent writes to an SQLite file will kill performance for good.