It is often noted that simple, serverless database systems (e.g. GDBM, SQLite, etc.) handle concurrent connections poorly.
How does a database server handle concurrent connections to achieve better concurrency?
I think read concurrency is fine in serverless database systems, since there is no limit on how many processes can read a flat file; the only limit should be the available memory. Am I right?
The problem is write concurrency: the file gets locked, so only one write can happen at a time. I think this is also the case for MySQL with the MyISAM engine, whereas InnoDB limits locking to individual rows. Are concurrent writes possible in practice?
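To make the write-lock behaviour I mean concrete, here is a minimal sketch using Python's built-in sqlite3 module (the file name "demo.db" and table "t" are made up for the example): while one connection holds an open write transaction, a second connection's write fails with "database is locked", although a reader can still see the last committed state.

```python
import sqlite3

# Throwaway database with one table.
setup = sqlite3.connect("demo.db")
setup.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, val TEXT)")
setup.close()

# Writer A opens a transaction and takes the database-wide write lock.
a = sqlite3.connect("demo.db", isolation_level=None)
a.execute("BEGIN IMMEDIATE")
a.execute("INSERT INTO t (val) VALUES ('from A')")

# Writer B tries to write while A's transaction is still open.
b = sqlite3.connect("demo.db", timeout=0)  # fail immediately instead of waiting
try:
    b.execute("INSERT INTO t (val) VALUES ('from B')")
except sqlite3.OperationalError as e:
    print("second writer refused:", e)     # "database is locked"

# A reader can still see the last committed state while A holds its lock.
r = sqlite3.connect("demo.db")
print(r.execute("SELECT COUNT(*) FROM t").fetchall())  # [(0,)] -- A has not committed yet
r.close()

a.execute("COMMIT")
a.close()
```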
Overall, how is the concurrency of a database system with a server (e.g. MySQL) better than that of a serverless one (e.g. SQLite)?
Many database servers support transactions and row-level locking.
Read concurrency without writes is easy to obtain; it is harder to allow readers while updates are being made. Databases with row-level locking allow reads and writes of other rows during an update, whereas flat files and SQLite tables can't be read while an update is in progress.
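As a rough sketch of what row-level locking buys you, assuming a local MySQL server with a database named "demo", credentials "demo"/"demo", and the mysql-connector-python package (all placeholders for this illustration): two sessions can each update a different row of the same InnoDB table at the same time, and neither waits for the other.

```python
import mysql.connector

def connect():
    # Placeholder connection details for the sketch.
    return mysql.connector.connect(host="localhost", user="demo",
                                   password="demo", database="demo")

a, b = connect(), connect()

ca = a.cursor()
ca.execute("""CREATE TABLE IF NOT EXISTS accounts (
                 id INT PRIMARY KEY, balance INT) ENGINE=InnoDB""")
ca.execute("REPLACE INTO accounts VALUES (1, 100), (2, 100)")
a.commit()

# Session A updates row 1; session B updates row 2 before A commits.
# InnoDB locks only the rows each statement touches, so B does not block.
a.start_transaction()
ca.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

b.start_transaction()
cb = b.cursor()
cb.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")

a.commit()
b.commit()
```

With a MyISAM table or an SQLite file, the second UPDATE would have to wait for (or fail against) the lock covering the whole table.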
In most real systems you'll have a mix of reads and writes. Concurrent writes in practice? Think of Stack Overflow or any busy forum: there will be plenty of concurrent writes.
SQLite supports only table-level locking, so with many concurrent users making updates it would be very slow. On the other hand, embedded databases require almost zero setup and work well for a single user, or even on a web server with a limited number of concurrent users.
I've heard on the Trac mailing list that the SQLite backend is practical only up to about a handful of developers; beyond that, a switch to MySQL or Postgres becomes essential.