
Maximum number of rows in a sqlite table


Given a simple sqlite3 table (create table data (key PRIMARY KEY, value)) with a key size of 256 bytes and a value size of 4096 bytes, what is the limit (ignoring disk space limits) on the maximum number of rows in this sqlite3 table? Are there limits associated with the OS (win32, Linux, or Mac)?


As of January 2017, the SQLite limits page frames the practical answer in terms of the maximum database size, which is 140 terabytes:

Maximum Number Of Rows In A Table

The theoretical maximum number of rows in a table is 2^64 (18446744073709551616 or about 1.8e+19). This limit is unreachable since the maximum database size of 140 terabytes will be reached first. A 140 terabytes database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.

So with a maximum database size of 140 terabytes, you'd be lucky to get near even 1 trillion rows: that 1e+13 figure assumes nearly empty rows, and in a useful table the row count is constrained by the size of the data. In practice, you could probably fit tens of billions of rows of real data in a 140 TB database.
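A quick back-of-envelope check for the schema in the question (Python; the 16 bytes of per-row overhead is an assumption for illustration, since real overhead varies with record headers and page fill):

    # Rough estimate: rows of (256-byte key + 4096-byte value) that fit
    # in the 140 TB ceiling quoted above. Per-row overhead is assumed.
    DB_MAX_BYTES = 140 * 1024**4       # 140 TiB
    ROW_BYTES = 256 + 4096 + 16        # key + value + assumed overhead

    print(f"~{DB_MAX_BYTES // ROW_BYTES:,} rows")  # ~35 billion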


I have a 3.3 GB SQLite database holding 25 million rows of numeric logs, and I run calculations on them; it works fast and well.


In SQLite3, field sizes aren't fixed. The engine allocates only as much space as each cell needs.
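A minimal sketch of this with Python's built-in sqlite3 module, storing a 3-byte and a 4096-byte value in the same column:

    import sqlite3

    # Cells take only the space their values need: the same BLOB column
    # holds a tiny value and a 4 KB value side by side.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE data (key TEXT PRIMARY KEY, value BLOB)")
    con.execute("INSERT INTO data VALUES (?, ?)", ("small", b"abc"))
    con.execute("INSERT INTO data VALUES (?, ?)", ("large", b"x" * 4096))

    for key, size in con.execute("SELECT key, length(value) FROM data"):
        print(key, size)  # small 3 / large 4096
    con.close()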

For the file size limits, see this SO question:
What are the performance characteristics of sqlite with very large database files?


I have a 7.5 GB SQLite database which stores 10.5 million rows. Querying is fast as long as you have the right indexes. To make inserts run quickly, you should use transactions. I also found it's better to create the indexes after all rows have been inserted; otherwise, insert speed is quite slow.
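A sketch of that pattern using Python's sqlite3 module (the file name, table, and row count here are illustrative, not from the answer above):

    import sqlite3

    con = sqlite3.connect("logs.db")  # illustrative file name

    with con:  # one transaction; commits on success, rolls back on error
        con.execute("CREATE TABLE IF NOT EXISTS logs (ts INTEGER, value REAL)")
        # Batch all inserts into a single transaction: committing per row
        # is what makes naive bulk inserts slow.
        con.executemany("INSERT INTO logs VALUES (?, ?)",
                        ((i, i * 0.5) for i in range(1_000_000)))
        # Build the index only after the bulk load, so each insert
        # doesn't pay the cost of maintaining it.
        con.execute("CREATE INDEX IF NOT EXISTS idx_logs_ts ON logs(ts)")

    con.close()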


The answer you want is right here.

Each OS you mentioned supports multiple file system types, so the actual limits are per-filesystem, not per-OS. The full constraint matrix is hard to summarize here, but while some file systems cap file sizes, every major OS kernel today supports at least one file system that permits extremely large files.

The maximum page size of an sqlite3 database is 65536 bytes (2^16), and the number of pages is capped at a few billion (the 140-terabyte figure quoted above corresponds to 2^31 pages of 64 KiB each), although reaching those maximums requires some configuration. An index entry must reference a page number, but the result is likely to be that an OS or environment limit is reached first.
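For reference, a sketch of inspecting and setting these knobs through PRAGMAs with Python's sqlite3 module (the file name is illustrative; the page size only takes effect on a database that has no content yet):

    import sqlite3

    con = sqlite3.connect("big.db")  # illustrative; must be a fresh database

    # page_size must be set before the first table is created;
    # 65536 bytes is the largest value SQLite accepts.
    con.execute("PRAGMA page_size = 65536")
    con.execute("CREATE TABLE data (key TEXT PRIMARY KEY, value BLOB)")

    # The database size ceiling is page_size * max_page_count.
    print("page size:     ", con.execute("PRAGMA page_size").fetchone()[0])
    print("max page count:", con.execute("PRAGMA max_page_count").fetchone()[0])
    con.close()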


Essentially, there are no real limits.

See http://www.sqlite.org/limits.html for the details.


No hard limits, but past a certain point an SQLite database becomes impractical, and PostgreSQL is by far the best free database for huge data sets. In my case, that point was around 1 million rows on my 64-bit quad-core, dual-processor Linux machine with 8 GB RAM and Raptor hard disks; there, PostgreSQL was unbeatable, even by a tuned MySQL database. (Posted in 2011.)
