I have a table in PostgreSQL that I need to read into memory. It is a very small table, with only three columns and 200 rows, and I just do a select col1, col2, col3 from my_table on the whole thing.
On the development machine this is very fast (less than 1 ms), even though that machine is a VirtualBox VM running inside a Mac OS FileVault volume.
But on the production server it consistently takes 600 ms. The production server may have lower specs, and its database version is older, too (7.3.x), but I don't think that alone can explain the huge difference.
In both cases, I am running explain analyze on the db server itself, so it cannot be network overhead. The query execution plan is in both cases a simple sequential full table scan. There was also nothing else going on on the production machine at the time, so contention is out, too.
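For reference, this is roughly what I run on both machines (with my_table standing in for the real table name):

    EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;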
How can I find out why this is so slow, and what can I do about it?
Sounds like perhaps you haven't been VACUUMing this database properly? 7.3 is far too old to have autovacuum, so this is something you must do manually (a cron job is recommended). If this table has received many updates over time and VACUUM has never been run, it will be very slow to access.
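A minimal sketch of what that maintenance could look like, assuming the table from the question is called my_table. Note that plain VACUUM only marks dead rows as reusable; a table that is already badly bloated needs VACUUM FULL (or a dump and reload) to actually shrink on disk:

    -- routine maintenance, e.g. run nightly from cron via psql
    VACUUM ANALYZE my_table;

    -- one-off recovery for an already bloated table
    -- (takes an exclusive lock on the table while it runs)
    VACUUM FULL my_table;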
It's clearly table bloat. Run VACUUM ANALYZE on the table in question. Also: upgrade. 7.3 is not even supported anymore.
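You can also check for bloat directly. pg_class records how many disk pages the table occupies, and 200 rows of three columns should fit in a handful of 8 kB pages; hundreds or thousands of pages would confirm bloat. Assuming the table is named my_table as above (and noting that these statistics are only refreshed by VACUUM/ANALYZE, hence the ANALYZE first):

    ANALYZE my_table;
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'my_table';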
What happens if you run the query several times? The first run should be slow, but the following ones should be faster, because the first execution puts the data into the cache.
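A simple way to test this, using the same query as in the question: run it twice under EXPLAIN ANALYZE and compare the reported times. If caching were the problem, the second run should be much faster; with bloat, both runs stay slow because every dead page is still read.

    EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;  -- cold run
    EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;  -- warm run: compare the 'Total runtime' lines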
BTW: if you do a SELECT ... FROM without any restriction, it's 100% normal that you get a seq scan. You have to scan the whole table to retrieve every row, and since you have no restrictions, there is no point in an index scan.
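For contrast, a restricted query against an indexed column can use an index. A sketch, assuming a hypothetical index on col1 (on a table this small the planner may still prefer a seq scan, since reading a few pages is cheaper than an index lookup):

    CREATE INDEX my_table_col1_idx ON my_table (col1);  -- hypothetical index
    EXPLAIN SELECT col1, col2, col3 FROM my_table WHERE col1 = 42;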
Don't hesitate to post the result of your EXPLAIN ANALYZE query.
PostgreSQL 7.3 is really old; is there no option to upgrade to a more modern version?