
Dump postgres data with indexes

开发者 https://www.devze.com 2023-04-09 07:45 Source: Web

I've got a Postgres 9.0 database that I frequently take data dumps of.

This database has a lot of indexes, and every time I restore a dump, Postgres starts a background task, the vacuum cleaner (is that right?). That task consumes a lot of processing time and memory to recreate the indexes of the restored dump.

My question is:

  1. Is there a way to dump the database data together with the indexes of that database?
  2. If there is, is it worth the effort (i.e., will dumping the data with the indexes perform better than letting the vacuum cleaner run)?
  3. Oracle has the "Data Pump" utility, a faster way to run imports and exports. Does Postgres have something similar?

Thanks in advance, Andre


If you run pg_dump twice, once with --schema-only and once with --data-only, you can cut the schema-only output into two parts: the first with the bare table definitions and the final part with the constraints and indexes. Something similar can probably be done with pg_restore.
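A minimal sketch of that split; the database name `mydb` and file names are assumptions, not from the original post:

```shell
# Two plain-text dumps: schema (tables + constraints + indexes) and data.
pg_dump --schema-only mydb > schema.sql
pg_dump --data-only   mydb > data.sql

# Alternatively, with the custom archive format, pg_restore can do the
# splitting for you via an editable table of contents:
pg_dump -Fc mydb > mydb.dump
pg_restore -l mydb.dump > toc.list        # list the archive's contents
# Edit toc.list (e.g. comment out INDEX and CONSTRAINT entries), then:
pg_restore -L toc.list -d mydb_copy mydb.dump
```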


Best practice is probably to:

  • restore the schema without indexes
  • and possibly without constraints,
  • load the data,
  • then create the constraints,
  • and create the indexes.
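The steps above can be sketched with pg_restore's section support (`--section` requires PostgreSQL 9.2+; on 9.0 you would use an edited `pg_restore -l` list instead). The names `mydb`, `mydb_copy`, and `mydb.dump` are assumptions:

```shell
# Dump once in the custom format.
pg_dump -Fc mydb > mydb.dump

# Restore in three passes, matching the list above:
pg_restore --section=pre-data  -d mydb_copy mydb.dump  # bare table definitions
pg_restore --section=data      -d mydb_copy mydb.dump  # bulk-load the rows
pg_restore --section=post-data -d mydb_copy mydb.dump  # constraints and indexes last
```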

If an index exists, a bulk load makes PostgreSQL write both to the table and to the index. A bulk load will also make your table statistics useless. But if you load the data first and then create the index, the stats are automatically up to date.

We store scripts that create indexes and scripts that create tables in different files under version control. This is why.

In your case, changing autovacuum settings might help you. You might also consider disabling autovacuum for some tables or for all tables, but that might be a little extreme.
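As a sketch of per-table autovacuum tuning (a storage parameter that exists in this PostgreSQL era; the table name `big_table` is an assumption):

```shell
# Disable autovacuum for one hot table only.
psql -d mydb -c "ALTER TABLE big_table SET (autovacuum_enabled = false);"

# Or disable it cluster-wide in postgresql.conf (the extreme option):
#   autovacuum = off
```

Remember that disabling autovacuum means you must VACUUM and ANALYZE manually, or bloat and stale statistics will accumulate.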

