It seems like Continuous Archiving and Point-In-Time Recovery (PITR) for PostgreSQL is much more complicated to set up for backup and restore. What advantages would it have over the SQL dump method? My environment is Windows Server 2008.
For large databases (hundreds of GB or even several TB) the dump can take a substantial amount of time. It will also slow down the server while the backup is running, affecting the performance of your applications.
The potential amount of data lost in case of, say, a power outage is far smaller with a PITR backup than with a dump (unless you take a dump approximately every minute, which is not feasible for any real-world database).
If you're OK with a snapshot of the database taken every week or two, and don't mind losing any activity since the last pg_dump if your server dies, then pg_dump is a reasonable solution. If you want the most data you'll lose in a crash to be minutes or possibly seconds of information, you have to use something real-time like PITR log-shipping. Not only does this keep your backup more up to date, you can also bring that backup server up much, much faster to replace the master if it fails. Reloading your database from a pg_dump file can take days if it's big, whereas you can configure PostgreSQL as a Warm Standby to make switching to the backup almost instant.
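For illustration, PITR starts with enabling WAL archiving in postgresql.conf and taking a base backup while archiving is active. The paths and label below are assumptions for a Windows setup (exact settings vary by PostgreSQL version) and would need adjusting for your environment:

```shell
# Sketch of WAL archiving settings in postgresql.conf -- hypothetical paths.
# archive_mode = on
# archive_command = 'copy "%p" "C:\\wal_archive\\%f"'   # Windows copy; use cp/test on Unix
#
# Then take a base backup while archiving is active (run in psql):
#   SELECT pg_start_backup('weekly_base');
#   -- copy the data directory with a file-system tool --
#   SELECT pg_stop_backup();
```

Recovery then consists of restoring the base backup and replaying the archived WAL files up to the failure point (or any earlier point in time).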
I see two main advantages:
- With continuous archiving, restoring the database is much faster than replaying a regular dump. With a big database (say, 50 GB) you can expect a dump restore to take a couple of hours.
- You can keep an up-to-date standby instance of your database in case you need it for something else (e.g. read-only queries).
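The standby instance mentioned above is typically driven by a recovery configuration on the second server that keeps fetching and replaying archived WAL. This is a hedged sketch, assuming a hypothetical archive directory C:\wal_archive; option names changed across PostgreSQL versions, so check the docs for yours:

```shell
# Sketch of recovery.conf on the standby server (pre-12 style) -- hypothetical paths.
# restore_command = 'copy "C:\\wal_archive\\%f" "%p"'   # fetch archived WAL segments
# standby_mode = 'on'                                   # keep replaying WAL as it arrives (9.0+)
#
# In the standby's postgresql.conf, hot_standby = on (9.0+) additionally
# allows read-only queries while the standby is replaying WAL.
```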