We are using the Berkeley DB Java Edition core API to read and write CDR files, and we are having a problem with log files.
When we are writing 9 lakh records to the database, multiple log files are created with huge sizes (1.08 GB). We want to know why multiple log files are created while using transactions. Is it due to every commit statement issued after writing data to the database, or is there another reason?
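For illustration, here is a minimal sketch of the per-record "write then commit" pattern described above. All class, method, and variable names in this sketch are assumptions for the sake of the example, not the asker's actual code:

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.Transaction;

public class CdrWriter {
    // Each record is written inside its own transaction and committed
    // immediately; every commit appends a commit record (and usually a
    // flush) to the current .jdb log file.
    static void writeRecord(Environment env, Database db, byte[] key, byte[] value) {
        Transaction txn = env.beginTransaction(null, null);
        try {
            db.put(txn, new DatabaseEntry(key), new DatabaseEntry(value));
            txn.commit();
        } catch (RuntimeException e) {
            txn.abort();
            throw e;
        }
    }
}
```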
This is normal. The log files contain ongoing transactions, as well as information you can use to recover the database (which means they are suitable to use for backup and disaster recovery).
Read chapter 5 of the documentation carefully, as well as this section which explains the periodic maintenance you need to do on your database.
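As a rough sketch of the kind of periodic maintenance that documentation describes, you can run the JE log cleaner and then force a checkpoint so that cleaned .jdb files become eligible for deletion. The class name and the decision to loop until nothing more can be cleaned are assumptions for this example:

```java
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Environment;

public class LogMaintenance {
    static void compactLogs(Environment env) {
        // Each cleanLog() call returns the number of log files it was able
        // to clean in that pass; keep going until nothing more is reclaimed.
        while (env.cleanLog() > 0) {
            // intentionally empty: cleaning happens in the loop condition
        }
        // A forced checkpoint lets JE delete (or archive) the cleaned files.
        CheckpointConfig force = new CheckpointConfig();
        force.setForce(true);
        env.checkpoint(force);
    }
}
```

Note that under normal operation the background cleaner and checkpointer do this for you; the explicit calls above are mainly useful before closing the environment or when reclaiming disk space offline.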