On 08/10/2012 11:34 AM, Florian Weimer wrote:
> * jeff:
>
>> I have a server running Citadel which uses Berkeley DB on the back end
>> and I am trying to transfer the VM that is running it to a different
>> server and when I just copy the virtual hard drive and load it the
>> database is corrupt and cannot be fixed.
>
> This is odd. Are you copying a volume which is being written to? If
> you copy a read-only snapshot, it should work.
>
Yes, it is being written to right now because it is in production. I will
see about scheduling some downtime for this.
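During that window, copying from a read-only snapshot could look roughly like
this (a sketch only; the volume group, logical volume, and backup path are
hypothetical and would need to match the actual setup):

```shell
# Sketch: copy the VM's disk from a read-only LVM snapshot so the
# Berkeley DB files are quiescent during the copy. All names here
# (vg0, citadel-disk, the backup path) are hypothetical.
VG=vg0
LV=citadel-disk
SNAP=citadel-snap
DEST=/backup/citadel-disk.img

if command -v lvcreate >/dev/null 2>&1 && [ -e "/dev/$VG/$LV" ]; then
    # a 1 GB copy-on-write area absorbs writes that land during the copy
    lvcreate --snapshot --size 1G --name "$SNAP" "/dev/$VG/$LV"
    dd if="/dev/$VG/$SNAP" of="$DEST" bs=4M
    lvremove -f "/dev/$VG/$SNAP"
    copied=yes
else
    echo "LVM volume not found; adjust VG/LV for your setup"
    copied=no
fi
```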
>> 1. How long should it take to dump a database? Mine is about 7.4 GB
>> with about 200 MB of log files right now.
>
> That mostly depends on fragmentation. About an hour would not be
> entirely unheard of.
An hour would not be a problem, but the dumps I have attempted have
taken 8-10 hours and either never finished or ended with an error.
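For reference, what I have been attempting is essentially this (paths are
hypothetical, and I am assuming Citadel's database files follow the usual
cdb.* naming in its data directory):

```shell
# Sketch: dump each Citadel database file to Berkeley DB's portable
# flat-text format with db_dump, to be reloaded later with db_load on
# the new host. DATA_DIR and OUT_DIR are hypothetical paths.
DATA_DIR=/usr/local/citadel/data
OUT_DIR=/var/backups/citadel-dump

mkdir -p "$OUT_DIR" 2>/dev/null || OUT_DIR=$(mktemp -d)
if command -v db_dump >/dev/null 2>&1 && [ -d "$DATA_DIR" ]; then
    for db in "$DATA_DIR"/cdb.*; do
        # -h sets the environment home, -f the output file
        db_dump -h "$DATA_DIR" -f "$OUT_DIR/$(basename "$db").dump" "$db"
    done
    echo "dumps written to $OUT_DIR"
else
    echo "db_dump not found or $DATA_DIR missing; nothing dumped"
fi
```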
>
>> 2. What is the best way to transfer the database to a different
>> system? I would prefer a method that does not involve downtime as this
>> is a production email system.
>
> You need to make the database read-only at one point, otherwise some
> writes will be lost. And if you change versions of Berkeley DB, you
> have to perform environment recovery, which also requires downtime.
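If I do end up changing Berkeley DB versions, I assume the recovery step
after the move would look something like this, with Citadel stopped (the
data directory path is hypothetical):

```shell
# Sketch: with Citadel stopped, run environment recovery on the moved
# database, which is required after a Berkeley DB version change.
DATA_DIR=/usr/local/citadel/data

if command -v db_recover >/dev/null 2>&1 && [ -d "$DATA_DIR" ]; then
    db_recover -h "$DATA_DIR" -v   # add -c for catastrophic recovery from logs
    recovered=yes
else
    echo "db_recover not found or data dir missing; skipping"
    recovered=no
fi
```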
I have reason to believe I am pushing the limits of the hard drive I/O,
even though the drives I am using are not exactly slow; the copy speed
does not seem to be affected by the other virtual machines running on
that host. From other tests I have verified that memory and CPU are
barely used at all, but disk I/O, mostly waiting on random seeks, may be
the problem. I have also noticed what appears to be corruption on some
other VMs, which as far as I can tell is related to drive I/O, since
moving to the new system with its larger RAID array seems to have fixed
those problems.
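To confirm the I/O-bound theory I have been checking roughly like this
(iostat comes from the sysstat package; the /proc/stat fallback is just a
rough signal):

```shell
# Sketch: check whether the box is actually I/O bound. iostat shows
# per-device utilization and await; if it is missing, the cumulative
# iowait ticks in /proc/stat give a rough indication instead.
if command -v iostat >/dev/null 2>&1; then
    iostat -x 1 2        # extended stats, 1 second apart, 2 samples
else
    # field 6 on the aggregate "cpu" line is time spent waiting on I/O
    awk '/^cpu /{print "cumulative iowait ticks:", $6}' /proc/stat
fi
```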