I have a LIVE version of a MySQL database with 5 tables, and a TEST version.
I am continually using phpMyAdmin to copy each table from the LIVE version to the TEST version.
Does anyone have the MySQL query statement to make a complete copy of a database? The query would need to account for structure, data, auto-increment values, and anything else associated with the tables that needs to be copied.
Thanks.
OK, after a lot of research, googling, and reading through everyone's comments here, I produced the following script, which I now run from the browser address bar. I tested it and it does exactly what I needed it to do. Thanks for everyone's help.
<?php
function duplicateTables($sourceDB = NULL, $targetDB = NULL) {
    // Connect to the database server
    $link = mysql_connect('{server}', '{username}', '{password}') or die(mysql_error());
    // List every table in the source database
    $result = mysql_query('SHOW TABLES FROM ' . $sourceDB) or die(mysql_error());
    while ($row = mysql_fetch_row($result)) {
        // Drop any stale copy, recreate the structure (including indexes
        // and AUTO_INCREMENT attributes), then copy the data across
        mysql_query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die(mysql_error());
        mysql_query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die(mysql_error());
    }
    mysql_free_result($result);
    mysql_close($link);
} // end duplicateTables()
duplicateTables('liveDB', 'testDB');
?>
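Note that the mysql_* functions used above are deprecated (and removed in PHP 7); here is a minimal sketch of the same loop using mysqli, with the same placeholder credentials:
<?php
function duplicateTablesMysqli($sourceDB, $targetDB) {
    // Connect with the non-deprecated mysqli API
    $link = new mysqli('{server}', '{username}', '{password}');
    if ($link->connect_error) {
        die($link->connect_error);
    }
    $result = $link->query('SHOW TABLES FROM `' . $sourceDB . '`') or die($link->error);
    while ($row = $result->fetch_row()) {
        // Same drop / recreate / copy / optimize sequence as above
        $link->query('DROP TABLE IF EXISTS `' . $targetDB . '`.`' . $row[0] . '`') or die($link->error);
        $link->query('CREATE TABLE `' . $targetDB . '`.`' . $row[0] . '` LIKE `' . $sourceDB . '`.`' . $row[0] . '`') or die($link->error);
        $link->query('INSERT INTO `' . $targetDB . '`.`' . $row[0] . '` SELECT * FROM `' . $sourceDB . '`.`' . $row[0] . '`') or die($link->error);
        $link->query('OPTIMIZE TABLE `' . $targetDB . '`.`' . $row[0] . '`') or die($link->error);
    }
    $result->free();
    $link->close();
}
duplicateTablesMysqli('liveDB', 'testDB');
?>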
Depending on your access to the server, I suggest using the straight mysql and mysqldump commands. That's all phpMyAdmin is doing under the hood.
Reference material for mysqldump:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
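For example, a minimal sketch that clones liveDB into an existing testDB in one pipeline (the host, credentials, and database names are placeholders):
# Dump liveDB and load it straight into testDB (which must already exist);
# --routines and --triggers also carry over stored routines and triggers.
mysqldump -h localhost -uusername -ppassword --routines --triggers liveDB \
    | mysql -h localhost -uusername -ppassword testDB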
There is a PHP class for that; I haven't tested it yet.
From its description:
This class can be used to backup a MySQL database.
It queries a database and generates a list of SQL statements that can be used later to restore the database **tables structure** and their contents.
I guess this is what you need.
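As an aside, and only an assumption about how such a backup class typically works internally, it usually records the output of two statements per table (mytable is a placeholder):
-- Reproduces the structure, indexes, and AUTO_INCREMENT value
SHOW CREATE TABLE mytable;
-- Each returned row gets rewritten by the class as an INSERT statement
SELECT * FROM mytable;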
Hi, here you can use a simple bash script to back up the whole database.
######### SNIP BEGIN ##########
## Copy from here #############
#!/bin/bash
# To use the script, run:
#   sh backup.sh DBNAME HOST USER PASS | sh
# where DBNAME is the name of the database to back up,
# e.g. backing up mydb's data:
#   sh backup.sh mydb hostname username pass | sh
echo "#sh backup.sh mydb hostname username pass | sh"
DB=$1
host=$2
user=$3
pass=$4
NOW=$(date +"%m-%d-%Y")
FILE="$DB.backup.$NOW.gz"
# The dump command; the script only echoes it, and the
# trailing '| sh' in the usage line actually executes it:
cmd="mysqldump -h $host -u$user -p$pass $DB | gzip -9 > $FILE"
echo $cmd
############ END SNIP ###########
EDIT
If you'd like to clone the backed-up database, just edit the dump to change the database name, then load it back through gunzip (the dump is gzip-compressed):
gunzip < yourdump.gz | mysql -uusername -ppass
Cheers, Arman.
Well, in script form, you could try using the CREATE TABLE ... LIKE syntax, iterating through a list of tables, which you can get from SHOW TABLES.
The only problem is that it does not natively recreate foreign keys (column attributes and indexes are copied, but FOREIGN KEY definitions are not), so you would have to list them and create them as well. Then a few INSERT ... SELECT calls to get the data in.
If your schema never changes, only the data, then create a script that replicates the table structure and just do the INSERT ... SELECT business in a transaction.
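For example, a minimal sketch for a single table (liveDB, testDB, and mytable are placeholder names):
-- Recreate the structure: columns, attributes, and indexes are copied,
-- but FOREIGN KEY definitions are not
DROP TABLE IF EXISTS testDB.mytable;
CREATE TABLE testDB.mytable LIKE liveDB.mytable;
-- Copy the rows inside a transaction; the AUTO_INCREMENT counter
-- advances past the copied values automatically
START TRANSACTION;
INSERT INTO testDB.mytable SELECT * FROM liveDB.mytable;
COMMIT;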
Failing that, mysqldump, as the others say, is pretty easy to get working from a script. I have a daily-firing cron job that dumps all manner of databases from my datacenter servers, connects via FTPS to my location, and sends all the dumps across. It can be done quite effectively. Obviously you have to make sure such facilities are locked down, but again, it's not overly hard.
As per the code request:
The code is proprietary, but I'll show you the critical section that you need. This is from the middle of a foreach loop, hence the continue statements and the $c-prefixed variables (I use that prefix to indicate current-loop variables). The echo commands could be whatever you want; this is a cron script, so echoing the current status was appropriate. The flush() lines are helpful when you run the script from the browser, as the output is sent up to that point, so the browser results fill in as the script runs rather than all turning up at the end. The ftp_fput() line is obviously down to my situation of uploading the dump somewhere, and it uploads directly from the pipe - you could instead open another process and pipe the output into a mysql process to replicate the database, provided suitable amendments were made.
$cDumpCmd = $mysqlDumpPath . ' -h' . $dbServer . ' -u' . escapeshellarg($cDBUser) .
            ' -p' . escapeshellarg($cDBPassword) . ' ' . $cDatabase .
            (!empty($dumpCommandOptions) ? ' ' . $dumpCommandOptions : '');
// stdin, stdout, and stderr pipes for the mysqldump process
$cPipeDesc = array(0 => array('pipe', 'r'),
                   1 => array('pipe', 'w'),
                   2 => array('pipe', 'w'));
$cPipes = array();
$cStartTime = microtime(true);
$cDumpProc = proc_open($cDumpCmd, $cPipeDesc, $cPipes, '/tmp', array());
if (!is_resource($cDumpProc)) {
    echo "failed.\n";
    continue;
} else {
    echo "success.\n";
}
echo "DB: " . $cDatabase . " - Uploading Database...";
flush();
// Upload straight from the dump process's stdout pipe
$cUploadResult = ftp_fput($ftpConn, $dbFileName, $cPipes[1], FTP_BINARY);
$cStopTime = microtime(true);
if ($cUploadResult) {
    echo "success (" . round($cStopTime - $cStartTime, 3) . " seconds).\n";
    $databaseCount++;
} else {
    echo "failed.\n";
    continue;
}
// Collect anything mysqldump wrote to stderr before closing the pipes
$cErrorOutput = stream_get_contents($cPipes[2]);
foreach ($cPipes as $cFHandle) {
    fclose($cFHandle);
}
$cDumpStatus = proc_close($cDumpProc);
if ($cDumpStatus != 0) {
    echo "DB: " . $cDatabase . " - Dump process caused an error:\n";
    echo $cErrorOutput . "\n";
    continue;
}
flush();
If you're using Linux or macOS, here is a single line to clone a database.
mysqldump -uUSER -pPASSWORD -hsample.host --single-transaction --quick test | mysql -uUSER -pPASSWORD -hqa.sample.host --database=test
The 'advantage' here is that the dump takes a consistent view of the database while it's making the copy, so you end up with a consistent copy. It can also mean your production database is tied up for the duration of the copy, which generally isn't a good thing.
Without locks or transactions, if something writes to the database while you're making a copy, you could end up with orphaned data in your copy.
To get a good copy without impacting production, you should create a replication slave on another server. The slave is updated in real time, and you can run the same command on the slave without impacting production.