18

I have a mysqldump file of multiple databases (5). One of the databases takes a very long time to load. Is there a way to split the mysqldump file by database, or just tell mysql to load only one of the specified databases?

Manish


7 Answers

27

This Perl script should do the trick.

#!/usr/bin/perl -w
#
# splitmysqldump - split mysqldump file into per-database dump files.

use strict;
use warnings;

my $dbfile;
my $dbname = q{};
my $header = q{};

while (<>) {

    # Beginning of a new database section:
    # close currently open file and start a new one
    if (m/-- Current Database\: \`([-\w]+)\`/) {
        if (defined $dbfile && tell($dbfile) != -1) {
            close $dbfile or die "Could not close file!";
        }
        $dbname = $1;
        open $dbfile, ">>", "$1_dump.sql" or die "Could not create file!";
        print $dbfile $header;
        print "Writing file $1_dump.sql ...\n";
    }

    if (defined $dbfile && tell($dbfile) != -1) {
        print $dbfile $_;
    }

    # Catch the dump file header at the beginning,
    # so it can be prepended to each separate dump file.
    if (!$dbname) { $header .= $_; }
}

# Close the last file, if any database section was found at all.
close $dbfile or die "Could not close file!" if defined $dbfile;

Run it on the dump file containing all the databases:

./splitmysqldump < all_databases.sql
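
Once split, a single database can be loaded on its own. A minimal sketch, assuming the script produced somedb_dump.sql (each per-database section of a full dump normally carries its own CREATE DATABASE and USE statements, so no target database needs to be named):

mysql -u root -p < somedb_dump.sql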
Answered 2010-12-03T16:24:35.650
13

Alternatively, each database can be saved straight into its own file...

#!/bin/bash
# List all databases (sed strips the "Database" header line from the output),
# then dump each one to its own gzipped file.
dblist=$(mysql -u root -e "show databases" | sed -n '2,$ p')
for db in $dblist; do
    mysqldump -u root "$db" | gzip --best > "$db.sql.gz"
done
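
Restoring one of the resulting files later could look like this (somedb is a placeholder; the database has to exist before the import):

mysql -u root -e "CREATE DATABASE IF NOT EXISTS somedb"
gunzip < somedb.sql.gz | mysql -u root somedb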
Answered 2012-08-03T11:46:21.563
1

Here is a great blog post I keep referring back to for doing this kind of thing with a mysqldump.

http://gtowey.blogspot.com/2009/11/restore-single-table-from-mysqldump.html

You can easily extend it to extract individual databases.
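
The post works at the table level, but the same marker-based idea applies per database. A rough awk sketch (not from the post; mydb is a placeholder, and the dump's global header/footer are not copied over):

# Print only the lines from mydb's "-- Current Database:" marker
# up to (but not including) the next database's marker.
awk '/^-- Current Database: / { d = ($0 ~ /`mydb`/) } d' all_databases.sql > mydb_dump.sql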

Answered 2010-12-03T17:34:16.603
1

I've been working on a Python script that splits one big dump file into small ones, one per database. It's called dumpsplit, and here's a rough sketch:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import re
import os

HEADER_END_MARK = '-- CHANGE MASTER TO MASTER_LOG_FILE'
FOOTER_BEGIN_MARK = r'/\*!40103 SET TIME_ZONE=@OLD_TIME_ZONE \*/;'
DB_BEGIN_MARK = '-- Current Database:'

class Main():
    """Whole program as a class"""

    def __init__(self,file,output_path):
        """Tries to open mysql dump file to call processment method"""
        self.output_path = output_path
        try:
            self.file_rsrc = open(file,'r')
        except IOError:
            sys.stderr.write("Can't open %s\n" % file)
        else:
            self.__extract_footer()
            self.__extract_header()
            self.__process()

    def __extract_footer(self):
        matched = False
        self.footer = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while line:
                if not matched:
                    if re.match(FOOTER_BEGIN_MARK,line):
                        matched = True
                        self.footer = self.footer + line
                else:
                    self.footer = self.footer + line
                line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.file_rsrc.seek(0)

    def __extract_header(self):
        matched = False
        self.header = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while not matched:
                self.header = self.header + line
                if re.match(HEADER_END_MARK,line):
                    matched = True
                else:
                    line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.header_end_pos = self.file_rsrc.tell()
        self.file_rsrc.seek(0)

    def __process(self):
        first = False
        self.file_rsrc.seek(self.header_end_pos)
        prev_line = '--\n'
        line = self.file_rsrc.next()
        end = False
        try:
            while line and not end:
                if re.match(DB_BEGIN_MARK,line) or re.match(FOOTER_BEGIN_MARK,line):
                    if not first:
                        first = True
                    else:
                        out_file.writelines(self.footer)
                        out_file.close()
                    if not re.match(FOOTER_BEGIN_MARK,line):
                        name = line.replace('`','').split()[-1]+'.sql'
                        print name
                        out_file = open(os.path.join(self.output_path,name),'w')
                        out_file.writelines(self.header + prev_line + line)
                        prev_line = line
                        line = self.file_rsrc.next()
                    else:
                        end = True
                else:
                    if first:
                        out_file.write(line)
                    prev_line = line
                    line = self.file_rsrc.next()
        except StopIteration:
            pass

if __name__ == '__main__':
    Main(sys.argv[1],sys.argv[2])
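
A hypothetical invocation, assuming the script above is saved as dumpsplit.py (the output directory has to exist beforehand):

mkdir -p ./split_out
python dumpsplit.py all_databases.sql ./split_out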
Answered 2010-12-28T12:46:15.190
0

As Stano suggested, the best thing would be to do it at dump time with something like...

mysql -Ne "show databases" | grep -v schema | while read db; do mysqldump $db | gzip > $db.sql.gz; done

Of course, this relies on the presence of a ~/.my.cnf file containing

[client]
user=root
password=rootpass

Otherwise, just define them with the -u and -p parameters on the mysql and mysqldump calls:

mysql -u root -prootpass -Ne "show databases" | grep -v schema | while read db; do mysqldump -u root -prootpass $db | gzip > $db.sql.gz; done
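
As an aside, if only a known subset of the databases is needed in a single file, mysqldump's --databases option can name them explicitly (db1 and db2 are placeholders):

mysqldump -u root -prootpass --databases db1 db2 | gzip > subset.sql.gz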

Hope this helps.

Answered 2013-01-17T13:37:00.547
-1

A "mysqldump file" is just a text file full of SQL statements, so you can use any variety of text tools to chop it up however you see fit.
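
For example, GNU csplit can do the chopping non-interactively. A sketch, assuming the dump was produced with --all-databases so that each section starts with a "-- Current Database:" comment:

# Split at every database marker; the pieces land in files named xx00, xx01, ...
csplit -z all_databases.sql '/^-- Current Database: /' '{*}'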

You would probably be better served by doing a more selective dump in the first place (one database per file, etc.). If you don't have access to the original database, you can also do a full restore and then use mysqldump again to create dumps of the individual databases.
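
Alternatively, the mysql client itself can replay just one database from a combined dump via its --one-database option, which avoids splitting altogether (somedb is a placeholder):

# Ignore all statements except those belonging to somedb.
mysql -u root -p --one-database somedb < all_databases.sql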

If you just want a quick and dirty solution, a quick Google search turns up references to a couple of tools that might also be useful.

Answered 2009-12-09T21:08:51.087
-1

I might do the dump and reload in steps:

  1. Dump the table structure per database with --no-data (see the sketch after this list).
  2. Create the structure on the new server.
  3. Dump the table data per database with --no-create-info.
  4. Now, with one dump per database, I can split the files further, even cutting them with a plain text tool, if some particular file is large.
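
A minimal sketch of steps 1-3 (somedb and newhost are placeholders):

# Step 1: schema-only dump for one database.
mysqldump -u root -p --no-data somedb > somedb_schema.sql
# Step 2: create the structure on the new server.
mysql -u root -p -h newhost -e "CREATE DATABASE IF NOT EXISTS somedb"
mysql -u root -p -h newhost somedb < somedb_schema.sql
# Step 3: data-only dump of the same database.
mysqldump -u root -p --no-create-info somedb > somedb_data.sql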

Note: if you are using MyISAM tables, you can disable index evaluation during step 4 and re-enable it afterwards to make the inserts faster (see the sketch below).
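
For the MyISAM case, the disable/re-enable around the data load could look like this (mydb.big_table is a placeholder for each large table; ALTER TABLE ... DISABLE KEYS only affects non-unique indexes):

# Skip non-unique index maintenance during the bulk insert, then rebuild once.
mysql -u root -p -e "ALTER TABLE mydb.big_table DISABLE KEYS"
mysql -u root -p mydb < somedb_data.sql
mysql -u root -p -e "ALTER TABLE mydb.big_table ENABLE KEYS"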

Answered 2015-06-22T20:00:26.260