So, I'm trying to run code that creates a table on a remote MySQL server while referencing tables that live on a different MySQL server. The server I'm creating the table on has limited space, and the referenced tables are very large, so they have to be kept on a separate remote server.

I'm trying to find a way to hold persistent connections to both databases at once (using JDBC), so that I don't have to keep buffering small batches of rows... I want to be able to reference the data directly.

For example, database A holds the data I'm referencing, and database B is where I'm creating the new table. Say the table I'm referencing in database A has 1,000,000 rows. Instead of, say, opening a connection to database A, buffering 10,000 rows, closing the connection, opening a connection to database B, writing to that database, clearing my buffer, and repeating...

I'd rather just hold a persistent connection to database A, so that each write to database B can reference the data in database A directly.

Is this possible? I've tried a few approaches (mostly creating new connection objects that are only refreshed when the connection drops), but I can't seem to make the idea work.

Has anyone done something like this with JDBC? If so, I'd really appreciate it if you could point me in the right direction, or explain how you got it working.

4 Answers

I think you're better off with two separate connections, one for reading and one for writing, passing the data through your Java application with some kind of small buffer.
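That read/write pair can be sketched roughly like this (a minimal, untested sketch; the table name and batch size are assumptions, and the `Integer.MIN_VALUE` fetch size is a MySQL Connector/J convention for streaming a result set instead of buffering it all in memory):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class TwoConnectionCopy {

    /** Builds "INSERT INTO t VALUES (?, ?, ?)" for the given column count. */
    static String buildInsert(String table, int columnCount) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" VALUES (");
        for (int i = 0; i < columnCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")").toString();
    }

    /** Streams every row of {@code table} from src to dst, committing every batchSize rows. */
    static void copyTable(Connection src, Connection dst, String table, int batchSize)
            throws SQLException {
        dst.setAutoCommit(false);
        // With Connector/J, a forward-only, read-only statement plus
        // Integer.MIN_VALUE asks the driver to stream rows one at a time.
        try (Statement read = src.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                  ResultSet.CONCUR_READ_ONLY)) {
            read.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = read.executeQuery("SELECT * FROM " + table)) {
                ResultSetMetaData meta = rs.getMetaData();
                int cols = meta.getColumnCount();
                try (PreparedStatement write = dst.prepareStatement(buildInsert(table, cols))) {
                    int pending = 0;
                    while (rs.next()) {
                        for (int i = 1; i <= cols; i++) {
                            write.setObject(i, rs.getObject(i));
                        }
                        write.addBatch();
                        if (++pending == batchSize) {
                            write.executeBatch();
                            dst.commit();
                            pending = 0;
                        }
                    }
                    if (pending > 0) { // flush the final partial batch
                        write.executeBatch();
                        dst.commit();
                    }
                }
            }
        }
    }
}
```

Both connections stay open for the duration of the copy; only `batchSize` rows are ever held uncommitted on the destination side.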

Another, more complex but potentially more elegant solution is to use a FEDERATED table. It makes a table on a remote server appear to be local; queries are passed through to the remote server and the results sent back. You have to be careful with indexes or it can be very slow, but it might work for what you're trying to do.

http://dev.mysql.com/doc/refman/5.5/en/federated-description.html
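For illustration, a hedged sketch of what such a FEDERATED definition might look like (host, credentials, schema, and columns below are all made up; the column list must match the remote table exactly, and the FEDERATED engine must be enabled on the local server):

```java
public class FederatedExample {

    /** Builds the DDL for a local FEDERATED "window" onto a remote table.
     *  The column list here is a placeholder and must mirror the remote table. */
    static String federatedDdl(String localName, String remoteUrl) {
        return "CREATE TABLE " + localName + " (\n"
             + "    id   INT NOT NULL,\n"
             + "    name VARCHAR(64),\n"
             + "    PRIMARY KEY (id)\n"
             + ")\n"
             + "ENGINE=FEDERATED\n"
             + "CONNECTION='" + remoteUrl + "'";
    }

    public static void main(String[] args) {
        // Hypothetical remote server; format is mysql://user:pass@host:port/db/table
        String ddl = federatedDdl("big_table",
                "mysql://user:password@server-a:3306/source_db/big_table");
        System.out.println(ddl);
        // Executing it through JDBC would look like:
        // try (Connection con = DriverManager.getConnection(localUrl, user, pass);
        //      Statement stmt = con.createStatement()) {
        //     stmt.execute(ddl);
        // }
    }
}
```

Once created, `INSERT INTO new_table SELECT ... FROM big_table` on database B reads through to database A with no application-side buffering at all.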

answered 2012-04-11T01:48:33.977
You could create the data in database A and then copy it over to database B via replication.
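For reference, the replica-side setup is roughly this (MySQL 5.5-era syntax; host, credentials, and log coordinates are placeholders, and both servers also need a unique `server-id`, with `log-bin` enabled on the master, in their my.cnf):

```sql
-- Run on database B (the replica). All values below are placeholders.
CHANGE MASTER TO
    MASTER_HOST = 'server-a.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

-- To replicate only the large tables, the replica's my.cnf can carry
-- a filter such as: replicate-do-table=source_db.big_table
```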

Alternatively, it sounds like you're implementing some kind of queue. I once built a data-copying program in Java that used one of the built-in implementations of the Queue interface: one thread read from database A and filled the queue, and another thread drained the queue and wrote to database B. If it would be useful, I can try to dig out the classes I used.

Edit:

Here's the code, tweaked a little for posting. I haven't included the config classes, but it should give you an idea of how the queue classes are used:

package test;

import java.io.File;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * This class implements a JDBC bridge between databases, allowing data to be
 * copied from one place to another.
 * <p>This implementation is threaded, as it uses a {@link BlockingQueue} to pass
 * data between a producer and a consumer.
 */
public class DBBridge {

    public static void main( String[] args ) {

        Adaptor fromAdaptor = null;
        Adaptor toAdaptor = null;

        BridgeConfig config = null;

        try {
            /* BridgeConfig is essentially a wrapper around the Simple XML serialisation library.
             * http://simple.sourceforge.net/
             */
            config = BridgeConfig.loadConfig( new File( "db-bridge.xml" ) );
        }
        catch ( Exception e ) {
            System.err.println( "Failed to read or parse db-bridge.xml: " + e.getLocalizedMessage() );
            System.exit( 1 );
        }

        BlockingQueue<Object> b = new ArrayBlockingQueue<Object>( config.getQueueSize() );

        try {
            HashMap<String, DatabaseConfig> dbs = config.getDbs();

            System.err.println( "Configured DBs" );

            final String sourceName = config.getSource();
            final String destinationName = config.getDestination();

            if ( !dbs.containsKey( sourceName ) ) {
                System.err.println( sourceName + " is not a configured database connection" );
                System.exit( 1 );
            }

            if ( !dbs.containsKey( destinationName ) ) {
                System.err.println( destinationName + " is not a configured database connection" );
                System.exit( 1 );
            }

            DatabaseConfig sourceConfig = dbs.get( sourceName );
            DatabaseConfig destinationConfig = dbs.get( destinationName );

            try {
                /*
                 * Both adaptors must be created before attempting a connection,
                 * as otherwise I've seen DriverManager pick the wrong driver!
                 */
                fromAdaptor = AdaptorFactory.buildAdaptor( sourceConfig, sourceConfig );
                toAdaptor = AdaptorFactory.buildAdaptor( destinationConfig, destinationConfig );

                System.err.println( "Connecting to " + sourceName );
                fromAdaptor.connect();

                System.err.println( "Connecting to " + destinationName );
                toAdaptor.connect();

                /* We'll send our updates to the destination explicitly */
                toAdaptor.getConn().setAutoCommit( false );
            }
            catch ( SQLException e ) {
                System.err.println();
                System.err.println( "Failed to create and configure adaptors" );
                e.printStackTrace();
                System.exit( 1 );
            }
            catch ( ClassNotFoundException e ) {
                System.err.println( "Failed to load JDBC driver due to error: " + e.getLocalizedMessage() );
                System.exit( 1 );
            }

            DataProducer producer = null;
            DataConsumer consumer = null;

            try {
                producer = new DataProducer( config, fromAdaptor, b );
                consumer = new DataConsumer( config, toAdaptor, b );
            }
            catch ( SQLException e ) {
                System.err.println();
                System.err.println( "Failed to create and configure data producer or consumer" );
                e.printStackTrace();
                System.exit( 1 );
            }

            consumer.start();
            producer.start();
        }
        catch ( Exception e ) {
            e.printStackTrace();
        }
    }

    public static class DataProducer extends DataLogger {

        private BridgeConfig config;
        private Adaptor adaptor;
        private BlockingQueue<Object> queue;


        public DataProducer(BridgeConfig c, Adaptor a, BlockingQueue<Object> bq) {
            super( "Producer" );
            this.config = c;
            this.adaptor = a;
            this.queue = bq;
        }


        @Override
        public void run() {
            /* The tables to copy are listed in BridgeConfig */
            for ( Table table : this.config.getManifest() ) {

                PreparedStatement stmt = null;
                ResultSet rs = null;

                try {
                    String sql = table.buildSourceSelect();
                    this.log( "executing: " + sql );
                    stmt = this.adaptor.getConn().prepareStatement( sql );

                    stmt.execute();

                    rs = stmt.getResultSet();

                    ResultSetMetaData meta = rs.getMetaData();

                    /* Notify consumer that a new table is to be processed */
                    this.queue.put( table );
                    this.queue.put( meta );

                    final int columnCount = meta.getColumnCount();

                    while ( rs.next() ) {
                        ArrayList<Object> a = new ArrayList<Object>( columnCount );

                        for ( int i = 0; i < columnCount; i++ ) {
                            a.add( rs.getObject( i + 1 ) );
                        }

                        this.queue.put( a );
                    }
                }
                catch ( InterruptedException ex ) {
                    ex.printStackTrace();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }

                try {
                    /* refresh the connection */
                    /* Can't remember why I added this line - maybe the other
                     * end kept closing the connection. */
                    this.adaptor.reconnect();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            try {
                /* Use an object of a specific type to "poison" the queue
                 * and instruct the consumer to terminate. */
                this.log( "putting finished object into queue" );
                this.queue.put( new QueueFinished() );

                this.adaptor.close();
            }
            catch ( InterruptedException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            catch ( SQLException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    }

    /* Superclass for producer and consumer */
    public static abstract class DataLogger extends Thread {

        private String prefix;


        public DataLogger(String p) {
            this.prefix = p;
        }


        protected void log( String s ) {
            System.err.printf( "%d %s: %s%n", System.currentTimeMillis(), this.prefix, s );
        }


        protected void log() {
            System.err.println();
        }
    }

    public static class DataConsumer extends DataLogger {

        private BridgeConfig config;
        private Adaptor adaptor;
        private BlockingQueue<Object> queue;
        private int currentRowNumber = 0;
        private int currentBatchSize = 0;
        private long tableStartTs = -1;


        public DataConsumer(BridgeConfig c, Adaptor a, BlockingQueue<Object> bq) throws SQLException {

            super( "Consumer" );

            this.config = c;
            this.adaptor = a;
            this.queue = bq;

            /* We'll send our updates to the destination explicitly */
            this.adaptor.getConn().setAutoCommit( false );
        }


        public void printThroughput() {
            double duration = ( System.currentTimeMillis() - this.tableStartTs ) / 1000.0;
            long rowsPerSec = Math.round( this.currentRowNumber / duration );
            this.log( String.format( "%d rows processed, %d rows/s", this.currentRowNumber, rowsPerSec ) );
        }


        @Override
        public void run() {

            this.log( "running" );

            Table currentTable = null;
            ResultSetMetaData meta = null;

            int columnCount = -1;

            PreparedStatement stmt = null;

            while ( true ) {
                try {
                    Object o = this.queue.take();

                    if ( o instanceof Table ) {
                        currentTable = (Table) o;

                        this.log( "processing " + currentTable );

                        if ( this.currentBatchSize > 0 ) {
                            /* Commit outstanding rows from previous table */

                            this.adaptor.getConn().commit();

                            this.printThroughput();
                            this.currentBatchSize = 0;
                        }

                        /* refresh the connection */
                        this.adaptor.reconnect();
                        this.adaptor.getConn().setAutoCommit( false );

                        /*
                         * Arguably, there's no need to flush the commit buffer
                         * after every table, but I like it because it feels
                         * tidy.
                         */
                        this.currentBatchSize = 0;
                        this.currentRowNumber = 0;

                        if ( currentTable.isTruncate() ) {
                            this.log( "truncating " + currentTable );
                            stmt = this.adaptor.getConn().prepareStatement( "TRUNCATE TABLE " + currentTable );
                            stmt.execute();
                        }

                        this.tableStartTs = System.currentTimeMillis();
                    }
                    else if ( o instanceof ResultSetMetaData ) {

                        this.log( "received metadata for " + currentTable );

                        meta = (ResultSetMetaData) o;
                        columnCount = meta.getColumnCount();

                        String sql = currentTable.buildDestinationInsert( columnCount );
                        stmt = this.adaptor.getConn().prepareStatement( sql );
                    }
                    else if ( o instanceof ArrayList ) {

                        ArrayList<?> a = (ArrayList<?>) o;

                        /* One counter for ArrayList access, one for JDBC access */
                        for ( int i = 0, j = 1; i < columnCount; i++, j++ ) {

                            try {
                                stmt.setObject( j, a.get( i ), meta.getColumnType( j ) );
                            }
                            catch ( SQLException e ) {
                                /* Sometimes data in a shonky remote system
                                 * is rejected by a more sane destination
                                 * system. Translate this data into
                                 * something that will fit. */
                                if ( e.getMessage().contains( "Only dates between" ) ) {

                                    if ( meta.isNullable( j ) == ResultSetMetaData.columnNullable ) {
                                        this.log( "Casting bad data to null: " + a.get( i ) );
                                        stmt.setObject( j, null, meta.getColumnType( j ) );
                                    }
                                    else {
                                        this.log( "Casting bad data to 0000-01-01: " + a.get( i ) );
                                        stmt.setObject( j, new java.sql.Date( -64376208000L ), meta.getColumnType( j ) );
                                    }
                                }
                                else {
                                    throw e;
                                }
                            }
                        }

                        stmt.execute();

                        this.currentBatchSize++;
                        this.currentRowNumber++;

                        if ( this.currentBatchSize == this.config.getBatchSize() ) {
                            /*
                             * We've reached our non-committed limit. Send the
                             * requests to the destination server.
                             */

                            this.adaptor.getConn().commit();

                            this.printThroughput();
                            this.currentBatchSize = 0;
                        }
                    }
                    else if ( o instanceof QueueFinished ) {
                        if ( this.currentBatchSize > 0 ) {
                            /* Commit outstanding rows from previous table */

                            this.adaptor.getConn().commit();

                            this.printThroughput();

                            this.log();
                            this.log( "completed" );
                        }

                        /* Exit while loop */
                        break;
                    }
                    else {
                        throw new RuntimeException( "Unexpected object in queue: " + o.getClass() );
                    }
                }
                catch ( InterruptedException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            try {
                this.adaptor.close();
            }
            catch ( SQLException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

    public static final class QueueFinished {
        /*
         * This only exists as a completely type-safe value in "instanceof"
         * expressions
         */
    }
}
answered 2012-04-10T18:24:01.163
In a program I wrote for work, I had two simultaneous connections. Without giving away the code, you'll want something like:

public void initialize() {

    String dbUrl, dbUrl2, dbClass, dbClass2, user, user2, password, password2; // fill these in for your two servers
    Connection con, con2;
    Statement stmt, stmt2;
    ResultSet rs, rs2;

    try {
        Class.forName(dbClass);
        con = DriverManager.getConnection(dbUrl,user,password);
        con2 = DriverManager.getConnection(dbUrl2,user2,password2);
        stmt = con.createStatement();
        stmt2 = con2.createStatement(); // needed for the second query below
    } catch(ClassNotFoundException e) {
        e.printStackTrace();
    }
    catch(SQLException e) {
        e.printStackTrace();
    }
}

Then, once your two connections are up and running,

rs = stmt.executeQuery("query");
rs2 = stmt2.executeQuery("second query");

I don't know how to solve your problem specifically, but this code could be a bit of a strain on your system (assuming you don't have a high-end personal/company machine) and might take a while. It should at least be enough to get you started; I'd post more if I could, but mocking up a full version is unfortunately a bit too complicated. Good luck, though!

answered 2012-04-10T19:01:43.827
I've done this before, and I'd suggest you do what I did: pull the data you need from DB A and write it out to one or more files as a 'set' of SQL statements. When I did it, I had to split it into about 10 files, because there was a limit on the size of file that could be loaded into DB B.

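A rough sketch of that approach (the names, the string-only escaping, and the file-naming scheme are all simplified assumptions; real code would need proper type and NULL handling per column):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class SqlDumper {

    /** Formats one row as an INSERT statement, doubling single quotes.
     *  Strings only, for illustration. */
    static String insertStatement(String table, String... values) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table).append(" VALUES (");
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(", ");
            if (values[i] == null) {
                sb.append("NULL");
            } else {
                sb.append('\'').append(values[i].replace("'", "''")).append('\'');
            }
        }
        return sb.append(");").toString();
    }

    /** Writes rows into numbered files of at most rowsPerFile statements each,
     *  so no single file exceeds the destination's import limit.
     *  Returns the number of files written. */
    static int dump(String table, List<String[]> rows, int rowsPerFile, String prefix)
            throws IOException {
        int fileNo = 0;
        PrintWriter out = null;
        for (int i = 0; i < rows.size(); i++) {
            if (i % rowsPerFile == 0) { // start a new file, e.g. dump1.sql, dump2.sql, ...
                if (out != null) out.close();
                out = new PrintWriter(prefix + (++fileNo) + ".sql", "UTF-8");
            }
            out.println(insertStatement(table, rows.get(i)));
        }
        if (out != null) out.close();
        return fileNo;
    }
}
```

The resulting files can then be loaded into DB B one at a time with the `mysql` command-line client or `SOURCE`.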
answered 2012-04-10T18:15:05.207