
I'm trying to insert in batches of 100 rows (I heard that's the best batch size to use with MySQL). I'm using Scala 2.10.4 and sbt 0.13.6, the JDBC stack is ScalikeJDBC with HikariCP, and my connection setup looks like this:

val dataSource: DataSource = {
  val ds = new HikariDataSource()
  ds.setDataSourceClassName("com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
  ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
  ds.addDataSourceProperty("autoCommit", "false")
  ds.addDataSourceProperty("user", "someUser")
  ds.addDataSourceProperty("password", "not my password")
  ds
}

ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))

The insert code:

try {
  implicit val session = AutoSession
  val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] = scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
  .
  .
  .
  for (rev <- reviews) {
    paramList += Seq[(Symbol, Any)](
      'review_id -> rev.review_idx,
      'text -> rev.text,
      'category_id -> rev.category_id,
      'aspect_id -> aspectId,
      'not_aspect -> noAspect /*0*/ ,
      'certainty_aspect -> rev.certainty_aspect,
      'sentiment -> rev.sentiment,
      'sentiment_grade -> rev.certainty_sentiment,
      'stars -> rev.stars
    )
  }
  .
  .
  .
  try {
    if (paramList != null && paramList.nonEmpty) {
      val result = NamedDB('review) localTx { implicit session =>
        sql"""INSERT INTO `MasterFlow`.`classifier_results`
          (
            `review_id`,
            `text`,
            `category_id`,
            `aspect_id`,
            `not_aspect`,
            `certainty_aspect`,
            `sentiment`,
            `sentiment_grade`,
            `stars`)
          VALUES
            ({review_id}, {text}, {category_id}, {aspect_id},
             {not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})
        """
          .batchByName(paramList.toIndexedSeq: _*)
          .apply()
      }

Every batch insert takes about 15 seconds. My log:

29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[default-akka.actor.default-dispatcher-3] StatementExecutor$$anon$1 - SQL execution completed

  [SQL Execution]
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
.
.
.
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   ... (total: 100 times); (15466 ms)

  [Stack Trace]
    ...
    logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:119)
    logic.DB.ClassifierJsonToDB$$anonfun$1.apply(ClassifierJsonToDB.scala:96)
    scalikejdbc.DBConnection$$anonfun$_localTx$1$1.apply(DBConnection.scala:252)
    scala.util.control.Exception$Catch.apply(Exception.scala:102)
    scalikejdbc.DBConnection$class._localTx$1(DBConnection.scala:250)
    scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
    scalikejdbc.DBConnection$$anonfun$localTx$1.apply(DBConnection.scala:257)
    scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
    scalikejdbc.NamedDB.using(NamedDB.scala:32)
    scalikejdbc.DBConnection$class.localTx(DBConnection.scala:257)
    scalikejdbc.NamedDB.localTx(NamedDB.scala:32)
    logic.DB.ClassifierJsonToDB$.insertBulk(ClassifierJsonToDB.scala:96)
    logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:176)
    logic.DB.ClassifierJsonToDB$$anonfun$bulkInsert$1.apply(ClassifierJsonToDB.scala:167)
    scala.collection.Iterator$class.foreach(Iterator.scala:727)
    ...

When I run this on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote machine?


2 Answers


In case anyone needs it: I had a similar problem bulk inserting 10,000 records into MySQL with ScalikeJdbc, and it was solved by setting rewriteBatchedStatements to true in the JDBC URL ("jdbc:mysql://host:3306/db?rewriteBatchedStatements=true"). It cut the bulk insert time from 40 seconds down to 1 second!
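For reference, here is a minimal sketch of what that could look like with HikariCP's jdbcUrl plus the ScalikeJDBC pool registration from the question; the host, database name and credentials are placeholders, not values from the question:

import com.zaxxer.hikari.HikariDataSource
import scalikejdbc.{ConnectionPool, DataSourceConnectionPool}

val ds = new HikariDataSource()
// rewriteBatchedStatements=true lets Connector/J rewrite a JDBC batch of INSERTs
// into a single multi-row INSERT instead of sending each statement separately
ds.setJdbcUrl("jdbc:mysql://host:3306/db?rewriteBatchedStatements=true")
ds.setUsername("someUser")
ds.setPassword("secret")

ConnectionPool.add('review, new DataSourceConnectionPool(ds))

With the flag picked up, the 100-statement batch should go over the wire as one rewritten statement instead of 100 separate round trips.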

Answered on 2015-07-24T22:40:15.130

I don't think this is an issue with ScalikeJDBC or HikariCP. You should investigate the network environment between your machine and the MySQL server.
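A quick way to get a feel for that is to time a single trivial round trip through the same pool; a rough sketch, assuming the 'review pool from the question:

import scalikejdbc._

val start = System.nanoTime()
NamedDB('review) readOnly { implicit session =>
  sql"SELECT 1".map(_.int(1)).single.apply() // one round trip to the MySQL server
}
val elapsedMs = (System.nanoTime() - start) / 1000000
println(s"SELECT 1 round trip: $elapsedMs ms")

If that single round trip already takes on the order of 150 ms, then 100 statements sent one by one come out at roughly the 15 seconds seen in the log, which would point at network latency rather than at either library.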

Answered on 2014-10-29T22:33:59.630