
I'd like to make some changes to the Scala code of spark.ml.classification.LogisticRegression without having to rebuild all of Spark, since we can attach jar files when running spark-submit or pySpark. Is it possible to compile a modified copy of LogisticRegression.scala and override Spark's default methods, or at least add new methods? Thanks.


1 Answer


Creating a new class that extends org.apache.spark.ml.classification.LogisticRegression and overrides the relevant methods should work, without modifying the Spark source code at all.

import org.apache.spark.ml.classification.LogisticRegression

class CustomLogisticRegression extends LogisticRegression {
  // Override a method inherited from LogisticRegression (toString here, as a simple demonstration)
  override def toString(): String = "This is overridden Logistic Regression Class"
}

Run logistic regression with the new CustomLogisticRegression class:

// Assumes a SparkContext `sc` and SQLContext `sqlCtx` are already available (e.g. in spark-shell)
import org.apache.spark.mllib.util.MLUtils

val data = sqlCtx.createDataFrame(MLUtils.loadLibSVMFile(sc, "/opt/spark/spark-1.5.2-bin-hadoop2.6/data/mllib/sample_libsvm_data.txt"))

val customLR = new CustomLogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.3)
  .setElasticNetParam(0.8)

val customLRModel = customLR.fit(data)

val originalLR = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.3)
  .setElasticNetParam(0.8)

val originalLRModel = originalLR.fit(data)

// Print the intercept for logistic regression
println(s"Custom Class's Intercept: ${customLRModel.intercept}")
println(s"Original Class's Intercept: ${originalLRModel.intercept}")
println(customLR.toString())
println(originalLR.toString())

Output:

Custom Class's Intercept: 0.22456315961250317
Original Class's Intercept: 0.22456315961250317
This is overridden Logistic Regression Class
logreg_1cd811a145d7
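
The same approach covers the "new methods" part of the question: a subclass can add its own methods alongside the inherited ones. Below is a minimal sketch (the class and method names are illustrative, not part of Spark's API) that adds a helper which reports the parameters currently in effect, using the getters inherited from the shared param traits:

import org.apache.spark.ml.classification.LogisticRegression

// Illustrative subclass that adds a new method instead of overriding an existing one.
class DescribableLogisticRegression extends LogisticRegression {
  // New helper; getMaxIter/getRegParam/getElasticNetParam are inherited param getters.
  def describeParams(): String =
    s"maxIter=${getMaxIter}, regParam=${getRegParam}, elasticNetParam=${getElasticNetParam}"
}

val lr = new DescribableLogisticRegression().setMaxIter(10).setRegParam(0.3)
println(lr.describeParams())

Either way, compile the subclass into its own jar (for example with sbt package) and ship it with spark-submit --jars or add it to the driver/executor classpath; Spark itself does not need to be rebuilt.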
Answered 2016-04-18T10:46:26.150