I am trying to set up a simple piece of code where I pass in a DataFrame and test it with the pretrained explain pipeline provided by the johnSnowLabs Spark-NLP library. I am using a Jupyter notebook from Anaconda, with a Spark Scala kernel set up through Apache Toree. Every time I run the step that should load the pretrained pipeline, it throws a TensorFlow error. Is there a way to get this running locally on Windows?
I tried this earlier in a Maven project and hit the same error. A colleague tried it on a Linux system and it worked. Below is the code I ran and the error it produced.
import org.apache.spark.ml.PipelineModel
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
import com.johnsnowlabs.nlp.SparkNLP
import org.apache.spark.sql.SparkSession
val spark: SparkSession = SparkSession
  .builder()
  .appName("test")
  .master("local[*]")
  .config("spark.driver.memory", "4G")
  .config("spark.kryoserializer.buffer.max", "200M")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()
val testData = spark.createDataFrame(Seq(
  (1, "Google has announced the release of a beta version of the popular TensorFlow machine learning library"),
  (2, "Donald John Trump (born June 14, 1946) is the 45th and current president of the United States")
)).toDF("id", "text")
val pipeline = PretrainedPipeline("explain_document_dl", lang = "en") // this is the line that throws the error
val annotation = pipeline.transform(testData)
annotation.show()
annotation.select("entities.result").show(false)
I get the following error:
Name: java.lang.UnsupportedOperationException
Message: Spark NLP tried to load a TensorFlow Graph using the Contrib module, but failed to load it on this system. If you are on Windows, this operation is not supported. Please try a non-contrib model. If that is not the case, please report this issue. Original error message:

Op type not registered 'BlockLSTM' in binary running on MyMachine. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, since contrib ops are lazily registered when the module is first accessed.

StackTrace: Op type not registered 'BlockLSTM' in binary running on MyMachine. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, since contrib ops are lazily registered when the module is first accessed.
at com.johnsnowlabs.ml.tensorflow.TensorflowWrapper$.readGraph(TensorflowWrapper.scala:163)
at com.johnsnowlabs.ml.tensorflow.TensorflowWrapper$.read(TensorflowWrapper.scala:202)
at com.johnsnowlabs.ml.tensorflow.ReadTensorflowModel$class.readTensorflowModel(TensorflowSerializeModel.scala:73)
at com.johnsnowlabs.nlp.annotators.ner.dl.NerDLModel$.readTensorflowModel(NerDLModel.scala:134)
at com.johnsnowlabs.nlp.annotators.ner.dl.ReadsNERGraph$class.readNerGraph(NerDLModel.scala:112)
at com.johnsnowlabs.nlp.annotators.ner.dl.NerDLModel$.readNerGraph(NerDLModel.scala:134)
at com.johnsnowlabs.nlp.annotators.ner.dl.ReadsNERGraph$$anonfun$2.apply(NerDLModel.scala:116)
at com.johnsnowlabs.nlp.annotators.ner.dl.ReadsNERGraph$$anonfun$2.apply(NerDLModel.scala:116)
at com.johnsnowlabs.nlp.ParamsAndFeaturesReadable$$anonfun$com$johnsnowlabs$nlp$ParamsAndFeaturesReadable$$onRead$1.apply(ParamsAndFeaturesReadable.scala:31)
at com.johnsnowlabs.nlp.ParamsAndFeaturesReadable$$anonfun$com$johnsnowlabs$nlp$ParamsAndFeaturesReadable$$onRead$1.apply(ParamsAndFeaturesReadable.scala:30)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at com.johnsnowlabs.nlp.ParamsAndFeaturesReadable$class.com$johnsnowlabs$nlp$ParamsAndFeaturesReadable$$onRead(ParamsAndFeaturesReadable.scala:30)
at com.johnsnowlabs.nlp.ParamsAndFeaturesReadable$$anonfun$read$1.apply(ParamsAndFeaturesReadable.scala:41)
at com.johnsnowlabs.nlp.ParamsAndFeaturesReadable$$anonfun$read$1.apply(ParamsAndFeaturesReadable.scala:41)
at com.johnsnowlabs.nlp.FeaturesReader.load(ParamsAndFeaturesReadable.scala:19)
at com.johnsnowlabs.nlp.FeaturesReader.load(ParamsAndFeaturesReadable.scala:8)
at org.apache.spark.ml.util.DefaultParamsReader$.loadParamsInstance(ReadWrite.scala:652)
at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$4.apply(Pipeline.scala:274)
at org.apache.spark.ml.Pipeline$SharedReadWrite$$anonfun$4.apply(Pipeline.scala:272)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
at org.apache.spark.ml.Pipeline$SharedReadWrite$.load(Pipeline.scala:272)
at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:348)
at org.apache.spark.ml.PipelineModel$PipelineModelReader.load(Pipeline.scala:342)
at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadPipeline(ResourceDownloader.scala:135)
at com.johnsnowlabs.nlp.pretrained.ResourceDownloader$.downloadPipeline(ResourceDownloader.scala:129)
at com.johnsnowlabs.nlp.pretrained.PretrainedPipeline.<init>(PretrainedPipeline.scala:14)
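The exception message itself suggests trying a non-contrib model on Windows. If I understand that correctly, the call would look roughly like the sketch below; I am assuming "explain_document_ml" is one of the non-contrib pretrained pipelines (i.e. one that does not use the TensorFlow contrib ops), which I have not been able to verify on my machine.

// Minimal sketch of the non-contrib fallback suggested by the error message.
// "explain_document_ml" is an assumed non-contrib pipeline name, not something I have confirmed.
val mlPipeline = PretrainedPipeline("explain_document_ml", lang = "en")
val mlAnnotation = mlPipeline.transform(testData)
mlAnnotation.show()

Is switching to a non-contrib pipeline the intended workaround, or is there a way to make explain_document_dl itself load on Windows?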