
I'm exploring pyspark and have run into an error while trying to fit a Gaussian mixture model. I've been trying to narrow down potential sources of the error, and I've been able to reproduce it with a drastically reduced number of vectors (in this case, only 3).

Here is my code:

import pyspark as ps
from pyspark.sql import SQLContext, Row
from pyspark.ml.linalg import SparseVector
from pyspark.ml.clustering import GaussianMixture

sc = ps.SparkContext('local[4]')
sql_c = SQLContext(sc)

# Three sparse rows of a 103882-dimensional feature vector
test_df = sql_c.createDataFrame([
    Row(features_idf=SparseVector(103882, {0: 0.6015, 5: 1.2943, 9: 1.2757, 17: 1.111})),
    Row(features_idf=SparseVector(103882, {3: 0.6015, 5: 4.2963, 14: 1.2757, 17: 1.5308})),
    Row(features_idf=SparseVector(103882, {5: 0.6015, 13: 1.2343, 15: 1.2757, 17: 3.708}))])

gm = GaussianMixture(featuresCol='features_idf')
gm_model = gm.fit(test_df)

Here is the traceback:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-21-34a25cf6f1d8> in <module>()
      1 gm = GaussianMixture(featuresCol='features_idf')
----> 2 gm_model = gm.fit(test_df)

/opt/spark/python/pyspark/ml/base.pyc in fit(self, dataset, params)
     62                 return self.copy(params)._fit(dataset)
     63             else:
---> 64                 return self._fit(dataset)
     65         else:
     66             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

/opt/spark/python/pyspark/ml/wrapper.pyc in _fit(self, dataset)
    211 
    212     def _fit(self, dataset):
--> 213         java_model = self._fit_java(dataset)
    214         return self._create_model(java_model)
    215 

/opt/spark/python/pyspark/ml/wrapper.pyc in _fit_java(self, dataset)
    208         """
    209         self._transfer_params_to_java()
--> 210         return self._java_obj.fit(dataset._jdf)
    211 
    212     def _fit(self, dataset):

/Users/wmees/anaconda/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/opt/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/Users/wmees/anaconda/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling o141.fit.
: java.lang.NegativeArraySizeException
    at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:141)
    at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:139)
    at breeze.linalg.DenseMatrix$.zeros(DenseMatrix.scala:340)
    at breeze.linalg.diag$$anon$1.apply(diag.scala:19)
    at breeze.linalg.diag$$anon$1.apply(diag.scala:17)
    at breeze.generic.UFunc$class.apply(UFunc.scala:48)
    at breeze.linalg.diag$.apply(diag.scala:15)
    at org.apache.spark.mllib.clustering.GaussianMixture.org$apache$spark$mllib$clustering$GaussianMixture$$initCovariance(GaussianMixture.scala:269)
    at org.apache.spark.mllib.clustering.GaussianMixture$$anonfun$3.apply(GaussianMixture.scala:188)
    at org.apache.spark.mllib.clustering.GaussianMixture$$anonfun$3.apply(GaussianMixture.scala:186)
    at scala.Array$.tabulate(Array.scala:331)
    at org.apache.spark.mllib.clustering.GaussianMixture.run(GaussianMixture.scala:186)
    at org.apache.spark.ml.clustering.GaussianMixture.fit(GaussianMixture.scala:331)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

I can't for the life of me figure out what's going on. As far as I can tell, the vectors I'm creating don't have a negative size, so I don't know what is triggering this error. I've looked at some other questions, but nothing has really helped, so any suggestions would be much appreciated!


1 Answer


GaussianMixture in Spark MLlib creates a dense covariance matrix to use in the expectation-maximization algorithm. In your case that matrix is 103882 x 103882, backed by a single array of 103882 * 103882 entries. As has already been pointed out, computing that size as a 32-bit integer overflows, so it ends up trying to allocate an array of size 103882 * 103882 = -2093431964, hence the NegativeArraySizeException (a quick check of the arithmetic is sketched after the quoted note below). While this arguably looks like a bug, the Gaussian mixture algorithm that Spark MLlib uses does not work well on high-dimensional data anyway. Have a look at the warning in the API docs:

@note For high-dimensional data (with many features), this algorithm may perform poorly. This is due to high-dimensional data (a) making it difficult to cluster at all (based on statistical/theoretical arguments) and (b) numerical issues with Gaussian distributions.
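For reference, here is a minimal sketch of that arithmetic in plain Python (no Spark needed; the dimension is taken from the question, and the variable names are only illustrative):

d = 103882                            # feature dimension from the question
entries = d * d                       # 10791469924 entries for one dense d x d covariance matrix

# Truncate to a signed 32-bit integer, which is what happens when the size is computed as a Java/Scala Int
as_java_int = (entries + 2**31) % 2**32 - 2**31
print(entries, as_java_int)           # 10791469924 -2093431964

# Largest d whose d * d still fits in a signed 32-bit integer
print(int((2**31 - 1) ** 0.5))        # 46340

So any feature dimension above roughly 46,340 will hit the same overflow, and even well below that the dense d x d matrix of doubles becomes enormous; in practice you would want to shrink features_idf to a much lower dimension before fitting the model.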

answered 2017-01-20T15:47:05.117