
My dataset has 73 billion rows, and I want to apply a classification algorithm to it. I need a sample from the original data so that I can test my model.

I want to do a train-test split.

The dataframe looks like -

id    age   gender    salary    bonus  area   churn
1      38    m        37654      765    bb     1
2      48    f        3654       365    bb     0
3      33    f        55443      87     uu     0
4      27    m        26354      875    jh     0
5      58    m        87643      354    vb     1

How can I do random sampling with pyspark so that the ratio of my dependent (churn) variable does not change? Any suggestions?


2 Answers


To look at a sample of the original data, we can use sample in Spark:

df.sample(fraction).show()

fraction should be in [0.0, 1.0]

Example:

df.sample(0.2).show(10) --> run this command repeatedly and it will show different samples of the original data.

answered 2019-09-26T07:15:50.063

You will find examples in the linked documentation.

Spark supports stratified sampling:

# an RDD of any key value pairs
data = sc.parallelize([(1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'e'), (3, 'f')])

# specify the exact fraction desired from each key as a dictionary
fractions = {1: 0.1, 2: 0.6, 3: 0.3}

approxSample = data.sampleByKey(False, fractions)

You can also use TrainValidationSplit

For example:

from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

# Prepare training and test data.
data = spark.read.format("libsvm")\
    .load("data/mllib/sample_linear_regression_data.txt")
train, test = data.randomSplit([0.9, 0.1], seed=12345)

lr = LinearRegression(maxIter=10)

# We use a ParamGridBuilder to construct a grid of parameters to search over.
# TrainValidationSplit will try all combinations of values and determine best model using
# the evaluator.
paramGrid = ParamGridBuilder()\
    .addGrid(lr.regParam, [0.1, 0.01]) \
    .addGrid(lr.fitIntercept, [False, True])\
    .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
    .build()

# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
                           estimatorParamMaps=paramGrid,
                           evaluator=RegressionEvaluator(),
                           # 80% of the data will be used for training, 20% for validation.
                           trainRatio=0.8)

# Run TrainValidationSplit, and choose the best set of parameters.
model = tvs.fit(train)

# Make predictions on test data. model is the model with combination of parameters
# that performed best.
model.transform(test)\
    .select("features", "label", "prediction")\
    .show()
answered 2019-09-26T07:22:41.133