One column of my dataframe has the values shown below:

air_voice_no_null.loc[:,"host_has_profile_pic"].value_counts(normalize = True)*100

1.0    99.694276
0.0     0.305724
Name: host_has_profile_pic, dtype: float64

For each unique value in this column, the split is roughly 99:1.

I now want to create a new dataframe in which 60% of the rows have 1.0 and 40% have 0.0, keeping all the columns (with fewer rows overall, of course).

I tried splitting it with stratification using the train_test_split function from sklearn.model_selection, as shown below, but did not get dataframes with the desired proportion of each unique value.

from sklearn.model_selection import train_test_split

profile_train_x, profile_test_x, profile_train_y, profile_test_y = train_test_split(
    air_voice_no_null.loc[:, ['log_price', 'accommodates', 'bathrooms', 'host_response_rate',
                              'number_of_reviews', 'review_scores_rating', 'bedrooms', 'beds',
                              'cleaning_fee', 'instant_bookable']],
    air_voice_no_null.loc[:, "host_has_profile_pic"],
    random_state=42,
    stratify=air_voice_no_null.loc[:, "host_has_profile_pic"])

This is the result of the code above; there is no change in the total number of rows.

print(profile_train_x.shape)
print(profile_test_x.shape)
print(profile_train_y.shape)
print(profile_test_y.shape)

(55442, 10)
(18481, 10)
(55442,)
(18481,)
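
The split itself runs, but stratify only reproduces the existing class ratio in each split; it cannot change it. A quick check (a sketch, using the label Series produced by the split above) would still show roughly the original 99:1 split:

print(profile_train_y.value_counts(normalize=True) * 100)
print(profile_test_y.value_counts(normalize=True) * 100)
# both should show roughly 99.7% of 1.0 and 0.3% of 0.0,
# because stratify keeps the input ratio in each split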

How can I select a subset of the dataset with fewer rows while keeping the desired proportion of each class of the host_has_profile_pic variable?

Link to the full dataset: https://www.kaggle.com/stevezhenghp/airbnb-price-prediction

1 Answer

Consider the following approach:

import pandas as pd

# create some data
df = pd.DataFrame({'a': [0] * 10 + [1] * 90})

print('original proportion:')
print(df['a'].value_counts(normalize=True))

# take samples for every unique value separately
df_new = pd.concat([
    df[df['a'] == 0].sample(frac=.4),
    df[df['a'] == 1].sample(frac=.07)])

print('\nsample proportion:')
print(df_new['a'].value_counts(normalize=True))

Output:

original proportion:
1    0.9
0    0.1
Name: a, dtype: float64

sample proportion:
1    0.6
0    0.4
Name: a, dtype: float64
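
To apply this to your data, here is one possible sketch (assuming the air_voice_no_null dataframe and host_has_profile_pic column from the question; since 0.0 is the rare class, all of its rows are kept as the 40% and 1.5 times as many 1.0 rows are drawn for the remaining 60%):

import pandas as pd

# split the original dataframe by class
ones = air_voice_no_null[air_voice_no_null['host_has_profile_pic'] == 1.0]
zeros = air_voice_no_null[air_voice_no_null['host_has_profile_pic'] == 0.0]

# keep every 0.0 row (they are scarce) and draw 60/40 = 1.5 times as many 1.0 rows
n_zeros = len(zeros)
n_ones = int(n_zeros * 60 / 40)

subset = pd.concat([
    zeros,
    ones.sample(n=n_ones, random_state=42)])

print(subset['host_has_profile_pic'].value_counts(normalize=True))
# should print roughly 0.6 for 1.0 and 0.4 for 0.0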