I have a dataset with 45 million rows, and three GPUs with 6 GB of RAM each. I am trying to train a language model on this data.
To do that, I am trying to load the data into a fastai TextLMDataBunch, but this step always fails because of a memory error:
from fastai.text import TextLMDataBunch  # fastai v1 text module

data_lm = TextLMDataBunch.from_df('./', train_df=df_trn,
                                  valid_df=df_val, bs=10)
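For completeness, the train and validation dataframes are built roughly as sketched below. The file name data.csv, the text column, and the 90/10 split are placeholders standing in for my actual preprocessing, not the exact code:

import pandas as pd

# Placeholder for the real preprocessing: read the 45M-row dataset and
# split it into training and validation dataframes.
df = pd.read_csv('data.csv')                     # hypothetical file name
df_trn = df.sample(frac=0.9, random_state=42)    # ~90% for training
df_val = df.drop(df_trn.index)                   # remaining ~10% for validation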
How do I handle this issue?