
I'm trying to run code in Colab using Python's twint library (a Twitter scraper).

My code is:

!pip install twint
!pip install nest_asyncio
!pip install pandas

import twint
import nest_asyncio
nest_asyncio.apply()
import time
import pandas as pd
import os
import re

timestr = time.strftime("%Y%m%d")

c = twint.Config()
c.Limit = 1000
c.Lang = "en"
c.Store_csv = True
c.Search = "apple"
c.Output = timestr + "_en_apple.csv"
twint.run.Search(c)

The code above runs perfectly in Jupyter on my machine and fetches tweets. However, the same code in Colab produces the following output:

 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 1.0 secs
 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 8.0 secs
 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 27.0 secs
 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 64.0 secs
 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 125.0 secs
 CRITICAL:root:twint.run:Twint:Feed:noDataExpecting value: line 1 column 1 (char 0)
 sleeping for 216.0 secs

How can I fix this in Colab?


2 Answers


I got the following to work in Google Colab. Installing from the repository with its requirements.txt is easier.

!git clone --depth=1 https://github.com/twintproject/twint.git
!cd /content/twint && pip3 install . -r requirements.txt
import twint
import nest_asyncio
nest_asyncio.apply()
import time
import pandas as pd
import os
import re
timestr = time.strftime("%Y%m%d")

c = twint.Config()
c.Limit = 1000
c.Lang = "en"
c.Store_csv = True
c.Search = "apple"
c.Output = timestr + "_en_apple.csv"
twint.run.Search(c)
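
If the run completes, a quick way to sanity-check the results is to read the output CSV back with pandas (a minimal sketch; the filename is the same c.Output value set above):

df = pd.read_csv(timestr + "_en_apple.csv")  # same file as c.Output
print(df.shape)   # how many tweets were written and how many columns twint stored
print(df.head())
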
answered 2021-04-23

For those whose build fails, edit your requirements.txt like this (a sketch of applying these pins in Colab follows the list):

aiohttp==3.7.0
aiogram==2.2
aiodns
beautifulsoup4
cchardet
dataclasses
elasticsearch
pysocks
pandas>=0.23.0
aiohttp_socks<=0.4.1
schedule
geopy
fake-useragent
googletransx
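
One way to apply these pins from a Colab cell (a sketch, assuming twint was cloned to /content/twint as in the answer above): overwrite the cloned requirements.txt with the pinned versions, then reinstall.

pinned = """\
aiohttp==3.7.0
aiogram==2.2
aiodns
beautifulsoup4
cchardet
dataclasses
elasticsearch
pysocks
pandas>=0.23.0
aiohttp_socks<=0.4.1
schedule
geopy
fake-useragent
googletransx
"""
with open("/content/twint/requirements.txt", "w") as f:
    f.write(pinned)

!cd /content/twint && pip3 install . -r requirements.txt
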
answered 2021-12-18