So... I've got scrapyd running on my Ubuntu EC2 instance after following this post: http://www.dataisbeautiful.io/deploying-scrapy-ec2/
However, I don't think I can get pymongo to connect to my MongoLabs Mongo database, because the scrapyd log on the Ubuntu EC2 instance says
pymongo.errors.ConnectionFailure: timed out
I'm a real noob when it comes to the backend side of things, so I really don't know what's causing this. When I run scrapyd from localhost, it works fine and saves the scraped data to my MongoLabs database. For the scrapyd running on the EC2 instance, I can access the scrapyd GUI by going to the EC2 address at port 6800 (the equivalent of scrapyd's localhost:6800), but that's it. Curling
curl http://aws-ec2-link:6800/schedule.json -d project=sportslab_scrape -d spider=max -d max_url="http://www.maxpreps.com/high-schools/de-la-salle-spartans-(concord,ca)/football/stats.htm"
gives back 'status': 'okay', and I can see the job show up, but no items get produced and the log only shows
2014-11-17 02:20:13+0000 [scrapy] INFO: Scrapy 0.24.4 started (bot: sportslab_scrape_outer)
2014-11-17 02:20:13+0000 [scrapy] INFO: Optional features available: ssl, http11
2014-11-17 02:20:13+0000 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sportslab_scrape.spiders', 'SPIDER_MODULES': ['sportslab_scrape.spiders'], 'FEED_URI': 'items/sportslab_scrape/max/4299afa26e0011e4a543060f585a893f.jl', 'LOG_FILE': 'logs/sportslab_scrape/max/4299afa26e0011e4a543060f585a893f.log', 'BOT_NAME': 'sportslab_scrape_outer'}
2014-11-17 02:20:13+0000 [scrapy] INFO: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-11-17 02:20:13+0000 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-11-17 02:20:13+0000 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
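(For reference, the scheduled job can also be checked through scrapyd's standard listjobs.json endpoint; a minimal check, using the same placeholder EC2 host as the schedule.json call above:)

# check_job.py - poll scrapyd's listjobs.json for the project (standard scrapyd API).
# 'aws-ec2-link' is the same placeholder host used in the schedule.json call above.
import json
import urllib2

url = 'http://aws-ec2-link:6800/listjobs.json?project=sportslab_scrape'
jobs = json.load(urllib2.urlopen(url))
print(jobs)  # lists pending, running and finished jobs for the project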
Does anyone have some helpful insights into my problem? Thanks!
EDIT: Added the connection code.

settings.py
MONGODB_HOST = 'mongodb://user:pass@asdf.mongolab.com:38839/sportslab_mongodb'
MONGODB_PORT = 38839 # Change in prod
MONGODB_DATABASE = "sportslab_mongodb" # Change in prod
MONGODB_COLLECTION = "sportslab"
Scrapy's pipeline.py
from pymongo import Connection
from scrapy.conf import settings


class MongoDBPipeline(object):
    def __init__(self):
        connection = Connection(settings['MONGODB_HOST'], settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DATABASE']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        self.collection.insert(dict(item))
        return item
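For what it's worth, a minimal standalone connectivity check along these lines (run directly on the EC2 instance, with the same placeholder URI and credentials as in settings.py) should show whether the timeout happens outside of Scrapy as well:

# mongo_check.py - minimal standalone connectivity test, independent of Scrapy.
# Uses the same old-style pymongo Connection class as the pipeline above;
# the URI and credentials are the placeholders from settings.py.
from pymongo import Connection
from pymongo.errors import ConnectionFailure

MONGODB_URI = 'mongodb://user:pass@asdf.mongolab.com:38839/sportslab_mongodb'

try:
    connection = Connection(MONGODB_URI)  # the URI already carries host, port and database
    db = connection['sportslab_mongodb']
    print('server info: %s' % connection.server_info())
    print('collections: %s' % db.collection_names())
except ConnectionFailure as e:
    print('connection failed: %s' % e)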