I am using Nutch 2.x with Cassandra as the storage backend. At the moment I am crawling only one website, and the data is loaded into Cassandra in byte-code format. When I run the readdb command in Nutch, I do not get any useful crawl data.
Here are the details of the different files and the output I got:
========== Command used to run the crawler ==========
bin/crawl urls/ crawlDir/ http://localhost:8983/solr/ 3
========== seed.txt contents ==========
http://www.ft.com
========== Output of the readdb command, reading from the Cassandra webpage.f table ==========
~/Documents/Softwares/apache-nutch-2.3/runtime/local$ bin/nutch readdb -dump data -content
~/Documents/Softwares/apache-nutch-2.3/runtime/local/data$ cat part-r-00000
http://www.ft.com/ key: com.ft.www:http/
baseUrl: null
status: 4 (status_redir_temp)
fetchTime: 1426888912463
prevFetchTime: 1424296904936
fetchInterval: 2592000
retriesSinceFetch: 0
modifiedTime: 0
prevModifiedTime: 0
protocolStatus: (null)
parseStatus: (null)
title: null
score: 1.0
marker _injmrk_ : y
marker dist : 0
reprUrl: null
batchId: 1424296906-20007
metadata _csh_ :
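For reference, the timestamps in the dump above are epoch milliseconds and fetchInterval is in seconds. A quick sketch (plain Python, values copied straight from the dump) shows that fetchTime is roughly prevFetchTime plus the 30-day fetch interval, i.e. the entry appears to be waiting for its next scheduled fetch:

```python
from datetime import datetime, timezone

# Values copied from the readdb dump above; times are epoch milliseconds,
# fetchInterval is in seconds.
fetch_time = 1426888912463
prev_fetch_time = 1424296904936
fetch_interval = 2592000  # 2592000 s / 86400 = 30 days

print(datetime.fromtimestamp(prev_fetch_time / 1000, tz=timezone.utc))  # last fetch
print(datetime.fromtimestamp(fetch_time / 1000, tz=timezone.utc))       # next scheduled fetch
print((fetch_time - prev_fetch_time) / 1000 / 86400)                    # gap in days, ~30
```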
========== Contents of regex-urlfilter.txt ==========
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$
# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
+.
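To double-check that these filters are not dropping my seed URL, here is a small Python sketch of how I understand the rules to be applied (the first matching pattern wins; + accepts, - rejects). The suffix list is abbreviated, so this is only an approximation of Nutch's actual RegexURLFilter:

```python
import re

# Ordered rules from regex-urlfilter.txt; the first pattern that matches
# (anywhere in the URL) decides: '+' accepts, '-' rejects.
rules = [
    ("-", r"^(file|ftp|mailto):"),
    ("-", r"\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS)$"),  # abbreviated suffix list
    ("-", r"[?*!@=]"),
    ("-", r".*(/[^/]+)/[^/]+\1/[^/]+\1/"),
    ("+", r"."),
]

def accepts(url):
    for sign, pattern in rules:
        if re.search(pattern, url):
            return sign == "+"
    return False

print(accepts("http://www.ft.com"))         # seed URL is accepted
print(accepts("http://www.ft.com/a?id=1"))  # query-string URLs are rejected
```

So the seed itself passes the filters, but any links containing ?, *, !, @ or = would be skipped.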
========== Log file contents that are troubling me ==========
2015-02-18 13:57:51,253 ERROR store.CassandraStore -
2015-02-18 13:57:51,253 ERROR store.CassandraStore - [Ljava.lang.StackTraceElement;@653e3e90
2015-02-18 14:01:45,537 INFO connection.CassandraHostRetryService - Downed Host Retry service started with queue size -1 and retry delay 10s
Please let me know if you need more information. Can someone help me out?
Thanks in advance. - Sumant