I am using HtmlAgilityPack to scrape information from this site. Here is the code:
int i = 2449520;
.....................
web.OverrideEncoding = Encoding.UTF8;
web.UserAgent = "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0";
doc = web.Load("http://ru-patent.info/24/49/" + i + ".html");
var List = doc.DocumentNode.SelectNodes("//div[@style='padding:10px; border:#999 dotted 1px; background-color:#FFF; background-image:url(/imgs/back.gif);']");
foreach (var t in List)
{
    // Collect every "RU <number>" occurrence in the node's text.
    Regex regex = new Regex(@"\sRU\s\d+");
    Match match = regex.Match(t.InnerText);
    sw.WriteLine(i.ToString());
    while (match.Success)
    {
        sw.WriteLine(match.ToString());
        match = match.NextMatch();
    }
    sw.WriteLine('\n');
}
i++;
I am also using a timer with a 10-second interval, and there are more than a thousand pages I need to pull information from. But after roughly 30 pages I get a 403 Forbidden error. How can I get around this?
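One common cause is that a fixed 10-second cadence with a single User-Agent looks like a bot, and the server starts rejecting requests after a burst. A usual mitigation is to randomize the delay between requests and back off exponentially whenever a 403 arrives instead of immediately retrying. Below is an untested sketch of that idea; `LoadWithBackoff` is a name I made up, it downloads with `WebClient` (which throws a `WebException` on 403) and then parses the HTML with HtmlAgilityPack's `HtmlDocument.LoadHtml`:

```csharp
using System;
using System.Net;
using System.Threading;
using HtmlAgilityPack;

class Scraper
{
    static readonly Random Rng = new Random();

    // Hypothetical helper: download a page, pausing a randomized interval after
    // each success and backing off exponentially after each 403 Forbidden.
    static HtmlDocument LoadWithBackoff(string url, int maxRetries = 5)
    {
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.Encoding = System.Text.Encoding.UTF8;
                    client.Headers[HttpRequestHeader.UserAgent] =
                        "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0";
                    string html = client.DownloadString(url);
                    var doc = new HtmlDocument();
                    doc.LoadHtml(html);
                    // Randomized 10-20 s pause so requests are not perfectly periodic.
                    Thread.Sleep(10000 + Rng.Next(10000));
                    return doc;
                }
            }
            catch (WebException ex) when (
                (ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.Forbidden)
            {
                // 403: wait 1, 2, 4, 8... minutes before trying again.
                Thread.Sleep(TimeSpan.FromMinutes(Math.Pow(2, attempt)));
            }
        }
        throw new WebException("Still 403 after " + maxRetries + " attempts: " + url);
    }
}
```

This replaces the fixed-interval timer with a simple loop over the page numbers. If the site still blocks you after long backoffs, the ban may be IP-based rather than rate-based, in which case no client-side delay will help; check the site's robots.txt and terms of use before scraping a thousand pages.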