I am new to programming and know very little about HTTP, but I wrote some Java code to scrape a website. My code handles "GET" HTTP calls (based on the input URL), but I don't know how to scrape data that is returned by "POST" HTTP calls.
After a brief overview of HTTP, I believe I need to simulate a browser, but I don't know how to do that in Java. The website I have been trying to scrape.
Since I need to scrape the source code for all result pages, and the URL does not change when each "Next" button is clicked, I used Firefox's Firebug to watch what happens when the button is clicked, but I don't fully understand what I'm looking at.
The code I am currently using to scrape the data is:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class Scraper {

    private static String month = "11";
    private static String day = "4";
    // The input website to be scraped
    private static String url = "http://cpdocket.cp.cuyahogacounty.us/SheriffSearch/results.aspx?q=searchType%3dSaleDate%26searchString%3d"
            + month + "%2f" + day + "%2f2013%26foreclosureType%3d%27NONT%27%2c+%27PAR%27%2c+%27COMM%27%2c+%27TXLN%27";

    public static String sourcetext; // The source code that has been scraped

    // scrapeWebsite scrapes the input URL and stores the page source in sourcetext.
    public static void scrapeWebsite() throws IOException {
        URL urlconnect = new URL(url); // creates the URL from the variable
        URLConnection connection = urlconnect.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(
                connection.getInputStream(), "UTF-8"));
        String inputLine;
        StringBuilder sourcecode = new StringBuilder(); // accumulates the page source
        while ((inputLine = in.readLine()) != null) {
            sourcecode.append(inputLine);
        }
        in.close();
        sourcetext = sourcecode.toString();
    }
}
What is the best way to scrape all the pages, given that each one is fetched by a "POST" call?
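For context, since the "Next" button on an .aspx page typically triggers an ASP.NET postback, one approach is to replay that POST yourself with HttpURLConnection. Below is a minimal sketch, not a working answer for this specific site: the class name, the field name `ctl00$NextPageButton`, and the `__VIEWSTATE` value are hypothetical placeholders, and the real hidden-field names and values would have to be read out of each page's HTML (or observed in Firebug's Net panel).

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class PostScraper {

    // Builds an application/x-www-form-urlencoded body from name/value pairs.
    public static String buildFormBody(String[][] params) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (String[] p : params) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(p[0], "UTF-8"))
              .append('=')
              .append(URLEncoder.encode(p[1], "UTF-8"));
        }
        return sb.toString();
    }

    // Sends the body as a POST request and returns the response HTML.
    public static String postForm(String url, String formBody) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true); // allows writing a request body
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(formBody.getBytes("UTF-8"));
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                response.append(line).append('\n');
            }
        }
        return response.toString();
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical postback fields: the actual __EVENTTARGET name and the
        // __VIEWSTATE / __EVENTVALIDATION values must be scraped from the
        // hidden <input> tags of the previously fetched page.
        String body = buildFormBody(new String[][] {
            { "__EVENTTARGET", "ctl00$NextPageButton" },
            { "__VIEWSTATE", "value-parsed-from-previous-page" },
        });
        System.out.println(body);
        // String nextPage = postForm("http://cpdocket.cp.cuyahogacounty.us/SheriffSearch/results.aspx", body);
    }
}
```

The key differences from a GET are `setDoOutput(true)`, the form-encoded body, and carrying the hidden state fields forward from page to page. A library such as jsoup can also send the POST and parse the hidden fields out of the HTML, which is usually easier than doing both by hand.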