I'm fairly new to web scraping and my knowledge of Java is limited.
Every time I run this code, I get the following error message:
Exception in thread "main" java.lang.NullPointerException
at sws.SWS.scrapeTopic(SWS.java:38)
at sws.SWS.main(SWS.java:26)
Java Result: 1
BUILD SUCCESSFUL (total time: 0 seconds)
My code is:
import java.io.*;
import java.net.*;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class SWS
{
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args)
    {
        scrapeTopic("wiki/Python");
    }

    public static void scrapeTopic(String url)
    {
        String html = getUrl("http://www.wikipedia.org/" + url);
        Document doc = Jsoup.parse(html);
        String contentText = doc.select("#mw-content-text > p").first().text();
        System.out.println(contentText);
    }

    public static String getUrl(String Url)
    {
        URL urlObj = null;
        try
        {
            urlObj = new URL(Url);
        }
        catch (MalformedURLException e)
        {
            System.out.println("The url was malformed");
            return "";
        }

        URLConnection urlCon = null;
        BufferedReader in = null;
        String outputText = "";
        try
        {
            urlCon = urlObj.openConnection();
            in = new BufferedReader(new InputStreamReader(urlCon.getInputStream()));
            String line = "";
            while ((line = in.readLine()) != null)
            {
                outputText += line;
            }
            in.close();
        }
        catch (IOException e)
        {
            System.out.println("There was a problem connecting to the url");
            return "";
        }

        return outputText;
    }
}
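From the stack trace my best guess is that the NullPointerException comes from the line calling .first().text(): if the selector "#mw-content-text > p" matches nothing, Jsoup's .first() returns null and .text() blows up. A small diagnostic version of scrapeTopic that might help narrow it down (just a sketch, assuming getUrl itself works) would look something like this:

    // Diagnostic variant of scrapeTopic: print how much HTML came back and
    // check whether the selector matched anything before calling .text().
    public static void scrapeTopicDebug(String url)
    {
        String html = getUrl("http://www.wikipedia.org/" + url);
        System.out.println("Fetched " + html.length() + " characters of HTML");

        Document doc = Jsoup.parse(html);
        org.jsoup.select.Elements paragraphs = doc.select("#mw-content-text > p");
        if (paragraphs.isEmpty())
        {
            System.out.println("Selector matched no elements");
            return;
        }
        System.out.println(paragraphs.first().text());
    }

Running that should at least tell whether the problem is the download itself (empty HTML) or the CSS selector, but I'm not sure which it is.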
I've been staring at my screen for a while now and could use some help!
Thanks in advance.