I am getting a stack overflow error when accessing a Hadoop file from Java code.
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat
{
    // Register Hadoop's URL stream handler so java.net.URL can open hdfs:// URLs.
    static
    {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception
    {
        InputStream in = null;
        try
        {
            // Open the path passed on the command line and copy its contents to stdout.
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
        finally
        {
            IOUtils.closeStream(in);
        }
    }
}
I debugged this code in Eclipse and found that the line

in = new URL(args[0]).openStream();

is what produces the error.
I run this code by passing an HDFS file path as the argument, i.e.
hdfs://localhost/user/jay/abc.txt
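
For reference, reading the same path through Hadoop's FileSystem API instead of java.net.URL would look roughly like the sketch below (untested on my setup; the class name FileSystemCat is only illustrative). I include it just to make clear that my question is specifically about the URL-based version above.

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileSystemCat
{
    public static void main(String[] args) throws Exception
    {
        String uri = args[0]; // e.g. hdfs://localhost/user/jay/abc.txt
        Configuration conf = new Configuration();
        // Obtain a FileSystem instance for the URI's scheme/authority (HDFS here).
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try
        {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
        finally
        {
            IOUtils.closeStream(in);
        }
    }
}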
Exception (extracted from the comments):
Exception in thread "main" java.lang.StackOverflowError
at java.nio.Buffer.<init>(Buffer.java:174)
at java.nio.ByteBuffer.<init>(ByteBuffer.java:259)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:52)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:237)
at java.lang.StringCoding.encode(StringCoding.java:272)
at java.lang.String.getBytes(String.java:946)
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
.. stack trace truncated ..