I am trying to read a text file that currently has about 300,000 lines.
Am I reading it the right way?
I am reading it with java.io.BufferedReader.
Here is a small code snippet that represents my approach:
int lineNumber = 1;
BufferedReader br = null;
String currentLine = null;
br = new BufferedReader(new FileReader(f)); // here f is the name of the file to be read, passed in
while ((currentLine = br.readLine()) != null) {
    // here I have written the logic to do processing after reading 1000 lines:
    // at line number 1001 it starts processing; each line read is put into a List collection,
    // and after reaching line 1001 the list is cleared and the loop continues
}
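The batching loop described in the comments above could be sketched out like this (the 1000-line batch size is taken from the question; the class name BatchReader and the way leftover lines are counted are my assumptions). Reading from a Reader rather than a hard-coded file name keeps the sketch self-contained:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class BatchReader {
    static final int BATCH_SIZE = 1000; // batch size from the question

    // Reads lines in batches of BATCH_SIZE, clearing the list after each batch
    // so memory stays bounded; returns how many batches were processed.
    static int readInBatches(Reader source) throws IOException {
        int batches = 0;
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        try (BufferedReader br = new BufferedReader(source)) {
            String currentLine;
            while ((currentLine = br.readLine()) != null) {
                batch.add(currentLine);
                if (batch.size() == BATCH_SIZE) {
                    // process the full batch here, then clear it
                    batches++;
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) { // leftover lines after the last full batch
                batches++;
            }
        }
        return batches;
    }

    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2500; i++) {
            sb.append("line ").append(i).append('\n');
        }
        // 2500 lines -> two full batches of 1000 plus one leftover batch of 500
        System.out.println(readInBatches(new StringReader(sb.toString()))); // prints 3
    }
}
```

The try-with-resources block also closes the reader automatically, which the original snippet does not do.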
I also tried the following with NIO.2:
br = Files.newBufferedReader(Paths.get(inputFileName), StandardCharsets.UTF_16);
It caused the following exception:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Unknown Source)
at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(Unknown Source)
at java.lang.AbstractStringBuilder.append(Unknown Source)
at java.lang.StringBuffer.append(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at TexttoExcelMerger.readFileLineByLine(TexttoExcelMerger.java:66)
at TexttoExcelMerger.main(TexttoExcelMerger.java:255)
First of all, is my approach right?
Is there an efficient, faster way to read the file in NIO.2, Apache FileUtils, or any other API that would speed up my file-reading process? For example, could I read a set of lines, say the first 1000, with something like
br.readFirst(1000);
instead of reading line by line and iterating as in my logic?
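BufferedReader has no readFirst method, but the effect described above can be sketched with Files.lines and Stream.limit (the class name FirstLines is my invention, and I am assuming the file is UTF-8 encoded; if the file is not actually UTF-16, passing StandardCharsets.UTF_16 as in the snippet above will decode it incorrectly):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FirstLines {
    // Returns up to n lines from the start of the file.
    // The stream reads lazily, so only the first n lines are pulled into memory.
    static List<String> readFirst(Path file, int n) throws IOException {
        try (Stream<String> lines = Files.lines(file, StandardCharsets.UTF_8)) {
            return lines.limit(n).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("a", "b", "c", "d"));
        System.out.println(readFirst(tmp, 2)); // prints [a, b]
        Files.delete(tmp);
    }
}
```

Closing the stream in try-with-resources matters here, because Files.lines holds the underlying file handle open until the stream is closed.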