(Updated answer)
I can say that whatever issues you have with your program's speed, the choice of tokenizer isn't one of them. After an initial run of each method to iron out initialisation quirks, I can parse 1,000,000 lines of "12 34" in milliseconds. You could switch to using indexOf if you like, but I really think you need to look for the bottleneck in other parts of the code, not in this micro-optimisation. Split was a surprise to me - it is really, really slow compared to the other methods. I've added Guava's splitter to the tests, and it's faster than String.split but marginally slower than StringTokenizer.
- Split: 371ms
- IndexOf: 48ms
- StringTokenizer: 92ms
- Guava Splitter.split(): 108ms
- CsvMapper building a CSV doc and parsing into POJOs: 237ms (or 175ms if you build the lines into one doc!)
Even over millions of lines, the differences here are negligible.
There is now a write-up of this on my blog: http://demeranville.com/battle-of-the-tokenizers-delimited-text-parser-performance/
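The indexOf approach benchmarked below handles exactly two fields. As a sketch (not part of the original benchmark; the class and method names are illustrative), the same indexOf/substring idea extends to a line with any number of space-separated integers:

```java
import java.util.ArrayList;
import java.util.List;

public class IndexOfParse {

    // Walk the line with indexOf, slicing out each token with substring.
    static List<Integer> parseInts(String line) {
        List<Integer> out = new ArrayList<>();
        int start = 0;
        int idx;
        while ((idx = line.indexOf(' ', start)) != -1) {
            out.add(Integer.parseInt(line.substring(start, idx)));
            start = idx + 1;
        }
        out.add(Integer.parseInt(line.substring(start))); // final token
        return out;
    }

    public static void main(String[] args) {
        System.out.println(parseInts("12 34 56")); // [12, 34, 56]
    }
}
```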
The code I ran was:
```java
import java.util.StringTokenizer;
import org.junit.Test;

public class TestSplitter {

    private static final String line = "12 34";
    private static final int RUNS = 1000000;

    public final void testSplit() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < RUNS; i++) {
            String[] st = line.split(" ");
            int x = Integer.parseInt(st[0]);
            int y = Integer.parseInt(st[1]);
        }
        System.out.println("Split: " + (System.currentTimeMillis() - start) + "ms");
    }

    public final void testIndexOf() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < RUNS; i++) {
            int index = line.indexOf(' ');
            int x = Integer.parseInt(line.substring(0, index));
            int y = Integer.parseInt(line.substring(index + 1));
        }
        System.out.println("IndexOf: " + (System.currentTimeMillis() - start) + "ms");
    }

    public final void testTokenizer() {
        long start = System.currentTimeMillis();
        for (int i = 0; i < RUNS; i++) {
            StringTokenizer st = new StringTokenizer(line, " ");
            int x = Integer.parseInt(st.nextToken());
            int y = Integer.parseInt(st.nextToken());
        }
        System.out.println("StringTokenizer: " + (System.currentTimeMillis() - start) + "ms");
    }

    @Test
    public final void testAll() {
        // run each method twice; the first pass absorbs initialisation costs
        this.testSplit();
        this.testIndexOf();
        this.testTokenizer();
        this.testSplit();
        this.testIndexOf();
        this.testTokenizer();
    }
}
```
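The harness above times with System.currentTimeMillis and relies on running each test twice to discard first-pass JIT and initialisation effects (a proper microbenchmark framework such as JMH would be more rigorous). As a sketch of factoring that warm-up pattern out (this helper is illustrative, not from the original answer):

```java
public class TimingHarness {

    // Run the task a few times untimed so the JIT can compile hot paths,
    // then time the measured runs.
    static long timeMillis(Runnable task, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) {
            task.run();
        }
        long start = System.currentTimeMillis();
        for (int i = 0; i < runs; i++) {
            task.run();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        String line = "12 34";
        long elapsed = timeMillis(() -> {
            int idx = line.indexOf(' ');
            Integer.parseInt(line.substring(0, idx));
            Integer.parseInt(line.substring(idx + 1));
        }, 1000, 1000000);
        System.out.println("IndexOf: " + elapsed + "ms");
    }
}
```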
ETA: here is the Guava code:
```java
// requires: import com.google.common.base.Splitter; import java.util.Iterator;
public final void testGuavaSplit() {
    long start = System.currentTimeMillis();
    Splitter split = Splitter.on(" ");
    for (int i = 0; i < RUNS; i++) {
        Iterator<String> it = split.split(line).iterator();
        int x = Integer.parseInt(it.next());
        int y = Integer.parseInt(it.next());
    }
    System.out.println("GuavaSplit: " + (System.currentTimeMillis() - start) + "ms");
}
```
Update
I also added a CsvMapper test:
```java
// requires jackson-dataformat-csv:
// import com.fasterxml.jackson.core.JsonProcessingException;
// import com.fasterxml.jackson.databind.MappingIterator;
// import com.fasterxml.jackson.dataformat.csv.CsvMapper;
// import com.fasterxml.jackson.dataformat.csv.CsvSchema;
// import com.fasterxml.jackson.dataformat.csv.CsvSchema.ColumnType;
// import java.io.IOException;
public static class CSV {
    public int x;
    public int y;
}

public final void testJacksonSplit() throws JsonProcessingException, IOException {
    CsvMapper mapper = new CsvMapper();
    CsvSchema schema = CsvSchema.builder()
            .addColumn("x", ColumnType.NUMBER)
            .addColumn("y", ColumnType.NUMBER)
            .setColumnSeparator(' ')
            .build();
    long start = System.currentTimeMillis();
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < RUNS; i++) {
        builder.append(line);
        builder.append('\n');
    }
    String input = builder.toString();
    MappingIterator<CSV> it = mapper.reader(CSV.class).with(schema).readValues(input);
    while (it.hasNext()) {
        CSV csv = it.next();
    }
    System.out.println("CsvMapperSplit: " + (System.currentTimeMillis() - start) + "ms");
}
```