I'm running a very simple benchmark:
- There is a large file of random binary data, about 6 GB in size
- The algorithm performs "SeekCount" iterations in a loop
- Each iteration does the following:
  - compute a random offset within the file size range
  - seek to that offset
  - read a small block of data

C#:
public static void Test()
{
    string fileName = @"c:\Test\big_data.dat";
    int NumberOfSeeks = 1000;
    int MaxNumberOfBytes = 1;
    long fileLength = new FileInfo(fileName).Length;
    FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 65536, FileOptions.RandomAccess);

    Console.WriteLine("Processing file \"{0}\"", fileName);

    Random random = new Random();
    DateTime start = DateTime.Now;
    byte[] byteArray = new byte[MaxNumberOfBytes];

    for (int index = 0; index < NumberOfSeeks; ++index)
    {
        long offset = (long)(random.NextDouble() * (fileLength - MaxNumberOfBytes - 2));
        stream.Seek(offset, SeekOrigin.Begin);
        stream.Read(byteArray, 0, MaxNumberOfBytes);
    }

    Console.WriteLine(
        "Total processing time {0} ms, speed {1} seeks/sec\r\n",
        DateTime.Now.Subtract(start).TotalMilliseconds,
        NumberOfSeeks / (DateTime.Now.Subtract(start).TotalMilliseconds / 1000.0));

    stream.Close();
}
Then the same test in C++:
void test()
{
    const int kTimes = 1000; // assumed value; kTimes is not defined in the original snippet

    FILE* file = fopen("c:\\Test\\big_data.dat", "rb");
    char buf = 0;
    __int64 fileSize = 6216672671; // ftell(file);
    __int64 pos;

    DWORD dwStart = GetTickCount();
    for (int i = 0; i < kTimes; ++i)
    {
        pos = (__int64)((rand() % 100) * 0.01 * fileSize);
        _fseeki64(file, pos, SEEK_SET);
        fread((void*)&buf, 1, 1, file);
    }
    DWORD dwEnd = GetTickCount() - dwStart;

    // GetTickCount returns milliseconds, so divide by 1000 (not CLOCKS_PER_SEC,
    // which belongs to clock() and only coincidentally equals 1000 on Windows)
    printf(" - Raw Reading: %d times reading took %d ticks, e.g. %d sec. Speed: %d items/sec\n",
           kTimes, dwEnd, dwEnd / 1000, kTimes / (dwEnd / 1000));

    fclose(file);
}
Results:
- C#: 100-200 reads/sec
- C++: 250,000 reads/sec

Question: why is C++ thousands of times faster than C# at something as trivial as reading from a file?
Additional information:
- I played with the stream buffers and set them to the same size (4 KB)
- The disk is defragmented (0% fragmentation)
- OS configuration: Windows 7, NTFS, a fairly recent 500 GB hard drive (WD, if I remember correctly), 8 GB RAM (though almost none of it is used), 4-core CPU (utilization is nearly zero)
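One measurement caveat when comparing the two throughput numbers: both snippets use millisecond-resolution timers and integer arithmetic, which distorts (or divides by zero on) runs shorter than one second. A sketch (using `std::chrono`, which neither snippet uses; the function name `seeksPerSecond` is my own) of computing the rate in floating point:

```cpp
#include <chrono>

// Compute seeks/sec from an elapsed steady_clock duration, in floating
// point so that sub-second runs neither truncate nor divide by zero.
double seeksPerSecond(int seeks, std::chrono::steady_clock::duration elapsed)
{
    double seconds = std::chrono::duration<double>(elapsed).count();
    return seconds > 0.0 ? seeks / seconds : 0.0;
}
```

On the C# side, `System.Diagnostics.Stopwatch` serves the same purpose as `steady_clock` and is preferable to differencing `DateTime.Now`, whose resolution is around 10-15 ms.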