
Appending to a file instead of updating a faster random-access file

I was wondering: if we append to a file, won't we get fragmentation? I've read that databases such as MySQL first append data to a log-based file, and only later save the data into the actual "tables".

So if we append to a file and its size keeps changing, won't we get fragmentation and run into the same problems we have when writing to a random-access file?
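For context, the two write patterns being contrasted can be sketched in a few lines of Python (a toy illustration only; the file names and sizes are made up, and this is not how MySQL actually writes):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    append_path = os.path.join(d, "log.dat")    # hypothetical append-only log
    random_path = os.path.join(d, "table.dat")  # hypothetical fixed-size "table" file

    # Append-only: every write extends the file, so its size keeps growing.
    with open(append_path, "ab") as f:
        for i in range(10):
            f.write(b"row-%03d\n" % i)  # 8 bytes per record

    # Random access: preallocate once, then overwrite records in place.
    with open(random_path, "wb") as f:
        f.write(b"\x00" * 160)          # fixed size, allocated up front
    with open(random_path, "r+b") as f:
        for i in range(10):
            f.seek(i * 8)               # jump to the record's slot
            f.write(b"row-%03d\n" % i)

    append_size = os.path.getsize(append_path)  # grew to 80 bytes
    random_size = os.path.getsize(random_path)  # still 160 bytes
```

The growing file is the one the file system may have to place in scattered extents; the preallocated file keeps whatever layout it was given at creation time.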


1 Answer


(First, let's get this out of the way: file-system-level fragmentation is really an issue on Microsoft file systems only. The comments on fragmentation below apply only to Microsoft file systems, as the question is virtually irrelevant on Linux file systems.)

You seem to be mixing two distinct mechanisms in MySQL:

The Binary Log

The binary log contains "events" that describe database changes such as table creation operations or changes to table data.

It really is a log. It is written to sequentially at the end of every transaction, and it grows indefinitely as new transactions are committed. For completeness, let's mention that a new file is started when the log reaches max_binlog_size, à la logrotate.
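That append-then-rotate behaviour could be sketched like this (a toy Python model, not MySQL code; the tiny 64-byte cap stands in for max_binlog_size):

```python
import os
import tempfile

MAX_LOG_SIZE = 64  # hypothetical tiny cap, playing the role of max_binlog_size


class RotatingLog:
    """Append-only log that starts a new file once the current one
    reaches a size limit, similar in spirit to binlog rotation."""

    def __init__(self, directory):
        self.directory = directory
        self.index = 0
        self._open()

    def _open(self):
        path = os.path.join(self.directory, "binlog.%06d" % self.index)
        self.current = open(path, "ab")

    def append(self, event):
        # Sequential write at the end of the file: never seek backwards.
        self.current.write(event + b"\n")
        self.current.flush()
        if self.current.tell() >= MAX_LOG_SIZE:
            # Rotate: close the full file and start a fresh one.
            self.current.close()
            self.index += 1
            self._open()


with tempfile.TemporaryDirectory() as d:
    log = RotatingLog(d)
    for i in range(20):
        log.append(b"event-%02d" % i)  # 10 bytes per event, newline included
    log.current.close()
    files = sorted(os.listdir(d))
```

Each individual file still grows until rotation, so it is the per-file growth (not the rotation) that exposes it to fragmentation on file systems that suffer from it.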

This file is subject to fragmentation as you describe it, because it grows constantly. But this fragmentation has little impact on performance, because it is not used intensively.

The Redo Log

A disk-based data structure used during crash recovery, to correct data written by incomplete transactions. (...) Modifications that did not finish updating the data files before an unexpected shutdown are replayed automatically [hence its name].

This log is more a buffer than a log. It consists of a set of files of fixed size, allocated once at startup. Changes to the database are first written sequentially into this buffer zone, then transferred into the actual data files.
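A minimal sketch of such a fixed-size, preallocated buffer (toy Python, not InnoDB's actual implementation; the 4096-byte size is an arbitrary stand-in):

```python
import os
import tempfile

REDO_SIZE = 4096  # hypothetical fixed size, allocated once up front


class RedoBuffer:
    """Fixed-size, preallocated file used as a circular write area,
    loosely modelled on the idea behind the InnoDB redo log files."""

    def __init__(self, path):
        # Preallocate the whole file once; its size never changes afterwards,
        # which is why it is not prone to growth-driven fragmentation.
        with open(path, "wb") as f:
            f.write(b"\x00" * REDO_SIZE)
        self.f = open(path, "r+b")
        self.pos = 0

    def write(self, record):
        # Wrap around once the end of the fixed area is reached.
        if self.pos + len(record) > REDO_SIZE:
            self.pos = 0
        self.f.seek(self.pos)
        self.f.write(record)
        self.pos += len(record)


with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
redo = RedoBuffer(path)
for i in range(1000):
    redo.write(b"change-%04d" % i)  # 11 bytes per record
redo.f.close()
size = os.path.getsize(path)  # unchanged: still REDO_SIZE bytes
os.unlink(path)
```

Because every write lands inside space that was allocated in one go at startup, the file system never has to extend the file, so no new extents get scattered around the disk.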

This buffer is not really subject to fragmentation, because its size is fixed. Fragmentation may still occur because of other factors, but it should be minimal anyway.


PS: I would always avoid Windows as a host OS for MySQL in production. Performance on Windows still lags a bit behind Linux anyway.

PPS: Tables need defragmentation regardless of the underlying file system.

answered 2013-11-12T01:58:35.973