7 Answers
There is a third-party tool that can do this:
https://github.com/kilobyte/compsize
Usage:
ayush@devbox:/code/compsize$ sudo compsize /opt
Processed 54036 files, 42027 regular extents (42028 refs), 27150 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
Data        82%         5.3G         6.4G         6.4G
none       100%         4.3G         4.3G         4.3G
zlib        37%         427M         1.1G         1.1G
lzo         56%         588M         1.0G         1.0G
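compsize also accepts individual files or directories as arguments, so a per-file check is possible too; for example (the path is just an illustration):
$ sudo compsize /opt/some/file
This prints the same Disk Usage / Uncompressed / Referenced breakdown for only that file.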
I am not able to answer on a file-by-file basis, and @catlover2 gave the answer for a filesystem. But you should differentiate between the block size on disk and the size in the (virtual) filesystem: ls and du cannot see beyond the filesystem, so they give no information on how many disk blocks are used, and @jiliagre's --apparent-size is useless here.
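For a quick per-file look at both notions of size, GNU stat can print the apparent byte count next to the number of blocks actually allocated (a minimal sketch; the file name is hypothetical, and as the test below shows, btrfs reports these blocks as if the data were uncompressed, so this does not reveal compression either):
$ stat -c 'apparent: %s bytes, allocated: %b blocks of %B bytes' bigfile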
To better illustrate this question, I made a test with a single 23G file on a btrfs filesystem, first uncompressed, then lzo-compressed. The example file is a virtual machine image and has a compression level of only about 0.5. It shows that only df and btrfs filesystem df can show the compression.
$ lvcreate vg0 test_btrfs -L 30G
Logical volume "test_btrfs" created
$ mkfs.btrfs /dev/vg0/test_btrfs
...
fs created label (null) on /dev/vg0/test_btrfs
nodesize 16384 leafsize 16384 sectorsize 4096 size 30.00GiB
$ mount /dev/vg0/test_btrfs /tmp/test_btrfs
$ btrfs filesystem df /tmp/test_btrfs
Data, single: total=8.00MiB, used=256.00KiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=112.00KiB
Metadata, single: total=8.00MiB, used=0.00
$ cp bigfile /tmp/test_btrfs
$ btrfs filesystem df /tmp/test_btrfs
Data, single: total=24.01GiB, used=22.70GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=23.64MiB
Metadata, single: total=8.00MiB, used=0.00
$ btrfs filesystem df /tmp/test_btrfs
... unchanged!
$ cd /tmp/test_btrfs/
$ ls -l bigfile
-rw------- 1 root root 24367940096 May 4 15:03 bigfile
$ du -B1 --apparent-size bigfile
24367940096 bigfile
$ du -B1 bigfile
24367943680 bigfile
$ btrfs filesystem defragment -c bigfile
$ ls -l bigfile
-rw------- 1 root root 24367940096 May 4 15:03 bigfile
$ du -B1 --apparent-size bigfile
24367940096 bigfile
$ du -B1 bigfile
24367943680 bigfile
$ btrfs filesystem df /tmp/test_btrfs
Data, single: total=24.01GiB, used=12.90GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=39.19MiB
Metadata, single: total=8.00MiB, used=0.00
$ df -BG /tmp/test_btrfs
Filesystem 1G-blocks Used Available Use% Mounted on
/dev/mapper/vg0-test_btrfs 30G 13G 16G 47% /tmp/test_btrfs
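Comparing the two btrfs filesystem df outputs above, the compressed data now occupies about 56% of its former space (12.90 GiB versus 22.70 GiB), consistent with the ~0.5 compression level of this image; a quick check of the arithmetic:
$ echo "scale=2; 12.90 / 22.70" | bc
.56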
The question of @gandalf3 is still unanswered, and maybe we need to wait for further btrfs development (or help develop it!) to get a proper underlying disk-block du for a particular file. It would be very useful; I find it very frustrating, when I mount a btrfs filesystem with compression (without force), not to know whether my files are compressed or not, and at which level.
In Ubuntu 18:
apt install btrfs-compsize
compsize /mnt/btrfs-partition
The on-disk size of a file, regardless of the filesystem type, is given by the du command¹, e.g.:
$ du -h *
732K file
512 file1
4.0M file2
$ du -B1 *
749568 file
512 file1
4091904 file2
The on-disk size equals the file size plus metadata, rounded up to the filesystem block size. The on-disk size of uncompressed files is therefore usually slightly larger than their actual (byte-count) size.
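For example, with a 4096-byte block size, a 5000-byte uncompressed file is rounded up to two blocks, i.e. 8192 bytes on disk; the same rounding done in shell arithmetic (the numbers are only illustrative):
$ echo $(( (5000 + 4095) / 4096 * 4096 ))
8192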
As mentioned before, the uncompressed size is shown by ls -l. It can also be obtained from du with the --apparent-size option:
$ du --apparent-size -h *
826K file
64M file1
17M file2
$ du --apparent-size -B 1 *
845708 file
67108864 file1
16784836 file2
Note that -B1 and --apparent-size are GNU-specific du extensions.
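A somewhat more portable way to see the allocated size is ls -s, which prints the number of blocks each file occupies (the block unit varies between implementations, 1K by default for GNU ls):
$ ls -s file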
¹ It seems that btrfs does not follow this rule. If this is (still) true, my understanding is that it should be considered a bug, or at least a POSIX non-conformance.
I also tried to answer this question, and here is what I found: du -s and df produce different numbers, so I did some tests:
I placed a test directory of roughly 3 TB in /home. It is a partial copy of the whole /home directory, containing a typical mix of documents, text files, images and programs.
I compressed this directory as a .tar.gz, resulting in a file of this size:
# du -s ./test.tar.gz
1672083116 ./test.tar.gz
- With this file present in the filesystem, I did this:
# du -s /home
11017624664 /home
# du --apparent-size -s /home
11010709168 /home
# df /home
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md2 31230406656 9128594488 22095200200 30% /home
This means we have a compression ratio of ((11017624664/(1024**2))/(9128594488/(1024**2))-1)*100 = 20%.
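The /(1024**2) factors cancel, so the same figure can be checked directly with a quick bc one-liner:
$ echo "scale=4; (11017624664 / 9128594488 - 1) * 100" | bc
20.6900
which is the ~20% quoted above.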
- Then I deleted this file and got this:
# du -s /home
9348284812 /home
# du --apparent-size -s /home
9340957158 /home
# df /home
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md2 31230406656 7455549036 23764949364 24% /home
This yields a compression ratio of 25%. Also from this information I conclude that the test.tar.gz file, with an actual size of 1592 G, occupies 1595 G on disk. I also noticed that using the --apparent-size flag makes only a negligible difference, probably due to block-size rounding.
As a side note, the fstab line I use to mount this partition is:
UUID=be6...07fe /home btrfs defaults,compress=zlib 0 2
Summary:
To check the compression ratio of a whole partition, use these two commands:
du -s /home
df /home
then divide the outputs. I suppose my 25% compression ratio is a typical result for the zlib compressor.
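Here is a minimal sketch that automates that division (assuming GNU coreutils; /home is the mount point from the test above and can be replaced):
#!/bin/sh
# Estimate the compression ratio of a mounted, compressed btrfs filesystem
# by comparing du (sizes as seen through the filesystem) with df (blocks
# actually used on disk), both in 1K units.
MNT=${1:-/home}                                   # mount point to inspect
du_kb=$(du -skx "$MNT" | awk '{print $1}')        # summarized usage, one filesystem only
df_kb=$(df -Pk "$MNT" | awk 'NR==2 {print $3}')   # blocks used on disk
awk -v a="$du_kb" -v b="$df_kb" \
    'BEGIN { printf "compression ratio: %.1f%%\n", (a / b - 1) * 100 }'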
You can create a Btrfs filesystem in a file, mount it, copy the files there and run df:
$ dd if=/dev/zero of=btrfs.data bs=1M count=1K
$ mkfs.btrfs btrfs.data
$ mkdir btrfs
$ mount btrfs.data btrfs -o compress
... copy the files to ./btrfs
$ sync
$ cd btrfs
$ btrfs filesystem df .
An example of a single file compressed from 17 MiB to 5 MiB:
$ cd btrfs
$ ls -l
-rwx------ 1 atom atom 17812968 Oct 27 2015 commands.bin
$ btrfs filesystem df .
Data, single: total=1.01GiB, used=5.08MiB
System, DUP: total=8.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=112.00KiB
GlobalReserve, single: total=16.00MiB, used=0.00B
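Rough arithmetic on those numbers: 17812968 bytes is about 17 MiB, and 5.08 MiB used on disk puts this file at roughly 30% of its original size:
$ echo "scale=2; 5.08 / (17812968 / 1024 / 1024)" | bc
.29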
Run btrfs filesystem df /mountpoint.
Example output:
Data: total=2.01GB, used=1.03GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=2.52MB
Metadata: total=8.00MB, used=0.00
The key line starts with Data:; used= is the compressed size, and total= is the total size as if on an uncompressed filesystem. I created a test filesystem, mounted it with the compress_force=zlib option, and copied 1 GB of zeroes to a file on the filesystem; at that point the Data: line was Data: total=1.01GB, used=32.53MB (zeroes are quite compressible!). Then I re-mounted the filesystem with compression disabled, copied another GB of zeroes to it, and at that point the Data: line read Data: total=2.01GB, used=1.03GB.
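As a quick sanity check of those numbers, the zero-filled gigabyte shrank to roughly 3% of its nominal size (32.53 MB out of 1.01 GB):
$ echo "scale=3; 32.53 / (1.01 * 1024)" | bc
.031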
As nemequ mentioned above, ls -l, by contrast, shows the uncompressed size.