
Merging upstream version 0.7.

Signed-off-by: Daniel Baumann <daniel@debian.org>
Daniel Baumann 2025-02-17 22:51:57 +01:00
parent d419679950
commit ca990cc36e
Signed by: daniel
GPG key ID: FBB4F0E80A80222F
15 changed files with 74 additions and 74 deletions

README

@@ -18,7 +18,7 @@ masked by a low compression ratio in the last members.
The xlunzip tarball contains a copy of the lzip_decompress module and can be
compiled and tested without downloading or applying the patch to the kernel.
My lzip patch for linux can be found at
http://download.savannah.gnu.org/releases/lzip/kernel/
@@ -61,8 +61,8 @@ data is uncompressible. The worst case is very compressible data followed by
uncompressible data because in this case the output pointer increases faster
when the input pointer is smaller.
-| * <-- input pointer
-| * , <-- output pointer
+| * <-- input pointer (*)
+| * , <-- output pointer (,)
| * , '
| x ' <-- overrun (x)
memory | * ,'
@@ -71,7 +71,7 @@ address | * ,'
| ,'
| ,'
|,'
-`--------------------------
+'--------------------------
time
All we need to know to calculate the minimum required extra space is:
@@ -82,19 +82,23 @@ All we need to know to calculate the minimum required extra space is:
The maximum expansion ratio of LZMA data is of about 1.4%. Rounding this up
to 1/64 (1.5625%) and adding 36 bytes per input member, the extra space
required to decompress lzip data in place is:
extra_bytes = ( compressed_size >> 6 ) + members * 36
-Using the compressed size to calculate the extra_bytes (as in the equation
+Using the compressed size to calculate the extra_bytes (as in the formula
above) may slightly overestimate the amount of space required in the worst
case. But calculating the extra_bytes from the uncompressed size (as does
-linux) is wrong (and inefficient for high compression ratios). The formula
-used in arch/x86/boot/header.S
-extra_bytes = (uncompressed_size >> 8) + 65536
-fails with 1 MB of zeros followed by 8 MB of random data, and wastes memory
-for compression ratios > 4:1.
+linux currently) is wrong (and inefficient for high compression ratios). The
+formula used in arch/x86/boot/header.S
+extra_bytes = ( uncompressed_size >> 8 ) + 65536
+fails to decompress 1 MB of zeros followed by 8 MB of random data, wastes
+memory for compression ratios larger than 4:1, and does not even consider
+multimember data.
-Copyright (C) 2016-2020 Antonio Diaz Diaz.
+Copyright (C) 2016-2021 Antonio Diaz Diaz.
This file is free documentation: you have unlimited permission to copy,
distribute, and modify it.