Caridorc - 3 months ago
Python Question

Why do Python and wc disagree on byte count?

Python and wc disagree drastically on the byte count (length) of a given string:

with open("commedia.pfc", "w") as f:
    t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
    f.write(t)
print(len(t))

Output : 318885

$> wc commedia.pfc
2181 12282 461491 commedia.pfc
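(For reference, wc's three columns are the line, word, and byte counts, so 461491 is the on-disk byte count. A minimal sketch of checking that same number from Python, using a throwaway file name rather than the real commedia.pfc:)

```python
import os

# A throwaway file standing in for commedia.pfc (hypothetical name).
with open("demo.txt", "w", encoding="utf-8") as f:
    f.write("ò" * 10)  # 10 characters, each taking 2 bytes in UTF-8

with open("demo.txt", encoding="utf-8") as f:
    print(len(f.read()))            # 10 -- characters, like len(t)

print(os.path.getsize("demo.txt"))  # 20 -- bytes on disk, what wc counts
os.remove("demo.txt")
```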

The file is mostly made of unreadable characters, so I will provide a hexdump:

The file is the result of a prefix-free compression; if you ask, I can provide the full code that generates it, along with the input text.

Why aren't both byte counts equal?

I have added the full code of the compression algorithm; it looks long, but it is full of documentation and tests, so it should be easy to understand:

"""Implementation of prefix-free compression and decompression."""
import doctest
from itertools import islice
from collections import Counter
import random
import json

def binary_strings(s):
    """Given an initial list of binary strings `s`,
    yield all binary strings ending in one of `s` strings.

    >>> take(9, binary_strings(["010", "111"]))
    ['010', '111', '0010', '1010', '0111', '1111', '00010', '10010', '01010']
    """
    yield from s
    while True:
        s = [b + x for x in s for b in "01"]
        yield from s

def take(n, iterable):
    """Return first n items of the iterable as a list."""
    return list(islice(iterable, n))

def chunks(xs, n, pad='0'):
    """Yield successive n-sized chunks from xs."""
    for i in range(0, len(xs), n):
        yield xs[i:i + n]

def reverse_dict(dictionary):
    """
    >>> sorted(reverse_dict({1:"a",2:"b"}).items())
    [('a', 1), ('b', 2)]
    """
    return {value: key for key, value in dictionary.items()}

def prefix_free(generator):
    """Given a `generator`, yield all the items from it
    that do not start with any preceding element.

    >>> take(6, prefix_free(binary_strings(["00", "01"])))
    ['00', '01', '100', '101', '1100', '1101']
    """
    seen = []
    for x in generator:
        if not any(x.startswith(i) for i in seen):
            yield x
            seen.append(x)

def build_translation_dict(text, starting_binary_codes=["000", "100", "111"]):
    """Builds a dict for `prefix_free_compression` where
    more common chars map to shorter binary strings.
    This achieves compression, as the shorter binary strings will be seen
    more often than the long ones.

    Unambiguous decoding is guaranteed by the binary strings being prefix free.

    >>> sorted(build_translation_dict("aaaaa bbbb ccc dd e", ["01", "11"]).items())
    [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
    """
    binaries = sorted(take(len(set(text)), prefix_free(binary_strings(starting_binary_codes))), key=len)
    frequencies = Counter(text)
    # sort by descending frequency; char value tiebreaker to avoid non-determinism
    alphabet = sorted(set(text), key=(lambda ch: (-frequencies[ch], ch)))
    return dict(zip(alphabet, binaries))

def prefix_free_compression(text, starting_binary_codes=["000", "100", "111"]):
    """Implements `prefix_free_compression`; simply uses the dict
    made with `build_translation_dict`.

    Returns a tuple (compressed_message, translation_dict), as the dict
    is needed for decompression.

    >>> prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])[0]
    '010101010100111111111001101101101001000100010011001'
    """
    translate = build_translation_dict(text, starting_binary_codes)
    return ''.join(translate[i] for i in text), translate

def prefix_free_decompression(compressed, translation_dict):
    """Decompresses a prefix-free `compressed` message in the form of a string
    composed only of '0' and '1'.

    Since the binary codes are prefix free,
    the decompression is allowed to take the earliest match it finds.

    >>> message, d = prefix_free_compression("aaaaa bbbb ccc dd e", ["01", "11"])
    >>> message
    '010101010100111111111001101101101001000100010011001'
    >>> sorted(d.items())
    [(' ', '001'), ('a', '01'), ('b', '11'), ('c', '101'), ('d', '0001'), ('e', '1001')]
    >>> ''.join(prefix_free_decompression(message, d))
    'aaaaa bbbb ccc dd e'
    """
    decoding_translate = reverse_dict(translation_dict)
    word = ''
    for bit in compressed:
        if word in decoding_translate:
            yield decoding_translate[word]
            word = ''
        word += bit
    yield decoding_translate[word]

if __name__ == "__main__":
    doctest.testmod()
    with open("commedia.txt") as f:
        text = f.read()
    compressed, d = prefix_free_compression(text)
    with open("commedia.pfc", "w") as f:
        t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))
        f.write(t)
    with open("commedia.pfcd", "w") as f:
        json.dump(d, f)  # save the translation dict for later decompression
    # dividing by 8 goes from bit length to byte length
    print("Compressed / uncompressed ratio is {}".format((len(compressed) // 8) / len(text)))
    original = ''.join(prefix_free_decompression(compressed, d))
    assert original == text



You are using Python 3 and a str object - that means the count you see in len(t) is the number of characters in the string. And characters are not bytes - that has been the case since the 1990s.

Since you did not declare an explicit text encoding, the file is written using the system default encoding - which on Linux or Mac OS X will be utf-8 - an encoding in which any character outside the ASCII range (ord(ch) > 127) uses more than one byte on disk.
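A quick interactive check makes the mismatch obvious:

```python
s = chr(200)  # 'È' -- one character, outside the ASCII range
print(len(s))                   # 1: len() counts characters
print(len(s.encode("utf-8")))   # 2: the same character takes two bytes in UTF-8
print(len(s.encode("latin1")))  # 1: latin1 maps codepoints 0-255 to single bytes
```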

So, your program is basically wrong. First, decide whether you are dealing with text or bytes. If you are dealing with bytes, open the file for writing in binary mode (wb, not w) and change this line:

t = ''.join(chr(int(b, base=2)) for b in chunks(compressed, 8))

to:

t = bytes(int(b, base=2) for b in chunks(compressed, 8))

That way it is clear that you are working with the bytes themselves, and not mangling characters and bytes.
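Putting the two fixes together, a minimal sketch (the bit string and file name here are made up for the example):

```python
import os

def chunks(xs, n):
    # yield successive n-sized chunks from xs (same helper as in the question)
    for i in range(0, len(xs), n):
        yield xs[i:i + n]

compressed = "0100000101000010"  # 16 bits: the bytes 0x41 and 0x42

# bytes() takes integers in range(256) directly -- no chr() round trip
data = bytes(int(b, base=2) for b in chunks(compressed, 8))

with open("out.bin", "wb") as f:  # "wb": binary mode, no text encoding
    f.write(data)

# len() of the bytes object and the on-disk size now agree.
assert len(data) == os.path.getsize("out.bin") == 2
os.remove("out.bin")
```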

Of course, there is an ugly workaround to do a "transparent encoding" of the text you had into a bytes object (if your original string has all of its character codepoints in the 0-255 range, that is): you could encode your previous t with the latin1 encoding before writing it to a file. But that would have been just wrong semantically.
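For completeness, that (semantically dubious) workaround would look like this:

```python
# A str whose codepoints all fit in 0-255 (hypothetical stand-in for t).
t = ''.join(chr(b) for b in (65, 200, 255))

raw = t.encode("latin1")  # each codepoint becomes exactly one byte
assert len(raw) == len(t) == 3

# and it round-trips losslessly:
assert raw.decode("latin1") == t
```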

You can also experiment with Python's little-known bytearray object: it lets you deal with elements that are 8-bit numbers, with the convenience of being mutable and extendable (just like a C "string" with enough memory space pre-allocated).
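For example, building up a byte buffer incrementally:

```python
buf = bytearray()
for b in ("01000001", "01000010", "01000011"):
    buf.append(int(b, base=2))  # each element is an int in range(256)

buf.extend(b"!")   # extendable in place
buf[0] = ord("a")  # and mutable, unlike bytes

assert bytes(buf) == b"aBC!"
```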