Re: @Deltics
"An interesting feature of a perfect compression is that the output bit stream is (if one did not know that it was a compressor's output) perfectly random."
Not true, for two main reasons.
First, you haven't defined what "perfectly random" means. You have a general idea of what it means, but I'm fairly sure you couldn't back it up mathematically. And why the hedge "if one did not know that it was a compressor's output"? If the output really were statistically random, knowing where it came from wouldn't change the result of any test you run on it.
If you take a simple compressor that does something like Lempel-Ziv 77 (LZ77, the scheme behind DEFLATE; LZW, despite the similar name, emits dictionary indices rather than the tokens described here), you get a stream of output tokens that each encode either a verbatim (literal) section or a pointer + length pair referring back to a previously seen section of the file. Both token types have statistical patterns that make the output distinguishable from random data, and real container formats add fixed headers and checksums on top.
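Here's a minimal sketch of the point, using Python's standard zlib module (DEFLATE, i.e. LZ77 + Huffman coding) as a stand-in compressor:

```python
import zlib

# Compress ordinary, repetitive text with zlib (DEFLATE = LZ77 + Huffman).
data = b"the quick brown fox jumps over the lazy dog " * 50
compressed = zlib.compress(data)

# The output is anything but "perfectly random": the zlib container starts
# with a fixed two-byte header (0x78 plus a flags byte, 0x9c at the default
# level) and ends with a four-byte Adler-32 checksum of the uncompressed data.
print(compressed[:2].hex())   # -> '789c' with default settings
print(compressed[-4:].hex())  # -> Adler-32 of the input: pure structure

# Truly random bytes would begin with 0x78 0x9c only ~1 time in 65536;
# zlib output does so essentially every time.
```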
Second, there's this notion of "perfect" compression. Again the terminology is undefined, but a fundamental theorem of compression is that no lossless algorithm can compress all inputs. The counting (pigeonhole) argument: there are 2^n possible inputs of n bits but fewer than 2^n outputs shorter than n bits, so if the mapping is reversible (and accepts arbitrary input) then some inputs will compress to smaller outputs (or the same size) while others must map to larger outputs. Those larger outputs (and even the smaller ones, to some degree) will tend to be at least partly systematic, meaning some input symbols appear verbatim in the output. All that running a file through a compressor proves is how good that compressor is at exploiting particular kinds of redundancy/structure in the input, not how much entropy the input or output has in absolute terms.
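The counting argument is easy to see in practice. A quick sketch, again assuming Python's zlib; the exact byte counts will vary with the library version, but the direction won't:

```python
import os
import zlib

# Pigeonhole in practice: feed the compressor high-entropy input and the
# output is *larger* than the input, since lossless compression can't
# shrink everything and the container overhead has nowhere to hide.
random_data = os.urandom(100_000)
print(len(zlib.compress(random_data)))  # a little over 100000

# A second pass over already-compressed data gains essentially nothing:
# the easy redundancy is gone after the first pass.
text = b"abcd" * 100_000
once = zlib.compress(text)
twice = zlib.compress(once)
print(len(text), len(once), len(twice))
```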
This false assumption (that compression output is random and uncorrelated with its input) has real security consequences: the length of the compressed output is correlated with the input, and that correlation has been exploited in attacks on SSL/TLS connections:
https://en.wikipedia.org/wiki/CRIME_%28security_exploit%29
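For a flavour of how CRIME works, here's a toy sketch (again using zlib, with a made-up SECRET standing in for a session cookie; this illustrates the length side channel, not the actual attack):

```python
import zlib

# Toy model of the CRIME side channel: attacker-controlled plaintext is
# compressed in the same context as a secret, and the attacker can observe
# only the length of the compressed (then encrypted) result.
SECRET = b"Cookie: session=s3cr3tvalue"  # made-up secret for illustration

def compressed_len(guess: bytes) -> int:
    # The attacker's guess and the secret share one compression window.
    request = (b"GET / HTTP/1.1\r\n"
               b"Cookie: session=" + guess + b"\r\n" +
               SECRET + b"\r\n")
    return len(zlib.compress(request))

# Recover the secret one byte at a time: the guess that extends the true
# prefix gives LZ77 a longer back-reference, so it typically compresses
# to the shortest output of the batch.
known = b"s3cr3t"
for c in b"uvwxyz":
    print(chr(c), compressed_len(known + bytes([c])))
# 'v' (the correct next byte) typically yields the smallest length.
```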