If a file or data does not compress, and the “compressed” result is slightly larger than the uncompressed input, then…
This is typical if one tries to compress already-compressed data. For example, the JPG image format
is itself a compression algorithm. The more something is compressed, the more the data resembles
a sequence of random bytes. Compression algorithms work by finding non-random sequences
and replacing them with shorter sequences. But there is overhead in storing the information required
to decompress. Trying to compress already-compressed data is bad in two ways:
1) It tends to be the most CPU-intensive input case for a compression algorithm. Many CPU cycles
are spent searching for repeated patterns, but almost none are found.
2) The overhead is larger than whatever little "compression" occurs, so the result is a file
that may be slightly larger than the original.