[–] Fedegenerate@lemmynsfw.com 36 points 1 year ago* (last edited 1 year ago) (8 children)

Anyone willing to drop some learning on a lay person? Is encrypted data less compressible because it lacks the patterns compression relies on?

Or, is it less secure to encrypt first because smart people things? I know enough about cryptography to know I know fuck all about cryptography.

[–] Mic_Check_One_Two@reddthat.com 30 points 1 year ago (1 children)

It isn’t compressible at all, really. As far as a compression algorithm is concerned, it just looks like random data.

Imagine trying to compress a text file. Each letter normally takes 8 bits to represent. The computer looks at 8 bits at a time, and knows which character to display. Normally, the computer needs to look at all 8 bits even when those bits are “empty” simply because you have no way of marking when one letter stops and another begins. It’s all just 1’s and 0’s, so it’s not like you can insert “next letter” flags in that. But we can cut that down.

One of the easiest ways to do this is to count all the letters, then sort them from most to least common. Then we build a tree, with each character being a fork. You start at the top of the tree, and follow it down. You go down one fork for 0 and read the letter at your current fork on a 1. So for instance, if the letters are sorted “ABCDEF…” then “0001” would be D. Now D is represented with only 4 bits, instead of 8. And after reading the 1, you return to the top of the tree and start over again. So “01000101101” would be “BDBAB”. Normally that sequence would take 40 bits to represent, (because each character would be 8 bits long,) but we just did it in 11 bits total.

But notice that this also has the potential to produce letters that are MORE than 8 bits long. If we follow that same pattern I listed above, “I” would be 9 bits, “J” would be 10, etc… The reason we’re able to achieve compression is because we’re using the more common (shorter) letters a lot and the less common (longer) letters less.
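If it helps to see that in code, here's a minimal sketch of the scheme described above (a simple prefix code where the i-th most common letter gets i zeros and then a 1; real Huffman coding builds a smarter tree, but the idea is the same):

```python
def build_codes(alphabet):
    """Letter number i gets i zeros followed by a 1: A -> '1', B -> '01', D -> '0001'."""
    return {letter: "0" * i + "1" for i, letter in enumerate(alphabet)}

def encode(text, codes):
    return "".join(codes[ch] for ch in text)

def decode(bits, alphabet):
    out, fork = [], 0
    for bit in bits:
        if bit == "1":                 # 1 means "read the letter at the current fork"
            out.append(alphabet[fork])
            fork = 0                   # ...then climb back to the top of the tree
        else:                          # 0 means "go down one more fork"
            fork += 1
    return "".join(out)

alphabet = "ABCDEF"                    # assumed already sorted most-frequent-first
codes = build_codes(alphabet)
bits = encode("BDBAB", codes)
print(bits, len(bits))                 # 01000101101, 11 bits (vs 40 at 8 bits/char)
print(decode(bits, alphabet))          # BDBAB
```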

Encryption undoes this completely, because (as far as compression is concerned) the data is completely random. When you look at random data with no discernible pattern, counting the characters and sorting them by frequency is an exercise in futility. All the letters get used about equally, so even the “most frequent” characters are only slightly more frequent, purely by chance. Even if the frequency order still happens to match my earlier pattern, the number of Z’s is so close to the number of A’s that the file ends up even longer than before. Remember, the compression only works when the most frequent characters really are used much more frequently. Since plenty of characters now have codes longer than 8 bits, and those characters are used just as much as the short ones, the compression method fails and actually produces a file that is larger than the original.
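A rough way to see the frequency argument with nothing but the Python standard library (os.urandom stands in for well-encrypted data here):

```python
import os
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog " * 200).encode()
random_bytes = os.urandom(len(text))   # stand-in for ciphertext

def top_share(data, n=5):
    """Fraction of the data taken up by its n most common byte values."""
    counts = Counter(data)
    return sum(count for _, count in counts.most_common(n)) / len(data)

print(top_share(text))          # large: a handful of characters dominate
print(top_share(random_bytes))  # barely above 5/256: nothing dominates
```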

[–] Eloise@lemm.ee 8 points 1 year ago

I understood this despite being drunk, thank you for the excellent explanation!

[–] dan@lemm.ee 25 points 1 year ago* (last edited 1 year ago)

Lossless compression algorithms aren’t magical; they can’t make everything smaller (otherwise two different inputs would have to compress to the same output, and decompression would be ambiguous). So they all make some data bigger and some data smaller; the trick is that the stuff they make smaller happens to match common patterns. Given truly random data, basically every lossless compression algorithm will make the data larger.

A good encryption algorithm will output data that’s effectively indistinguishable from randomness. It’s not the only consideration, but often the more random the output looks, the better the algorithm.

Put those two facts together and it’s pretty easy to see why you should compress first then encrypt.
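A minimal sketch of both orders, assuming a reasonably recent version of the third-party cryptography package; zlib stands in for "any lossless compressor" and AES-CTR for "any decent cipher" (demo only, no authentication):

```python
import os
import zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def encrypt(data: bytes) -> bytes:
    # AES-256 in CTR mode with a fresh nonce: the output is the same length as
    # the input and looks like random bytes to anything downstream.
    enc = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
    return enc.update(data) + enc.finalize()

data = b"hello hello hello " * 1000   # repetitive, so it compresses very well

compress_then_encrypt = encrypt(zlib.compress(data))
encrypt_then_compress = zlib.compress(encrypt(data))

print(len(data))                     # 18000
print(len(compress_then_encrypt))    # tiny: the compressor saw the real patterns
print(len(encrypt_then_compress))    # slightly *larger* than 18000: nothing to find
```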

[–] tvbusy@lemmy.dbzer0.com 22 points 1 year ago

Any good encryption should make the data look random. Looking for patterns in encrypted data is one of the most basic steps in breaking an encryption. Therefore, good encryption should make data almost incompressible, as in it's so random that compression doesn't reduce the size.

[–] manpacket@lemmyrs.org 5 points 1 year ago

Encrypted data doesn't compress well.

[–] takeda@kbin.social 4 points 1 year ago

In an ideal encryption, the resulting data should be indistinguishable from random when doing statistical analysis.

So yes, such data will be really hard to compress, so typically compression is done before encryption.

Now here's a twist. Compressing before encrypting can reveal some details about the encrypted data. This is especially true if the attacker can get messages encrypted that contain both their own input and the secret information (for example some kind of token).
There were real attacks based on this, for example https://en.wikipedia.org/wiki/CRIME and https://en.wikipedia.org/wiki/BREACH (this was during that idiotic phase where vulnerabilities got lame-ass names and even their own webpages).
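A toy version of that idea (zlib stands in for the compressor; the encryption step is left out because a stream cipher wouldn't change the length, and the length is all the attacker needs; the token and field names are made up):

```python
import zlib

secret = b"session_token=hunter2"      # hypothetical secret included in every response

def response_size(attacker_input: bytes) -> int:
    # The server compresses the secret together with attacker-controlled input
    # before encrypting; only the resulting length is observable on the wire.
    return len(zlib.compress(secret + b"&echo=" + attacker_input))

print(response_size(b"session_token=hunter2"))  # a few bytes shorter: the guess
                                                #   matched and got deduplicated
print(response_size(b"session_token=zzzzzzz"))  # longer: wrong guess, less to reuse
```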

Ideally compression would be done after encryption, but because of the issues described earlier, that wouldn't give any benefit.

[–] CanadaPlus@lemmy.sdf.org 4 points 1 year ago

Anyone willing to drop some learning on a lay person? Is encrypted data less compressable because it lacks the patterns compression relies on?

No, I can't, because you just answered your own question perfectly there. If there are any detectable patterns at all in your ciphertext, that's a very bad sign.

[–] argv_minus_one 3 points 1 year ago* (last edited 1 year ago)

It's less secure to combine compression and encryption at all, unless the content being compressed is static (such as a pre-made image or video). See BREACH.

[–] odium@programming.dev 2 points 1 year ago* (last edited 1 year ago)

Also a lay person. Maybe it's because of the time complexity. Let's say you have data of size 10 (any unit) which becomes size 8 after compression.

If you encrypt first: Encrypt size 10 of data -> Compress size 10 of data

If you compress first: Compress size 10 of data -> Encrypt size 8 of data

So the second way is faster, as the compression still takes the same size of input but the encryption takes a smaller input.
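A rough timing sketch of that intuition, assuming the third-party cryptography package (Fernet) is installed; the point is just that compressing first hands the cipher a much smaller input:

```python
import time
import zlib
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())
data = b"some fairly repetitive log line\n" * 100_000   # ~3 MB

start = time.perf_counter()
zlib.compress(f.encrypt(data))        # encrypt first: both steps chew the full ~3 MB
encrypt_first = time.perf_counter() - start

start = time.perf_counter()
f.encrypt(zlib.compress(data))        # compress first: the cipher sees far less data
compress_first = time.perf_counter() - start

print(f"encrypt then compress: {encrypt_first:.3f}s")
print(f"compress then encrypt: {compress_first:.3f}s")
```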