The Wrong Level of Abstraction

@Philip,

a very strange point of view, and a bit naive. The fact that your implementation is slow means nothing to the attacker, who probably has $$$ to buy superfast hardware in superlarge amounts and to hire superintelligent programmers who will write a superoptimized implementation fine-tuned to that hardware. Speed is relative. But even granting all this, it simply doesn't matter.

Let's play the big integer game; given your 16,777,216 comparison, I know you'll love it. With a key size of 128 bits, we have

340,282,366,920,938,463,463,374,607,431,768,211,456

different keys to test. If we assume that our superattacker is able to test 1,000,000,000 keys per second, which covers not only the decryption itself but all the necessary checks such as character statistics and the search for known file headers to be sure he has the right key, and we further divide by 60 (minutes), 60 (hours), 24 (days) and 365 (years), he still needs

2^128 / (1,000,000,000 × 60 × 60 × 24 × 365)

= 10,790,283,070,806,014,188,970 years

to test the complete key space of 128 bits. Given that the universe is only around 15,000,000,000 years old, he would have to spend very much more $$$ to test very much more than 1,000,000,000 keys per second.
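If anyone wants to check the arithmetic, here is a minimal sketch of the same back-of-the-envelope calculation (the 1,000,000,000 keys per second is just the assumed attacker speed from above):

```python
# Back-of-the-envelope brute-force estimate for a 128-bit key space,
# assuming (as above) an attacker who tests 1,000,000,000 keys per second.
KEY_BITS = 128
KEYS_PER_SECOND = 10**9                 # assumed attacker throughput
SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # minutes -> hours -> days -> years

keyspace = 2**KEY_BITS                  # 340,282,366,920,938,463,463,374,607,431,768,211,456
years_for_full_search = keyspace / (KEYS_PER_SECOND * SECONDS_PER_YEAR)

print(f"keys to test:      {keyspace:,}")
print(f"years to test all: {years_for_full_search:.3e}")   # ~1.08e22 years
```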

Does the speed of the algorithm matter? No. But it does matter to me whether the decryption of my encrypted hard disk runs at 1 MB per second or at 10 MB per second.

@Secure (good to see you are happy to use your real name with a URL)

Some good calculations, but unfortunately they miss some key points. I feel like I must have glossed over some "assumed" logic, so I'll explain in more detail.

Before I start with my rant, I'll just ask the simple question: are you arguing that a slower encryption IS more secure, but that the difference doesn't matter because it would take so long to crack the "less secure" algorithm anyway? If so, then we disagree only on the time frame of computers and hacking. If not, then we disagree on everything.

First of all, the bit I agree with: big $$ helps, but even then it only helps to an extent. The big $$ buys more processing power by forming a grid of computers, where each computer attacks a different part of the key space (a different key range). Every computer you bring online effectively reduces the bit strength of the encryption. That is, if you only had 2 computers and 32 passwords to check, one computer would check the first 16 while the other checked the next 16.

Therefore we have a distinction between the actual bit strength of the encryption and its effective bit strength.

1 computer = full effective bit encryption
2 computers = minus 1 effective bit encryption
4 computers = minus 2 effective bit encryption
8 computers = minus 3 effective bit encryption
and so on, so that with 1,024 computers the bit strength has been reduced by 10, taking 128 bit encryption down to effectively 118 bit encryption.

Obviously a number of computers somewhere between these values still reduces the time, it just doesn't work out to a whole-number bit reduction.

In historic tests, approximately 100,000 computers were used to crack DES (it took about 22 hours but that was a decade ago).

Let's round that up to 131,072 computers, which is 2^17 and so reduces the security by 17 bits, taking 128 bit encryption down to effectively 111 bits (and it reduced DES to effectively 39 bit encryption).
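Here is a rough sketch of that "effective bits" bookkeeping; the function and the machine counts are just illustrations, with 131,072 being the rounded-up figure from above:

```python
import math

# Effective key strength when the search is split across N machines:
# every doubling of the machine count shaves one bit off the key length.
def effective_bits(key_bits: int, machines: int) -> float:
    return key_bits - math.log2(machines)

print(effective_bits(128, 2))        # 127.0
print(effective_bits(128, 1024))     # 118.0
print(effective_bits(128, 131072))   # 111.0  (2^17 machines)
print(effective_bits(56, 131072))    # 39.0   (DES reduced to ~39 bits)
print(effective_bits(128, 100000))   # ~111.39 -- in-between counts give fractional bits
```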

This is the way to speed up cracking "right now"… and this is where either big $$ or zombie PCs would come into the picture. Even then, it is unlikely to be successful with anything but a dictionary, or close to dictionary, attack.

But secondly, and most importantly, you are still forgetting or leaving out the fact that computers are getting faster. The computer I'm using has 4 CPUs, which can process four independent attacks (4 passwords at once), AND computers keep getting faster AND cheaper AND new technology keeps arriving, such as GPUs, which can offer many, many, many times more parallel attempts.

But let's stick with the approximation that FLOPS double every 18 months… that's exponential growth in computer speed! Exponential growth is scary and should be on the minds of all those who need to protect data. It's scary because it keeps compounding, constantly making previously unthinkable processing tasks possible.

If a computer doubles in speed every 18 months, then the effective bit strength of the encryption decreases by 1 every 18 months. This means that the time taken to crack a 55 bit encryption this year equals the time it will take to crack a 56 bit encryption in 18 months' time.

Encrypted data that took 24 hours to crack 18 months ago will only take 12 hours to crack today and 6 hours in 18 months' time. That's a very sharp reduction in just 3 years, going from 1 day down to 6 hours to crack the same encryption.

So in 18 months 128 bit encryption becomes effectively 127 bit; in 15 years (ten doublings) it becomes effectively 118 bit. In 150 years' time it's effectively 28 bits (comparable to 28 bit encryption by today's standards), and by my back-of-the-envelope calculations that would take about 5 seconds to crack. Obviously the first point is still an issue as well: crackers with large arrays of computers can still get the data much sooner than one computer processing away.
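Roughly the same projection in code, under the stated assumption that attack speed doubles every 18 months, so one effective bit is lost per 18 month period:

```python
# Effective key strength after `years`, assuming attack speed doubles every 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def effective_bits_after(key_bits: int, years: float) -> float:
    return key_bits - years / DOUBLING_PERIOD_YEARS

for years in (1.5, 15, 150):
    print(years, effective_bits_after(128, years))
# 1.5  -> 127.0
# 15   -> 118.0
# 150  -> 28.0
```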

Anyway, it would still be 128 bit encrypted data, but it would only take 5 seconds to crack, which is as good as 28 bit encryption on today's computers. At the same point in time, 192 bit encryption would still take over 8 hours and 256 bit would take until the end of time… 1.799*10^23 years… well, at least for a couple of years until computers are fast enough again.

But all of this is using the DES timing. Using a faster algorithm reduces the power of the encryption, because that 8 hours for 192 bit encryption would be far less. In fact, the faster the encryption, the sooner it will be viable to crack it cheaply and quickly.

That is to say that a slower encryption method IS more secure key bit for key bit, and the results of this security difference WILL be seen within our lifetimes.

To reiterate my main point, the first question should be how long the data needs to remain secret. Keeping in mind that any encryption halves in power every 18 months or so, calculations should be made as to how long it will take to crack the encryption at various points in time, until the privacy of the data is no longer a concern.

To answer that question you need to be aware of the key length AND how long the encryption algorithm itself takes. Yes, it DOES mean approximating the future speed of computers, which is NOT going to be accurate, but at least it is better than just wrongly assuming that a fast encryption method is necessarily better than a slow one given the same key length.

@Philip,

"@Secure (good to see you are happy to use your real name with a URL)"

and then giving a wrong name and a wrong URL. What does it matter in a forum that doesn't require registration? Since this is not the first time this typical reaction has happened, I guess you felt personally attacked by "strange point of view" and "naive". If you felt insulted I apologize, but it was in no way meant personally and was only my honest opinion about your arguments. In fact I don't care who you are, because I want to argue against your arguments, not against your person.

Besides that, anonymity and privacy are strongly related to encryption and security. Isn't it a bit inconsistent to care about the future lifetime of your encryption but not about the present trackability of your person? No one can currently see the encrypted files on your hard disk, but everyone can data-mine a lot of information about you if you always use your real name.

Back to the topic. All you say is right and well, and I'm well aware of it. It is in fact the exact reason why the average key sizes for encryption are continually increased over time. But as you may have noticed, I talked about the bare number of 1,000,000,000 keys per second, without giving any other indication. Yes, if the attacker buys twice the number of computers, he can test 2,000,000,000 keys per second. If he spends large $$$ 18 months later on a completely new set of computers, he can test 4,000,000,000 keys per second. I didn't deny this.

But the one piece of information you didn't give was any quantification. Slow, fast: what do they mean?

  1. By how many bits (i.e., by what power of two) will the slow algorithm be slower than the fast algorithm? You're obviously talking about orders of magnitude here.

  2. How are fast and slow defined? I can write an encryption algorithm that uses a lot of fancy maths (square roots, sin, cos, etc.), needing 1 second per byte, while effectively boiling down to ROT13.

  3. What implementations are you comparing? Interpreted language, which may be slower than compiled language, which may be slower than handcrafted assembler, which may be slower than a specialized hardware chip? While compiled(A) may be slower than compiled(B), hardware(A) may be faster than compiled(B), even with a highly optimized B.

Does it make sense to look, say, 50 years into the future? What about quantum computers that may cast all the traditional encryption schemes into the void? What if new cryptanalytic methods are invented that break the slower algorithm, but not the faster one? The estimate of encryption lifetime is so rough that IMHO the speed of the algorithm is the least concern.

And even if all this doesn’t matter and the slower algorithm proves to be more secure for the same key bit number, there is one remaining question: So you’ve estimated that the slower algorithm with N bits will fit your security needs. Why not simply use a faster algorithm with N+64 bits?
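To put rough numbers on that question: slowing the algorithm down by a factor of 2^k only buys k effective bits of work, while adding 64 key bits buys 64. A small sketch, where the factor-of-1,000,000 slowdown is purely an illustrative assumption:

```python
import math

# Work for a full key search, measured in "unit key trials":
# (cost of one trial, relative to the fast algorithm) * (number of keys).
def search_work(key_bits: int, relative_trial_cost: float) -> float:
    return relative_trial_cost * 2.0**key_bits

fast_n_plus_64 = search_work(128 + 64, 1.0)        # fast algorithm with N+64 bits
slow_n         = search_work(128, 1_000_000.0)     # hypothetical 1,000,000x slower algorithm with N bits

print(math.log2(fast_n_plus_64))   # 192.0 effective bits of work
print(math.log2(slow_n))           # ~147.9 effective bits of work
```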

@secure - obviously my humour didn't translate; the comment on user name and URL was meant to be a joke… no insult was intended about you not using them. I think many names here are fake…

I actually agree with the points you make about your privacy.

You also raise some good points. Some really good points. I'll clarify, but from what I can see you already have an excellent grasp on everything.

1: I've only done software benchmarking on various C-based encryption implementations, but I suspect that correctly designed hardware would give completely different results, that is, chips that decrypt in parallel. So I can't give answers for that.

2: I just use the one simple measure: how long the raw algorithm (in C) takes to decrypt using a test key, averaged over several thousand iterations (see the sketch after this list). I mentioned "encryption" speed several times, but this is actually incorrect unless the encryption is literally the reverse of the decryption, or "symmetrical" (which is not the case with private/public key encryption).

That's the point: the time taken to try a key AND the number of possible keys together make up the security strength.

3: Very good point. Going back to 1, I use software (C with as much optimisation as I can, including processor APIs). But your point on hardware is spot on.
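For what it's worth, here is a minimal sketch of the kind of benchmark I mean, written in Python rather than C for brevity; `decrypt_block`, the key and the ciphertext are placeholders for whatever algorithm is actually under test:

```python
import time

def benchmark_decrypt(decrypt_block, key, ciphertext, iterations=10_000):
    """Average wall-clock time of one decryption over many iterations."""
    start = time.perf_counter()
    for _ in range(iterations):
        decrypt_block(key, ciphertext)
    elapsed = time.perf_counter() - start
    return elapsed / iterations          # seconds per key trial

# Example with a dummy "cipher" standing in for the real thing:
dummy_decrypt = lambda key, data: bytes(b ^ key for b in data)
print(benchmark_decrypt(dummy_decrypt, 0x5A, b"sixteen byte blk"))
```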

RE quantum computers: they may affect Moore's law, but they aren't the leap they seemed to be 10-15 years ago. When they come online they may still be roughly in line with Moore's law. Even so, your point is valid: predicting future computer power is like predicting the weather, in the end we are just making rough guesses. We are going to be off, but by how much and in which direction?

RE slower vs faster + 64 bits: yes, agreed, and I said that earlier. The difference in key count between 168 bit 3DES and 192 bit AES is a factor of something like 16 million (2^24), and there's very little chance that AES is 16 million times faster. But if it is twice as fast as 3DES, then effectively it offers 191 bit encryption compared to 3DES; if it is 4 times faster, effectively 190 bit encryption. You can also scale DES up in 56 bit steps (e.g. 4DES at 224 bits).

So for borderline cases it isn't just a case of looking at bits.

Here is the equation I would use for strength of encryption:
Strength = TimeTakenToDecrypt * NumberOfKeyCombinations

Where NumberOfKeyCombinations = 2^64 for 64 bit encryption.
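As a sketch, here is that strength measure applied to the 3DES vs AES comparison above; the per-trial times are purely assumed (only their ratio of 2 matters):

```python
import math

# Strength = TimeTakenToDecrypt * NumberOfKeyCombinations
def strength(seconds_per_trial: float, key_bits: int) -> float:
    return seconds_per_trial * 2.0**key_bits

tdes_168 = strength(2e-6, 168)     # 3DES: assumed 2 microseconds per key trial
aes_192  = strength(1e-6, 192)     # AES-192: assumed twice as fast per trial

print(math.log2(aes_192 / tdes_168))   # 23.0 -> AES-192 still ~23 bits "stronger" here
```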

So I would forget the exclusive focus on bits AND forget the exclusive focus on TimeTakenToDecrypt; it is both together that are important.

HOWEVER - this will probably only matter when comparing two algorithms that have similar bits but different speeds - it would affect the security plus or minus 1 to 4 bits.

And back to the original issue: you can get various key lengths at varying speeds (AES can be applied multiple times, giving an enormous effective key length).

I'll throw in two more flies in the ointment. On the plus side, we are assuming that the hacker knows the encryption algorithm. This wouldn't normally be known, and in that case the task is multiplied by every possible algorithm option… so if they don't know it, they are stuffed!

On the down side, probability states that on average only half the keys need to be tried. The above calculations I used are based on that fact. In fact, probability states that half the time you try to crack an encryption you will find the key BEFORE hitting the halfway mark.

But many security sites try to show how long it will take to test ALL keys. That would only be required if the very last key tried were the correct one, which has a probability of 1 in 2^xxx (so improbable that it would only happen in an infinite improbability machine hooked up to a particularly strong pot of coffee).
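A quick simulation of that claim with a toy key space (20 bits, just to keep it fast):

```python
import random

# Toy brute force over a small key space: count how many trials it takes on
# average, and how often the key turns up before the halfway mark.
BITS = 20
KEYSPACE = 2**BITS
RUNS = 10_000

trials, before_half = [], 0
for _ in range(RUNS):
    secret = random.randrange(KEYSPACE)   # position of the correct key in the search order
    tried = secret + 1                    # a sequential search finds it on trial secret+1
    trials.append(tried)
    before_half += tried <= KEYSPACE // 2

print(sum(trials) / RUNS / KEYSPACE)   # ~0.5 -> on average about half the keys are tried
print(before_half / RUNS)              # ~0.5 -> about half the time you finish before halfway
```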

Oh - one last thing RE aliases. I’ve always thought it would be a great idea to change my name officially to “The Obviously Innocent Man”.

That way, if I was ever charged with anything, the judge would have to say "How does the Jury find The Obviously Innocent Man?".

I recently did a presentation on web 2.0 cryptography, subtitled “A Study in Failure”. Those creating web 2.0 apps may find it instructive:

http://www.subspacefield.org/security/web_20_crypto.pdf

Using a JS library is not analogous to using a C library.

The longer the data needs to remain private, the longer it should take to decrypt the data. The time taken to decrypt given the CORRECT key is directly related to the time taken to crack the encryption with ANY key.

Very soon superfast processors will be coming to us. NASA has very fast supercomputers even now.

Yes, it is really hard to believe that there are computers on the market which can test very much more than 4,000,000,000 keys per second.

Hey casus elefonlar, please read this carefully: "Thinking Machines Corp., the four-year-old Cambridge company, yesterday said that it has developed the world's most powerful computer for complex scientific problems – a machine more than 50 times as fast as the most powerful mainframes produced by International Business Machines."

They simply do not have a professional level of experience invested in a given ecosystem.

A blog entry masquerading as an overly dramatic college screenplay, but still. These guys, unlike us, are real security experts, so it’s worth reading.

I <3 Clippit!
http://bdkowert.com/wp-content/uploads/2007/11/familyguyclippy.png

I got the most awesome reCaptcha ever:
lorena ©1969

Well, I agree with the benefits of coding to a higher level of abstraction, but that is not without its own problems.
If I want to write some code, I can write it myself, or grab a library which makes it easier. However, I then need to get up to speed on the library I've chosen. If I have problems, are they in the code I've written, or in the library?
Let's say I use two components on my web site which depend on a common library, say SWFObject, but they use different versions, so component A won't work with the version of SWFObject that component B uses. Then I find that SWFObject is 10k of Javascript condensed into 7 lines, so it's a huge pile of work to debug it, and if I do fix it, what else will break?
As always it’s a cost/benefit trade-off. It’s OK once you get up the learning curve for the library, but maybe for what I need it would be quicker, and more maintainable, to write it myself.
I don’t have a definitive answer, but I think the trade-off is worth considering!

Seems to me that as far as the encryption saga is concerned, the biggest lesson is that all the abstraction layers in the world won’t help you if you genuinely don’t know what you’re doing.

I think Taylor has hit the nail on the head with his “roll your own” comment.