
Double Encryption is Patented!

From: . <none@nowhere>
Date: Apr 25, 1999

Can anyone believe that simple CBC
double-encryption is PATENTED????
(if input filesize=output filesize)

See:


United States Patent 5,673,319
Bellare, et al. September 30, 1997


Block cipher mode of operation for secure, length-preserving encryption


Abstract
A method for encrypting a plaintext string into ciphertext begins by cipher
block chaining (CBC) the plaintext using a first key and a null initialization
vector to generate a CBC message authentication code (MAC) whose length is
equal to the block length. The plaintext string is then cipher block chained
again, now using a second key and the CBC-MAC as the initialization vector, to
generate an enciphered string. The CBC-MAC and a prefix of the enciphered
string comprising all of the enciphered string except the last block are then
combined to create the ciphertext. The described mode of operation is
length-preserving, yet has the property that related plaintexts give rise to
unrelated ciphertexts.

Inventors: Bellare; Mihir (New York, NY); Rogaway; Phillip W. (Davis, CA)
Assignee: International Business Machines Corporation (Austin, TX)
Appl. No.: 384152
Filed: February 6, 1995

U.S. Class: 380/25; 380/37
Intern'l Class: H04L 009/08
Field of Search: 380/4,21,25,28,37

It should be appreciated by those skilled in the art that the specific
embodiments disclosed above may be readily utilized as a basis for modifying
or
designing other routines for carrying out the same purposes of the present
invention. Those skilled in the art will recognize that such equivalent
techniques and embodiments do not depart from the spirit and scope of the
invention as set forth in the appended claims.


* * * * *
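For concreteness, here is a minimal sketch of the mode as the abstract describes it, under my reading of it. A toy invertible 64-bit "block cipher" (xor-then-multiply) stands in for DES/AES, and the last-block recovery step in decryption is my inference from the construction, not something the abstract spells out; treat this as illustration only, not a secure implementation:

```python
# Sketch of the length-preserving CBC mode in US 5,673,319 (my reading of
# the abstract).  Toy 64-bit invertible cipher; NOT secure, structure only.

BLK_MASK = (1 << 64) - 1
MUL = 0x9E3779B97F4A7C15                  # odd, hence invertible mod 2**64
MUL_INV = pow(MUL, -1, 1 << 64)

def enc_block(key, b):                    # toy cipher: xor, then multiply
    return ((b ^ key) * MUL) & BLK_MASK

def dec_block(key, c):
    return ((c * MUL_INV) & BLK_MASK) ^ key

def cbc_mac(key, blocks):                 # null IV, per the abstract
    m = 0
    for x in blocks:
        m = enc_block(key, x ^ m)
    return m

def encrypt(k1, k2, blocks):
    t = cbc_mac(k1, blocks)               # pass 1: CBC-MAC under k1
    y, prev = [], t
    for x in blocks:                      # pass 2: CBC under k2, IV = MAC
        prev = enc_block(k2, x ^ prev)
        y.append(prev)
    return [t] + y[:-1]                   # MAC + all but the last CBC block

def decrypt(k1, k2, ct):
    t, y = ct[0], ct[1:]
    xs, prev = [], t
    for c in y:                           # ordinary CBC decryption, IV = t
        xs.append(dec_block(k2, c) ^ prev)
        prev = c
    m = cbc_mac(k1, xs)                   # re-run the MAC over the prefix
    xs.append(dec_block(k1, t) ^ m)       # last block falls out of the MAC
    return xs
```

The ciphertext is exactly as long as the plaintext: the MAC replaces the dropped final CBC block, and the decryptor recovers the last plaintext block by re-running the MAC over the already-recovered prefix.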


From: Paul Rubin
Date: Apr 25, 1999

In article <4lGU2.439$Ol1....@news1.atlantic.net>, . <none@nowhere> wrote:
>Can anyone believe that simple CBC double-encryption is PATENTED????
>(if input filesize=output filesize)

It's not double encryption being patented but rather a scheme for
using the error recovery properties of CBC mode to be able to
get ciphertext that's the same length as the plaintext. It's clever.
However, it is pretty much the same scheme that is used in the SFS
disk encryption program which I think has been around longer.
It's described in AC2. I think it might be listed in the index
under "ciphertext stealing" but I don't have my copy here to check.

From: Peter Gutmann
Date: Apr 29, 1999

p...@netcom.com (Paul Rubin) writes:

Yup, it's what's used in SFS, which dates from 1993 (see also my sci.crypt
post). Scanning through the SFS docs, the techniques described are basically
the same thing. I don't explicitly mention the words "CBC-MAC" in the docs,
but I describe the use of encryption with the output being used as the IV for
the next pass - it's probably close enough to cover what's in the patent.

Peter.


From: Paul Rubin
Date: Apr 30, 1999

Peter Gutmann <pgu...@cs.auckland.ac.nz> wrote:
>Yup, it's what's used in SFS, which dates from 1993 (see also my sci.crypt
>post). Scanning through the SFS docs, the techniques described are basically
>the same thing, I don't explicitly mention the words "CBC-MAC" in the docs
>but describe the use of encryption with the output being used as the IV for
>the next pass - it's probably close enough to cover what's in the patent.

I seem to remember Colin posting an article describing the method,
where he specifically talked about using two encryption passes, in the
context of saying it wasn't necessary because using a checksum for the
first pass was faster and security didn't suffer. If it was from as
far back as 1993, that predates Dejanews, but I might have saved a copy
of the post and will dig around for it when I get a chance.

From: Peter Gutmann
Date: May 2, 1999

p...@netcom.com (Paul Rubin) writes:

Here it is, from December 1993. Colin even specifically mentions the fact that
he's posting it to make sure it can't be patented later.

Peter.

-- Snip --

Newsgroups: sci.crypt
From: co...@nyx10.cs.du.edu (Colin Plumb)
Subject: Weakness in CFB disk encryption + fix
Message-ID: <1993Dec3.1...@mnemosyne.cs.du.edu>
Date: Fri, 3 Dec 93 16:48:57 GMT

Here's an interesting attack on disk encryption products that use a block
cipher in CFB mode. It applies to a lesser degree to CBC, but there
is a fix. The latter is the useful part of this posting.

Consider a message encrypted in CFB mode using a given IV. If
x[] is the plaintext and y[] is the ciphertext, then

y[0] = x[0] ^ e(IV)
y[1] = x[1] ^ e(y[0])
etc.

Now, if you have two messages x[] and x'[] encrypted with the same key
and IV, then

y[0] ^ y'[0] = x[0] ^ e(IV) ^ x'[0] ^ e(IV) = x[0] ^ x'[0].

Thus, you can XOR the first ciphertext blocks and get the XOR of the
corresponding first plaintext blocks. If the XOR happens to be
zero, then the same technique applies to the second blocks and so on.
It is well known that two ASCII texts XORed with each other are not
very secure. Shannon estimated the entropy of English text at about
1.2 bits per character. Using 8 bits per character leaves lots of
checking information.
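The leak is easy to demonstrate in a few lines. SHA-256 stands in here for the block-cipher encryption e(); that is fine for CFB, which never needs to invert e():

```python
# Demonstration of the same-key/same-IV CFB leak: XORing the two first
# ciphertext blocks recovers the XOR of the two first plaintext blocks.
import hashlib

def e(key, block):                       # stand-in for block encryption;
    return hashlib.sha256(key + block).digest()[:8]   # CFB never inverts it

def cfb_encrypt(key, iv, blocks):        # 8-byte blocks
    out, prev = [], iv
    for x in blocks:
        c = bytes(a ^ b for a, b in zip(x, e(key, prev)))
        out.append(c)
        prev = c                         # ciphertext feedback
    return out

key, iv = b"sector-key", b"\x00" * 8     # fixed per-sector IV, as on a disk
x  = [b"attack a", b"t dawn!!"]          # sector contents at time 1
x2 = [b"attack b", b"y night!"]          # same sector, rewritten later
y  = cfb_encrypt(key, iv, x)
y2 = cfb_encrypt(key, iv, x2)

# The keystream e(key, iv) cancels, exposing x[0] ^ x2[0]:
diff = bytes(a ^ b for a, b in zip(y[0], y2[0]))
assert diff == bytes(a ^ b for a, b in zip(x[0], x2[0]))
```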

Now prepending random padding to the message, or using a random IV,
renders this attack unusable. But in a disk encryptor, the IV is
usually computed based on the sector offset plus some disk-wide data.
The same sector always uses the same IV, so the ciphertext at different
times (say, before and after running a disk optimizer), constitutes
two messages that use the same key and IV.

One fix is to add some random data, but that requires extra storage
space, which is undesirable on a disk, because you need to store it
somewhere, and the logical and physical sector sizes are usually
required to be equal, so writes become non-atomic and slow down and
generally it's undesirable.

With CBC, you can tell where the first changed block is, for example
where an edit to a file occurred. Not much information, but it
might be of use sometime.

The objective is to have the IV change each time the sector is written.
This is hard to achieve, but it *can* be data-dependent. Here's how:

Every block of ciphertext depends on the choice of IV, due to feedback.
However, when decrypting, only block 0 of the plaintext needs the
IV to decrypt. Block i needs only blocks i and i-1 of ciphertext.

So what you do is take a checksum (it doesn't have to be cryptographic,
just large enough that there's little chance of collision) of blocks 1 through
n of the plaintext. Merge this with the IV derived from per-disk
and sector offset information, as usual. Then encrypt as usual.

This makes the IV, and thus every bit following, depend on all of
blocks 1..n. (The checksum need not be cryptographic under the
assumption that the plaintext is not specially structured to cause
collisions.)

To decrypt, decrypt blocks 1..n, compute the checksum, compute the
IV, and then decrypt block 0.
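A minimal sketch of the scheme, with a toy invertible 64-bit block cipher and a plain additive checksum standing in for the real primitives; only the structure matters:

```python
# Sketch of the checksum-IV fix for CBC disk encryption.  Toy primitives:
# xor-then-multiply "cipher" and an additive checksum.  Structure only.

MASK = (1 << 64) - 1
MUL = 0x9E3779B97F4A7C15                  # odd, invertible mod 2**64
MUL_INV = pow(MUL, -1, 1 << 64)

def E(k, b): return ((b ^ k) * MUL) & MASK
def D(k, c): return ((c * MUL_INV) & MASK) ^ k

def checksum(blocks):                     # need not be cryptographic
    return sum(blocks) & MASK

def encrypt_sector(key, sector_iv, blocks):
    iv = sector_iv ^ checksum(blocks[1:]) # merge checksum of blocks 1..n
    out, prev = [], iv
    for x in blocks:                      # then ordinary CBC
        prev = E(key, x ^ prev)
        out.append(prev)
    return out

def decrypt_sector(key, sector_iv, ct):
    # Blocks 1..n decrypt without knowing the IV:
    tail = [D(key, ct[i]) ^ ct[i - 1] for i in range(1, len(ct))]
    iv = sector_iv ^ checksum(tail)       # recompute the IV from them
    return [D(key, ct[0]) ^ iv] + tail    # now block 0 decrypts too

pt = [10, 20, 30, 40]
ct = encrypt_sector(99, 0x1234, pt)
assert decrypt_sector(99, 0x1234, ct) == pt
# Changing any of blocks 1..n changes the IV, hence every ciphertext block:
ct2 = encrypt_sector(99, 0x1234, [10, 20, 30, 41])
assert ct2[0] != ct[0]
```

Rewriting the sector with any change in blocks 1..n yields a fresh effective IV, which is exactly what defeats the fixed-IV attack above.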

There are similar fixes, including making two encryption passes
(for the second, the IV is the last block in the first pass),
but that's more expensive.

Now, the only time information is revealed is if the first block has
changed in some way that makes the XOR useful, but the remaining
blocks in the sector have not.

You can eliminate even this if you encrypt the first block a second time,
using the last block as its IV. This is sort of a 1.1-pass encryption,
and makes every bit in the ciphertext depend on every bit in the plaintext.
(Actually, assuming 64-bit blocks and 512-byte sectors, that's 65
encryptions where 64 would do, so it's 1.015625.)

It might also be possible to do something similar where a first pass
is made with a weak cipher, but that still causes dependencies,
and then a second pass is made with the last block of the first pass as
the IV. This would achieve the same effect.

(Well, it *is* possible, but I'd like to think up a good cipher to
use. It need not have any key information. A scrambler polynomial
might do nicely.)

Just to tell people, and to make sure it's been published so it can't
be patented. Please post if you come up with any refinements.
--
-Colin

Newsgroups: sci.crypt
From: co...@nyx10.cs.du.edu (Colin Plumb)
Subject: Re: Weakness in CFB disk encryption + fix
Message-ID: <1993Dec4.1...@mnemosyne.cs.du.edu>
References: <1993Dec3.1...@mnemosyne.cs.du.edu>
Date: Sat, 4 Dec 93 18:58:40 GMT

As a followup to my previous posting, I've thought of a good first pass
for an encryption algorithm such as I described.

Use a word-wise CRC scrambler polynomial. E.g. making a first pass over
the data, do

x[i] ^= x[i-k1] ^ x[i-k2];

(Maybe more terms would be a good idea.) It's fast because it's 32 bits
(or whatever) wide, but causes the last chunk of k2 words to depend
on everything before that.

This is using x[-k2] through x[-1] as an IV. You only need one IV,
since this doesn't destroy information. After the first pass with
this, which just makes the last few words depend on the whole sector
plus the IV, use the last block as the IV for CFB encryption.

This also provides resistance to the selective-modification attack,
which allows an enemy to diddle selected bits in the last block of
something CFB-encrypted without gross damage. By damaging the IV
to the strong CFB encryption, you will totally corrupt the first
block, which is less likely to go unnoticed.

Oh, yes, if k2 is larger than the cipher block size in words, then
k1 must be chosen so that the last cipher block contains dependencies
from everything. For DES or IDEA, k1 = 1 and k2 = 2 are probably
best.
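A sketch of that first pass with k1 = 1 and k2 = 2, operating on 32-bit words; the reverse-order loop shows the pass destroys no information, so only the k2 seed words are needed as an IV:

```python
# Scrambler-polynomial first pass: x[i] ^= x[i-k1] ^ x[i-k2], run forward
# over 32-bit words.  Invertible (undo in reverse order), and the last k2
# words come to depend on the whole sector plus the IV words.

K1, K2 = 1, 2                            # suggested choice for 64-bit ciphers

def scramble(words, iv):                 # iv supplies x[-K2] .. x[-1]
    x = list(iv) + list(words)
    for i in range(K2, len(x)):
        x[i] ^= x[i - K1] ^ x[i - K2]
    return x[K2:]

def unscramble(words, iv):
    x = list(iv) + list(words)
    for i in range(len(x) - 1, K2 - 1, -1):   # reverse order undoes the pass
        x[i] ^= x[i - K1] ^ x[i - K2]
    return x[K2:]

sector = [0xDEADBEEF, 0x01234567, 0, 0, 0x89ABCDEF]
iv = [0x11111111, 0x22222222]
s = scramble(sector, iv)
assert unscramble(s, iv) == sector       # no information lost
# A change in the first word propagates to the last word:
s2 = scramble([0xDEADBEE0] + sector[1:], iv)
assert s2[-1] != s[-1]
```

After this pass the last block depends on everything before it, so it can serve as the IV for the strong second (CFB) pass, as the post describes.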

Anyway, any comments?
--
-Colin

