
a new very fast hash algorithm (160 bits), with a technique of "overlapping sums"


Claudio

Sep 21, 2004, 5:41:46 PM
Hello,

This is a method for a hash function, based on a new technique (I
think it's new, but this should be validated).

#define UL unsigned long
#define LEFT 13
#define RIGHT 19

void round()
{
    hash[4] += hash[0] >> RIGHT;
    hash[0] += hash[0] << LEFT;
    hash[0] += hash[1] >> RIGHT;
    hash[1] += hash[1] << LEFT;
    hash[1] += hash[2] >> RIGHT;
    hash[2] += hash[2] << LEFT;
    hash[2] += hash[3] >> RIGHT;
    hash[3] += hash[3] << LEFT;
    hash[3] += hash[4] >> RIGHT;
    hash[4] += hash[4] << LEFT;
}

void processBlock(const UL in[25])
{
    int i, j, r, s;
    for (i = 0; i < 5; i++)
    {
        s = 0;
        for (r = 0; r < 5; r++)
        {
            for (j = 0; j < 5; j++)
            {
                hash[j] ^= in[j+s];
                hash[j]++;
            }
            round();
            s += 5;
        }
    }
}

It processes blocks of 800 bits. The same block is processed 3 times.
Each time, a sub-block of 160 bits is XORed with the hash block, and the
result is mixed with itself (the round() function), by making the 32-bit
integers overlap with themselves (overlapping, then addition modulo
2^32).

Can anybody tell me whether this idea is completely bad or very
interesting ??

The facts:
- The avalanche effect seems OK after 5 rounds. This has been
empirically checked with the test programs ENT, NIST, and DIEHARD (I
generated a set of hash values from a given 800-bit block where a
32-bit integer at a given position was incremented before generating
the next hash value. I chose the following integers: in[0] and in[24]).

- Fast algorithm. In assembly, the inner loop will take about 40 cycles
by using MMX instructions and by storing the hash block in MMX
registers (6 cycles per DWORD processed). The processBlock() function
will therefore take about 40*5*5 = 1000 cycles for 100 bytes, i.e. about
10 cycles/byte, which is just a little slower than MD5.

Thanks a lot in advance for your remarks (sorry for my imperfect English).

Tom St Denis

Sep 21, 2004, 6:00:40 PM
Claudio wrote:
> Hello,
>
> This is a method for a hash function, based on a new technique (I
> think it's new, but this should be validated).

You think this is new?

> #define UL unsigned long
> #define LEFT 13
> #define RIGHT 19
>
> void round()
> {
> hash[4] += hash[0] >> RIGHT;
> hash[0] += hash[0] << LEFT;
> hash[0] += hash[1] >> RIGHT;
> hash[1] += hash[1] << LEFT;
> hash[1] += hash[2] >> RIGHT;
> hash[2] += hash[2] << LEFT;
> hash[2] += hash[3] >> RIGHT;
> hash[3] += hash[3] << LEFT;
> hash[3] += hash[4] >> RIGHT;
> hash[4] += hash[4] << LEFT;

How did you pick these values? What form of analysis led you to pick
this design?

<snip>

> It processes blocks of 800 bits. The same block is processed 3 times.
> Each time, a sub-block of 160 bits is XORed with the hash block, and the
> result is mixed with itself (round() function), by making the 32 bits
> integers overlap with themselves (overlapping, and then addition modulo
> 2^32).

You use key material way too slowly.

> Can anybody tell me whether this idea is completely bad or very
> interesting ??

I can't say off the top of my head that it's weak, but it certainly isn't
an interesting design. Let me tell you why.

First off, I know what you are trying to do. You're trying to be the
first kid on the block with a new cipher design in the wake of the
recent attacks. Well that's just stupid and lame.

Second, competent hash designs already exist. Designs using block ciphers
in various chaining modes [like WHIRLPOOL] are far more effective and
easier to analyze.

Third, a new "160-bit hash" is like a new "1980 Honda Civic". Just a
bit dated.

Fourth, why not present your design material [e.g. in the form of a
paper] before presenting the design?

> The facts:
> - The avalanche effect seems OK after 5 rounds. This has been
> empirically checked with the tests programs: ENT, NIST , DIEHARD (I
> generated a set of hashvalues, from a given 800-bits block where a
> 32-bits integer at a given position was incremented before generating
> the next hashvalue. I chosed the following integers: in[0] and in[24]).

ENT and DIEHARD are not hash function "analyzers". You might as well
use the algo to pick lotto numbers.

> - fast algorithm. In Assembly, the inner loop will take about 40 cycles
> by using MMX instructions, and by storing the hash block in MMX
> registers (6 cycles per DWORD processed). The processBlock() function
> will therefore take about 40*5*5 = 1000 cycles, for 100 bytes => about
> 10 cycles / byte, which is just a little bit slower than MD5.

Shifts are costly in cryptography. Not because they are slow but
because they lose information. A rotation is much more useful and on
the AMD they're just as expensive as a shift [being that the AMD
processors have real ALUs....].

> Thanks a lot in advance for your remarks (sorry for my non-perfect
> english).

Oh your English is ok. Your presentation just sucks.

Tom

Cristiano

Sep 21, 2004, 6:59:46 PM
Tom St Denis wrote:

> Claudio wrote:
>
>> The facts:
>> - The avalanche effect seems OK after 5 rounds. This has been
>> empirically checked with the tests programs: ENT, NIST , DIEHARD (I
>> generated a set of hashvalues, from a given 800-bits block where a
>> 32-bits integer at a given position was incremented before generating
>> the next hashvalue. I chosed the following integers: in[0] and
>> in[24]).
>
> ENT and DIEHARD are not hash function "analyzers". [...]

Agreed, but the output should look random, so it seems a good idea to
use a randomness tester; if you see non-random output, the hash is surely
bad.

Claudio, you forgot to use RaBiGeTe! :-)

Cristiano


Claudio

Sep 23, 2004, 5:03:11 PM
Tom St Denis wrote:
> Claudio wrote:
>
>> Hello,
>>
>> This is a method for a hash function, based on a new technique (I
>> think it's new, but this should be validated).
>
>
> You think this is new?

Do you think this already exists?

>
>> #define UL unsigned long
>> #define LEFT 13
>> #define RIGHT 19
>>
>> void round()
>> {
>> hash[4] += hash[0] >> RIGHT;
>> hash[0] += hash[0] << LEFT;
>> hash[0] += hash[1] >> RIGHT;
>> hash[1] += hash[1] << LEFT;
>> hash[1] += hash[2] >> RIGHT;
>> hash[2] += hash[2] << LEFT;
>> hash[2] += hash[3] >> RIGHT;
>> hash[3] += hash[3] << LEFT;
>> hash[3] += hash[4] >> RIGHT;
>> hash[4] += hash[4] << LEFT;
>
>
> How did you pick these values? What form of analysis led you to pick
> this design?

I am preparing a document that explains the history behind this
design. It will be posted soon.

>
> <snip>
>
> > It processes blocks of 800 bits. The same block is processed 3 times.
>
>> Each time, a sub-block of 160 bits is XORed with the hash block, and
>> the result is mixed with itself (round() function), by making the 32
>> bits integers overlap with themselves (overlapping, and then addition
>> modulo 2^32).
>
>
> You use key material way too slowly.

What do you mean by slowly?
On the contrary, I think the round() function is used so that all the
key bits are used efficiently. I was able to test the avalanche
effect (see my tests below).

>
>> Can anybody tell me whether this idea is completely bad or very
>> interesting ??
>
>
> I can't say off the top of my head that it's weak but it certainly isn't
> an interesting design. let me tell you why.
>
> First off, I know what you are trying to do. You're trying to be the
> first kid on the block with a new cipher design in the wake of the
> recent attacks. Well that's just stupid and lame.

No no: I am trying to publish a cryptographic primitive which can be
used as a hash function, a block cipher, or a stream cipher. I know,
it's very ambitious! :)

>
> Second, competent hash designs already exist. Like using block ciphers
> in various chaining modes [like what WHIRLPOOL is] are far more
> effective and easier to analyze.

I know that, what do you believe??!! I obviously studied how the known
hash functions are built. I only know that Whirlpool is very slow
compared to MD5. Nobody can convince me that it is not possible to build
a function as secure as WHIRLPOOL (given our current state of knowledge),
but faster.

>
> Third, a new "160-bit hash" is like a new "1980 honda civic". Just a
> bit dated.
>
> Fourth, Why not present your design material [e.g. in the form of a
> paper] before presenting the design?

Well, in fact I have an idea in the back of my mind ... I wanted to know
whether this design already exists.

>
>> The facts:
>> - The avalanche effect seems OK after 5 rounds. This has been
>> empirically checked with the tests programs: ENT, NIST , DIEHARD (I
>> generated a set of hashvalues, from a given 800-bits block where a
>> 32-bits integer at a given position was incremented before generating
>> the next hashvalue. I chosed the following integers: in[0] and in[24]).
>
>
> ENT and DIEHARD are not hash function "analyzers". You might as well
> used the algo to pick lotto numbers.

That's not the purpose. These "analyzers" helped me determine the
threshold (a number of rounds) beyond which the mixing can be considered
OK. If you have a better idea, I'll take it.

>
>> - fast algorithm. In Assembly, the inner loop will take about 40
>> cycles by using MMX instructions, and by storing the hash block in MMX
>> registers (6 cycles per DWORD processed). The processBlock() function
>> will therefore take about 40*5*5 = 1000 cycles, for 100 bytes => about
>> 10 cycles / byte, which is just a little bit slower than MD5.
>
>
> Shifts are costly in cryptography. Not because they are slow but
> because they lose information. A rotation is much more useful and on
> the AMD they're just as expensive as a shift [being that the AMD
> processors have real ALUs....].
>

Here I completely disagree with you. You're raising the key point of the
system. Look (more deeply) at the mechanical operations in the round()
function.
Absolutely no bit is lost after a call to the round() function. It seems
magical, but actually it is quite simple.
What is done, at a high level, is:
- a rotation of ALL the bits of the hash block (the array hash[5]) to
the left;
- each 32-bit integer of the resulting block is added mod 2^32 to the
integer of the block before the rotation (positions are respected);
- each integer is incremented by 1.

The complete rotation can't be coded in one single instruction, because
160-bit registers don't exist yet! So what I do is split this
high-level operation into several low-level instructions.

That's why I call this technique "overlapping sums". In my draft
document there is a drawing that illustrates this concept well.

But I invite you to reproduce these operations yourself on paper. You
will see that each integer overlaps with itself (e.g. hash[0] += hash[0]
<< LEFT;) and is overlapped / influenced by its successor (e.g. hash[0] +=
hash[1] >> RIGHT;). My list is cyclic, not linear. After 5 rounds,
every element has influenced all the elements of the array.

I called this function SAMSARA.

Claudio

Sep 23, 2004, 5:12:52 PM
Claudio wrote:

You will notice that the high-level rotation operation is a rotation
by 13 bits to the left. This is helpful for understanding the
system.

Bryan Olson

Sep 24, 2004, 2:19:05 AM
Claudio wrote:

> This is a method for a hash function

Not really. The starting state isn't defined (in fact, the code
won't compile because 'hash' is undeclared). A real hash also
has to define how to pad to the block size.

[...]


> It processes blocks of 800 bits. The same block is processed 3 times.

More like 5 times.

> Can anybody tell me whether this idea is completely bad or very
> interesting ??

I'd go with bad. It looks to me like this omits many of the
techniques popular hashes use, and suffers pretty much the
consequences we'd expect. Better hashes than this have recently
fallen.


--
--Bryan

Scott Contini

Sep 24, 2004, 3:52:40 AM
The hash function is not secure. Here are two inputs that hash to
all 0's (hexadecimal words).

Input 1:
ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
ffffffff ffffffff ffffffff ffffffff

Input 2:
0 0 0 0 0 ffffdffe ffffdffe ffffdffe ffffdffe ffffdffe ffffffff
ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff

I can easily construct 2^32 inputs that hash to 0 (or actually, I can
make more).

Scott

Grumble

Sep 24, 2004, 1:21:07 PM
Tom St Denis wrote:

> Shifts are costly in cryptography. Not because they are slow but
> because they lose information. A rotation is much more useful
> and on the AMD they're just as expensive as a shift [being that
> the AMD processors have real ALUs....].

Rotates and shifts have the same latency on Northwood (4 cycles).

Rotates and shifts have the same latency on Prescott (1 cycle).

Get your facts straight, Tom :-)

Claudio

Sep 24, 2004, 2:00:17 PM

Hello Scott,

First of all, I have to specify the starting state for hash[5]. It is
{0, 0, 0, 0, 0}.

I checked both of your input messages, but found different hashes. The
first one is zero, but not the second one. In the second input, is the
first element (in[0]) equal to all 0's? The last four elements are
missing. I checked with all 1's, but found a hash <> 0 ...

Did you find any pattern in the block which is invariant after a call
to round()?

Thanx a lot,

Claudio

bryanjuggler...@yahoo.com

Sep 24, 2004, 3:51:06 PM

Claudio wrote:
> Scott Contini wrote:
> > The hash function is not secure. Here are two inputs that hash to
> > all 0's (hexadecimal words).
> >
> > Input 1:
> > ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
> > ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
> > ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
> > ffffffff ffffffff ffffffff ffffffff
> >
> > Input 2:
> > 0 0 0 0 0 ffffdffe ffffdffe ffffdffe ffffdffe ffffdffe ffffffff
> > ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
> > ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
> >
> > I can easily construct 2^32 inputs that hash to 0 (or actually, I can
> > make more).
>
> Hello Scott,
>
> First of all, I have to precise the starting state for hash[5]. It is
> {0, 0, 0, 0, 0}.
>
> I checked both of your input messages, but found different hashes. The
> first one is zero, but not the second one. In the second input, is the
> first element (in[0]) equal to all 0's ? The four last elements are
> missing . I checked with all 1's , but found a hash <> 0 ...

You read Scott Contini's input wrong. Each string of digits is one
32-bit word. The first five words of input 2 are zero, and all
twenty-five are given. Mr. Contini is correct.


--
--Bryan

Claudio

Sep 24, 2004, 3:53:17 PM
Claudio wrote:

Thanks for your instructive remarks (even if I don't agree with some of
them).
I am confident in the strength of the underlying one-way function, even if
this naive design does not work (because of the possibility of inversion
from the first cycle of 5 round() calls - thanks Scott). A judicious change
in the design will make the hash function difficult to invert this time.

Claudio

Sep 24, 2004, 5:58:55 PM
bryanjuggler...@yahoo.com wrote:

I used the right input; it's OK now.
Thanks.

David Eather

Sep 24, 2004, 6:12:33 PM
Would it make any sense to look at the newly broken algorithms and find out
what was there weakness first?

Joe Peschel

Sep 24, 2004, 10:32:36 PM
"David Eather" <eat...@tpg.com.au> wrote in
news:41549bd5$1...@dnews.tpgi.com.au:

> Would it make any sense to look at the newly broken algorithms and
> find out what was there weakness first?
>

Would it make any sense to double-check your spell-checker?

J

--
__________________________________________
When will Bush be tried for war crimes?

"Our enemies are innovative and resourceful, and so are we. They
never stop thinking about new ways to harm our country and our
people, and neither do we." --G. W. B.

Joe Peschel
D.O.E. SysWorks
http://members.aol.com/jpeschel/index.htm
__________________________________________

David Eather

Sep 25, 2004, 12:02:30 AM
Joe Peschel wrote:
> "David Eather" <eat...@tpg.com.au> wrote in
> news:41549bd5$1...@dnews.tpgi.com.au:
>
>> Would it make any sense to look at the newly broken algorithms and
>> find out what was there weakness first?
>>
>
> Would it make any sense to double-check your spell-checker?

Yes

> J


David Eather

Sep 25, 2004, 12:19:40 AM
Joe Peschel wrote:
> "David Eather" <eat...@tpg.com.au> wrote in
> news:41549bd5$1...@dnews.tpgi.com.au:
>
>> Would it make any sense to look at the newly broken algorithms and
>> find out what was there weakness first?
>>
>
> Would it make any sense to double-check your spell-checker?

On further reflection, I discover the answer is an emphatic "NO".
Regardless of how many times I check my spell-checker, the hash functions
still remain broken.
D


>
> J


Johnny Bravo

Sep 25, 2004, 5:35:23 AM
On Sat, 25 Sep 2004 02:32:36 -0000, Joe Peschel <jpes...@no.spam.org>
wrote:

>"David Eather" <eat...@tpg.com.au> wrote in
>news:41549bd5$1...@dnews.tpgi.com.au:
>
>> Would it make any sense to look at the newly broken algorithms and
>> find out what was there weakness first?
>>
>
>Would it make any sense to double-check your spell-checker?

Dew knot tryst yore spell eschewer too fine awl mistaken.

--
"The most merciful thing in the world, I think, is the inability
of the human mind to correlate all its contents." - H.P. Lovecraft

Claudio

Sep 25, 2004, 5:52:59 AM
That's a wise opinion.

If the common algorithms like MD5, HAVAL or RIPEMD are "broken", that
may be because their designers didn't have a clear strategy of defense
against all kinds of attacks (the situation is not the same for
block ciphers).

The problem is that these designers will never explain the reasons for
their choices in the specification of their algorithms. They have
probably made several trade-offs (including performance), but the deeper
reasons for their choices are not so clear.

As far as I know there is no "elegant" standard for cryptographic hash
functions ...

Joe Peschel

Sep 25, 2004, 6:27:18 PM
"David Eather" <eat...@tpg.com.au> wrote in
news:4154...@dnews.tpgi.com.au:

> On further reflection, I discover the answer is an emphatic "NO".
>

You were right the first time.

David Eather

Sep 25, 2004, 8:45:25 PM
Snip/

You rag people about their spelling and their grammar, and seem to
particularly enjoy stretching possible meaning to the nth most degree.
You're better at grammar than most people will ever be. So when you're
caught (a rare event), please have some grace about it - just smile or
ignore it.

DE


>> Would it make any sense to look at the newly broken algorithms and
>> find out what was there weakness first?

JP


> Would it make any sense to double-check your spell-checker?

DE


On further reflection, I discover the answer is an emphatic "NO".

Regardless of how many times I check my spell-checker, the hash functions
still remain broken.

It was just a joke - I thought you would find a smile in it. If I thought
the above reply would have annoyed you / pissed you off / begun another long
thread on evil grammar - I would have just let it alone.

For future reference, let me know what your preferred response to your posts
is.

David Eather


Joe Peschel

Sep 26, 2004, 2:14:15 AM
"David Eather" <eat...@tpg.com.au> wrote in
news:4156112b$1...@dnews.tpgi.com.au:

> Snip/
>
> You rag people on their spelling, their grammar and particularly seem
> to particularly enjoy stretching possible meaning to the nth most
> degree.

Where have I stretched meaning to the nth most degree?

>You're better at grammar than most people will ever be.

Thankee.

> So
> when you're caught (rare event) please, have some grace about it -
> just smile or ignore it.

When I make a mistake, I usually say, "oops." Once I corrected a mistake of
my own and was flamed for correcting it. C'est la guerre.


You certainly seem to get worked up over a whimsical barb. It amazes me
that people writing in Usenet tolerate -- and seem to expect -- an
onslaught of insults, flames, condescension, and name-calling on most
topics, but become upset at even the most minute criticism of their
writing.

>
> DE
>>> Would it make any sense to look at the newly broken algorithms and
>>> find out what was there weakness first?
>
> JP
>> Would it make any sense to double-check your spell-checker?
>
> DE
> On further reflection, I discover the answer is an emphatic "NO".
> Regardless of how many times I check my spell-checker, the hash
> functions still remain broken.
>
> It was just a joke - I thought you would find a smile in it. If I
> thought the above reply would have annoyed you / pissed you off /
> begin another long thread on evil grammar - I would have just let it
> alone.

I smiled.

>
> For future reference let me know what your preferred response to your
> posts are.

What a strange idea!

> David Eather

Mok-Kong Shen

Sep 26, 2004, 6:51:15 AM

Joe Peschel wrote:

> "David Eather" <eat...@tpg.com.au> wrote in
>> So when you're caught (rare event) please, have some grace about it -
>> just smile or ignore it.
>
> When I make a mistake, I usually say, "oops." Once I corrected a mistake of
> my own and was flamed for correcting it. C'est la guerre.

'La guerre'?? Do we have an Irak in our group??

M. K. Shen

David Eather

Sep 26, 2004, 12:57:03 PM
Joe Peschel wrote:
> "David Eather" <eat...@tpg.com.au> wrote in
> news:4156112b$1...@dnews.tpgi.com.au:
>
>> Snip/
>>
>> You rag people on their spelling, their grammar and particularly seem
>> to particularly enjoy stretching possible meaning to the nth most
>> degree.
>
> Where have I stretched meaning to the nth most degree?
>
>> You're better at grammar than most people will ever be.
>
> Thankee.
>
>> So
>> when you're caught (rare event) please, have some grace about it -
>> just smile or ignore it.
>
> When I make a mistake, I usually say, "oops." Once I corrected a
> mistake of my own and was flamed for correcting it. C'est la guerre.
>
>
> You certainly seem to get worked up over a whimsical barb. It amazes
> me that people writing in Usenet tolerate -- and seem to expect -- an
> onslaught of insults, flames, condescension, and name-calling on most
> topics, but become upset at even the most minute criticism of their
> writing.

No, not upset. Not even at the most minute criticism of their writing.
What happened is that I made a small online joke - non-derogatory etc. Once
posted, the joke was over ("impropriety is the soul of humour" - Socrates) -
complete, delivered etc.

The reply that my first answer was correct brings the logical-illogical
response "OK, I tried re-checking the spell-checker, but the hash function
is still broken."
This would no doubt generate another reply about grammar, and so
another thread would succumb to the off-topic subject of the appropriateness
of perfect grammar in an informal forum on an unregulated medium. I don't
want that, and I doubt many others do.


>
>>
>> DE
>>>> Would it make any sense to look at the newly broken algorithms and
>>>> find out what was there weakness first?
>>
>> JP
>>> Would it make any sense to double-check your spell-checker?
>>
>> DE
>> On further reflection, I discover the answer is an emphatic "NO".
>> Regardless of how many times I check my spell-checker, the hash
>> functions still remain broken.
>>
>> It was just a joke - I thought you would find a smile in it. If I
>> thought the above reply would have annoyed you / pissed you off /
>> begin another long thread on evil grammar - I would have just let it
>> alone.
>
> I smiled.
>
>>
>> For future reference let me know what your preferred response to your
>> posts are.

Ignore, read / laugh but don't reply or plonk?

Claudio

Sep 26, 2004, 2:16:56 PM
As in MD5, as well as in other hash algorithms, it seems necessary to add
arbitrary constants (unsigned long) which are well distributed, one
each time a new unsigned long word of the input block is processed. One
would have a table of 125 constants in our case.

This can avoid a "direct attack" like the one which was recently posted.

If you think the design is still weak, I would like to know GOOD reasons: is
the mixing slower than in MD5 or SHA-1, for instance? How can this
mixing speed be measured? What says that finding collisions is easier in
this design than in others?

Thanks for your answers, and for clarifying these issues of hash
function cryptanalysis.

Bryan Olson

Sep 26, 2004, 3:26:49 PM
Claudio wrote:
[...]

> If you think the design still is weak, I would know GOOD reasons: is the
> mixing slower than MD5 os SHA-1 for instance ?

You don't prevent the meet-in-the-middle preimage attack, as all
the popular hash functions do.

You can easily fix that too. Pushing your scheme just past what
sci.crypt participants will shoot down in their spare time isn't
really hard. After fixing the many obvious mistakes, what would
be the chance you got all the subtle stuff right?


--
--Bryan
