Yes, LC_RNG.
--
Jens Peter Secher.
> CTR_Mode<AES> should also work, but there's a bug (which will be
> fixed soon) that makes the RandomNumberGenerator interface
> inaccessible.
So a CTR_Mode<AES> instance can be passed as an argument of type RNG
starting with Crypto++ v5.6?
I am about to implement "deterministic generation of private key from
small seed" [1] for Tahoe, so I need to come up with a function that
takes an input of 96 bits and produces a 192-bit ECDSA private key.
I'm going to have to support this function forever (approximately) for
backwards-compatibility reasons. I would really like the next
release of Tahoe to be compatible with older Crypto++ versions. Also
I would really like for this function to be as simple and clear as
possible so that I can easily explain to other people how to
implement it compatibly.
My current code to do this is below (and I've earlier posted it to
this list: [2]), but I'm not entirely satisfied with it because it
seems rather ad-hoc. One of my earlier notes on this subject to this
list, [2], says that I experimented with using X917RNG with a
customization of Salsa20 to pretend that it has a block size of 32.
So, I ask everyone, what is the simplest efficient way to take a
secret 96-bit input, and produce an output between [1, n) such that
a) if you know the 96-bit secret and use this algorithm, you always
get the same output, and
b) if you don't know the 96-bit secret, you can't learn anything
about the output
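One common shape for such a function is hash-then-rejection-sample. A minimal Python sketch of the idea (hypothetical, not Tahoe's actual algorithm: SHA-256 stands in for Tiger since Tiger isn't in Python's standard library, and the tag string and function name are made up):

```python
import hashlib

def derive_exponent(seed: bytes, n: int, tag: bytes = b"example-tag,") -> int:
    """Deterministically map a 96-bit seed to an integer in [1, n)."""
    assert len(seed) == 12  # 96-bit seed
    nbytes = (n.bit_length() + 7) // 8
    digest = hashlib.sha256(tag + seed).digest()[:nbytes]
    x = int.from_bytes(digest, "big")
    # Rejection sampling: re-hash until the candidate falls below n - 1,
    # then add 1 so the result lies in [1, n) and is never zero.
    while x >= n - 1:
        digest = hashlib.sha256(tag + digest).digest()[:nbytes]
        x = int.from_bytes(digest, "big")
    return x + 1
```

Property (a) holds because every step is deterministic in the seed; property (b) reduces to the preimage resistance and output pseudorandomness of the hash.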
Unless I, or someone, can think of a problem with this way to do it,
or can propose a better way to do it, then I guess I'm going to
proceed with this and then I'll be committed to maintaining it for a
while.
Regards,
Zooko
[1] http://allmydata.org/trac/pycryptopp/ticket/2 # deterministic generation of private key from small seed
[2] http://groups.google.com/group/cryptopp-users/browse_thread/thread/f30427601a5884f6
[3] http://groups.google.com/group/cryptopp-users/msg/c1041e508c8d8705
---
Tahoe, the Least-Authority Filesystem -- http://allmydata.org
store your data: $10/month -- http://allmydata.com/?tracking=zsig
------- begin appended code
static const char* TAG_AND_SALT =
    "102:pycryptopp v0.5.3 key derivation algorithm using Tiger hash "
    "to generate ECDSA 192-bit secret exponents,"
    "16:H1yGNvUONoc0FD1d,";
static const size_t TAG_AND_SALT_len = 127;

static int
SigningKey___init__(PyObject* self, PyObject* args, PyObject* kwdict) {
    static const char *kwlist[] = { "seed", NULL };
    const char* seed;
    int seedlen;
    if (!PyArg_ParseTupleAndKeywords(args, kwdict,
            "t#:SigningKey___init__", const_cast<char**>(kwlist),
            &seed, &seedlen)) {
        return -1;
    }
    if (seedlen != 12) {
        PyErr_Format(ecdsa_error, "Precondition violation: seed is "
            "required to be of length 12, but it was %d", seedlen);
        return -1;
    }

    OID curve;
    Integer grouporderm1;
    byte privexpbytes[24] = {0};
    Integer privexponentm1;

    privexponentm1.Decode(privexpbytes, sizeof(privexpbytes));
    assert (privexponentm1 == 0); // just checking..

    curve = ASN1::secp192r1();
    grouporderm1 = DL_GroupParameters_EC<ECP>(curve).GetGroupOrder() - 1;

    // Hash the tag-and-salt plus the seed to get a first candidate.
    Tiger t;
    t.Update(reinterpret_cast<const byte*>(TAG_AND_SALT), TAG_AND_SALT_len);
    t.Update(reinterpret_cast<const byte*>(seed), seedlen);
    t.TruncatedFinal(privexpbytes, Tiger::DIGESTSIZE);
    privexponentm1.Decode(privexpbytes, sizeof(privexpbytes));

    // Rejection-sample: re-hash until the candidate is below the
    // group order minus one.
    while (privexponentm1 >= grouporderm1) {
        Tiger t2;
        t2.Update(reinterpret_cast<const byte*>(TAG_AND_SALT), TAG_AND_SALT_len);
        t2.Update(privexpbytes, sizeof(privexpbytes));
        t2.TruncatedFinal(privexpbytes, Tiger::DIGESTSIZE);
        privexponentm1.Decode(privexpbytes, sizeof(privexpbytes));
    }

    SigningKey* mself = reinterpret_cast<SigningKey*>(self);
    mself->k.AccessKey().Initialize(curve, privexponentm1+1);
    return 0;
}
> One of my earlier notes on this subject to this
> list, [2], says that I experimented with...
> ...
> So, I ask everyone, what is the simplest efficient way
> to take a secret 96-bit input, and produce an output
> between [1, n) ...
sci.crypt is probably better equipped to answer the question. There
are quite a few PhDs and consultants who are active in the group.
Jeff
> [SNIP]
SecByteBlock seed;
// fill seed here
r.GenerateRandom(NullRNG(), MakeParameters(Name::Min(), 1)
    (Name::Max(), n)(Name::Seed(), ConstByteArrayParameter(seed)));
This will be supported and be backwards compatible indefinitely. Internally
it will use P1363_KDF2<SHA1> to generate random integers that are the same
length as n-1, until one of them is less than n-1, then it returns that
number plus 1. Not too different from your code, actually.
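A rough Python model of the behaviour described above, for cross-checking (hypothetical names; whether the counter encoding and chunking match Crypto++'s P1363_KDF2 internals exactly would need verifying against the library). KDF2 expands a seed by concatenating hash(seed || 4-byte big-endian counter), with the counter starting at 1:

```python
import hashlib

def kdf2(seed: bytes, out_len: int, hash_fn=hashlib.sha1) -> bytes:
    """IEEE P1363 KDF2-style expansion: hash(seed || counter), counter from 1."""
    out = b""
    counter = 1
    while len(out) < out_len:
        out += hash_fn(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def generate_in_range(seed: bytes, n: int, hash_fn=hashlib.sha1) -> int:
    """Take successive chunks the same length as n - 1 from the KDF2
    stream until one is below n - 1, then return that value plus 1."""
    nbytes = ((n - 1).bit_length() + 7) // 8
    chunk = 0
    while True:
        stream = kdf2(seed, (chunk + 1) * nbytes, hash_fn)
        x = int.from_bytes(stream[chunk * nbytes:], "big")
        if x < n - 1:
            return x + 1
        chunk += 1
```

One nice consequence of the counter construction is that the output stream is a prefix of any longer derivation from the same seed, so the rejection loop never has to restart from scratch.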
I want to make sure that I understand the algorithm and can
regenerate the same keys later, so I intend to write unit tests which
generate keys, and then compute the same algorithm (based on
P1363_KDF2) in the test code (in Python) and assert that they get the
same result.
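The determinism half of such a test might look like the following sketch (with a made-up stand-in derivation, not the real Crypto++ one, just to show the shape of the assertions):

```python
import hashlib
import unittest

def derive_stub(seed: bytes) -> bytes:
    # Stand-in for the real key-derivation function under test.
    return hashlib.sha256(b"tahoe-test-vector," + seed).digest()[:24]

class TestDeterministicKeys(unittest.TestCase):
    def test_same_seed_same_key(self):
        # The same seed must always yield the same key, so any change
        # to the algorithm breaks this test.
        seed = b"\x01" * 12
        self.assertEqual(derive_stub(seed), derive_stub(seed))

    def test_different_seed_different_key(self):
        self.assertNotEqual(derive_stub(b"\x01" * 12),
                            derive_stub(b"\x02" * 12))
```

In practice one would also pin a few known seed/key pairs as fixed test vectors, so that the test catches changes across library versions, not just within one run.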
I wonder if I can write a bit of C++ code so that I can do both the
C++ version and the Python version using SHA-256 instead of SHA-1. I
know that SHA-1 is probably okay for this use, where all we require of
it is "unpredictable and well-distributed output given unpredictable
and well-distributed input", and as far as anyone knows SHA-1 can do
that. But we don't use SHA-1 anywhere else in our formats right now,
and it would be nice in the future to be able to simply say "no SHA-1
anywhere in here".
For example, people could then implement Tahoe without implementing
SHA1 at all.
So I think I'll look into what it would take to implement the same
functionality that the above GenerateRandom() has, but with SHA-256.
What do you think?
Regards,
Zooko