
Static version of bignum library


pet...@gocougs.wsu.edu

Mar 16, 2005, 8:28:37 PM
Does anyone know of a static version of the bignum library (i.e. the
one used for public key encryption) that does everything _without_
dynamic memory?

We want to use some public key methods in an embedded device, but we
don't want to support dynamic memory allocation.

Or barring that, a bignum library that bounds the maximum amount of
memory used.

Thanks,
Pete

tomst...@gmail.com

Mar 16, 2005, 8:35:03 PM

In LibTomMath all heap operations are through replaceable macros. Just
make your own "functions" that allocate out of a statically allocated
buffer and you're set.

I just happen to have a cryptographic library that has the same
feature...

Tom

Thad Smith

Mar 17, 2005, 10:44:51 AM
pet...@gocougs.wsu.edu wrote:
>
> Does anyone know of a static version of the bignum library (i.e. the
> one used for public key encryption) that does everything _without_
> dynamic memory?

The source for PGP contained MP routines that, at least in the very
old versions I have (PGP 2.6), didn't use dynamic allocation. I
haven't checked recent sources. These routines are not as well
isolated as a well-written standalone library, though.

Thad

pet...@gocougs.wsu.edu

Mar 17, 2005, 12:14:37 PM
tomst...@gmail.com wrote:
> In LibTomMath all heap operations are through replaceable macros.  Just
> make your own "functions" that allocate out of a statically allocated
> buffer and you're set.

I initially did this, but how do I know I will never exhaust that
static store?  If I release the product to the world and that store is
exhausted even once, the product is broken.

Perhaps a better way to put it is: Is there a bignum library that has
a boundable maximum on the amount of memory used?

Thanks,
Pete

Colin Andrew Percival

Mar 17, 2005, 12:41:19 PM
In sci.crypt pet...@gocougs.wsu.edu wrote:
> Does anyone know of a static version of the bignum library (i.e. the
> one used for public key encryption) that does everything _without_
> dynamic memory?

Yes. I wrote some RSA code a few years ago where the functions all
take a parameter "TMP" meaning "here's a temporary buffer of size foo *
[length of inputs] bytes, do all your work within that buffer" (for
various values of foo depending upon the function being used). I did
likewise for entropy -- the keypair generation code takes as input a
buffer of [size of desired modulus in bits] / 8 random bytes.

> We want to use some public key methods in an embedded device, but we
> don't want to support dynamic memory allocation.

Given that my code performs FFTs over floating-point numbers, you
probably wouldn't want to use it in an embedded application, but my
code at least demonstrates that what you want is possible.

Colin Percival

CBFalconer

Mar 17, 2005, 1:24:03 PM

Please explain how to do that without putting a bound on the size
of the numbers that can be handled.  There are proverbs about putting
quarts into pint bottles, and so on.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson


Paul Rubin

Mar 17, 2005, 1:31:23 PM
CBFalconer <cbfal...@yahoo.com> writes:
> > Perhaps a better way to put it is: Is there a bignum library that
> > has a boundable maximum on the amount of memory used?
>
> Please explain how to do that without putting a bound on the size
> of numbers that can be handled? There are proverbs about putting
> quarts into pint bottles, and so on.

I don't think there was a requirement of unbounded sized numbers.
The request was simply to be able to do fixed size crypto operations
(say 1024 bits) in a bounded amount of ram.

tomst...@gmail.com

Mar 17, 2005, 1:35:41 PM

I don't get the question.  LibTomMath uses [internally] a predictable
amount of memory to do things like invmod/exptmod.  Externally it uses
as much memory as you request it to.

So what I would do is make a huge [say] 100KB "static heap" and run
your application. Get the amount of ram it uses [while performing all
the operations it supports] and just round up to the next 4KB or
something.

For example, for RSA-1024 on a 32-bit processor [typical LTM platform
;-)]

exptmod: window size = 64, but it only uses 33 of them
+1 for res
total of 34 bignums
mp_div : 6 bignums

So a rough ballpark figure is you need ~45 bignums during an exptmod of
a 1024-bit exponent. Each occupies 64 digits + header == 272 bytes *
45 == 12KB

So realistically to perform a 1024-bit exptmod out of the box with
LibTomMath you're looking at roughly [with typical malloc overhead,
etc...] 16KB of heap plus about 2KB of stack.

You can trim this down a bit: with MP_LOW_MEM the window is 32 [again
half used == 16 + 2 == 18 for exptmod] and the precision padding is 8,
so you'd have 48 digits + header = 208 bytes * ~29 == ~6KB without a
significant speed loss.

Again, just profile your application by overshooting the heap
requirement then tune it down. LTM is very predictable in terms of
heap usage, for instance, the contents of the exponent do not change
the memory requirement (nor do the bits of the modulus, provided it's
the same "type").

To add to this, ... I've had users who have done THIS VERY THING [which
is where I got the 6KB figure from btw] and they've gone to market with
the end product...

So I'd say it's an ideal solution to the problem.

Tom

Jean-Luc Cooke

Mar 17, 2005, 3:08:30 PM
In sci.crypt pet...@gocougs.wsu.edu wrote:
> You get the meat of what I need below.  Basically, I was asking how
> to bound the maximum amount of heap usage during a DSA operation.

> > So what I would do is make a huge [say] 100KB "static heap" and run
> > your application.  Get the amount of ram it uses [while performing all
> > the operations it supports] and just round up to the next 4KB or
> > something.

> While this can work in the lab, I am uncomfortable using it in released
> code. While you assert that LTM is predictable in terms of heap usage,
> it is software without warranty, and was not designed for use in static
> storage systems.

> Given two separate DSA operations, with different keys and data, will
> both use the same amount of heap?

Gracenote hired Tom to port his math library to something looking eerily
similar to an iPod.  Right now he's doing other interesting hardware-y
things with his library for an Ottawa company.  I'm fairly confident it
can do what he says it can.

> > So a rough ballpark figure is you need ~45 bignums during an exptmod of
> > a 1024-bit exponent.  Each occupies 64 digits + header == 272 bytes *
> > 45 == 12KB
> >
> > So realistically to perform a 1024-bit exptmod out of the box with
> > LibTomMath you're looking at roughly [with typical malloc overhead,
> > etc...] 16KB of heap plus about 2KB of stack.

> Aha! These are the kind of numbers I'm looking for. But how
> predictable is this?

ExpMod is a memory-deterministic algorithm (or it should be if done
right).  There is an upper bound and a lower bound.

webs...@gmail.com

Mar 17, 2005, 3:32:17 PM

The amount of memory used for any bignum operation is bounded, one way
or another, by the range of the numbers used in the inputs for your
formulas, and the set of formulas themselves. I think this is true for
any bignum library. For example, if we assume that the memory used is
basically proportional to the base digits used then for the following:

a <- b + c

The number of digits needed to store the result in a is no more than
the maximum of the digits for b and c, plus 1.  With multiplication it's
at most the sum of the digits of the operands, and so on.

So long as the operations you perform are fixed, the input sizes are
bounded, and so long as you don't leak (i.e., you deallocate everything
before you lose reference to them) there is always some fixed finite
boundary for the total amount of memory that will be used.

So from an engineering point of view, you can go ahead and use
something like LibTomMath or LibTomCrypt, hook out
malloc/realloc/free as he suggests, and literally measure the memory
usage over lots of testing.  It's usually quite easy to get a set of
bignum operations to use the maximum memory by just picking inputs that
lead to the largest (in magnitude) possible output.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/

pet...@gocougs.wsu.edu

Mar 17, 2005, 4:25:20 PM
Jean-Luc Cooke wrote:

> In sci.crypt pet...@gocougs.wsu.edu wrote:
> > While this can work in the lab, I am uncomfortable using it in released
> > code.  While you assert that LTM is predictable in terms of heap usage,
> > it is software without warranty, and was not designed for use in static
> > storage systems.
>
> > Given two separate DSA operations, with different keys and data, will
> > both use the same amount of heap?
>
> Gracenote hired Tom to port his math library to something looking eerily
> similar to an iPod.  Right now he's doing other interesting hardware-y
> things with his library for an Ottawa company.  I'm fairly confident it
> can do what he says it can.

Please don't misunderstand me. I'm not suggesting that LTM is poorly
designed, or incapable of doing (well) what it is designed to do.
However, it was designed to use dynamic memory management, not for
systems with static memory usage.

I was only pointing out that its initial design goals likely did not
take static memory usage into account.

> > > So a rough ballpark figure is you need ~45 bignums during an exptmod
> > > of a 1024-bit exponent.  Each occupies 64 digits + header == 272
> > > bytes * 45 == 12KB
> > >
> > > So realistically to perform a 1024-bit exptmod out of the box with
> > > LibTomMath you're looking at roughly [with typical malloc overhead,
> > > etc...] 16KB of heap plus about 2KB of stack.
>
> > Aha! These are the kind of numbers I'm looking for. But how
> > predictable is this?
>
> ExpMod is a memory-deterministic algorithm (or it should be if done
> right).  There is an upper bound and a lower bound.

Great.  That's half of what I want to know.  The other half is what
that upper bound is.  From what I gathered from Tom's response, it is
16KB of heap and 2KB of stack.  And now that I know it is deterministic,
I am more comfortable using the software.

Thanks,
Pete

CBFalconer

Mar 17, 2005, 9:44:45 PM

You snipped the quote and attributions that showed precisely that
worry about bounds.

Douglas A. Gwyn

Mar 17, 2005, 11:06:16 PM
pet...@gocougs.wsu.edu wrote:
> Perhaps a better way to put it is: Is there a bignum library that has
> a boundable maximum on the amount of memory used?

No, what you have to do is analyze the needs of the algorithm.
If, for example, you use the Euclidean algorithm to find a
GCD as part of your app, then *if* it is properly coded the
(recursive) procedure will never require more storage than
a certain easily estimated amount that you can preallocate.
If, on the other hand, you have a piece of the code that
multiplies out many factors without modular reduction, the
intermediate products could be much larger than the maximum
size of any single input. Either you need to determine how
much storage will always be enough for the intermediate
values, or else you need to change the algorithm and/or
rearrange the computation to keep the intermediate values of
a manageable size (for which you must determine a safe upper
bound).

Douglas A. Gwyn

Mar 17, 2005, 11:08:39 PM
pet...@gocougs.wsu.edu wrote:

> tomst...@gmail.com wrote:
>> So what I would do is make a huge [say] 100KB "static heap" and run
>> your application.  Get the amount of ram it uses [while performing all
>> the operations it supports] and just round up to the next 4KB or
>> something.
> While this can work in the lab, I am uncomfortable using it in released
> code.  While you assert that LTM is predictable in terms of heap usage,
> it is software without warranty, and was not designed for use in static
> storage systems.

Worse than that, what guarantee is there that with different
input values the operation will not require more storage than
it did in the test runs? The only way to be certain is to
analyze the algorithm.

tomst...@gmail.com

Mar 18, 2005, 7:08:53 AM

Or ask the author ... or read the text book he wrote on bignum math....

I think you're trying to "invent" an issue here. I really do have
users all over the market and quite a few in "small memory" devices
which use LTM/LTC [often out of the box] without any problems.

Just to give an idea of some of my users... I've got Gracenote who are
putting a DH-768 based PK system in embedded devices. They wanted the
memory usage to be less than 8KB [iirc] and this was accomplished with
the LOW_MEM option and some porting. Then I have the Organic Network
folk who are putting LibTomCrypt on WG511 wireless routers. Then
there is Sony who uses it in some Playstation games [though the PS2
does have 32MiB of ram they're not going to use a library that is
memory hungry].

Oh and let's not forget that I've [long ago] built and run the LTC
demos on my Gameboy Advance. [32K stack, 256K heap]...

If the user just wants to do mp_exptmod() operations then the memory
requirements [for any given modulus size] are going to be tightly
constrained.  For a bignum library such as LibTomMath it's a bit hard
to state "the exact bit requirement is ..." as it's fairly complicated
software [since it's not meant to be single purpose].  However, you
can just profile runs of the application using the library to get an
idea.

I would be very surprised if the memory requirements for mp_exptmod()
for RSA-1024 [with/without CRT] rose above 16KiB of heap.

Tom

Douglas A. Gwyn

Mar 19, 2005, 1:53:55 AM
tomst...@gmail.com wrote:

> Douglas A. Gwyn wrote:
>>Worse than that, what guarantee is there that with different
>>input values the operation will not require more storage than
>>it did in the test runs? The only way to be certain is to
>>analyze the algorithm.
> Or ask the author ... or read the text book he wrote on bignum math....
> I think you're trying to "invent" an issue here. I really do have
> users all over the market ...

Clearly you do not even understand what I am saying.
There is no way that the bignum package author can
possibly know what the application requirements are,
without studying the application's algorithm. This
has nothing to do with whether the bignum package is
any good or is widely used.

> If the user just wants todo mp_exptmod() operations then the memory
> requirements [for any given modulus size] are going to be tightly
> contrained. A bignum library such as LibTomMath is a bit complicated
> to put "the exact bit requirement is ..." as it's fairly complicated
> software [since it's not meant to be single purposed]. However, you
> can just profile runs of the application using the library to get an
> idea.

Is that actually the specific application requirement
that started this thread? And you're ignoring the
question cited at the top of this posting: what
guarantee is there that the profiling captures the
worst possible case?

There is no way to get reliability for this app
without doing some analysis.

Tom St Denis

Mar 19, 2005, 5:57:26 AM

Douglas A. Gwyn wrote:
> tomst...@gmail.com wrote:
> > Douglas A. Gwyn wrote:
> >>Worse than that, what guarantee is there that with different
> >>input values the operation will not require more storage than
> >>it did in the test runs? The only way to be certain is to
> >>analyze the algorithm.
> > Or ask the author ... or read the text book he wrote on bignum math....
> > I think you're trying to "invent" an issue here.  I really do have
> > users all over the market ...
>
> Clearly you do not even understand what I am saying.
> There is no way that the bignum package author can
> possibly know what the application requirements are,
> without studying the application's algorithm. This
> has nothing to do with whether the bignum package is
> any good or is widely used.

This is sci.crypt.... chances are good they're calling an exptmod... to
quote their original post

---


We want to use some public key methods in an embedded device, but we
don't want to support dynamic memory allocation.

---

So unless it's ECC they're likely going to use it.

> > If the user just wants to do mp_exptmod() operations then the memory
> > requirements [for any given modulus size] are going to be tightly
> > constrained.  A bignum library such as LibTomMath is a bit complicated
> > to put "the exact bit requirement is ..." as it's fairly complicated
> > software [since it's not meant to be single purposed].  However, you
> > can just profile runs of the application using the library to get an
> > idea.
>
> Is that actually the specific application requirement
> that started this thread? And you're ignoring the
> question cited at the top of this posting: what
> guarantee is there that the profiling captures the
> worst possible case?

Unless the OP is writing a Magma clone or something the analysis is
VERY easy. Say their product does RSA-1024 ... then you do RSA-1024
and capture the amount of memory it requires.

As for ensuring you get the "worst possible case" I wrote the library
and know how it behaves. For exptmod() the memory requirement for
equal modulus/exponent sizes over several calls is essentially the
same.

I still say you're "inventing" an issue here. I have yet over the last
THREE years to hear about LibTomMath taking excessive heap to do pretty
much any of the operations.

So just let it go. You're not making a valid point in this case.

Tom

webs...@gmail.com

Mar 19, 2005, 10:42:30 AM
Douglas A. Gwyn wrote:
> tomst...@gmail.com wrote:
> > Douglas A. Gwyn wrote:
> >>Worse than that, what guarantee is there that with different
> >>input values the operation will not require more storage than
> >>it did in the test runs? The only way to be certain is to
> >>analyze the algorithm.
> > Or ask the author ... or read the text book he wrote on bignum
> > math.... I think you're trying to "invent" an issue here. I
> > really do have users all over the market ...
>
> Clearly you do not even understand what I am saying.
> There is no way that the bignum package author can
> possibly know what the application requirements are,
> without studying the application's algorithm. This
> has nothing to do with whether the bignum package is
> any good or is widely used.

The OP was asking about RSA.

> > If the user just wants todo mp_exptmod() operations then
> > the memory requirements [for any given modulus size] are
> > going to be tightly contrained. A bignum library such as
> > LibTomMath is a bit complicated to put "the exact bit
> > requirement is ..." as it's fairly complicated software
> > [since it's not meant to be single purposed]. However,
> > you can just profile runs of the application using the
> > library to get an idea.
>
> Is that actually the specific application requirement
> that started this thread?

Yes, it is.

> [...] And you're ignoring the
> question cited at the top of this posting: what
> guarantee is there that the profiling captures the
> worst possible case?

Because it's very possible that a bignum library will use the same
amount of memory regardless of whether it's the best case or the worst
case.  Even if it doesn't, it is usually very easy to exercise the
maximum memory usage situation.  For an application like crypto, most
usages of it will certainly come close to the maximum memory usage.
That's why profiling is actually a good suggestion -- you'll get very
close to your ballpark memory usage with very little testing.

(I'm a little confused by Tom St. Denis' comments that the memory
expenditure would be complicated -- even using the window method for
power computations the memory consumption should still be
deterministic.)

> There is no way to get reliability for this app
> without doing some analysis.

Yes, but the analysis is fairly straightforward.

Tom St Denis

Mar 19, 2005, 4:36:52 PM

webs...@gmail.com wrote:
> (I'm a little confused by Tom St. Denis' comments that the memory
> expenditure would be complicated -- even using the window method for
> power computations the memory consumption should still be
> deterministic.)

I have four methods of multiplication, four methods of squaring and
five modular reduction techniques and it can be built out of the box to
use 28 or 60 bit digits depending on your platform and it can be
configured for "reduced memory" via a preprocessor define.

LibTomMath != RSA library...

So for me to say "if you build LTM all 1000 bit exptmod will take N KB
of memory" is a bit loaded since I dunno if they're diminished radix,
odd, even, etc moduli, I dunno if you're on a 64 or 32-bit platform, I
dunno if you defined the low mem mode...

What I can say is that "it's well behaved". So if you profile it given
similar inputs (e.g. same size, reduction class, etc) the memory
requirement is going to be nicely bound.

Tom

Roger Willcocks

Mar 20, 2005, 8:13:32 PM

"Tom St Denis" <tomst...@gmail.com> wrote in message
news:1111229846.4...@l41g2000cwc.googlegroups.com...

His point is valid.  You have not heard of the library taking excessive
heap.  That doesn't mean it will never do so.  If you were asked to insure
the user against heap overflow, would you be prepared to do so?  What if
the code were used to control a nuclear reactor?

Are you confident it won't overflow, or are you sure it won't overflow?

--
Roger


Tom St Denis

Mar 20, 2005, 8:34:13 PM
Roger Willcocks wrote:
> His point is valid.  You have not heard of the library taking excessive
> heap.  That doesn't mean it will never do so.  If you were asked to
> insure the user against heap overflow would you be prepared to do so?
> What if the code were used to control a nuclear reactor?

Chances are [nice contrived smartass scenario btw] if you had to
"secure a nuclear reactor" you would use custom software specifically
made for the problem and not a GENERAL PURPOSE BIGNUM LIB.

> Are you confident it won't overflow, or are you sure it won't overflow?

As for am I confident it works?

Let's put it this way, the only way you people know about me is THROUGH
MY SOFTWARE. So if it's bad I basically burn my only
outlet/connection. That's why I strive to fix all bugs as reported and
I'm very open about the process.

As for it "never overflowing" I never make such a claim.  What I had
claimed is that given the same types of inputs you'd require the same
[+/- a small variance] amount of memory.  If you model the application
with 1024-bit inputs and then send 2048-bit inputs I can guarantee that
you will need more memory.  Put it another way, if you send the EXACT
same inputs to exptmod() it will require the same memory EACH time you
call it (provided your heap operations are deterministic).

Keep in mind there is extensive error checking in all of my libraries
so if a function fails [e.g. to alloc required heap] it won't just
write in lala land [by design, though there have been bugs which I
correct as I find them] it will early abort and tell the developer.

So do I guarantee that LTM 0.35 is 100% bugfree? Hell no. But I
strongly think that it's reliable for cryptographic usage and very
unlikely to wack out at proper usage.

But this is about as much as you can say for any complicated system.
Heck, is AES secure?  Likely it is, but what does that say?  Is your car
safe to drive tomorrow?  Likely it is.  Is $BAD_THING not going to
happen shortly?  Likely it won't.

Though the fact that I have so many users of my software and they're
all being productive with it strongly indicates to me [at least] that
I'm doing a decent job.

Now you guys can "invent" problems to talk about. The reality of the
situation though is that the library works, works well, behaves well
and is entirely free and public domain.

Use it if you want, if you're not going to then shut up. Cuz it's
really easy to openly question and character attack people. Let's see
you write and maintain a general purpose bignum library, a 300 page
textbook, a crypto library, a polynomial basis math lib, etc... all for
free in your spare time.  Then, when you do all that [competently],
maybe you can come here and question my ability to produce
quality software.

Your [and DAG] questions essentially are openly questioning if I can do
a good job at what I do. It'd be like me going to your work and
questioning if your latest work product could withstand a bit of
scrutiny as if you're not competent enough to do something of value.

Tom

Douglas A. Gwyn

Mar 21, 2005, 1:44:31 AM
Roger Willcocks wrote:
> His point is valid.  You have not heard of the library taking excessive
> heap.  That doesn't mean it will never do so.  If you were asked to insure
> the user against heap overflow would you be prepared to do so? ...

Actually my point was more that the answers to such questions
require knowledge/study/analysis of the application's *use*
of the bignum package, even if the bignum package itself has
very simple and predictable storage requirements.

Douglas A. Gwyn

Mar 21, 2005, 1:53:32 AM
Tom St Denis wrote:
> Your [and DAG] questions essentially are openly questioning
> if I can do a good job at what I do.

Not at all, although you seem to insist on taking it that way.
I haven't once in this discussion commented on the quality of
your software. That is not germane to the point I was making,
which was that the application usage pattern is the dominant
factor in determining the storage requirement necessary to
guarantee sufficient capacity, and that a careful analysis of
the application algorithm(s) is required to determine a
reliable upper bound. Such an analysis might be relatively
easy or it might be difficult (in which case perhaps an
alternate algorithm more amenable to analysis could be
substituted), but it needs to be done.

Tom St Denis

Mar 21, 2005, 8:01:38 AM

Douglas A. Gwyn wrote:
> Tom St Denis wrote:
> > Your [and DAG] questions essentially are openly questioning
> > if I can do a good job at what I do.
>
> Not at all, although you seem to insist on taking it that way.

It's called professionalism. If I say "this == true" and you
repeatedly question it you're basically suggesting that I'm not telling
the truth or that what I have to say is suspect.

I already answered the questions about LTM long ago in this thread yet
you insisted on repeatedly questioning the behaviour of the code. That
suggests to me that you don't think what I'm saying is true.

> I haven't once in this discussion commented on the quality of
> your software. That is not germane to the point I was making,
> which was that the application usage pattern is the dominant
> factor in determining the storage requirement necessary to
> guarantee sufficient capacity, and that a careful analysis of
> the application algorithm(s) is required to determine a
> reliable upper bound. Such an analysis might be relatively
> easy or it might be difficult (in which case perhaps an
> alternate algorithm more amenable to analysis could be
> substituted), but it needs to be done.

My algorithms are fully deterministic and have class specific
behaviours. Profiling is all you need to do to figure out the memory
requirements.

To suggest otherwise is to suggest that I'm lying about the nature of
my work product and that I'm not competent to make such claims.

I'd suggest that unless you know of a reason to call suspect to what
I'm saying that you just leave it at that.

Tom

CBFalconer

Mar 21, 2005, 12:14:59 PM
Tom St Denis wrote:
>
... snip ...

>
> My algorithms are fully deterministic and have class specific
> behaviours. Profiling is all you need to do to figure out the
> memory requirements.
>
> To suggest otherwise is to suggest that I'm lying about the nature
> of my work product and that I'm not competent to make such claims.
>
> I'd suggest that unless you know of a reason to call suspect to
> what I'm saying that you just leave it at that.

You are just another pretty face hiding behind a keyboard.  You may
be lying, mistaken, or correct.  Instead of protesting you could
point out facets and tests you know of and have performed on your
own code.  Maybe you have.  At any rate there is no reason to get
your water hot and complain about such people as Doug Gwyn or
anybody else who fails to take your word for it.  Incidentally, DAG's
reputation far exceeds yours outside the area of persnicketiness.

--
"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare


Tom St Denis

Mar 21, 2005, 12:23:06 PM
CBFalconer wrote:
> You are just another pretty face hiding behind a keyboard. You may
> be lying, mistaken, or correct. Instead of protesting you could
> point out facets and tests you know of and have performed on your
> own code. Maybe you have. At any rate there is no reason to get
> your water hot and complain about such people as Doug Gwyn or
> anybody else who fails to take your word for it. Incidentally DAGs
> reputation far exceeds yours outside the area of persnicketyness.

Then I write a test and the same people will just say "how do we know
your tests worked properly", etc...  It's all about questioning someone's
credibility.  I won't speculate about his motivations, but if he were
really concerned with the behaviour he'd try to model it himself.

As for reputation, I dunno.  Whenever someone asks for a crypto or
bignum lib here I seem to be among [if not the only] recommendations
made by quite a few people here.  I have yet to see Doug get quoted,
cited or referred to about ANYTHING in cryptography.  Compound that with
the fact that I have thousands upon thousands of users and it makes me
think that there is something more to my reputation than his.

Tom

Joe Peschel

Mar 21, 2005, 1:08:05 PM
"Tom St Denis" <tomst...@gmail.com> wrote in
news:1111425786....@f14g2000cwb.googlegroups.com:

> As for reputation I dunno. Whenever someone asks for a crypto or
> bignum lib here I seem to be among [if not the only] recommendation
> made by quite a few people here. I have yet to see Doug get quoted,
> cited or referals about ANYTHING in cryptography.

I don't think Doug has written a big number library, at least he hasn't
mentioned writing one. You haven't seen Doug quoted or cited about anything
in cryptography? Have you been snoozing, Tom?

J

--
__________________________________________
When will Bush be tried for war crimes?

"Our enemies are innovative and resourceful, and so are we. They
never stop thinking about new ways to harm our country and our
people, and neither do we." --G. W. B.

Joe Peschel
D.O.E. SysWorks
http://members.aol.com/jpeschel/index.htm
__________________________________________

Tom St Denis

Mar 21, 2005, 1:21:11 PM

Joe Peschel wrote:
> "Tom St Denis" <tomst...@gmail.com> wrote in
> news:1111425786....@f14g2000cwb.googlegroups.com:
>
> > As for reputation I dunno. Whenever someone asks for a crypto or
> > bignum lib here I seem to be among [if not the only] recommendation
> > made by quite a few people here.  I have yet to see Doug get quoted,
> > cited or referals about ANYTHING in cryptography.
>
> I don't think Doug has written a big number library, at least he hasn't
> mentioned writing one.  You haven't seen Doug quoted or cited about
> anything in cryptography?  Have you been snoozing, Tom?

Um, he's cited in citeseer 0 times, I'm in there three times...

AND I'VE NEVER BEEN PUBLISHED!!!

Sorry, I'm not saying Doug is a stupid person or unaccomplished. I'm
just saying who the f!@# is he to sit here and question everything I
say?

Tom

Joe Peschel

unread,
Mar 21, 2005, 3:39:03 PM3/21/05
to
Joe Peschel <jpes...@no.spam.org> wrote in
news:Xns9620942C1FCE7...@216.168.3.44:

> Does that makes you more of a cryptology authority than Gwyn?*
make

J

Joe Peschel

unread,
Mar 21, 2005, 3:33:31 PM3/21/05
to
"Tom St Denis" <tomst...@gmail.com> wrote in
news:1111429271.7...@f14g2000cwb.googlegroups.com:

>
> Joe Peschel wrote:
>> "Tom St Denis" <tomst...@gmail.com> wrote in
>> news:1111425786....@f14g2000cwb.googlegroups.com:
>>
>> > As for reputation I dunno. Whenever someone asks for a crypto or
>> > bignum lib here I seem to be among [if not the only] recommendation
>> > made by quite a few people here. I have yet to see Doug get quoted,
>> > cited or referals about ANYTHING in cryptography.
>>
>> I don't think Doug has written a big number library, at least he hasn't
>> mentioned writing one. You haven't seen Doug quoted or cited about anything
>> in cryptography? Have you been snoozing, Tom?
>
> Um, he's cited in citeseer 0 times, I'm in there three times...

Does that makes you more of a cryptology authority than Gwyn?*

>

> AND I'VE NEVER BEEN PUBLISHED!!!

I have. I'm not in there. ;-)

>
> Sorry, I'm not saying Doug is a stupid person or unaccomplished. I'm
> just saying who the f!@# is he to sit here and question everything I
> say?

I'm sure he isn't questioning everything you say. It might seem like it
sometimes, though. On cryptology matters, listen to him.

J

* The score is, at least, tied. I found three citations for Gwyn in
CiteSeer. There may be more. And don't forget his i-hat code.

Douglas A. Gwyn

unread,
Mar 21, 2005, 6:32:11 PM3/21/05
to
Tom St Denis wrote:
> Douglas A. Gwyn wrote:
>>Tom St Denis wrote:
>>>Your [and DAG] questions essentially are openly questioning
>>>if I can do a good job at what I do.
>>Not at all, although you seem to insist on taking it that way.
> It's called professionalism. If I say "this == true" and you
> repeatedly question it you're basically suggesting that I'm not telling
> the truth or that what I have to say is suspect.
> I already answered the questions about LTM long ago in this thread yet
> you insisted on repeatedly questioning the behaviour of the code. That
> suggests to me that you don't think what I'm saying is true.

Wow, you continue to hallucinate.

>>I haven't once in this discussion commented on the quality of
>>your software. That is not germane to the point I was making,
>>which was that the application usage pattern is the dominant
>>factor in determining the storage requirement necessary to
>>guarantee sufficient capacity, and that a careful analysis of
>>the application algorithm(s) is required to determine a
>>reliable upper bound. Such an analysis might be relatively
>>easy or it might be difficult (in which case perhaps an
>>alternate algorithm more amenable to analysis could be
>>substituted), but it needs to be done.

> My algorithms are fully deterministic and have class specific
> behaviours. Profiling is all you need to do to figure out the memory
> requirements.

*Your* algorithms aren't at issue here. The algorithms
in which the original poster would *use* your package are.

> To suggest otherwise is to suggest that I'm lying about the nature of
> my work product and that I'm not competent to make such claims.
> I'd suggest that unless you know of a reason to call suspect to what
> I'm saying that you just leave it at that.

One can only speculate about why you are so defensive when
you're not even being attacked.

Douglas A. Gwyn

unread,
Mar 21, 2005, 6:35:01 PM3/21/05
to
CBFalconer wrote:
> ... such people as Doug Gwyn or

> anybody else who fails to take your word for it.

That is not an accurate characterization: Tom
simply missed my point and argues about something
irrelevant, so it doesn't matter whether I take
his word for what he is arguing or not, and I
haven't yet expressed any opinion about that.

Douglas A. Gwyn

unread,
Mar 21, 2005, 6:37:06 PM3/21/05
to
Joe Peschel wrote:
> I don't think Doug has written a big number library, ...

Actually I have, but it's not advertised.
Anyway, implementation of a bignum library is
not the issue I was addressing.

Douglas A. Gwyn

unread,
Mar 21, 2005, 6:43:25 PM3/21/05
to
Tom St Denis wrote:
> Sorry, I'm not saying Doug is a stupid person or unaccomplished. I'm
> just saying who the f!@# is he to sit here and question everything I
> say?

In this thread I haven't challenged your assertions
about your software, which may or may not be true.
Rather, I pointed out the need for an analysis of
the application's algorithm that would make use of
that software, and you keep misinterpreting that as
some sort of attack on your software package, which
is quite wrong.

If you maintain that there is no need to know how a
bignum library will be used in order to determine a
safe upper bound on storage requirements for the
application, then that is patently absurd and
*deserves* to be challenged.

Roger Willcocks

unread,
Mar 21, 2005, 7:23:33 PM3/21/05
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message
news:kYqdncLcZ-C...@comcast.com...

If every entry point into the library used a (known, limited) amount of
storage which was freed up again before returning to the application,
wouldn't the upper bound be completely predictable?

Of course, the actual usage would depend on which routines were actually
called by the application.

But if the library were to return handles to internally allocated persistent
structures, all bets would be off.

--
Roger


Tom St Denis

unread,
Mar 21, 2005, 9:44:47 PM3/21/05
to

Roger Willcocks wrote:
> "Douglas A. Gwyn" <DAG...@null.net> wrote in message
> news:kYqdncLcZ-C...@comcast.com...
> > Roger Willcocks wrote:
> > > His point is valid. You have not heard of the library taking excessive
> > > heap. That doesn't mean it will never do so. If you were asked to insure
> > > the user against heap overflow would you be prepared to do so? ...
> >
> > Actually my point was more that the answers to such questions
> > require knowledge/study/analysis of the application's *use*
> > of the bignum package, even if the bignum package itself has
> > very simple and predictable storage requirements.
>
> If every entry point into the library used a (known, limited) amount of
> storage which was freed up again before returning to the application,
> wouldn't the upper bound be completely predictable?

That's just it. This isn't an "RSA only" library. My exptmod function
has 2*6*4*4*2 = 384 configurations... that's

2 => 28 or 60-bit digits [out of the box detect at build time]
6 => 5 reduction modes (and montgomery comes in two flavours)
4 => 4 multiplication
4 => 4 squaring
2 => MP_LOW_MEM or default config

There is actually more than that, you can also tweak the division code
to use a normal digit or slower [smaller] bitlevel algorithm, etc...

So I can't tell you "exptmod will always take you xKB". Hell, I can't
even tell you "RSA-1024 will always take you xKB". Best I can do is
give you a rough ballpark and tell you to profile the damn code.

> Of course, the actual usage would depend on which routines were actually
> called by the application.

Which routines with what types [and sizes] of inputs along with the
build configuration.

> But if the library were to return handles to internally allocated persistent
> structures, all bets would be off.

Once you build the library and pass in the same type of inputs the
memory requirements should be roughly consistent. That's called
profiling.

Tom

Tom St Denis

unread,
Mar 21, 2005, 9:48:44 PM3/21/05
to

You're right that an upper bound should be "computable". The problem
is that until I'm told the exact config + types of operations + size/type
of operands, I can't tell you how much memory this will require.

There are hundreds of configurations a given operation [well things
like exptmod] can be performed in. As I've been trying to point out
LibTomMath isn't an RSA bignum lib. It's a bignum lib that is often
used for crypto.

So the solution is for the OP to build the library, run his program
through it and profile the memory usage.

Tom
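
[To make Tom's "build it, run it, profile it" advice concrete: below is a minimal sketch of peak-heap-tracking allocation wrappers. The prof_* names are illustrative, not part of LibTomMath; you would plug functions like these into whatever allocation hooks your bignum library's build exposes (LibTomMath routes its heap calls through replaceable macros).]

```c
#include <stdlib.h>
#include <string.h>

/* Running and peak byte counts across all live allocations. */
static size_t cur_bytes, peak_bytes;

void *prof_malloc(size_t n)
{
    /* Over-allocate by one size_t so each block remembers its size. */
    size_t *p = malloc(n + sizeof(size_t));
    if (p == NULL)
        return NULL;
    *p = n;
    cur_bytes += n;
    if (cur_bytes > peak_bytes)
        peak_bytes = cur_bytes;
    return p + 1;
}

void prof_free(void *p)
{
    if (p == NULL)
        return;
    size_t *q = (size_t *)p - 1;   /* recover the hidden size header */
    cur_bytes -= *q;
    free(q);
}

void *prof_realloc(void *p, size_t n)
{
    /* Simple strategy: allocate fresh, copy, release the old block. */
    void *q = prof_malloc(n);
    if (q != NULL && p != NULL) {
        size_t old = *((size_t *)p - 1);
        memcpy(q, p, old < n ? old : n);
        prof_free(p);
    }
    return q;
}

/* After a worst-case test run, this is your observed heap requirement. */
size_t prof_peak(void) { return peak_bytes; }
```

Note that, per Doug's objection, the number this reports is only as good as the test cases driving it: profiling gives you an observed maximum, not a proven upper bound.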

Douglas A. Gwyn

unread,
Mar 22, 2005, 12:01:48 AM3/22/05
to
Tom St Denis wrote:
> ... Best I can do is

> give you a rough ballpark and tell you to profile the damn code

Generally speaking that would be insufficient.
I refer you back to an earlier question which
you never addressed: how do you know that the
test cases used during that "profiling" cause
the worst-case memory usage to be attained?
(In many cases it wouldn't be.) Analysis is
required. Depending on the algorithm(s), the
analysis might not be very hard, but without
it you're just guessing, and there is no need
to guess.

Douglas A. Gwyn

unread,
Mar 21, 2005, 11:57:07 PM3/21/05
to
Roger Willcocks wrote:
> If every entry point into the library used a (known, limited) amount of
> storage which was freed up again before returning to the application,
> wouldn't the upper bound be completely predictable?

The storage in question is primarily that used to hold
the bignum values, which of course cannot be discarded
upon leaving each bignum function.

pet...@gocougs.wsu.edu

unread,
Mar 30, 2005, 8:36:01 PM3/30/05
to
pet...@gocougs.wsu.edu wrote:
> > > > So a rough ballpark figure is you need ~45 bignums during an exptmod
> > > > of a 1024-bit exponent. Each occupies 64 digits + header == 272 bytes *
> > > > 45 == 12KB
> > > >
> > > > So realistically to perform a 1024-bit exptmod out of the box with
> > > > LibTomMath you're looking at roughly [with typical malloc overhead,
> > > > etc...] 16KB of heap plus about 2KB of stack.
> >
> > > Aha! These are the kind of numbers I'm looking for. But how
> > > predictable is this?
> >
> > ExpMod is a memory deterministic algorithm (or it should be if done
> > right). There is an upper bound and a lower bound.
>
> Great. That's half of what I want to know. The other half is what
> that upper bound is. What I gathered from Tom's response is 16KB
> of heap and 2KB of stack. And now that I know it is deterministic, I
> am more comfortable in using the software.

Now that we've got LTM incorporated into our design, I thought I'd
update everyone with the memory use we are seeing. But before I give
the numbers, I thought I'd mention our "dynamic" memory manager (if you
want to call it that).

Basically, we set aside a large chunk of memory, about 1MB, to do the
testing. This is _way_ more than we needed, but for testing, it's fine.
Our memory manager basically allots the requested size on a word
boundary. It has an overhead of 4 bytes per allocation to store the
length of the request (to support realloc()). A call to free() does
nothing. Once allocated, it is allocated forever. A call to realloc()
always allocates the newly requested size, rather than just resizing
the requested array (we could be smarter if the new size is smaller to
save an alloc, but we aren't interested).

This is _very_ crude, but it works for us, and took about 20 mins to
write. We just require the static store to be reset every time we want
to start a string of math operations.
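
[The allocator Pete describes can be sketched as below. The pool_* names are hypothetical, not the actual product code; the sketch assumes a 4-byte word, matching the 4-byte length header he mentions.]

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define POOL_SIZE (1024u * 1024u)   /* ~1MB test pool, as described */

static unsigned char pool[POOL_SIZE];
static size_t pool_used;            /* bump pointer: bytes handed out */

/* Reset the static store before each string of math operations. */
void pool_reset(void) { pool_used = 0; }

void *pool_malloc(size_t n)
{
    size_t rounded = (n + 3u) & ~(size_t)3u;   /* word-align the payload */
    if (pool_used + 4u + rounded > POOL_SIZE)
        return NULL;                           /* pool exhausted */
    unsigned char *p = pool + pool_used;
    *(uint32_t *)(void *)p = (uint32_t)n;      /* 4-byte length header */
    pool_used += 4u + rounded;
    return p + 4;
}

/* free() does nothing: once allocated, allocated forever. */
void pool_free(void *p) { (void)p; }

/* realloc() always hands out a fresh block and copies the old data. */
void *pool_realloc(void *p, size_t n)
{
    void *q = pool_malloc(n);
    if (q != NULL && p != NULL) {
        uint32_t old = *(uint32_t *)(void *)((unsigned char *)p - 4);
        memcpy(q, p, old < n ? (size_t)old : n);
    }
    return q;
}
```

Since nothing is ever reclaimed, total consumption is simply the sum of all requests (plus the per-block overhead), which is what makes the 64k peak figure below directly observable.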

Our system uses 1024-bit DSA keys to verify signatures. The
verification is only used in certain, special circumstances, and is not
part of normal, everyday operation. So, we are not worried about
performance or memory use (other than the original requirement that it
be bounded).

So, now for the numbers. We ran a few sets of keys and signatures and
saw a maximum memory use of about 64k bytes. Now this is the amount
reported by the memory manager (with its overhead and crude realloc()).
We have lots of memory in our system (about 8MB), so 64k is just fine.

We still need to test with gibberish to make sure things don't blow up
(however, we always check that the keys have appropriate sizes), but so
far, things are looking great.

As for performance, I thought I'd just throw out some numbers.
Ignoring the SHA at the beginning, the DSA takes about 70 seconds on
our system. Now the system is a crappy 50MHz Microblaze based system,
with no cache. The Microblaze is pipelined, but the crappy performance
of the SDRAM controller on the OPB bus is our bottleneck. However,
this doesn't seem too bad considering.

Pete
