The following program is my attempt to calculate the formula at the
following url...

http://i23.photobucket.com/albums/b363/CompressorX/1.jpg
#include <stdio.h>
#include <stdlib.h>

unsigned long fact(unsigned long);
unsigned long sum_fact(unsigned long);
float harmonic(unsigned long);

unsigned long fact(unsigned long n)
{
    unsigned long acc = 1;
    unsigned long i;

    for (i = 1; i <= n; i++)
        acc *= i;

    return acc;
}

unsigned long sum_fact(unsigned long n)
{
    unsigned long acc = 0;
    unsigned long i;

    for (i = 1; i <= n; i++) {
        acc += fact(i);
    }

    return acc;
}

float harmonic(unsigned long n)
{
    float acc = 0;
    float i;

    for (i = 1; i <= n; i++) {
        acc += ((100) * 1 / i);
    }

    return acc;
}

int main(int argc, char **argv)
{
    unsigned long s;

    if (argc != 2) {
        fprintf(stderr, "Invalid number of args\n");
        return 1;
    }

    s = atoi(argv[1]);

    printf("The final value is: %f\n", sum_fact(s) / (harmonic(s) / 100));

    return 0;
}
The problem is that the code seems to break around n=6 or n=7. How
would I go about calculating stuff up to say the number 100.
You say you're interested in values of n up to "say the
number 100." Are you aware that 100! is approximately 1E158?
You'll need an `unsigned long' of about 525 bits or so to
get that high ...
> unsigned long sum_fact(unsigned long n)
> {
> unsigned long acc = 0;
> unsigned long i;
>
> for (i = 1; i<= n; i++) {
> acc += fact(i);
> }
>
> return acc;
> }
... and a few more bits by the time you've added a bunch
of those enormous quantities together ...
> [...]
> The problem is that the code seems to break around n=6 or n=7. How
> would I go about calculating stuff up to say the number 100.
I haven't studied the code to find why it "breaks," because
the fundamental approach simply isn't going to work with the
native types on today's machines. Find a different formula, or
find a multiple-precision arithmetic package.
--
Eric Sosman
eso...@ieee-dot-org.invalid
What other formula do you have in mind?
Chad
> unsigned long fact(unsigned long);
> unsigned long sum_fact(unsigned long);
> float harmonic(unsigned long);
Why not use float (or, better, double) for fact() and sum_fact() too? Then
you won't get the overflow problems so early.
--
Bartc
Using a float as a loop index is not a good idea.
> return acc;
> }
>
> int main(int argc, char **argv)
> {
> unsigned long s;
>
> if (argc != 2) {
> fprintf(stderr, "Invalid number of args\n");
> return 1;
> }
>
> s = atoi(argv[1]);
>
> printf("The final value is: %f\n", sum_fact(s)/(harmonic(s)/100));
>
> return 0;
> }
>
>
> The problem is that the code seems to break around n=6 or n=7. How
> would I go about calculating stuff up to say the number 100.
What accuracy do you need? S(100) is exactly
262898806796746281676079930191080203927243930357368966202376910097033877607478900552047729559394146124839310937700464163196753332648214687830620489578849276792933475322073718652663784818718016211136 / 14466636279520351160221518043104131447711
(that was a Haskell program) or about
18172766752207502666566054742592774333334408288875508533667627526462215026676578187355017535365067611072187901276460031983419028727944397925589152640251401379.88848916234112956007944544952276803589241196007806492632486867816494303869269492189114953628059466014191088986813481278364231972207711080807475859000106626980248937302365258665295763512500573893636009
but you can get a result of 1.81727667522075e+157 in plain C by using
double precision and being careful about how you do the sum. If you
need more accurate results you will have to use some sort of extended
precision float or bignum library.
--
Ben.
Okay, I don't get how using float (or double) for fact() and
sum_fact() would prevent overflow problems early on.
What's the alternative?
>
>
> > return acc;
> > }
>
> > int main(int argc, char **argv)
> > {
> > unsigned long s;
>
> > if (argc != 2) {
> > fprintf(stderr, "Invalid number of args\n");
> > return 1;
> > }
>
> > s = atoi(argv[1]);
>
> > printf("The final value is: %f\n", sum_fact(s)/(harmonic(s)/100));
>
> > return 0;
> > }
>
> > The problem is that the code seems to break around n=6 or n=7. How
> > would I go about calculating stuff up to say the number 100.
>
> What accuracy do you need? S(100) is exactly
>
Maybe around S(20).
> The following program is my attempt to calculate the formula at the
> following url...
>
> http://i23.photobucket.com/albums/b363/CompressorX/1.jpg
> The problem is that the code seems to break around n=6 or n=7. How
> would I go about calculating stuff up to say the number 100.
If you don't need an exact answer, just use "double" throughout; for
n=100, that gives a result which is accurate to 14 decimal digits. If you
need an exact answer, use a multi-precision arithmetic library which
supports rational arithmetic, e.g. GMP.
Or use a high-level language with built-in bignum support, e.g. Python,
Haskell.
Pretty much anything (except double)! What type are you comparing it with?
--
Ian Collins
If your 'unsigned long' is, say, 32-bits, then the largest factorial it can
store is 12!
13! would require about 33 bits I think, and if you try and do it using 32
bits, it will give funny results.
Summing the factorials has similar problems; integers just have too
narrow a range.
Floats have a much wider range: typically a float might go up to around
1e38, and a double up to about 1e300. But a double only gives some 16
digits of precision.
Simplify your program to *only* show factorials from 1 to 100 (forget
command line input, just use a loop), and you will
see where it goes wrong. Then put in doubles, and see the difference.
--
Bartc
> On Apr 20, 3:38 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> Chad <cdal...@gmail.com> writes:
<snip>
>> > float acc = 0;
>> > float i;
>>
>> > for (i = 1; i <= n; i++) {
>> > acc += ((100) *1/i);
>> > }
>>
>> Using a float as a loop index is not a good idea.
>
> What's the alternative?
An integer type is preferable. I don't think there is anything actually
wrong with the above as far as C is concerned (because of the guarantees
C makes about floating point arithmetic) but you'll make everyone read
the code several times just to be sure.
<snip>
>> What accuracy do you need? S(100) is exactly
>>
>
> Maybe around S(20).
That's not an accuracy. Do you need to know what S(20) is exactly
(39750532290178371794884752 / 55835135) or will an approximation do? If
so, what accuracy do you need? Would 7.12e17 be good enough or do you
need 7.119268591394715e17 or better?
--
Ben.
How many numbers would 1e300 produce? What can I say, my math isn't up
to par.
Okay, I misread that. 7.12e17 is good enough.
How many numbers?
1e300 is one number.
It has 301 decimal digits, if that's what you mean: a 1 followed by 300
0s. And you'd need 997 bits to represent it exactly.
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
> On Apr 20, 4:05 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
<snip>
>> Do you need to know what S(20) is exactly
>> (39750532290178371794884752 / 55835135) or will an approximation do? If
>> so, what accuracy do you need? Would 7.12e17 be good enough or do you
>> need 7.119268591394715e17 or better?
>
> Okay, I misread that. 7.12e17 is good enough.
Then just use double to calculate the factorial and factorial sum and
you should be fine. Even using float for the harmonic sum will give you
more than three significant figures but I'd use double for that as
well. There is no need to be careful about the order in which you sum
the terms.
However, if all you need it S(1) to S(20) it would make sense to
pre-calculate these values and put them in a table.
--
Ben.
Using float gives very little precision and dynamic range.
Probably, you want a toolkit if you are determined to write it yourself,
but a math program would be better. For instance, you could download
pari/gp and calculate things like this with ease.
Well, I make S(100) to be about 1e157, so it will fit comfortably
into 1e300.
S(100) is (assuming I didn't screw up)
> (S 100)
26289880679674628167607993019108020392724393035736896620237691009703387760747890
05520477295593941461248393109377004641631967533326482146878306204895788492767929
33475322073718652663784818718016211136/14466636279520351160221518043104131447711
This was written in Scheme (Gambit Scheme). Note that the two posts that
have given exact answers switched to a different language. Out-of-the-box
C isn't good for this sort of problem. As someone else mentioned, you
need a multiple-precision arithmetic package. Google for it.
The lcc-win compiler features 105-digit precision floating point. The
dynamic range is +-10^10000.
>> Floats have a much wider range: typically a float might go up to
>> around 1e38, and a double up to about 1e300. But a double only gives
>> some 16 digits of precision.
> How many numbers would 1e300 produce? What can I say, my math isn't up
> to par.
My maths isn't quite up to it either.
But, modifying your code to use double as necessary, and just making it
loop over a set of numbers, on my machine (double=64-bit) it clearly
starts going wrong after N=170.
#include <stdio.h>
#include <stdlib.h>

double fact(unsigned long);
double sum_fact(unsigned long);
double harmonic(unsigned long);

double fact(unsigned long n)
{
    double acc = 1.0;
    unsigned long i;

    for (i = 1; i <= n; i++)
        acc *= i;

    return acc;
}

double sum_fact(unsigned long n)
{
    double acc = 0;
    unsigned long i;

    for (i = 1; i <= n; i++) {
        acc += fact(i);
    }

    return acc;
}

double harmonic(unsigned long n)
{
    double acc = 0;
    unsigned long i;

    for (i = 1; i <= n; i++) {
        acc += 1.0 / i;
    }

    return acc;
}

/* int main(int argc, char **argv) */
int main(void)
{
    unsigned long s;

/*
    if (argc != 2) {
        fprintf(stderr, "Invalid number of args\n");
        return 1;
    }

    s = atoi(argv[1]);
*/

    for (s = 1; s <= 200; ++s)
/*  for (s = 100; s <= 100; ++s) */
        printf("S(%lu) = %g\n", s, sum_fact(s) / harmonic(s));

    return 0;
}
--
Bartc
I've tried it, and it sort of works. (Although some of the conversions
between qfloat (this extended type) and ordinary ints for example are
fiddly:
ltoq(&i,&qi);
acc += 1.0q/qi;
)
Using qfloats (and any necessary alterations), the function s() seems to
converge to about 1.5e9863 (luckily just within the limits of qfloat).
However, in the neighbourhood of s(5000), the calculation
sum_fact()/harmonic() (see OP) was taking several seconds each, so I
could only do a few spot checks.
(Qfloats seem to be about 20 times slower than doubles.)
--
Bartc
None. What are you trying to calculate? Can you calculate
its logarithm, say, instead of the quantity itself? (That's how
I got to the ">525-bit" estimate, for example: by calculating
lgamma(101) instead of attempting 100 factorial.)
--
Eric Sosman
eso...@ieee-dot-org.invalid
> [ Using a float as a loop index is not a good idea. ]
I emphatically agree.
> An integer type is preferable. I don't think there is anything actually
> wrong with the above as far as C is concerned (because of the guarantees
> C makes about floating point arithmetic) but you'll make everyone read
> the code several times just to be sure.
Considering that the condition (f == f+1) is true for (relatively) small
values of f, I would argue that using a float variable as a loop index
is a disaster waiting to happen.
#include <stdio.h>

void foo(unsigned long n)
{
    float f;

    for (f = 0; f < n; ++f)
    {
        float g = f + 1;
        if (f == g) break;
    }

    printf("f=%f n=%lu\n", f, n);
}

int main(void)
{
    foo(10*1000*1000);
    foo(20*1000*1000);
    return 0;
}
$ gcc -Wall -Wextra tutu.c
$ ./a.out
f=10000000.000000 n=10000000
f=16777216.000000 n=20000000
Regards.
How do you figure that you need 997 bits to represent 301 decimal
digits exactly?
The base 2 log of 1e300 is approximately 996.58.
To put it another way, 10**300 is approximately 1.49 * 2**996 (where
"**" denotes exponentiation).
<off topic>
I've noticed I will sometimes forget things that I learned 6 or even 9
months ago because the material never quite *sunk in* the first couple
of times around.
</off topic>
I guess the answer would be I don't know because this was just
something I did as a programming exercise.
I think I'm missing the broader concept here. Given the following....
#include <stdio.h>
#include <stdlib.h>

double fact(double n)
{
    double acc = 1.0;
    unsigned long i;

    for (i = 1; i <= n; i++)
        acc *= i;

    return acc;
}

int main(void)
{
    int i;

    for (i = 0; i < 150; i++) {
        printf("%d! = %.4e\n", i, fact(i));
    }

    return 0;
}
How can my 32-bit computer calculate numbers like 100! if it can only
store 32 bits?
Floating-point gives up some precision to get greater range.
The larger the number, the less precisely it can represent it.
Consider a small, simplified version of floating point where a
number can be represented as:
X.XXXeXX
That's 4 digits of precision and 2 digits of exponent (decimal,
not binary, just for this example) (and I'm ignoring signs).
With a 6-digit integer, you can only represent numbers up to 999999.
With this simplifed floating-point representation, you can represent
much bigger numbers, up to 9.999e99 -- but you can't represent 999999
exactly. The nearest representable values are 9.999e05 and 1.000e06.
Picture the real number line. Mathematically, it's continuous and
infinitely long. Numbers representable in an integer type are spaced
evenly, one unit apart, over a finite range. Numbers representable
in a floating-point type are spaced unevenly, more tightly near
zero and more widely far from zero.
By the way, these questions are mostly not about C, but about computer
numeric representations in general. An appropriate Google search
or a decent book will probably give you more and better information.
The classic paper on the topic is David Goldberg's "What Every Computer
Scientist Should Know About Floating-Point Arithmetic". It might be too
advanced for your current level of knowledge, but you should take a look
and decide for yourself.
The trick is that it's not storing 100! in 32 bits---it's storing a
number *like* 100! for some definition of "like". Floating point
formats split up the bits into mantissa and exponent (and sign, but
let's leave that out for now). It's easy to see if you think of the
"scientific notation", which does the same with base-10. Imagine you
have two decimal digits for the mantissa and two for the exponent, so
99000 would be represented as 9.9E+04. You can count by one to a
hundred, but once you hit 100, you start counting by 10, since you
don't have a third digit in your mantissa (e.g. 1.0E+2, 1.1E+2, 1.2E+2
are 100, 110, 120). This skims right past the sign, biasing of the
exponent, etc., but should give you the general idea. Once you hit
1000, you start counting by 100's. With this scheme, you could
represent huge numbers (like googol), using just those four decimal
digits. You just lose detail with larger numbers. Now imagine the
same thing using e.g. 11 digits of exponent, 20 digits of mantissa,
and all the digits being binary, and you have the idea. Below a
certain threshold you can represent every integer; above it, you
represent some of them, getting lower "density" as the magnitude
increases. It's possible that you get 100! exactly with a 32-bit
floating point representation, but if so you won't get 100! + 1.
That's why people were asking how accurate the result had to be.
Does that make it clear?
I tried to read David Goldberg's "What Every Computer Scientist Should
Know About Floating-Point Arithmetic". I was totally lost after the
first paragraph.
How can *you* express a hundred-digit number if you can
store only five decimal digits?
Da da da da da da daaah,
Da da da da daah, da-da-da-da-da,
Da da da da da da daaah,
Da! da-da da da da-da.
Da da da da da da daaah,
Da da da da daah, da-da-da-da-da,
Da da da da da da daaah,
Da! da-da da da ... da ... da.
Hint: What does 123e97 mean in a C source module?
Extra credit: When you use your five digits this way to
get an enormous range, what do you sacrifice by comparison to
using them in the "ordinary" way? There ain't no such thing
as a free lunch, after all.
--
Eric Sosman
eso...@ieee-dot-org.invalid
>> Simplify your program to *only* show factorials from 1 to 100 (forget
>> command line input, just use a loop), and you will
>> see where it goes wrong. Then put in doubles, and see the difference.
> double fact(double n)
Actually, n can be an int (since it only needs to store fairly small,
whole numbers).
> {
> double acc = 1.0;
> unsigned long i;
>
> for (i = 1; i <= n; i++)
A floating point n *might* cause trouble in some cases (but OK in this case,
as n will be a whole number).
> How can my 32 bit computer computer calculate numbers like 100! if it
> can only store 32 bits.
Floating point numbers are like that. Magic. They sacrifice some of the
32-bits to form an exponent, to increase the range, in the same way my
12-digit Casio calculator uses 2-digits for the exponent, leaving 10 for the
significant figures.
Anyway 'double' is likely 64-bits. Change double to float (usually 32-bits),
and the program will go wrong after 34!, as 32-bit floats can only represent
up to about 1e38.
As it is, it will go wrong after 170! (change the 150 to 180), as it hits
the top limit around 1e306.
--
Bartc
You can't use a native data type all by itself. You need an array of
objects.
There are kits that have been built to help you with this.
Here are some examples:
http://wapedia.mobi/en/Arbitrary-precision_arithmetic?p=1
Not if you don't need complete accuracy (which the OP apparently
doesn't).
On the other hand, 32-bit float typically doesn't have the range to
represent 100!, though 64-bit double typically does. 100! is about
2**525, which means you need slightly more than 10 bits of exponent.
[snip]
Can you give me another 4 years before I can answer that? It's like I
was having similar issues 4 years ago...