But:
If I write it in the EqWriter, press Enter, and EVAL, the result is:
1/e^(1000*ln(1000)-1000*ln(1001)), then ->NUM, and in a moment the result is 2.716923... in floating-point form.
Is there any way to set the CAS computational precision?
In my opinion, because available memory is the only limit for integers, the HP does not switch to floating-point form for big numbers, and this generates extremely slow computations in any case if I can't set the CAS's precision.
Try computing this example with X = 1000:
(1.+1./100.)^100.
or
(1.+1./1000.)^1000.
But: is there any way to set the CAS computational precision?
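The two floating-point expressions above are easy to check on any machine. A Python sketch (Python floats are IEEE binary doubles rather than the calculator's 12-digit decimals, so the last displayed digits can differ slightly):

```python
import math

# (1 + 1/n)^n approaches e from below as n grows
for n in (100, 1000):
    approx = (1.0 + 1.0 / n) ** n
    print(n, approx)   # n=1000 gives 2.716923..., matching the ->NUM result

print("e =", math.e)
```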
> Is there any way to set the CAS computational precision?
Yes, the computational precision of the CAS
is generally always EXACT.
Floating point computation is, however,
generally always approximate.
I think that's why the number interpretation modes
are coincidentally called "Exact" and "Approximate"
and it's not a great surprise that "Exact" computations
(including _exact_ rational fractions)
take longer than similar floating-point computations.
[r->] [OFF]
Hi!, George:
The HP50G has an internal precision of 15 digits (floating point),
limited only by memory for integers.
However, you can select only 11 digits with FIX.
Please see the speed comparison from ...
http://www.hpmuseum.org/cgi-sys/cgiwrap/hpmuseum/archv016.cgi?read=107169
Best Regards.
MACH.
> I tested this [infinite limit vs. specific long calculation]:
> '(1+1/X)^X' 'X=\oo' lim Answer: 'e' Time: 4 sec
> '(1+1/X)^X' 'X=100' lim time= 1 min 11 sec and the result is a ZINT fraction.
Why use "lim" for a direct calculation?
101 100 ^ 100 100 ^ / is of course faster (couple of seconds?)
The comparison is somewhat like two people flying
from New York to New Jersey -- the first flying East,
circling the entire world, and the other heading West,
taking a short helicopter flight :)
The second result is an EXACT rational fraction,
as CAS results are supposed to be, and since there are
201 digits in both numerator and denominator,
it's not surprising that this is a longer calculation,
carried out via 198 multiplications and one division
(plus lots of extra overhead when done your way),
than a relatively short "approximate" calculation performed
via a few floating point multiplication/division/addition,
one floating point logarithm,
and one floating point exponential function:
'(1.+1./100.)^100.' EVAL
What you really want to know is the first (infinite limit) answer,
and that, too, is a relatively fast calculation,
because it is performed using mathematical intelligence,
also giving an EXACT answer (in terms of the symbol 'e')
rather than by multiplying an infinite number of times :)
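The exact-fraction result John describes can be reproduced with any arbitrary-precision rational type; a sketch in Python, where the Fraction class stands in for the CAS's exact rationals (an analogy, not the HP's actual algorithm):

```python
from fractions import Fraction

exact = (1 + Fraction(1, 100)) ** 100    # (101/100)^100, kept exact
print(len(str(exact.numerator)))         # 201 digits in the numerator
print(len(str(exact.denominator)))       # 201 digits in the denominator

approx = (1.0 + 1.0 / 100) ** 100        # the fast floating-point route
print(approx)                            # near e, but only approximate
print(abs(approx - float(exact)))        # tiny rounding difference
```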
[r->] [OFF]
Hi John!
Of course it was a (not too good) example only.
I want to know: don't these semi-infinite big integers generate overly slow calculations?
The size of the integers is limited only by memory. And what will my HP do in low-memory conditions? Use less precision?
The size of the TI's integers is limited. So the TI's computing speed is faster, but it is not as accurate with integers.
I'm not saying the TI is better, because the HP is better still. :-) But I don't understand the "habit" of these "big numbers".
George
Thanks for the link.
George
limit((1+1/x)^x, x, 200) = (201/200)^200
TI-92 Plus:
(201/200)^200->bignum = 12 sec
dim(string(getNum(bignum))) = 461 characters
HP 50g:
201 200 / 200 ^ EVAL = 7 min 35 sec
And the numerator's size is 461 digits too.
I don't understand this unexplained slowness in this modern calculator. Can anybody help me?
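The 461-digit figure can be double-checked with plain big-integer arithmetic; a Python sketch (it confirms only the digit counts, not either calculator's timing):

```python
num = 201 ** 200                 # numerator of (201/200)^200
den = 200 ** 200                 # denominator
print(len(str(num)))             # 461 digits, matching the TI's measurement
print(len(str(den)))             # also 461 digits
```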
Hi!, George:
For the example ... 'lim((1+1/X)^X, X, 200)'
The result after one second is ... 'EXP(200*LN(1+1/200))'
or, in the HP50G Advanced Users Reference (AUR) ... Full Command and
Function Reference 3-131, from ...
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&taskId=120&prodTypeId=215348&prodSeriesId=3235173
Best Regards.
MACH.
Thanks for your nice links. With the EqWriter the result in symbolic form is really faster. But what causes this serious speed difference between the two calculators?
Best Regards.
George
Hi!, George:
You only need to change the flags to ...
01 Principal value (confirm, with F3 CHK)
03 Function -> num (confirm, with F3 CHK)
105 Exact mode on (confirm, with F3 CHK)
Best Regards.
MACH.
Thanks for your help.
George
> Don't these big integers generate overly slow calculations?
If we "erase the blackboard" and restart the discussion,
upon first hearing the adjective "big" being applied to numbers,
we might be thinking either:
(a) Big exponents, in floating point numbers.
(b) Big enough storage to keep _exact_ answers
to every mathematical operation, never approximating
and never "rounding off" -- through the use of potentially
unlimited size integers, then rational fractions of those
(with as large an integer numerator and integer denominator
as may be required), plus special symbolic values (e.g. '\pi' 'e') Etc.
As with any creative art, however, we can devise both less efficient
and more efficient ways to get the same end result,
with the difference reflecting how much of our own intelligence we use,
rather than to blame on tools which can only carry out what we tell them to do.
The HP48 series did its algebra using only floating point numbers,
but because of this, its algebra was also more limited,
its results less trustworthy near points of discontinuity,
and with fundamental limitations that could never be eliminated -- for example,
if you keep adding 1 to a value for a millennium using floating point,
the value 1E12 is as high as you can ever reach in these calculators,
after which any further attempt to continue counting
will be like a wheel spinning on ice, going nowhere.
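The "wheel spinning on ice" effect is easy to demonstrate. The HP's 12-digit decimal reals saturate at 1E12; IEEE binary doubles, used below as a stand-in, hit the analogous wall at 2^53:

```python
x = 2.0 ** 53            # 9007199254740992.0, the binary analogue of the HP's 1E12
print(x + 1.0 == x)      # True: adding 1 no longer changes the value
print(x + 2.0 == x)      # False: a step of 2 is still representable here
```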
Suppose you want to investigate RSA encryption;
how far can you get with floating point arithmetic?
Even if you rely on the 64-bit "binary integer" object type,
how far can that take you toward the same goal?
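The RSA remark can be made concrete: the whole scheme rests on exact big-integer arithmetic that floating point cannot provide. A toy sketch in Python, with deliberately tiny illustrative primes (real keys use integers hundreds of digits long):

```python
# Toy RSA with tiny primes (p and q are illustrative only, not secure)
p, q = 61, 53
n = p * q                        # public modulus, 3233
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # private exponent via modular inverse (Python 3.8+)

m = 42                           # message, must be smaller than n
c = pow(m, e, n)                 # encrypt: exact modular exponentiation
assert pow(c, d, n) == m         # decrypt recovers m exactly, no rounding anywhere
print("ciphertext:", c)
```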
Depending on what you want to accomplish,
you can use whatever tool best fits the job.
The inbuilt CAS is not a numerical tool,
but is a theoretical, symbolic tool, and as such,
it delivers results that have no element of approximation
or roundoff -- however, you are free
to go ahead and round off anything you like,
if your ultimate desire is a numeric answer,
as the command \->NUM is always ready to deliver.
You are even also free to do the opposite,
as is implemented in a single command: XQ (try it!)
> The size of the integers is limited only by memory.
> And what will my HP do in low-memory conditions? Use less precision?
Suppose you are taking an algebra exam, and run out of scratch paper?
How does that influence the results you should get?
> The size of the TI's integers is limited.
> So the TI's computing speed is faster, but not as accurate with integers.
The numerical system designed to be very fast, but to need
no more than so many significant digits' accuracy, yet still allowing
for "big numbers" in terms of exponents, is called "floating point,"
and most calculators have that, including TI, HP,
and even the "four-banger" I just bought for $1 --
the first such calculator I ever saw would have cost about $200,
but now we pay far less for almost everything we produce,
except for the tragic cost of fundamental human folly,
and the loss of the most fundamental, spiritual values
for which we even exist as material, though only approximate,
expressions of unbounded, pure consciousness.
-[ ]-
Hi!, George:
You can download the Library 902, from ... http://www.hpcalc.org/details.php?id=5363
This Library is for ...
Multiple precision real and complex library including trig and
hyperbolic functions. Interval arithmetic (precision tracking) for
real functions. Algebraics with interval numbers or units may be
automatically evaluated to user-defined precision. Now has basic
matrix functions and 49g+/50g support. Authors: Gjermund Skailand and
Thomas Rast
Best Regards.
MACH.
I'm sorry for my poor English (and other) knowledge, because my question was formulated loosely.
My question is technical only.
I have a TI-92 Plus, and I got an HP 50g last summer because the new TI Nspire is unappealing to me. I tested the HP and entered my "limit" formula. The answer was "e" in 4 seconds. The TI's CAS returned this "e" in less than 1 second. So I tried this formula with a numeric (integer) value (and began to fly to Jersey), and the speed difference was bigger.
In the TI's book it says:
"Integer values in memory are stored using up to 614 digits."
On the HP, the integers are limited only by memory. Because the HP (in this case only) was slower, I tried to find a setting like the numeric integral's FIX digits setting. (Yes, I know the numeric integral uses floating-point numbers.) I found nothing.
That is why I tried to ask this forum about it, or about the reason for this symptom. But it's very hard when somebody speaks English as poorly as I do.
Thank you for all your trouble, and excuse me for my importunity.
Best Regards: George Litauszky
Thanks, I'll try it. And its speed too.
George
The simple answer is that there is no (built-in) way to limit the
maximum number of digits of integers on the 50G. (I don't mean binary
integers of type 10 and their word length here, but rather the "zints"
of type 28.) So you either go for arbitrary precision/accuracy (and
arbitrarily long waits) with zints, or you just use floats.
Unfortunately (or fortunately?) there is no possible setting for a maximum
number of digits of zints, and that of course comes with all the related
advantages and disadvantages.
Cheers,
Nick
P.S.: Can you somehow set the max. size of integers on the TI-92 or is
it just fixed to 614, no matter how many of them you would really
need?
Thanks for your help.
"P.S.: Can you somehow set the max. size of integers on the TI-92 or is
it just fixed to 614, no matter how many of them you would really
need?"
No, I can't. But on the TI-92 I can't set the numeric integral's precision either. That is why I was thinking: maybe I missed something.
Thanks,
George
Well, one way or the other we will never have it all, it seems.
Nick
> The simple answer is that there is no (built-in) way to limit
> the maximum number of digits of integers on the 50G.
You can reduce the amount of free memory available, e.g. via FMEM
(which is actually designed to make "Garbage Collection" less "bumpy,"
but can also be used to reduce available memory for any other purpose).
"FMEM (a short UserRPL program for any HP48/49/50)"
http://groups.google.com/group/comp.sys.hp48/msg/abcffb34ee273a78
However, when you hit the limit, any attempt to produce larger integers
will simply get an Insufficient Memory error, so why ever do that?
If some TI calculator has a fixed smaller limit to the size of integers,
then it has a fixed limit to its range of computations of this type --
is that of some benefit to anyone?
Again, if what you need is floating point (approximate) computation
with a range of exponents of 10 from -499 to +499,
then use "real" numbers, rather than integers,
which always have the same size
and for which computations are very fast.
Or you can use "extended reals," which have 15 mantissa digits,
and exponents of 10 from -49999 to +49999 (this requires
using SysRPL entry points and doing your own object type checking).
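There is no direct desktop analogue of the 50g's SysRPL extended reals, but Python's decimal module can mimic the idea of choosing a mantissa length and a wide decimal exponent range. The numbers below mirror the post's 15 digits and ±49999 as an assumption for illustration, not a compatible format:

```python
from decimal import Decimal, getcontext

ctx = getcontext()
ctx.prec = 15            # 15 mantissa digits, like the extended reals
ctx.Emax = 49999         # widest allowed power-of-ten exponent
ctx.Emin = -49999

x = Decimal(1) / Decimal(3)
print(x)                      # 0.333333333333333 (15 digits)
print(Decimal(10) ** 49000)   # far beyond the 12-digit reals' 1E499 ceiling
```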
[r->] [OFF]
This doesn't control the max number of digits for a zint.
> If some TI calculator has a fixed smaller limit to the size of integers,
> then it has a fixed limit to its range of computations of this type --
> is that of some benefit to anyone?
For me definitely not, but it is none of my business to decide about
"benefits for others", Mr. Meyers.
> Again, if what you need is floating point (approximate) computation
>....
> using SysRPL entry points and doing your own object type checking).
This is your cup of tea. Use whatever you like the way you like it.
The question here was if there is a way to set up max bytes for a
zint, for whatever reason. The answer is simply "no", there is not.
Maybe perfect for you, but nobody has an obligation to be incorporated
into your methodology. Different people, different reasons.
Ciao,
Nick
Thanks for the answers from Mr. Meyers, MACH and Nick.
Best Regards
George
> I'm sorry once more for my laxity. I just arrived from the other side and I'm now here.
Nothing to be sorry about, George. Really nothing. You just asked a
question, and that was a pretty good one.
>I like my HP. It's an erudite but so difficult calculator and it seems I need a lot of years
>to discover it. I found a lot of operative informations on this forum and I hope it will be
> in the future too.
Well yes, the HP50 is perhaps also a bit of that.... much like reading a
book with a good storyline, but also a book that doesn't show you
everything right in the first few chapters. You start somewhere, and
while you keep going you gather many little pieces that altogether
assemble an (almost) coherent picture. Many things evolve nicely in
that picture, in a very interesting or even unexpectedly good way.
Other things evolve terribly too. ;-)
Still of course nobody should have an "a priori" obligation to like or
dislike any concept at all just because somebody else implies that it
is that and only that way. ;-)
Cheers!
Nick
>> You can reduce the amount of free memory available, e.g. via FMEM
>> ....
>> will simply get an Insufficient Memory error, so why ever do that?
> This doesn't control the max number of digits for a zint.
How can I control your personal re-interpretation
(or lack of seeing the actual meaning and truth) of what I write?
Since the size of a "zint" (integer object) is "limited only by
available memory," it's obvious that "limiting available memory"
_does_ control the maximum size of zints,
unless you have a new system of logic.
However, I also questioned why we would bother doing that.
It's not completely crazy to actually do that,
because sometimes we might like to have some way
to detect an "overflow," such as the built-in maximum exponent
for floating point (real) objects, at which point we can choose
whether to cause an error or leave the result at the value MAXR,
rather than let a program "run forever"
in a direction not producing a finite answer or a programmed stop,
which could be a legitimate interpretation of the stated desire.
Another, more direct way to do that, in any program,
is to apply the SIZE command to any zint,
which returns the number of digits it contains,
and then decide, in the program, when to stop further calculation,
or set the result to a chosen "maximum zint,"
or to do anything else which we deem to suit our purpose,
which no amount of denial on your part can prevent anyone from doing,
if they see that it's possible and suits their purpose.
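The SIZE-based overflow check described above can be sketched in Python, with len(str(...)) playing the role of SIZE on a zint. The 500-digit cap and the factorial loop are arbitrary choices for illustration:

```python
DIGIT_LIMIT = 500                 # arbitrary cap, like testing SIZE in a program

def guarded_factorial(n):
    """Multiply up to n!, but stop once the result exceeds the digit cap."""
    result = 1
    for k in range(2, n + 1):
        result *= k
        if len(str(result)) > DIGIT_LIMIT:   # the SIZE-style test
            raise OverflowError(f"{k}! exceeds {DIGIT_LIMIT} digits")
    return result

print(len(str(guarded_factorial(200))))      # 200! has 375 digits, under the cap
```

Calling guarded_factorial(300) would raise, since 300! runs to 615 digits.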
As an aside, this also seemed an appropriate moment
to mention "FMEM" again -- that's a program for deliberately
limiting available memory, which also "breaks up and smoothes out"
unpredictable longer "freezes" due to Garbage Collection,
when available memory is very much larger than we need,
if ever those "freezes" become disconcerting,
and it really works, even if some folks' initial reaction
to the program is to think it's crazy :)
>> If some TI calculator has a fixed smaller limit to the size of integers,
>> then it has a fixed limit to its range of computations of this type --
>> is that of some benefit to anyone?
> For me definitely not, but it is none of my business to decide about
> "benefits for others", Mr. Meyers.
The first part of the above sentence is again a matter of
straightforward logic, and the next part, which might be
considered as the "Socratic method" of exploring knowledge,
is the asking of questions to provoke thought, or even answers,
even if the range of your own thought comes to a dead end over it.
>> Again, if what you need is floating point (approximate) computation
>> ....
>> using SysRPL entry points and doing your own object type checking).
> This is your cup of tea. Use whatever you like the way you like it.
That's exactly what I was pointing out -- many possibilities,
which can be used in any way that suits anyone,
and thank you for "seconding" that idea.
> The question here was if there is a way to set up max bytes for a
> zint, for whatever reason. The answer is simply "no", there is not.
Other than at least two ways already demonstrated,
and any more that others might devise, such as Andreas Möller,
who has even described how to replace built-in ROM functions,
which could again replace some zint-processing functions
with others that might automatically set a limit,
and act according to flag settings at that limit, etc.,
much as is done for real numbers at the absolute value MAXR.
> Maybe perfect for you, but nobody has an obligation to be incorporated
> into your methodology. Different people, different reasons.
I don't know what this means,
but seeing it reminded me of these quotes:
"I believe in intuition and inspiration.
Imagination is more important than knowledge.
For knowledge is limited, whereas imagination
embraces the entire world, stimulating progress,
giving birth to evolution. It is, strictly speaking,
a real factor in scientific research."
http://en.wikiquote.org/wiki/Albert_Einstein
"Whoever undertakes to set himself up as a judge of Truth and Knowledge
is shipwrecked by the laughter of the gods." [Einstein]
[r->] [OFF]
> >> You can reduce the amount of free memory available, e.g. via FMEM
> >> ....
> >> will simply get an Insufficient Memory error, so why ever do that?
> > This doesn't control the max number of digits for a zint.
>
> How can I control your personal re-interpretation
> (or lack of seeing the actual meaning and truth) of what I write?
By making clear what you mean without writing epic novels.
> Since the size of a "zint" (integer object) is "limited only by
> available memory," it's obvious that "limiting available memory"
> _does_ control the maximum size of zints,
> unless you have a new system of logic.
Setting the total amount of RAM for all objects is not the same as
setting the number of digits for each single zint. With 10 bytes of
total RAM you can have one zint with ten bytes, or two with five, and
so on.
> However, I also questioned why we would bother doing that.
Because "we" do not necessarily have your kind of studies, etc.
Or even just for fun.
> It's not completely crazy to actually do that,
>.......
> which could be a legitimate interpretation of the stated desire.
You don't define what is "crazy" or "legitimate" for others.
> Another, more direct way to do that, in any program,
>......
> to the program is to think it's crazy :)
And thus there can't be any subjects of investigation at all which
require a non-arbitrary allocation of digits for integers. If
you say so...
> >> If some TI calculator has a fixed smaller limit to the size of integers,
> >> then it has a fixed limit to its range of computations of this type --
> >> is that of some benefit to anyone?
> > For me definitely not, but it is none of my business to decide about
> > "benefits for others", Mr. Meyers.
>
> The first part of the above sentence is again a matter of
.....
> even if the range of your own thought comes to a dead end over it.
Are you my superintendent or are you my analyst?
> >> Again, if what you need is floating point (approximate) computation
> >> ....
> >> using SysRPL entry points and doing your own object type checking).
> > This is your cup of tea. Use whatever you like the way you like it.
>
> That's exactly what I was pointing out -- many possibilities,
> which can be used in any way that suits anyone,
> and thank you for "seconding" that idea.
No, you sell "better oranges" where somebody asks for "just good
grapefruit".
> > The question here was if there is a way to set up max bytes for a
> > zint, for whatever reason. The answer is simply "no", there is not.
> Other than at least two ways already demonstrated,
> and any more that others might devise, such as Andreas Möller,
> who has even described how to replace built-in ROM functions,
>.....
> much as is done for real numbers at the absolute value MAXR.
Point taken. Möller's ideas could indeed be followed for
implementation of this specific requirement.
Still, it is not especially attractive for somebody who has their own work to
do, or who is just a newbie.
> > Maybe perfect for you, but nobody has an obligation to be incorporated
> > into your methodology. Different people, different reasons.
>
> I don't know what this means,
Only what it says.
> but seeing it reminded me of these quotes:
>......
> is shipwrecked by the laughter of the gods." [Einstein]
"How prettily the bitch Sensuality knows how to beg for a piece of spirit,
when a piece of flesh is denied her!"
Friedrich Wilhelm Nietzsche
Thank you very much, Veli-Pekka!
And yes, indeed. These are the specific results for the HP50g under
its own kind of coding and memory usage for integers.
But in general, whichever way the coding of an integer (or any other
object) is carried out, be it more or less economic in its own ways,
and whatever the control over the total amount of available memory is,
it doesn't really control the max amount of digits for each single
integer (or any other object as well). In the case of integers it
would only affect the max memory available (and thus also the max
number of digits in some way) for the one greatest possible integer,
and that is not the same as the max number of digits of each
constituent in total memory. We are talking here not about the max
number of digits of one (the largest possible) of the constituents as
subject to max total memory allocation, I think.
To use one of Mr. Meyers's really "lovely" metaphors, it is like
having a great basket carrying eggs and expecting a uniform max size
for each and every egg by defining the max size... of the basket. ;-)
It is an interesting partitioning problem, by the way.
Not to speak of affecting the max size of any other object as well,
where one would (perhaps) just like to limit integers and nothing else
to a max allocation size of, say, 100 bytes, no matter the number of
digits possible with those 100 bytes. Imagine what that would mean for
your programs, which under these conditions would never be longer than
100 bytes either, when all you wanted was to limit integers to a max
size of 100 bytes.
So, we perhaps should just accept that the HP50 has its own internal ways to
manage an arbitrary memory allocation for each and every integer (and
any other object) up to memory limitations, and we simply and
sincerely say that it was not designed for such an "absolute" control
over the bits and bytes of each and every object taking part in our
investigations. To me personally, that is the better way to go, since it
frees me of the worry of allocated memory "being enough, or
perhaps too much, or perhaps too little" for each and every kind of
thing around, but I must also see that there can be different needs by
other people.
Ciao, Veli-Pekka!