Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

Fibonacci series recursion error

213 views

lalit

Apr 29, 2011, 23:22:48
import os
def fib(n):
    if n == 1:
        return(n)
    else:
        return (fib(n-1)+fib(n-2))

list=fib(20)
print(list)

The above function fails at the line
return (fib(n-1)+fib(n-2))
with:


RuntimeError: maximum recursion depth exceeded in comparison
[36355 refs]

Can anyone help?

Gary Herron

Apr 29, 2011, 23:41:52
to pytho...@python.org

You correctly test for n==1, but what about when n==2? When the
recursion works its way down to fib(2), you call both fib(1) and fib(0),
but the latter starts an infinite sequence of calls to fib(-1), fib(-2)
and so on without end.
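
A minimal corrected version of the posted function, following the fix
described above (an illustrative sketch, not code from the thread):

def fib(n):
    # base cases cover both n == 1 and n == 2, so the recursion
    # can never fall through to fib(0), fib(-1), ...
    if n <= 2:
        return 1
    return fib(n-1) + fib(n-2)

print(fib(20))  # 6765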

Gary Herron

Jason Friedman

Apr 29, 2011, 23:57:37
to pytho...@python.org

The first call to fib() recursively calls fib() twice. Each of those
will call fib() twice. Each of those will call fib() twice. Pretty
soon, you've got a lot of calls.

Have a look at: http://en.literateprograms.org/Fibonacci_numbers_(Python).
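
To put a number on "a lot of calls", the calls can be counted directly
(a sketch, assuming the base case is fixed to cover n == 2; the global
counter is illustrative):

calls = 0

def fib(n):
    global calls
    calls += 1
    if n <= 2:
        return 1
    return fib(n-1) + fib(n-2)

fib(20)
print(calls)  # 13529, i.e. 2*fib(20) - 1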

harrismh777

Apr 30, 2011, 00:45:30
lalit wrote:
> The above function return the
> return (fib(n-1)+fib(n-2))
>
> RuntimeError: maximum recursion depth exceeded in comparison
> [36355 refs]

There is much debate about this generally, but general wisdom is that
recursion is to be avoided when possible. Another way to say this is,
"Only use recursion when there is no other obvious way to handle the
problem".
Recursion is very tempting to young artists because it's a ~cool trick,
and because sometimes it requires very little coding (although huge
amounts of memory!) or, as in your case, recursion depth errors.
Anyway, the better way to build a Fibonacci sequence generator is the
following... I have expanded things a bit so that someone not knowing
what the sequence is can see what is happening... you will notice simple
'for' iterations, and no recursion:
===============begin======================
def fib(i=1):
    l=[]
    p=0
    a=1
    n=p+a
    for j in range(1,i+1):
        l.append(a)
        p=a
        a=n
        n=p+a
    for j in l:
        print(j, end=' ')

fib(7)
=======================end======================


kind regards,

m harris

harrismh777

Apr 30, 2011, 00:51:09
===============begin======================
def fib(i=1):
    l=[]
    p=0
    a=1
    n=p+a
    for j in range(1,i+1):
        l.append(a)
        p=a
        a=n
        n=p+a
    return l

list=fib(7)
=======================end======================


... the above, if you want to return the list, not print...

Ian Kelly

Apr 30, 2011, 01:27:14
to Python
On Fri, Apr 29, 2011 at 9:57 PM, Jason Friedman <ja...@powerpull.net> wrote:
> The first call to fib() recursively calls fib() twice.  Each of those
> will call fib() twice.  Each of those will call fib() twice.  Pretty
> soon, you've got a lot of calls.

Which is hell for the running time, but doesn't answer the question of
why the maximum recursion depth is exceeded, since the fact is that if
the function were properly coded, the call stack for fib(20) would
never be more than 20 entries deep at any one time.

The actual problem, as Gary pointed out, is that the base case is incomplete.
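
That claim is easy to check by instrumenting a corrected fib with a
depth counter (a sketch; the globals are illustrative):

depth = max_depth = 0

def fib(n):
    global depth, max_depth
    depth += 1
    max_depth = max(max_depth, depth)
    result = 1 if n <= 2 else fib(n-1) + fib(n-2)
    depth -= 1
    return result

fib(20)
print(max_depth)  # 19 -- the stack never gets deeper than about n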

harrismh777

da leggere,
30 apr 2011, 01:31:5130/04/11
a
def fib(i=1):
    a=1;n=1;l=[]
    for j in range(0,i):
        l.append(a)
        p=a;a=n;n=p+a
    return l

list=fib(7)

... and the above, is how I would actually code it....


kind regards,
m harris

harrismh777

Apr 30, 2011, 01:35:22
Ian Kelly wrote:
> since the fact is that if
> the function were properly coded, the call stack for fib(20) would
> never be more than 20 entries deep at any one time.
>

Not so much... and much more !....


... because each recursion level 'return' calls fib() twice, and each
of those calls fib() twice, and you get the point...


(not to mention, it's not properly coded)

Hans Georg Schaathun

Apr 30, 2011, 01:50:45
On Fri, 29 Apr 2011 23:45:30 -0500, harrismh777
<harri...@charter.net> wrote:
: There is much debate about this generally, but general wisdom is that
: recursion is to be avoided when possible.

That is context dependent at best. You have given reasons to avoid
recursion in /executable code/, but that's a compiler issue. You
have only given reason /for/ recursion in source code. It generally
gives short and very readable code. In almost every non-trivial
software project, the programmers will be more overworked than the
computer, and therefore they are the ones to consider when optimising.

: Recursion is very tempting to young artists because it's a ~cool trick,
: and because sometimes it requires very little coding (although huge
: amounts of memory!) or, as in your case, recursion depth errors.

Waste of memory happens only with some types of recursion, and even
then it is usually negligible. The recursion depth issue is the
result of a flawed base case, and has nothing to do with a weakness of
recursion.

: Anyway, the better way to build a Fibonacci sequence generator is the
: following... I have expanded things a bit so that someone not knowing
: what the sequence is can see what is happening... you will notice simple
: 'for' iterations, and no recursion:

And surprisingly difficult to read for such a well-known operation as
Fibonacci numbers. If you want to promote iteration, you had better
at least try to make it legible.

Your code is obviously more efficient in being O(n) whereas OP had
(I think) O(2^n), but that's not a property of iteration. You can
make a recursive implementation which is O(n). Any undergraduate
textbook teaching recursion in any depth is likely to give it as an
example; see e.g. Simon Thompson's Haskell book.

--
:-- Hans Georg

Paul Rudin

Apr 30, 2011, 01:43:42
harrismh777 <harri...@charter.net> writes:

> lalit wrote:
>> The above function return the
>> return (fib(n-1)+fib(n-2))
>>
>> RuntimeError: maximum recursion depth exceeded in comparison
>> [36355 refs]
>
> There is much debate about this generally, but general wisdom is that
> recursion is to be avoided when possible. Another way to say this is,
> "Only use recursion when there is no other obvious way to handle the
> problem".
> Recursion is very tempting to young artists because it's a ~cool trick,
> and because sometimes it requires very little coding (although huge
> amounts of memory!),


Writing recursive code is acceptable and is a nice clear way of
expressing things when you have naturally recursive data structures, and
can lead to perfectly good compiled code. The problem in CPython is the
lack of tail optimization, so it's not a good idea for Python. Some
language standards guarantee tail optimization...

Peter Otten

Apr 30, 2011, 02:14:56
to pytho...@python.org
harrismh777 wrote:

> def fib(i=1):
>     a=1;n=1;l=[]
>     for j in range(0,i):
>         l.append(a)
>         p=a;a=n;n=p+a

Hm, did you run out of newlines?

>     return l
>
> list=fib(7)
>
> ... and the above, is how I would actually code it....

Nah, that can't be it ;)

For the record, the one true way to implement the Fibonacci series in Python
is

>>> def fib():
...     a = b = 1
...     while True:
...         yield a
...         a, b = b, a+b # look ma, no temporary variable
...
>>> from itertools import islice
>>> list(islice(fib(), 20))
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584,
4181, 6765]


Peter Otten

Apr 30, 2011, 02:32:55
harrismh777 wrote:

> Ian Kelly wrote:
>> since the fact is that if
>> the function were properly coded, the call stack for fib(20) would
>> never be more than 20 entries deep at any one time.
>>
>
> Not so much... and much more !....
>
>
> ... because each recursion level 'return' calls fib() twice, and each
> of those calls fib() twice, and you get the point...

I don't understand what you are trying to say -- but it's wrong ;)

Chris Angelico

Apr 30, 2011, 02:37:33
to pytho...@python.org
On Sat, Apr 30, 2011 at 4:32 PM, Peter Otten <__pet...@web.de> wrote:
>> ...  because each recursion level 'return' calls fib() twice, and each
>> of those calls fib() twice, and you get the point...
>
> I don't understand what you are trying to say -- but it's wrong ;)

Fortunately, most Python interpreters will not implement
double-tail-recursion as forking.

Chris Angelico

Steven D'Aprano

Apr 30, 2011, 03:18:15


I don't know what M Harris thinks he is trying to say either, but the
*naive* Fibonacci recursive algorithm performs particularly badly
and should be avoided. It's ironic that of the two classic algorithms
used for teaching beginner programmers about recursion, neither is useful
in practice.

Except for fib(0) and fib(1), each call to fib() results in multiple
recursive calls. E.g. calling fib(4) results in the following sequence of
calls:

fib(4) # first call
=> fib(3) + fib(2) # three calls
=> fib(2) + fib(1) + fib(1) + fib(0) # seven calls
=> fib(1) + fib(0) + 1 + 1 + 0 # nine calls
=> 1 + 0 + 1 + 1 + 0 = 3

Thus requiring nine function calls to calculate fib(4) and a maximum
stack depth of four. As n increases, the number of calls increases at a
rate significantly faster than the Fibonacci sequence itself, making it
much worse than O(N**2) but not quite as bad as O(2**N):

n      = 0  1  2  3  4   5   6 ...       35 ...
fib(n) = 0  1  1  2  3   5   8 ...  9227465 ...
calls  = 1  1  3  5  9  15  25 ... 29860703 ...

The number of calls is given by a recursive function with a similar form
as that of Fibonacci. As far as I know, it doesn't have a standard name,
but I'll call it R(n):

R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1
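
For reference, the table above can be reproduced from this recurrence
(a sketch; memoized with functools.lru_cache so it runs quickly):

from functools import lru_cache

@lru_cache(maxsize=None)
def R(n):
    # number of calls made by the naive recursive fib
    if n < 2:
        return 1
    return R(n-1) + R(n-2) + 1

print([R(n) for n in range(7)])  # [1, 1, 3, 5, 9, 15, 25]
print(R(35))                     # 29860703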

Consequently, the naive recursive function is ridiculously slow and
memory-hungry.

This seems to have given rise to a myth that recursion should be avoided.
What needs to be avoided is *badly thought out* recursion. More sensible
recursive forms perform much better. I leave them as an exercise for
those who care, or an exercise in googling for those who care a little
less *wink*

--
Steven

harrismh777

Apr 30, 2011, 03:23:40
Peter Otten wrote:
> I don't understand what you are trying to say -- but it's wrong ;)

... that was the point! :-))

harrismh777

Apr 30, 2011, 03:43:53
Peter Otten wrote:
> For the record, the one true way to implement the Fibonacci series in Python
> is
>
> >>> def fib():
> ...     a = b = 1
> ...     while True:
> ...         yield a
> ...         a, b = b, a+b # look ma, no temporary variable

[snip]

I created two scripts, stressed them a bit, and then timed:
I called the two scripts fib.py and fib2.py... here they are:

==================begin fib.py============================
#!/home/marcus/bin/python3
def fib():
    a=b=1
    while True:
        yield a
        a,b=b,a+b # look ma, no temporary variable

from itertools import islice
flist=list(islice(fib(), 100000))
==================end fib.py==============================


==================begin fib2.py===========================
#!/home/marcus/bin/python3
def fib(i=1):
    a=b=1;l=[]
    for j in range(0,i):
        l.append(a)
        a,b = b,a+b # look ma, I can do it too....
    return l

list=fib(100000)
==================end fib2.py=============================

[ TIMES ] 1 core, 1.6 Ghz, 3196 bogomips

marcus@shire:~/Python3$ time ./fib.py

real 0m2.775s
user 0m1.908s
sys 0m0.820s


marcus@shire:~/Python3$ time ./fib2.py

real 0m2.796s
user 0m1.916s
sys 0m0.824s

You will notice first that the times are *almost* identical. So,
down under the heart of python, something is wrong. (never mind)

But the really interesting thing I found is that when I coded away
the temporary reference, 100ms of user time moved over to sys time. doh,
must be that 'somewhere' a 'temporary' reference had to be created...
and it doesn't look like it's optimized very well...

But, this should also go to show you... that there is *never* just
one obvious way to do anything... contrary to the ZEN of Python... ;-)


    On the one hand, I am very much interested in "yield," because of
its obvious persistent state. On the other hand, I don't like your fib()
function because it does not encapsulate the fib generator. I suppose
you are considering whatever module is holding the fib() code as the
encapsulation, but I thought the idea from the OP was to create a
generator that could be called and return a list... this is a little goofy:
list(islice(fib(), X))

when what was wanted is:

list = fib(X)

kind regards,
m harris

Thomas Rachel

Apr 30, 2011, 04:14:46
Am 30.04.2011 09:43 schrieb harrismh777:

> On the one hand, I am very much interested in "yield," because of its
> obvious persistent state. On the other hand, I don't like your fib()
> function because it does not encapsulate the fib generator. I suppose
> you are considering whatever module is holding the fib() code as the
> encapsulation, but I thought the idea from the OP was to create a
> generator that could be called and return a list... this is a little goofy:
> list(islice(fib(), X))
>
> when what was wanted is:
>
> list = fib(X)

You can have both with the following:

def fib(max=None):
    a = b = 1
    while max is None or a <= max:
        yield a
        a, b = b, a+b

from itertools import islice
flist = list(islice(fib(), 100000))

flist2 = list(fib(100000))

or - if even the latter is unwanted -


def fibgen(max=None):
    a = b = 1
    while max is None or a <= max:
        yield a
        a, b = b, a+b

def fib(max=None, num=None):
    if num is None:
        return list(fibgen(max=max))
    else:
        from itertools import islice
        return list(islice(fibgen(max=max), num))


Thomas

Thomas Rachel

Apr 30, 2011, 05:37:22
Am 30.04.2011 07:35, schrieb harrismh777:
> Ian Kelly wrote:
>> since the fact is that if
>> the function were properly coded, the call stack for fib(20) would
>> never be more than 20 entries deep at any one time.
>
> Not so much... and much more !....
>
> ... because each recursion level 'return' calls fib() twice, and each of
> those calls fib() twice, and you get the point...

yes - but they are called one after the other, so the "twice" call
counts only for execution speed, not for recursion depth.


Thomas

Hans Georg Schaathun

Apr 30, 2011, 07:01:50
On Sat, 30 Apr 2011 06:43:42 +0100, Paul Rudin
<paul....@rudin.co.uk> wrote:
: Writing recursive code is acceptable and is a nice clear way of
: expressing things when you have naturally recursive data structures, and
: can lead to perfectly good compiled code. The problem in CPython is the
: lack of tail optimization, so it's not a good idea for Python. Some
: language standards guarantee tail optimization...

Well, if you run into a case where tail optimisation really makes a
difference, you probably want to reimplement it for a compiler that
guarantees tail optimisation, rather than reimplement it without
recursion within python.


--
:-- Hans Georg

Paul Rudin

Apr 30, 2011, 07:29:00

Clearly it makes a difference in any case where you'd hit the recursion
limit. It's no big deal to do your own unwinding of the recursion to a
loop, but the code is typically less clear.
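
For illustration, unwinding the recursive factorial by hand might keep
the pending multiplications in an explicit list instead of on the call
stack (a sketch; fact_iter is an illustrative name):

def fact_iter(n):
    pending = []
    while n > 1:
        pending.append(n)   # what would have been a stack frame
        n -= 1
    result = 1
    while pending:
        result *= pending.pop()
    return result

print(fact_iter(5))  # 120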

Hans Georg Schaathun

Apr 30, 2011, 09:21:26
On Sat, 30 Apr 2011 12:29:00 +0100, Paul Rudin
<paul....@rudin.co.uk> wrote:
: Clearly it makes a difference in any case where you'd hit the recursion
: limit.

What kind of problems make you hit the limit?

Other than when you forget the base case, I mean.

: It's no big deal to do your own unwinding of the recursion to a
: loop, but the code is typically less clear.

Sure. And you have to live with your less clear code when you maintain
the system later.

--
:-- Hans Georg

Paul Rudin

Apr 30, 2011, 10:40:24
Hans Georg Schaathun <h...@schaathun.net> writes:

> On Sat, 30 Apr 2011 12:29:00 +0100, Paul Rudin
> <paul....@rudin.co.uk> wrote:
> : Clearly it makes a difference in any case where you'd hit the recursion
> : limit.
>
> What kind of problems make you hit the limit?
>
> Other than when you forget the base case, I mean.

Anytime you have enough data... there are plenty of things that are natural to
represent as recursive data structures, and often you don't know in
advance how much data your code is going to have to deal with.

Peter Otten

Apr 30, 2011, 12:07:20
to pytho...@python.org
harrismh777 wrote:

Well, fib(10**5)[-1] has over 20000 digits, so I'd expect the runtime to be
dominated by long-integer arithmetic. And that is identical for both
scripts. You will get some improvement from switching to gmpy.

> But the really interesting thing I found is that when I coded away
> the temporary reference, 100ms of user time moved over to sys time. doh,
> must be that 'somewhere' a 'temporary' reference had to be created...
> and it doesn't look like it's optimized very well...
>
> But, this should also go to show you... that there is *never* just
> one obvious way to do anything... contrary to the ZEN of Python... ;-)

I take that as an indication that you are not Dutch ;)

> On the one hand, I am very much interested in "yield," because of
> its obvious persistent state. On the other hand, I don't like your fib()
> function because it does not encapsulate the fib generator. I suppose
> you are considering whatever module is holding the fib() code as the
> encapsulation, but I thought the idea from the OP was to create a
> generator that could be called and return a list... this is a little
> goofy:
> list(islice(fib(), X))
>
> when what was wanted is:
>
> list = fib(X)

I would not seriously recommend list(islice(generator, stop)) in that case,
either, but I think my fib() is a good building block:

>>> from itertools import islice
>>> def cached_series(g):
...     items = []
...     def f(n):
...         if n <= len(items):
...             return items[:n]
...         items.extend(islice(g, n-len(items)))
...         return items[:n]
...     return f
...
>>> def _fib():
...     a = b = 1
...     while True:
...         yield a
...         a, b = b, a + b
...
>>> def _noisy():
...     for x in _fib():
...         print("calculating", x)
...         yield x
...
>>> fib = cached_series(_noisy())
>>> fib(0)
[]
>>> fib(1)
calculating 1
[1]
>>> fib(5)
calculating 1
calculating 2
calculating 3
calculating 5
[1, 1, 2, 3, 5]
>>> fib(5)
[1, 1, 2, 3, 5]
>>> fib(6)
calculating 8
[1, 1, 2, 3, 5, 8]


harrismh777

Apr 30, 2011, 16:41:07
Thomas Rachel wrote:
> You can have both with the following:

> def fib(max=None):
> a = b = 1
> while max is None or a <= max:
> yield a
> a, b = b, a+b

> from itertools import islice
> flist = list(islice(fib(), 100000))

> flist2 = list(fib(100000))

Yes, yes, yes... but what I realized was really bugging me is the import
of islice... I don't like it...; however, I agree that we should
combine Otten's ideas with my own... good collaboration... but like this:

fib3.py 'yields' within a 'for' (better than the 'while' for a couple of
reasons I'll include later, after the rebuttal)... anyway, because the def
with the yield compiles into an iterable, we can take advantage of the
iteration protocol directly with the 'for' to either print the list over
time, or to get the list and save it... as so:

====================begin fib3.py=======================================
#!/home/marcus/bin/python3
def fib(n=1):
    a = b = 1
    for j in range(n):
        yield a
        a, b = b, a+b

# to get the elements of the generator over time
for n in fib(10):
    print(n, end=' ')

# or, get the whole list and save it
flist=[]
for n in fib(10):
    flist.append(n)
====================end fib3.py=========================================

Or, same technique generally, we can take advantage of the fact that the
def with a 'yield' compiles into an iterable with a next method... all
part of the iteration protocol of Python... next in 2.6, and __next__
in 3.2... and the next(N) function, provided kindly by someone for
convenience's sake, meaning N.next() in 2.6 and N.__next__() in 3.2.
In the following code I've included a try block to catch the iteration
protocol's StopIteration error so that we can use the venerable 'while
True' do-forever coding style outside the fib() generator. Again, we can
print the elements over time, or we can save the list as a whole for
later use, as so:

====================begin fib4.py=======================================
#!/home/marcus/bin/python3
def fib(n=1):
    a = b = 1
    for j in range(n):
        yield a
        a, b = b, a+b

# get the elements of the generator over time
X = fib(10)
while True:
    try:
        print(next(X), end=' ')
    except StopIteration:
        break

# or, get the whole list and save it
Y = fib(10)
flist=[]
while True:
    try:
        flist.append(next(Y))
    except StopIteration:
        break

====================end fib4.py=========================================

harrismh777

Apr 30, 2011, 16:51:51
Peter Otten wrote:
> > But, this should also go to show you... that there is *never* just
> > one obvious way to do anything... contrary to the ZEN of Python... ;-)
>
> I take that as an indication that you are not Dutch ;)

True enough... American Irish - mostly...

:-}


Hans Georg Schaathun

May 1, 2011, 04:18:36
On Sat, 30 Apr 2011 15:40:24 +0100, Paul Rudin
<paul....@rudin.co.uk> wrote:
: Anytime you have enough data... there are plenty of things that are natural to

: represent as recursive data structures, and often you don't know in
: advance how much data your code is going to have to deal with.

Sure, but one would think that if you can fit the data in memory,
then you can also fit the call stack. I can also /imagine/ that one
might run into a limit, but I cannot see any concrete examples where
it is likely.

--
:-- Hans Georg

Steven D'Aprano

May 1, 2011, 05:04:27


Why? You might have 4000 MB of main memory, and only 2 MB (say?) of call
stack allocated. The call stack can't grow indefinitely. If it does, you
get a stack overflow:

http://www.ehow.com/list_6572027_reasons-stack-overflow.html

which, if you're lucky, will just crash the process. If you're unlucky,
it will lead to an exploit that malware can use to compromise your
machine.

In Python, the virtual machine protects you against stack overflow.
Before the stack overflows, Python raises a RuntimeError (maximum
recursion depth exceeded). You can experiment by using
sys.getrecursionlimit and sys.setrecursionlimit, but be careful: if you
increase the limit too high, a deeply recursive function will overflow
the stack.


--
Steven

Hans Georg Schaathun

May 1, 2011, 05:27:14
On 01 May 2011 09:04:27 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: Why? You might have 4000 MB of main memory, and only 2 MB (say?) of call
: stack allocated. The call stack can't grow indefinitely. If it does, you
: get a stack overflow:

Of course you do, but you are still only saying that there might be
an application where this might happen because of excessive although
logically correct recursion. You have not given a single example where
it actually happened.

: In Python, the virtual machine protects you against stack overflow.
: Before the stack overflows, Python raises a RuntimeError (maximum
: recursion depth exceeded). You can experiment by using
: sys.getrecursionlimit and sys.setrecursionlimit, but be careful: if you
: increase the limit too high, a deeply recursive function will overflow
: the stack.

But the recursion limit is mainly there to protect you against faulty
base cases. Obviously, since it measures the number of items on the
stack and not their size.

--
:-- Hans Georg

Steven D'Aprano

May 1, 2011, 07:56:57
On Sun, 01 May 2011 10:27:14 +0100, Hans Georg Schaathun wrote:

> On 01 May 2011 09:04:27 GMT, Steven D'Aprano
> <steve+comp....@pearwood.info> wrote:
> : Why? You might have 4000 MB of main memory, and only 2 MB (say?) of
> : call stack allocated. The call stack can't grow indefinitely. If it
> : does, you get a stack overflow:
>
> Of course you do, but you are still only saying that there might be an
> application where this might happen because of excessive although
> logically correct recursion. You have not given a single example where
> it actually happened.

Just google on "stack overflow crash".


> : In Python, the virtual machine protects you against stack overflow.
> : Before the stack overflows, Python raises a RuntimeError (maximum
> : recursion depth exceeded). You can experiment by using
> : sys.getrecursionlimit and sys.setrecursionlimit, but be careful: if
> : you increase the limit too high, a deeply recursive function will
> : overflow the stack.
>
> But the recursion limit is mainly there to protect you against faulty
> base cases. Obviously, since it measures the number of items on the
> stack and not their size.

The Python virtual machine knows how big each entry on the stack needs to
be. (I don't, but it's got to be at least the size of a pointer.) So
"number of items" is just a multiplication away from "size of the items".

In practice the main reason that stacks overflow is because of faulty
base cases in recursion. That's obvious. But in principle, any
sufficiently large number of function calls could overflow the stack. If
the call stack is (say) 1 MB (chosen only to make the maths easier), and
each function call requires (say) a single four-byte entry on the stack,
then you can have a maximum of 250000 function calls before overflowing
the stack.

If you don't like my numbers -- and you probably shouldn't, since I made
them up -- choose your own. But whatever numbers you choose, there *must*
be a maximum number of function calls before the stack overflows.

Not necessarily recursive function calls either: any nested function call
will do. However, it's generally only in recursion that you have more
than a few hundred calls on the stack.

So even if the base case is correct, you can overflow the stack. Given
the naive recursive factorial function:

def fact(n):
    if n <= 1: return 1
    return n*fact(n-1)

and the theoretical limit above, then fact(250001) will overflow the
stack even though the base case is correct.


--
Steven

Hans Georg Schaathun

May 1, 2011, 09:15:35
On 01 May 2011 11:56:57 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: Just google on "stack overflow crash".

And I get loads of examples of stack overflows, but I could not see
anybody linking this to logically correct recursion. Wikipedia, for
instance, mentions two common causes, neither of which has anything
to do with logically correct recursion.

: The Python virtual machine knows how big each entry on the stack needs to
: be. (I don't, but it's got to be at least the size of a pointer.) So
: "number of items" is just a multiplication away from "size of the items".

Does it? In a conventional stack you need to store all the local
variables for the function as well. Thus, there is no limit to how
much space a single element on the stack may require.

: But in principle, any
: sufficiently large number of function calls could overflow the stack.
: (...)
: If you don't like my numbers -- and you probably shouldn't, since I made
: them up -- choose your own. But whatever numbers you choose, there *must*
: be a maximum number of function calls before the stack overflows.

But all of this is in principle, and does not constitute any valid
reason to avoid recursion in practice. If you want to argue that
recursion is a bad thing in /practice/ you need a /practical/ argument.

There is also a limit to how far you can usefully recurse over a limited
data structure.

: def fact(n):
:     if n <= 1: return 1
:     return n*fact(n-1)
:
: and the theoretical limit above, then fact(250001) will overflow the
: stack even though the base case is correct.

Of course that will overflow, but you are now trying to calculate an
integer of several /million/ bits. Please suggest me a current-day
application where you would want to have and also be able to use such
a number.

There are many ways to crash a system if you want to.

But if you don't deliberately try to crash it, you are much more
likely to crash it because you allocate too much memory in each step
than because the recursion gets too deep. Consequently, because recursion
is usually a clearer form of expression than iterative loops, recursion
may actually be /less/ dangerous.


--
:-- Hans Georg

Chris Angelico

May 1, 2011, 16:49:41
to pytho...@python.org
On Sun, May 1, 2011 at 11:15 PM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> :  The Python virtual machine knows how big each entry on the stack needs to
> :  be. (I don't, but it's got to be at least the size of a pointer.) So
> :  "number of items" is just a multiplication away from "size of the items".
>
> Does it?  In a conventional stack you need to store all the local
> variables for the function as well.  Thus, there is no limit to how
> much space a single element on the stack may require.

In anything less than a useless and trivial demo of recursion, there's
going to be some kind of local state for each call (in the case of a
fibonacci or factorial, that would be the number(s) being multiplied).
Whether they go on the stack or elsewhere, that's additional storage
that gets multiplied out.

> There is also a limit to how far you can usefully recurse over a limited
> data structure.

Sure. Serialize this Python object in a way that can be given to, say, PHP:
foo={"asdf":"qwer","zxcv":"1234"}; foo["self"]=[1,2,3,foo]
Recurse from self into the list, recurse from there into a
dictionary... Okay, that's a rather naive recursion and fraught with
risk, but there are more complex examples. And watching for cyclic
references would be O(N*N) as you'd need to maintain a full list of
every PyObject* that you've sighted (talking here from the C API, but
the same consideration applies whichever way you do it).

> There are many ways to crash a system if you want to.
>
> But if you don't deliberately try to crash it, you are much more
> likely to crash it because you allocate to much memory in each step
> than because the recursion gets to deep.  Consequently, because recursion
> is usually a clearer form of expression than iterative loops, recursion
> may actually be /less/ dangerous.

I'm not sure that recursion is clearer. Recursion is a way of
expressing the two rules:

1! = 1
n! = n * (n-1)!

But iteration is a way of expressing this equivalent rule:

n! = 1 * 2 * 3 * ... * n-1 * n

It really depends what you're trying to say.

Chris Angelico

Terry Reedy

May 1, 2011, 18:24:30
to pytho...@python.org
On 5/1/2011 5:27 AM, Hans Georg Schaathun wrote:

> Of course you do, but you are still only saying that there might be
> an application where this might happen because of excessive although
> logically correct recursion. You have not given a single example where
> it actually happened.

I will. Stack overflow *can* happen with a bad base case. It *will*
happen with correct linear recursion* applied to a large enough
collection on a finite-memory machine (as opposed to an 'infinite'
memory Turing machine).

def count_popable_collection(pop_col):
    try:
        pop_col.pop()
        return count_popable_collection(pop_col) + 1
    except (KeyError,IndexError):
        return 0

print(count_popable_collection({1,2,3}))
print(count_popable_collection([1,2,3]))

Both calls correctly print 3, but will fail for large enough sets or
lists. I call the above body recursion*. A tail-recursive version

def count_popable_collection2(pop_col, cnt=0):
    try:
        pop_col.pop()
        return count_popable_collection2(pop_col, cnt + 1)
    except (KeyError,IndexError):
        return cnt

print(count_popable_collection2({1,2,3}))
print(count_popable_collection2([1,2,3]))

is less clear to most people, I think, and, executed as written, fails
at the same point with the same memory error. Try either of the above
with list(range(bignum)) (I am using 3.2).

This does not make linear recursion 'bad', just impractical for general
use on finite-memory machines. While I consider it very useful for
learning, it is unnecessary because it is easy to write an iterative
version. So called tail-recursion optimization saves memory by REMOVING
RECURSION and replacing it with iteration.

def count_popable_collection3(pop_col):
    cnt = 0
    while True:
        try:
            pop_col.pop()
            cnt += 1
        except (KeyError,IndexError):
            return cnt

print(count_popable_collection3({1,2,3}))
print(count_popable_collection3([1,2,3]))

Python does not do this automatically because 1) it can be a semantic
change under some circumstances; 2) one who wants the iterative version
can just as easily write it directly; and 3) Python has a better way to
process collections that removes essentially all the boilerplate in the
recursive-call and while-loop versions:

def count_popable_collection4(pop_col):
    cnt = 0
    for item in pop_col:
        cnt += 1
    return cnt

print(count_popable_collection4({1,2,3}))
print(count_popable_collection4([1,2,3]))


Binary recursion* is a different case because the exponential growth in
leaf number and hence time limits the useful depth of recursion to well
below the default of 1000.

* linear recursion: usually and at most one recursive call per call
* binary recursion: usually and at most two recursive calls per call
Fib is the best known example.
* tail recursion: base cases return completed calculations
* body recursion: base cases return starting values, often constants

--

Terry Jan Reedy

BartC

May 1, 2011, 19:16:08
"Steven D'Aprano" <steve+comp....@pearwood.info> wrote in message
news:4dbbb7b6$0$29991$c3e8da3$5496...@news.astraweb.com...

> On Sat, 30 Apr 2011 08:32:55 +0200, Peter Otten wrote:
>
>> harrismh777 wrote:
>>
>>> Ian Kelly wrote:
>>>> since the fact is that if
>>>> the function were properly coded, the call stack for fib(20) would
>>>> never be more than 20 entries deep at any one time.
>>>>
>>>>
>>> Not so much... and much more !....
>>>
>>>
>>> ... because each recursion level 'return' calls fib() twice, and each
>>> of those calls fib() twice, and you get the point...
>>
>> I don't understand what you are trying to say -- but it's wrong ;)
>
>
> I don't know what M Harris thinks he is trying to say either, but the
> *naive* Fibonacci recursive algorithm is particularly badly performing
> and should be avoided. It's ironic that of the two classic algorithms
> used for teaching beginner programmers about recursion, neither is useful
> in practice.

Yes, it generates lots of calls.

About 22000 for fib(20), and 330 million for fib(40).

That's why it's popular for benchmarks that measure performance of function
calls. Using an iterative algorithm wouldn't work quite so well...

--
Bartc

Steven D'Aprano

May 1, 2011, 21:09:21
On Sun, 01 May 2011 14:15:35 +0100, Hans Georg Schaathun wrote:

> On 01 May 2011 11:56:57 GMT, Steven D'Aprano
> <steve+comp....@pearwood.info> wrote:
> : Just google on "stack overflow crash".
>
> And I get loads of examples of stack overflows, but I could not see
> anybody linking this to logically correct recursion. Wikipedia, for
> instance, mentions two common causes, neither of which has anything to
> do with logically correct recursion.

Ah, I see where you're coming from now! You think I'm arguing *against*
the use of recursion. Not at all. Earlier in this thread, I said:

"Consequently, the naive recursive function is ridiculously slow and
memory-hungry.

This seems to have given rise to a myth that recursion should be avoided.
What needs to be avoided is *badly thought out* recursion."

[Emphasis in original.]


> : The Python virtual machine knows how big each entry on the stack
> needs to : be. (I don't, but it's got to be at least the size of a
> pointer.) So : "number of items" is just a multiplication away from
> "size of the items".
>
> Does it? In a conventional stack you need to store all the local
> variables for the function as well. Thus, there is no limit to how much
> space a single element on the stack may require.

To be honest, I don't know what Python does with local variables. But I
*guess* it uses a constant-sized record which points to the locals,
because that's how I'd do it :)

I do know that objects in CPython all live in the heap, not on the stack,
so even if local variables are placed directly on the stack, that's only
a pointer to the object, not the entire object.


[...]


> But all of this is in principle, and does not constitute any valid
> reason to avoid recursion in practice. If you want to argue that
> recursion is a bad thing in /practice/ you need a /practical/ argument.

Okay. In *practice*, all else being equal, recursion is less efficient
than iteration, especially in Python which doesn't optimize tail-
recursion. So if you have a choice between two algorithms which are
equally simple and clear, and one is recursive and the other uses
iteration, the one with iteration will be more efficient.

However, and equally in practice, it's unusual to have two equally simple
algorithms. Usually one or the other will be much simpler than the other,
and you should avoid "optimizing" code based on hypothetical
considerations and instead prefer simple code over complicated, unless
absolutely necessary.

Given a choice between a complicated iterative algorithm and a simple
recursive version, there's no prima facie reason to expect the recursive
version to *necessarily* be slower than iteration in Python *merely*
because it uses recursion. As always, if speed is an issue, profile and
identify the bottlenecks before deciding how to fix them.

--
Steven

Terry Reedy

May 1, 2011, 22:30:02
to pytho...@python.org
On 5/1/2011 7:16 PM, BartC wrote:

> Yes, it generates lots of calls.
>
> About 22000 for fib(20), and 330 million for fib(40).

Using the standard double recursion implementation of fib, ncf(n)
(number of calls to fib() for fib(n)) requires ncf(n-1) + ncf(n-2) + 1
calls. (The +1 is for the call to fib(n) itself.) So it grows a bit
faster than fib(n).

Fib(n) is actually calculated as the sum of fib(n) 1s: 1+1+1+...+1,
fib(n) times, as 1 is the only non-0 base case and there is nothing else
added or multiplied in non-base cases. To put it another way, there are
fib(n) leaf nodes labeled 1 in the unbalanced tree generated by the
recursion.

--
Terry Jan Reedy

Hans Georg Schaathun

May 2, 2011, 01:36:06
On Mon, 2 May 2011 06:49:41 +1000, Chris Angelico
<ros...@gmail.com> wrote:
: Sure. Serialize this Python object in a way that can be given to, say, PHP:

: foo={"asdf":"qwer","zxcv":"1234"}; foo["self"]=[1,2,3,foo]
: Recurse from self into the list, recurse from there into a
: dictionary... Okay, that's a rather naive recursion and fraught with
: risk, but there are more complex examples. And watching for cyclic
: references would be O(N*N) as you'd need to maintain a full list of
: every PyObject* that you've sighted (talking here from the C API, but
: the same consideration applies whichever way you do it).

Wouldn't cyclic references give infinite recursion? And remain
infinite if you recode it iteratively?

: I'm not sure that recursion is clearer. Recursion is a way of
: expressing the two rules:
:
: 1! = 1
: n! = n * (n-1)!
:
: But iteration is a way of expressing this equivalent rule:
:
: n! = 1 * 2 * 3 * ... * n-1 * n
:
: It really depends what you're trying to say.

True. There is a place for everything. However, in the example
above, you can map the recursive definition directly into python
without further ado. In order to express the one-liner in python,
as iteration, you need to introduce additional elements, namely
a state (index variable). Hence, recursion is clearer by being
close to the language you would normally use to describe the
problem.

--
:-- Hans Georg

Chris Angelico

May 2, 2011, 02:28:45
to pytho...@python.org
On Mon, May 2, 2011 at 3:36 PM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> On Mon, 2 May 2011 06:49:41 +1000, Chris Angelico
>  <ros...@gmail.com> wrote:
> :  Sure. Serialize this Python object in a way that can be given to, say, PHP:
> :  foo={"asdf":"qwer","zxcv":"1234"}; foo["self"]=[1,2,3,foo]
>
> Wouldn't cyclic references give infinite recursion?  And remain
> infinitive if you recode it iteratively?

Well, I don't know of a decent non-recursive way to process a
recursive structure. Incidentally, this example is almost directly
from some working code of ours; I have a C function that recurses over
a Python dictionary and aborts as soon as it's found "too much" data
(for a fairly arbitrary definition of "too much", but one that's
deliberately designed to prevent infinite recursion). Since every
element in the dictionary can be a list/dict, and every element of
those can be too, there's no non-recursive way to do it, other than by
doing the recursion yourself:

# partly pseudocode
searchme=[foo]
while len(searchme):
    cur=get_next_elem(searchme[-1])
    if cur==end_of_list: searchme[-1:]=[]
    elif can_be_recursed_into(cur): searchme.append(cur)
    else: output(cur)

This would work, more or less, but it merely trades "true" recursion
for the searchme[] stack. Yes, it's iterative. No, it hasn't abolished
recursion.

> True.  There is a place for everything.  However, in the example
> above, you can map the recursive definition directly into python
> without further ado.  In order to express the one-liner in python,
> as iteration, you need to introduce additional elements, namely
> a state (index variable).  Hence, recursion is clearer by being
> close to the language you would normally use to describe the
> problem.

True, and you could abolish a lot of temporary variables by turning
them into parameters to recursive calls. But you've abolished nothing.

The recursive factorial is very similar to:
reduce(`*,range(1,n+1))
The iterative is very similar to:
reduce(`*,xrange(1,n+1))
One of 'em stacks up all the numbers and the other gets 'em as it
needs 'em. Fundamentally, there's really not a lot of difference. By
piling up the numbers on your stack, you just change the format of
your saved state, you don't actually save anything.

Chris Angelico

Chris Angelico

May 2, 2011, 02:30:53
to pytho...@python.org
On Mon, May 2, 2011 at 4:28 PM, Chris Angelico <ros...@gmail.com> wrote:
> reduce(`*,range(1,n+1))
> reduce(`*,xrange(1,n+1))

Whoops, forgot which language I was using. Back-tick functions not
being available, these need to be:

reduce(operator.mul,range(1,n+1))
reduce(operator.mul,xrange(1,n+1))

Chris Angelico

rusi

May 2, 2011, 03:08:26
On Apr 30, 11:14 am, Peter Otten <__pete...@web.de> wrote:

> For the record, the one true way to implement the Fibonacci series in Python
> is
>
> >>> def fib():
> ...     a = b = 1
> ...     while True:
> ...         yield a
> ...         a, b = b, a+b # look ma, no temporary variable

Not any claim to 'the one true pythonic way', but fib can be written in
a clean recursive way with linear space-time behavior as follows:

Dijkstra explained that fib is a 2nd-order recurrence
-- fib(n) is defined in terms of fib(n-1) and fib(n-2) --
whereas programming loops and recursion are 1st-order -- state at
iteration n defines state at iteration n+1.

Converting the 2nd order fib relation to a 1st order one trivially
gives a linear program.
The key insight from Dijkstra is that all we need to do is to move
from a recursive function returning fibonacci nos to one returning
pairs of adjacent ones.

def fibpair(n):
    # returns (fib(n), fib(n+1))
    if n==0:
        return (1,1)
    else:
        (a,b) = fibpair(n-1)
        return (b, a+b)

After that fib is just this:

def fib(n):
    a,b = fibpair(n)
    return a

rusi

May 2, 2011, 04:27:39
On Apr 30, 12:18 pm, Steven D'Aprano
<steve+comp.lang.pyt...@pearwood.info> wrote:

> The number of calls is given by a recursive function with a similar form
> as that of Fibonacci. As far as I know, it doesn't have a standard name,
> but I'll call it R(n):
>
> R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1

Changing your definition slightly to the following:

def r(n):
    if n==0 or n==1: return 0
    else: return r(n-1) + r(n-2) + 1


This r counts the number of times the '+' is done in the original
(naive) fib.

We see it is the same as fib (off by 1)

>>> [fib(n) for n in range(10)]
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> [r(n) for n in range(10)]
[0, 0, 1, 2, 4, 7, 12, 20, 33, 54]
>>>

So it does not need a new name :-)

Steven D'Aprano

May 2, 2011, 04:56:57
On Mon, 02 May 2011 01:27:39 -0700, rusi wrote:

> On Apr 30, 12:18 pm, Steven D'Aprano
> <steve+comp.lang.pyt...@pearwood.info> wrote:
>
>> The number of calls is given by a recursive function with a similar
>> form as that of Fibonacci. As far as I know, it doesn't have a standard
>> name, but I'll call it R(n):
>>
>> R(n) = R(n-1) + R(n-2) + 1, where R(0) = R(1) = 1
>
> Changing your definition slightly to the following:
>
> def r(n):
>     if n==0 or n==1: return 0
>     else: return r(n-1) + r(n-2) + 1

Except that's not the same as my "R(n)". The base cases should be 1, not
0. That makes rather a big difference to the value: by n = 35, you have

R(35) = 29860703
fib(35) = 9227465

or nearly 2.25 times greater. And the difference just keeps getting
bigger...


> We see it is the same as fib (off by 1)

[...]


> So it does not need a new name :-)

I see your smiley, but there are a number of similar series as Fibonacci,
with the same recurrence but different starting values, or similar but
slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
Perrin numbers.

(The fact that most of those start with P is almost certainly a
coincidence.)

--
Steven

Hans Georg Schaathun

May 2, 2011, 05:41:48
On 02 May 2011 01:09:21 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: Ah, I see where you're coming from now! You think I'm arguing *against*
: the use of recursion. Not at all. Earlier in this thread, I said:

Fair enough. Somebody said something about recursion mainly being
a beginner's error. I don't think it was you, but I felt that your
argument in context mainly served to reinforce such a view, whether
intended or not.

: "Consequently, the naive recursive function is ridiculously slow and

: memory-hungry.
:
: This seems to have give rise to a myth that recursion should be avoided.
: What needs to be avoided is *badly thought out* recursion."

Then we agree.

And badly thought-out iteration is as bad as badly thought-out
recursion.

: To be honest, I don't know what Python does with local variables. But I
: *guess* it uses a constant-sized record which points to the locals,
: because that's how I'd do it :)

Maybe, but would it be able to treat specially C API functions with
a large chunk of stack memory used for local variables?

: Given a choice between a complicated iterative algorithm and a simple
: recursive version, there's no prima facie reason to expect the recursive
: version to *necessarily* be slower than iteration in Python *merely*
: because it uses recursion. As always, if speed is an issue, profile and
: identify the bottlenecks before deciding how to fix them.

Then we are on the same page.

And it is becoming increasingly clear how bizarre this discussion is in
a python context. The overhead which may be caused by recursion in
hardware is only one of many sources of overhead which one accepts when
opting to use python in order to gain other benefits.

--
:-- Hans Georg

Hans Georg Schaathun

May 2, 2011, 05:53:52
On 02 May 2011 08:56:57 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: I see your smiley, but there are a number of similar series as Fibonacci,
: with the same recurrence but different starting values, or similar but
: slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
: Perrin numbers.

Well, Fibonacci isn't one unique sequence. Any sequence satisfying
f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of
starting values. At least according to some authors.

Ian Andersen (A First Course in Combinatorial Mathematics) prefers
the sequence 1,2,3,5 ...

Cormen, Leiserson, Rivest (Introduction to Algorithms) prefer
0,1,1,2, ... (although they also start the indexing at 0).

The Penguin Dictionary of Mathematics prefers 1,1,2,3,5 but also
suggests 0,1,1,2,3, ...

In short, don't assume that a person talking about Fibonacci
numbers assumes the same base cases as you do.

--
:-- Hans Georg

rusi

May 2, 2011, 06:40:31
On May 2, 2:53 pm, Hans Georg Schaathun <h...@schaathun.net> wrote:

> On 02 May 2011 08:56:57 GMT, Steven D'Aprano  <steve+comp.lang.pyt...@pearwood.info> wrote:
>
> :  I see your smiley, but there are a number of similar series as Fibonacci,
> :  with the same recurrence but different starting values, or similar but
> :  slightly different recurrences. E.g. Lucas, primefree, Pell, Padovan and
> :  Perrin numbers.
>
> Well, Fibonacci isn't one unique sequence.  Any sequence satisfying
> f(n) = f(n-1) + f(n-2) is /a/ Fibonacci sequence.  Regardless of
> starting values.  At least according to some authors.

And they all will, when expressed in closed form, be of the form
f(n) = c*phi^n + something of a smaller order
where phi = (1 + sqrt(5))/2 and c depends on the starting values.

Since phi > 1 they are exponential, ignoring lower order terms.
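
Binet's closed form makes this concrete (a sketch for the 0,1,1,2,...
indexing; float rounding is exact only for moderate n):

from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2   # |psi| < 1, the smaller-order term

def fib_binet(n):
    return round((phi**n - psi**n) / sqrt(5))

print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]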

Dave Angel

May 2, 2011, 08:50:39
to Chris Angelico, pytho...@python.org
Chris Angelico wrote:

> On Mon, May 2, 2011 at 3:36 PM, Hans Georg Schaathun<h...@schaathun.net> wrote:
>> On Mon, 2 May 2011 06:49:41 +1000, Chris Angelico
>> <ros...@gmail.com> wrote:
>> : Sure. Serialize this Python object in a way that can be given to, say, PHP:
>> : foo={"asdf":"qwer","zxcv":"1234"}; foo["self"]=[1,2,3,foo]

>>
>> Wouldn't cyclic references give infinite recursion? And remain
>> infinitive if you recode it iteratively?
>
> Well, I don't know of a decent non-recursive way to process a
> recursive structure. Incidentally, this example is almost directly
> from some working code of ours; I have a C function that recurses over
> a Python dictionary and aborts as soon as it's found "too much" data
> (for a fairly arbitrary definition of "too much", but one that's
> deliberately designed to prevent infinite recursion).
> <snip>
>
> Chris Angelico
>

When iterating over a repeatable, potentially infinite sequence of
distinguishable values, you can safely detect infinitude ;-) (cycles)
with a linear overhead (rather than n-squared).

Given a sequence, create two iterators for it. Start them both at the
same point, but iterate the second one twice for every time you iterate
the first one. If the two ever get the same value, you have a cycle.

In the case of Python objects, you probably want to use an 'is'
comparison, rather than comparing id() for equal. But the concept
works, and easily.
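
A minimal sketch of that tortoise-and-hare idea, assuming the sequence
can be restarted via a zero-argument factory (make_iter is an
illustrative name):

def has_cycle(make_iter):
    slow = make_iter()   # advances one step per round
    fast = make_iter()   # advances two steps per round
    while True:
        a = next(slow, None)
        if next(fast, None) is None:
            return False       # fast reached the end: finite, no cycle
        b = next(fast, None)
        if b is None:
            return False
        if a is b:             # 'is' comparison, as suggested above
            return True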

DaveA

Hans Georg Schaathun

May 2, 2011, 09:14:20
On Sun, 01 May 2011 18:24:30 -0400, Terry Reedy
<tjr...@udel.edu> wrote:
: This does not make linear recursion 'bad', just impractical for general
: use on finite-memory machines. While I consider it very useful for
: learning, it is unnecessary because it is easy to write an iterative
: version. So called tail-recursion optimization saves memory by REMOVING
: RECURSION and replacing it with iteration.
: (...)
: Python does not do this automatically because 1) it can be a semantic
: change under some circumstances; 2) one who wants the iterative version
: can just as easily write it directly;

That's the silliest argument I ever heard. The programmer should write
the code the way he and application domain experts think, i.e. very
often recursively. The compiler should generate code the way the CPU
thinks (most optimally), i.e. iteratively.

Of course, I am not saying that one should always model problems
recursively, and thus also not that one should implement them
recursively. Just that when a model is agreed recursively between
the stakeholders, one should get a compiler to do the optimisation,
not a programmer.

I always thought of python as a step forward, in the way it allows
direct expression of so many alternative modes of thinking (imperative,
functional, recursion, iterators, etc). If what you say is python
philosophy, it is a step backward in requiring the programmer to
think more about low-level technical matters which even C has managed
to leave for the compiler.

: and 3) Python has a better way to
: process collections that removes essentially all the boilerplate in the
: recursive-call and while-loop versions:


--
:-- Hans Georg

Steven D'Aprano

May 2, 2011, 12:24:07
On Mon, 02 May 2011 10:53:52 +0100, Hans Georg Schaathun wrote:

> On 02 May 2011 08:56:57 GMT, Steven D'Aprano
> <steve+comp....@pearwood.info> wrote:
> : I see your smiley, but there are a number of similar series as
> : Fibonacci, with the same recurrence but different starting values, or
> : similar but slightly different recurrences. E.g. Lucas, primefree,
> : Pell, Padovan and Perrin numbers.
>
> Well, Fibonacci isn't one unique sequence. Any sequence satisfying f(n)
> = f(n-1) + f(n-2) is /a/ Fibonacci sequence. Regardless of starting
> values. At least according to some authors.

According to the references I have, there is one and only one Fibonacci
sequence, the usual one invented by Fibonacci to talk about rabbits.
(Actually, we should talk about Fibonacci *numbers*, rather than
sequence.)

Wolfram Mathworld is typical, defining *the* Fibonacci numbers as those
with starting conditions f(1)=1, f(2)=1 (or f(0)=0 if you prefer).

http://mathworld.wolfram.com/FibonacciNumber.html

The Collins Dictionary of Mathematics agrees with Mathworld.

The Lucas numbers (related to, but not the same as, the Lucas sequence)
obey the same recurrence as the Fibonacci numbers, but with a different
set of starting values. So there is certainly precedent in the
mathematical literature for giving different names to the same recurrence
with different starting values.

In any case, whatever you call them, what I call R(n) is not one, as the
recurrence is different:

R(n) = R(n-1) + R(n-2) + 1

> Penguin, Dict. of Mathematics prefer 1,1,2,3,5 but they also suggest
> 0,1,1,2,3, ...

This depends on whether you start counting from n=0 or n=1.

Even the sequence you quote, from Anderson:

1,2,3,5...

is just the usual Fibonacci numbers starting at n=2 instead of 1 or 0.
(No matter how confusing something is, there's always at least one person
who will try to make it *more* confusing.)


> In short, don't assume that a person talking about Fibonacci numbers
> assume the same base cases as you do.

As far as I'm concerned, there are only two definitions of Fibonacci
numbers that have widespread agreement in the mathematical community:

0,1,1,2,3 ... (starting from n=0)
1,1,2,3,5 ... (starting from n=1)

Any other definition is rather, shall we say, idiosyncratic.

--
Steven

Steven D'Aprano

May 2, 2011, 12:41:37
On Mon, 02 May 2011 14:14:20 +0100, Hans Georg Schaathun wrote:

> On Sun, 01 May 2011 18:24:30 -0400, Terry Reedy
> <tjr...@udel.edu> wrote:
> : This does not make linear recursion 'bad', just impractical for
> general : use on finite-memory machines. While I consider it very
> useful for : learning, it is unnecessary because it is easy to write an
> iterative : version. So called tail-recursion optimization saves memory
> by REMOVING : RECURSION and replacing it with iteration. : (...)
> : Python does not do this automatically because 1) it can be a semantic
> : change under some circumstances; 2) one who wants the iterative
> version : can just as easily write it directly;
>
> That's the silliest argument I ever heard.

You must be new to the Internet then :)


> The programmer should write
> the code the way he and application domain experts think, i.e. very
> often recursively. The compiler should generate code the way the CPU
> thinks (most optimally), i.e. iteratively.

Perhaps so, but there can be a huge gulf between what "should" happen and
what does happen. The CPython implementation is deliberately a naive, not-
very clever compiler. (PyPy seems to be the implementation aiming at
cutting-edge cleverness.)

Even such simple optimizations as constant folding are very controversial
among the CPython developers. They have good reasons for their caution:
optimizing compilers are renowned for breaking code, especially floating
point code, and there have been cases in Python's history where
optimizations have broken existing code and needed to be backed out.

The problem is, once you include side-effects, recursion and iteration
are *not* identical. Specifically, the opposition to tail-recursion
optimization in Python is that it plays havoc with the tracebacks you get
in the event of an exception. The argument goes:

* Debugging is harder than writing code in the first place, so it is more
important to simplify debugging, even at the cost of making some code
slightly harder to write.

* Recursion doesn't just mean simple recursion where a function calls
itself, but mutually recursive functions. You could have (say) five
functions that mutually called each other recursively, and in the event
of a traceback you need to see the exact calling chain, but that's
precisely the information that is blown away by tail-recursion
optimization.

* Tail-recursion is only a special case of recursion, and not a very
common special case either, so it is hardly worth the extra complexity of
the compiler to treat it specially.
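For a concrete picture of what the optimization trades away, here is a
minimal sketch: a tail-recursive sum, and the loop that a TCO pass could
in principle rewrite it into -- discarding exactly the per-call frames a
traceback would otherwise show.

# Tail-recursive: the recursive call is the last thing evaluated,
# so an optimizing compiler could reuse the caller's frame.
def total(xs, acc=0):
    if not xs:
        return acc
    return total(xs[1:], acc + xs[0])

# The loop such a pass would effectively produce: same result, one
# frame, and no record of the intermediate calls.
def total_iter(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc
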

Regardless of whether you agree with the arguments or not, they're hardly
"silly".


--
Steven

Terry Reedy

da leggere,
2 mag 2011, 14:56:1302/05/11
a pytho...@python.org
On 5/2/2011 9:14 AM, Hans Georg Schaathun wrote:
> On Sun, 01 May 2011 18:24:30 -0400, Terry Reedy

> : Python does not do this automatically because 1) it can be a semantic


> : change under some circumstances; 2) one who wants the iterative version
> : can just as easily write it directly;
>
> That's the silliest argument I ever heard.

Calling something silly when you have not really read it and perhaps
asked questions to really understand it is, to me, silly.

If you want a compiler that can introduce bugs into your program by
doing what it thinks you meant, rather than what you said, don't use
CPython.

> : and 3) Python has a better way to
> : process collections that removes essentially all the boilerplate in the
> : recursive-call and while-loop versions:

To properly use modern Python, one should know and use for loops and
iterators.
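
One sketch of that style: a Fibonacci generator driven by ordinary
for-loop machinery, with no recursion and no index bookkeeping.

from itertools import islice

def fib_gen():
    # Yield Fibonacci numbers 1, 1, 2, 3, 5, ... indefinitely.
    a, b = 0, 1
    while True:
        yield b
        a, b = b, a + b

for f in islice(fib_gen(), 10):
    print(f)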

--
Terry Jan Reedy

Hans Georg Schaathun

da leggere,
2 mag 2011, 16:02:4302/05/11
a
On 02 May 2011 16:41:37 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: You must be new to the Internet then :)

OK. Maybe I heard something worse last time I was an active news user,
years ago.

Anyway, most of the silly things I hear do not qualify as arguments :-)

: The problem is, once you include side-effects, recursion and iteration

: are *not* identical. Specifically, the opposition to tail-recursion
: optimization in Python is that it plays havoc with the tracebacks you get
: in the event of an exception. The argument goes:

Thanks for the comprehensive background. I can see the point that
python may be harder to optimise correctly during compilation than
many other languages.

: Regardless of whether you agree with the arguments or not, they're hardly
: "silly".

I only called one argument silly, namely the one about iterative
rewriting being so simple that it is not an issue. Sometimes it isn't,
but sometimes it is.

The other arguments are valid. And they make me lean more towards
more static, compiled languages without the excessive run-time
dynamism of python.

--
:-- Hans Georg

Mark Dickinson

da leggere,
2 mag 2011, 17:18:3602/05/11
a
On May 2, 5:24 pm, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:

> As far as I'm concerned, there are only two definitions of Fibonacci
> numbers that have widespread agreement in the mathematical community:
>
> 0,1,1,2,3 ... (starting from n=0)
> 1,1,2,3,5 ... (starting from n=1)
>
> Any other definition is rather, shall we say, idiosyncratic.

And a concrete reason for preferring the above definition (in either
form) is that divisibility properties of the sequence are much neater
with this choice:

gcd(F_m, F_n) = F_{gcd(m, n)}
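
A quick numeric check of the identity -- a sketch assuming the
F_0 = 0, F_1 = 1 indexing and Python 3's math.gcd:

from math import gcd

def F(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(gcd(F(m), F(n)) == F(gcd(m, n))
           for m in range(1, 25) for n in range(1, 25))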

--
Mark

harrismh777

da leggere,
2 mag 2011, 17:50:5202/05/11
a
Thomas Rachel wrote:
>> ... because each recursion level 'return' calls fib() twice, and each of
>> those calls fib() twice, and you get the point...
>
> yes - but they are called one after the other, so the "twice" call
> counts only for execution speed, not for recursion depth.
>>> def fib(n):
>>>     if n == 1:
>>>         return(n)
>>>     else:
>>>         return (fib(n-1)+fib(n-2))

They *are not* called one after the other in the sense that there is
ever only one level of recursion with each call (not at all). Take a
closer look. Each call to fib() forces a double head, and each of those
forces another double head (now four), and so on... the recursion depth
exception of the OP testifies against your theory.

The thing about this problem that puzzles me is why we might consider
recursion for a possible solution in the first place. In other words,
normally we consider using recursion when we need information further
down the stack than we have now... so that recursion is necessary
because each head call will not have the information it needs for
completion until the tail clean-up (coming back up the stack) provides
that information.

In the case of the fib() numbers (or the fib sequence) recursion is not
necessary for correct handling of the problem. The simple
straight-forward obvious way to handle the problem is to calculate the
sequence outright in a straight-forward way. On the other hand, Otten's
use of yield (making a generator) makes some sense too, in the case that
we want to get the fib numbers over time without taking the time to
calculate the entire sequence up-front.
Just saying...

kind regards,
m harris

Chris Angelico

da leggere,
2 mag 2011, 17:56:2602/05/11
a pytho...@python.org
On Mon, May 2, 2011 at 11:14 PM, Hans Georg Schaathun <h...@schaathun.net> wrote:
> That's the silliest argument I ever heard.  The programmer should write
> the code the way he and application domain experts think, i.e. very
> often recursively.  The compiler should generate code the way the CPU
> thinks (most optimally), i.e. iteratively.

The CPU can function iteratively or recursively. The main difference
between the programmer and the CPU there is that the CPU is always
imperative, while a programmer might want to think functionally, or in
an event-driven model, and then "under the covers" the compiler
translates that into imperative code.

But in my opinion, the programmer and the interpreter/compiler are
teammates. If you allow programmers to be stupid, you will get a lot
of stupid programmers writing code. (I think that's one of PHP's
biggest downsides.) If the human and the machine know each other well
enough, the resulting code can be orders of magnitude more efficient
to run, *without taking any more time to code* because the programmer
already knew the right things to do.

Perhaps it will take you four hours to read through something, study
it, maybe eyeball the compiler's source code, maybe do some
disassembly. If you have a few years of career ahead of you, you'll
benefit many times over from those few hours, and without spending
extra time, you'll produce better code. THAT is what distinguishes the
master from the novice.

Chris Angelico

Chris Angelico

da leggere,
2 mag 2011, 18:17:3502/05/11
a pytho...@python.org
On Tue, May 3, 2011 at 7:50 AM, harrismh777 <harri...@charter.net> wrote:
> They *are not* called one after the other in the sense that there is ever
> only one level of recursion with each call (not at all). Take a closer look.
> Each call to fib() forces a double head, and each of those forces another
> double head (now four), and so on...  the recursion depth exception of the
> OP testifies against your theory.

def fib(n):
    if n == 1:
        return n
    return fib(n-1) + fib(n-2)


dis.dis(fib)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (1)
              6 COMPARE_OP               2 (==)
              9 JUMP_IF_FALSE            5 (to 17)
             12 POP_TOP

  3          13 LOAD_FAST                0 (n)
             16 RETURN_VALUE
        >>   17 POP_TOP

  4          18 LOAD_GLOBAL              0 (fib)
             21 LOAD_FAST                0 (n)
             24 LOAD_CONST               1 (1)
             27 BINARY_SUBTRACT
             28 CALL_FUNCTION            1
             31 LOAD_GLOBAL              0 (fib)
             34 LOAD_FAST                0 (n)
             37 LOAD_CONST               2 (2)
             40 BINARY_SUBTRACT
             41 CALL_FUNCTION            1
             44 BINARY_ADD
             45 RETURN_VALUE

The recursion is in the last block. Note that it calls a function,
then keeps going. It doesn't fork. There are two CALL_FUNCTION
opcodes, called *sequentially*. In terms of the call stack, there is
only ever one of those two calls active at any given time. Also, as
earlier pointed out, the reason for the stack overflow is that fib(2)
will call fib(1) and fib(0), and fib(0) will call fib(-1) and fib(-2),
etc. Changing the operator from == to <= in the conditional return
will solve this.
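
Spelled out as code, that fix is a sketch like this -- with <= the base
case also stops n == 0 (and anything below it) from recursing further:

def fib(n):
    if n <= 1:          # catches 1, 0 and negatives, so recursion terminates
        return n
    return fib(n-1) + fib(n-2)

print(fib(20))          # 6765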

Chris Angelico

Ian Kelly

da leggere,
2 mag 2011, 18:22:3502/05/11
a harrismh777, pytho...@python.org
On Mon, May 2, 2011 at 3:50 PM, harrismh777 <harri...@charter.net> wrote:
> Thomas Rachel wrote:
>>>
>>> ... because each recursion level 'return' calls fib() twice, and each of
>>> those calls fib() twice, and you get the point...
>>
>> yes - but they are called one after the other, so the "twice" call
>> counts only for execution speed, not for recursion depth.
>
>>>> def fib(n):
>>>>     if n == 1:
>>>>         return(n)
>>>>     else:
>>>>         return (fib(n-1)+fib(n-2))
>
> They *are not* called one after the other in the sense that there is ever
> only one level of recursion with each call (not at all). Take a closer look.
> Each call to fib() forces a double head, and each of those forces another
> double head (now four), and so on...

The branching factor has nothing to do with the maximum stack depth.
If you don't believe me, consider this little script:

import inspect

def maxstack(n):
    if n <= 0:
        return len(inspect.stack())
    else:
        return max(maxstack(n-1), maxstack(n-2))

print maxstack(15)

This prints "17".

Now consider this script, which is similar but singly-recursive:

import inspect

def maxstack(n):
    if n <= 0:
        return len(inspect.stack())
    else:
        return maxstack(n-1)

print maxstack(15)

This also prints "17". Note: they're the same.

>  the recursion depth exception of the
> OP testifies against your theory.

The OP's recursion depth exception was because a malformed base case
made the recursion infinite. It had nothing to do with the branching
factor.

Chris Angelico

da leggere,
2 mag 2011, 18:42:2202/05/11
a pytho...@python.org
On Tue, May 3, 2011 at 8:22 AM, Ian Kelly <ian.g...@gmail.com> wrote:
>> They *are not* called one after the other in the sense that there is ever
>> only one level of recursion with each call (not at all). Take a closer look.
>> Each call to fib() forces a double head, and each of those forces another
>> double head (now four), and so on...
>
> The branching factor has nothing to do with the maximum stack depth.
>

Side point: In a massively parallel environment, branching like this
COULD be implemented as a fork. I just don't know of any
implementations of Python that do so. (And of course, it works for
fib() because it needs/uses no global state, which makes the
recursions completely independent. Not all functions are so courteous,
and the compiler can't necessarily know the difference.)

Chris Angelico

Steven D'Aprano

da leggere,
2 mag 2011, 20:21:4502/05/11
a
On Mon, 02 May 2011 21:02:43 +0100, Hans Georg Schaathun wrote:

> The other arguments are valid. And they make me lean more towards more
> static, compiled languages without the excessive run-time dynamism of
> python.

If you value runtime efficiency over development time, sure. There are
plenty of languages which have made that decision: Pascal, C, Java, Lisp,
Forth, and many more.

Python aims at acceptable performance (between 10 and 100 times slower
than C) and much easier development (between 10 and 100 times faster than
C *wink*). If that tradeoff doesn't suit you, perhaps Python is not the
language for you.


--
Steven

rusi

da leggere,
2 mag 2011, 21:48:2502/05/11
a
On May 3, 2:50 am, harrismh777 <harrismh...@charter.net> wrote:
> The thing about this problem that puzzles me, is why we might consider
> recursion for a possible solution in the first place....

This can be answered directly but a bit lengthily.
Instead let me ask a seemingly unrelated (but actually much the same)
question.

Here are two power functions:

def powI(x,n):
    result = 1
    for i in range(0,n):
        result = result * x
    return result

def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

What are their space/time complexities?
Which do you prefer?

Chris Angelico

da leggere,
2 mag 2011, 22:13:1602/05/11
a pytho...@python.org
On Tue, May 3, 2011 at 11:48 AM, rusi <rusto...@gmail.com> wrote:
> What are their space/time complexities?
> Which do you prefer?

They're pretty similar actually. If you rework the first one to not
use range() but instead have a more classic C style of loop, they'll
be almost identical:

def powI(x,n):
    result = 1
    while n > 0:
        result *= x
        n -= 1
    return result

There's a subtle difference in that powR as you've coded it will
infinite-loop if given a negative exponent, while powI in any form
will simply return 1 (neither is technically correct, fwiw). Other
than that, the only difference is that one keeps its state on the call
stack, the other uses a temporary variable.

Performance would probably benefit from a special syntax meaning
"recurse", which would avoid the LOAD_GLOBAL for the function's name
(plus, things break if you pass recursive functions around after
compiling them - for instance, powR2=powR; def powR(x,n): os.exit() -
but if you do that, then you should expect to see nice bullet-holes in
your foot). However, I do not believe that Python would overall
benefit from any such thing.

Chris Angelico

rusi

da leggere,
2 mag 2011, 23:42:4502/05/11
a
On May 3, 5:21 am, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:
> On Mon, 02 May 2011 21:02:43 +0100, Hans Georg Schaathun wrote:
> > The other arguments are valid.  And they make me lean more towards more
> > static, compiled languages without the excessive run-time dynamism of
> > python.
>
> If you value runtime efficiency over development time, sure. There are
> plenty of languages which have made that decision: Pascal, C, Java, Lisp,
> Forth, and many more.

Or Haskell:

Succinctness like Python
Type safety better than C, C++, or Java, with a minimum of type
declarations

Chris Angelico

da leggere,
3 mag 2011, 01:29:5103/05/11
a pytho...@python.org
On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg <drsa...@gmail.com> wrote:
>
> But the recursive solution has time complexity of O(logn).  The iterative
> solution has time complexity of O(n).  That's a significant difference for
> large n - a significant benefit of the recursive version.

Are you sure? It will produce n function calls and n multiplications,
how is that O(log n)?

Chris Angelico

Ian Kelly

da leggere,
3 mag 2011, 01:46:5803/05/11
a Python
On Mon, May 2, 2011 at 11:27 PM, Dan Stromberg <drsa...@gmail.com> wrote:
> But the recursive solution has time complexity of O(logn).  The iterative
> solution has time complexity of O(n).  That's a significant difference for
> large n - a significant benefit of the recursive version.


It's linear as written. I think you're talking about an
implementation like this one:

def powR(x, n):
    if n < 0:
        return 1.0 / powR(x, -n)
    elif n == 0:
        return 1
    else:
        half_n = n // 2
        root = powR(x, half_n)
        result = root * root
        if half_n + half_n == n - 1:
            result = result * x
        return result

That's O(log n), but it was harder to write, and it could just as
easily be iterative. In fact it might actually have been a bit
simpler to write it iteratively.

Hans Georg Schaathun

da leggere,
3 mag 2011, 01:13:0703/05/11
a
On 03 May 2011 00:21:45 GMT, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
: Python aims at acceptable performance (between 10 and 100 times slower
: than C) and much easier development (between 10 and 100 times faster than
: C *wink*). If that tradeoff doesn't suit you, perhaps Python is not the
: language for you.

It isn't the trade-off per se which bothers me, but certain features
which seem to make compilation harder without making development any
easier.

But then, it probably depends on what kind of development you are doing.

--
:-- Hans Georg

Hans Georg Schaathun

da leggere,
3 mag 2011, 03:01:3503/05/11
a
On Tue, 3 May 2011 07:56:26 +1000, Chris Angelico
<ros...@gmail.com> wrote:
: > often recursively.  The compiler should generate code the way the CPU

: > thinks (most optimally), i.e. iteratively.
:
: The CPU can function iteratively or recursively.

I should have said 'hardware' rather than CPU. Iteratively it is not
as optimal as the call contexts have to be stored.

: But in my opinion, the programmer and the interpreter/compiler are


: teammates. If you allow programmers to be stupid, you will get a lot
: of stupid programmers writing code. (I think that's one of PHP's
: biggest downsides.) If the human and the machine know each other well
: enough, the resulting code can be orders of magnitude more efficient
: to run, *without taking any more tme to code* because the programmer
: already knew the right things to do.

There are situations where that team is essential. However, in many
more situation, the programmer has to team primarily with domain
experts and work with a language in which the domain expert is fluent.
The compiler and hardware are slaves of the engineers, and extra staff
to team up with the compiler is an added cost and delay.

: Perhaps it will take you four hours to read through something, study


: it, maybe eyeball the compiler's source code, maybe do some
: disassembly. If you have a few years of career ahead of you, you'll
: benefit many times over from those few hours, and without spending
: extra time, you'll produce better code. THAT is what distinguishes the
: master from the novice.

That depends on /what/ your career is, and what you need to master.

--
:-- Hans Georg

rusi

da leggere,
3 mag 2011, 03:52:1503/05/11
a
On May 3, 10:29 am, Chris Angelico <ros...@gmail.com> wrote:
> On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg <drsali...@gmail.com> wrote:
> Doh.

> Usually when someone gives a recursive solution to this problem, it's
> O(logn), but not this time.

> Here's a logn one:

:-) Ok so you beat me to it :D

I was trying to arrive at an answer to the question (by M Harris)
about why anyone would do something like fib recursively. I used power
since that illustrates the case more easily, viz:
A trivial power -- recursive or iterative -- is linear time -- O(n)
A more clever power can be O(log(n))
This more clever power can be trivially derived from the recursive one
as follows:

The recursive O(n) function:


def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

is just python syntax for the school algebra (or Haskell) equations:

x^0 = 1
x^(n+1) = x * x^n

Adding one more equation:
x^(2*n) = (x^2)^n = (x*x)^n
Re-python-ifying we get:


def pow(x,n):
    return 1 if (n==0) else pow(x*x, n/2) if even(n) else x*pow(x,n-1)

def even(n): return n % 2 == 0

Can this be done iteratively? Of course, but it's not so trivial.

[If you believe it is, then try writing a log(n) fib iteratively :D ]
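
For the power function, at least, the iterative counterpart is the
classic binary-exponentiation loop -- a sketch working on the bits of n:

def pow_iter(x, n):
    result = 1
    while n:
        if n & 1:          # low bit set: fold in the current square
            result *= x
        x *= x
        n >>= 1
    return result

assert pow_iter(3, 13) == 3 ** 13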

Dave Angel

da leggere,
3 mag 2011, 06:32:0603/05/11
a rusi, pytho...@python.org
On 03/05/2011 09:52, rusi wrote:
> On May 3, 10:29 am, Chris Angelico<ros...@gmail.com> wrote:
>> On Tue, May 3, 2011 at 3:27 PM, Dan Stromberg<drsali...@gmail.com> wrote:
>> Doh.
>
>> Usually when someone gives a recursive solution to this problem, it's
>> O(logn), but not this time.
>
>> Here's a logn one:
>
> :-) Ok so you beat me to it :D
>
> I was trying to arrive at an answer to the question (by M Harris)
> about why anyone would do something like fib recursively. I used power
> since that illustrates the case more easily, viz:
> A trivial power -- recursive or iterative -- is linear time -- O(n)
> A more clever power can be O(log(n))
> This more clever power can be trivially derived from the recursive one
> as follows:
>
> The recursive O(n) function:
> def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))
>
> is just python syntax for the school algebra (or Haskell) equations:
>
> x^0 = 1
> x^(n+1) = x * x^n
>
> Adding one more equation:
> x^(2*n) = (x^2)^n = (x*x)^n
>
> Re-python-ifying we get:
>
> def pow(x,n):
>     return 1 if (n==0) else pow(x*x, n/2) if even(n) else x*pow(x,n-1)
>
> def even(n): return n % 2 == 0

>
> Can this be done iteratively? Of course but its not so trivial.
>
> [If you believe it is, then try writing a log(n) fib iteratively :D ]
>

What I'm surprised at is that nobody has pointed out that the logn
version is also generally more accurate, given traditional floats.
Usually getting the answer accurate (given the constraints of finite
precision intermediates) is more important than performance, but in this
case they go hand in hand.
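
A rough way to see the effect (a sketch; exact error sizes vary with
x and n) is to compare both float algorithms against a higher-precision
Decimal reference:

from decimal import Decimal

x, n = 1.0001, 10000
exact = Decimal(x) ** n              # high-precision reference

linear = 1.0
for _ in range(n):                   # n rounded multiplications
    linear *= x

def pow_sq(x, n):                    # O(log n) rounded multiplications
    r = 1.0
    while n:
        if n & 1:
            r *= x
        x *= x
        n >>= 1
    return r

print(abs(Decimal(linear) - exact))
print(abs(Decimal(pow_sq(x, n)) - exact))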

DaveA

rusi

da leggere,
3 mag 2011, 06:51:1603/05/11
a
On May 3, 3:32 pm, Dave Angel <da...@ieee.org> wrote:

> What I'm surprised at is that nobody has pointed out that the logn
> version is also generally more accurate, given traditional floats.
> Usually getting the answer accurate (given the constraints of finite
> precision intermediates) is more important than performance, but in this
> case they go hand in hand.

Well!!
That's a slant that I was not aware of -- though fairly obvious on
second thought.

Thanks Dave.

Chris Angelico

da leggere,
3 mag 2011, 07:04:0703/05/11
a pytho...@python.org
On Tue, May 3, 2011 at 8:32 PM, Dave Angel <da...@ieee.org> wrote:
> What I'm surprised at is that nobody has pointed out that the logn version
> is also generally more accurate, given traditional floats. Usually getting
> the answer accurate (given the constraints of finite precision
> intermediates) is more important than performance, but in this case they go
> hand in hand.

And that, Your Honour, is why I prefer bignums (especially for
integers) to floating point. Precision rather than performance.

Chris Angelico

Steven D'Aprano

da leggere,
3 mag 2011, 08:49:5003/05/11
a
On Tue, 03 May 2011 21:04:07 +1000, Chris Angelico wrote:

> And that, Your Honour, is why I prefer bignums (especially for integers)
> to floating point. Precision rather than performance.

I'm intrigued by your comment "especially for integers", which implies
that you might use bignums for non-integers. Out of curiosity, how would
you calculate:

sin(sqrt(7)*pi/31)

using bignums?

--
Steven

Chris Angelico

da leggere,
3 mag 2011, 09:02:2903/05/11
a pytho...@python.org

REXX uses decimal bignums, although I don't have a high-performance
math library (for sin) that uses anything more than IEEE double
precision. If I coded my own sin algorithm in REXX, I could go to
whatever precision memory (and patience) would allow.
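
Python's decimal module allows the same approach. This sketch adapts
the Taylor-series sin() recipe from the decimal module's documentation;
Steven's sqrt(7)*pi/31 would additionally need a high-precision pi,
which has its own well-known recipes:

from decimal import Decimal, getcontext

def dec_sin(x):
    # Taylor series for sine, evaluated at the current context precision.
    getcontext().prec += 2           # guard digits for intermediates
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:                # stop when a term no longer changes s
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s                        # unary plus re-rounds to the context

getcontext().prec = 50
print(dec_sin(Decimal(1)))           # 0.84147098480... to ~50 digits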

Chris Angelico

harrismh777

da leggere,
3 mag 2011, 13:10:0403/05/11
a
Chris Angelico wrote:
> The recursion is in the last block. Note that it calls a function,
> then keeps going. It doesn't fork. There are two CALL_FUNCTION
> opcodes, called*sequentially*. In terms of the call stack, there is

> only ever one of those two calls active at any given time.


[ @ Ian, Chris, Thomas ]

Thanks, guys, but I think y'all are still missing my point/question, as
interesting as these discussions are... something is wrong (might be my
understanding)...

... CPython must be making function calls by placing stack-frames on the
call stack... which has a relatively small limit. Python does crappy
tail optimization (one of the reasons to avoid recursion in Python) and
yes, this is an infinite recursion... doh... but the main point for this
part of the discussion is that the "maximum recursion depth exceeded in
comparison" runtime error is posted because there are more stack-frames
being posted to the call stack than there is call-stack to hold them!
We might change this with sys.setrecursionlimit, but that's dangerous;
but the bottom line is that in order for the recursion depth runtime
error to be posted, somebody placed too many stack-frames on the call
stack... in this case about 36,355 of them... yes, the base-case is
coded wrong and the process is infinite, the point is that tail
processing doesn't even get to run... the silly thing pummels the call
stack with stack-frames (obviously exceeding 20) and the runtime error
is posted. If your point is that the infinite process is the problem, I
agree. But my point is that the cpu crunch and the rate at which the
call stack is filled has to do with the double call (which never finds
tail processing).

What am I missing here?


Chris Angelico

da leggere,
3 mag 2011, 17:41:3203/05/11
a pytho...@python.org
On Wed, May 4, 2011 at 3:10 AM, harrismh777 <harri...@charter.net> wrote:
> If your point is that the infinite process is the problem, I agree. But my
> point is that the cpu crunch and the rate at which the call stack is filled
> has to do with the double call (which never finds tail processing).

The double call _does not_ affect it. Failing to terminate recursion
_does_. I don't know what you mean by "cpu crunch" but the call stack
is going to get n entries. On the Python 2.6 on this system,
sys.getrecursionlimit() returns 1000, meaning that you can calculate
fib(1000) safely (okay, probably not quite as there'll be a few used
for other things, but fib(900) would be safe).
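
An instrumented sketch shows this directly: the deepest chain for
fib(n) is fib(n) -> fib(n-1) -> ... -> fib(2), so the live depth stays
linear in n no matter how many calls are made in total.

depth = max_depth = 0

def fib(n):
    global depth, max_depth
    depth += 1
    max_depth = max(max_depth, depth)
    try:
        if n <= 2:
            return 1
        return fib(n-1) + fib(n-2)
    finally:
        depth -= 1

fib(20)
print(max_depth)    # 19 -- linear in n, despite thousands of total calls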

Chris Angelico

Ian Kelly

da leggere,
3 mag 2011, 18:31:2903/05/11
a Python
On Tue, May 3, 2011 at 3:41 PM, Chris Angelico <ros...@gmail.com> wrote:
> On Wed, May 4, 2011 at 3:10 AM, harrismh777 <harri...@charter.net> wrote:
>> If your point is that the infinite process is the problem, I agree. But my
>> point is that the cpu crunch and the rate at which the call stack is filled
>> has to do with the double call (which never finds tail processing).
>
> The double call _does not_ affect it. Failing to terminate recursion
> _does_. I don't know what you mean by "cpu crunch" but the call stack
> is going to get n entries. On the Python 2.6 on this system,
> sys.getrecursionlimit() returns 1000, meaning that you can calculate
> fib(1000) safely (okay, probably not quite as there'll be a few used
> for other things, but fib(900) would be safe).

Depends what you mean by "safe". A back-of-the-envelope calculation
shows that with modern technology it would take more than 10 ** 257
years to complete. That puts it well into the Dark Era of the
universe, long after the black holes will have decayed, when I suspect
it will be difficult to find a continued power source for the computer
to run. And even if it somehow is still running, the process memory
will have been so thoroughly muddled by cosmic rays that the final
result of the calculation will be utterly worthless.

Chris Angelico

da leggere,
3 mag 2011, 19:09:2403/05/11
a pytho...@python.org
On Wed, May 4, 2011 at 8:31 AM, Ian Kelly <ian.g...@gmail.com> wrote:
> A back-of-the-envelope calculation
> shows that with modern technology it would take more than 10 ** 257
> years to complete.

Then I propose that Python be extended to allow for underlying
hardware upgrades without terminating a script. This will be essential
to the finding of answers to these vital questions.

Chris Angelico

Chris Angelico

da leggere,
2 mag 2011, 05:03:4002/05/11
a pytho...@python.org
On Mon, May 2, 2011 at 6:56 PM, Steven D'Aprano
<steve+comp....@pearwood.info> wrote:
> (The fact that most of those start with P is almost certainly a
> coincidence.)
>

There's definitely something attractive about that letter. Lots of
programming languages' names start with P. I smell a conspiracy. The
word "programming" has been secretly attracting related words to its
corner of the alphabet... and the government's *covering it up* and
pretending there's nothing happening!

Chris Angelico

Gregory Ewing

da leggere,
4 mag 2011, 22:35:4104/05/11
a
Chris Angelico wrote:

> There's definitely something attractive about that letter. Lots of
> programming languages' names start with P.

Including one I invented some years ago, that (in the spirit
of C and its derivatives) was called simply "P".

(I was playing around with Sun's NeWS window server, which
was programmed in Postscript, and I got fed up with Forth-like
syntax, so I wrote a translator that took an infix language and
generated Postscript from it.)

--
Greg

Robert Brown

da leggere,
8 mag 2011, 01:44:1308/05/11
a

Steven D'Aprano <steve+comp....@pearwood.info> writes:
> If you value runtime efficiency over development time, sure. There are
> plenty of languages which have made that decision: Pascal, C, Java,
> Lisp, Forth, and many more.

I don't understand why you place Lisp and Forth in the same category as
Pascal, C, and Java. Lisp and Forth generally have highly interactive
development environments, while the other languages generally require an
edit, compile, run it again debugging cycle. Lisp in particular does a
better job than Python in optimizing developer time. I can change class
definitions interactively without restarting my program. I can add type
declarations to a single function and recompile it without restarting my
program. Python requires me to rewrite the slow bits of my program in C
to get good performance. Why is that an efficient use of developer
time?

bob

Steven D'Aprano

da leggere,
8 mag 2011, 08:59:0208/05/11
a
On Sun, 08 May 2011 01:44:13 -0400, Robert Brown wrote:

> Steven D'Aprano <steve+comp....@pearwood.info> writes:
>> If you value runtime efficiency over development time, sure. There are
>> plenty of languages which have made that decision: Pascal, C, Java,
>> Lisp, Forth, and many more.
>
> I don't understand why you place Lisp and Forth in the same category as
> Pascal, C, and Java. Lisp and Forth generally have highly interactive
> development environments, while the other languages generally require an
> edit, compile, run it again debugging cycle.

Good point. Perhaps I need to rethink where Lisp and Forth sit in the
development vs runtime trade-off continuum.


> Python requires me to rewrite the slow bits of my program in C
> to get good performance.

Python doesn't require you to re-write anything in C. If you want to make
a different trade-off (faster runtime, slower development time) then you
use a language that has made the appropriate trade-off. This applies
whether you are talking about the entire program, or just one subroutine.


> Why is that an efficient use of developer time?

Because for any non-trivial program, it is faster and more developer
friendly to just write one or three performance-critical routines in C,
and the rest in Python, than it is to write everything in C.

--
Steven

Teemu Likonen

da leggere,
8 mag 2011, 12:34:1308/05/11
a
* 2011-05-08T12:59:02Z * Steven D'Aprano wrote:

> On Sun, 08 May 2011 01:44:13 -0400, Robert Brown wrote:
>> I don't understand why you place Lisp and Forth in the same category
>> as Pascal, C, and Java. Lisp and Forth generally have highly
>> interactive development environments, while the other languages
>> generally require an edit, compile, run it again debugging cycle.
>
> Good point. Perhaps I need to rethink where Lisp and Forth sit in the
> development vs runtime trade-off continuum.
>
>> Python requires me to rewrite the slow bits of my program in C to get
>> good performance.
>
> Python doesn't require you to re-write anything in C. If you want to
> make a different trade-off (faster runtime, slower development time)
> then you use a language that has made the appropriate trade-off.

I believe that Robert Brown wanted to imply that Common Lisp is quite
optimal on both sides. It supports dynamic interactive development and
yet it has implementations with very efficient compilers. The trade-off
is not necessarily between the two.

But of course "development time" is a nicely vague concept. Depending on
the argument it can include just the features of language and
implementation. Other times it could include all the available resources
such as documentation, library archives and community mailing lists. All
these can reduce development time.

Chris Angelico

da leggere,
8 mag 2011, 22:49:0108/05/11
a pytho...@python.org
On Mon, May 9, 2011 at 2:34 AM, Teemu Likonen <tlik...@iki.fi> wrote:
> But of course "development time" is a nicely vague concept. Depending on
> the argument it can include just the features of language and
> implementation. Other times it could include all the available resources
> such as documentation, library archives and community mailing lists. All
> these can reduce development time.
>

And it also includes a developer's personal experience level. A good
programmer will have a mental "algorithmic library" if you like;
effectively, it's a gigantic hash table of "if I need to do this, I
can do it this way". A mediocre programmer will both take longer and
produce less efficient code than an awesome programmer. This is
primarily not a language feature, but a language that usually has one
obvious way to do things will likely be easier to grok than a language
that has myriad "gotchas" to work around.

Chris Angelico

Robert Brown

da leggere,
8 mag 2011, 23:01:1808/05/11
a

Teemu Likonen <tlik...@iki.fi> writes:
> * 2011-05-08T12:59:02Z * Steven D'Aprano wrote:
>
>> On Sun, 08 May 2011 01:44:13 -0400, Robert Brown wrote:
>>> Python requires me to rewrite the slow bits of my program in C to get
>>> good performance.
>>
>> Python doesn't require you to re-write anything in C. If you want to
>> make a different trade-off (faster runtime, slower development time)
>> then you use a language that has made the appropriate trade-off.
>
> I believe that Robert Brown wanted to imply that Common Lisp is quite
> optimal on both sides. It supports dynamic interactive development and
> yet it has implementations with very efficient compilers. The trade-off
> is not necessarily between the two.

Yes, exactly. Sometimes I don't know in advance which parts of my
Python program will run too slowly or take too much memory. I only find
out after the code is written, when it fails to run fast enough or
doesn't fit in my computer's 32-bit address space. Then the slow parts
get recoded in C. Common Lisp supports a larger performance range. A
developer can write slow code quickly or faster code by giving the
compiler more information, which requires more developer time.


> But of course "development time" is a nicely vague concept. Depending on
> the argument it can include just the features of language and
> implementation. Other times it could include all the available resources
> such as documentation, library archives and community mailing lists. All
> these can reduce development time.

This is definitely true. The total time to complete any task always
depends on tons of factors outside of the programming language. The
ecosystem is important ... How much code can I avoid writing myself?

bob

Hans Mulder

da leggere,
11 mag 2011, 18:06:5711/05/11
a
On 03/05/2011 09:52, rusi wrote:

> [If you believe it is, then try writing a log(n) fib iteratively :D ]

It took me a while, but this one seems to work:

from collections import namedtuple

Triple = namedtuple('Triple', 'hi mid lo')
Triple.__mul__ = lambda self, other: Triple(
    self.hi * other.hi + self.mid * other.mid,
    self.hi * other.mid + self.mid * other.lo,
    self.mid * other.mid + self.lo * other.lo,
)

def fib(n):
    f = Triple(1, 1, 0)
    a = Triple(1, 0, 1)
    while n:
        if n & 1:
            a *= f
        f *= f
        n >>= 1
    return a.mid
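
A quick sanity check against the naive iterative definition:

def fib_naive(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib(k) == fib_naive(k) for k in range(200))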


-- HansM

rusi

da leggere,
13 mag 2011, 07:11:3313/05/11
a

Bravo! Can you explain this?

The tightest way I knew so far was this:
The 2x2 matrix
0 1
1 1
raised to the nth power gives the nth fibonacci number. [And then use
a logarithmic matrix mult]
Your version is probably tighter than this.

Yet one could argue that this is 'cheating' because you (and I) are
still solving the power problem.

What I had in mind was to use fib results like:
f_(2n+1) = f_n^2 + f_(n+1)^2
and use these in the same way (from first principles) like we use the
equation
x^2n = (x*x)^n
to arrive at a logarithmic power algo.

Ian Kelly

da leggere,
13 mag 2011, 09:55:5113/05/11
a Python
On Fri, May 13, 2011 at 5:11 AM, rusi <rusto...@gmail.com> wrote:
> The tightest way I knew so far was this:
> The 2x2 matrix
> 0 1
> 1 1
> raised to the nth power gives the nth fibonacci number. [And then use
> a logarithmic matrix mult]
> Your version is probably tighter than this.

Oh, nice! I did it this way once:

V = [0 1]

M = [0 1]
    [1 1]

fib(n) = (V * M ** n)[0]

Since I viewed M as operating on V, it never occurred to me that
multiplying by V is actually unnecessary, but it is obvious in
retrospect. I think it's just a fortuitous coincidence that it works,
since V sets up the base case and M describes the recursive case. For
a FIbonacci sequence starting with any other pair of numbers, V would
change, but M would not, and so V would no longer happen to be
identical to the top row of M.

Ian

Hans Mulder

da leggere,
13 mag 2011, 15:46:0413/05/11
a
On 13/05/2011 13:11, rusi wrote:
> On May 12, 3:06 am, Hans Mulder<han...@xs4all.nl> wrote:
>> On 03/05/2011 09:52, rusi wrote:
>>
>>> [If you believe it is, then try writing a log(n) fib iteratively :D ]
>>
>> It took me a while, but this one seems to work:
>>
>> from collections import namedtuple
>>
>> Triple = namedtuple('Triple', 'hi mid lo')
>> Triple.__mul__ = lambda self, other: Triple(
>>     self.hi * other.hi + self.mid * other.mid,
>>     self.hi * other.mid + self.mid * other.lo,
>>     self.mid * other.mid + self.lo * other.lo,
>> )
>>
>> def fib(n):
>>     f = Triple(1, 1, 0)
>>     a = Triple(1, 0, 1)
>>     while n:
>>         if n & 1:
>>             a *= f
>>         f *= f
>>         n >>= 1
>>     return a.mid
>>
>> -- HansM
>
> Bravo! Can you explain this?
>
> The tightest way I knew so far was this:
> The 2x2 matrix
> 0 1
> 1 1
> raised to the nth power gives the nth fibonacci number. [And then use
> a logarithmic matrix mult]
> Your version is probably tighter than this.

My method is just a thinly disguised version of your method: your 2x2
matrices are symmetrical, i.e. the number in the upper right is equal
to the number in the lower left. So I can save some memory and some
CPU time by working with only three numbers.

> Yet one could argue that this is 'cheating' because you (and I) are
> still solving the power problem.

That's true.

> What I had in mind was to use fib results like:
> f_(2n) = f_n^2 + f_(n+1)^2
> and use these in the same way (from first principles) like we use the
> equation
> x^2n = (x*x)^n
> to arrive at a logarithmic power algo.

To compute f(4n) this way, you need to compute both f(2n) and f(2n+1)
first, and to compute those, you need f(n) and f(n+1) and f(n+2)....

I think I can construct an O(log(n)**2) algorithm this way.

And it would still be 'cheating', because we'd still use some special
property of the Fibonacci sequence to reduce our problem to the power
problem. I think this sort of cheating can't be avoided: there is no
general method to compute recurrent sequences faster than O(n);
Fibonacci is just a special case.
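
For what it's worth, carrying the pair (f(n), f(n+1)) down the recursion
together avoids that blow-up entirely. This "fast doubling" sketch uses
the same identities, re-indexed, and makes only one recursive call per
halving, i.e. O(log n) arithmetic steps:

def fib_pair(n):
    if n == 0:
        return (0, 1)                # (f(0), f(1))
    a, b = fib_pair(n // 2)          # (f(k), f(k+1)) with k = n // 2
    c = a * (2 * b - a)              # f(2k)
    d = a * a + b * b                # f(2k+1)
    return (d, c + d) if n & 1 else (c, d)

print(fib_pair(20)[0])               # 6765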

-- HansM

Mark Dickinson

da leggere,
13 mag 2011, 17:48:3213/05/11
a
On May 11, 11:06 pm, Hans Mulder <han...@xs4all.nl> wrote:
> On 03/05/2011 09:52, rusi wrote:
>
> > [If you believe it is, then try writing a log(n) fib iteratively :D ]
>
> It took me a while, but this one seems to work:
>
> from collections import namedtuple
>
> Triple = namedtuple('Triple', 'hi mid lo')
> Triple.__mul__ = lambda self, other: Triple(
>      self.hi * other.hi + self.mid * other.mid,
>      self.hi * other.mid + self.mid * other.lo,
>      self.mid * other.mid + self.lo * other.lo,
> )
> [...]

You can even get away with pairs rather than triples:

----

from collections import namedtuple

Pair = namedtuple('Pair', 'z o')
Pair.__mul__ = lambda self, other: Pair(
    self.z * other.z + self.o * other.o,
    self.z * other.o + self.o * other.z + self.o * other.o,
)

def fib(n):
    f = Pair(0, 1)
    a = Pair(1, 0)
    while n:
        if n & 1:
            a *= f
        f *= f
        n >>= 1
    return a.o

----

I don't see this (or Hans' version) as cheating at all. This really
*is* the power algorithm, just in a different number system from the
usual one. For those with a bit of abstract algebra, the above
algorithm is just computing x^n in the ring Z[x] / (x^2 - x - 1). A
pair 'Pair(a, b)' represents the element 'a + bx' (more precisely, the
image of 'a + bx' under the natural quotient map Z[x] -> Z[x] / (x^2 -
x - 1)) of that ring. And this *can* be generalised to other
sequences given by a linear recurrence.

Mark

rusi

da leggere,
14 mag 2011, 13:24:1314/05/11
a
On May 14, 2:48 am, Mark Dickinson <dicki...@gmail.com> wrote:

> I don't see this (or Hans' version) as cheating at all.

Yeah sure -- cheating is a strong word :-)

> This really *is* the power algorithm, just in a different number system from the
> usual one.

Yes that was my point. If we take the standard logarithmic power algo
as trivial (in the sense that it is well known) then all these
solutions do heavy-lifting to transform fib to power and then use the
'trivial' algo.

A more direct approach would be to use the identities:

f_2n = f_n ^ 2 + 2*f_n * f_(n-1)
f_(2n-1) = f_n ^ 2 + f_(n-1) ^ 2


The naive python implementation of which is:

def even(n): return n % 2 == 0

def sq(x): return x * x

def fib(n):
    if n==1 or n==2:
        return 1
    elif even(n):
        return sq(fib(n//2)) + 2 * fib(n//2) * fib(n//2 - 1)
    else:
        return sq(fib(n//2 + 1)) + sq(fib(n//2))

This is a strange algo -- logarithmic because it halves the n,
exponential because of the double (triple) calls. [I cannot say I
know how to work out its exact complexity but I would guess it's about
linear]

--------------
BTW How do I parse "the ring Z[x] / (x^2 - x - 1)"?
Is this a division ring?

Ian Kelly

da leggere,
14 mag 2011, 17:19:5514/05/11
a Python
On Sat, May 14, 2011 at 11:24 AM, rusi <rusto...@gmail.com> wrote:
> def fib(n):
>    if n==1 or n==2:
>        return 1
>    elif even(n):
>        return sq(fib (n//2)) + 2 * fib(n//2) * fib(n//2 - 1)
>    else:
>        return sq(fib (n//2 + 1)) + sq(fib(n // 2))
>
> This is a strange algo  -- logarithmic because it halves the n,
> exponential because of the double (triple) calls.  [I cannot say I
> know how to work out its exact complexity but I would guess it's about
> linear]

Yup, linear. Assuming you optimize the even case so that it doesn't
actually call fib(n//2) twice, the call tree can be approximated as a
balanced binary tree with height log(n). The total number of nodes in
the tree is thus O(2 ** log(n)) = O(n).
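
One way to collapse those repeated subcalls is memoization -- a sketch
using functools.lru_cache (assumed available, Python 3.2+), under which
each distinct argument is computed only once:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n == 1 or n == 2:
        return 1
    if n % 2 == 0:
        half = fib(n // 2)
        return half * half + 2 * half * fib(n // 2 - 1)
    return fib(n // 2 + 1) ** 2 + fib(n // 2) ** 2

print(fib(1000))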

rusi

da leggere,
14 mag 2011, 23:32:4714/05/11
a
On May 15, 2:19 am, Ian Kelly <ian.g.ke...@gmail.com> wrote:

It would be linear if the base of the log were 2.
I am not sure it is.
You see the naive fib has a complexity which is fib itself. [Earlier
discussion with Steven]
fib is exponential but with radix < 2 [phi = (1 + sqrt(5))/2 ]
This would suggest that this algo is slightly better than linear.

But I have no idea of the exact complexity.

Mark Dickinson

da leggere,
15 mag 2011, 15:20:0315/05/11
a

Nope. It's linear, just as Ian Kelly said. If g(n) is the total
number of fib calls made for fib(n), then it's easy to show (e.g., by
induction) that:

(a) g(n) is an increasing function of n, and
(b) g(2^k) = 2^k - 1 for all k >= 1.

Hence g(n) is O(n).

Mark

Mark Dickinson

da leggere,
15 mag 2011, 15:29:3815/05/11
a

Hmm. It's even easier: g(n) is:

* 1 if n == 1
* n if n > 1, n odd
* n-1 if n > 1, n even

So definitely linear. :-)
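
A brute-force check of that closed form, counting calls directly (a
sketch assuming, as Ian did, that the even branch computes fib(n//2)
only once):

def g(n):
    if n in (1, 2):
        return 1
    if n % 2 == 0:
        return 1 + g(n // 2) + g(n // 2 - 1)
    return 1 + g(n // 2 + 1) + g(n // 2)

assert all(g(n) == (1 if n == 1 else n - (n % 2 == 0))
           for n in range(1, 1000))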

--
Mark
