Regards,
-Dhruv.
Homework?
>>> def cumulative_sum(values, start=0):
...     for v in values:
...         start += v
...         yield start
...
>>> list(cumulative_sum([0, 1, 2, 1, 1, 0, 0, 2, 3]))
[0, 1, 3, 4, 5, 5, 5, 7, 10]
Peter
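Peter's generator is lazy, so it works on infinite inputs as well as lists. A quick sketch in Python 3 syntax (the extra islice usage is an editorial illustration, not from the thread):

```python
import itertools

def cumulative_sum(values, start=0):
    # Yield the running total after folding in each element.
    for v in values:
        start += v
        yield start

# Finite input:
print(list(cumulative_sum([0, 1, 2, 1, 1, 0, 0, 2, 3])))
# Lazy, so an infinite stream works too -- take just the first five sums:
print(list(itertools.islice(cumulative_sum(itertools.count(1)), 5)))
```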
Hi,
just a straightforward, naive approach...:

lst_int = [0, 1, 2, 1, 1, 0, 0, 2, 3]
acc_int = 0
output_lst = []
for i in lst_int:
    acc_int += i
    output_lst.append(acc_int)
print output_lst
vbr
<python>
import copy
import itertools

def acc(items, copy=copy.deepcopy):
    items = iter(items)
    result = next(items)
    yield copy(result)
    for item in items:
        result += item
        yield copy(result)

print list(acc([0, 1, 2, 1, 1, 0, 0, 2, 3]))
print list(itertools.islice(acc(itertools.count()), 10))
print list(acc(['a', 'b', 'c']))
print list(acc([[a], [b], [c]]))  # assumes a, b and c are already bound
</python>
Output:
[0, 1, 3, 4, 5, 5, 5, 7, 10]
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
['a', 'ab', 'abc']
[[a], [a, b], [a, b, c]]
Without copy.deepcopy() the last line would be:
[[a, b, c], [a, b, c], [a, b, c]]
The copy=copy.deepcopy parameter allows for things like this:
>>> print list(acc([[a], [b], [c]], tuple))
[(a,), (a, b), (a, b, c)]
or:
>>> print list(acc([['a'], ['b'], ['f'], ['s'], ['c'], ['g']], max))
['a', 'b', 'f', 's', 's', 's']
or:
>>> data = [[0], [1], [2], [1], [1], [2], [3]]
>>> print list(acc(data, lambda x: float(sum(x)) / float(len(x))))
[0.0, 0.5, 1.0, 1.0, 1.0, 1.1666666666666667, 1.4285714285714286]
Endless possibilities in an endless universe.
Regards,
Mick.
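In Python 3 terms, Mick's acc generator might read as below. The parameter is renamed from copy to finalize here (an editorial choice, to avoid shadowing the module name), and the bare a, b, c examples are replaced with string literals so the sketch is self-contained:

```python
import copy

def acc(items, finalize=copy.deepcopy):
    # Fold each item into a running result and yield a snapshot
    # (or projection) of it after every step.
    items = iter(items)
    result = next(items)
    yield finalize(result)
    for item in items:
        result += item
        yield finalize(result)

print(list(acc([0, 1, 2, 1, 1, 0, 0, 2, 3])))   # plain cumulative sum
print(list(acc(['a', 'b', 'c'])))               # strings concatenate
print(list(acc([['a'], ['b'], ['c']], tuple)))  # snapshot each step as a tuple
```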
Maybe not pythonic, but straightforward:

>>> import numpy
>>> x = [0, 1, 2, 1, 1, 0, 0, 2, 3]
>>> numpy.cumsum(x)
array([ 0,  1,  3,  4,  5,  5,  5,  7, 10])
An example with a class:

class CumulativeSum(object):
    def __init__(self, start=0):
        self._current = start

    def __call__(self, value):
        self._current += value
        return self._current

>>> cumulative_sum = CumulativeSum(0)
>>> map(cumulative_sum, x)
[0, 1, 3, 4, 5, 5, 5, 7, 10]
Dirty:

current = 0
def cumulative_sum(value):
    global current
    current += value
    return current

>>> map(cumulative_sum, x)
[0, 1, 3, 4, 5, 5, 5, 7, 10]
Weird:

def cumulative_sum_reducer(x, y):
    x.append(x[-1] + y)
    return x

>>> reduce(cumulative_sum_reducer, x, [0])
[0, 0, 1, 3, 4, 5, 5, 5, 7, 10]
Cheers
Andre
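Under Python 3, map returns an iterator and reduce lives in functools, so Andre's class-based and reducer variants would read roughly as follows (an editorial translation, not code from the thread):

```python
from functools import reduce

x = [0, 1, 2, 1, 1, 0, 0, 2, 3]

class CumulativeSum:
    def __init__(self, start=0):
        self._current = start

    def __call__(self, value):
        # Each call advances and returns the running total.
        self._current += value
        return self._current

print(list(map(CumulativeSum(0), x)))  # map is lazy in Python 3, hence list()

def cumulative_sum_reducer(acc, y):
    # Append the new running total onto the accumulator list.
    acc.append(acc[-1] + y)
    return acc

print(reduce(cumulative_sum_reducer, x, [0]))  # note the extra leading seed 0
```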
> Hello,
> I have a list of integers: x = [ 0, 1, 2, 1, 1, 0, 0, 2, 3 ] And would
> like to compute the cumulative sum of all the integers
> from index zero into another array. So for the array above, I should
> get: [ 0, 1, 3, 4, 5, 5, 5, 7, 10 ]
[pedant]
The above are *lists*, not arrays. Python has arrays, but you have to
call "import array" to get them.
[/pedant]
> What is the best way (or pythonic way) to get this.
Others have given you a plethora of advanced, complicated and obscure
ways to solve this question, but I haven't seen anyone give the simplest
method (simple as in no tricks or advanced features):
data = [0, 1, 2, 1, 1, 0, 0, 2, 3]
csums = []
for x in data:
    if csums:
        y = x + csums[-1]
    else:
        y = x
    csums.append(y)
We can save some code with the ternary operator:
data = [0, 1, 2, 1, 1, 0, 0, 2, 3]
csums = []
for x in data:
    csums.append((x + csums[-1]) if csums else x)
Here's a version that writes the cumulative sum in place:
data = [0, 1, 2, 1, 1, 0, 0, 2, 3]
for i in range(1, len(data)):
    data[i] += data[i-1]
--
Steven
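All of Steven's variants produce the same list; a quick self-check in Python 3 syntax (editorial, just to confirm the three agree):

```python
data = [0, 1, 2, 1, 1, 0, 0, 2, 3]

# Version with the ternary operator: branch on the empty list.
csums = []
for x in data:
    csums.append((x + csums[-1]) if csums else x)

# In-place version: overwrite a copy of the input.
inplace = list(data)
for i in range(1, len(inplace)):
    inplace[i] += inplace[i - 1]

print(csums)
print(inplace)
assert csums == inplace == [0, 1, 3, 4, 5, 5, 5, 7, 10]
```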
Now that Steven's given you the simple, pythonic way, I'll just mention
the advanced, complicated and obscure way that might be vaguely familiar
if you're coming from a functional programming background:
x = [0, 1, 2, 1, 1, 0, 0, 2, 3]

def running_sum(result, current_value):
    return result + [result[-1] + current_value if result else current_value]

reduce(running_sum, x, [])
Having offered this, I don't recall ever seeing reduce used in real
python code, and explicit iteration is almost always preferred.
--
Brian
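For completeness, the same one-liner under Python 3, where reduce has moved to functools (an editorial translation):

```python
from functools import reduce

x = [0, 1, 2, 1, 1, 0, 0, 2, 3]

def running_sum(result, current_value):
    # Append the new running total; note result + [...] copies the
    # accumulator list on every step.
    return result + [result[-1] + current_value if result else current_value]

print(reduce(running_sum, x, []))
```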
Yes, even I have noticed that reduce is a tad under-used function.
So, I guess no function like "accumulate" below exists in the standard
lib.

def accumulate(proc, seed, seq):
    ret = []
    for i in seq:
        seed = proc(seed, i)  # carry the running value forward
        ret.append(seed)
    return ret

x = [0, 1, 2, 1, 1, 0, 0, 2, 3]
print accumulate(lambda x, y: x + y, 0, x)
My guess is that accumulate can be used in many more scenarios.
Regards,
-Dhruv.
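As it happens, Python 3.2 later added exactly this builtin as itertools.accumulate, which defaults to addition and (from 3.3) accepts an arbitrary binary function:

```python
import itertools

x = [0, 1, 2, 1, 1, 0, 0, 2, 3]
print(list(itertools.accumulate(x)))        # running sums
print(list(itertools.accumulate(x, max)))   # running maximum, via the func argument
```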
not really :)
It's just that I was wondering if a built-in function for doing such
things (which I find myself doing increasingly with an explicit loop)
exists.
Regards,
-Dhruv.
> On Jul 19, 4:28 pm, Peter Otten <__pete...@web.de> wrote:
>> dhruvbird wrote:
>> > I have a list of integers: x = [ 0, 1, 2, 1, 1, 0, 0, 2, 3 ]
>> > And would like to compute the cumulative sum of all the integers
>> > from index zero into another array. So for the array above, I should
>> > get: [ 0, 1, 3, 4, 5, 5, 5, 7, 10 ]
>> > What is the best way (or pythonic way) to get this.
>>
>> Homework?
>
> not really :)
>
> It's just that I was wondering if a built-in function for doing such
> things (which I find myself doing increasingly with an explicit loop)
> exists.
>
Why would you find yourself doing it more than once? Write it once in a
function and then just re-use the code.
That is not really any good, because Python lists are actually vectors:
result + [...] copies the whole old list, making your function take
quadratic time. It would be fine in an FP language where lists are
chains of cons cells and result + [...] would allocate just a single cons.
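The quadratic blow-up is easy to see: result + [...] builds a brand-new list of length n at step n, so n steps copy roughly n²/2 elements in total. A linear alternative mutates one accumulator with append instead; both versions are sketched below (editorial names, not from the thread):

```python
from functools import reduce

def running_sum_quadratic(result, v):
    # Copies the whole accumulator every step: O(n) per call, O(n^2) overall.
    return result + [result[-1] + v if result else v]

def running_sum_linear(result, v):
    # Mutates in place: amortized O(1) per call, O(n) overall.
    result.append(result[-1] + v if result else v)
    return result

x = [0, 1, 2, 1, 1, 0, 0, 2, 3]
print(reduce(running_sum_quadratic, x, []))
print(reduce(running_sum_linear, x, []))
```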
I think Peter Otten's solution involving a generator is the one most in
the current Python spirit. It's cleaner (for my tastes) than the ones
that use things like list.append.
nice! Peter
>> Having offered this, I don't recall ever seeing reduce used in real
>> python code, and explicit iteration is almost always preferred.
>
> Yes, even I have noticed that reduce is a tad under-used function.
Yes, I had a use case for it once, but it wasn't worth the trouble.
"map" is often useful, but "reduce", not so much.
Python isn't really a functional language. There's no bias toward
functional solutions, lambdas aren't very general, and the performance
isn't any better. Nor is any concurrency provided by "map" or "reduce".
So there's no win in trying to develop cute one-liners.
John Nagle
Yes, agreed.
However, there is:
1. scope for optimization (for example, returning generators instead of
lists) at every stage when using functions -- these functions can be
changed internally as long as the external guarantees they provide
remain essentially unchanged.
2. a readability win, because you express your intent (the operations)
rather than the mechanics.
For example, if I want the product of the square roots of all odd
integers in an array, I can say:
answer = reduce(product, map(math.sqrt, filter(lambda x: x % 2 == 1,
some_array_with_ints)))
While I agree that Python may not have been conceived as a functional
language, it is powerful and flexible enough to be one, or at least to
decently support such paradigms.
Regards,
-Dhruv.
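Dhruv's pipeline, rendered runnable in Python 3: reduce comes from functools, and operator.mul stands in for the undefined product helper (both substitutions are editorial; the sample input list is hypothetical too):

```python
import math
import operator
from functools import reduce

some_array_with_ints = [1, 4, 9, 16, 25]

# Product of the square roots of the odd integers: sqrt(1) * sqrt(9) * sqrt(25)
answer = reduce(operator.mul,
                map(math.sqrt,
                    filter(lambda n: n % 2 == 1, some_array_with_ints)),
                1.0)
print(answer)  # 1 * 3 * 5 = 15.0
```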
Too bad about the lack of concurrency; there are many places where that
would be nice.
Geremy Condra
Besides the many places where the current behaviour is exactly what is
wanted, there are some places where concurrency would be helpful. So I
wouldn't call it a "lack" of concurrency, as that implies it's a missing
feature of what both builtins are meant to provide. Just use one of the
map-reduce frameworks out there if you need concurrency in one way or
another. Special needs are not what builtins are for.
Stefan
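One such option in the standard library itself is multiprocessing.Pool, whose map method fans calls out across worker processes; a minimal sketch (the square function is a made-up example):

```python
from multiprocessing import Pool

def square(n):
    # Any picklable top-level function can be distributed across processes.
    return n * n

if __name__ == '__main__':
    with Pool(4) as pool:
        # Same semantics as the builtin map, but executed concurrently.
        print(pool.map(square, range(10)))
```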
At least for large arrays, this is the kind of task where NumPy will
help.
>>> import numpy as np
>>> np.cumsum([ 0, 1, 2, 1, 1, 0, 0, 2, 3 ])
array([ 0,  1,  3,  4,  5,  5,  5,  7, 10])
Agreed
--
Aahz (aa...@pythoncraft.com) <*> http://www.pythoncraft.com/
"....Normal is what cuts off your sixth finger and your tail..." --Siobhan