array of complex variables


Braxton Osting

Sep 17, 2009, 2:40:56 PM
to apam-pyt...@googlegroups.com
I want to create an array to store 100 complex values before a for
loop. In matlab, I would just write
u = zeros(100,1)

and then insert the complex values into this array as I please.

I can write the same thing in python:
u=zeros(100)

but when I set
u[0] = 1j

I get the error
TypeError: can't convert complex to float; use abs(z)

How do I create an empty vector prepared to accept complex values?

Thanks,
Braxton

francois monard

Sep 17, 2009, 2:42:13 PM
to apam-pyt...@googlegroups.com
u = zeros(100, dtype='complex').... or 'complex128'
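[Editor's note: a minimal sketch of this fix, using the original poster's example:]

```python
import numpy

# Preallocate 100 complex zeros; dtype='complex' is complex128 (a pair of 64-bit floats)
u = numpy.zeros(100, dtype='complex')
u[0] = 1j            # assignment now works -- no TypeError
u[1] = 3.0 + 4.0j
```

With a plain `zeros(100)` the array is float64, so assigning `1j` raises the TypeError above; declaring the dtype up front avoids it.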
--
------------------------------------------------------------
Francois Monard
------------------------------------------------------------

Ethan Coon

Sep 17, 2009, 2:53:26 PM
to apam-pyt...@googlegroups.com
This is actually true in general, and makes a big difference if you're
using things like Cython or are otherwise concerned with efficiency...
all of the array-generation functions in numpy take a dtype option.


In [1]: import numpy
In [2]: u = numpy.ones((100,),'bool')
In [3]: u.all()
Out[3]: True

etc...
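[Editor's note: a quick sketch of the same dtype keyword across a few other constructors; the shapes and types here are illustrative, not from the thread:]

```python
import numpy

z = numpy.zeros((100,), dtype='complex128')  # complex zeros
e = numpy.empty((50,), dtype='int32')        # uninitialized int32 storage (contents arbitrary)
r = numpy.arange(10, dtype='float64')        # typed range 0.0 .. 9.0
```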

Ethan

--
-------------------------------------------
Ethan Coon
DOE CSGF - Graduate Student
Dept. Applied Physics & Applied Mathematics
Columbia University
212-854-0415

http://www.ldeo.columbia.edu/~ecoon/
-------------------------------------------


Anil Raj

Sep 17, 2009, 3:06:07 PM
to apam-pyt...@googlegroups.com
while we're on the topic of typing variables, what is the difference between defining 'runtime' type and 'compile-time' type? (i pulled those words from a cython tutorial)

import numpy as np

IDXTYPE = np.int
ctypedef np.int_t IDXTYPE_t

also, is there a difference between 'int' and 'numpy.int' ? or does it not matter?

Anil

Ethan Coon

Sep 17, 2009, 3:39:00 PM
to apam-pyt...@googlegroups.com

On Thu, 2009-09-17 at 15:06 -0400, Anil Raj wrote:
> while we're on the topic of typing variables, what is the difference
> between defining 'runtime' type and 'compile-time' type? (i pulled
> those words from a cython tutorial)

I'm not sure, but I do know there is a casting that happens at runtime
if they're not the same (meaning it's slower). I don't know when these
would not be the same, or even what happens if they're not.

>
> import numpy as np
>
> IDXTYPE = np.int
> ctypedef np.int_t IDXTYPE_t
>
> also, is there a difference between 'int' and 'numpy.int' ? or does it
> not matter?
>

Yes and maybe. "numpy.int" is specifically set up to be an alias for
"int", which takes whatever size/format your machine considers the
natural integer. "numpy.int32", "numpy.int64", and friends, on the
other hand, pin down a specific sizeof(int), which you should use
whenever you care about the precision of your variables (e.g. if you
need very large integers). When precision doesn't matter, "numpy.int"
is the right choice.

That way you get, by default, what the machine does best (int).
If you hardwire in "int", you'll always get the machine's default,
but if you write "numpy.int", you can change it with one line early in
your code and force the same precision on all machines (useful for
comparing solutions to ensure correctness).
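[Editor's note: a minimal sketch of that one-line switch, using a module-level alias (the name IDXTYPE is hypothetical, echoing Anil's snippet) rather than overwriting numpy.int itself:]

```python
import numpy

# Pick the integer precision in ONE place, early in the code;
# flip this to numpy.int32 to change precision everywhere at once
IDXTYPE = numpy.int64

idx = numpy.zeros(100, dtype=IDXTYPE)      # every array inherits the choice
counts = numpy.arange(10, dtype=IDXTYPE)
```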

Ethan

Ethan Coon

Sep 17, 2009, 4:02:47 PM
to apam-pyt...@googlegroups.com
Actually, there's a bit more subtlety here than I thought, and therefore
even more reason to be careful about your typing, especially in Cython
code. Check out the session below... in summary, numpy types its
array elements as numpy.float only if numpy.float has been assigned to
be a genuine numpy type. By default, however, numpy.float is Python's
float, which is not itself a numpy scalar type. So by default, the type
of a numpy array element is not numpy.float (which seems weird to me).

So if you type everything as numpy.float, you'll get whatever numpy
chooses, which is fine in most cases. It also means you can, at the
beginning of your simulation, set numpy.float to a desired precision
and then get that precision throughout.

If you're using Cython, this is especially important: if you don't
initialize your arrays to the correct type, Cython will constantly
have to re-cast, which slows everything down.

Ethan


In [1]: import numpy

In [2]: numpy.float is float
Out[2]: True

In [3]: a = numpy.array([1.0])

In [4]: type(a[0])
Out[4]: <type 'numpy.float64'>

In [5]: type(a[0]) is numpy.float
Out[5]: False

In [6]: type(a[0]) is numpy.float64
Out[6]: True

In [7]: type(a[0]) is numpy.float_
Out[7]: True

In [8]: a = numpy.array([1.0], numpy.float)

In [9]: type(a[0]) is numpy.float
Out[9]: False

In [10]: type(a[0]) is numpy.float64
Out[10]: True

In [11]: type(a[0]) is numpy.float_
Out[11]: True

In [12]: numpy.float = numpy.float32

In [13]: numpy.float_ = numpy.float32

In [14]: a = numpy.array([1.0])

In [15]: type(a[0])
Out[15]: <type 'numpy.float64'>

In [16]: a = numpy.array([1.0], numpy.float)

In [17]: type(a[0])
Out[17]: <type 'numpy.float32'>
