I believe that Cython gives a good performance increase on numerical
code, and not so much on other applications.
Moreover, Cython relies on a C compiler, which most Windows systems
do not have.
To get a major speedup from Cython, you would need to add type
information, which involves syntax outside of standard Python.
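For illustration, a minimal sketch of the kind of type declarations
Cython expects (csum is a made-up example, not anything from this
thread):

def csum(int n):
    # the typed parameter 'int n' and the cdef declarations below are
    # Cython-only syntax; this will not run under standard CPython
    cdef int i
    cdef int total = 0
    for i in range(n):
        total += i
    return total

Without the declarations the loop still compiles, but it falls back
to generic object operations, which is where most of the speed
difference comes from.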
Fundamentally, Cython is the wrong approach for this, in my opinion.
It's syntactic glue for translating pseudo-Python into C code, much
like Pyrex. That isn't a bad thing, mind you; it's just not
applicable to speeding up the core processing of the language.
To use Cython, they'd have to convert large chunks of the stdlib to
Cython. Yes, those components would be faster, but when I write
def f(*args):
    some_cython_func(args[0])
    # ... do other things
This still invokes the mainline CPython VM, meaning you still take a
~0.5us hit just for generating the function frame itself. And that
ignores the FFI cost as well: to pass values into the extension, the
VM has to pack them into tuples/keyword dicts (hence *args/**kwargs,
although METH_O/METH_NOARGS exist as shortcuts around some of this).
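The frame cost is easy to ballpark with timeit; a rough sketch
(absolute numbers vary by machine and interpreter):

import timeit

def f(*args):
    pass

# timeit's default is 1,000,000 iterations, so the totals printed
# in seconds read roughly as microseconds per call
print(timeit.timeit("f(1)", setup="from __main__ import f"))  # Python-level call
print(timeit.timeit("len(())"))  # C-level builtin, for comparison

Much of the gap between the two numbers is exactly the frame and
argument-packing overhead described above.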
All of these things take time, and none of them are something Cython
can address, unless you're proposing shifting the actual VM
implementation itself to Cython... which isn't really possible.
> Moreover, Cython relies on a C compiler, which most Windows systems
> do not have.
>
> How is that relevant for the many server-side users of Python, such as
> Google?
By rebasing the implementation on LLVM, they don't have to care
whether it's Windows or Unix (or client or server); they just need to
either compile LLVM into the resulting binary, or ensure the library
is present.
This is *far* simpler than trying to ensure the Cython machinery
still works everywhere, and it avoids excluding platforms.
> To get a major speedup from Cython, you would need to add type
> information, which involves syntax outside of standard Python.
>
> Yes. Still far less work than rewriting the entire prototype in C, or
> writing it in C from scratch.
The gains from Cython aren't comparable to LLVM's gains on a grand
scale. Speeding up chunks of code is nice (a faster os.path.join, for
example), but that's hot-spot optimization. The potential gains from
it aren't comparable to the intended goal of a 5x speedup in
raw/normal Python code (this is the target of Unladen Swallow, aka
u-s).
Or that's the theory, at least. Either way, I'd suggest you dig into
what u-s is targeting and what Cython is targeting; they're two
rather orthogonal things. The one nifty thing is that, since u-s is
trying to maintain transparent CPython extension compatibility, you
probably will be able to use Cython-generated modules with u-s.
That's the current theory, at least...
Hope that clarifies things.
~harring