Hi Wes,
I saw your blog post (http://wesmckinney.com/blog/?p=475) and tried to comment there, but I don't belong to whatever ID system it expects, so it wouldn't let me post. So the comments below relate to that post:
I think there's something you're missing. You write "It may be in
a couple of years when the JIT-compiler improves", but that suggests
it's just a matter of coding the JIT. However, I think one of the
things exciting people about Julia is precisely that it's at an early
stage of development and there isn't yet a huge body of library code.
Consequently, if changes to the user-facing language are discovered
that would make it easier to generate fast code, they can be made now
(assuming there's consensus that it's a good trade-off, etc.). In
contrast, the thing about all the other languages mentioned is that,
as mature languages, even things everyone agrees were bad choices
can't be changed short of a "Python 3000"-style set of big changes
all at once to minimise porting problems. And you still hear that
many codebases/communities are still on Python 2.x because they don't
have the immediate block of time to fix the things that changed in 3.0.
Hopefully Julia still has maybe a few months ahead in which "good"
breaking changes (like the change in comprehension syntax) can happen.
And I'd pin my hopes on higher-level language changes, rather than the
JIT, for performance improvements.
On May 2, 9:34 pm, Wes McKinney <w...@lambdafoundry.com> wrote:
> On Wed, May 2, 2012 at 12:28 PM, Jeff Bezanson <jeff.bezan...@gmail.com> wrote:
> > Totally agree about performance tracking. I'd love to have more
> > complex benchmarks and show more numbers. It's hard to add stuff to
> > our current table since implementing the benchmarks in all the
> > languages is really tedious. It will be easier to track Julia
> > performance vs. itself over time.
> > I think it's already clear that we're slower than C; basically all of
> > our current numbers show that.
>
> To further dangle a carrot, the work you're doing on the compiler is
> probably the most critical part of Julia but also the most opaque to
> end users. Having a daily updating graph of Julia performance is a
> concrete way to demonstrate progress and more importantly to get lots
> of kudos from the community ;)
>
> > On Wed, May 2, 2012 at 12:09 PM, Wes McKinney <w...@lambdafoundry.com> wrote:
> >>> On Tue, May 1, 2012 at 7:51 PM, Andrei Jirnyi <laxyf...@gmail.com> wrote:
> >>>> On May 1, 3:15 pm, Jeff Bezanson <jeff.bezan...@gmail.com> wrote:
> >>>>> Your test1() takes about 16ms for me. But, test2() takes 1.3ms. So if
> >>>>> you need to tweak performance you can "unvectorize" by hand with less
> >>>>> effort than Cython, and usually without giving up polymorphism. This
> >>>>> is where Julia really does well, but of course the big gap between
> >>>>> test1 and test2 is still something for us to work on.
>
> >>>> It is quite a big difference -- what is the reason for it? I would
> >>>> imagine the Julia code for sum() must be pretty similar to the loop
> >>>> inside your test2(), so why does it not run just as fast? Is the
> >>>> function call overhead the culprit here -- and if this is the case
> >>>> should one avoid calling functions in loops and manually inline?
>
> >>>> --aj
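Since the actual test1/test2 code isn't shown in the quoted thread, here is a minimal sketch of the vectorized-vs-devectorized pattern being discussed; the function bodies and the sum-of-squares workload are my guesses, not the real benchmark. The vectorized version allocates a whole temporary array before reducing it, while the hand-"unvectorized" loop makes a single pass with no temporaries, which is the kind of gap Jeff describes:

```julia
# Hypothetical reconstruction of the kind of comparison discussed above
# (not the actual benchmark code from the thread).

# "Vectorized" style: x .^ 2 allocates a full temporary array,
# which sum() then reduces in a second pass.
function test1(x)
    return sum(x .^ 2)
end

# Hand-"unvectorized" style: one pass over x, no temporary allocation.
function test2(x)
    s = 0.0
    for v in x
        s += v * v
    end
    return s
end

x = rand(1_000_000)
@assert isapprox(test1(x), test2(x))  # same result, different allocation behavior
```

Timing each with `@time test1(x)` and `@time test2(x)` makes the allocation difference visible directly.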