Dan Luu's critique of Julia: needs commented code, better testing, error handling, and bugs fixed.

Christian Peel

Dec 29, 2014, 11:36:19 AM
to julia...@googlegroups.com
Dan Luu has a critique of Julia up at http://danluu.com/julialang/  (reddit thread at http://bit.ly/1wwgnks)
Is the language feature-complete enough that there could be an entire point release that targeted some of the less-flashy things he mentioned?  I.e. commented code, better testing, error handling, and just fixing bugs?  If it's not there, are there any thoughts on when it would be?

Keno Fischer

Dec 29, 2014, 11:44:35 AM
to julia...@googlegroups.com
I've written up some of my thoughts on the issues raised in this article in the hacker news discussion, but to answer your question, there are still a number of big items that need to be tackled by the core team. I do think it might make some sense to have a docs/tests sprint just prior to the 0.4 release (we had a doc sprint before the 0.1? release, which I think was pretty successful).

There is also plenty of opportunity for tests and documentation for people outside the core team. API design discussions can also happen even if people don't know how to implement them - it's much easier to implement an API that's already designed than to do both: design the API and implement it.

Valentin Churavy

Dec 29, 2014, 11:51:07 AM
to julia...@googlegroups.com

Tim Holy

Dec 29, 2014, 12:30:26 PM
to julia...@googlegroups.com
In my personal opinion, his post is a mix of on-target and off-target. I
completely agree with the inadequacy of our testing, particularly in packages.
However, it's also not entirely simple: julia is _so_ composable that it's
hard to come up with tests that cover everything. Until recently we've not
even had the ability to find out how much of Base is covered by tests, and
inlining makes even that a little bit tricky to determine. That said, the
situation is improving. At one point I put out a call to julia-users to tackle
writing more tests (it doesn't take deep expertise to do so), but I don't
think that netted a lot of contributions.

In terms of off-target, in particular I disagree pretty strongly with his
feeling that Base should catch lots of exceptions and try to recover. That
would make it basically impossible to deliver good performance, and it also
(in my view) jeopardizes sensible behavior.

--Tim

Tobias Knopp

Dec 29, 2014, 2:18:35 PM
to julia...@googlegroups.com
The post reads like a rant. Like every software project out there, Julia has bugs. So is it really necessary to complain about the bugs of an open source project in a blog post?

Stefan Karpinski

Dec 29, 2014, 2:39:34 PM
to julia...@googlegroups.com
There are a lot of very legitimate complaints in this post, but also other things I find frustrating.

On-point

Testing & coverage could be much better. Some parts of Base were written a long time ago, before we wrote tests for new code. Those can have a scary lack of test coverage. Testing of Julia packages ranges from non-existent to excellent. This also needs a lot of work. I agree that the current way of measuring coverage is nearly useless. We need a better approach.

The package manager really, really needs an overhaul. This is my fault and I take full responsibility for it. We've been waiting a frustratingly long time for libgit2 integration to be ready to use. Last I checked, I think there was still some Windows bug pending.

Julia's uptime on Travis isn't as high as I would like it to be. There have been a few periods (one of which Dan unfortunately hit), when Travis was broken for weeks. This sucks and it's a relief whenever we fix the build after a period like that. Fortunately, since that particularly bad couple of weeks, there hasn't been anything like that, even on Julia master, and we've never had Julia stable (release-0.3 currently) broken for any significant amount of time.

Documentation of Julia internals. This is getting a bit better with the developer documentation that has recently been added, but Julia's internals are pretty inscrutable. I'm not convinced that many other programming language implementations are any better about this, but that doesn't mean we shouldn't improve this a lot.

Frustrating

Mystery Unicode bug – Dan, I've been hearing about this for months now. Nobody has filed any issues with UTF-8 decoding in years (I just checked). The suspense is killing me – what is this bug? Please file an issue, no matter how vague it may be. Hell, that entire throwaway script can just be the body of the issue and other people can pick it apart for specific bugs.

The REPL rewrite, among other things, added tests to the REPL. Yes, it was a disruptive transition, but the old REPL needed to be replaced. It was a massive pile of hacks around GNU readline and was incomprehensible and impossible to test. Complaining about the switch to the new REPL which is actually tested seems misplaced.

Unlike Python, catching exceptions in Julia is not considered a valid way to do control flow. Julia's philosophy here is closer to Go's than to Python's – if an exception gets thrown it should only ever be because the caller screwed up and the program may reasonably panic. You can use try/catch to handle such a situation and recover, but any Julia API that requires you to do this is a broken API. So the fact that

When I grepped through Base to find instances of actually catching an exception and doing something based on the particular exception, I could only find a single one.

actually means that the one instance is a place where we're doing it wrong and hacking around something we know to be broken. The next move is to get rid of that one instance, not add more code like this. The UDP thing is a problem and needs to be fixed.
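To make this concrete, here's a tiny sketch of the distinction (hypothetical helpers, not actual Base APIs): an API that forces the caller into try/catch for an expected outcome is broken; the expected outcome should be an ordinary return value.

    # Broken style: a missing key is an expected outcome, but the only
    # way for the caller to handle it is to catch.
    value_or_throw(d::Dict, k) = haskey(d, k) ? d[k] : throw(KeyError(k))

    # Julian style: expected absence is a normal result; no catch required.
    value_or_default(d::Dict, k, default) = haskey(d, k) ? d[k] : default

    d = Dict("a" => 1)
    value_or_default(d, "b", 0)   # returns 0, no exception machinery
    # value_or_throw(d, "b")      # throws KeyError; only appropriate if a
                                  # missing key means the caller screwed up

Base's own `get(d, k, default)` follows the second style.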

The business about fixing bugs getting Dan into the top 40 is weird. It's not quite accurate – Dan is #47 by commits (I'm assuming that's the metric here) with 28 commits, so he's in the top 50 but not the top 40. There are 23 people who have 100 commits or more, and that's roughly the group I would consider to be the "core devs". This paragraph is frustrating because it gives the imo unfair impression that not many people are working on Julia. Having 23+ people working actively on a programming language implementation is a lot.

Ranking of how likely Travis builds are to fail by language doesn't seem meaningful. A huge part of this is how aggressively each project uses Travis. We automatically test just about everything, even completely experimental branches. Julia packages can turn Travis testing on with a flip of a switch. So lots of things are broken on Travis because we've made it easy to use. We should, of course, fix these things, but other projects having higher uptime numbers doesn't imply that they're more reliable – it probably just means they're using Travis less.

In general, a lot of Dan's issues only crop up if you are using Julia master. The Gadfly dates regression is probably like this and the two weeks of Travis failures was only on master during a "chaos month" – i.e. a month where we make lots of reckless changes, typically right after releasing a stable version (in this case it was right after 0.3 came out). These days, I've seen a lot of people using Julia 0.3 for work and it's pretty smooth (package management is by far the biggest issue and I just take care of that myself). If you're a normal language user, you definitely should not be using Julia master.

Jeff Lunt

Dec 29, 2014, 2:54:58 PM
to julia...@googlegroups.com
Completely agree on the exception handling philosophy (as Stefan has put it). Not only should you not rely on exception handling to make your API reliable; encouraging exception handling also has a way of making folks think very defensively: "Oh, so I should handle every possible scenario, from a bug in my code to recovering from a hard drive crash and OS re-install," which is just wrong, because each unit of code, generally, should worry about its own responsibilities and doing its job well.

It also encourages retry-thinking, such as, "If it's broken, reboot/retry an arbitrary number of times," rather than, "If it's broken, figure out why and fix it so it doesn't ever break again."

Mike Innes

Dec 29, 2014, 3:31:05 PM
to julia...@googlegroups.com
Slightly OT, but I imagine the try/catch Dan refers to is the display system. Unfortunately it is a horribly brittle way to implement that code, and it still has the potential to cause bugs (due to the fact that you can't tell where in the stack the error came from). I'm prototyping something to try to solve that and a lot of the other issues with the current display system, though who knows if it'll ever end up in Base.
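Roughly the problem, as a toy sketch (hypothetical type and modern syntax, not the actual display code):

    struct Point
        x::Int
    end

    # A user show method with a bug in it: Point has no field `y`.
    Base.show(io::IO, p::Point) = print(io, "Point(", p.y, ")")

    try
        show(stdout, Point(1))
    catch err
        # `err` alone doesn't say whether show() itself failed or something
        # deeper did, so a display system that catches broadly here risks
        # masking real bugs in user code.
    end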

Dan Luu

Dec 29, 2014, 3:37:33 PM
to julia...@googlegroups.com
Here are a few responses to the stuff in this thread, plus Keno's comment on HN.

The Travis stats are only for builds against master, i.e., only for
things that got merged. BTW, this is mentioned in the linked post. For
every project listed in http://danluu.com/broken-builds/, only the
"main" branch on github was used, with the exception of a couple
projects where that didn't make sense (IIRC, scala has some weird
setup, as does d). "Using travis less" doesn't really make sense in
that context. I heard that response a lot from people in various
communities when I wrote that post, but from checking the results, the
projects that have better travis results are more rigorous and, on
average, the results are biased against the projects with the highest
uptimes. There are exceptions, of course.

I have a "stable" .3 build I use for all my Julia scripts and IIRC
that's where I saw the dates issue with Gadfly. I dunno, maybe I
should only use older releases? But if I go to the Julia download
page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true,
but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I
haven't tried with 0.3.4 since I haven't touched Julia for a while.
It's possible that the issue is now fixed, but the issue is still open
and someone else also commented that they're seeing the same problem.

Sorry, I'm not being a good open source citizen and filing bugs, but
when you run into 4 bugs when writing a half-hour script, filing bugs
is a drag on productivity. A comment I've gotten here and elsewhere is
basically "of course languages have bugs!". But there have been
multiple instances where I've run into more bugs in an hour of Julia
than I've ever hit with scala and go combined, and scala is even known
for being buggy! Between scala and go, I've probably spent 5x or 10x
the time I've spent in Julia. Just because some number will be
non-zero doesn't mean that all non-zero numbers are the same. There
are various reasons that's not a fair comparison. I'm just saying that
I expect to hit maybe one bug per hour while writing Julia, and I
expect maybe 1 bug per year for most languages, even pre-1.0 go.

I don't think 40 vs. 50 really changes the argument, but of course
I've been drifting down in github's ranking since I haven't done any
Julia lately and other people are committing code.

I don't think it's inevitable that language code is inscrutable. If I
grep through the go core code (excluding tests, but including
whitespace), it's 9% pure comment lines, and 16% lines with comments.
It could use more comments, but it's got enough comments (and
descriptive function and variable names) that I can go into most files
and understand the code.

It sounds like, as is, there isn't a good story for writing robust
Julia programs? There are bugs and exceptions will happen. Putting
aside that `catch` non-deterministically fails to catch, what's a
program supposed to do when some bug in Base causes a method to throw
a bounds error? You've said that the error handling strategy is
go-like, but I basically never get a panic in go (I actually can't
recall it ever having happened, although it's possible I've gotten one
at some point). That's not even close to being true in Julia.
Terminating is fine for scripts where I just want to get at the source
of the bug and fix it, but it's not so great for programs that
shouldn't ever crash or corrupt data? Is the answer just "don't write
stuff like that in Julia"?

On Mon, Dec 29, 2014 at 1:38 PM, Stefan Karpinski <ste...@karpinski.org> wrote:
> Mystery Unicode bug - Dan, I've been hearing about this for months now.
> Nobody has filed any issues with UTF-8 decoding in years (I just checked).
> The suspense is killing me - what is this bug? Please file an issue, no
> matter how vague it may be. Hell, that entire throwaway script can just be
> the body of the issue and other people can pick it apart for specific bugs.
>
> The REPL rewrite, among other things, added tests to the REPL. Yes, it was a
> disruptive transition, but the old REPL needed to be replaced. It was a
> massive pile of hacks around GNU readline and was incomprehensible and
> impossible to test. Complaining about the switch to the new REPL which is
> actually tested seems misplaced.
>
> Unlike Python, catching exceptions in Julia is not considered a valid way to
> do control flow. Julia's philosophy here is closer to Go's than to Python's
> - if an exception gets thrown it should only ever be because the caller
> screwed up and the program may reasonably panic. You can use try/catch to
> handle such a situation and recover, but any Julia API that requires you to
> do this is a broken API. So the fact that
>
>> When I grepped through Base to find instances of actually catching an
>> exception and doing something based on the particular exception, I could
>> only find a single one.
>
>
> actually means that the one instance is a place where we're doing
> it wrong and hacking around something we know to be broken. The next move is
> to get rid of that one instance, not add more code like this. The UDP thing
> is a problem and needs to be fixed.
>
> The business about fixing bugs getting Dan into the top 40 is weird. It's not
> quite accurate - Dan is #47 by commits (I'm assuming that's the metric here)
> with 28 commits, so he's in the top 50 but not the top 40. There are 23
> people who have 100 commits or more, and that's roughly the group I would
> consider to be the "core devs". This paragraph is frustrating because it
> gives the imo unfair impression that not many people are working on Julia.
> Having 23+ people working actively on a programming language implementation
> is a lot.
>
> Ranking of how likely Travis builds are to fail by language doesn't seem
> meaningful. A huge part of this is how aggressively each project uses
> Travis. We automatically test just about everything, even completely
> experimental branches. Julia packages can turn Travis testing on with a flip
> of a switch. So lots of things are broken on Travis because we've made it
> easy to use. We should, of course, fix these things, but other projects
> having higher uptime numbers doesn't imply that they're more reliable - it
> probably just means they're using Travis less.
>
> In general, a lot of Dan's issues only crop up if you are using Julia
> master. The Gadfly dates regression is probably like this and the two weeks
> of Travis failures was only on master during a "chaos month" - i.e. a month

Tobias Knopp

Dec 29, 2014, 3:58:01 PM
to julia...@googlegroups.com
So you dislike Julia and encountered several bugs. Reading your posts, it seems like you want to blame someone for that. If you are not satisfied with Julia, simply do not use it.

And seriously: You cannot compare Julia with a project that has Google behind it. It's clear that they have a clearer development model and more documentation. Same goes for Rust. Julia is from and for researchers. And there are several people very satisfied with how Julia evolves (including me).

Tobias

Jeff Lunt

Dec 29, 2014, 4:05:50 PM
to julia...@googlegroups.com
To be fair, that's really an argument in Dan's favor, unless Dan is not a researcher, in which case you might be able to say that Julia is better for you because you're a researcher and Dan is not. But that would imply a domain mismatch.

To say that one likes and understands a language, warts and all, is to defend it because one knows and loves it, rather than because it is objectively the best tool.


Dan Luu

Dec 29, 2014, 4:06:43 PM
to julia...@googlegroups.com
On Mon, Dec 29, 2014 at 2:58 PM, Tobias Knopp
<tobias...@googlemail.com> wrote:
> So you dislike Julia and encountered several bugs. Reading your posts is
> like you want to blame someone for that. If you are not satisfied with Julia
> simply do not use it.

I don't really use Julia anymore! Thanks for the suggestion, though.
Also, I hope you don't mind if I rescind my comment about the
community. Yikes.

Stefan Karpinski

Dec 29, 2014, 4:12:36 PM
to Julia Users
On Mon, Dec 29, 2014 at 3:37 PM, Dan Luu <dan...@gmail.com> wrote:
Here are a few responses to the stuff in this thread, plus Keno's comment on HN.

The Travis stats are only for builds against master, i.e., only for
things that got merged. BTW, this is mentioned in the linked post. For
every project listed in http://danluu.com/broken-builds/, only the
"main" branch on github was used, with the exception of a couple
projects where that didn't make sense (IIRC, scala has some weird
setup, as does d). "Using travis less" doesn't really make sense in
that context. I heard that response a lot from people in various
communities when I wrote that post, but from checking the results, the
projects that have better travis results are more rigorous and, on
average, the results are biased against the projects with the highest
uptimes. There are exceptions, of course.

I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.
 
I have a "stable" .3 build I use for all my Julia scripts and IIRC
that's where I saw the dates issue with Gadfly. I dunno, maybe I
should only use older releases? But if I go to the Julia download
page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true,
but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I
haven't tried with 0.3.4 since I haven't touched Julia for a while.
It's possible that the issue is now fixed, but the issue is still open
and someone else also commented that they're seeing the same problem.

Entirely possible – packages are definitely not as stable in terms of remaining unbroken as Julia's stable releases are. I think it's getting better, but can still be frustrating.
 
Sorry, I'm not being a good open source citizen and filing bugs, but
when you run into 4 bugs when writing a half-hour script, filing bugs
is a drag on productivity. A comment I've gotten here and elsewhere is
basically "of course languages have bugs!". But there have been
multiple instances where I've run into more bugs in an hour of Julia
than I've ever hit with scala and go combined, and scala is even known
for being buggy! Between scala and go, I've probably spent 5x or 10x
the time I've spent in Julia. Just because some number will be
non-zero doesn't mean that all non-zero numbers are the same. There
are various reasons that's not a fair comparison. I'm just saying that
I expect to hit maybe one bug per hour while writing Julia, and I
expect maybe 1 bug per year for most languages, even pre-1.0 go.

This seems like a crazy high number to me. My experience with other people who are using Julia for work has been about 1-3 legitimate Julia bugs per year. You seem to have a knack for pushing systems to their limit (plus, there was that period where you were filing bugs you found with a fuzzer).

Why not just post the throwaway script you wrote as an issue? That would take about a minute. More than anything, I'm interested in what the specific Unicode bug is. Was there a problem with UTF-8 decoding?
 
I don't think 40 vs. 50 really changes the argument, but of course
I've been drifting down in github's ranking since I haven't done any
Julia lately and other people are committing code.

No, of course, that part doesn't matter. But what was the point of including that stat at all? It makes it seem like you're trying to imply that not a lot of people have worked on the project and that fixing a small number of bugs gets you high up on the contributors list.
 
I don't think it's inevitable that language code is inscrutable. If I
grep through the go core code (excluding tests, but including
whitespace), it's 9% pure comment lines, and 16% lines with comments.
It could use more comments, but it's got enough comments (and
descriptive function and variable names) that I can go into most files
and understand the code.

I never said it was – what I said was that most programming language implementations happen to have inscrutable code, not that they must. That's not an excuse and we should do better.
 
It sounds like, as is, there isn't a good story for writing robust
Julia programs? There are bugs and exceptions will happen. Putting
aside that `catch` non-deterministically fails to catch, what's a
program supposed to do when some bug in Base causes a method to throw
a bounds error? You've said that the error handling strategy is
go-like, but I basically never get a panic in go (I actually can't
recall it ever having happened, although it's possible I've gotten one
at some point). That's not even close to being true in Julia.
Terminating is fine for scripts where I just want to get at the source
of the bug and fix it, but it's not so great for programs that
shouldn't ever crash or corrupt data? Is the answer just "don't write
stuff like that in Julia"?

At the moment, I think the way to write a reliable Julia system is to compartmentalize and make each component restartable. This is very much in the Erlang philosophy, although we're miles away from being as good as Erlang at this kind of thing. I know that it's basically the opposite of the Google approach. I don't think Julia is the best choice currently for writing mission-critical systems software.
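A toy sketch of the shape of that idea (nothing like this exists in Base; it's purely illustrative):

    # Run a worker and restart it when it dies, supervisor-style.
    function supervise(worker::Function; max_restarts::Int = 3)
        for attempt in 1:max_restarts
            try
                return worker()                    # normal completion
            catch err
                println("worker died ($err); restart $attempt/$max_restarts")
            end
        end
        error("worker failed $max_restarts times; giving up")
    end

    supervise() do
        rand() < 0.5 && error("simulated crash")
        "ok"
    end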

Stefan Karpinski

Dec 29, 2014, 4:14:07 PM
to Julia Users
Let's please take Dan's comments as a constructive critique (which it is), rather than an attack. I know Dan personally and happen to know that this is where he's coming from.

Tobias Knopp

Dec 29, 2014, 4:27:43 PM
to julia...@googlegroups.com
Stefan, ok. My advice was actually also constructive. I have tried various pieces of open source software in my life and several of them were broken. But then I simply did not use them if I was not satisfied.

I think it is clear that Julia's development model could be improved. But unless a company hires some full-time developers to work on Julia, a change in the development model is not easily done.

Cheers

Tobias 

Stefan Karpinski

Dec 29, 2014, 4:42:13 PM
to Julia Users
I think the main takeaways from Dan's post are the following:
  • Figure out a better way to measure coverage and work towards 100% coverage.
  • Make a thorough pass over all Base code and carefully examine situations where we throw exceptions to make sure they are correct and can only ever happen if the caller made an error. Document the conditions under which each exported function may raise an exception (the sketch after this list shows the pattern).
  • Improve the readability of Julia's implementation code. Rename the less scrutable functions and variables. Add comments, add to the developer docs. It's not that much code, so this isn't that awful but it is some tricky code that's in flux.
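For the second item, a minimal sketch of the pattern (a hypothetical function; the docstring syntax shown here post-dates this thread): validate caller input up front, throw only for caller error, and document exactly when the throw can happen.

    """
        nth_word(s, n)

    Return the `n`-th whitespace-separated word of `s`.

    Throws `BoundsError` if `s` has fewer than `n` words; this can only
    happen if the caller passed an out-of-range `n`.
    """
    function nth_word(s::AbstractString, n::Integer)
        words = split(s)        # splits on whitespace by default
        n in 1:length(words) || throw(BoundsError(words, n))   # caller error only
        return words[n]
    end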

Steven G. Johnson

Dec 29, 2014, 4:45:51 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 4:12:36 PM UTC-5, Stefan Karpinski wrote:
I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.

Yeah, a lot of projects use the Gitflow model, in which a develop branch is used for experimental work and master is used for (nearly) release candidates.

I can understand where Dan is coming from in terms of finding issues continually when using Julia, but in my case it's more commonly "this behavior is annoying / could be improved" than "this behavior is wrong".  It's rare for me to code for a few hours in Julia without filing issues in the former category, but out of the 300 issues I've filed since 2012, it looks like less than two dozen are in the latter "definite bug" category.

I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts.  Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts).  Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling.  What am I missing?
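For anyone following along, the two styles side by side (the FactCheck syntax is my reading of its README circa 2014, so treat it as approximate):

    # "big file full of asserts" style:
    @assert sqrt(4.0) == 2.0
    @assert sqrt(9.0) == 3.0

    # FactCheck style: the same checks, grouped and named.
    using FactCheck
    facts("sqrt behaves") do
        @fact sqrt(4.0) => 2.0
        @fact sqrt(9.0) => 3.0
    end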

Milan Bouchet-Valat

Dec 29, 2014, 4:58:41 PM
to julia...@googlegroups.com
Le lundi 29 décembre 2014 à 16:41 -0500, Stefan Karpinski a écrit :
> I think the main takeaways from Dan's post are the following:
> * Figure out a better way to measure coverage and work towards
> 100% coverage.
> * Make a thorough pass over all Base code and carefully examine
> situations where we throw exceptions to make sure they are
> correct and can only ever happen if the caller made an error.
> Document the conditions under which each exported function may
> raise an exception.
I'd add: improve the manual to make Julia's philosophy with regard to
exceptions clear (easy).

I'm only realizing today that exceptions are supposed to be raised in
Julia only when the caller is at fault. If we want packages to follow
this pattern, better make it as clear as possible.


Regards

> * Improve the readability of Julia's implementation code. Rename

Tobias Knopp

Dec 29, 2014, 5:02:59 PM
to julia...@googlegroups.com
I think one important way to improve the stability of Julia is to separate Julia and its standard library (e.g. split Base into "crucial base" and stdlib). This will help make the core rock solid and will further reduce the number of binary dependencies to a minimum. It also helps make it clearer who the maintainer of a specific set of functions is (see https://github.com/JuliaLang/julia/issues/5155).

Stefan Karpinski

Dec 29, 2014, 5:03:32 PM
to Julia Users
On Mon, Dec 29, 2014 at 4:45 PM, Steven G. Johnson <steve...@gmail.com> wrote:
I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts.  Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts).  Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling.  What am I missing?

Man, I'm glad I'm not the only one. Can someone explain what the big deal about the FactCheck approach is? Am I missing something really fundamental here?

Stefan Karpinski

Dec 29, 2014, 5:04:16 PM
to Julia Users
On Mon, Dec 29, 2014 at 4:58 PM, Milan Bouchet-Valat <nali...@club.fr> wrote:
I'd add: improve the manual to make Julia's philosophy with regard to
exceptions clear (easy).

I'm only realizing today that exceptions are supposed to be raised in
Julia only when the caller is at fault. If we want packages to follow
this pattern, better make it as clear as possible.

Yes, we should do this. Also relevant: https://github.com/JuliaLang/julia/issues/7026 

Jameson Nash

Dec 29, 2014, 5:05:27 PM
to julia...@googlegroups.com
I imagine there are advantages to frameworks in that you can mark expected failures and continue through the test suite after one fails, to give a better % success/failure metric than Julia's simplistic go/no-go approach.

I used JUnit many years ago for a high school class, and found that, relative to `@assert` statements, it had more options for asserting various approximate and conditional statements that would otherwise have been very verbose to write in Java. Browsing back through its website now (http://junit.org/, under Usage and Idioms), it apparently now has some more features for testing, such as rules, theories, timeouts, and concurrency. Those features would likely help improve testing coverage by making tests easier to describe.
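For comparison, the test macros that ship with Julia already cover some of this ground (spelling from the current Test module; the 2014-era equivalent lived in Base.Test):

    using Test
    @test 1 + 1 == 2                      # on failure, reports the evaluated values
    @test 0.1 + 0.2 ≈ 0.3                 # built-in approximate comparison
    @test_throws DomainError sqrt(-1.0)   # assert that an error *is* raised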

Jeff Bezanson

Dec 29, 2014, 5:05:57 PM
to julia...@googlegroups.com
Dan, I know there are many areas where we should improve. For now I
share Stefan's frustration about the mystery bugs you keep alluding
to. I don't expect a full detailed report on each one, and I get that
you don't want to interrupt work to file them. But we have now seen at
least two blog posts and one long list post inspired in part by these
bugs. If you have time to write all that, you have time to at least
send us your script. We have not often asked others to fix their own
bugs, and we have not been known to call people brats, but we have
been known to fix lots of bugs. I know fixing bugs one-by-one is not
as good as systematically improving our tests and process, but it is
more helpful than alarmist invective.

Jeff Bezanson

Dec 29, 2014, 5:13:35 PM
to julia...@googlegroups.com
Reporting % success rather than demanding 100% success would seem to
be a strictly weaker testing policy.

Arguably, with macros you need fewer features since `@test a == b`
could recognize an equality test and report what a and b were. But one
feature we could stand to add is asserting properties that must be
true for all arguments, and running through lots of combinations of
instances. However, in reality we do some of this already, since the
"files full of asserts" also in many cases do nested loops of tests.
Saying we do "just asserts" obscures this fact.
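A toy illustration of the macro point (a hypothetical macro, not Base's implementation; assumes a current Julia where `a == b` parses as a call):

    macro check(ex)
        # Recognize `lhs == rhs` and report both sides on failure.
        if ex isa Expr && ex.head == :call && ex.args[1] == :(==)
            lhs, rhs = ex.args[2], ex.args[3]
            return quote
                l, r = $(esc(lhs)), $(esc(rhs))
                l == r || println("FAILED: ", $(string(ex)), "  lhs = ", l, ", rhs = ", r)
            end
        end
        # Fallback: just evaluate the condition.
        return :( $(esc(ex)) || println("FAILED: ", $(string(ex))) )
    end

    @check 1 + 1 == 3    # prints: FAILED: 1 + 1 == 3  lhs = 2, rhs = 3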

Patrick O'Leary

Dec 29, 2014, 5:37:15 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 4:13:35 PM UTC-6, Jeff Bezanson wrote:
But one
feature we could stand to add is asserting properties that must be
true for all arguments, and running through lots of combinations of
instances.

Anyone who is interested in this is welcome to use https://github.com/pao/QuickCheck.jl as a starting point.
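The core idea fits in a few lines; here is a hand-rolled sketch (QuickCheck.jl's actual API differs):

    # Check a property against many randomly generated inputs.
    function check_property(prop::Function, gen::Function; trials::Int = 100)
        for _ in 1:trials
            x = gen()
            prop(x) || error("property failed for input: $(repr(x))")
        end
        return true
    end

    # Example property: sorting is idempotent.
    check_property(v -> sort(sort(v)) == sort(v),
                   () -> rand(Int, rand(0:10)))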

Dan Luu

Dec 29, 2014, 6:42:32 PM
to julia...@googlegroups.com
Welp, I ended up checking up on this thread again because of a
conversation with Stefan, so here are some more responses.

I tried https://github.com/dcjones/Gadfly.jl/issues/462 on the current
release binary on julialang.org and it still fails as before, so that
wasn't because I was running off of master. I updated the issue.

Yes, I agree that I seem to run into an unusual number of bugs. My
guess is it's partially because I basically don't do any of the kind
of data stuff people normally do with Julia and I'm off doing stuff
that's untested and has rarely, if ever, been used before. But IIRC, I
ran into a bug where code_llvm and code_native would segfault.
Sometimes stuff just breaks and doesn't get fixed for a while.

I don't really want to get sucked into a discussion about test
methodologies, so I'm happy to concede the point if it will get me out
of that debate.

Alright, I'll see if I can find my script somewhere and copy+paste it
to make a bug report, but it's a pretty obnoxious one. It's a
write-once throwaway script that probably has all sorts of stuff wrong
with it. Also, it takes as input `git log` from the linux kernel git
repo, which is pretty large.

Once, while running it, an exception escaped from a try/catch and
killed the script. But it only happened once so I don't know how many
times you'd have to re-run it to get that result. So, that's not
really nicely reproducible.

Otherwise, if you remove the try/catch statements a couple of string
related things will blow up with an exception.

The entitled brat response wasn't aimed at you (Jeff), but I've
literally never written anything negative about an open source project
without having someone tell me that I'm an entitled jerk, so I
expected to get that response to this post. And I did, so that streak
continues!

Jeff Bezanson

Dec 29, 2014, 8:38:51 PM
to julia...@googlegroups.com

I feel like you are trying to convey the impression that finding bugs in julia results in insults and no help from us. That is a total mis-characterization of the project. There is also no equivalence between responses to bug reports, and responses to blog posts. As far as I know, all 9000 of our bug reports have been received with gratitude. However your post says or implies that we don't care about error handling, tell people to fix their own bugs, and even that we don't understand our own code. You can very well expect some pushback on that.

jrgar...@gmail.com

Dec 29, 2014, 9:07:39 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 2:39:34 PM UTC-5, Stefan Karpinski wrote:
Unlike Python, catching exceptions in Julia is not considered a valid way to do control flow. Julia's philosophy here is closer to Go's than to Python's – if an exception gets thrown it should only ever be because the caller screwed up and the program may reasonably panic. You can use try/catch to handle such a situation and recover, but any Julia API that requires you to do this is a broken API.

I would really like it if I could throw and catch an exception without needing to consider that my program might panic as a result of doing so.  I just looked through the entire corpus of Julia code I have written so far, and the only places I catch exceptions are when the exception is actually due to calling a Python API via PyCall.  I am willing to accept that using exceptions is not a very Julian way of doing things, but I still want them to work when they are needed.

Stefan Karpinski

Dec 29, 2014, 9:27:41 PM
to Julia Users
On Mon, Dec 29, 2014 at 9:07 PM, <jrgar...@gmail.com> wrote:

I would really like it if I could throw and catch an exception without needing to consider that my program might panic as a result of doing so.  I just looked through the entire corpus of Julia code I have written so far, and the only places I catch exceptions are when the exception is actually due to calling a Python API via PyCall.  I am willing to accept that using exceptions is not a very Julian way of doing things, but I still want them to work when they are needed.

"Panic" is the Go term for "throw". Your Julia program will not panic if you throw an exception – throw/catch works just fine.

Ravi Mohan

Dec 29, 2014, 9:55:37 PM
to julia...@googlegroups.com
Fwiw the correct engineering response here seems to be to acknowledge the subset of Dan's criticisms that are valid/reasonable, fix those, and get back to work. Criticising Dan's motives etc isn't a productive path (imo). If there are low-hanging-fruit fixes on such a successful project (the build/test thing certainly seems to be one), that is a *good* thing. Yes the HN crowd can be a bit rough (I am plinkplonk on HN, fwiw), and often unreasonable, but hey, anyone running an open source project can't afford to get disturbed by weird discussions on HN.

All projects have bugs, and if someone has an uncanny knack for surfacing heisenbugs, that is a good thing, irrespective of communication style.

My 2 cents (I am just tinkering with Julia and don't use it in anger yet, but after some discussion with Viral (who is my neighbor) am considering jumping in - Julia is a brilliant project). As a prospective contributor to Julia, I am encouraged by Stefan's approach to this.

regards,
Ravi

Tim Holy

Dec 29, 2014, 10:09:05 PM
to julia...@googlegroups.com
Yeah, there's absolutely no problem with try/catch or try/finally. The only
debate is about how much you should use them. HDF5 uses loads of `try/finally`s
to clean up properly before throwing an error.
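The shape of that pattern, for anyone unfamiliar (a sketch, not HDF5's actual code):

    function with_resource(f, path)
        h = open(path)      # acquire
        try
            return f(h)     # may throw; that's fine
        finally
            close(h)        # cleanup runs whether or not an error is propagating
        end
    end

Base's `open(f, path)` method does exactly this for files.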

--Tim

Tim Holy

Dec 29, 2014, 11:39:58 PM
to julia...@googlegroups.com
For anyone who wants to help with the test coverage issue, I just posted some
instructions here:
https://github.com/JuliaLang/julia/issues/9493
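The rough mechanics, assuming the Coverage.jl package (see the issue for the authoritative steps):

    # Start julia with coverage tracking, e.g.
    #     julia --code-coverage=all test/runtests.jl
    # which writes *.cov files next to each source file. Then summarize:
    using Coverage
    results = process_folder("base")       # parse the .cov files under base/
    covered, total = get_summary(results)
    println("covered $covered of $total source lines")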

--Tim

Dan Luu

Dec 30, 2014, 12:11:55 AM
to julia...@googlegroups.com
Hi Jeff,


That's a lot of claims, so let me just respond to one, that my post
"implies ... that we don't understand our own code".

Where I've been vague and implied something, it's because I don't like
calling people out by name.

I literally stated the opposite in my post, saying that the Julia core
team can "hold all of the code in their collective heads".

I'm guessing the objection is to the line "code that even the core
developers can't figure out because it's too obscure", but that refers
to the vague anecdote in the previous paragraph. The plural here is a
side effect of being vague, not an implication that you can't figure
out your own code.

If it helps clear the air, the specifics are that when I ask Stefan
how something works the most common response I get is that I should
ask you or the mailing list since it's your code and he doesn't really
understand it. He could probably figure it out, but it's enough of a
non-trivial effort that he always forwards me to someone else. You
might object that I never actually emailed you, but that's because
I've seen what happened when Leah emailed you, with plenty of reminder
emails, when she was working on her thesis and trying to figure out
things that would help her finish her thesis. Most emails didn't get a
response, even with a reminder or two, and I figured I'd get the same
treatment since I don't even know you.

I understand that you must be incredibly busy, between doing your own
thesis and also being the most prolific contributor to Julia. But
perhaps you can see why, in this case, I might not expect my questions
to be "received with gratitude".

There are some easily verifiable claims in my post that got
"pushback". That plotting bug? Probably because I'm using master,
which wasn't true and could have been checked by anyone with a release
build handy. That thing about build stats? Probably grabbing the wrong
numbers, which wasn't true, and could be easily spot checked by using
the script pointed to in my linked post.

In the past, I've talked to someone about the build being broken and
gotten the response that it worked for him, and when I pointed out
that Travis had been broken for half a day I got some response about
how Travis often has spurious fails. The bug eventually got fixed, a
few days later, but in the meantime the build was broken and there was
also a comment about how people shouldn't expect the build on master
to not be broken. I'm being vague again because I don't see calling
people out as being very productive, but if you prefer I can dig
through old chat logs to dredge up the specifics.

Now, you say that responses to bug reports aren't responses to blog
posts. That's true, but perhaps you can see why I might not feel that
it's a great use of time to file every bug I run across when the
responses I've gotten outside of github bug reports have been what
they are.

Sorry if I've caused any offense with my vagueness and implications
that can be read from my vagueness.


Best,
Dan

Christian Peel

Dec 30, 2014, 12:44:06 AM
to julia...@googlegroups.com
Dan, thanks for the honest critique.  Keno, Stefan, Jeff, thanks for the quick and specific replies.  Tim, thanks for the very explicit instructions on how newbies such as myself can contribute.  I also think julia-users is very welcoming, which helps me be bullish on the language.

Jeff Bezanson

Dec 30, 2014, 1:10:09 AM
to julia...@googlegroups.com
Ok, I'm starting to see where you're coming from a bit better. As you
can gather, I don't read every issue discussion (especially in
packages) so I have not seen some of these things. Personally, I have
always really appreciated your bug reports and fixed several myself. I
hope to keep doing that.

Typically I have paid more attention to the julia issue tracker than
to almost anything else, so to me stuff that's not there doesn't
exist. (This is why Steve J had to file an issue to get me to write my
thesis!) I'm sorry about the lack of email responses. However that
does not mean I'm not receptive to bug reports. Applying my "received
with gratitude" phrase to something completely different is confusing
separate things.

I'm sorry about your bug reports getting dismissed. I will try not to
do that. In fact I just opened a bug for the UDP issue you referred to
(Leah pointed me to it). However the context of responding to points
in a blog post is utterly different. Writing a blog post is inviting
debate. If you post stats that make a project look not-so-great, of
course people will question them, wrong though they may be. To me
that's acceptable, while being overly dismissive of bug reports is
not.

There are many kinds of bugs. If you file 20, it's quite possible that
5 will be fixed, 5 will sit around for a long time, 5 will be
duplicates, and 5 will be dismissed for some reason. But that's still
5 fixed bugs, and to me that's how progress happens.

Keno Fischer

Dec 30, 2014, 1:49:04 AM
to julia...@googlegroups.com
> That thing about build stats? Probably grabbing the wrong
> numbers, which wasn't true, and could be easily spot checked by using
> the script pointed to in my linked post.

I apologize for missing that part of your post (and I added a follow-up comment to the hacker news discussion once you pointed that out). I did actually go back and look at it, but I somehow must have read past it. It wasn't meant as a criticism of your methodology - I was simply looking at the recent travis builds, and the only ones failing were development branches and those with dependency issues. I'm still not entirely convinced that the Travis numbers are a good indicator of build status for Julia, because the dependency issue comes up way more often than julia changes breaking things, and the way we use branches may be different from other projects - nevertheless, let's rest that debate.

This whole discussion turned a little more adversarial than I had hoped it would. Still, I expect we can at least take some points away from it, and I hope it didn't entirely ruin your experience with Julia - maybe try it again after it matures a little more.

Tobias Knopp

Dec 30, 2014, 3:01:29 AM
to julia...@googlegroups.com