Dan Luu's critique of Julia: needs commented code, better testing, error handling, and bugs fixed.

Christian Peel

Dec 29, 2014, 11:36:19 AM
to julia...@googlegroups.com
Dan Luu has a critique of Julia up at http://danluu.com/julialang/  (reddit thread at http://bit.ly/1wwgnks)
Is the language feature-complete enough that there could be an entire point release that targeted some of the less-flashy things he mentioned?  I.e. commented code, better testing, error handling, and just fixing bugs?  If it's not there, are there any thoughts on when it would be?

Keno Fischer

Dec 29, 2014, 11:44:35 AM
to julia...@googlegroups.com
I've written up some of my thoughts on the issues raised in this article in the hacker news discussion, but to answer your question, there's still a number of big items that need to be tackled by the core team. I do think it might make some sense to have a docs/tests sprint just prior to the 0.4 release (we had a doc sprint before the 0.1? release which I think was pretty successful).

There is also plenty of opportunity for tests and documentation for people outside the core team. API design discussions can also happen even if people don't know how to implement them - it's much easier to implement an API that's already designed than to do both, designing the API and implementing it. 

Valentin Churavy

Dec 29, 2014, 11:51:07 AM
to julia...@googlegroups.com

Tim Holy

Dec 29, 2014, 12:30:26 PM
to julia...@googlegroups.com
In my personal opinion, his post is a mix of on-target and off-target. I
completely agree with the inadequacy of our testing, particularly in packages.
However, it's also not entirely simple: julia is _so_ composable that it's
hard to come up with tests that cover everything. Until recently we've not
even had the ability to find out how much of Base is covered by tests, and
inlining makes even that a little bit tricky to determine. That said, the
situation is improving. At one point I put out a call to julia-users to tackle
writing more tests (it doesn't take deep expertise to do so), but I don't
think that netted a lot of contributions.

In terms of off-target, in particular I disagree pretty strongly with his
feeling that Base should catch lots of exceptions and try to recover. That
would make it basically impossible to deliver good performance, and it also
(in my view) jeopardizes sensible behavior.

--Tim

Tobias Knopp

Dec 29, 2014, 2:18:35 PM
to julia...@googlegroups.com
The post reads like a rant. Like every software project out there, Julia has bugs. So is it really necessary to complain about the bugs of an open source project in a blog post?

Stefan Karpinski

Dec 29, 2014, 2:39:34 PM
to julia...@googlegroups.com
There are lots of very legitimate complaints in this post, but also other things I find frustrating.

On-point

Testing & coverage could be much better. Some parts of Base were written a long time ago, before we wrote tests for new code. Those can have a scary lack of test coverage. Testing of Julia packages ranges from non-existent to excellent. This also needs a lot of work. I agree that the current way of measuring coverage is nearly useless. We need a better approach.

The package manager really, really needs an overhaul. This is my fault and I take full responsibility for it. We've been waiting a frustratingly long time for libgit2 integration to be ready to use. Last I checked, I think there was still some Windows bug pending.

Julia's uptime on Travis isn't as high as I would like it to be. There have been a few periods (one of which Dan unfortunately hit), when Travis was broken for weeks. This sucks and it's a relief whenever we fix the build after a period like that. Fortunately, since that particularly bad couple of weeks, there hasn't been anything like that, even on Julia master, and we've never had Julia stable (release-0.3 currently) broken for any significant amount of time.

Documentation of Julia internals. This is getting a bit better with the developer documentation that has recently been added, but Julia's internals are pretty inscrutable. I'm not convinced that many other programming language implementations are any better about this, but that doesn't mean we shouldn't improve this a lot.

Frustrating

Mystery Unicode bug – Dan, I've been hearing about this for months now. Nobody has filed any issues with UTF-8 decoding in years (I just checked). The suspense is killing me – what is this bug? Please file an issue, no matter how vague it may be. Hell, that entire throwaway script can just be the body of the issue and other people can pick it apart for specific bugs.

The REPL rewrite, among other things, added tests to the REPL. Yes, it was a disruptive transition, but the old REPL needed to be replaced. It was a massive pile of hacks around GNU readline and was incomprehensible and impossible to test. Complaining about the switch to the new REPL which is actually tested seems misplaced.

Unlike Python, catching exceptions in Julia is not considered a valid way to do control flow. Julia's philosophy here is closer to Go's than to Python's – if an exception gets thrown it should only ever be because the caller screwed up and the program may reasonably panic. You can use try/catch to handle such a situation and recover, but any Julia API that requires you to do this is a broken API. So the fact that

When I grepped through Base to find instances of actually catching an exception and doing something based on the particular exception, I could only find a single one.

actually means that the one instance is a place where we're doing it wrong and hacking around something we know to be broken. The next move is to get rid of that one instance, not add more code like this. The UDP thing is a problem and needs to be fixed.
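Concretely, the distinction looks something like this (a minimal sketch; `tryparse` here is the pattern from later Julia versions, and `parse_port` is a made-up name for illustration):

    # Expected failures belong in the return value, not in thrown exceptions.
    function parse_port(s::AbstractString)
        n = tryparse(Int, s)                  # `nothing` on malformed input
        (n === nothing || !(1 <= n <= 65535)) && return nothing
        return n
    end

    # Callers branch on the result instead of wrapping the call in try/catch;
    # throwing here would mean the *caller* made an error.
    port = parse_port("8080")
    port === nothing && error("invalid port given")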

The business about fixing bugs getting Dan into the top 40 is weird. It's not quite accurate – Dan is #47 by commits (I'm assuming that's the metric here) with 28 commits, so he's in the top 50 but not the top 40. There are 23 people who have 100 commits or more, and that's roughly the group I would consider to be the "core devs". This paragraph is frustrating because it gives the imo unfair impression that not many people are working on Julia. Having 23+ people working actively on a programming language implementation is a lot.

Ranking of how likely Travis builds are to fail by language doesn't seem meaningful. A huge part of this is how aggressively each project uses Travis. We automatically test just about everything, even completely experimental branches. Julia packages can turn Travis testing on with a flip of a switch. So lots of things are broken on Travis because we've made it easy to use. We should, of course, fix these things, but other projects having higher uptime numbers doesn't imply that they're more reliable – it probably just means they're using Travis less.

In general, a lot of Dan's issues only crop up if you are using Julia master. The Gadfly dates regression is probably like this and the two weeks of Travis failures was only on master during a "chaos month" – i.e. a month where we make lots of reckless changes, typically right after releasing a stable version (in this case it was right after 0.3 came out). These days, I've seen a lot of people using Julia 0.3 for work and it's pretty smooth (package management is by far the biggest issue and I just take care of that myself). If you're a normal language user, you definitely should not be using Julia master.

Jeff Lunt

Dec 29, 2014, 2:54:58 PM
to julia...@googlegroups.com
Completely agree on the exception handling philosophy (as Stefan has put it). Not only should you not rely on exception handling to make your API reliable; for some reason, encouraging exception handling has a way of making folks think very defensively: "Oh, so I should handle every possible scenario, from a bug in my code to what to do if I'm recovering from a hard drive crash and OS re-install," which is just wrong, because each unit of code, generally, should worry about its own responsibilities and doing its job well.

It also encourages retry-thinking, such as, "If it's broken, reboot/retry an arbitrary number of times," rather than, "If it's broken, figure out why and fix it so it doesn't ever break again."

Mike Innes

Dec 29, 2014, 3:31:05 PM
to julia...@googlegroups.com
Slightly OT, but I imagine the try/catch Dan refers to is the display system. Unfortunately it's a horribly brittle way to implement that code, and it still has the potential to cause bugs (because you can't tell where in the stack the error came from). I'm prototyping something to try and solve that and a lot of the other issues with the current display system, though who knows if it'll ever end up in Base.

Dan Luu

Dec 29, 2014, 3:37:33 PM
to julia...@googlegroups.com
Here are a few responses to the stuff in this thread, plus Keno's comment on HN.

The Travis stats are only for builds against master, i.e., only for
things that got merged. BTW, this is mentioned in the linked post. For
every project listed in http://danluu.com/broken-builds/, only the
"main" branch on github was used, with the exception of a couple
projects where that didn't make sense (IIRC, scala has some weird
setup, as does d). "Using travis less" doesn't really make sense in
that context. I heard that response a lot from people in various
communities when I wrote that post, but from checking the results, the
projects that have better travis results are more rigorous and, on
average, the results are biased against the projects with the highest
uptimes. There are exceptions, of course.

I have a "stable" .3 build I use for all my Julia scripts and IIRC
that's where I saw the dates issue with Gadfly. I dunno, maybe I
should only use older releases? But if I go to the Julia download
page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true,
but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I
haven't tried with 0.3.4 since I haven't touched Julia for a while.
It's possible that the issue is now fixed, but the issue is still open
and someone else also commented that they're seeing the same problem.

Sorry, I'm not being a good open source citizen and filing bugs, but
when you run into 4 bugs when writing a half-hour script, filing bugs
is a drag on productivity. A comment I've gotten here and elsewhere is
basically "of course languages have bugs!". But there have been
multiple instances where I've run into more bugs in an hour of Julia
than I've ever hit with scala and go combined, and scala is even known
for being buggy! Between scala and go, I've probably spent 5x or 10x
the time I've spent in Julia. Just because some number will be
non-zero doesn't mean that all non-zero numbers are the same. There
are various reasons that's not a fair comparison. I'm just saying that
I expect to hit maybe one bug per hour while writing Julia, and I
expect maybe 1 bug per year for most languages, even pre-1.0 go.

I don't think 40 vs. 50 really changes the argument, but of course
I've been drifting down in github's ranking since I haven't done any
Julia lately and other people are committing code.

I don't think it's inevitable that language code is inscrutable. If I
grep through the go core code (excluding tests, but including
whitespace), it's 9% pure comment lines, and 16% lines with comments.
It could use more comments, but it's got enough comments (and
descriptive function and variable names) that I can go into most files
and understand the code.
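That measurement is easy to reproduce; roughly (a sketch of the idea, not the actual grep, and it ignores /* */ block comments):

    # Fraction of pure-comment lines and of lines containing a comment
    # across the non-test Go sources under `dir`.
    function comment_stats(dir::AbstractString)
        pure = withcomment = total = 0
        for (root, _, files) in walkdir(dir), f in files
            (endswith(f, ".go") && !endswith(f, "_test.go")) || continue
            for line in eachline(joinpath(root, f))
                total += 1
                s = strip(line)
                startswith(s, "//") && (pure += 1)
                occursin("//", s) && (withcomment += 1)
            end
        end
        return pure / total, withcomment / total
    end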

It sounds like, as is, there isn't a good story for writing a robust
Julia program? There are bugs and exceptions will happen. Putting
aside that `catch` non-deterministically fails to catch, what's a
program supposed to do when some bug in Base causes a method to throw
a bounds error? You've said that the error handling strategy is
go-like, but I basically never get a panic in go (I actually can't
recall it ever having happened, although it's possible I've gotten one
at some point). That's not even close to being true in Julia.
Terminating is fine for scripts where I just want to get at the source
of the bug and fix it, but it's not so great for programs that
shouldn't ever crash or corrupt data? Is the answer just "don't write
stuff like that in Julia"?


Tobias Knopp

Dec 29, 2014, 3:58:01 PM
to julia...@googlegroups.com
So you dislike Julia and encountered several bugs. Reading your posts, it seems like you want to blame someone for that. If you are not satisfied with Julia, simply do not use it.

And seriously: you cannot compare Julia with a project that has Google in the background. It's clear that they have a "more clear" development model and more documentation. Same goes for Rust. Julia is from and for researchers. And there are several people very satisfied with how Julia evolves (including me).

Tobias

Jeff Lunt

Dec 29, 2014, 4:05:50 PM
to julia...@googlegroups.com
To be fair, that's really an argument in Dan's favor, unless Dan is not a researcher, in which case you might be able to say that Julia is better for you because you're a researcher and Dan is not. But that would imply a domain mismatch.

To say that one likes and understands a language, warts and all, is like defending something because one knows and loves it, rather than because it is objectively the best tool.


Dan Luu

Dec 29, 2014, 4:06:43 PM
to julia...@googlegroups.com
On Mon, Dec 29, 2014 at 2:58 PM, Tobias Knopp
<tobias...@googlemail.com> wrote:
> So you dislike Julia and encountered several bugs. Reading your posts is
> like you want to blame someone for that. If you are not satisfied with Julia
> simply do not use it.

I don't really use Julia anymore! Thanks for the suggestion, though.
Also, I hope you don't mind if I rescind my comment about the
community. Yikes.

Stefan Karpinski

Dec 29, 2014, 4:12:36 PM
to Julia Users
On Mon, Dec 29, 2014 at 3:37 PM, Dan Luu <dan...@gmail.com> wrote:
Here are a few responses to the stuff in this thread, plus Keno's comment on HN.

The Travis stats are only for builds against master, i.e., only for
things that got merged. BTW, this is mentioned in the linked post. For
every project listed in http://danluu.com/broken-builds/, only the
"main" branch on github was used, with the exception of a couple
projects where that didn't make sense (IIRC, scala has some weird
setup, as does d). "Using travis less" doesn't really make sense in
that context. I heard that response a lot from people in various
communities when I wrote that post, but from checking the results, the
projects that have better travis results are more rigorous and, on
average, the results are biased against the projects with the highest
uptimes. There are exceptions, of course.

I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.
 
I have a "stable" .3 build I use for all my Julia scripts and IIRC
that's where I saw the dates issue with Gadfly. I dunno, maybe I
should only use older releases? But if I go to the Julia download
page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true,
but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I
haven't tried with 0.3.4 since I haven't touched Julia for a while.
It's possible that the issue is now fixed, but the issue is still open
and someone else also commented that they're seeing the same problem.

Entirely possible – packages are definitely not as stable in terms of remaining unbroken as Julia's stable releases are. I think it's getting better, but can still be frustrating.
 
Sorry, I'm not being a good open source citizen and filing bugs, but
when you run into 4 bugs when writing a half-hour script, filing bugs
is a drag on productivity. A comment I've gotten here and elsewhere is
basically "of course languages have bugs!". But there have been
multiple instances where I've run into more bugs in an hour of Julia
than I've ever hit with scala and go combined, and scala is even known
for being buggy! Between scala and go, I've probably spent 5x or 10x
the time I've spent in Julia. Just because some number will be
non-zero doesn't mean that all non-zero numbers are the same. There
are various reasons that's not a fair comparison. I'm just saying that
I expect to hit maybe one bug per hour while writing Julia, and I
expect maybe 1 bug per year for most languages, even pre-1.0 go.

This seems like a crazy high number to me. My experience with other people who are using Julia for work has been about 1-3 legitimate Julia bugs per year. You seem to have a knack for pushing systems to their limit (plus, there was that period where you were filing bugs you found with a fuzzer).

Why not just post the throwaway script you wrote as an issue? That would take about a minute. More than anything, I'm interested in what the specific Unicode bug is. Was there a problem with UTF-8 decoding?
 
I don't think 40 vs. 50 really changes the argument, but of course
I've been drifting down in github's ranking since I haven't done any
Julia lately and other people are committing code.

No, of course, that part doesn't matter. But what was the point of including that stat at all? It makes it seem like you're trying to imply that not a lot of people have worked on the project and that fixing a small number of bugs gets you high up on the contributors list.
 
I don't think it's inevitable that language code is inscrutable. If I
grep through the go core code (excluding tests, but including
whitespace), it's 9% pure comment lines, and 16% lines with comments.
It could use more comments, but it's got enough comments (and
descriptive function and variable names) that I can go into most files
and understand the code.

I never said it was – what I said was that most programming language implementations happen to have inscrutable code, not that they must. That's not an excuse and we should do better.
 
It sounds like, as is, there isn't a good story for writing a robust
Julia program? There are bugs and exceptions will happen. Putting
aside that `catch` non-deterministically fails to catch, what's a
program supposed to do when some bug in Base causes a method to throw
a bounds error? You've said that the error handling strategy is
go-like, but I basically never get a panic in go (I actually can't
recall it ever having happened, although it's possible I've gotten one
at some point). That's not even close to being true in Julia.
Terminating is fine for scripts where I just want to get at the source
of the bug and fix it, but it's not so great for programs that
shouldn't ever crash or corrupt data? Is the answer just "don't write
stuff like that in Julia"?

At the moment, I think the way to write a reliable Julia system is to compartmentalize and make each component restartable. This is very much in the Erlang philosophy, although we're miles away from being as good as Erlang at this kind of thing. I know that it's basically the opposite of the Google approach. I don't think Julia is the best choice currently for writing mission-critical systems software.
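Roughly this shape, to a first approximation (an illustrative sketch in modern syntax, not an established API; real supervision would add backoff and state isolation):

    # Compartmentalized, restartable component, Erlang-supervisor style.
    function supervise(work::Function; retries::Int = 5)
        for attempt in 1:retries
            try
                return work()
            catch err
                @warn "component died; restarting" attempt err
            end
        end
        error("component failed $retries times; giving up")
    end

    supervise() do
        # component body: consume a queue, serve requests, etc.
    end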

Stefan Karpinski

Dec 29, 2014, 4:14:07 PM
to Julia Users
Let's please take Dan's comments as constructive critique (which they are), rather than an attack. I know Dan personally and happen to know that this is where he's coming from.

Tobias Knopp

Dec 29, 2014, 4:27:43 PM
to julia...@googlegroups.com
Stefan, OK. My advice was actually also constructive. I have tried various open source software in my life and there were several that were broken. But then I simply did not use them if I was not satisfied.

I think it is clear that Julia's development model could be improved. But unless a company hires some full-time developers to work on Julia, a change in the development model is not easily done.

Cheers

Tobias 

Stefan Karpinski

Dec 29, 2014, 4:42:13 PM
to Julia Users
I think the main takeaways from Dan's post are the following:
  • Figure out a better way to measure coverage and work towards 100% coverage (see the sketch after this list).
  • Make a thorough pass over all Base code and carefully examine situations where we throw exceptions to make sure they are correct and can only ever happen if the caller made an error. Document the conditions under which each exported function may raise an exception.
  • Improve the readability of Julia's implementation code. Rename the less scrutable functions and variables. Add comments, add to the developer docs. It's not that much code, so this isn't that awful but it is some tricky code that's in flux.
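On the coverage item, a sketch of how a measurement pass could look, assuming Coverage.jl's `process_folder`/`get_summary` API and a test run made with `julia --code-coverage=user` (treat this as an outline, not a recipe):

    # Summarize the .cov files produced next to the sources.
    using Coverage
    results = process_folder("src")        # parse per-file coverage data
    covered, total = get_summary(results)
    println("line coverage: ", round(100 * covered / total; digits = 1), "%")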

Steven G. Johnson

Dec 29, 2014, 4:45:51 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 4:12:36 PM UTC-5, Stefan Karpinski wrote:
I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.

Yeah, a lot of projects use the Gitflow model, in which a develop branch is used for experimental work and master is used for (nearly) release candidates.

I can understand where Dan is coming from in terms of finding issues continually when using Julia, but in my case it's more commonly "this behavior is annoying / could be improved" than "this behavior is wrong".  It's rare for me to code for a few hours in Julia without filing issues in the former category, but out of the 300 issues I've filed since 2012, it looks like less than two dozen are in the latter "definite bug" category.

I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts.  Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts).   Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling.  What am I missing?
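For concreteness, the two styles side by side (my sketch; FactCheck syntax as of late 2014, when `@fact` used `=>`):

    # A "big file full of asserts":
    @assert factorial(5) == 120
    @assert factorial(0) == 1

    # FactCheck: grouped and labeled, and it continues past a failure,
    # reporting a summary at the end instead of aborting on the first error.
    using FactCheck
    facts("factorial behaves") do
        @fact factorial(5) => 120
        @fact factorial(0) => 1
    end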

Milan Bouchet-Valat

Dec 29, 2014, 4:58:41 PM
to julia...@googlegroups.com
Le lundi 29 décembre 2014 à 16:41 -0500, Stefan Karpinski a écrit :
> I think the main takeaways from Dan's post are the following:
> * Figure out a better way to measure coverage and work towards
> 100% coverage.
> * Make a thorough pass over all Base code and carefully examine
> situations where we throw exceptions to make sure they are
> correct and can only ever happen if the caller made an error.
> Document the conditions under which each exported function may
> raise an exception.
I'd add: improve the manual to make Julia's philosophy with regard to
exceptions clear (easy).

I'm only realizing today that exceptions are supposed to be raised in
Julia only when the caller is at fault. If we want packages to follow
this pattern, better make it as clear as possible.


Regards


Tobias Knopp

Dec 29, 2014, 5:02:59 PM
to julia...@googlegroups.com
I think one important way to improve the stability of Julia is to separate Julia and its standard library (e.g. split Base into "crucial base" and stdlib). This will help make the core rock solid and will further reduce the number of binary dependencies to a minimum. It also helps to make clearer who the maintainer of a specific set of functions is. (see https://github.com/JuliaLang/julia/issues/5155).

Stefan Karpinski

Dec 29, 2014, 5:03:32 PM
to Julia Users
On Mon, Dec 29, 2014 at 4:45 PM, Steven G. Johnson <steve...@gmail.com> wrote:
I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts.  Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts).   Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling.  What am I missing?

Man, I'm glad I'm not the only one. Can someone explain what the big deal about the FactCheck approach is? Am I missing something really fundamental here?

Stefan Karpinski

Dec 29, 2014, 5:04:16 PM
to Julia Users
On Mon, Dec 29, 2014 at 4:58 PM, Milan Bouchet-Valat <nali...@club.fr> wrote:
I'd add: improve the manual to make Julia's philosophy with regard to
exceptions clear (easy).

I'm only realizing today that exceptions are supposed to be raised in
Julia only when the caller is at fault. If we want packages to follow
this pattern, better make it as clear as possible.

Yes, we should do this. Also relevant: https://github.com/JuliaLang/julia/issues/7026 

Jameson Nash

Dec 29, 2014, 5:05:27 PM
to julia...@googlegroups.com
I imagine there are advantages to frameworks in that you can mark expected failures and continue through the test suite after one fails, to give a better % success/failure metric than Julia's simplistic go/no-go approach.

I used JUnit many years ago for a high school class, and found that, relative to `@assert` statements, it had more options for asserting various approximate and conditional statements that would otherwise have been very verbose to write in Java. Browsing back through its website now (http://junit.org/, under Usage and Idioms), it apparently now has some more features for testing, such as rules, theories, timeouts, and concurrency. Those features would likely help improve testing coverage by making tests easier to describe.

Jeff Bezanson

Dec 29, 2014, 5:05:57 PM
to julia...@googlegroups.com
Dan, I know there are many areas where we should improve. For now I
share Stefan's frustration about the mystery bugs you keep alluding
to. I don't expect a full detailed report on each one, and I get that
you don't want to interrupt work to file them. But we have now seen at
least two blog posts and one long mailing-list post inspired in part by these
bugs. If you have time to write all that, you have time to at least
send us your script. We have not often asked others to fix their own
bugs, and we have not been known to call people brats, but we have
been known to fix lots of bugs. I know fixing bugs one-by-one is not
as good as systematically improving our tests and process, but it is
more helpful than alarmist invective.

Jeff Bezanson

Dec 29, 2014, 5:13:35 PM
to julia...@googlegroups.com
Reporting % success rather than demanding 100% success would seem to
be a strictly weaker testing policy.

Arguably, with macros you need fewer features since `@test a == b`
could recognize an equality test and report what a and b were. But one
feature we could stand to add is asserting properties that must be
true for all arguments, and running through lots of combinations of
instances. However, in reality we do some of this already, since the
"files full of asserts" also in many cases do nested loops of tests.
Saying we do "just asserts" obscures this fact.
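A toy version of the macro idea (illustrative only, in modern syntax; nothing like Base's actual implementation):

    # A test macro that recognizes an equality test and reports both
    # operand values when the test fails.
    macro check(ex)
        if ex isa Expr && ex.head == :call && ex.args[1] == :(==)
            lhs, rhs = ex.args[2], ex.args[3]
            return quote
                a, b = $(esc(lhs)), $(esc(rhs))
                a == b || println("FAILED: ", $(string(ex)), " with ", a, " vs ", b)
            end
        end
        return :($(esc(ex)) || println("FAILED: ", $(string(ex))))
    end

    @check 1 + 1 == 3    # prints: FAILED: 1 + 1 == 3 with 2 vs 3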

Patrick O'Leary

Dec 29, 2014, 5:37:15 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 4:13:35 PM UTC-6, Jeff Bezanson wrote:
But one
feature we could stand to add is asserting properties that must be
true for all arguments, and running through lots of combinations of
instances.

Anyone who is interested in this is welcome to use https://github.com/pao/QuickCheck.jl as a starting point.
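For a flavor of the two approaches (a tiny hand-rolled property test; QuickCheck-style tools generalize this with random instance generation):

    # Exhaustive property checks over a small domain - the "nested loops
    # of asserts" style mentioned above.
    for a in -10:10, b in -10:10
        @assert a + b == b + a                   # addition commutes
        @assert abs(a * b) == abs(a) * abs(b)    # abs is multiplicative
    end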

Dan Luu

Dec 29, 2014, 6:42:32 PM
to julia...@googlegroups.com
Welp, I ended up checking up on this thread again because of a
conversation with Stefan, so here are some more responses.

I tried https://github.com/dcjones/Gadfly.jl/issues/462 on the current
release binary on julialang.org and it still fails as before, so that
wasn't because I was running off of master. I updated the issue.

Yes, I agree that I seem to run into an unusual number of bugs. My
guess is it's partially because I basically don't do any of the kind
of data stuff people normally do with Julia and I'm off doing stuff
that's untested and has rarely, if ever, been used before. But IIRC, I
ran into a bug where code_llvm and code_native would segfault.
Sometimes stuff just breaks and doesn't get fixed for a while.

I don't really want to get sucked into a discussion about test
methodologies, so I'm happy to concede the point if it will get me out
of that debate.

Alright, I'll see if I can find my script somewhere and copy+paste it
to make a bug report, but it's a pretty obnoxious bug report. It's a
write-once throwaway script that probably has all sorts of stuff wrong
with it. Also, it takes as input `git log` from the linux kernel git
repo, which is pretty large.

Once, while running it, an exception escaped from a try/catch and
killed the script. But it only happened once so I don't know how many
times you'd have to re-run it to get that result. So, that's not
really nicely reproducible.

Otherwise, if you remove the try/catch statements a couple of string
related things will blow up with an exception.

The entitled brat response wasn't aimed at you (Jeff), but I've
literally never written anything negative about an open source project
without having someone tell me that I'm an entitled jerk, so I
expected to get that response to this post. And I did, so that streak
continues!

Jeff Bezanson

Dec 29, 2014, 8:38:51 PM
to julia...@googlegroups.com

I feel like you are trying to convey the impression that finding bugs in julia results in insults and no help from us. That is a total mis-characterization of the project. There is also no equivalence between responses to bug reports, and responses to blog posts. As far as I know, all 9000 of our bug reports have been received with gratitude. However your post says or implies that we don't care about error handling, tell people to fix their own bugs, and even that we don't understand our own code. You can very well expect some pushback on that.

jrgar...@gmail.com

Dec 29, 2014, 9:07:39 PM
to julia...@googlegroups.com
On Monday, December 29, 2014 2:39:34 PM UTC-5, Stefan Karpinski wrote:
Unlike Python, catching exceptions in Julia is not considered a valid way to do control flow. Julia's philosophy here is closer to Go's than to Python's – if an exception gets thrown it should only ever be because the caller screwed up and the program may reasonably panic. You can use try/catch to handle such a situation and recover, but any Julia API that requires you to do this is a broken API.

I would really like it if I could throw and catch an exception without needing to consider that my program might panic as a result of doing so.  I just looked through the entire corpus of Julia code I have written so far, and the only places I catch exceptions are when the exception is actually due to calling a Python API via PyCall.  I am willing to accept that using exceptions is not a very Julian way of doing things, but I still want them to work when they are needed.

Stefan Karpinski

Dec 29, 2014, 9:27:41 PM
to Julia Users
On Mon, Dec 29, 2014 at 9:07 PM, <jrgar...@gmail.com> wrote:

I would really like it if I could throw and catch an exception without needing to consider that my program might panic as a result of doing so.  I just looked through the entire corpus of Julia code I have written so far, and the only places I catch exceptions are when the exception is actually due to calling a Python API via PyCall.  I am willing to accept that using exceptions is not a very Julian way of doing things, but I still want them to work when they are needed.

"Panic" is the Go term for "throw". Your Julia program will not panic if you throw an exception – throw/catch works just fine.

Ravi Mohan

Dec 29, 2014, 9:55:37 PM
to julia...@googlegroups.com
Fwiw the correct engineering response here seems to be to acknowledge the subset of Dan's criticisms that are valid/reasonable, fix those, and get back to work. Criticising Dan's motives etc isn't a productive path (imo). If there are low-hanging-fruit fixes on such a successful project (the build/test thing certainly seems to be one), that is a *good* thing. Yes, the HN crowd can be a bit rough (I am plinkplonk on HN, fwiw), and often unreasonable, but hey, anyone running an open source project can't afford to get disturbed by weird discussions on HN.

All projects have bugs, and if someone has an uncanny knack for surfacing heisenbugs, that is a good thing, irrespective of communication style.

My 2 cents: I am just tinkering with Julia and don't use it in anger yet, but after some discussion with Viral (who is my neighbor) am considering jumping in - Julia is a brilliant project. As a prospective contributor to Julia, I am encouraged by Stefan's approach to this.

regards,
Ravi

Tim Holy

Dec 29, 2014, 10:09:05 PM
to julia...@googlegroups.com
Yeah, there's absolutely no problem with try/catch or try/finally. The only
debate is about how much you should use them. HDF5 uses loads of `try/finally`s
to clean up properly before throwing an error.
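The pattern is just (hypothetical resource names for illustration):

    h = open_resource()
    try
        process(h)
    finally
        close(h)    # cleanup runs whether or not process(h) threw
    end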

--Tim

Tim Holy

Dec 29, 2014, 11:39:58 PM
to julia...@googlegroups.com
For anyone who wants to help with the test coverage issue, I just posted some
instructions here:
https://github.com/JuliaLang/julia/issues/9493

--Tim

Dan Luu

Dec 30, 2014, 12:11:55 AM
to julia...@googlegroups.com
Hi Jeff,


That's a lot of claims, so let me just respond to one, that my post
"implies ... that we don't understand our own code".

Where I've been vague and implied something, it's because I don't like
calling people out by name.

I literally stated the opposite in my post, saying that the Julia core
team can "hold all of the code in their collective heads".

I'm guessing the objection is to the line "code that even the core
developers can't figure out because it's too obscure", but that refers
to the vague anecdote in the previous paragraph. The plural here is a
side effect of being vague, not an implication that you can't figure
it out your own code.

If it helps clear the air, the specifics are that when I ask Stefan
how something works the most common response I get is that I should
ask you or the mailing list since it's your code and he doesn't really
understand it. He could probably figure it out, but it's enough of a
non-trivial effort that he always forwards me to someone else. You
might object that I never actually emailed you, but that's because
I've seen what happened when Leah emailed you, with plenty of reminder
emails, when she was working on her thesis and trying to figure out
things that would help her finish her thesis. Most emails didn't get a
response, even with a reminder or two, and I figured I'd get the same
treatment since I don't even know you.

I understand that you must be incredibly busy, between doing your own
thesis and also being the most prolific contributor to Julia. But
perhaps you can see why, in this case, I might not expect my questions
to be "received with gratitude".

There are some easily verifiable claims in my post that got
"pushback". That plotting bug? Probably because I'm using master,
which wasn't true and could have been checked by anyone with a release
build handy. That thing about build stats? Probably grabbing the wrong
numbers, which wasn't true, and could be easily spot checked by using
the script pointed to in my linked post.

In the past, I've talked to someone about the build being broken and
gotten the response that it worked for him, and when I pointed out
that Travis had been broken for half a day I got some response about
how Travis often has spurious fails. The bug eventually got fixed, a
few days later, but in the meantime the build was broken and there was
also a comment about how people shouldn't expect the build on master
to not be broken. I'm being vague again because I don't see calling
people out as being very productive, but if you prefer I can dig
through old chat logs to dredge up the specifics.

Now, you say that responses to bug reports aren't responses to blog
posts. That's true, but perhaps you can see why I might not feel that
it's a great use of time to file every bug I run across when the
responses I've gotten outside of github bug reports have been what
they are.

Sorry if I've caused any offense with my vagueness and implications
that can be read from my vagueness.


Best,
Dan

Christian Peel

Dec 30, 2014, 12:44:06 AM
to julia...@googlegroups.com
Dan, thanks for the honest critique.  Keno, Stefan, Jeff, thanks for the quick and specific replies.  Tim, thanks for the very explicit instructions on how newbies such as myself can contribute.  I also think julia-users is very welcoming, which helps me be bullish on the language.

Jeff Bezanson

Dec 30, 2014, 1:10:09 AM
to julia...@googlegroups.com
Ok, I'm starting to see where you're coming from a bit better. As you
can gather, I don't read every issue discussion (especially in
packages) so I have not seen some of these things. Personally, I have
always really appreciated your bug reports and fixed several myself. I
hope to keep doing that.

Typically I have paid more attention to the julia issue tracker than
to almost anything else, so to me stuff that's not there doesn't
exist. (This is why Steve J had to file an issue to get me to write my
thesis!) I'm sorry about the lack of email responses. However that
does not mean I'm not receptive to bug reports. Applying my "received
with gratitude" phrase to something completely different is confusing
separate things.

I'm sorry about your bug reports getting dismissed. I will try not to
do that. In fact I just opened a bug for the UDP issue you referred to
(Leah pointed me to it). However the context of responding to points
in a blog post is utterly different. Writing a blog post is inviting
debate. If you post stats that make a project look not-so-great, of
course people will question them, wrong though they may be. To me
that's acceptable, while being overly dismissive of bug reports is
not.

There are many kinds of bugs. If you file 20, it's quite possible that
5 will be fixed, 5 will sit around for a long time, 5 will be
duplicates, and 5 will be dismissed for some reason. But that's still
5 fixed bugs, and to me that's how progress happens.

Keno Fischer

Dec 30, 2014, 1:49:04 AM
to julia...@googlegroups.com
> That thing about build stats? Probably grabbing the wrong
numbers, which wasn't true, and could be easily spot checked by using
the script pointed to in my linked post.

I apologize for missing that part of your post (and I added a follow-up comment to the hacker news discussion once you pointed that out). I did actually go back and look at it, but I somehow must have read over it. It wasn't meant as a criticism of your methodology - I was simply looking at the recent travis builds, and the only ones failing were development branches and those with dependency issues. I'm still not entirely convinced that the Travis numbers are a good indicator of build status for Julia, because of the dependency issue, which comes up way more often than julia changes breaking things, and because the way we use branches may be different from other projects - nevertheless, let's rest that debate.

This whole discussion turned a little more adversarial than I had hoped it would. Still, I expect we can at least take some points away from this whole thing, and I hope that this experience didn't entirely sour you on Julia - maybe try it again after it matures a little more.

Tobias Knopp

Dec 30, 2014, 3:01:29 AM
to julia...@googlegroups.com
I have to say that Jeff and Stefan (and of course all the others from the "core" team) do an awesome job. I have been waiting myself for responses from Jeff, but with a software project that big it is absolutely normal that one will not always get an immediate response to every bug report.

If someone thinks that the community or the development model has a problem, this can be discussed on the mailing list or the issue tracker.

What I absolutely do not get is the critique about the issues in Julia packages. The entire point is to decouple core and packages so that they can be independently maintained. And this is how we should proceed. We just need some default packages that are so mature that they can hold to the standard of Julia core. And I also think that we will make Jeff's (and Keno's, Jameson's, ...) life easier when Julia core bugs are really core bugs and not issues in the (too large) base library.

Gray Calhoun

Dec 30, 2014, 8:48:31 AM
to julia...@googlegroups.com
Only partly related, but one of the things that I've found most
surprising about Julia is how much of the substantive discussion
and planning happens on Github vs the mailing list. Personally, I
had a radically different view of the development process, etc
after following the project on GitHub than I got from this list.

When the website is redesigned, we may want to explain that
aspect of the project & community better.


Tony Kelman

Dec 30, 2014, 10:17:06 AM
to julia...@googlegroups.com
The Travis complaint is valid and really difficult. We rely on Travis pretty heavily since it's a great tool, but there's something peculiar that I don't think anyone fully understands about the Travis environment (VM config? amount of memory? dunno), Julia's runtime, or the combination of the two, that leads to failures that we never see locally. For example https://github.com/JuliaLang/julia/issues/9176 describes an issue that has been present, ongoing, and intermittently causing Travis builds to fail for over a month now. I stopped adding to my list of occurrences after 25 not because it stopped happening, but because it didn't seem worth the effort to continue finding and chronicling each time. The lack of responses probably indicates no one else has been able to reproduce it locally either, or has any idea what's causing it. I wish Julia didn't have this kind of problem. I think we all do. I wouldn't feel comfortable tagging a release in the current state of master, but we're not anywhere near an RC for 0.4 yet so hopefully there's time to maybe figure some of this stuff out. And perfect is the enemy of the good, etc.

Jeff Lunt

Dec 30, 2014, 10:26:26 AM
to julia...@googlegroups.com
Thanks, Tim!


Stefan Karpinski

Dec 30, 2014, 1:53:19 PM
to Julia Users
On Tue, Dec 30, 2014 at 12:11 AM, Dan Luu <dan...@gmail.com> wrote:
If it helps clear the air, the specifics are that when I ask Stefan
how something works the most common response I get is that I should
ask you or the mailing list since it's your code and he doesn't really
understand it. He could probably figure it out, but it's enough of a
non-trivial effort that he always forwards me to someone else. You
might object that I never actually emailed you, but that's because
I've seen what happened when Leah emailed you, with plenty of reminder
emails, when she was working on her thesis and trying to figure out
things that would help her finish her thesis. Most emails didn't get a
response, even with a reminder or two, and I figured I'd get the same
treatment since I don't even know you.

I understand that you must be incredibly busy, between doing your own
thesis and also being the most prolific contributor to Julia. But
perhaps you can see why, in this case, I might not expect my questions
to be "received with gratitude".

Sending questions or issues directly to maintainers of open source projects that could be posted in public on mailing lists or issue trackers is inconsiderate – both to the person you're asking and to all the people who might have been interested, now or in the future, but who are excluded from your private conversation. If you post a question or problem on julia-dev, julia-users, or GitHub (or StackOverflow or Quora), there's a decent chance that I will answer it or at least chime in. There is also a large chance that someone else will answer the question instead, relieving me of that effort – and there's a very large chance that their answer will be better than the one I would have given. That discussion will also be there in the future for others to see. Perhaps most importantly, it's not uncommon for these discussions to spur action – and usually I'm not the one acting. When you email me directly, you are forcing me to be the one to act. So my action is this: I will tell you to post it in public, basically regardless of what the question is. If there's something that needs to be discussed privately, no problem, but all of these things were clearly not in that category.

Isaiah Norton

Dec 30, 2014, 3:07:41 PM
to julia...@googlegroups.com
+1 to everything Stefan said. Even for internals arcana, by a quick headcount there are (at the very least) 10-15 people not named Jeff who have spent significant time in various parts of src/ and can point people in the right direction on questions about almost everything except (possibly) type inference --- all of whom are fairly active on the mailing lists. So the odds of getting an answer here are good, as evidenced by a number of recent discussions on -dev. There are certainly some misses, but bumping unanswered questions after a day or three is fine -- preferably along with some brief sketch of current understanding and in some cases a more focused question to get things going, such as "what is this struct for" or "where do I set a breakpoint to watch this behavior".

Jim Garrison

Dec 30, 2014, 5:02:00 PM
to julia...@googlegroups.com

Stefan, I misunderstood so thank you for the clarification.

Part of the reason I was inclined to think that exceptions are unsupported is that I often see my code segfault if I create an exception e.g. by pressing Ctrl+C.  For instance, if I open the REPL, and type

    julia> x = rand(4000,4000)
    julia> x * x

and press Ctrl+C during execution, I nearly always get a segfault.  In Python I almost never see a segfault as an exception unwinds (and when I do, I file a bug).  But in Julia it seems to be the norm for me.

Somewhat related, I also experience intermittent segfaults on exit on a cluster I use at UCSB unless I set OPENBLAS_NUM_THREADS=1.  (I'd like to get a stack trace on this and file a real bug, but I've been unable so far to find where the core dumps disappear to even with sysadmin help, and the problem goes away when I run julia under gdb).

And when I run the release-0.3 branch under valgrind (even something as simple as the empty script `julia -e ""`), the results can be somewhat scary (at least that is my interpretation).

Together these things imply to me that not enough effort/testing is being put into ensuring that resources are cleaned up correctly as julia terminates, but I'm curious if others have different takes on this.

I've been using Julia since September and overall I feel like I am hitting real bugs at a much higher rate than a few per year (and can in that sense relate to Dan's post).  But for me Julia has made me so much more productive that even dealing with these issues is more fun (and productive) than my former days of using C++.  As such, I'd really like to do what I can to ensure overall trend is heading in the direction of increased stability over time.  I have a few ideas for things to do, but am curious to know first what people think of my above assessment.

Steven G. Johnson

Dec 30, 2014, 5:27:19 PM
to julia...@googlegroups.com

On Tuesday, December 30, 2014 5:02:00 PM UTC-5, Jim Garrison wrote:
Part of the reason I was inclined to think that exceptions are unsupported is that I often see my code segfault if I create an exception e.g. by pressing Ctrl+C.  For instance, if I open the REPL, and type

    julia> x = rand(4000,4000)
    julia> x * x

and press Ctrl+C during execution, I nearly always get a segfault.  In Python I almost never see a segfault as an exception unwinds (and when I do, I file a bug).  But in Julia it seems to be the norm for me.

I'm not seeing a segfault in this particular case on my machine, but in general the difficulty is that external C libraries (such as openblas) are rarely interrupt-safe: stopping them at a random part and then restarting the function call will often crash.  My suggestion has been to defer ctrl-c interrupts (SIGINT signals) around external C calls (ccall), but this has not been implemented yet: https://github.com/JuliaLang/julia/issues/2622

(My understanding is that Python similarly disables interrupts in external C library: http://stackoverflow.com/questions/14271697/ctrlc-doesnt-interrupt-call-to-shared-library-using-ctypes-in-python)
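(Later Julia versions also grew a `disable_sigint` wrapper that lets callers defer Ctrl-C around a foreign call by hand - a sketch, not a full fix:)

    using LinearAlgebra

    function interrupt_safe_mul(A, B)
        disable_sigint() do
            A * B    # SIGINT is deferred until the BLAS call returns
        end
    end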
 
And when I run the release-0.3 branch under valgrind (even something as simple as the empty script `julia -e ""`), the results can be somewhat scary (at least that is my interpretation).

Valgrind tends to report false positives in language runtimes that use mark-and-sweep garbage collection, if I recall correctly.

Steven

Isaiah Norton

Dec 30, 2014, 5:40:35 PM
to julia...@googlegroups.com
And when I run the release-0.3 branch under valgrind (even something as simple as the empty script `julia -e ""`), the results can be somewhat scary (at least that is my interpretation).

Valgrind tends to report false positives in language runtimes that use mark-and-sweep garbage collection, if I recall correctly.

The valgrind issues I saw last time I ran it (2 mo. ago) were mostly (possibly all) missing suppressions for core calls like memcpy. I haven't yet tried with the latest valgrind version as suggested here:

Sean Marshallsay

Dec 30, 2014, 5:50:31 PM
to julia...@googlegroups.com
Just my two cents here Jim but I've been using v0.4 (usually updated daily) extensively since the summer and have only run into one segfault (which sat very firmly in "I'm doing something stupidly unsafe here" territory).

I would argue that if you run into a segfault, Julia is definitely misbehaving and you should file an issue.

Stefan Karpinski

Dec 30, 2014, 6:41:19 PM
to Julia Users
On Mon, Dec 29, 2014 at 3:37 PM, Dan Luu <dan...@gmail.com> wrote:
I have a "stable" .3 build I use for all my Julia scripts and IIRC that's where I saw the dates issue with Gadfly. I dunno, maybe I should only use older releases?

This seems to be at odds with this claim in the blog post:

When I worked around that I ran into a regression that caused plotting to break large parts of the core language, so that data manipulation had to be done before plotting.

That change only exists on master, not in the 0.3.x stable releases. So it seems likely that you were actually using the unstable development version of Julia when you encountered all of these problems. Otherwise you could not have encountered that bug.

Tim Holy

Dec 30, 2014, 8:10:38 PM
to julia...@googlegroups.com
On Tuesday, December 30, 2014 06:40:30 PM Stefan Karpinski wrote:
> That change only exists on master, not in the 0.3.x stable releases. So it
> seems likely that you were actually using the unstable development version
> of Julia when you encountered all of these problems. Otherwise you could
> not have encountered that bug.

Actually, while that particular construction is only available in julia 0.4,
it turned out upon deeper investigation that you can trigger the same bug on
0.3: see, for example,
https://github.com/JuliaLang/Color.jl/issues/68

This is the issue (one of two, actually) that I branded "convertalypse," and
in my view it's one of the nastier bugs that has ever lurked this long in
julia base: this definitely qualifies as a wart to be embarrassed about. It
wasn't discovered until long after julia 0.3's release, unfortunately, and it
has been extremely hard to track down. I tried 3 times myself (devoting big
chunks of a day to it), and failed to make any real progress.

Fortunately, within the last 24 hours, our superhero Jameson Nash seems to
have just diagnosed the problem and proposed a fix.
https://github.com/JuliaLang/julia/issues/8631#issuecomment-68336062.
Hopefully the same fix will apply on julia 0.3, too.

Best,
--Tim

Stefan Karpinski

Dec 30, 2014, 9:29:58 PM
to julia...@googlegroups.com
Ah, that's good to know. Even better that Jameson may have fixed it!

Jameson Nash

Dec 31, 2014, 12:28:40 AM
to julia...@googlegroups.com
I haven't quite fixed it yet, although I've pushed it from the realm of a heisenbug to an issue with the return value of typeintersect for some specific inputs.

I'm guessing however that this is one question where Jeff's expertise (in type system design) is rather critical, over emailing a distribution list (although all the information I've gathered is on the issue tracker, if there's a lurking type-theorist on this list)... :)
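For context, `typeintersect` computes an upper bound on the intersection of two types, and the bug is that it returns wrong answers for certain inputs. Correct behavior looks like this (shown in modern `Tuple{...}` syntax; 0.3-era tuple types were written `(Int, Any)`):

    julia> typeintersect(Tuple{Int, Any}, Tuple{Any, String})
    Tuple{Int64, String}

    julia> typeintersect(Int, String)    # no common subtype
    Union{}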

Avik Sengupta

unread,
Dec 31, 2014, 7:15:25 AM12/31/14
to julia...@googlegroups.com
This has actually been a particularly nasty bug: it broke many packages on 0.3.x, from Gadfly to GLM to HDF5, starting sometime in mid-October. Tim had a workaround in Color.jl that solved some of the issues, but there are still reports of more failures.

Thanks to Tim and Jameson for tracking this ...

Regards
-
Avik

Ismael VC

unread,
Dec 31, 2014, 6:55:11 PM12/31/14
to julia...@googlegroups.com
+1 to adhering to git flow. I had also always expected the master branch to be as stable as possible, with development happening in another branch, not the other way around; sometimes I've had to search for a past working commit in order to build julia, which strikes me as odd, since you guys otherwise follow really good development techniques.

Would it be difficult to change this, maybe for a post 0.4 era?

El lunes, 29 de diciembre de 2014 10:36:19 UTC-6, Christian Peel escribió:
Dan Luu has a critique of Julia up at http://danluu.com/julialang/  (reddit thread at http://bit.ly/1wwgnks)
Is the language feature-complete enough that there could be an entire point release that targeted some of the less-flashy things he mentioned?  I.e. commented code, better testing, error handling, and just fixing bugs?   If it's not there, is there any thoughts on when it would be?

ele...@gmail.com

unread,
Dec 31, 2014, 8:30:38 PM12/31/14
to julia...@googlegroups.com


On Thursday, January 1, 2015 9:55:11 AM UTC+10, Ismael VC wrote:
+1 to adhering to git flow. I had also always expected the master branch to be as stable as possible, with development happening in another branch, not the other way around; sometimes I've had to search for a past working commit in order to build julia, which strikes me as odd, since you guys otherwise follow really good development techniques.

Well, using master as the development branch is the Linux kernel workflow, so I doubt you can call it unusual. It is also the approach mostly used in the git book chapter http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows.

Really experimental things are in feature branches which will eventually be merged into master.

Stable is the release 0.3 branch.

When the first 0.4 RCs are made, a branch will be made for 0.4.

Cheers
Lex

PS: that's the workflow as I understand it as an outside observer, so consider this a test of how understandable Julia's workflow is.

Ismael VC

unread,
Dec 31, 2014, 8:44:53 PM12/31/14
to julia...@googlegroups.com
I didn't know that fact about the Linux kernel, or how usual it is; I've just read the git book, and it explains it like this:

http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows

[branching diagram from the git book omitted]

ele...@gmail.com

unread,
Jan 1, 2015, 1:58:18 AM1/1/15
to julia...@googlegroups.com


On Thursday, January 1, 2015 11:44:53 AM UTC+10, Ismael VC wrote:
I didn't know that fact about the Linux kernel, or how usual it is; I've just read the git book, and it explains it like this:

http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows

[branching diagram from the git book omitted]

And just under that diagram it says:

We will go into more detail about the various possible workflows for your Git project in Chapter 5, so before you decide which branching scheme your next project will use, be sure to read that chapter.
 
Chapter 5 is more on distributed projects such as Julia (and Linux :).

Cheers
Lex

Viral Shah

unread,
Jan 1, 2015, 2:27:38 AM1/1/15
to julia...@googlegroups.com
While the basic assert-based tests are good enough for me, I do wish that the test framework could be more flexible. Some of this is historic - we started out not wanting separate unit vs. comprehensive test suites. The goal with the unit tests was to have something that could be run rapidly during development and catch regressions in the basic system. It evolved into something more than what it was intended to do. We even added some very basic perf tests to this framework.

I find myself wanting a few more things from it as I have worked on the ARM port on and off. Some thoughts follow.

I'd love to be able to run the entire test suite, knowing how many tests there are in all, how many pass and how many fail. Over time, it is nice to know how the total number of tests has increased along with the code in base. Currently, on ARM, tons of stuff fails and I run all the tests by looping over all the test files, and they all give up after the first failure.

If I had, say, the serial numbers of the failing cases, I could keep repeatedly testing just those as I try to fix a particular issue. Currently, the level of granularity is a whole test file.
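
Something like the following sketch is what I have in mind (hypothetical helper names and 0.3-era syntax, not an existing Base API): keep going after a failure, report totals at the end, and record the names of the failing tests so that just those can be re-run:

# Run each named test, continuing past failures.
type Tally
    passed::Int
    failed::Vector{String}   # names of failing tests, for selective re-runs
end

const tally = Tally(0, String[])

function runtest(f::Function, name)
    try
        f()
        tally.passed += 1
    catch err
        push!(tally.failed, name)
        println("FAIL [$name]: $err")
    end
end

runtest("addition") do
    @assert 1 + 1 == 2
end

runtest("integer wraparound") do
    @assert typemax(Int64) + 1 == typemin(Int64)
end

total = tally.passed + length(tally.failed)
println("$(tally.passed)/$total tests passed; failing: $(tally.failed)")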

Documentation of the test framework has been on my mind for a while. We have it in the standard library documentation, but not in the manual.

Code coverage is essential - but that has already been discussed in detail in this thread, and some good work has already started.

Beyond basic correctness testing, numerical codes also need tests for ill-conditioned inputs. For the most part, we depend on our libraries being well-tested (LAPACK, FFTW, etc.), but increasingly we are writing our own libraries. Certainly package authors are pushing boundaries here.
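
For example, a test along these lines (a hedged sketch; scaling the tolerance by the condition number is one reasonable choice, not a standard) exercises the solver on a deliberately ill-conditioned input:

# Solve against a Hilbert matrix; cond(H) is ~1e10 for n = 8.
n = 8
H = [1 / (i + j - 1) for i = 1:n, j = 1:n]
x = ones(n)
b = H * x
xhat = H \ b
# An exact comparison would be meaningless here; allow error
# proportional to cond(H) * eps.
relerr = norm(xhat - x) / norm(x)
@assert relerr < 100 * cond(H) * eps(Float64)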

A better perf test framework would also be great to have. Ideally, the perf tests would cover everything, and would be able to compare against performance in the past. Elliot's Codespeed was meant to do this, but somehow it hasn't worked out yet. I am quite hopeful that we will figure it out.
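
Even a minimal sketch like this would catch gross regressions (the baseline figure is made up, standing in for a stored measurement from a reference machine):

# Time a kernel a few times, take the best run, compare to a baseline.
function timeit(f::Function, reps::Int)
    best = Inf
    for i = 1:reps
        t0 = time()
        f()
        best = min(best, time() - t0)
    end
    best
end

baseline = 0.05   # seconds; hypothetical stored result
t = timeit(() -> sum(rand(10^7)), 5)
t < 2 * baseline || warn("possible perf regression: $(t)s vs. baseline $(baseline)s")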

Stuff like QuickCheck, which generates random test cases, is useful, but I am not convinced it should be in Base.
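
For anyone unfamiliar, the idea is roughly this (a hand-rolled sketch, not the API of any existing QuickCheck port):

# Generate random inputs and assert that a property holds for all of them.
function checkproperty(prop::Function, gen::Function, trials::Int)
    for i = 1:trials
        input = gen()
        prop(input) || error("property failed for input: $input")
    end
    true
end

# Property: sorting is idempotent and preserves length.
checkproperty(v -> issorted(sort(v)) && length(sort(v)) == length(v),
              () -> rand(-1000:1000, rand(0:20)),
              100)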

-viral

On Tuesday, December 30, 2014 3:35:27 AM UTC+5:30, Jameson wrote:
I imagine there are advantages to frameworks in that you can mark expected failures and continue through the test suite after one fails, giving a better % success/failure metric than Julia's simplistic go/no-go approach.

I used JUnit many years ago for a high school class, and found that, relative to `@assert` statements, it had more options for asserting various approximate and conditional statements that would otherwise have been very verbose to write in Java. Browsing back through its website now (http://junit.org/ under Usage and Idioms), it apparently has some more features for testing, such as rules, theories, timeouts, and concurrency. Those features would likely help improve testing coverage by making tests easier to describe.
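
For what it's worth, Base.Test already has macros in this direction (macro names as they exist in 0.3; a small sketch):

using Base.Test

@test_approx_eq 0.1+0.2 0.3          # tolerance chosen automatically
@test_approx_eq_eps 1.0 1.001 1e-2   # explicit tolerance

# versus the verbose hand-rolled version:
@assert abs((0.1 + 0.2) - 0.3) < 1e-12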

On Mon Dec 29 2014 at 4:45:53 PM Steven G. Johnson <steve...@gmail.com> wrote:
On Monday, December 29, 2014 4:12:36 PM UTC-5, Stefan Karpinski wrote:
I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.

Yeah, a lot of projects use the Gitflow model, in which a develop branch is used for experimental work and master is used for (nearly) release candidates.

I can understand where Dan is coming from in terms of finding issues continually when using Julia, but in my case it's more commonly "this behavior is annoying / could be improved" than "this behavior is wrong". It's rare for me to code for a few hours in Julia without filing issues in the former category, but out of the 300 issues I've filed since 2012, it looks like fewer than two dozen are in the latter "definite bug" category.

I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts. Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts). Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling. What am I missing?
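
Concretely, the comparison being drawn seems to be roughly this (FactCheck syntax as of late 2014; later releases changed the arrow operator):

using FactCheck

facts("integer arithmetic") do
    @fact 1 + 1 => 2
    @fact typemax(Int64) + 1 => typemin(Int64)
end

# versus the plain-assert style, with a comment doing the grouping:
# integer arithmetic
@assert 1 + 1 == 2
@assert typemax(Int64) + 1 == typemin(Int64)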

Ismael VC

unread,
Jan 1, 2015, 3:12:59 AM1/1/15
to julia...@googlegroups.com
Perhaps we could add a diagram of the Julia workflow, because I think we are using neither of those models: we don't have lieutenants, nor a dictator, do we?

[diagram omitted: Ismael's sketch of the Julia workflow]

I'm sorry for the ugly diagram, I just want to really understand the current workflow, so correct me if I'm wrong.

I don't know about Linux, but I wonder how frequently they happen to have a broken master? Is this also a common situation among distributed open source projects? (I'm going to study Rust's approach too.)

I just thought that master had to be as stable as possible (reference), and that with the dictator/lieutenant/public_dev approach one gets way more testing, but also needs way more resources.

After all, Linus has to really trust his lieutenants, as the key in this model is delegation and trust.

Since Julia uses neither (a mix?), what's the advantage of the current approach?

Keno Fischer

unread,
Jan 1, 2015, 3:28:39 AM1/1/15
to julia...@googlegroups.com
For us, master means the branch that people willing to test out new changes should be on, in order to provide feedback. If you don't want to do that you should use the stable branch. We try to keep master building as often as possible, and if it doesn't that should be considered a priority and addressed as soon as possible. 

Tobias Knopp

unread,
Jan 1, 2015, 3:45:51 AM1/1/15
to julia...@googlegroups.com
Hi Ismael,

why do you think that master is more frequently broken in Julia than in other projects?
This really does not happen often. People develop in branches, and after serious review these are merged to master.

Furthermore, this discussion is too isolated: it does not take into account that Julia is a programming language, and that it is very important to have a testbed for language changes during a development period.

The discussion is, by the way, very funny, because during the 0.3 dev period we effectively had a "rolling release", i.e. development snapshots were regularly made and kept stable.

Cheers,

Tobi

Ismael VC

unread,
Jan 1, 2015, 4:14:29 AM1/1/15
to julia...@googlegroups.com
Tobias: I don't think that Julia is more frequently broken, but Dan experienced this (his blog post started this discussion), I have also experienced it several times (though I'm an inexpert noob), and I'm sure others have as well.

I just wanted to know the advantages of Julia's approach compared to following things by the "book".

I know the correct thing is to check whether there is an open issue, or else open one (I've spent the last year studying git and a lot of other stuff). All I know comes from whatever I have available to study, like the git book, and since I clearly don't understand, I just want to understand the issue.

Keno: I certainly want to provide feedback and learn, you'll be having me around a lot starting from this year. :D

Obviously I didn't follow the 0.3 dev cycle, but now I have configured gmail to receive absolutely every notification from the Julia project.

As a matter of fact, I'll start building Julia from master again tonight and report any issues I might encounter, something I stopped doing because of my lack of knowledge and the availability of binaries.

Thank you both for taking the time to answer my concerns.

Sean Marshallsay

unread,
Jan 1, 2015, 10:36:11 AM1/1/15
to julia...@googlegroups.com
Ismael,

I think you're over-complicating Julia's workflow slightly: in that first image you posted (http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows), just replace the word "master" with "stable/release" and the word "develop" with "master", and that's pretty much it.

Ismael VC

unread,
Jan 1, 2015, 11:18:09 AM1/1/15
to julia...@googlegroups.com
Ok, so the branching models in the "git book" are just examples, not instructions.


Thanks Sean!

Ivar Nesje

unread,
Jan 1, 2015, 12:05:13 PM1/1/15
to julia...@googlegroups.com
Yes, Git allows for many different models of development. As Julia is a pretty small project (compared to the Linux kernel), we have a much simpler structure. Julia is also in an early phase, so we are exploring different options, and we need a branch to distribute and try out new ideas. We also occasionally make backwards-incompatible changes, so it is really great that we have now left the single rolling-release model we had before 0.3.0, while keeping a stable 0.3 branch without breaking changes.

Currently we are maintaining two main branches (master and release-0.3). I can definitely see the point that it would be great to have an additional develop branch, in order to get more widespread testing of changes before committing them to master. Unfortunately that would require tons of extra effort, cause confusion, and raise the barrier to contribution. Considering the current size of the community, I don't think it would be much of a blessing.

We test all significant changes in a branch-backed PR and run automated regression tests on multiple platforms. Some issues will naturally not be caught by such a process, and some will only occasionally trigger a test failure when a race condition occurs. Still other issues will only fail the build on a VM with a specific processor (or amount of memory), and so will be hard to figure out.

Ivar

Ismael VC

unread,
Jan 1, 2015, 12:27:31 PM1/1/15
to julia...@googlegroups.com
I get it now: in Julia, stable releases live on frozen branches instead of being nodes on a stable master branch, which is what I was expecting. Instead, master is used for developing the next release. qgit is helping me to understand Julia's workflow:

[qgit screenshot of Julia's branch history]

Rust (like in the git book):

[qgit screenshot of Rust's branch history]

I am just curious about the pros and cons of each of those approaches. I'm sorry if my questions are dumb, boring, or annoying; I try not to be.

Ismael VC

unread,
Jan 1, 2015, 12:32:00 PM1/1/15
to julia...@googlegroups.com
Ivar, thank you very much for answering, that's the kind of insight I was searching for... you answered while I was still taking snapshots, so sorry for the noise! :D