When I grepped through Base to find instances of actually catching an exception and doing something based on the particular exception, I could only find a single one.
Here are a few responses to the stuff in this thread, plus Keno's comment on HN.
The Travis stats are only for builds against master, i.e., only for
things that got merged. BTW, this is mentioned in the linked post. For
every project listed in http://danluu.com/broken-builds/, only the
"main" branch on github was used, with the exception of a couple
projects where that didn't make sense (IIRC, scala has some weird
setup, as does d). "Using travis less" doesn't really make sense in
that context. I heard that response a lot from people in various
communities when I wrote that post, but from checking the results, the
projects that have better travis results are more rigorous and, on
average, the results are biased against the projects with the highest
uptimes. There are exceptions, of course.
I have a "stable" .3 build I use for all my Julia scripts and IIRC
that's where I saw the dates issue with Gadfly. I dunno, maybe I
should only use older releases? But if I go to the Julia download
page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true,
but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I
haven't tried with 0.3.4 since I haven't touched Julia for a while.
It's possible that the issue is now fixed, but the issue is still open
and someone else also commented that they're seeing the same problem.
Sorry, I'm not being a good open source citizen and filing bugs, but
when you run into 4 bugs when writing a half-hour script, filing bugs
is a drag on productivity. A comment I've gotten here and elsewhere is
basically "of course languages have bugs!". But there have been
multiple instances where I've run into more bugs in an hour of Julia
than I've ever hit with scala and go combined, and scala is even known
for being buggy! Between scala and go, I've probably spent 5x or 10x
the time I've spent in Julia. Just because some number will be
non-zero doesn't mean that all non-zero numbers are the same. There
are various reasons that's not a fair comparison. I'm just saying that
I expect to hit maybe one bug per hour while writing Julia, and I
expect maybe 1 bug per year for most languages, even pre-1.0 go.
I don't think 40 vs. 50 really changes the argument, but of course
I've been drifting down in github's ranking since I haven't done any
Julia lately and other people are committing code.
I don't think it's inevitable that language code is inscrutable. If I
grep through the go core code (excluding tests, but including
whitespace), it's 9% pure comment lines, and 16% lines with comments.
It could use more comments, but it's got enough comments (and
descriptive function and variable names) that I can go into most files
and understand the code.
It sounds like, as is, there isn't a good story for writing robust
Julia programs? There are bugs and exceptions will happen. Putting
aside that `catch` non-deterministically fails to catch, what's a
program supposed to do when some bug in Base causes a method to throw
a bounds error? You've said that the error handling strategy is
go-like, but I basically never get a panic in go (I actually can't
recall it ever having happened, although it's possible I've gotten one
at some point). That's not even close to being true in Julia.
Terminating is fine for scripts where I just want to get at the source
of the bug and fix it, but it's not so great for programs that
shouldn't ever crash or corrupt data? Is the answer just "don't write
stuff like that in Julia"?
I didn't read through the broken builds post in detail – thanks for the clarification. Julia basically uses master as a branch for merging and simmering experimental work. It seems like many (most?) projects don't do this, and instead use master for stable work.
I don't understand his perspective on "modern test frameworks" in which FactCheck is light-years better than a big file full of asserts. Maybe my age is showing, but from my perspective FactCheck (and its Midje antecedent) just gives you a slightly more verbose assert syntax and a way of grouping asserts into blocks (which doesn't seem much better than just adding a comment at the top of a group of asserts). Tastes vary, of course, but Dan seems to be referring to some dramatic advantage that isn't a matter of mere spelling. What am I missing?
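Concretely, the two styles compare roughly like this (a sketch; the `=>` macro syntax is from FactCheck's 0.x releases of that era and may have changed since):

```julia
# Plain asserts: one file, comments for grouping
# arithmetic
@assert 1 + 1 == 2
@assert isapprox(4 / 2, 2.0)

# FactCheck: the same checks, with labeled, nestable blocks
using FactCheck
facts("arithmetic") do
    @fact 1 + 1 => 2
    context("division") do
        @fact 4 / 2 => roughly(2.0)
    end
end
```

The visible differences are indeed mostly spelling, plus one behavioral one: a failed `@fact` is recorded and the suite keeps running, while a failed `@assert` aborts the file.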
I'd add: improve the manual to make Julia's philosophy with regard
to exceptions clear (easy).
I'm only realizing today that exceptions in Julia are supposed to be
raised only when the caller is at fault. If we want packages to follow
this pattern, we had better make it as clear as possible.
But one
feature we could stand to add is asserting properties that must be
true for all arguments, and running through lots of combinations of
instances.
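A minimal sketch of what that feature could look like, using nothing but a loop and `@assert` (the property and ranges here are made up for illustration):

```julia
# Hypothetical sketch: assert a property that must hold for all
# arguments by running through many combinations of instances.
# Property: integer addition commutes.
for a in -10:10, b in -10:10
    @assert a + b == b + a "property failed for a=$a, b=$b"
end
```

A real version would generate random or boundary-value instances rather than a fixed grid, in the spirit of QuickCheck-style property testing.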
I feel like you are trying to convey the impression that finding bugs in Julia results in insults and no help from us. That is a total mischaracterization of the project. There is also no equivalence between responses to bug reports and responses to blog posts. As far as I know, all 9000 of our bug reports have been received with gratitude. However, your post says or implies that we don't care about error handling, tell people to fix their own bugs, and even that we don't understand our own code. You can very well expect some pushback on that.
Unlike Python, catching exceptions in Julia is not considered a valid way to do control flow. Julia's philosophy here is closer to Go's than to Python's – if an exception gets thrown it should only ever be because the caller screwed up and the program may reasonably panic. You can use try/catch to handle such a situation and recover, but any Julia API that requires you to do this is a broken API.
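In that spirit, a sketch of the distinction (the `safe_div` helper is hypothetical, made up for illustration):

```julia
# Go-like style: failure is an ordinary, checkable return value.
safe_div(a, b) = b == 0 ? nothing : div(a, b)

r = safe_div(1, 0)
r === nothing && println("handled without an exception")

# Exception style: div throws a DivideError on zero. Catching it to
# drive routine control flow is the pattern being discouraged; the
# throw should mean the caller screwed up.
r = try
    div(1, 0)
catch err
    isa(err, DivideError) ? nothing : rethrow(err)
end
```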
I would really like if I could throw and catch an exception without needing to consider that my program might panic as a result of doing so. I just looked through the entire corpus of Julia code I have written so far, and the only places I catch exceptions are when the exception is actually due to calling a Python API via PyCall. I am willing to accept that using exceptions is not a very Julian way of doing things, but I still want them to work when they are needed.
Part of the reason I was inclined to think that exceptions are unsupported is that I often see my code segfault if I raise an exception, e.g. by pressing Ctrl+C. For instance, if I open the REPL and type
julia> x = rand(4000,4000)
julia> x * x
and press Ctrl+C during execution, I nearly always get a segfault. In Python I almost never see a segfault as an exception unwinds (and when I do, I file a bug). But in Julia it seems to be the norm for me.
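For reference, when interrupt delivery works, Ctrl+C surfaces as an ordinary catchable exception; the complaint above is that during a BLAS call like `x * x` it often takes down the process instead. A sketch of the behavior I'd expect:

```julia
# Ctrl+C should raise InterruptException, which unwinds like any
# other exception and can be caught.
x = rand(4000, 4000)
try
    x * x
catch err
    isa(err, InterruptException) || rethrow(err)
    println("computation interrupted cleanly")
end
```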
And when I run the release-0.3 branch under valgrind (even something as simple as the empty script `julia -e ""`), the results can be somewhat scary (at least that is my interpretation).
Valgrind tends to report false positives in language runtimes that use mark-and-sweep garbage collection, if I recall correctly.
When I worked around that, I ran into a regression in which plotting broke large parts of the core language, so that data manipulation had to be done before plotting.
Dan Luu has a critique of Julia up at http://danluu.com/julialang/ (reddit thread at http://bit.ly/1wwgnks)
Is the language feature-complete enough that there could be an entire point release targeting some of the less-flashy things he mentioned, i.e. commented code, better testing, error handling, and just fixing bugs? If it's not there yet, are there any thoughts on when it would be?
+1 to adhering to the git flow. I had also always expected the master branch to be as stable as possible, with development happening in another branch, not the other way around. Sometimes I've had to search for a past working commit in order to build Julia, which strikes me as odd, since you guys otherwise really follow good development techniques.
I didn't know that fact about the Linux kernel, or how usual it is; I've just read the git book, and it explains it like this:
http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows
I imagine there are advantages to frameworks in that you can expect failures and continue through the test suite after one fails, giving a better success/failure percentage than Julia's simplistic go/no-go approach. I used JUnit many years ago for a high school class, and found that, relative to `@assert` statements, it had more options for asserting various approximate and conditional statements that would otherwise have been very verbose to write in Java. Browsing through its website now (http://junit.org/ under Usage and Idioms), it apparently has some more features for testing, such as rules, theories, timeouts, and concurrency. Those features would likely help improve testing coverage by making tests easier to describe.
Yeah, a lot of projects use the Gitflow model, in which a develop branch is used for experimental work and master is used for (nearly) release candidates.
I can understand where Dan is coming from in terms of finding issues continually when using Julia, but in my case it's more commonly "this behavior is annoying / could be improved" than "this behavior is wrong". It's rare for me to code for a few hours in Julia without filing issues in the former category, but out of the 300 issues I've filed since 2012, it looks like less than two dozen are in the latter "definite bug" category.