In order to change software you need to understand the
program. Unfortunately, most people equate "understanding
the program" with knowing "what the function does".
It also has to mean knowing "why the function does it".
In order to write a program that "lives", that is, one
that can be maintained and changed, you need to capture
why the code exists and why it is written that way.
The best solution I have found is called Literate Programming.
The LP idea is that you write the program for the programmer
rather than the machine. You should be able to sit and read
a book that explains the program, including the "why". The
real code is in the document but the text explaining the
program is the focus.
I would encourage you to look at Lisp in Small Pieces.
It is a literate program, a book, that contains a complete
lisp system with the interpreter and compiler but it is
written to be read.
Tim Daly
"The hardest part of literate programming is the documentation"
If nothing else, at least my current job is teaching me the value of
using Clojure at my next job.
So for now, C# at work... Clojure at home. For everything else, there
is beer...
Timothy
--
“One of the main causes of the fall of the Roman Empire was
that–lacking zero–they had no way to indicate successful termination
of their C programs.”
(Robert Firth)
I currently work on a thick-client system. But our back-end is quite
stateless. One thing that irks me the most about our system in C# is
that we've intertwined business logic with business data. So, in our
application, let's say we have a person class. We will have the
following classes:
tPerson (data class generated by our ORM)
PersonDTO (data transfer object)
Person (business object)
So the idea is that SQL will load data from the tPerson table and
place it into a tPerson C# object. Now, the last thing we want is to
have a hard dependency on tPerson from our Person object; that is, we
never want Person to take tPerson as an argument, because then those
two objects are tightly coupled. Where one goes, the other must
follow.
So instead we have a PersonDTO object that transfers the data between
the objects. The Person object then contains business logic (the user
must have a last name... the user must be over 18, etc.). The sad
thing is that this business logic is now built directly into Person.
This complicates the whole system.
What Rich is advocating is this: throw all the data into a hashmap.
Suddenly, my SQL driver can just dump data into the map, I can throw
that map around without introducing dependencies,
and I can pass that map through the web to the client. In addition to
all this, I should break my rules out of the Person object and into a
set of validator functions. Now I have real power! Suddenly my rules
apply to any and every object that has a last-name property and an age
property! So instead of 3 tightly coupled objects, I will be left with
a map and a set of highly generic functions that can be reused again
and again.
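A minimal sketch of what that might look like in Clojure (the keys
and rules here are hypothetical, not from our actual system):

(def person {:first-name "Ada" :last-name "Lovelace" :age 36})

;; validators are plain functions over any map with the right keys
(defn has-last-name? [m] (boolean (seq (:last-name m))))
(defn adult? [m] (>= (:age m 0) 18))

(every? #(% person) [has-last-name? adult?]) ;=> true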
Getting back to your question, I think it's just good to sit down with
your web app and start asking yourself: "What assumptions am I making
about this code? Am I assuming function a will always call function b?
Could there be a case where I would want a to call c? Well in that
case, maybe I should account for that...". The
SQLObject->DTO->BusinessObject pattern is fairly common in the real
world. So perhaps that is something to re-evaluate in your designs.
Timothy
Having used lisp in many different forms over the last 40 years,
I think that the "complecting" of nil to represent all three
concepts is one of the most brilliant aspects of the language.
In fact it is one of the key flaws of scheme, in my opinion,
that they added true and false.
There is a fourth use of nil that is also very convenient.
Lisp functions return it as a default value. This makes it
possible to wrap functions with functions which other languages
like C++ make very difficult.
(e.g. if we have a C++ function
    void foo()
we cannot wrap it with another as
    bar(foo())
well, we can, but we have to use the comma hack, as in
    bar((foo(), 1))
)
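In Clojure the same wrapping just works, because even a
side-effecting function returns nil (a small sketch; foo and bar are
invented for illustration):

(defn foo [] (println "side effect"))  ; println returns nil
(defn bar [x] (boolean x))
(bar (foo)) ;=> false, no comma hack required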
Java code is littered with checks for null before every
iterator construct, where the code could be so much cleaner
if iterators just "did the right thing" with null, that is,
ended the iteration.
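Clojure's sequence functions do behave this way; for example:

(map inc nil)  ;=> ()   iterating over nothing just ends
(count nil)    ;=> 0
(first nil)    ;=> nil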
The use of nil as a unified value for the many meanings leads
to a lot of useful features. The context of the nil value
completely defines the intended meaning.
Tim Daly
Yes, you correctly interpreted my post. That is my opinion.
> The context of the nil value
> completely defines the intended meaning.
This is a point I disagree with. The context defines the meaning of
nil intended by the person coding that function. It does nothing to
ensure that the coder has thought about what the function will do if
nil is used with another meaning, and it does nothing to ensure that
consumers of that function will use nil in the way the coder intended.
I have found this to be a relatively common source of bugs that pass
test cases (because test cases are written by the coder who has a
specific intention in mind) but show up in the field.
>
> Having used lisp in many different forms over the last 40 years
> I think that the "complecting" of nil to represent all three
> concepts is one of the most brilliant aspects of the language.
That may be. If so, it undermines one of the messages in the video
that complecting=bad. If this particular complection is brilliant, it
naturally leads to a lot of deeper questions: When is complecting
brilliant rather than bad? How does one tell the difference?
How does nil represent empty? '() does not equal nil.
> It is also easy in the sense that it is more similar to what Lisp users (as
> opposed to Scheme) are used to from past experience. But it is
> decidedly less simple to have these ideas complected.
AFAIK, Common Lisp does treat nil and empty lists as equivalent. Looks
like a clear difference, not "more similar" to me.
--
Thorsten Wilms
thorwil's design for free software:
http://thorwil.wordpress.com/
Let's explore that a little further:
* Non-existence
- Accessing a local or var that has never been declared
* False
- (if nil :never-here :but-here)
* Empty
- (seq [])
And maybe there is another?
* Not set
- (def x)
- (:x {:a 1})
But which one should nil actually mean? In a green-field scenario, that is.
I can definitely see where you're going, but I wonder if the use of
nil in Clojure is the result of a cost/benefit analysis in relation to
Java interop?
(cons 1 nil) is one obvious example.
The pattern of using first/next/nil? as a more efficient/compact
alternative to first/rest/empty? is arguably another.
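For example, a sketch of the two idioms side by side (sum-rest and
sum-next are hypothetical names):

;; first/rest/empty? -- rest always returns a (possibly empty) seq
(defn sum-rest [coll]
  (loop [s coll, acc 0]
    (if (empty? s) acc (recur (rest s) (+ acc (first s))))))

;; first/next/nil? -- next returns nil when nothing remains
(defn sum-next [coll]
  (loop [s (seq coll), acc 0]
    (if (nil? s) acc (recur (next s) (+ acc (first s))))))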
True, but the multiple meanings of nil create additional complexity.
Contrast, for example, (filter identity s) and (keep identity s). One
strips nil and false values, the other just strips nil values. Why do
we need both? Because sometimes we mean nil and false to be the same,
and sometimes we don't. Have you ever gotten this wrong? I have, and
I understand these issues pretty well. Maybe you haven't, maybe you
can "juggle more balls" than I can. But as Rich pointed out in the
video, simplicity is about respecting the fact that all of our brains
have limitations and can get tripped up when things are complected.
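Concretely:

(filter identity [1 nil false 2]) ;=> (1 2)        strips nil AND false
(keep identity [1 nil false 2])   ;=> (1 false 2)  strips only nil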
One more anecdote about this.
One time, I wrote a function that looked like this:
(defn f [s]
  (when s .....))
At the time I wrote the function, I did the analysis, and realized
that my function was always being called with sequences (specifically,
sequences that had already been "seq-ified" at some prior point), so
it was safe to use "when" as a way to screen out the empty things. So
I opted for this easy, efficient way to express this.
Somewhere along the line, as my application grew more complex, I
needed to reuse f in another context, and when looking at the docs for
f (which said something like "consumes a sequence and does ..."), I
thought I could safely pass in a lazy sequence. But I couldn't,
because when a lazy sequence is "empty" it is not "nil". My program
was buggy, and it took a while to track down the source of the problem.
Yes, it was my fault. In retrospect, I see that my program would have
been more robust had I not made assumptions about s, and written it as
(when (seq s) ...)
or perhaps
(when (not (empty? s)) ...)
But I do think it's fair to pin at least some of the blame on the
complexity of nil. Since nil can be used interchangeably with the
concept of emptiness in so many circumstances, and was interchangeable
in the initial context of my function, it was all too easy to rely on
that behavior.
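A minimal reconstruction of the failure mode (the body of f is
invented for illustration):

(defn f [s] (when s (reduce + s)))

(f (seq []))            ;=> nil  seq-ified input: empty collapses to nil
(f (filter odd? [2 4])) ;=> 0    an empty lazy seq is truthy, so the body runs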
Oh, I definitely considered the types when I wrote the function. It's
just that at the time I wrote it, I was confident the input would
already be seq-ified. nil, among its many purposes, is a key part of
the "seq interface", and testing for nil is how you are expected to
interact with seqs to determine emptiness. As my program grew, the
assumption that the input would definitely be a seq was invalidated.
This is exactly the inherent challenge of making evolving,
maintainable programs that Rich speaks of in his video.
If the only way to test for a seq to be empty were a general-purpose
function like empty? that applied to all collections, my code would
have worked in the new context. If the only way to test for a seq to
be empty were a seq-specific function like seq-empty? then when placed in
the new context, my code would have broken in a very clear, easily
diagnosable way. But because we test for seqs to be empty using nil,
an object with many other purposes, my code appeared to work, but was
semantically wrong -- always a hard thing to track down.
I suppose one could also place some small portion of the blame on the
problem of assigning clear linguistic labels to Clojure's types. Is
something a sequence? A seq? A seqable? A collection? It's always
been difficult to come up with clear definitions for these categories,
and to specify the appropriate precondition with perfect clarity in a
docstring.
--Mark
THIS is where the multiple meanings of nil in traditional lisp are
brilliant. I believe that Clojure "got it wrong" in the design
decision to make (seq s) and (not (empty? s)) have different
semantics. This is the same mindset that causes (me) so much grief
in Java... looping and iteration does the wrong thing with NULL and
I have to check for NULL every time. Yet everyone, if given an empty
list of things to shop for, will know NOT to go shopping.
>
> Yes, it was my fault. In retrospect, I see that my program would have
> been more robust had I not made assumptions about s, and written it as
> (when (seq s) ...)
> or perhaps
> (when (not (empty? s)) ...)
Actually I don't think this is entirely your fault (modulo the fact
that we need to understand our language semantics). I believe that
this is due to a deep design flaw. You're not the only person to
mis-handle an empty sequence.
>
> But I do think it's fair to pin at least some of the blame on the
> complexity of nil. Since nil can be used interchangeably with the
> concept of emptiness in so many circumstances, and was interchangeable
> in the initial context of my function, it was all too easy to rely on
> that behavior.
>
Tim Daly
Literate Software
On Fri, Oct 21, 2011 at 12:41 PM, David Nolen <dnolen...@gmail.com> wrote:
> Just because we have dynamic types does not give us the freedom to not
> consider them.
If all of these dynamic types and all of the tests "respected nil"
in its many meanings, then
(when s ...)
(when (seq s) ...)
(when (empty? s) ...)
would not be an issue. (when s ...) would "just work".
>
>
> Clearly express a consideration about the types at play.
Clojure was supposed to transparently substitute things like sequences
and vectors everywhere that lisp used lists. That would be true if nil
were respected, but it is not true now, and this complicates the code
without apparent benefit, in my opinion.
In lisp you can ask what the type is (e.g. by calling consp, vectorp,
etc) but these type-specific predicates are relatively rarely used.
In fact, when they are used, you are struggling with a data-level
issue that could probably be abstracted away (i.e., a code smell).
Clojure is a great language but the nil handling is, in my opinion,
a design flaw. It forces the introduction of (empty?...) and an
awareness of the data types into view unnecessarily.
Tim Daly
Literate Software
It can be very difficult to enumerate (or even remember :) all of the contending tradeoffs around something like Clojure's nil handling.
There is no doubt nil punning is a form of complecting. But you don't completely remove all issues merely by using empty collections and empty?; you need something like Maybe, and then things get really gross (IMO, for a concise dynamic language).
I like nil punning, and find it to be a great source of generalization and reduction of edge cases overall, while admitting the introduction of edges in specific cases. I am with Tim in preferring CL's approach over Scheme's, and will admit to personal bias and a certain comfort level with its (albeit small) complexity.
However, it couldn't be retained everywhere. In particular, two things conspire against it. One is laziness. You can't actually return nil on rest without forcing ahead. Clojure old timers will remember when this was different and the problems it caused. I disagree with Mark that this remains significantly complected; nil is not an empty collection, nil is nothing.
Second, unlike in CL where the only 'type' of empty collection is nil and cons is not polymorphic, in Clojure conj *is* polymorphic and there can only be one data type created for (conj nil ...), thus we have [], {}, and empty?. Were data structures to collapse to nil on emptying, they could not be refilled and retain type.
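A small illustration of that point:

(conj nil 1)     ;=> (1)     only one type can grow back from nil
(conj [] 1)      ;=> [1]     but an empty vector stays a vector
(conj {} [:a 1]) ;=> {:a 1}
;; if (pop [1]) collapsed to nil, the next conj could not restore vector-ness:
(conj (pop [1]) 2) ;=> [2]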
At this point, this discussion is academic as nothing could possibly change in this area.
The easiest way to think about it is that nil means nothing, and an empty collection is not nothing. The sequence functions are functions of collection to (possibly lazy) collection, and seq/next is forcing out of laziness. No one is stopping you from using rest and empty?, nor your friend from using next and conditionals. Peace!
Rich
My apologies that what I have said about nil punning came across
as criticism directed at you. That was not intentional. I have
the highest respect for your design work. You're doing an amazing
job and I continue to learn from you.
I understand the lazy vs empty issue and I think you made a good
tradeoff. I'm just bemoaning the fact that nil-punning is really
vital in keeping data-type issues out of my lisp life and that
won't work in Clojure.
Ultimately this is cured by deep learning of the language semantics.
I still have a way to go.
Tim Daly
Literate Software
> Rich,
>
> My apologies that what I have said about nil punning came across
> as criticism directed at you.
It certainly didn't come across that way - no worries :-)
Rich
Brilliant, or merely clever, with all the downsides cleverness
entails? I'm as guilty of "punning on nil" as any old Common Lisper,
but I have to ask myself, "Is it good or just easy?"
> There is a fourth use of nil that is also very convenient.
> Lisp functions return it as a default value. This makes it
> possible to wrap functions with functions which other languages
> like C++ make very difficult.
[...]
I agree, but that does not justify the (mis)use of nil for other purposes.
> The use of nil as a unified value for the many meanings leads
> to a lot of useful features. The context of the nil value
> completely defines the intended meaning.
And now one _must_ provide the context for each use of nil, or, as
Kent Pitman said, "Several broad classes of bugs and confusions can be
traced to improper attempts to recover intentional type information
from representation types." http://www.nhplace.com/kent/PS/EQUAL.html
So, now, instead of "littering" code with tests for nil (or perhaps
taking that as a cue to eliminate the possibility of nil), one has
complected nil and intentional context throughout.
Cheers,
Mike
Yes: Was that a nil value for the key :foo in my map or did :foo not
exist? In Common Lisp, some such functions return multiple values,
de-multiplexing (deplecting?) the separate meanings of nil. Then, of
course, you need a multiple-values-bind special form to handle those
return values. Braids begetting braids.
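Clojure's find does something similar with a single return value:

(find {:foo nil} :foo) ;=> [:foo nil]  key present, value nil
(find {} :foo)         ;=> nil         key absent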
Curious, this, when Clojure already says (not (= nil ())).
Mike
I suspect he might have meant even more when he said, "... learn SQL, finally."
SQL is more than a convenient wrapper around fopen, fread, and fwrite.
Consider the possibility of expressing as many of your business rules
as possible, declaratively, in SQL. Then, beyond simply "dumping"
data into the map from the DBMS, consider that the validation
functions could be queries asking the DBMS if the user input is valid.
Cheers,
Mike
If you need to distinguish between ":foo is missing" and ":foo's value
indicates non-existence", what about:
(get my-map :foo ::missing)
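That does distinguish the two cases:

(get {:foo nil} :foo ::missing) ;=> nil        :foo present, value nil
(get {} :foo ::missing)         ;=> ::missing  :foo absent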
--
Sean A Corfield -- (904) 302-SEAN
An Architect's View -- http://corfield.org/
World Singles, LLC. -- http://worldsingles.com/
Railo Technologies, Inc. -- http://www.getrailo.com/
"Perfection is the enemy of the good."
-- Gustave Flaubert, French realist novelist (1821-1880)
> I like nil punning, and find it to be a great source of generalization and reduction of edge cases overall, while admitting the introduction of edges in specific cases. I am with Tim in preferring CL's approach over Scheme's, and will admit to personal bias and a certain comfort level with its (albeit small) complexity.
Late to this party, but around 1984, George C. Charrette (sp?) wrote a brilliant post to the common lisp mailing list. He told of a dream in which (he said) he'd suddenly realized Scheme was right about everything where it and Common Lisp differed. So, in a white heat of inspiration, he took a relatively simple CL function and rewrote it, step by step, by removing nasty CL-isms like nil punning. Of course, at each step, the function got wordier, more special-case-ey, and (arguably) harder to understand.
It was a masterpiece of snark. I've never been able to find it since. If anyone has a copy, I'd love to get one.
-----
Brian Marick, Artisanal Labrador
Now working at http://path11.com
Contract programming in Ruby and Clojure
Occasional consulting on Agile
He points out that Fexpr is more primitive (in the sense of
"simple") than Lambda. Fexpr decouples the operand access
from the operand evaluation allowing more detailed control.
(Fexpr is an old MacLisp term.)
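Clojure has no fexprs, but a macro gives a rough compile-time feel
for that decoupling; this sketch is an analogy, not a fexpr:

(defmacro unless [test then else]
  ;; the operands arrive unevaluated; we decide what gets evaluated
  `(if ~test ~else ~then))

(unless false :evaluated :skipped) ;=> :evaluated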
Given an s-expression there is always a question of what the
symbols mean. The meaning is supplied by the environment, of
which there are many. For instance, there is a dynamic
environment (runtime call), the static environment (the
value at the time the text is written), the macro environment,
etc. See chapter 2 of Lisp in Small Pieces for a really
in-depth discussion.
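In Clojure terms, a toy contrast between two of those environments:

(def ^:dynamic *greeting* "hello")  ; bound in the dynamic environment
(defn greet [] *greeting*)

(greet)                         ;=> "hello"
(binding [*greeting* "hola"]    ; runtime (dynamic) rebinding
  (greet))                      ;=> "hola"
(let [greet (fn [] "lexical")]  ; lexical shadowing, fixed where written
  (greet))                      ;=> "lexical"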
It is hair-hurting discussions like this that make lisp so
much more interesting than other languages. There isn't a
way to express the concepts in other languages.
Tim Daly
"There is no such thing as a *simple* job" :-)
"
One way to look at the difference between functional and object-oriented
algorithms is to consider the relationship between types and operations.
Let’s say I have a small set of types {A, B, C} and operations {f, g, h, +}.

        f             g             h                  +
A   f(x:A) → A    g(x:A) → B    h(x:A, y:C) → B    x:A + y:A → A
B   f(x:B) → B    g(x:B) → C    h(x:B, y:C) → A    x:B + y:B → B
C   f(x:C) → C    g(x:C) → A    n/a                x:C + y:C → C
Table 1 shows how these operations might be modeled with functions. The
same function name refers to different operations based on the argument
type(s). Operations are grouped by function name, as shown by the
columns of the table."
Consider the table above. You can "walk the type circle" from A->A
by the calls: g(A)->B, g(B)->C, g(C)->A, or equivalently,
g(g(g(A)))->A.
Now consider making the same table for Clojure types and functions.
"A" might be a list, "B" might be a hash-map, etc. The "f" function
might be conj.
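As a toy version of completing the circle (this g is hypothetical,
and note the set step loses ordering):

(defn g [x]
  (cond (list? x)   (vec x)          ; list   -> vector
        (vector? x) (set x)          ; vector -> set
        (set? x)    (apply list x))) ; set    -> list

(g (g (g '(1 2 3)))) ;=> a list again (element order unspecified)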
Given such a clojure table, the question is: Can we complete the
circle for each data type? Are we missing any functions? Is our
set of chosen functions "orthogonal"? Given the set of types and
a list of functions could we construct the table automatically?
Tim Daly
> Given an s-expression there is always a question of what the
> symbols mean. The meaning is supplied by the environment, of
> which there are many. For instance, there is a dynamic
> environment (runtime call), the static environment (the
> value at the time the text is written), the macro environment,
> etc. See chapter 2 of Lisp in Small Pieces for a really
> in-depth discussion.
I lost my copy of Lisp In Small Pieces, and it's prolly the book I miss
the most. Another book that I got a lot out of was Concepts, Techniques,
and Models of Computer Programming (Van Roy and Haridi). It's a great
tour of many "paradigms" of programming. If you are enjoying this
topic, you might want to check it out. It is less Lisp specific,
obviously.
--
Craig Brozefsky <cr...@red-bean.com>
Premature reification is the root of all evil