I don't like writing if-else blocks.
--
guava-...@googlegroups.com
Project site: http://guava-libraries.googlecode.com
This group: http://groups.google.com/group/guava-discuss
This list is for general discussion.
To report an issue: http://code.google.com/p/guava-libraries/issues/entry
To get help: http://stackoverflow.com/questions/ask (use the tag "guava")
The functional style of this example has at least one benefit: it's lazy.
If you don't need the laziness, then yes, the imperative style is
often easier on the eyes.
I got used to Function though. I don't see the plumbing anymore, and
if you name your Function instances nicely, then something like
'Iterables.transform(listOfX, toY)' looks alright. You can reduce the
damage to your teammates' eyes by trying to isolate the Function
plumbing a bit.
You can get a win if you can compose functional operations, though.
Written out imperatively, they quickly aren't so readable anymore
either.
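For instance, isolating the Function behind a well-chosen name keeps the call sites clean. Here's a minimal self-contained sketch (it stubs Guava's Function and an eager version of Iterables.transform in plain Java so it runs without the library; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NamedFunctionDemo {
    // Stand-in for com.google.common.base.Function.
    interface Function<F, T> {
        T apply(F input);
    }

    // Eager stand-in for Iterables.transform (the real one returns a lazy view).
    static <F, T> List<T> transform(List<F> from, Function<? super F, ? extends T> function) {
        List<T> result = new ArrayList<T>(from.size());
        for (F element : from) {
            result.add(function.apply(element));
        }
        return result;
    }

    // The anonymous-class plumbing lives here, out of sight of the call sites.
    static final Function<String, Integer> TO_LENGTH = new Function<String, Integer>() {
        public Integer apply(String input) {
            return input.length();
        }
    };

    public static void main(String[] args) {
        List<String> words = Arrays.asList("guava", "if", "else");
        // The call site now reads almost like prose.
        System.out.println(transform(words, TO_LENGTH)); // [5, 2, 4]
    }
}
```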
On Wed, Apr 20, 2011 at 2:02 PM, Colin Decker <cgde...@gmail.com> wrote:
> Never, ever use a design pattern just because you think you should be using
> a design pattern. Your attempts to force design patterns on something that
> should be very simple are making your code anything but simple. Guava
> doesn't provide anything that's going to make it easier for you to do what
> you want because you really shouldn't be doing it.
--
Craig Berry
Software Engineer
Google (Santa Monica CA)
+1. If I recall correctly, the book _Refactoring_ describes a
"Replace Conditional with Polymorphism" refactoring, which is
frequently invaluable. But not all |if| statements can be similarly
eliminated. In particular, if you're explicitly passing in two
continuations, one for the |true| case and one for the |false| case,
the code you're writing is the same kind of Master Control Flow as an
|if| statement, and you aren't getting the benefits of polymorphism.
Here's a similar answer to a similar question:
http://stackoverflow.com/questions/519515/how-can-polymorphism-replace-an-if-else-statement-inside-of-a-loop/519537#519537
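A tiny before/after sketch of that refactoring (the shapes and names are invented for illustration):

```java
public class PolymorphismDemo {
    // Before: an if statement switching on a type code.
    static double areaWithIf(String shape, double size) {
        if (shape.equals("square")) {
            return size * size;
        } else { // circle
            return Math.PI * size * size;
        }
    }

    // After: each subtype carries its own behavior, and the if disappears.
    interface Shape {
        double area();
    }

    static class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    static class Circle implements Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    public static void main(String[] args) {
        Shape shape = new Square(3);
        // Dispatch happens via polymorphism, not a conditional.
        System.out.println(shape.area()); // 9.0
    }
}
```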
Yes and no. It's the most common reason, and, as you point out, if
you can easily extract a Function object into a field or another
method, transform() can be shorter and more intention-revealing than a
loop. This is especially true if you have a Function that can be
reused for multiple calls -- a transform() here, an
ordering.onResultOf() there, a Multimaps.index() over there.... Of
course, opinions on when to use transform() still vary widely. I tend
to think that, if the Function performs an obvious, simple, easily
nameable job, and if you're willing to give it its own method or
field, the transform() approach will often win out. (When we're
working on collections code, sometimes a view is *required*, so
transform() is the only game in town.) Still, while "the clumsiest
possible syntax" is "just" syntax, we've all seen code (heck, I've
*written* code) that overuses functional programming enough that the
syntax gets in the way. IMO, this arises mostly in unusual cases.
One example is nested functional programming calls, where you see
something like "Iterables.filter(Iterables.transform(inputs,
nameFunction), Predicates.and(before(x), after(y)))." Here static
imports can help, but "filter(transform(inputs, nameFunction),
and(before(x), after(y)))" is still messy. We could further help you out by
providing filter() and transform() as infix methods on an object, and
we'll probably eventually do this, but in the meantime, syntax is a
barrier for nested calls. I've written similarly ugly code by trying
to operate functionally on nested data structures. And of course, if
your transformation does anything that throws checked exceptions, you
have to do it by hand. In the end, we all have to use our judgment on
each instance, and, while I don't want to put words in Rob's mouth, if
he were forced to choose between the Chain and transform() examples in
this thread to put into his code, I suspect he'd pick transform(). We
can disagree over where the line is between imperative and functional
programming, but I don't think anyone is trying to equate those two
examples.
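One way to tame the nesting without waiting for infix methods is simply to name the intermediate step. A runnable sketch (Function, Predicate, transform(), and filter() are stubbed eagerly in plain Java here so no library is needed; in real code they'd come from Guava, and the data and names are invented):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NestingDemo {
    interface Function<F, T> { T apply(F input); }
    interface Predicate<T> { boolean apply(T input); }

    // Eager stub for Iterables.transform.
    static <F, T> List<T> transform(List<F> from, Function<? super F, ? extends T> function) {
        List<T> result = new ArrayList<T>();
        for (F element : from) {
            result.add(function.apply(element));
        }
        return result;
    }

    // Eager stub for Iterables.filter.
    static <T> List<T> filter(List<T> from, Predicate<? super T> predicate) {
        List<T> result = new ArrayList<T>();
        for (T element : from) {
            if (predicate.apply(element)) {
                result.add(element);
            }
        }
        return result;
    }

    static final Function<Integer, String> NAME_FUNCTION = new Function<Integer, String>() {
        public String apply(Integer id) { return "item-" + id; }
    };

    static final Predicate<String> SHORT_NAME = new Predicate<String>() {
        public boolean apply(String name) { return name.length() <= 6; }
    };

    public static void main(String[] args) {
        List<Integer> inputs = Arrays.asList(1, 25, 40, 7);
        // Nested: filter(transform(inputs, NAME_FUNCTION), SHORT_NAME) reads inside-out.
        // Naming the intermediate result flattens it:
        List<String> names = transform(inputs, NAME_FUNCTION);
        List<String> shortNames = filter(names, SHORT_NAME);
        System.out.println(shortNames); // [item-1, item-7]
    }
}
```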
All that said, I do want to emphasize that there are additional
arguments against the *lazy* functional approach:
1) If your function is expensive, you may prefer to perform it only
once. (If you do nothing but loop over the result of transform() a
single time, then this is already happening, but it's easy to forget
if you pass the List far from the point at which it was created.)
2) If your function creates mutable objects, you probably want to
create each output object only once. My smug response to this is
"Well, of *course* I wouldn't use mutable objects -- here or anywhere
else that they'd be a problem." But a Googler had a fascinating bug a
while back because of this. Since I'm bringing it up in this context,
the problem may be obvious, but it wasn't at the time, when we had no
idea whether the problem was in Futures.successfulAsList (coming soon
to a Guava near you!), in his CheckedFuture implementation, or
somewhere else. Here's
his code. Somehow, even though successfulAsList(...).get(...) was
returning a list with entries, doSomething was never being called
because checkedGet kept timing out.
List<CheckedFuture<String, MyException>> futures = ...;
try {
  Futures.successfulAsList(futures).get(TIMEOUT, TimeUnit.SECONDS);
}
...
for (CheckedFuture<String, MyException> future : futures) {
  try {
    doSomething(future.checkedGet(0, SECONDS));
  } catch (MyException e) {}
}
The problem was hidden in that first "...." He was creating the list
with Lists.transform(). I don't think of Futures as being "mutable"
in the "mutable objects are evil" sense because they mostly hide their
state, but here that matters.
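The trap generalizes: a view from Lists.transform() applies the function again on every access, so each read can hand back a brand-new object. A self-contained sketch of that behavior (the lazy view is stubbed in plain Java; counting function applications stands in for the fresh-futures problem):

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class LazyViewDemo {
    interface Function<F, T> { T apply(F input); }

    // Lazy stand-in for Lists.transform: the function runs on every get().
    static <F, T> List<T> lazyTransform(final List<F> from,
                                        final Function<? super F, ? extends T> function) {
        return new AbstractList<T>() {
            public T get(int index) { return function.apply(from.get(index)); }
            public int size() { return from.size(); }
        };
    }

    static int applicationsForTwoReads() {
        final AtomicInteger applications = new AtomicInteger();
        List<Integer> view = lazyTransform(Arrays.asList(1, 2, 3),
            new Function<Integer, Integer>() {
                public Integer apply(Integer n) {
                    applications.incrementAndGet();
                    return n * n;
                }
            });
        view.get(0); // first read applies the function...
        view.get(0); // ...and so does the second, yielding a distinct result object
        return applications.get();
    }

    public static void main(String[] args) {
        System.out.println(applicationsForTwoReads()); // 2
    }
}
```

With futures in place of integers, each pass over the view sees different future instances, which is exactly why successfulAsList and the checkedGet loop disagreed.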
You still might legitimately claim that you wouldn't make this
mistake, and that's fine, but I want to counter the idea that lazy
transformations are a safe default. The decision to go with either a
for() loop or a transform() call requires some consideration.
3) If the transformation may fail due to programmer errors (==
RuntimeExceptions, e.g., an NPE if any element is null), you may prefer
to perform all transformations immediately so that any failure occurs
as soon as possible when the most information about the errant caller
is available.
So again, I'm more of a fan of transform() than Rob is, and I'm in
agreement with you on your example. Still, I think that reasonable
people can disagree on your example and agree on extreme cases like
Chain. And regardless of what our usual preferences are, we should
consider each instance on a case-by-case basis.
4) If your transformer is a Function<SomeVeryLargeObject, Integer>
(say, veryLargeObject.getId()), then the view returned by transform()
will retain references to the full objects, keeping them alive even
though you may need only their IDs.
I also meant to explicitly point out that not only could you avoid
these problems by calling ImmutableList.copyOf(transform(...)), but
we've been considering a Lists.transformedCopy() to do it for you.
We're still not sure whether to add it or not.
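Roughly, such a transformedCopy() (the name is the one proposed here, not a shipped API) would amount to an eager loop that applies the function exactly once per element and then never again:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TransformedCopyDemo {
    interface Function<F, T> { T apply(F input); }

    // Sketch of a transformedCopy: eager, one application per element, and it
    // drops all references to the source elements (unlike a lazy view).
    static <F, T> List<T> transformedCopy(List<F> from,
                                          Function<? super F, ? extends T> function) {
        List<T> copy = new ArrayList<T>(from.size());
        for (F element : from) {
            copy.add(function.apply(element));
        }
        return Collections.unmodifiableList(copy);
    }

    static final Function<String, Integer> LENGTH = new Function<String, Integer>() {
        public Integer apply(String input) { return input.length(); }
    };

    public static void main(String[] args) {
        List<Integer> lengths = transformedCopy(Arrays.asList("a", "bb", "ccc"), LENGTH);
        // Reading the copy never re-runs the function or touches the source list.
        System.out.println(lengths); // [1, 2, 3]
    }
}
```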
On Fri, Apr 22, 2011 at 3:27 AM, Jed Wesley-Smith
<jed.wes...@gmail.com> wrote:
> it is amusing that people still want to explicitly create intermediate
> ephemeral Lists for these kind of trivial transformations rather than use
> the functional approach. The ONLY argument against it being that java has
> the clumsiest possible syntax for an inline anonymous function
> Still, while "the clumsiest possible syntax" is "just" syntax, we've
> all seen code (heck, I've *written* code) that overuses functional
> programming enough that the syntax gets in the way.
> All that said, I do want to emphasize that there are additional
> arguments against the *lazy* functional approach:
> 1) If your function is expensive, you may prefer to perform it only
> once. (If you do nothing but loop over the result of transform() a
> single time, then this is already happening, but it's easy to forget
> if you pass the List far from the point at which it was created.)
> 3) If the transformation may fail due to programmer errors (==
> RuntimeExceptions, e.g., an NPE if any element is null), you may prefer
> to perform all transformations immediately so that any failure occurs
> as soon as possible when the most information about the errant caller
> is available.
> So again, I'm more of a fan of transform() than Rob is, and I'm in
> agreement with you on your example. Still, I think that reasonable
> people can disagree on your example and agree on extreme cases like
> Chain. And regardless of what our usual preferences are, we should
> consider each instance on a case-by-case basis.
I can wave my hands a bit about functional programming vs. imperative,
but I'm useless once we get into "second-order" considerations like
these. I'm glad that we have people around with that experience.
>> 1) If your function is expensive, you may prefer to perform it only
>> once. (If you do nothing but loop over the result of transform() a
>> single time, then this is already happening, but it's easy to forget
>> if you pass the List far from the point at which it was created.)
>
> You can be lazy and still memoize. Feature Request!
We do in fact have some work underway to have MapMaker do this for
you. Of course, you can build it yourself on top of MapMaker today,
and I'll paste in some of our internal code for anyone who just can't
wait:
public static <F, T> Function<F, T> memoize(
    Function<? super F, ? extends T> delegate, MapMaker mapMaker) {
  final Map<F, T> map = mapMaker.makeComputingMap(delegate);
  return new Function<F, T>() {
    @Override
    public T apply(F from) {
      return map.get(from);
    }
  };
}
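For anyone without MapMaker handy, the same idea can be sketched with just the JDK (a simplification: under contention two threads may both compute the same key, and null results aren't supported, both of which makeComputingMap handles better):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MemoizeDemo {
    interface Function<F, T> { T apply(F input); }

    // JDK-only sketch of memoize: cache each computed result in a ConcurrentHashMap.
    static <F, T> Function<F, T> memoize(final Function<? super F, ? extends T> delegate) {
        final Map<F, T> cache = new ConcurrentHashMap<F, T>();
        return new Function<F, T>() {
            public T apply(F from) {
                T value = cache.get(from);
                if (value == null) {
                    value = delegate.apply(from); // may run twice under contention
                    cache.put(from, value);
                }
                return value;
            }
        };
    }

    static int delegateCallsForTwoLookups() {
        final int[] calls = {0};
        Function<Integer, Integer> square = memoize(new Function<Integer, Integer>() {
            public Integer apply(Integer n) {
                calls[0]++;
                return n * n;
            }
        });
        square.apply(4);
        square.apply(4); // served from the cache; the delegate is not re-run
        return calls[0];
    }

    public static void main(String[] args) {
        System.out.println(delegateCallsForTwoLookups()); // 1
    }
}
```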
Sorry, I meant to respond to this but skipped over it while writing.
It's possible that, in simplifying the code from the original example,
I've messed things up, and it's possible that I misinterpreted the
author's intent, but I believe the goal is: "When all tasks are done,
or when |SECONDS| seconds have elapsed, process all available
results." One way to do this is to maintain a deadline to use for
each checkedGet call:
List<CheckedFuture<String, MyException>> futures = ...;
long deadline = nanoTime() + SECONDS.toNanos(TIMEOUT);
for (CheckedFuture<String, MyException> future : futures) {
  try {
    doSomething(future.checkedGet(deadline - nanoTime(), NANOSECONDS));
  } catch (MyException e) { ... }
}
That's actually shorter than the code I gave:
List<CheckedFuture<String, MyException>> futures = ...;
try {
  Futures.successfulAsList(futures).get(TIMEOUT, TimeUnit.SECONDS);
} catch (Exception e) {} // deal with it later
for (CheckedFuture<String, MyException> future : futures) {
  try {
    doSomething(future.checkedGet(0, SECONDS));
  } catch (MyException e) { ... }
}
And that's not to mention that the deadline approach is more likely to
be able to parallelize its doSomething() calls with the work of the
futures or that we really ought to deal with InterruptedException
specially instead of using a blanket "catch (Exception e)." So I'm
not sure what the advantage of the successfulAsList() approach is.
Some people don't realize that it's safe to pass a negative deadline
to get()/checkedGet(), so perhaps he thought he would need an if()
statement there? I can follow up internally if you'd like, but I
think I've at least got the motivation right, if not the fine details
of his approach.