Library jar before and after ripping out views.
% ls -l /scratch/trunk2/build/pack/lib/scala-library.jar
-rw-r--r-- 1 paulp admin 7161826 Dec 16 19:53
/scratch/trunk2/build/pack/lib/scala-library.jar
% ls -l /scratch/trunk3/build/pack/lib/scala-library.jar
-rw-r--r-- 1 paulp admin 6561034 Dec 16 19:51
/scratch/trunk3/build/pack/lib/scala-library.jar
Statistics from the scala code in my general ~/src directory.
% find . -name '*.scala' -print0 |xargs -0 cat | wc -l
904766
% find . -name '*.scala' -print0 | xargs -0 ack '\.view\b' | egrep -v ':(import|package)' | grep -v deprecated | wc -l
92
One view usage per 10,000 lines of scala code. (And that may be
generous, as some of those 92 hits are either copies of compiler/library
source or things testing the compiler/library source. However I could
have missed some dotless uses so we'll call it a wash.)
So a feature used in .01% of lines of scala written is consuming 9% of
the jar. I focus on 9% not because I'm obsessed with jar size but as a
proxy for all the overhead associated with any large body of code. And
views have not been small in the overhead department, and they're still
not close to being a robust unit.
https://issues.scala-lang.org/secure/IssueNavigator.jspa?mode=hide&requestId=10906
Displaying issues 1 to 49 of 49 matching issues. [...]
Solution: exile to separate jar, deprecate for 2.10, and retire
them afterward. They can easily be pimped in by any who wish to use
them, so the 99.99% of lines shouldn't have to pay the inheritance tax to
make things very slightly easier on the .01%. (I'm not even sure it
makes things slightly easier.) And maybe many view designs will flourish
in this brave new viewless world.
If we were determined not to let those view methods go away, we could
stub them as empty methods in the core library and have some way to
configure them to invoke a library elsewhere. Anything to get them out of the
main collections inheritance hierarchy.
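For the "pimped in" route, here is a minimal sketch (all names hypothetical, not a proposal for the actual API) of how a separately-shipped jar could bolt a tiny lazy view onto any collection via enrichment, without the core hierarchy knowing anything about views:

```scala
import scala.language.implicitConversions

// Hypothetical sketch: a separate views jar enriches Iterable with a
// deliberately tiny lazy view, keeping the core collections view-free.
object ViewlessWorld {
  // Each combinator just wraps the iterator factory; nothing runs
  // until force is called, and the view can be traversed repeatedly.
  final class SimpleView[A](mk: () => Iterator[A]) {
    def map[B](f: A => B): SimpleView[B] = new SimpleView(() => mk().map(f))
    def filter(p: A => Boolean): SimpleView[A] = new SimpleView(() => mk().filter(p))
    def take(n: Int): SimpleView[A] = new SimpleView(() => mk().take(n))
    def force: List[A] = mk().toList
  }
  implicit def enrich[A](xs: Iterable[A]): Enriched[A] = new Enriched(xs)
  final class Enriched[A](xs: Iterable[A]) {
    def simpleView: SimpleView[A] = new SimpleView(() => xs.iterator)
  }
}
```

With `import ViewlessWorld._` in scope, `(1 to 10).simpleView.map(_ * 2).take(3).force` behaves like today's `view ... force` for these operations, at a tiny fraction of the surface area.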
*** FOOTNOTES ***
Enclosed for any who may wish to validate my experimental results.
[1] the list of projects containing those 900,000 lines of scala
Asynchronous-Functional-Programming Autoproxy-Lite Gitalist Hooks
InteractiveHelp Play20 SBT.tmbundle Scalala Scalog ShiftIt Xus
a.sh alacs amzn-cloudinit annotation-tools annovention anti-xml
argot ashlar asm bash-completion bash-completion-lib blueeyes
blueprint borachio brepl caliper categories cccp chromium-tabs
collection-tests config-typesafe coreen d3 derivative-combinators
dotfiles.repository.steve.org.uk dracula egit egit-github ensime escabot
extractors factorie files.txt finagle findbugs git-prompt git-pulls
git-sbt-processor git-scripts gitblit github-gem github-services
github-terminal gitignore gll-combinators gmailr google-code-prettify
google-diff-match-patch google-toolbox-for-mac grizzled-scala groovy
guava hammersmith hawtdispatch helix homebrew hudson ii irccat ivy
izpack jdepend jenkins jgit jhbuild jline jruby jruby-launcher
jsonpretty jsr292-cookbook jvm-language-runtime jvm-verifier jython
kawa launch4j learn_you_a_scalaz leiningen logula lookatgit mada
magpie maven-sbt maven-scala-plugin meow metascala minikanren
mirah mnemonic mozrepl my-benchmark nailgun neercs nestedvm
ninja~indify-repo noboxing-plugin ostrich osx-window-sizing ozma
parameterized-trigger-plugin parboiled pegdown perf4j play play-scala
projectplus pythagoras quickcursor rapture-io recursivity-commons
replhtml repo rugu samskivert sbt-idea sbt-twt sbteclipse scala-arm
scala-cel scala-collection-test scala-datetime scala-graph scala-ide
scala-io scala-javautils scala-json scala-proxy scala-query scala-redis
scala-refactoring scala-reflectionator scala-stm scala-symbol-browser
scala-time scalabot scalacheck scalaconsole scalacs scalafs
scalaj-collection scalaj-reflect scalamd scalang scalanlp-core
scalariform scalastyle scalate scalatra scalaz scalaz-camel scalex
scamacs scutil simile-butterfly sing slf4j smock snakeyaml solarized
sonatype-aether source.tmbundle spakle spark spde specs specs2
specs2-test sperformance standard-project stopwatch talking-puffin texto
threeten treetop twitter-util uniscala unplanned up vibrantinklion
view-files.txt vscaladoc windmill yarrgs yeti zmpp2
[2] the view hits.
anti-xml/src/main/scala/com/codecommit/antixml/Group.scala:205:
val b = VectorCase.newBuilder[B] ++= g.view.take(index)
anti-xml/src/main/scala/com/codecommit/antixml/ZipperHoleMap.scala:129:
val els = depthFirst.view map { case (p,v) =>
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/ast/CanonicalNameAST.scala:430: val
condition = matchCondition(matchCase.view, names)
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/ast/ClassifyAST.scala:427: matchCondition(target,
targetType, caseAST.view),
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/ast/RawAST.scala:169: matchCondition(it.view),
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/ast/TypedRawAST.scala:140: matchCondition(m.view,
types),
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/flow/CojenWalker.scala:299: it.view.values.foreach(
arg => walkOps(ctx, arg) )
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/flow/CojenWalker.scala:303: it.view.checkMethod,
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/flow/CojenWalker.scala:305: (it.view.valueTypes.map(t
=> T.forClass(t)) ++ Seq(TypeDesc.OBJECT)).toArray
ashlar/compiler/src/main/scala/com/smokejumperit/ashlar/compiler/flow/CojenWalker.scala:325: val
bodyCtx = matchCase.view.exposedVariables.zipWithIndex.foldLeft(ctx) {
case (tmpCtx, (expVar, idx)) =>
blueeyes/src/main/scala/blueeyes/persistence/cache/functional/TemporalCache.scala:25:
TemporalCacheState(removed.view.map(t => (t._1,
t._2.value)).toMap, TemporalCache(kept.toMap))
factorie/src/main/scala/cc/factorie/la/SparseBinaryVector.scala:182:
override def toString = getClass.getName+"("+"len="+length+"
1s=("+ind.view(0, _size).mkString("[", ", ", "]")+"))"
factorie/src/test/scala/cc/factorie/TestUtils.scala:32: def
remainingSeq[T](s:Seq[(T, Int)], seq:Seq[(T, Int)]) =
seq.view(lastElemZipIndex(s)+1, seq.length)
mada/src/main/scala/sequence/vector/Scala.scala:17: for (e <- _1.view) {
mada/src/test/scala/sequence/iterative/ConversionTest.scala:53: x.view
mnemonic/bytecode/src/main/scala/net/virtualvoid/bytecode/backend/ASM.scala:310:
candsAndLabels.view.unzip.zipped foreach emitBranch
ozma/src/compiler/scala/tools/nsc/backend/ozcode/OzCodes.scala:123:
val indices = for ((param, idx) <- params.view.zipWithIndex
ozma/src/scalalib/scala/collection/generic/SliceInterval.scala:28: *
val coll = (1 to 100).view.slice(10, 30).slice(1, 3)
ozma/src/scalalib/scala/collection/IterableProxyLike.scala:38:
override def view = self.view
ozma/src/scalalib/scala/collection/IterableProxyLike.scala:39:
override def view(from: Int, until: Int) = self.view(from, until)
ozma/src/scalalib/scala/collection/parallel/ParIterableLike.scala:777:
override def seq = self.seq.view
ozma/src/scalalib/scala/collection/parallel/ParIterableViewLike.scala:88:
override def seq = forcedPar.seq.view.asInstanceOf[IterableView[S,
CollSeq]]
ozma/src/scalalib/scala/collection/parallel/ParSeqLike.scala:344:
override def seq = self.seq.view
ozma/src/scalalib/scala/collection/parallel/ParSeqViewLike.scala:70:
override def seq = forcedPar.seq.view.asInstanceOf[SeqView[S,
CollSeq]]
ozma/src/scalalib/scala/collection/SeqLike.scala:518: b ++=
toCollection(rest).view drop replaced
ozma/src/scalalib/scala/collection/SeqLike.scala:531: b ++=
toCollection(rest).view.tail
ozma/src/scalalib/scala/collection/SeqProxyLike.scala:69: override
def view = self.view
ozma/src/scalalib/scala/collection/SeqProxyLike.scala:70: override
def view(from: Int, until: Int) = self.view(from, until)
ozma/src/scalalib/scala/collection/TraversableProxyLike.scala:94:
override def view = self.view
ozma/src/scalalib/scala/collection/TraversableProxyLike.scala:95:
override def view(from: Int, until: Int): TraversableView[A, Repr] =
self.view(from, until)
ozma/src/scalalib/scala/xml/factory/NodeFactory.scala:32:
ch1.view.zipAll(ch2.view, null, null) forall { case (x,y) => x eq y }
scala-collection-test/src/test/scala/com/bizo/scala/collection/TraversableSpec.scala:531:
def result() = { builder.result.view }
scala-collection-test/src/test/scala/com/bizo/scala/collection/TraversableSpec.scala:543:
def result() = { builder.result.view(min, max) }
scala-graph/core/src/main/scala/scalax/collection/GraphDegree.scala:75:
nodes.toList.view map nodeDegree filter degreeFilter min
scala-graph/core/src/main/scala/scalax/collection/GraphDegree.scala:89:
nodes.toList.view map nodeDegree filter degreeFilter max
scala-graph/core/src/main/scala/scalax/collection/GraphDegree.scala:100:
val v = nodes.toList.view map nodeDegree sorted IntReverseOrdering
scala-graph/core/src/main/scala/scalax/collection/GraphDegree.scala:115:
else nodes.view map nodeDegree filter
degreeFilter)
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:36:
(for (jsonNode <- nodeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:71:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:90:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:107:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:124:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:142:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:161:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:178:
(for (jsonEdge <- edgeList.view) yield {
scala-graph/json/src/main/scala/scalax/collection/io/json/imp/Stream.scala:195:
(for (jsonEdge <- edgeList.view) yield {
scala-io/core/src/main/scala/scalax/io/ArrayBufferSeekableChannel.scala:52:
val toRead = data.view.slice(position.toInt, min(position.toInt +
dst.limit-dst.position, data.size))
scala-io/core/src/main/scala/scalax/io/JavaConverters.scala:68:
lazy val chars = codec.decode(t.view.map{_.toByte}.toArray)
scala-io/core/src/main/scala/scalax/io/LongTraversableLike.scala:687://
override def view = super.view
scala-io/core/src/test/scala/scalaio/test/LongTraversableTest.scala:338:
val list = (f(expectedData().toList.view) map { i => expectedCount
+= 1; i })
scala-query/src/main/scala/org/scalaquery/ql/Query.scala:42: new
Query[E, U](unpackable, cond, condHaving, modifiers ::: by.view.map(c
=> new Grouping(Node(c))).toList)
scala-query/src/main/scala/org/scalaquery/ql/TableBase.scala:45:
m <- getClass().getMethods.view
scala-query/src/main/scala/org/scalaquery/ql/TableBase.scala:61:
m <- getClass().getMethods.view
scalaj-reflect/src/main/scala/scalaj/reflect/AutographBook.scala:17:
tpe.getAnnotations.view collect { case x: ScalaSignature => x } map
{sigBytesFromAnnotation} headOption
scalaj-reflect/src/main/scala/scalaj/reflect/AutographBook.scala:41:
splits.view flatMap { case (l,r) => safeGetClass(l) map (_ -> r) }
headOption
Scalala/src/main/scala/scalala/library/Library.scala:213:
matrix.domain._1.view.map(sliceRows(matrix,_)).reduceLeft(opAdd);
Scalala/src/main/scala/scalala/library/Library.scala:223:
matrix.domain._2.view.map(sliceCols(matrix,_)).reduceLeft(opAdd);
Scalala/src/main/scala/scalala/library/Plotting.scala:464://
val values = items.view.map(item =>
scalanlp-core/data/src/main/scala/scalanlp/text/tokenize/PTBTokenizer.scala:92:
SL(pref.mkString("") + d + tail.view.map(_.chars).mkString(""))
scalanlp-core/data/src/main/scala/scalanlp/text/tokenize/Tokenizer.scala:102:
if (f.isInstanceOf[Transformer]) f else f.andThen((i :
Iterable[String]) => i.view);
scalanlp-core/data/src/main/scala/scalanlp/util/ScalaQL.scala:60:
peeked.view(0, n);
scalanlp-core/learn/src/main/scala/scalanlp/classify/LogisticClassifier.scala:101:
for( datum <- range.view map data) {
scalanlp-core/learn/src/main/scala/scalanlp/maxent/MaxEntObjectiveFunction.scala:154:
val (grad: DenseVector[Double],prob: Double) =
eCounts.zipWithIndex.par.view.map { case (vec,c) =>
scalanlp-core/learn/src/main/scala/scalanlp/stats/RandomizationTest.scala:42:
val baseDiff = diff(lpairs.view.map(_._1),lpairs.view.map(_._2));
scalanlp-core/learn/src/main/scala/scalanlp/stats/RandomizationTest.scala:72:
) apply (dataset.view.map(Example.lift(c1)).map(_.label).force,dataset.view.map(Example.lift(c2)).map(_.features).force);
scalate/samples/scalate-sample/src/main/scala/org/fusesource/scalate/sample/ServletRendersView.scala:40:
context.view(model)
scalate/scalate-spring-mvc/src/main/scala/org/fusesource/scalate/spring/view/ScalateView.scala:62:
context.view(it.get.asInstanceOf[AnyRef])
scalate/scalate-spring-mvc/src/main/scala/org/fusesource/scalate/spring/view/ScalateViewResolver.scala:29:
override def requiredViewClass(): java.lang.Class[_] =
classOf[org.fusesource.scalate.spring.view.ScalateView]
scalate/scalate-util/src/main/scala/org/fusesource/scalate/util/Files.scala:32:
file.listFiles.view.map(recursiveFind(_)(filter)).find(_.isDefined).getOrElse(None)
scalate/scalate-util/src/main/scala/org/fusesource/scalate/util/Files.scala:40:
directories.view.map(recursiveFind(_)(filter)).find(_.isDefined).getOrElse(None)
scalate/scalate-util/src/main/scala/org/fusesource/scalate/util/Files.scala:48:
children(file).view.flatMap(andDescendants(_)))
scalate/scalate-util/src/main/scala/org/fusesource/scalate/util/Objects.scala:88:
constructors.view.map(c =>
tryCreate(c)).find(_.isDefined).getOrElse(None)
scalate/scalate-util/src/main/scala/org/fusesource/scalate/util/ResourceLoader.scala:89:
sourceDirectories.view.map(new File(_, uri)).find(_.exists) match {
scalate/scalate-wikitext/src/main/scala/org/fusesource/scalate/wikitext/SwizzleLinkFilter.scala:94:
sourceDirectories.view.map
{findMatching(_)}.find(_.isDefined).getOrElse(None)
scutil/src/main/scala/scutil/ext/InputStreamImplicits.scala:23: if
(len != -1) out ++= buffer.view(0, len)
specs2/src/main/scala/org/specs2/reporter/SpecsArguments.scala:51:
argumentsFragments.view.zip(nestedArguments).collect { case
(ApplicableArguments(value), args) => (value, args) }
specs2/src/main/scala/org/specs2/reporter/SpecsArguments.scala:55:
def fragmentAndSpecNames: Seq[(T, SpecName)] =
argumentsFragments.view.zip(nestedSpecNames).collect { case
(ApplicableArguments(value), name) => (value, name) }
specs2/src/main/scala/org/specs2/reporter/SpecsArguments.scala:61:
argumentsFragments.view.zip(nestedArguments).zip(nestedSpecNames).collect
{ case ((ApplicableArguments(value), args), name) => (value, args,
name) }
specs2/src/main/scala/org/specs2/runner/SpecificationsFinder.scala:23:
specificationNames(path, pattern, basePath,
verbose).view.filter(filter).flatMap(n => createSpecification(n))
sperformance/src/test/scala/CollectionsShootout.scala:148:
List.range(1, size).toList.view
sperformance/src/test/scala/CollectionsShootout.scala:155:
ArrayBuffer.range(1, size).view
sperformance/src/test/scala/CollectionsShootout.scala:184:
col.view.map(_ * 2).take(100).force
sperformance/src/test/scala/CollectionsShootout.scala:189:
col.view.filter(_ % 2 == 0).take(100).force
sperformance/src/test/scala/CollectionsShootout.scala:194:
col.view.zipWithIndex.filter(_._2 % 2 == 0).take(100).force
sperformance/src/test/scala/CollectionsShootout.scala:236:
performance of "Vector.view" in {
sperformance/src/test/scala/CollectionsShootout.scala:239:
a.view.zipWithIndex.filter(x => x._1.startsWith("1") &&
(x._2%3==0)).map(_._1).force
sperformance/src/test/scala/CollectionsShootout.scala:244:
a.view.filter(_.startsWith("1")).foldLeft("")( (prev, cur) => cur)
sperformance/src/test/scala/CollectionsShootout.scala:260:
performance of "ArrayBuffer.view" in {
sperformance/src/test/scala/CollectionsShootout.scala:263:
a.view.zipWithIndex.filter(x => x._1.startsWith("1") &&
(x._2%3==0)).map(_._1).force
sperformance/src/test/scala/CollectionsShootout.scala:268:
a.view.filter(_.startsWith("1")).foldLeft("")( (prev, cur) => cur)
uniscala/uniscala-granite/granite-core/src/main/scala/net/uniscala/granite/DefaultIDUrlParamCodec.scala:61:
val result = codecs.urlParamCodecs.view map { codec =>
uniscala/uniscala-granite/granite-core/src/main/scala/net/uniscala/granite/DefaultIDUrlParamCodec.scala:86:
val result = codecs.urlParamCodecs.view map { codec =>
I don't understand the question :-(
or the view.dropRight problem: how is a view supposed to know when it
has reached the last-nth item without traversing the whole collection
first (or asking for its size, which might be as expensive as traversing it)?
my suggestion would be to fix the problems (at least the unsupported
operation exceptions) by splitting Traversable into
StrictTraversable, NonStrictTraversable, and an
I-don't-care-which-one-it-is Traversable that they both extend, similar to
the .par & .seq collections. all unsupported operations like dropRight
would simply not exist in the trait, and no one would try to call them.
i must admit it's just an idea and i haven't checked whether it would work at all
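As an aside on the dropRight question: a lazy dropRight does not actually need the size up front; a single pass with an n-element buffer is enough. A sketch of the idea (this is not what the standard library does, just an illustration):

```scala
import scala.collection.mutable.Queue

// One-pass lazy dropRight over an iterator: keep up to n elements
// buffered; once the buffer holds n+1 elements, its head is
// guaranteed not to be among the last n, so it can be emitted.
def dropRightLazy[A](it: Iterator[A], n: Int): Iterator[A] = new Iterator[A] {
  private val buf = Queue.empty[A]
  def hasNext: Boolean = {
    while (buf.size <= n && it.hasNext) buf.enqueue(it.next())
    buf.size > n
  }
  def next(): A = {
    if (!hasNext) throw new NoSuchElementException("next on empty iterator")
    buf.dequeue()
  }
}
```

The cost is O(n) extra space and one pass, never a full traversal or a size query.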
-1000 to removing views.
Cleaning them up and reducing overhead is great, but views are crucial to the collections library. My guess is that people don't know them well enough to use them, or are suffering from views' poor zip performance ...
In any case, something that removes views I am against. They're a perfect solution when you need them.
I agree. Reasons to keep views:
- They are hard to do, so rolling your own is a pain
- They convert some algorithms from N space and time to constant
space and time
- They are the equivalent of the standard behavior of Java's collections.
Views seem to be underused. But I think that reflects the fact that, often,
performance is not critical, so the standard behavior of collections
is fine. But when you need them, you really do need them.
Cheers
-- Martin
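The space claim is easy to demonstrate: a strict pipeline materializes every intermediate collection before the final `take` throws most of it away, while the view (or an iterator) materializes nothing until asked. A small illustration:

```scala
// Strict: map and filter each build a full million-element
// intermediate collection before take(5) discards nearly all of it.
val strict = (1 to 1000000).map(_ * 2).filter(_ % 3 == 0).take(5).toList

// View: the same pipeline fused into one short lazy pass; only the
// five results are ever built.
val viaView = (1 to 1000000).view.map(_ * 2).filter(_ % 3 == 0).take(5).toList

// An iterator gets the identical effect without views at all.
val viaIter = (1 to 1000000).iterator.map(_ * 2).filter(_ % 3 == 0).take(5).toList
```

All three produce `List(6, 12, 18, 24, 30)`; only the first pays O(N) space for the intermediates.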
Maybe they just prefer not to roll the dice on runtime errors:
https://issues.scala-lang.org/browse/SI-4332
I agree with the last comment in that ticket.
"For now, scala collection views are not usable at all, see errors
below. It is very bad thing, since there are NO WARNINGS about broken
feature in API docs. I thought Scala API is more or less stable,
disregarding small bugs. I believe it is wrong to leave broken
features in Scala releases."
> In any case, something that removes views I am against. They're a perfect solution when you need them.
Nobody is suggesting removing them from the world.
Those are all reasons to keep them in the world. They aren't reasons
to keep them in the core scala library.
Reasons to remove them: they're huge, they're dangerous, and they're
barely used (with good reason, which I can elaborate in more detail if
it's really necessary but I don't think it should be necessary to
belabor it.) And nobody is going to fix them. I feel very safe in
saying that, because I'm not: they're too complicated, I don't like
the design, I don't feel it can be made robust, and I don't want to
sink any more time into them.
How often is it really that you can't get the same effect - more
predictably, no runtime errors, no 10^7 factor performance regressions
- with an iterator? I realize you can construct an example, that's not
the question. I think features should have to earn their keep far
more effectively than this one does.
What use cases exist right now that are satisfiable by views but
not by iterators? Is it possible to have a simpler version of views
that supports fewer operations? In my mind, views for the most part
should just be Iterables, regardless of their originating collection.
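To make the comparison concrete, here is the same lazy pipeline as a view and as an iterator (plain Scala, nothing library-specific); for a single pass they are interchangeable, and re-traversal is the main thing the iterator gives up:

```scala
val xs = List(1, 2, 3, 4, 5, 6, 7, 8)

// Identical lazy pipeline, once as a view, once as an iterator.
val v  = xs.view.map(_ + 1).filter(_ % 2 == 0)
val it = xs.iterator.map(_ + 1).filter(_ % 2 == 0)

val first  = v.toList   // List(2, 4, 6, 8)
val second = v.toList   // List(2, 4, 6, 8) -- a view can be traversed again
val once   = it.toList  // List(2, 4, 6, 8)
val twice  = it.toList  // Nil -- the iterator is spent after one pass
```

This is a sketch of the trade-off for simple pipelines, not a claim about every view operation (indexed access, patch, etc. have no iterator analogue).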
trait ParSeqViewLike[+T,
+Coll <: Parallel,
+CollSeq,
+This <: ParSeqView[T, Coll, CollSeq] with
ParSeqViewLike[T, Coll, CollSeq, This, ThisSeq],
+ThisSeq <: SeqView[T, CollSeq] with
SeqViewLike[T, CollSeq, ThisSeq]]
extends GenSeqView[T, Coll]
with GenSeqViewLike[T, Coll, This]
with ParIterableView[T, Coll, CollSeq]
with ParIterableViewLike[T, Coll, CollSeq, This, ThisSeq]
with ParSeq[T]
with ParSeqLike[T, This, ThisSeq]
{
self =>
import tasksupport._
trait Transformed[+S] extends ParSeqView[S, Coll, CollSeq]
with super[ParIterableView].Transformed[S] with
super[GenSeqViewLike].Transformed[S] {
override def splitter: SeqSplitter[S]
override def iterator = splitter
override def size = length
}
trait Sliced extends super[GenSeqViewLike].Sliced with
super[ParIterableViewLike].Sliced with Transformed[T] {
// override def slice(from1: Int, until1: Int): This =
//   newSliced(from1 max 0, until1 max 0).asInstanceOf[This]
override def splitter = self.splitter.psplit(from, until - from)(1)
override def seq = self.seq.slice(from, until)
}
trait Mapped[S] extends super[GenSeqViewLike].Mapped[S] with
super[ParIterableViewLike].Mapped[S] with Transformed[S] {
override def splitter = self.splitter.map(mapping)
override def seq = self.seq.map(mapping).asInstanceOf[SeqView[S, CollSeq]]
}
trait Appended[U >: T] extends super[GenSeqViewLike].Appended[U]
with super[ParIterableViewLike].Appended[U] with Transformed[U] {
override def restPar: ParSeq[U] = rest.asParSeq
override def splitter = self.splitter.appendParSeq[U,
SeqSplitter[U]](restPar.splitter)
override def seq = self.seq.++(rest).asInstanceOf[SeqView[U, CollSeq]]
}
trait Forced[S] extends super[GenSeqViewLike].Forced[S] with
super[ParIterableViewLike].Forced[S] with Transformed[S] {
override def forcedPar: ParSeq[S] = forced.asParSeq
override def splitter: SeqSplitter[S] = forcedPar.splitter
override def seq = forcedPar.seq.view.asInstanceOf[SeqView[S, CollSeq]]
}
trait Zipped[S] extends super[GenSeqViewLike].Zipped[S] with
super[ParIterableViewLike].Zipped[S] with Transformed[(T, S)] {
override def splitter = self.splitter zipParSeq otherPar.splitter
override def seq = (self.seq zip other).asInstanceOf[SeqView[(T,
S), CollSeq]]
}
trait ZippedAll[U >: T, S] extends
super[GenSeqViewLike].ZippedAll[U, S] with
super[ParIterableViewLike].ZippedAll[U, S] with Transformed[(U, S)] {
override def splitter: SeqSplitter[(U, S)] =
self.splitter.zipAllParSeq(otherPar.splitter, thisElem, thatElem)
override def seq = (self.seq.zipAll(other, thisElem,
thatElem)).asInstanceOf[SeqView[(U, S), CollSeq]]
}
trait Reversed extends super.Reversed with Transformed[T] {
override def splitter: SeqSplitter[T] = self.splitter.reverse
override def seq = self.seq.reverse.asInstanceOf[SeqView[T, CollSeq]]
}
// use only with ParSeq patches, otherwise force
trait Patched[U >: T] extends super.Patched[U] with Transformed[U] {
def patchPar: ParSeq[U] = patch.asInstanceOf[ParSeq[U]]
override def splitter: SeqSplitter[U] =
self.splitter.patchParSeq[U](from, patchPar.splitter, replaced)
override def seq = self.seq.patch(from, patch,
replaced).asInstanceOf[SeqView[U, CollSeq]]
}
// !!!
//
// What is up with this trait and method, why are they here doing
// nothing but throwing exceptions, without even being deprecated?
// They're not implementing something abstract; why aren't they
// just removed?
//
// use Patched instead
trait Prepended[U >: T] extends super.Prepended[U] with Transformed[U] {
unsupported
}
protected def newPrepended[U >: T](elem: U): Transformed[U] = unsupported
/* wrapper virtual ctors */
protected override def newSliced(_endpoints: SliceInterval):
Transformed[T] = new { val endpoints = _endpoints } with Sliced
protected override def newAppended[U >: T](that: GenIterable[U]):
Transformed[U] = {
// we only append if `that` is a parallel sequence, i.e. it has a precise splitter
if (that.isParSeq) new Appended[U] { val rest = that }
else newForced(mutable.ParArray.fromTraversables(this, that))
}
protected override def newForced[S](xs: => GenSeq[S]): Transformed[S] = {
if (xs.isParSeq) new Forced[S] { val forced = xs }
else new Forced[S] { val forced = mutable.ParArray.fromTraversables(xs) }
}
protected override def newMapped[S](f: T => S): Transformed[S] = new
Mapped[S] { val mapping = f }
protected override def newZipped[S](that: GenIterable[S]):
Transformed[(T, S)] = new Zipped[S] { val other = that }
protected override def newZippedAll[U >: T, S](that: GenIterable[S],
_thisElem: U, _thatElem: S): Transformed[(U, S)] = new ZippedAll[U, S]
{
val other = that
val thisElem = _thisElem
val thatElem = _thatElem
}
protected def newReversed: Transformed[T] = new Reversed { }
protected def newPatched[U >: T](_from: Int, _patch: GenSeq[U],
_replaced: Int): Transformed[U] = new {
val from = _from;
val patch = _patch;
val replaced = _replaced
} with Patched[U]
/* operation overrides */
/* sliced */
override def slice(from: Int, until: Int): This =
newSliced(SliceInterval(from, until)).asInstanceOf[This]
override def take(n: Int): This = newSliced(SliceInterval(0,
n)).asInstanceOf[This]
override def drop(n: Int): This = newSliced(SliceInterval(n,
length)).asInstanceOf[This]
override def splitAt(n: Int): (This, This) = (take(n), drop(n))
/* appended */
override def ++[U >: T, That](xs: GenTraversableOnce[U])(implicit
bf: CanBuildFrom[This, U, That]): That =
newAppended(xs.toTraversable).asInstanceOf[That]
override def :+[U >: T, That](elem: U)(implicit bf:
CanBuildFrom[This, U, That]): That = ++(Iterator.single(elem))(bf)
//override def union[U >: T, That](that: GenSeq[U])(implicit bf:
//  CanBuildFrom[This, U, That]): That = this ++ that
/* misc */
override def map[S, That](f: T => S)(implicit bf: CanBuildFrom[This,
S, That]): That = newMapped(f).asInstanceOf[That]
override def zip[U >: T, S, That](that: GenIterable[S])(implicit bf:
CanBuildFrom[This, (U, S), That]): That =
newZippedTryParSeq(that).asInstanceOf[That]
override def zipWithIndex[U >: T, That](implicit bf:
CanBuildFrom[This, (U, Int), That]): That =
newZipped(ParRange(0, splitter.remaining, 1, false)).asInstanceOf[That]
override def reverse: This = newReversed.asInstanceOf[This]
override def reverseMap[S, That](f: T => S)(implicit bf:
CanBuildFrom[This, S, That]): That = reverse.map(f)
/* patched */
override def updated[U >: T, That](index: Int, elem: U)(implicit bf:
CanBuildFrom[This, U, That]): That = {
require(0 <= index && index < length)
patch(index, List(elem), 1)(bf)
}
override def padTo[U >: T, That](len: Int, elem: U)(implicit bf:
CanBuildFrom[This, U, That]): That = patch(length, Seq.fill(len -
length)(elem), 0)
override def +:[U >: T, That](elem: U)(implicit bf:
CanBuildFrom[This, U, That]): That = patch(0,
mutable.ParArray.fromTraversables(Iterator.single(elem)), 0)
override def patch[U >: T, That](from: Int, patch: GenSeq[U],
replace: Int)(implicit bf: CanBuildFrom[This, U, That]): That =
newPatched(from, patch, replace).asInstanceOf[That]
/* forced */
// override def diff[U >: T](that: GenSeq[U]): This =
//   newForced(thisParSeq diff that).asInstanceOf[This]
// override def intersect[U >: T](that: GenSeq[U]): This =
//   newForced(thisParSeq intersect that).asInstanceOf[This]
// override def sorted[U >: T](implicit ord: Ordering[U]): This =
//   newForced(thisParSeq sorted ord).asInstanceOf[This]
override def collect[S, That](pf: PartialFunction[T, S])(implicit
bf: CanBuildFrom[This, S, That]): That =
filter(pf.isDefinedAt).map(pf)(bf)
override def scanLeft[S, That](z: S)(op: (S, T) => S)(implicit bf:
CanBuildFrom[This, S, That]): That =
newForced(thisParSeq.scanLeft(z)(op)).asInstanceOf[That]
override def scanRight[S, That](z: S)(op: (T, S) => S)(implicit bf:
CanBuildFrom[This, S, That]): That =
newForced(thisParSeq.scanRight(z)(op)).asInstanceOf[That]
override def groupBy[K](f: T => K): immutable.ParMap[K, This] =
thisParSeq.groupBy(f).map(kv => (kv._1,
newForced(kv._2).asInstanceOf[This]))
override def force[U >: T, That](implicit bf: CanBuildFrom[Coll, U,
That]) = bf ifParallel { pbf =>
executeAndWaitResult(new Force(pbf,
splitter).mapResult(_.result).asInstanceOf[Task[That, _]])
} otherwise {
val b = bf(underlying)
b ++= this.iterator
b.result
}
/* tasks */
protected[this] class Force[U >: T, That](cbf: CanCombineFrom[Coll,
U, That], protected[this] val pit: SeqSplitter[T])
extends Transformer[Combiner[U, That], Force[U, That]] {
var result: Combiner[U, That] = null
def leaf(prev: Option[Combiner[U, That]]) = result =
pit.copy2builder[U, That, Combiner[U, That]](reuse(prev,
cbf(self.underlying)))
protected[this] def newSubtask(p: SuperParIterator) = new
Force(cbf, down(p))
override def merge(that: Force[U, That]) = result = result combine
that.result
}
}
On Sat, Dec 17, 2011 at 07:27:42AM -0800, Paul Phillips wrote:
> Hey lurkers, you think this is irrelevant to you if you don't use
> views, think again. Drink in the trait signature below and, if you
> have the courage, follow the thread of interlocking type parameters
> and bounds as far as it goes. Then try to imagine what kind of tax
> that places on changing anything in the collections. Oh, you don't
> want to change anything in the collections? Can I close all the
> tickets then?
OK, I'll bite.
For the work I'm doing I have tended to stay away from views. So I
wouldn't miss them much myself if they were demoted to some kind of
second-class status.
I don't know how much weight that carries, but it's what I have.
-- Erik
--
Johannes
-----------------------------------------------
Johannes Rudolph
http://virtual-void.net
Cheers
-- Martin
It's impossible to ship with the standard library because it's so big
(or so I hear.) This is like having a cage full of lions, and when
someone wants to free the lions, saying that you don't see any other
lions roaming free, so why should these lions get special treatment?
There are no lions roaming free because they're all in the cage!
No, but proguard isn't able to remove view classes in many cases even
if they are not used anywhere in user code. And they are the current
roadblock for using the Scala standard library with Android.
I agree views are a huge pain to implement correctly, but that does
not invalidate the fact that they are essential in some scenarios.
Here's the fundamental problem with views: to do them correctly you
need to re-implement a lot of classes in every layer of your
inheritance hierarchy. Each class captures the behavior of one common
collection operation and possibly a couple of variants. We used to
have 4 such layers: Traversable, Iterable, Seq, and Linear/IndexedSeq.
Views were a pain to implement then, but it was sort of manageable. We
now have almost twice the number of layers because of the Gen...
types. This seems to have pushed views over the edge, to where no one
can understand the code anymore and bugs like SI-4332 creep in and are
almost impossible to fix. (I am pretty sure that you could take a slice
of a slice before 2.9; the whole point of views is that this should be
efficient!)
So, here's an alternate proposal to get back some sanity: Let's
completely separate the implementations of sequential and parallel
collections, including their views. This means we still keep GenSeq,
GenIterable as common supertypes, but these would be pure interfaces
without any implementation. And traits GenIterableLike, GenSeqLike,
etc would be removed.
This means that both parallel and sequential collections have to
implement their own view hierarchies, but at least there's no
interference between them. And we collapse the number of levels from 7
to 4 or maybe 3 (if Iterable and Traversable get merged). Hopefully
this makes views manageable again. And it would have other benefits
such as reducing the number of supertraits of common types such as
List.
Cheers
-- Martin
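In rough Scala, the proposed split could look something like this (a hypothetical sketch; the Pure* names and concrete class are invented for illustration and merely stand in for the real Gen* traits):

```scala
// Pure interfaces: no method bodies, so they impose no shared
// implementation and no linearization surprises on their subtypes.
trait PureGenIterable[+A] {
  def iterator: Iterator[A]
  def map[B](f: A => B): PureGenIterable[B]
}
trait PureGenSeq[+A] extends PureGenIterable[A] {
  def apply(i: Int): A
  def length: Int
}

// The sequential branch implements everything itself...
final class SequentialSeq[A](xs: List[A]) extends PureGenSeq[A] {
  def iterator = xs.iterator
  def map[B](f: A => B) = new SequentialSeq(xs map f)
  def apply(i: Int) = xs(i)
  def length = xs.length
}
// ...and a parallel branch would do the same independently, sharing
// only the interfaces above rather than GenIterableLike-style
// implementation traits.
```

The point is that the two branches can then evolve (and implement their views) without interfering with each other.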
I will travel backward in time eight months to endorse this idea.
...
On 4/17/11 12:47 PM, martin odersky wrote:
> Agree in principle, but wondering how to do better?
That's one of those tough open ended questions, because it seems
unlikely I have any ideas which are going to be news to you. Broadly
speaking, looser coupling and a cleaner separation of implementation and
interface. I feel like traits have been something of a trap in that
they have reduced the motivation to cleanly define interfaces, and they
have made it too easy to reuse code (and actually, too hard not to) so
code ends up being reused in places where the implementation for that
purpose is questionable at best; and in other cases the "reused" code
becomes a hostile entity which one must defend against by being sure to
override the existing implementation.
References: various issues which have arisen with mutable vs. immutable,
linear vs. indexed access, parallel vs. sequential traversal, lazy vs.
eager evaluation, by-name vs. by-value calls, view vs. non-view monadic
operations.
I think mixin composition where resulting behavior depends on the
linearization is something which can only be gotten right in small
doses, and that we are way past the maximum dose in the collections.
I have lots of ideas, but assuming we are only in the market for
short-term plausibly viable approaches with the parallel collections, I
think we should try to completely divorce them in terms of inheritance
and provide one or more mechanisms for hopping back and forth
(conversions and/or "views" of one as the other.) They can reuse some
pure interfaces.
I wish we had language support for pure interfaces, by the way. There's
no way to write a trait and require that it provide no implementations.
We could make "abstract trait" mean this.
Same for me. When I read about views for the first time, I was thrilled
by the feature, and wondered why it wasn't the default behaviour
for all collection iteration. I did try to use them, sometimes getting
completely unexpected results, most of the time hitting a bug along the
road. After some time, I realized that they weren't the free lunch I was
hoping for, and I found it far safer to just ignore them and
build custom solutions when the need arose.
On the other hand, I'm all for a modularized and/or lighter Scala library,
so if my vote counts, I'm for removing them.
OK, well, either that, or make whatever is needed to have them be really
useful (useful as in: "in Scala, the default pattern for fast iteration
is for { elt <- mycollection.view ... }", it's a no-brainer).
Cheers,
--
Francois ARMAND
http://fanf42.blogspot.com
http://www.normation.com
How many of these 92 view usages can't be worked around with an
iterator-like solution? That would help show to what extent view
usage is mandatory. If not even one of the view usages in the
wild couldn't be (better) coded with something else, that would raise
serious doubts about their value/cost ratio.
Thanks,
92 was a little much to verify individually. Instead I enumerated all
the uses of views in the compiler and library. As the perpetrators of
views, one could reasonably expect us to have at least one compelling
use.
src/compiler/scala/tools/nsc/interactive/Global.scala:420:
  source.content.view.drop(start).take(length).mkString+" : "+source.path+" ("+start+", "+end+
  source.content.slice(start, start + length)
src/compiler/scala/tools/nsc/interactive/tests/Tester.scala:161:
  changes.view.reverse foreach (_.insertAll())
  changes reverseMap (_.insertAll())
src/compiler/scala/tools/nsc/symtab/classfile/ClassfileParser.scala:395:
  bytesBuffer ++= in.buf.view(start + 3, start + 3 + len)
  bytesBuffer ++= in.buf.iterator.slice(start + 3, start + 3 + len)
src/library/scala/collection/IterableProxyLike.scala:99:
  override def view = self.view
src/library/scala/collection/IterableProxyLike.scala:100:
  override def view(from: Int, until: Int) = self.view(from, until)
src/library/scala/collection/parallel/ParIterableLike.scala:791:
  override def seq = self.seq.view
src/library/scala/collection/parallel/ParIterableViewLike.scala:88:
  override def seq = forcedPar.seq.view.asInstanceOf[IterableView[S, CollSeq]]
src/library/scala/collection/parallel/ParSeqLike.scala:350:
  override def seq = self.seq.view
src/library/scala/collection/parallel/ParSeqViewLike.scala:70:
  override def seq = forcedPar.seq.view.asInstanceOf[SeqView[S, CollSeq]]
Baton-passing calls.
src/library/scala/collection/SeqLike.scala:525:
  b ++= toCollection(rest).view drop replaced
  b ++= (toCollection(rest).iterator drop replaced)
src/library/scala/collection/SeqLike.scala:538:
  b ++= toCollection(rest).view.tail
  b ++= toCollection(rest).iterator.tail
src/library/scala/collection/SeqLike.scala:780:
  if (S.view.slice(m0, m1) == W.view.slice(n0, n1)) m0
  if (S.iterator.slice(m0, m1) == W.iterator.slice(n0, n1))
src/library/scala/collection/SeqProxyLike.scala:68:
  override def view = self.view
src/library/scala/collection/SeqProxyLike.scala:69:
  override def view(from: Int, until: Int) = self.view(from, until)
More batons.
src/library/scala/xml/factory/NodeFactory.scala:32:
  ch1.view.zipAll(ch2.view, null, null) forall { case (x, y) => x eq y }
  (ch1 corresponds ch2)(_ eq _)
src/compiler/scala/tools/nsc/doc/html/page/ReferenceIndex.scala:55:
  entry(groups._1, groups._2.view)
Not sure, this one might be legit.
So, somewhere in the neighborhood of zero.
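For what it's worth, the substitutions above lean on Iterator offering the same lazy slicing as a view. A quick sanity check of the pattern, using plain standard-library calls (nothing from the patch):

```scala
// A small array standing in for in.buf / source.content.
val buf = Array.tabulate(20)(identity)

// The view-based form and the iterator-based replacement produce the
// same elements; both are lazy and neither copies the untouched part.
val viaView     = buf.view.slice(3, 8).toList
val viaIterator = buf.iterator.slice(3, 8).toList
// both: List(3, 4, 5, 6, 7)
```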
Does dropping views in favour of iterators in the compiler have any
effect on compilation performance?
Cheers,
Miles
--
Miles Sabin
tel: +44 7813 944 528
gtalk: mi...@milessabin.com
skype: milessabin
g+: http://www.milessabin.com
http://twitter.com/milessabin
http://www.chuusai.com/
My guess is yes, for the better, especially if we properly account for
the occasional five-orders-of-magnitude slowdown in views. But I don't
know; one can only gather so much evidence, and I feel like I've
already shown enough to put the guy in the chair while simultaneously
being lethally injected and hanged.
https://github.com/scala/scala/commit/be49752855
A patch for views. Most relevant change: almost all view classes now list parents like

  trait Appended[B >: A] extends super.Appended[B] with Transformed[B]

instead of the former

  trait Appended[B >: A] extends Transformed[B] with super.Appended[B]
because as it was, the implementation of foreach in
TraversableViewLike#Transformed was repeatedly trumping overrides found
in e.g. IterableLike. This change was not without its own consequences,
and much of the rest of the patch is dealing with that. A more general
issue is clearly revealed here: there is no straightforward way to deal
with trait composition and overrides when some methods should prefer B
over A and some the reverse. (It's more like A through Z in this case.)
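The order-sensitivity being described boils down to Scala's trait linearization; a toy illustration (not the actual view traits):

```scala
trait Base { def label: String = "Base" }
trait Fast extends Base { override def label = "Fast" }
trait Slow extends Base { override def label = "Slow" }

// In Scala's linearization the rightmost parent is latest and wins,
// so swapping the parent order flips which implementation is inherited:
object A extends Fast with Slow // A.label == "Slow"
object B extends Slow with Fast // B.label == "Fast"
```

With a dozen such traits stacked up, as in the view hierarchy, reasoning about which override actually applies becomes correspondingly harder.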
That closes #4279, with some views being five orders of magnitude slower
than necessary. There is a test that confirms they'll stay performance
neighbors.
In the view classes (Zipped, Mapped, etc.) I attended to them with
comb and brush until they were reasonably consistent. I only use
"override" where necessary and throw in some "final" in the interests
of trying to anchor the composition outcome. I also switched the
newSliced, newZipped, etc. methods to use early init syntax since a
number have abstract vals and I found at least one bug originating with
uninitialized access.
Thanks Paul.
How about moving them to a jar that is distributed with Scala but not part of the core, like swing?
If we treat views like swing, I'm all for that. Same with parallel collections. However, the definition of "core scala" is where I gave my -1000.
I'd be all for moving them into a src/collection/views directory and supporting and releasing the library with core Scala.
Making them a completely separate project is what concerns me.
Also, I'll happily start maintaining views at the expense of other projects if it's a "they leave the standard distribution if no one takes them" deal.
Mostly I want the Traversable class and TraversableView....
would happen in the same release. If we are concerned about breaking compatibility when merging/removing Traversable, I would really prefer going the whole way in one step, not breaking (in the worst case) stuff in three separate releases.
Thanks and bye,
Simon
Scala usually breaks binary compatibility between major releases and retains it for bug fix releases.
When 2.10 is released we'll try to jumpstart the community libraries (those that opt in) so that 2.10.x versions are immediately available.
On Tuesday, 20 December 2011 at 15:37:04, Matthew Pocock wrote:
> Hi Paul,
>
...
> It is not hard to see how something performing optimization at a higher
> level can rework a chain of maps and filters, fusing the loops and
> functions, at least for some of the collection types. If we had compiler
> support for this that was reliable then there would almost certainly be no
> need for views in the first place. This would be my preferred fix. After
> that, moving views out of the core collections jar into a views jar with
> pimps would work for me.
This would be fantastic!
I thought the same when I had a brief look at the compile{} statement example. I thought: why only for the GPU, why not also on a standard architecture? Just take a code block (e.g. a "typical" collection processing chain) and instruct the compiler to work especially hard to optimize it through transformations, at the cost of a longer compile time and a reduced instruction set within such a block.
I think clever optimization of any and all Scala code is beyond reach (maybe I am wrong), but optimizing a distinct block with a reduced language model could be feasible. At the same time, this would let us control where the compiler should spend extra time to get things as fast as possible.
Just my 5 cents.
Greetings
Bernd
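The kind of rewrite being proposed above, done by hand (the compiler support itself does not exist; this just shows the transformation):

```scala
// Unfused: three traversals and two intermediate lists.
def unfused(xs: List[Int]): List[Int] =
  xs.map(_ + 1).filter(_ % 2 == 0).map(_ * 3)

// Fused: one traversal, no intermediate collections -- what the
// proposed compiler transformation would produce automatically.
def fused(xs: List[Int]): List[Int] = {
  val b = List.newBuilder[Int]
  for (x <- xs) {
    val y = x + 1
    if (y % 2 == 0) b += y * 3
  }
  b.result()
}
```

If the compiler could be trusted to do this reliably, much of the motivation for views would indeed disappear.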
I would rejoice to see them removed or at least separated from the
main collections code and downplayed. For many reasons, not the least
of which is jar size.
Martin points out that sometimes using a view will improve the
asymptotic complexity of a piece of code. That's true, but aren't
there satisfactory alternative ways to get the same speedup? (Someone
mentioned iterators.) I'd need to see a strong collection of examples
to be convinced otherwise, especially given that having views around
is so expensive for the collections implementers.
--
Seth Tisue | Northwestern University | http://tisue.net
lead developer, NetLogo: http://ccl.northwestern.edu/netlogo/
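One concrete shape of the speedup in question, and the iterator alternative, assuming nothing beyond the standard library:

```scala
var evaluated = 0
def square(n: Int) = { evaluated += 1; n * n }

// A strict map would call square a million times before find looks at
// anything; the view stops as soon as find succeeds, and the iterator
// version behaves identically.
val viaView = (1 to 1000000).view.map(square).find(_ % 2 == 0)
val viewCalls = evaluated // 2: only the squares of 1 and 2

evaluated = 0
val viaIterator = (1 to 1000000).iterator.map(square).find(_ % 2 == 0)
val iteratorCalls = evaluated // also 2
```

For pipelines that end in a short-circuiting or slicing operation, the iterator version gives the same asymptotic win without the view machinery.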
> Martin points out that sometimes using a view will improve the
> asymptotic complexity of a piece of code. That's true, but aren't
> there satisfactory alternative ways to get the same speedup? (Someone
> mentioned iterators.)
First, one could write it method-like, same as collections:

  new SeqView takeWhile (_ < 50) filter (_ % 2 == 0) dropWhile (_ < 10) map (_.toString)
I expect this could be a layer over what you have:
  class SeqView(val op: ViewOp = Id) {
    def takeWhile(f: Function1) = new SeqView(op o TakeWhile(f))
    def filter(f: Function1) = new SeqView(op o Filter(f))
    // etc
  }
I'm not sure how you'd manage the type parameters/type inference, however.
The second thing I think would make this proposal better would be to
define a method on the collection to apply the view:
  class List {
    def getView(op: ViewOp) = op(this)
    def getView(view: SeqView) = getView(view.op)
  }
Taking both together, we can write this:
  val myView = new SeqView takeWhile (_ < 50) filter (_ % 2 == 0) dropWhile (_ < 10) map (_.toString)
  (1 to 100).toList getView myView

Or even

  (1 to 100).toList getView (new SeqView takeWhile (_ < 50) filter (_ % 2 == 0) dropWhile (_ < 10) map (_.toString))
(*) According to
http://www.cis.upenn.edu/~bcpierce/papers/lenses-etapsslides.pdf, if
the composition is made entirely out of slices (take/drop variants),
it could be a well-behaved lens. Of course, that assumes a putback
(update/updated) functionality is provided. I'm guessing putback could
be made available, but it also could be made available *later*.
--
Daniel C. Sobral
I travel to the future all the time.
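Daniel's reified-ops idea above can be made concrete in a small typed sketch (all names here are invented for illustration; this is not a proposed library API):

```scala
// Each call composes another Iterator transformation; nothing runs
// until the pipeline is applied to an actual collection.
final class SeqViewOps[A, B](private val run: Iterator[A] => Iterator[B]) {
  def takeWhile(p: B => Boolean) = new SeqViewOps[A, B](run andThen (_.takeWhile(p)))
  def dropWhile(p: B => Boolean) = new SeqViewOps[A, B](run andThen (_.dropWhile(p)))
  def filter(p: B => Boolean)    = new SeqViewOps[A, B](run andThen (_.filter(p)))
  def map[C](f: B => C)          = new SeqViewOps[A, C](run andThen (_.map(f)))
  def applyTo(xs: Seq[A]): List[B] = run(xs.iterator).toList
}
object SeqViewOps {
  def id[A]: SeqViewOps[A, A] = new SeqViewOps(it => it)
}

// The pipeline is a first-class value, reusable across collections:
val myView = SeqViewOps.id[Int]
  .takeWhile(_ < 50)
  .filter(_ % 2 == 0)
  .dropWhile(_ < 10)
  .map(_.toString)
```

Because the ops live outside the collection hierarchy, none of this touches the collections' inheritance graph; the cost is that type inference needs the explicit starting type on `id`.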
While I think this is a move in the right direction, I think it should go into 2.12 (never thought I would be one of those who doesn't want to ship everything immediately :-D), because this should be, in the best case, only the tip of the iceberg.
In my opinion, we need to look at the whole collection framework and rethink the strategy we have concerning performance and the things Java 8 ships.
The only advantage of views compared to Java 8's Streams, as far as I can see, is that one can't "exhaust" them.
On the other side, even large strict (non-view) operations, which by all rights should be slower, sometimes beat views.
Additionally, it seems we can't really make sure that we don't have
dozens of missing/broken methods in views all over the place.
I mean, IF we can make views comparably fast, then that's one huge issue fewer (then only the question remains whether we should make them the default ... I think that would be the right thing, but even if we don't, we still don't mandate more boilerplate than Java). If we can't make it work, we need to consider other options:
As far as I can see, individual collection types shine when we use the individual operations they were built for, but bulk operations can almost never exploit these advantages, because what matters most there is pure iteration speed (e.g. some special data structure with a blazingly fast operation X just can't show its advantages when we spend a considerable amount of time building and rebuilding the collection after pretty much every step ... which are probably exactly the kinds of operations many special-purpose data structures are not fast at).
Maybe a "common" (e.g. one-type) way to facilitate these bulk operations, similar to Streams, would prove more beneficial, because it would allow us to really tune all the operations there, instead of having to deal with * plus the overhead of the mentioned reification of the data structure after every operation.
A completely different approach would be to keep things the way they are and add some notion of purity to the compiler, so that the compiler can avoid the overhead when it isn't observable from the outside. I'm not sure this is practical, though, considering that we want less complexity, not more, and purity is a huge, complicated topic which we probably won't be able to spec, implement, and ship in the next few releases.
In the end, if we really want to fix the remaining issues of the collection library, we have to think about what level of breakage is acceptable. I think it's possible to preserve source compatibility to a large extent, but all those people with custom collections will have to fix things manually.
Idea: maybe we can make parallel collections lazy by default (by which I mean avoiding building all the intermediate data structures mentioned above), because there aren't any guarantees regarding the order in which things happen anyway.
-- Francois ARMAND http://rudder-project.org http://www.normation.com