Can we just up the tuple size limit from 22 to 100 and be done with it?


Sean

Apr 2, 2013, 1:22:59 PM
to scala-l...@googlegroups.com
I understand arguments that you shouldn't design your classes to have that many fields, but this is the real world and the real world needs a general purpose language.  Sometimes we are dealing with data transfer objects to legacy code that we don't control.  And sometimes those data transfer objects have more than 22 fields.  Sure this allows people to create classes with too many fields, but it also allows people to express classes with too many fields in the simplest possible way.  My impression of Scala's design goals is that it was more important to be a powerful language than a nanny language.

Since the Tuple classes are static, we have to set the limit somewhere.  22 is simply too low.  100 is a nice round number that should solve the vast majority of the problems.


Raoul Duke

Apr 2, 2013, 1:24:34 PM
to scala-l...@googlegroups.com
> Since the Tuple classes are static, we have to set the limit somewhere. 22
> is simply too low. 100 is a nice round number that should solve the vast
> majority of the problems.

nah, 128.

(is there not a way to magically make them, or something like them,
not be static? macros? i dunno.)

Adam Shannon

Apr 2, 2013, 1:33:56 PM
to scala-l...@googlegroups.com
It's been fixed already. https://github.com/scala/scala/pull/2305


--
Adam Shannon
Developer
University of Northern Iowa
Junior -- Computer Science & Mathematics
http://ashannon.us

Tom Switzer

Apr 2, 2013, 1:37:20 PM
to scala-l...@googlegroups.com
That's just case classes, right? Tuples are still limited to 22.

Tom Switzer

Apr 2, 2013, 1:38:04 PM
to scala-l...@googlegroups.com
That said, I think once you get above a few things, you might as well create a case class anyways, since tuples become incomprehensible.

Adam Shannon

Apr 2, 2013, 1:39:43 PM
to scala-l...@googlegroups.com
That's what I was thinking. If you need 22+ items together, it's going to be easier to get members by their name rather than by ._71
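For example (field names invented for illustration):

case class Reading(sensorId: Int, timestamp: Long, celsius: Double)

Reading(7, 1364909000L, 21.5).celsius   // self-describing at the call site
(7, 1364909000L, 21.5)._3               // which field was ._3 again?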

Paul Phillips

Apr 2, 2013, 2:13:07 PM
to scala-l...@googlegroups.com
On Tue, Apr 2, 2013 at 10:38 AM, Tom Switzer <thomas....@gmail.com> wrote:
> That said, I think once you get above a few things, you might as well create a case class anyways, since tuples become incomprehensible.

Yes - I promise we're not going to ship a statically pre-generated Tuple71.

(But if we ever do, I insist we @specialize it.)

Jason Zaugg

Apr 2, 2013, 2:17:56 PM
to scala-l...@googlegroups.com
Currently the bytecode size of TupleN is proportional to N-squared,
due to the fact that every default parameter on the synthetic copy
method generates a "default getter" method, which itself has N type
parameters. Tuple22 is about 30k, of which 2/3 is the copy method.
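To make that concrete, here is a rough sketch of the shape (the synthetic names follow the usual copy$default$N encoding; details simplified):

class Box(val a: Int, val b: Int) {
  def copy(a: Int = this.a, b: Int = this.b): Box = new Box(a, b)
  // for each defaulted parameter, scalac also emits a hidden "default getter", roughly:
  //   def copy$default$1: Int = this.a
  //   def copy$default$2: Int = this.b
  // For TupleN, copy and each of its N default getters additionally carry all
  // N type parameters, which is where the ~N^2 signature growth comes from.
}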

Hopefully the change to unshackle case classes solves most of the
problems, but I'm sympathetic to the cause of raising this limit,
too.

We can work around that easily enough; I have a branch that doesn't
generate default getters at all for case class copy methods; or we
could block generation of that method by adding a private copy method
to large Tuples.

-jason
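A sketch of that second workaround, assuming the usual rule that scalac only synthesizes copy when the class doesn't already declare a member with that name:

case class Wide(a: Int, b: Int /* , ...many more fields... */) {
  // an explicit member named copy suppresses the synthetic one,
  // and with it all of its default getters
  private def copy(): Unit = ()
}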

Paul Phillips

Apr 2, 2013, 2:26:00 PM
to scala-l...@googlegroups.com

On Tue, Apr 2, 2013 at 11:17 AM, Jason Zaugg <jza...@gmail.com> wrote:
> Hopefully the change to unshackle case classes solves most of the
> problems, but I'm sympathetic to the cause of raising this limit, too.

We could always use a macro + Dynamic, as in this recent illustration:

scala> val d = improving.Dict.dict.safeOracle
d: improving.SafeDict = Dict(88629 words, 103040 definitions)

scala> d.inconceivable
res9: improving.Definitions = 
inconceivable
In`con*ceiv"a*ble, a. Etym: [Pref. in- not + conceivable: cf. F.inconcevable.]Defn: Not conceivable; incapable of being conceived by the mind; notexplicable by the human intellect, or by any known principles oragencies; incomprehensible; as, it is inconceivable to us how thewill acts in producing muscular motion.It is inconceivable to me that a spiritual substance should representan extended figure. Locke.-- In`con*ceiv"a*ble*ness, n.-- In`con*ceiv"a*bly, adv.The inconceivableness of a quality existing without any subject topossess it. A. Tucker.

scala> d.bippy
<console>:9: error: not found: "bippy"
              d.bippy
              ^


Chris Hodapp

Apr 2, 2013, 3:06:40 PM
to scala-l...@googlegroups.com
What about something like (extremely rough outline):

trait ~[+A, +B]

class Tuple[T <: ~[_, _]] extends Dynamic {
    val ints = new Array[Int](<count of Ints within T>)
    val doubles = new Array[Double](<count of Doubles within T>)
    ...
    val objects = new Array[AnyRef](<count of reference types>)

    def selectDynamic(name: String) =
        macro <macro to get the right value out of the right array or throw a no such member>
}

Then you have something like a Tuple[Int~Int~String~List[Int]].

Chris Hodapp

Apr 2, 2013, 3:34:29 PM
to scala-l...@googlegroups.com
Also, copy on both case classes and tuples should be able to be made constant-size with macros and applyDynamicNamed. Maybe this is what Paul is referring to?

Chris Hodapp

Apr 2, 2013, 10:19:56 PM
to scala-l...@googlegroups.com
I had a couple free minutes and implemented (compiling) sketches of most of the macros needed to do this. The only ones left would be one to make one of the things and one to get something out of the reference array and asInstanceOf it to the right type.


I will finish it when I have the chance (realistically not until at least tomorrow night).


On Tuesday, April 2, 2013 2:06:40 PM UTC-5, Chris Hodapp wrote:

Tom Switzer

Apr 2, 2013, 10:50:14 PM
to scala-l...@googlegroups.com
Cool. You can even avoid macros and use Shapeless' HLists:
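A minimal sketch of one way to write such a counter with shapeless-style HLists (illustrative; not the exact code that was linked):

import shapeless._

// counts how many elements of an HList are reference types (AnyRef)
trait AnyRefCounter[L <: HList] { def count: Int }

object AnyRefCounter {
  implicit val hnilCounter: AnyRefCounter[HNil] =
    new AnyRefCounter[HNil] { val count = 0 }

  // a reference-typed head contributes 1
  implicit def refCounter[H <: AnyRef, T <: HList](implicit rest: AnyRefCounter[T]): AnyRefCounter[H :: T] =
    new AnyRefCounter[H :: T] { val count = rest.count + 1 }

  // a value-typed (primitive) head contributes 0
  implicit def valCounter[H <: AnyVal, T <: HList](implicit rest: AnyRefCounter[T]): AnyRefCounter[H :: T] =
    new AnyRefCounter[H :: T] { val count = rest.count }

  def countRefs[L <: HList](l: L)(implicit c: AnyRefCounter[L]): Int = c.count
}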


And then using it:

scala> val xs = 1 :: "asdf" :: 2.3 :: 4 :: Vector(1, 2) :: HNil
xs: ... = ...

scala> AnyRefCounter.countRefs(xs)
res0: Int = 2

For the exact types, its a bit easier:

scala> Nat.toInt(xs.filter[Double].length)
res1: Int = 1

Paul Phillips

Apr 3, 2013, 3:13:10 AM
to scala-l...@googlegroups.com
On Tue, Apr 2, 2013 at 7:19 PM, Chris Hodapp <clho...@gmail.com> wrote:

> I will finish it when I have the chance (realistically not until at least tomorrow night).

Hey, if we're going to brute force boilerplate our way to the end, we may as well skip the macros and generate Tuple715.

I don't ever want to see code like that - I mean that literally - I will put my retinas into witness relocation. Don't tell me it's faster to get it working this way, because it isn't. Don't tell me that macros were lesson one somewhere and that they'll be covering method-level abstraction next week. Don't indulge habits of this nature: whatever you're getting out of it, it isn't worth it.

Direct transcription punctuated with minor modification is an activity better left to DNA.

Chris Hodapp

Apr 3, 2013, 5:57:37 AM
to scala-l...@googlegroups.com
If you're referring to the massive similarities in the countXImpl methods, yes. I am ashamed that I didn't think to abstract over those minor differences. Worse, I have since realized that I don't actually care how many of each category you have.

However, the goal here is NOT the same as that of just generating more tuples. Rather the goal is to have a single Tuple class that supports an arbitrary number of members, while retaining type safety.

Feel free to mock the result if I deserve it, of course. :)

Chris Hodapp

Apr 3, 2013, 8:42:59 AM
to scala-l...@googlegroups.com
Still far from 'done', but here is something looking somewhat tuple-like:



Session:
scala> import tuple._
import tuple._

scala> val t1 = Tuple(1,2,"a", "b")
t1: tuple.Tuple[tuple.~[tuple.~[tuple.~[Int(1),Int(2)],String("a")],String("b")]] = tuple.Tuple@69b95c3a

scala> t1._1
res0: Int = 1

scala> t1._2
res1: Int = 2

scala> t1._3
res2: String = a

scala> t1._4
res3: String = b

scala> val t2 = Tuple(List(1,2,3), "a", 2.3, false)
t2: tuple.Tuple[tuple.~[tuple.~[tuple.~[List[Int],String("a")],Double(2.3)],Boolean(false)]] = tuple.Tuple@3b59a690

scala> t2._1
res4: List[Int] = List(1, 2, 3)

scala> t2._2
res5: String = a

scala> t2._4
res6: Boolean = false

scala> t2._0
error: exception during macro expansion:
java.lang.IllegalArgumentException
at tuple.TupleMacros$.get(Tuple.scala:115)


scala> t2._100
error: exception during macro expansion:
java.lang.IllegalArgumentException
at tuple.TupleMacros$.get(Tuple.scala:114)


scala> val letters = Tuple('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z')
letters: tuple.Tuple[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[Char('a'),Char('b')],Char('c')],Char('d')],Char('e')],Char('f')],Char('g')],Char('h')],Char('i')],Char('j')],Char('k')],Char('l')],Char('m')],Char('n')],Char('o')],Char('p')],Char('q')],Char('r')],Char('s')],Char('t')],Char('u')],Char('v')],Char('w')],Char('x')],Char('y')],Char('z')]] = tuple.Tuple@4303418

scala> letters._1
res9: Char = a

scala> letters._5
res10: Char = e

scala> letters._26
res11: Char = z

scala> 1 to 100 toList
<console>:11: warning: postfix operator toList should be enabled
by making the implicit value language.postfixOps visible.
This can be achieved by adding the import clause 'import scala.language.postfixOps'
or by setting the compiler option -language:postfixOps.
See the Scala docs for value scala.language.postfixOps for a discussion
why the feature should be explicitly enabled.
              1 to 100 toList
                       ^
res12: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)

scala> val numbers = Tuple(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)
numbers: tuple.Tuple[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[tuple.~[...,...],...(53)],Int(54)],Int(55)],Int(56)],Int(57)],Int(58)],Int(59)],Int(60)],Int(61)],Int(62)],Int(63)],Int(64)],Int(65)],Int(66)],Int(67)],Int(68)],Int(69)],Int(70)],Int(71)],Int(72)],Int(73)],Int(74)],Int(75)],Int(76)],Int(77)],Int(78)],Int(79)],Int(80)],Int(81)],Int(82)],Int(83)],Int(84)],Int(85)],Int(86)],Int(87)],Int(88)],Int(89)],Int(90)],Int(91)],Int(92)],Int(93)],Int(94...
scala> numbers._100
res13: Int = 100

Paul Phillips

Apr 3, 2013, 10:15:29 AM
to scala-l...@googlegroups.com
A few implementation notes:

 - If you call c.abort rather than throwing an exception from the macro, you can have a regular error message instead of "error: exception during macro expansion:"

- Array[Unit] can be represented with an Int (at worst.)

- Declaring everything as vals up front means you have a high minimum memory consumption. There should be an abstract version which declares defs, which makes it possible to have lower memory consumption; in fact it would make possible moving everything into the code segment if one didn't mind a new anonymous class for every different tuple...

- Another way to pay for something in the ballpark of what you use would be a single field which holds an Array[Object], with that Array holding 1-9 arrays which in turn hold the actual values of the tuple.


Chris Hodapp

Apr 3, 2013, 10:40:59 AM
to scala-l...@googlegroups.com
On Wednesday, April 3, 2013 9:15:29 AM UTC-5, Paul Phillips wrote:
> A few implementation notes:
>
> - If you call c.abort rather than throwing an exception from the macro, you can have a regular error message instead of "error: exception during macro expansion:"

Wonderful

> - Array[Unit] can be represented with an Int (at worst.)

I actually don't think it needs to exist. If the type comes to Unit, I could just return a fresh Unit literal expression.

> - Declaring everything as vals up front means you have a high minimum memory consumption. There should be an abstract version which declares defs, which makes it possible to have lower memory consumption; in fact it would make possible moving everything into the code segment if one didn't mind a new anonymous class for every different tuple...
>
> - Another way to pay for something in the ballpark of what you use would be a single field which holds an Array[Object], with that Array holding 1-9 arrays which in turn hold the actual values of the tuple.

Yes. That should reduce memory usage, but it does add a layer of indirection. I had been thinking about what would happen if I went the other way: a few fields to prevent it from being slower than a real Tuple at small sizes, in addition to what's there now.

I guess it's down to the following queries:
1) How much time does an array-indirection add to a value lookup?
2) How much space do (theoretically a maximum of) 9 extra pointers take up? Note that I don't make the arrays to put into the vals if there are no values of that type.

Or are vals memory-costly in a way I don't understand?

Chris Hodapp

Apr 3, 2013, 10:53:51 AM
to scala-l...@googlegroups.com
I just did a little experiment and found:

scala> Array(1,2,3).asInstanceOf[Array[AnyRef]]
java.lang.ClassCastException: [I cannot be cast to [Ljava.lang.Object;

So it looks like if I went with the array-of-arrays, I would have to give up
on my ad-hoc specialization system, which defeats the purpose of having multiple arrays in the first place.

Erik Osheim

Apr 3, 2013, 11:00:09 AM
to scala-l...@googlegroups.com
On Wed, Apr 03, 2013 at 07:53:51AM -0700, Chris Hodapp wrote:
> I just did a little experiment and found:
>
> > scala> Array(1,2,3).asInstanceOf[Array[AnyRef]]
> > java.lang.ClassCastException: [I cannot be cast to [Ljava.lang.Object;
>
> So it looks like if I went with the array-of-arrays, I would have to give up
> on my ad-hoc specialization system, which defeats the purpose of having
> multiple arrays in the first place.

I'm not sure that's the right test, exactly...

scala> val as = new Array[AnyRef](2)
as: Array[AnyRef] = Array(null, null)

scala> as(0) = Array(1,2,3)

scala> as(1) = Array("foo", "bar")

scala> as(0).asInstanceOf[Array[Int]](1)
res3: Int = 2

scala> as(1).asInstanceOf[Array[AnyRef]](1)
res4: AnyRef = bar

-- Erik

Haoyi Li

Apr 3, 2013, 11:06:00 AM
to scala-l...@googlegroups.com
If I remember correctly...

- Arrays are covariant
- EXCEPT for arrays of primitives, which have to match
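For example (a quick REPL check):

val refs = Array("a", "b").asInstanceOf[Array[AnyRef]]   // ok: a String[] is an Object[]
// Array(1, 2, 3).asInstanceOf[Array[AnyRef]]            // ClassCastException: an int[] is not an Object[]
val any: AnyRef = Array(1, 2, 3)                          // ok: every array is still an AnyRef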


-Haoyi



Chris Hodapp

Apr 3, 2013, 11:08:48 AM
to scala-l...@googlegroups.com, er...@plastic-idolatry.com
You and Paul are right. It doesn't need to be an Array[AnyRef], just an AnyRef. I'm silly. It's still small time vs. small space though, right? Am I overreacting to be worried about the time of an extra array indexing sitting between the user and their tupled data? It seems like some JVM magic would improve repeated access to the same array...

Paul Phillips

Apr 3, 2013, 11:15:53 AM
to scala-l...@googlegroups.com, Erik Osheim
On Wed, Apr 3, 2013 at 8:08 AM, Chris Hodapp <clho...@gmail.com> wrote:
> You and Paul are right. It doesn't need to be an Array[AnyRef], just an AnyRef. I'm silly. It's still small time vs. small space though, right? Am I overreacting to be worried about the time of an extra array indexing sitting between the user and their tupled data? It seems like some JVM magic would improve repeated access to the same array...

If it's an AnyRef, now you've added another layer of indirection for every heterogeneous tuple, though it would benefit a completely homogeneous tuple if you took advantage of it to put an Array[Int] or whatever there when you could.

I won't try to quantify the time/space tradeoff, but I don't think it's close. If you create ten fields up front, no JVM magic can save you from spending those ten fields on every instance.



Erik Osheim

Apr 3, 2013, 11:22:58 AM
to scala-l...@googlegroups.com
On Wed, Apr 03, 2013 at 08:15:53AM -0700, Paul Phillips wrote:
> I won't try to quantify the time/space tradeoff, but I don't think it's
> close. If you create ten fields up front, no jvm magic can save you from
> spending those ten fields on every instance.

I think I agree with Paul here. If you're really concerned about the
time it takes to pull a value out of a tuple, the best advice is: don't
use a tuple at all!

And of course, if someone really wants 87 fields and 87 getter methods
they can always build their own case class and get precisely that.

That said, I guess I'd recommend benchmarking some simple
implementations, both in terms of access time but also by measuring
memory overhead. Once we have data it'll be easier to see the best path
forward.

-- Erik

Chris Hodapp

Apr 3, 2013, 11:27:31 AM
to scala-l...@googlegroups.com, Erik Osheim
Actually, didn't my experiment prove that you could not lift an Array[Int]
into the single Array[AnyRef] field (hey, it was good for something!)?

Also, the Tuple wouldn't have to be all-the-same, just all-reference, since
the single Array[AnyRef] field can hold those members directly.

Overall, you guys (I see Erik's post as well) have me pretty convinced that
the access time thing is probably not that big a deal and that keeping
allocation small is way more important. Of course, real benchmarking is
better than intuition, so I'll do them eventually.

Erik Osheim

Apr 3, 2013, 11:29:11 AM
to Chris Hodapp, scala-l...@googlegroups.com
On Wed, Apr 03, 2013 at 08:27:31AM -0700, Chris Hodapp wrote:
> Actually, didn't my experiment prove that you could not lift an Array[Int]
> into the single Array[AnyRef] field (hey, it was good for something!)?

Paul was saying that if you have a field:

var a: AnyRef = null

Then you can put whatever you want into it:

a = Array(1, 2, 3)
a = Array("foo", "bar")

So in that case you would just say a.asInstanceOf[Array[Int]](1).

> Overall, you guys (I see Erik's post as well) have me pretty convinced that
> the access time thing is probably not that big a deal and that keeping
> allocation small is way more important. Of course, real benchmarking is
> better than intuition, so I'll do them eventually.

Sounds good. Interested to see the result.

-- Erik

Paul Phillips

Apr 3, 2013, 11:30:01 AM
to scala-l...@googlegroups.com, Erik Osheim
On Wed, Apr 3, 2013 at 8:27 AM, Chris Hodapp <clho...@gmail.com> wrote:
> Actually, didn't my experiment prove that you could not lift an Array[Int]
> into the single Array[AnyRef] field (hey, it was good for something!)?

No; but that isn't what I said anyway, I said a single AnyRef field.

If you have a single AnyRef field, and only Ints, you can store an Array[Int] in the field. If you have multiple types, you can store an Array[AnyRef], which holds Array[Int], Array[Float], etc.

If you have a single Array[AnyRef] field, then you will have one fewer indirection in the heterogeneous case, but one additional one in the homogeneous case. In the homogeneous case you'd have an Array[AnyRef] of size one, holding an Array[Int] at index 0.
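To make the two layouts concrete (the names here are illustrative, not from any existing branch):

// a single AnyRef field; what it holds depends on the contents of the tuple
final class CompactTuple(private val repr: AnyRef)

// homogeneous tuple of Ints: repr is the primitive array itself (one indirection)
val allInts = new CompactTuple(Array(1, 2, 3))

// heterogeneous tuple: repr is an Array[AnyRef] holding one array per kind (two indirections)
val mixed = new CompactTuple(Array[AnyRef](Array(1, 42), Array(2.5), Array[AnyRef]("a", Vector(1))))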


John Nilsson

Apr 3, 2013, 5:52:51 PM
to scala-l...@googlegroups.com
"if one didn't mind a new anonymous class for every different tuple"

I think this is the direction this discussion should be headed. Would it not be better to simply have a macro, or the compiler, generate anonymous types with real names for the attributes for the cases where a tuple would be used?

For reference, here is how C# solves a similar problem:
            var a = 1;
            var b = 2;
            var tuple1 = new { a, b };
            var tuple2 = new { a = 1, b = 2 };
Not entirely unlike an anonymous structural type in Scala:
      val tuple2 = new { val a = 1; val b = 2 }
I do like the shorthand; it makes for almost, but not quite, SQL-like LINQ (the main use case).
Unlike a structural type, they are real "anonymous" types, so no reflection or such is involved.


However, Scala could do much better:
* Value semantics: tuple1 should equal tuple2
* No "new" keyword
* Pattern match to extract values
* A step towards unifying parameter lists and tuples?

How about if this:
val a = 1
val b = 2
val tuple1 = (a,b)
val tuple2 = ( a = 1, b = 2)

Would be interpreted as if something like this was written?
      case class Tuple2$a_Int_b_Int(a: Int, b: Int) extends Tuple2[Int, Int](a, b)
      val a = 1;
      val b = 2;
      val tuple1 = Tuple2$a_Int_b_Int(a,b)
      val tuple2 = Tuple2$a_Int_b_Int(a = 1,b = 2)

BR,
John




Rex Kerr

Apr 3, 2013, 6:01:56 PM
to scala-l...@googlegroups.com
On Tue, Apr 2, 2013 at 1:22 PM, Sean <seans...@gmail.com> wrote:
> I understand arguments that you shouldn't design your classes to have that many fields, but this is the real world and the real world needs a general purpose language.  Sometimes we are dealing with data transfer objects to legacy code that we don't control.  And sometimes those data transfer objects have more than 22 fields.  Sure this allows people to create classes with too many fields, but it also allows people to express classes with too many fields in the simplest possible way.  My impression of Scala's design goals is that it was more important to be a powerful language than a nanny language.
>
> Since the Tuple classes are static, we have to set the limit somewhere.  22 is simply too low.  100 is a nice round number that should solve the vast majority of the problems.

What is the use case where you need 60 items, but not 100, and you can't nest tuples, and you can't use case classes?

Personally, I think 22 is simply too high.  The use cases solved in the high teens seem few and far between, and if you get into that absurdly unreadable range, you should probably be using different data structures.

While I'm all for on-demand compilation of library code in response to demands of users' source code, and would be happy if such a feature could be deployed for tuples if it in fact existed, I don't really see why one ought to do this for tuples alone (and certainly not when they need to be compiled in advance).

  --Rex

Sean

Apr 3, 2013, 8:13:00 PM
to scala-l...@googlegroups.com
> What is the use case where you need 60 items, but not 100, and you can't nest tuples, and you can't use case classes?

Sounds like you are describing something different, as that is not the use case I described.  Like the tuple limitation, the case class limitation is also 22 as of Scala 2.10.1, so my original use case applies.  Are you suggesting we should have a different limitation for use cases than we do for tuples?  That might make functions like .tupled behave in surprising ways, for example if you did something like: (MyCaseClass.apply _).tupled

I was hoping for an implementation that would shield the programmer from limitations not related to their application's need.   No matter what the actual limit is, someone can always argue why not one more or why not one less (raould, I concede that 128 is more of a round number than 100 in base 2).  My point for setting it higher was to make it less likely that anyone would notice the limitation.  Jason Zaugg pointed out that there are good reasons for the limitation, but those reasons are based on the Scala compiler implementation details, not the needs of an application written in Scala.   From the conversation between Chris Hodapp and Paul Phillips, it looks like there may be a way to not have a limitation at all, so I am hopeful that a solution along these lines will prove practical.

Regarding 22 being too high, I am not convinced it is a good idea for a general purpose programming language to make that decision on the behalf of every possible application that might ever be written in that language.  I would rather leave the domain modelling decisions to the individual application developers, and have the programming language take a hands off approach, focusing on being useful while staying out of the way.

Sean

Apr 3, 2013, 8:46:19 PM
to scala-l...@googlegroups.com
Typo, meant to say "case classes", not "use cases"


On Wednesday, April 3, 2013 5:13:00 PM UTC-7, Sean wrote:
> > What is the use case where you need 60 items, but not 100, and you can't nest tuples, and you can't use case classes?
>
> Sounds like you are describing something different, as that is not the use case I described.  Like the tuple limitation, the case class limitation is also 22 as of Scala 2.10.1, so my original use case applies.  Are you suggesting we should have a different limitation for case classes than we do for tuples?  That might make functions like .tupled behave in surprising ways, for example if you did something like: (MyCaseClass.apply _).tupled

Rex Kerr

Apr 3, 2013, 9:11:28 PM
to scala-l...@googlegroups.com
On Wed, Apr 3, 2013 at 8:13 PM, Sean <seans...@gmail.com> wrote:
> > What is the use case where you need 60 items, but not 100, and you can't nest tuples, and you can't use case classes?
>
> Sounds like you are describing something different, as that is not the use case I described.  Like the tuple limitation, the case class limitation is also 22 as of Scala 2.10.1, so my original use case applies.  Are you suggesting we should have a different limitation for case classes than we do for tuples?  That might make functions like .tupled behave in surprising ways, for example if you did something like: (MyCaseClass.apply _).tupled

There's already a pull request to that effect (linked by Adam Shannon, 3rd message in the thread).  And I think that's perfectly okay--if you want to do something extraordinary, you won't necessarily have full support.  For example, you already miss out on specialization (for speed) if you use tuples (but not case classes) of size >3.

The method/function duality already has this problem, and there just doesn't seem to be that much friction.  For example:

scala> class C
defined class C

scala> def foo(a:C, b:C, c:C, d:C, e:C, f:C, g:C, h:C, i:C, j:C, k:C, l:C, m:C,
n:C, o:C, p:C, q:C, r:C, s:C, t:C, u:C, v:C, w:C, x:C, y:C, z:C) = "bar"
foo: (a: C, b: C, c: C, d: C, e: C, f: C, g: C, h: C, i: C, j: C, k: C, l: C, m: C,
n: C, o: C, p: C, q: C, r: C, s: C, t: C, u: C, v: C, w: C, x: C, y: C, z: C)String

scala> foo _
<console>:23: error: missing arguments for method foo;
follow this method with `_' if you want to treat it as a partially applied function
              foo _
              ^

scala> def baz(a:C) = "bippy"
baz: (a: C)String

scala> baz _
res78: C => String = <function1>

Surprising?  Yes, but what are you doing with a twenty-six argument method?
 

> I was hoping for an implementation that would shield the programmer from limitations not related to their application's need.

I agree, and this is completely consistent with what I said in my previous message.
 
> No matter what the actual limit is, someone can always argue why not one more or why not one less (raould, I concede that 128 is more of a round number than 100 in base 2).

Well, you could say "255, because you can't have more in the JVM anyway".  Or you could complain that Scala should shield you from JVM limitations and start packaging parameters into tuples behind the scenes.
 
> From the conversation between Chris Hodapp and Paul Phillips, it looks like there may be a way to not have a limitation at all, so I am hopeful that a solution along these lines will prove practical.

So far I am not particularly keen on the proposal for a core library feature since it's slow and bulky compared to normal tuples, and makes a mess of the types (though that could be improved with compiler support, possibly).  I'd be inclined to just use HLists instead--or HArray which Jesper Nordenberg blogged about 3 1/2 years ago:
  http://jnordenberg.blogspot.com/2009/09/type-lists-and-heterogeneously-typed.html

But I wouldn't try to pretend these are tuples.  They're useful, and they're different.
 

> Regarding 22 being too high, I am not convinced it is a good idea for a general purpose programming language to make that decision on the behalf of every possible application that might ever be written in that language.  I would rather leave the domain modelling decisions to the individual application developers, and have the programming language take a hands off approach, focusing on being useful while staying out of the way.

Very often the compiler developer knows what the useful limitations are better than the application designer does.  Also, compiler designers should encourage good behavior to the extent that it doesn't get in the way of necessary behavior.  Anyone making a flat 84-argument case class has the option of nesting seven 12-argument case classes, so it's not like the lack of support is going to prevent the code from working.  It'll just prevent the structure from being flat, which arguably was a bad idea anyway (but can be expeditious if someone else already decided to deliver it in a flat form).
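For instance, a flat record can usually be regrouped along its natural seams (the field groupings here are invented):

case class Name(first: String, last: String)
case class Address(street: String, city: String, zip: String)
case class Contact(email: String, phone: String)
// instead of one flat case class with all of these fields side by side:
case class Customer(name: Name, address: Address, contact: Contact)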

  --Rex
 


Sean

Apr 3, 2013, 9:58:49 PM
to scala-l...@googlegroups.com
> Very often the compiler developer knows what the useful limitations are better than the application designer does.  Also, compiler designers should encourage good behavior to the extent that it doesn't get in the way of necessary behavior.  Anyone making a flat 84-argument case class has the option of nesting seven 12-argument case classes, so it's not like the lack of support is going to prevent the code from working.  It'll just prevent the structure from being flat, which arguably was a bad idea anyway (but can be expeditious if someone else already decided to deliver it in a flat form)

Yep, the real issue was with case classes.  I actually didn't need tuples, but I did need each field to have a name and a type (so couldn't use a map).  The problem is when you have a flat data structure you don't control, and need a data transfer object to transfer that information, you don't want to have to parcel it out into 22 field blocks because of a language limitation.   The data transfer object should match the data.  The domain objects are within your control, so there it is a matter of putting a class in charge of making sense of the translation between the data transfer objects and the domain objects.

The HList is pretty interesting, I will keep that in mind if I ever need a really long tuple.
Seems like the options for a huge flat data transfer object are:
* Map if you need the names and all the fields have the same type
* Tuple if you don't need the names, have different types, and have 22 or fewer fields
* HList if you don't need the names, have different types, and have more than 22 fields (or don't like tuples)
* Case class if you need both the names and the types and have 22 or fewer fields
* Wait for "SI-7296 Lifting the limit on case class arity" if you need a case class with more than 22 fields

Looks like I will be counting the days until I get to use Scala 2.11