--
Daniel C. Sobral
I travel to the future all the time.
However, I think "lazy def" would make
perfect sense and not create a new keyword.
I've been thinking about submitting a new SIP, but I want to get a sanity check and some help first... I would like to be able to memoize the results of referentially transparent functions / methods. I can do this manually by implementing some caching strategy encapsulated in, say, a MemoizedFunction[-T, +R]... But that feels unsatisfying; I want language support.
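To make the manual approach concrete, here is a minimal sketch of what such a MemoizedFunction[-T, +R] wrapper might look like (the class name comes from the message above; everything else is an assumption, and it is not thread-safe):

```scala
import scala.collection.mutable

// Hypothetical sketch of the manual approach: a memoizing wrapper with
// the caching strategy baked in. Not thread-safe.
class MemoizedFunction[-T, +R](f: T => R) extends (T => R) {
  // Keyed on Any to stay out of the way of the variance annotations.
  private val cache = mutable.Map.empty[Any, Any]
  def apply(x: T): R =
    cache.getOrElseUpdate(x, f(x)).asInstanceOf[R]
}
```

Usage would then be something like `val memoSquare = new MemoizedFunction((n: Int) => n * n)`, after which repeated calls with the same argument hit the cache.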
I think proving something in Scala is side-effect free would easily
get its own SIP, much bigger than memoization.
Memoization is not something for which the one and only true solution exists, meaning it cannot generically be done right once and for all. Based on this observation, and given that you can easily memoize functions (albeit not methods directly) using existing infrastructure, I don't see the point of adding "language support". I don't know enough about the proposed macros, but couldn't they possibly abstract over arity?
Regards,
Roland
On Tuesday, 31. January 2012 at 09:09, Eugene Burmako wrote:
If macro annotations make it into Scala, it'd be possible to write "@memo def foo = bar" (or even "@pure def foo = bar", since pure here is a TypeName, not the TermName pure from Scalaz). Then in the macro you'll be able to: 1) rewrite the body of foo to your liking, 2) perform static analysis of your liking. The only restriction is that you'll only have ASTs for the function itself and for its brothers and sisters from the same compiler run. Analyzing ASTs of program entities from already-compiled libraries will require decompilation or having annotations on them (annotations can be picked up reflectively; ASTs cannot).
In Scala, you'd need to write a wrapper for every single one of the function traits.
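To illustrate the arity problem, here is a hedged sketch of what those hand-written wrappers look like (helper names are invented): one per FunctionN, with no way to abstract over them short of macros.

```scala
import scala.collection.mutable

// Hypothetical helpers illustrating the point: without macros, each
// FunctionN needs its own hand-written memoizing wrapper, one per arity.
object Memo {
  def memo1[A, R](f: A => R): A => R = {
    val cache = mutable.Map.empty[A, R]
    a => cache.getOrElseUpdate(a, f(a))
  }

  def memo2[A, B, R](f: (A, B) => R): (A, B) => R = {
    val cache = mutable.Map.empty[(A, B), R]
    (a, b) => cache.getOrElseUpdate((a, b), f(a, b))
  }
  // ...and so on, up to Function22.
}
```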
-------- Original Message --------
> Date: Mon, 30 Jan 2012 23:24:53 -0800 (PST)
> From: rkuhn <goo...@rkuhn.info>
> To: scala-...@googlegroups.com
> Subject: [scala-debate] memoized functions / methods
>> in scala, you'd need to write a wrapper for any single one of the function-traits
not necessarily, if you use macros.
This is actually possible right now in the wip tree, isn't it?
>
> On Jan 31, 11:44 am, Jason Zaugg <jza...@gmail.com> wrote:
>> On Tue, Jan 31, 2012 at 11:33 AM, Eugene Burmako <eugene.burm...@epfl.ch>wrote:
>>
>> > >>in scala, you'd need to write a wrapper for any single one of the
>> > function-traits
>> > not necessarily, if you use macros.
>>
>> I was just wondering how macros could be used to conveniently implement
>> single abstract method (SAM) interfaces.
>>
>> Will it be possible to write a macro that converts:
>>
>> trait Compute { def doit(in: Double): Double }
>> sam[Compute]((_: Double) * 2)
>>
>> To:
>>
>> new Compute { def doit(in: Double) = in * 2 }
>>
>> The motivation is two-fold, firstly to avoid writing the wrapper type
>> manually for each SAM interface, and secondly to achieve better performance
>> by avoiding boxing and the megamorphic call to FunctionN#apply.
--
But it doesn't need to, does it? There are many ways to do a "lazy
val", but I don't see anyone out there implementing alternatives, or
complaining about the one Scala chose.
If Scala had built-in memoization, that would not prevent people with
specific needs from implementing their own solution. Rather, it would
simply make a lot of code simpler, and let people who have never
heard of it, or who don't want to spend the effort implementing one,
use and benefit from it.
I am.
https://issues.scala-lang.org/browse/SUGGEST-11
--
Johannes
-----------------------------------------------
Johannes Rudolph
http://virtual-void.net
You are the exception that proves the rule! ;-)
By the way, I just read your suggestion, and I'd like to make a
comment about your last remark, about the JVM optimizing the
implementation. Recent benchmarks with code that uses lazy vals
indicate that, when encountering them, the JVM simply gives up on
optimizing. It's an order-of-magnitude switch: replace a val with a
lazy val and see your performance decrease tenfold.
So the claim that the JVM will optimize it fairly well cannot be taken
for granted.
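For context, here is a simplified sketch of roughly what the compiler generates for a `lazy val` (details vary by compiler version; this is an approximation, not the exact emitted code): a guard flag plus double-checked locking. The volatile read and potential synchronized block on every access are part of why the JIT has a harder time here than with a plain val.

```scala
// Simplified, hand-written approximation of the lazy val encoding.
class LazyDesugared {
  @volatile private var initialized = false
  private var cached: Int = 0
  private def compute(): Int = 42 // stands in for the real initializer
  def x: Int = {
    if (!initialized) this.synchronized {
      // Second check: another thread may have initialized meanwhile.
      if (!initialized) { cached = compute(); initialized = true }
    }
    cached
  }
}
```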
> It's probably a good idea to allow interchangeable caching strategies.
> For example, assume users might want to use a cache tuned for their
> application (such as ehcache). I figure a 'Cache' typeclass with a
> sensible default 'implicit val' provided in predef would be the way to go.
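A hedged sketch of that suggestion might look like the following (all names are invented; the default instance is an unbounded in-memory map, and a real application could supply, say, an ehcache-backed instance instead):

```scala
import scala.collection.mutable

// Hypothetical Cache typeclass with a sensible default instance.
trait Cache[K, V] {
  def getOrElseUpdate(key: K, compute: => V): V
}

object Cache {
  // Default: an unbounded in-memory map. Not thread-safe.
  implicit def mapCache[K, V]: Cache[K, V] = new Cache[K, V] {
    private val store = mutable.Map.empty[K, V]
    def getOrElseUpdate(key: K, compute: => V): V =
      store.getOrElseUpdate(key, compute)
  }
}

object Memoizer {
  // Memoize a function using whichever Cache instance is in scope.
  def memoize[K, V](f: K => V)(implicit cache: Cache[K, V]): K => V =
    k => cache.getOrElseUpdate(k, f(k))
}
```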
Cheers,
dr
For what it's worth, I would prefer 'deterministic' (borrowed from various SQL dialects). That doesn't describe the fact that something is actually memoized, but indicates that it COULD be memoized if the implementation decides to do so.
That opens up the freedom for the implementation to drop memoization at the cost of performance if the need arises (runtime architecture, memory consumption, solution space).
The actual memoization and all of its possible behaviour could be delegated to a "global" memoization handler with one instance per thread (I don't assume that sharing caches between multiple threads in a concurrent setup will perform well, but this could be customizable behaviour as well).
Global because this opens up the possibility to implement cache balancing mechanisms over several memoized functions (e.g. a global LRU dropout scheme or something balancing LRU and cost of calculation). From my experience it's not satisfactory to throw just a caching Map at a function when the solution space is non-trivial.
An application could replace the default handler (which could, for example, simply do nothing, so that all memoization interception code would silently be erased) with one that implements exactly the memoization behaviour needed for that specific application.
The hard part is catching nondeterministic operations in the function body as well as possible. The first step would simply be to delegate that completely to the developer; 'deterministic' would then be a mere tag, without guarantees or enforced behavioural restrictions.
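One way to sketch the replaceable-handler idea (every name here is invented for illustration): a global handler slot whose default performs no memoization at all, plus an example replacement that keeps one cache per thread, keyed by function id and argument, so caches are never shared across threads.

```scala
import scala.collection.mutable

// Hypothetical global memoization handler, as described above.
trait MemoHandler {
  def intercept[K, V](fnId: String, key: K)(compute: => V): V
}

object MemoHandler {
  // Default handler: no memoization, just evaluate the body.
  @volatile var current: MemoHandler = new MemoHandler {
    def intercept[K, V](fnId: String, key: K)(compute: => V): V = compute
  }
}

// Example replacement: one cache per thread, keyed by (function id, arg).
class ThreadLocalMemoHandler extends MemoHandler {
  private val caches = new ThreadLocal[mutable.Map[(String, Any), Any]] {
    override def initialValue() = mutable.Map.empty[(String, Any), Any]
  }
  def intercept[K, V](fnId: String, key: K)(compute: => V): V =
    caches.get.getOrElseUpdate((fnId, key), compute).asInstanceOf[V]
}
```

An eviction or cache-balancing policy (LRU, cost-weighted, and so on) would then live inside the installed handler, invisible to the memoized functions themselves.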
Just my 5 cents
Greetings Bernd
Memoization can be useful even without the function being "pure". It
can become tricky if you decide to use it and are a tricky programmer,
but it can actually turn into a feature. One use-case I thought of:
you run an online app which just ranks the distant users as they click
on some button in your webpage. Let's assume you have users A, B, C and
so on.
var i = 0
memo def rank(u: User) = {i = i + 1; i}
The result is that A, B, and C can click as many times as they want;
their ranking won't be altered after their first click.
Let's assume they clicked in the order (A, C, B), represented by a
collection, users = (A, C, B). We could have done:
var i = 0
val rank: Map[User, Int] = users.map(u => (u, { i = i + 1; i })).toMap // or
some cleaner way without a var, but let me make my point, please
But we don't have 'users'. The order of the elements will only be
available at runtime, and only after an undetermined time. Memoization
allows for a progressive treatment of this "to-be collection", possibly
using side-effects, with fairly easy-to-read code.
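As a runnable stand-in for the hypothetical `memo def`, the same rank example can be expressed with a plain mutable map (the User case class is an assumed stand-in definition):

```scala
import scala.collection.mutable

case class User(name: String)

object Ranker {
  private var i = 0
  private val memo = mutable.Map.empty[User, Int]
  // Each user keeps the rank assigned on their first click; later
  // clicks hit the cache and leave the ranking untouched.
  def rank(u: User): Int = memo.getOrElseUpdate(u, { i += 1; i })
}
```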
2012/2/1, Bernd Johannes <bjoh...@bacon.de>:
--
*Alex REPAIN*
Computer Science Engineer
Scala-enthusiast