Dependency Injection in Lift

Chris Lewis

Aug 30, 2009, 11:21:53 AM
to lif...@googlegroups.com
I like the Lift framework. It has its rough edges, but it's a great way
to get into web app development using scala. It borrows many good ideas
from other frameworks, most notably its convention over configuration
structure (rails) and its scriptless view layer (wicket).

One thing I'm not a big fan of is its baked-in database layer, the
Mapper (now in flux and being reborn as Record), and so I was pleased to
find the JPA archetype in the 1.1 tree. Using this archetype, you get a
barebones but functioning lift app using pure JPA. This is a great
start, but when I poked around the snippets I saw two things that
troubled me:

1. The underlying entity manager API leaks directly into what would be the service layer API: a single object exposed as Model.
2. The snippet code is hardwired to Model, using it directly as a global DAO.

This archetype is still in development, and it very well may change. It carries an experimental nature: it shows you how it can be done, but probably not how it should be done.

However, it highlighted an issue I have with Lift, one that the boring
enterprise crowd has solved: dependency injection.

I have an admittedly specific idea in mind for what I want to implement
in my would-be Lift app: I want to be able to declare a few fields and
annotate them so that a layer above will provide me with acceptable
instances. Yeah, I want to inject DAOs in the oh-so-familiar
Guice/Spring/T5 IoC way. I like this partially because it's familiar,
but also because it provides me with loosely coupled code.

There's been some good discussion on the subject of implementing
dependency injection in Scala using mere language constructs. I dove
into this subject, starting with chapter 27 of
[http://www.artima.com/shop/programming_in_scala]: "Modular Programming
Using Objects." It's a good read, and I recommend the book. After that I
found my way to some relevant posts in the blogs of Debasish Ghosh and
Jonas Boner, respectively:

http://debasishg.blogspot.com/2008/02/scala-to-di-or-not-to-di.html
http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di.html

Very cool indeed, but I've slightly digressed. What I want to explore is
how to loosely couple the persistence implementation (be it JPA, JDO, or
a baked in model) with the accessing of persistent objects. I don't see
how the aforementioned technique (the "cake" pattern) would help in the
case of lift snippets, because we don't have any kind of hooks where we
can provide configuration of snippets (at least, not that I know of).
This is exactly the issue that DI solves.

So what are the thoughts of the lift-power users? Is there a way to get
this in lift, or would you say that I am doing it wrong?

sincerely,
chris

marius d.

Aug 30, 2009, 11:46:27 AM
to Lift
Most of Lift's DI is currently done using PartialFunction-s and
function lists that people can set in Boot, or for snippets in the case of
binding functions using SHtml helpers, etc.
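
A minimal sketch of that Boot-configured function style, with hypothetical names (AppRules and userFinder are not Lift API): the application always calls through a function held in a var, and Boot decides what that function really is.

object AppRules {
  // swappable behaviour; this default is just a stub
  var userFinder: String => Option[String] = name => Some(name)
}

class Boot {
  def boot(): Unit = {
    // point the function at the real implementation (or a mock in a test build)
    AppRules.userFinder = name => lookupInDatabase(name)
  }
  private def lookupInDatabase(name: String): Option[String] = None // placeholder
}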

Personally I'm not at all a fan of Pojo/Poji DI by annotations,
especially in the Scala realm where there are other artifacts such as
function composition, monads, mixin composition, higher-order
functions, etc. The other problem with annotations is that we can't
currently build annotations in Scala that are visible at runtime, so we'd
probably have to code them in Java or use some existing Java
annotations ... but this already smells hacky IMHO.

If the enterprise folks solve one problem with DI by annotation, it doesn't
mean that this fits in all contexts. Loose coupling of persistence can
be achieved in many ways:

1. Implement your own persistence semantic on top of Record
2. Implement your own traits hence your own abstractions
3. etc

What we've learned with Lift is that it is OK to give persistence
objects an understanding of rendering. Having dumb objects that carry
only data and rely on layers that do different jobs (render,
persist) is IMO not a very good design approach.

Having snippets invoke the persistence layer is OK; in fact it is
natural for applications. Of course, do it with a proper level of persistence
abstraction IF there is a chance the application will use a
different persistence mechanism than, say, JDBC. But many applications
don't really need such rigorous decoupling, so using Mapper/Record
from snippets makes a lot of sense.

Br's,
Marius

Chris Lewis

Aug 30, 2009, 2:03:47 PM
to lif...@googlegroups.com
I am specifically talking about decoupling my web logic, ie, event
handlers for forms in lift snippets, from the persistence layer. As
currently implemented, snippets know exactly what persistence mechanism
is in use because there is no intermediary API. If I'm using Mapper, my
snippets must use the Mapper api. If JPA, the global EM wrapper "Model."
The same, I imagine, holds true for the Record api. This makes the
persistence layer a Leaky Abstraction
(http://en.wikipedia.org/wiki/Leaky_abstraction), and I want to avoid that.

> Most of Lift's DI is currently done using PartialFunction-s and
> function lists that people can set in Boot, or for snippets in the case of
> binding functions using SHtml helpers, etc.

Ok, but how does that help me decouple my web logic from the persistence
details?

> What we've learned with Lift is that it is OK to give persistence
> objects an understanding of rendering. Having dumb objects that carry
> only data and rely on layers that do different jobs (render,
> persist) is IMO not a very good design approach.

I disagree. An entity, like "Author", is nothing more than an expression
of a real-world concept modeled in code. It should know about itself,
its direct constituents (like a "Book" collection), anything else that
defines its own semantics, and nothing more. How it is stored is none of
its business.


Don't misunderstand me - I accept that I may be missing something. We
agree that the concept of DI is valuable because it helps us keep
abstractions loosely coupled. I don't see the problem with annotations,
but I am not at all married to them.

You point at partial functions and traits to implement abstractions over
the persistence layer, but what is missing is how to apply that to
snippets. Yes, I could abstract the layer however I want, but my
snippets will still be required to get at the layer by calling it
directly, instead of having it provided. Can you share some input on
that part?

Thanks for the discussion,

chris

Jeppe Nejsum Madsen

Aug 30, 2009, 3:34:47 PM
to lif...@googlegroups.com
Chris Lewis <burning...@gmail.com> writes:

> I am specifically talking about decoupling my web logic, ie, event
> handlers for forms in lift snippets, from the persistence layer. As
> currently implemented, snippets know exactly what persistence mechanism
> is in use because there is no intermediary API.

Chris, I'm sharing the same concerns as you about the decoupling. For
now, I've just accepted it to get started with Lift.

But now that our app starts to grow, I think we'll need to find a
good solution for this in order to

1) Maintain a good test suite (I'm a strong believer in TDD and
automated testing in general. I don't think that having type safety and
FP makes tests obsolete).

2) Loosely couple the code to make it maintainable over time


One of my big issues right now is how to test snippets that access the
persistence/business layer. This is trivial if snippets have some kind of
DI, as you could just inject mock objects instead of the real
thing. Alas, I haven't found a good solution yet. I do think that Scala
provides some language support for this (ie. the articles you linked to)
and I would like to pursue this first, before using more heavyweight
solutions such as Spring/Guice etc.

/Jeppe

Chris Lewis

Aug 30, 2009, 4:58:42 PM
to lif...@googlegroups.com
One option might be implicit parameters, but it doesn't seem as clean
(could be a knee jerk). I tried defining an implicit param on the form
handler, but then lift couldn't find the mapped handler. Doing this I
believe changes the function signature, and so the reflective call
doesn't see it.

However, you can define a method on your snippet that takes an implicit.
Consider a simple snippet:


import scala.xml.NodeSeq
import net.liftweb.http.SHtml
import net.liftweb.http.S._        // for the ?(...) localization helper
import net.liftweb.util.Helpers._  // for bind and the "name" -> ... binding syntax

trait UserService {
  def findByUserName(userName: String): String
}

object Config {
  // the implicit value the compiler will feed to userService below
  implicit val us = new UserService {
    def findByUserName(userName: String) = userName
  }
}

import Config._

class MySnippet {

  def userService(implicit us: UserService) = us

  def login(xhtml: NodeSeq): NodeSeq = {
    var userName = ""
    var password = ""

    def doLogin() = {
      println(userName + "; " + userService.findByUserName(userName))
    }

    bind("user", xhtml,
      "userName" -> SHtml.text(userName, userName = _),
      "password" -> SHtml.password(password, password = _),
      "submit" -> SHtml.submit(?("Save"), doLogin _)
    )
  }
}


Notice the part in the doLogin closure:

userService.findByUserName(userName)

Because of the uniform access principle, we can treat userService, a
method that takes a single (implicit) argument and returns a UserService (a
trait), as if it were a field. Also see how the userService method receives an
implicit parameter. Because we define an object (Config) that provides an
implicit value of that type, and we import that value, the compiler can
provide it implicitly.

One thing about this method is that we have to have a satisfying
implicit value in scope. In a unit test we could easily do it on the
fly, but for normal execution I'm not sure where you can plug something in.
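
For example, a test could bring its own implicit into scope on the fly (a hypothetical sketch reusing the UserService trait above; note that Config._ must not be imported in the test file, or the two implicits would be ambiguous):

object LoginTestSketch {
  def main(args: Array[String]): Unit = {
    // a hand-rolled mock standing in for the real service
    implicit val mockUsers: UserService = new UserService {
      def findByUserName(userName: String) = "mock-" + userName
    }
    val snippet = new MySnippet
    // the implicit parameter is resolved at this call site, so the mock wins here
    println(snippet.userService.findByUserName("chris")) // prints mock-chris
  }
}

Calls made inside the snippet itself (e.g. in doLogin) were already bound to Config.us when MySnippet was compiled, so this only swaps the service for calls the test makes directly.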

I'd still love to hear more thoughts, and whether this approach is at all
usable.

sincerely,
chris

marius d.

Aug 30, 2009, 5:01:57 PM
to Lift


On Aug 30, 9:03 pm, Chris Lewis <burningodzi...@gmail.com> wrote:
> I am specifically talking about decoupling my web logic, ie, event
> handlers for forms in lift snippets, from the persistence layer. As
> currently implemented, snippets know exactly what persistence mechanism
> is in use because there is no intermediary API. If I'm using Mapper, my
> snippets must use the Mapper api. If JPA, the global EM wrapper "Model."
> The same, I imagine, holds true for the Record api.

Why do you say this holds true for Record? ... Record is not bound
to any persistence technology. If you are concerned about Mapper,
that means to me that you want a complete abstraction such that you can
replace JDBC with something totally different. OK, but what stops you
from invoking Mapper from a layer abstracted by application-specific
traits?
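
Something like this sketch, with hypothetical names (the Mapper-backed variant is only indicated in a comment):

case class AppUser(userName: String, email: String)

trait UserRepository {
  def findByUserName(userName: String): Option[AppUser]
}

// an in-memory implementation, handy for tests
class InMemoryUserRepository(users: Map[String, AppUser]) extends UserRepository {
  def findByUserName(userName: String) = users.get(userName)
}

// A Mapper-backed implementation would live beside it and delegate to something
// like User.find(By(User.userName, userName)), keeping the Mapper API out of snippets.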

> This makes the
> persistence layer a Leaky Abstraction
> (http://en.wikipedia.org/wiki/Leaky_abstraction), and I want to avoid that.
>
>  > Most of Lift's DI is currently done using PartialFunction-s and
>  > function lists that people can set in Boot, or for snippets in the case of
>  > binding functions using SHtml helpers, etc.
>
> Ok, but how does that help me decouple my web logic from the persistence
> details?

The statement was about Lift's DI beyond the context of persistence.
If you want your snippets to not know about Mapper, abstract the Mapper
work with your own traits ... you could use a Factory pattern or something
similar.

>
>  > What we've learned with Lift is that it is OK to give persistence
>  > objects an understanding of rendering. Having dumb objects that carry
>  > only data and rely on layers that do different jobs (render,
>  > persist) is IMO not a very good design approach.
>
> I disagree. An entity, like "Author", is nothing more than an expression
> of a real-world concept modeled in code. It should know about itself,
> its direct constituents (like a "Book" collection), anything else that
> defines its own semantics, and nothing more. How it is stored is none of
> its business.

I didn't quite expect that you would :). We found Lift's approach to
be quite productive in real life apps.

>
> Don't misunderstand me - I accept that I may be missing something. We
> agree that the concept of DI is valuable because it helps us keep
> abstractions loosely coupled. I don't see the problem with annotations,
> but I am not at all married to them.

No worries I think your approach for a debate is a very healthy one.
Having different opinions is OK. I explained one of the problems with
annotations in Scala

***** "The other problem with annotations is that we can't currently
build annotations in Scala to be visible at runtime, so we'd probably
have to code them in Java or use some existent Java annotations ...
but this already smells hacky IMHO. "****

>
> You point at partial functions and traits to implement abstractions over
> the persistence layer, but what is missing is how to apply that to
> snippets. Yes, I could abstract the layer however I want, but my
> snippets will still be required to get at the layer by calling it
> directly, instead of having it provided. Can you share some input on
> that part?

def mySnippetFunc(xhtml: NodeSeq): NodeSeq = {
  val persistence = MyPersistenceFactory.getPersistence()
  ...
  persistence.getBy( /* some predicate */ )
  ...
}

This is a trivial model ... but in most cases this would be enough. In
many cases I don't really need something that injects a reference to
an annotated class member.

One other approach would be to use a RequestVar or a SessionVar to
hold a Persistence reference, and you can access it from different
places. You could set the proper context for such var-s from a
LoanWrapper added in Boot by calling S.addAround.
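
A rough sketch of that var-based approach (hypothetical names; the imports and calls reflect my reading of the Lift 1.x API, so treat it as a sketch rather than copy-paste code):

import net.liftweb.http._   // RequestVar, S
import net.liftweb.util._   // LoanWrapper

trait Persistence { def ping(): String }
object ProductionPersistence extends Persistence { def ping() = "jdbc" }
object MockPersistence extends Persistence { def ping() = "mock" }

// per-request holder; snippets read currentPersistence.is instead of a global
object currentPersistence extends RequestVar[Persistence](ProductionPersistence)

class Boot {
  def boot(): Unit =
    S.addAround(new LoanWrapper {
      // runs around every request; decide on the implementation here
      def apply[T](f: => T): T = {
        currentPersistence.set(ProductionPersistence) // or MockPersistence in a test run
        f
      }
    })
}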

David Pollak

Aug 31, 2009, 4:58:39 PM
to lif...@googlegroups.com
Chris,

I agree with Marius' comments.  By using Scala's functions and partial functions, I have not found any need for Dependency Injection or many of the other Java limitation workaround patterns.

Snippets are not associated in any way with persistence.  Snippets can work any way you want and are not tied to a particular mechanism for storing data.  Snippets are simply a way to transform XML to XML.

Lift's mapper classes are meant to be ActiveRecord-ish... closely tied to an RDBMS... nothing more, nothing less.  There's no mocking in Mapper... most of the tests I write for Mapper-related stuff run just fine against H2 or PostgreSQL.

Scala's traits used in conjunction with runtime logic singletons (e.g., LiftRules and S in Lift) mean that you don't need DI or other stuff.  How can these things be used together?

  • Business logic should be expressed in traits.  Because traits can be mixed into anything and they can contain methods, you can abstract your business logic from your persistence, but also mix the business logic into your persistence.
  • Instead of hardcoding the access to the singletons, you can go through a Factory:
    User.find(By(...)) ->  UserFactory().find(By(...))
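
A rough sketch of that factory indirection (hypothetical names, not Lift API):

trait UserFinder {
  def findByEmail(email: String): Option[String]   // stand-in for a real User type
}

object UserFactory {
  // reassigned in Boot (or in a test) to change what the factory vends
  var vendor: () => UserFinder = () => new UserFinder {
    def findByEmail(email: String) = Some("real-user:" + email)
  }
  def apply(): UserFinder = vendor()
}

// a snippet then calls UserFactory().findByEmail(...) instead of hitting the Mapper singleton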


On Sun, Aug 30, 2009 at 8:21 AM, Chris Lewis <burning...@gmail.com> wrote:

I like the Lift framework. It has its rough edges, but it's a great way
to get into web app development using scala. It borrows many good ideas
from other frameworks, most notably its convention over configuration
structure (rails) and its scriptless view layer (wicket).

One thing I'm not a big fan of is its baked-in database layer, the
Mapper (now in flux and being reborn as Record), and so I was pleased to
find the JPA archetype in the 1.1 tree. Using this archetype, you get a
barebones but functioning lift app using pure JPA. This is a great
start, but when I poked around the snippets I saw two things that
troubled me:

1. The underlying entity manager API leaks directly into what would be the service layer API: a single object exposed as Model.
2. The snippet code is hardwired to Model, using it directly as a global DAO.

This archetype is still in development, and it very well may change. It carries an experimental nature: it shows you how it can be done, but probably not how it should be done.

This is debatable.  While I agree that large, multi-team projects are going to call for more abstraction than a smaller project, showing people how to do things closer to the metal and letting them build their own abstractions can be more instructive.
 

However, it highlighted an issue I have with Lift, one that the boring
enterprise crowd has solved: dependency injection.

I have an admittedly specific idea in mind for what I want to implement
in my would-be Lift app: I want to be able to declare a few fields and
annotate them so that a layer above will provide me with acceptable
instances. Yeah, I want to inject DAOs in the oh-so-familiar
Guice/Spring/T5 IoC way. I like this partially because it's familiar,
but also because it provides me with loosely coupled code.

There's nothing that prevents you from doing that.  Lift is agnostic as to the classes used/accessed in snippets.

Personally, I think that annotations indicate failure of the language.  When annotations are required, it leads to a "second language" that runs in parallel with the main language (Java, Scala, Python, etc.)  Thus, I have tried to keep annotations out of Lift.
 

There's been some good discussion on the subject of implementing
dependency injection in Scala using mere language constructs. I dove
into this subject, starting with chapter 27 of
[http://www.artima.com/shop/programming_in_scala]: "Modular Programming
Using Objects." It's a good read, and I recommend the book. After that I
found my way to some relevant posts in the blogs of Debasish Ghosh and
Jonas Boner, respectively:

http://debasishg.blogspot.com/2008/02/scala-to-di-or-not-to-di.html
http://jonasboner.com/2008/10/06/real-world-scala-dependency-injection-di.html

Very cool indeed, but I've slightly digressed. What I want to explore is
how to loosely couple the persistence implementation (be it JPA, JDO, or
a baked in model) with the accessing of persistent objects. I don't see
how the aforementioned technique (the "cake" pattern) would help in the
case of lift snippets, because we don't have any kind of hooks where we
can provide configuration of snippets (at least, not that I know of).
This is exactly the issue that DI solves.

I don't believe snippets need this kind of configuration if you follow the same patterns as we've followed with LiftRules and S.  If, however, you disagree, you can always create your own SnippetScope trait that you mix into your Snippets.  This trait could provide all the services your snippet could want (persistence, etc.)  The trait could configure itself during construction and provide whatever services you want.
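
One rough reading of that idea, with all names hypothetical:

trait PaymentService { def charge(cents: Long): Boolean }

object ServiceRegistry {
  // set once in Boot; a test can point this at a mock before constructing snippets
  var paymentService: () => PaymentService =
    () => new PaymentService { def charge(cents: Long) = true }
}

trait SnippetScope {
  // resolved lazily, the first time the mixed-in snippet touches it
  lazy val payments: PaymentService = ServiceRegistry.paymentService()
}

class CheckoutSnippet extends SnippetScope {
  def order(cents: Long): String =
    if (payments.charge(cents)) "charged" else "declined"
}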
 

So what are the thoughts of the lift-power users? Is there a way to get
this in lift, or would you say that I am doing it wrong?

One of the most flexible parts of Scala is uniform access.  A parameterless method looks like a val looks like a var looks like an object.  Thus a trait can define a contract (e.g. def firstName: ValueHolder[String]) which could be satisfied by object firstName extends MappedString, or lazy val firstName = ..., etc.  With this kind of flexibility, fields, methods, etc. all appear to the application and to the trait as the same thing.  This, combined with factory functions and partial functions, leads to a very flexible way to build an environment using language constructs.
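
A small sketch of that flexibility (ValueHolder here is a local stand-in trait, not necessarily Lift's):

trait ValueHolder[T] { def get: T }

trait Person {
  def firstName: ValueHolder[String]  // the contract: just "something holding a first name"
}

// satisfied by a lazy val...
class InMemoryPerson(name: String) extends Person {
  lazy val firstName = new ValueHolder[String] { def get = name }
}

// ...or by a nested object, which is effectively what a Mapper field is
class MappedPerson extends Person {
  object firstName extends ValueHolder[String] { def get = "from the database" }
}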

Does this help?

Thanks,

David
 

sincerely,
chris





--
Lift, the simply functional web framework http://liftweb.net
Beginning Scala http://www.apress.com/book/view/1430219890
Follow me: http://twitter.com/dpp
Git some: http://github.com/dpp

Jeppe Nejsum Madsen

Sep 1, 2009, 4:46:47 AM
to lif...@googlegroups.com
David Pollak <feeder.of...@gmail.com> writes:

> Chris,
>
> I agree with Marius' comments. By using Scala's functions and partial
> functions, I have not found any need for Dependency Injection or many of the
> other Java limitation workaround patterns.
>
> Snippets are not associated in any way with persistence. Snippets can work
> any way you want and are not tied to a particular mechanism for storing
> data. Snippets are simply a way to transform XML to XML.

David. I'm also struggling with some of these issues, mostly due to the
fact that I need to supply mocks for testing and not so much because I
crave another layer of indirection :-)

I think what Chris was talking about was not so much that snippets are
tied to any specific persistence mechanism, but more that many (most?)
snippets, to do something useful, need to access some functionality in
the business logic. And using a static reference for this makes it
difficult to swap BL implementations (i.e. with mocks). In an IoC
container, those dependencies would be injected automatically into the
snippet.

I'm unsure how this could be implemented in Lift/Scala but would prefer
to use the language itself. Chris already showed one possible solution
with implicits, but I think there may be better solutions out there. I
agree with you on your view on annotations :-)

I feel I have a pretty good grasp on using FP "in the small" e.g for
algorithms and data structures, but can't yet see how FP constructs
(partial functions etc) can be used "in the large" e.g. for composing
whole applications. Even more so when combining this with scala's
powerful type system.

> Scala's traits used in conjunction with runtime logic singletons (e.g.,
> LiftRules and S in Lift) mean that you don't need DI or other stuff. How
> can these things be used together?

One of my issues with regard to testing Lift apps is actually the use of these
singletons. Much of my application code relies on these and requires an
elaborate setup to test properly. I can of course extract my own traits
for all the functionality that I use in S, LiftRules etc. but this seems
like something that could be integrated into lift proper. I'll spend
some more time on this and get back when/if I have some suggested
improvements :-)


> I don't believe snippets need this kind of configuration if you follow the
> same patterns as we've followed with LiftRules and S.

Could you briefly mention these patterns (or point to the code :-)?
There's a lot of code in there and, while readable, I don't think I can
distill the patterns yet....

/Jeppe

Chris Lewis

Sep 1, 2009, 8:56:19 AM
to lif...@googlegroups.com
David,

I'm still investigating options, but I wanted to restate my main issue
simply. It is the requirement snippets have on global data; that is it.
The way they receive data from and expose data to templates is really
nice. However, without the use of global objects (including lift
infrastructure like S, as well as application-level services), a snippet
cannot do anything useful. Instead of expressing dependencies via
constructor or function arguments, snippets must reach out. Calls to
factories provide a way to ask for dependencies, but again binds the
snippet to a specific factory (which in turn requires dependencies be
configured such that a specific factory can provide them).

I see this as problematic. I don't want spring in the mix, and I share
the disdain for java annotations in scala - but there has to be a better
way than globals.

Thanks so much for your work and continued engagement.

sincerely,
chris


David Pollak

Sep 1, 2009, 1:46:28 PM
to lif...@googlegroups.com
On Tue, Sep 1, 2009 at 5:56 AM, Chris Lewis <burning...@gmail.com> wrote:

David,

I'm still investigating options, but I wanted to restate my main issue
simply. It is the requirement snippets have on global data; that is it.
The way they receive data from and expose data to templates is really
nice. However, without the use of global objects (including lift
infrastructure like S, as well as application-level services), a snippet
cannot do anything useful. Instead of expressing dependencies via
constructor or function arguments, snippets must reach out.

And where do these constructor parameters come from?

How is:

class Foo(snippetConstructors: XX) extends Snippet {

}

Any more abstract than:

class Foo extends MyProjectState {
  
}

where:

trait MyProjectState {
  def snippetConstructor: XX
}


 
Calls to
factories provide a way to ask for dependencies, but again binds the
snippet to a specific factory (which in turn requires dependencies be
configured such that a specific factory can provide them).

I see this as problematic. I don't want spring in the mix, and I share
the disdain for java annotations in scala - but there has to be a better
way than globals.

S is not global.  Sure, it appears to be global, but it's not.

There's nothing magic about S.  You can create your own.

You can also have a trait that configures itself on construction and you can mix that trait into your snippets.

Either of these solutions gives you the ability to achieve your goals...

I'm not sure why you object to the S/factory paradigm.  At some point, the turtles end and you have to provide a mechanism for associating the abstraction (trait/interface) with some concrete implementation.  Why do you view DI magic as a more satisfying mechanism for resolving abstract to concrete?  Personally, I find concrete code that I can control (e.g., factory functions, partial functions, etc.) to be much more maintainable than anything that's magic.
 

David Pollak

Sep 1, 2009, 3:22:11 PM
to lif...@googlegroups.com
On Tue, Sep 1, 2009 at 1:46 AM, Jeppe Nejsum Madsen <je...@ingolfs.dk> wrote:

David Pollak <feeder.of...@gmail.com> writes:

> Chris,
>
> I agree with Marius' comments.  By using Scala's functions and partial
> functions, I have not found any need for Dependency Injection or many of the
> other Java limitation workaround patterns.
>
> Snippets are not associated in any way with persistence.  Snippets can work
> any way you want and are not tied to a particular mechanism for storing
> data.  Snippets are simply a way to transform XML to XML.

David. I'm also struggling with some of these issues, mostly due to the
fact that I need to supply mocks for testing and not so much because I
crave another layer of indirection :-)

I think what Chris was talking about was not so much that snippets are
tied to any specific persistence mechanism, but more that many (most?)
snippets, to do something useful, need to access some functionality in
the business logic. And using a static reference for this makes it
difficult to swap BL implementations (i.e. with mocks). In an IoC
container, those dependencies would be injected automatically into the
snippet.

This is not a snippet issue.  It's a Mapper issue.  Mapper is very rigid about the backing store of records.  Based on this conversation, I'll fix the issue in Record... in fact I've started the process of fixing things so you can abstract your business logic entirely away from the implementation of the record that contains the fields.  Also, the existing mechanism where a mapper record knows what its connection identifier is will be extended to support non-JDBC connections.

This will give you mocks.  It will give you separation of BL from persistence via traits. 
 

I'm unsure how this could be implemented in Lift/Scala but would prefer
to use the language itself. Chris already showed one possible solution
with implicits, but I think there may be better solutions out there. I
agree with you on your view on annotations :-)

Implicits buy you nothing.  Implicits must be defined somewhere and are only bound (at compile time) based on type.  What DI gives you is dynamic runtime decisions about what implementation to use.  In Lift, we do that with functions and partial functions.  S and LiftRules both contain excellent patterns for making decisions at runtime.
 

I feel I have a pretty good grasp on using FP "in the small" e.g for
algorithms and data structures, but can't yet see how FP constructs
(partial functions etc) can be used "in the large" e.g. for composing
whole applications. Even more so when combining this with scala's
powerful type system.

The nice thing about functional composition is that it gives you the ability to compose big things from small.

However, in the present case, you only really care about a couple of functions that provide runtime vending of classes that have a particular interface/trait.  You configure these functions early in the app's lifecycle and they vend the right things.
 

> Scala's traits used in conjunction with runtime logic singletons (e.g.,
> LiftRules and S in Lift) mean that you don't need DI or other stuff.  How
> can these things be used together?

One of my issues with regard to testing Lift apps is actually the use of these
singletons. Much of my application code relies on these and requires an
elaborate setup to test properly. I can of course extract my own traits
for all the functionality that I use in S, LiftRules etc. but this seems
like something that could be integrated into lift proper. I'll spend
some more time on this and get back when/if I have some suggested
improvements :-)

Please separate the concept of mocking persisted information from the rest of the discussion.   There should be no need to abstract away S or LiftRules.  If there's some configurability that S and LiftRules are not providing, please let us know and we'll add more things to S and LiftRules.  My point is that S and LiftRules provide good patterns for creating flexible systems that do the right thing whether in test, development, or production modes.
 


> I don't believe snippets need this kind of configuration if you follow the
> same patterns as we've followed with LiftRules and S.

Could you briefly mention these patterns (or point to the code :-)?
There's a lot of code in there and, while readable, I don't think I can
distill the patterns yet....

In S:

  /**
   * Returns the Locale for this request based on the LiftRules.localeCalculator
   * method.
   *
   * @see LiftRules.localeCalculator(HTTPRequest)
   * @see java.util.Locale
   */
  def locale: Locale = LiftRules.localeCalculator(containerRequest)

This calculates the locale based on the current request by calling LiftRules.localeCalculator:

  /**
   * A function that takes the current HTTP request and returns the current Locale
   */
  var localeCalculator: Box[HTTPRequest] => Locale = defaultLocaleCalculator _

  def defaultLocaleCalculator(request: Box[HTTPRequest]) =
  request.flatMap(_.locale).openOr(Locale.getDefault())

localeCalculator is a function that defaults to an implementation that looks in the request, but this could be overridden:

LiftRules.localeCalculator = request => User.currentUser.map(_.locale.is) openOr LiftRules.defaultLocaleCalculator(request)

We've changed the rules for calculating locale.  The specifics of a User-based locale calculator have replaced the generic locale calculator... the "dependency on User has been injected" but none of the consumers of this API know that it's changed... they call S.locale and all is good.

 

/Jeppe



Chris Lewis

Sep 1, 2009, 10:57:39 PM
to lif...@googlegroups.com
Ok, I had never looked at the source for S or LiftRules, but just poked
around in S and some dots connected. Assign different functions to the S
var members and you change functionality. Cool! Different than what my
mind defaults to, but so simple.

(You can see I have some baggage, and I am happy to let it go.)

Forgive my ignorance, and this issue probably has more to do with my
newness to scala rather than lift, but I don't see how your trait
example allows one to "construct" the snippet. Take a payment service
example. I start off with PayPal and some months later I switch my
processor to CyberSource. I don't want to tie the snippet to a specific
processor, so my mind, transposing java, says to write a payment service
interface and implementation (of course there's a DI container that
manages the wiring). In scala, if I mixin a trait that provides the
payment service implementation, I still have to change the trait being
mixed in if I want a different implementation. That, or have the trait
itself resolve the implementation, which is plumbing that would have to
be repeated per dependency-bearing trait.

This sounds messy and like a maintenance headache - I feel like I'm
missing your point here. I get the answer to "mocking" lift internals,
but hot-swapping service implementations without incurring a maintenance
hit is still unclear. Thanks again!

sincerely,
chris

Ross Mellgren

Sep 1, 2009, 11:05:39 PM
to lif...@googlegroups.com
At work we're implementing a multi-module server using lots of Java
and some new Scala components, with and without Lift. We manage the
service swappability using the JBoss container, deploying SARs and
WARs.

I'm not sure what kind of design you're going for, but I figured I'd
throw in my 2¢.

-Ross

Jeppe Nejsum Madsen

Sep 2, 2009, 3:34:29 AM
to lif...@googlegroups.com
Chris Lewis <burning...@gmail.com> writes:

> Take a payment service example. I start off with PayPal and some
> months later I switch my processor to CyberSource. I don't want to tie
> the snippet to a specific processor, so my mind, transposing java,
> says to write a payment service interface and implementation (of
> course there's a DI container that manages the wiring). In scala, if I
> mixin a trait that provides the payment service implementation, I
> still have to change the trait being mixed in if I want a different
> implementation.

But how is that different from e.g changing a bean name in a Spring
configuration (it's been a while since I used Spring, things may have
changed :-)

> That, or have the trait itself resolve the implementation, which is
> plumbing that would have to be repeated per dependency-bearing trait.

If you need the same dependency injected in several places, you could
create a Configuration trait that holds all your dependencies (akin to
the Spring context). Granted, this will provide all your services to all
snippets, which seems less than ideal...

This is an interesting discussion, and I'm still pondering a good
solution to the above.

/Jeppe

Chris Lewis

Sep 2, 2009, 7:25:14 AM
to lif...@googlegroups.com


Jeppe Nejsum Madsen wrote:
> Chris Lewis <burning...@gmail.com> writes:
>
>> Take a payment service example. I start off with PayPal and some
>> months later I switch my processor to CyberSource. I don't want to tie
>> the snippet to a specific processor, so my mind, transposing java,
>> says to write a payment service interface and implementation (of
>> course there's a DI container that manages the wiring). In scala, if I
>> mixin a trait that provides the payment service implementation, I
>> still have to change the trait being mixed in if I want a different
>> implementation.
>
> But how is that different from e.g changing a bean name in a Spring
> configuration (it's been a while since I used Spring, things may have
> changed :-)

How can it be tested with different implementations? I change the trait
being extended when I run tests, then change back for deployment (that
is, change the actual source)? A spring context isn't compiled into the
code, so I can simply change the context being used and I have my
different implementations.

I DO NOT want this. I want to understand the trait example as I'm sure
I'm missing something. David's explanation of the architecture of S and
LiftRules was convincing, and I'm sure I could be convinced on the trait
issue if I could see an example that proves it's not a maintenance
nightmare.

Jeppe Nejsum Madsen

Sep 2, 2009, 10:22:48 AM
to lif...@googlegroups.com
Chris Lewis <burning...@gmail.com> writes:


> How can it be tested with different implementations? I change the trait
> being extended when I run tests, then change back for deployment (that
> is, change the actual source)? A spring context isn't compiled into the
> code, so I can simply change the context being used and I have my
> different implementations.
>
> I DO NOT want this. I want to understand the trait example as I'm sure
> I'm missing something.

I think it's the fact that traits are stackable. I'm still exploring
solutions, but this example shows that at least testing is possible
without changing the source (See the full example here:
http://gist.github.com/179733 )

def main(args: Array[String]): Unit = {
  // Real config
  (new MySnippet).render
  (new YourSnippet).render

  // Do test
  (new MySnippet with MockConfiguration).render
}

yields:

Rendering my snippet
CyberSource payment for my stuff in CyberSourcePaymentService$CyberSourceImpl@3ba42792
Rendering your snippet
Doing it with MyDepImpl and paying it
CyberSource payment for MyDepImpl in CyberSourcePaymentService$CyberSourceImpl@2bd1e730
Rendering my snippet
Mock payment for my stuff in MockConfiguration$MockImpl@148238f4
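
Not the gist itself -- just a guess at the shape of the traits behind that output, with hypothetical names, so the stacking is visible:

trait PaymentService { def pay(what: String): String }

trait Configuration {
  def paymentService: PaymentService =
    new PaymentService { def pay(what: String) = "CyberSource payment for " + what }
}

trait MockConfiguration extends Configuration {
  override def paymentService: PaymentService =
    new PaymentService { def pay(what: String) = "Mock payment for " + what }
}

class MySnippet extends Configuration {
  def render: Unit = println(paymentService.pay("my stuff"))
}

// (new MySnippet).render                        -> CyberSource payment for my stuff
// (new MySnippet with MockConfiguration).render -> Mock payment for my stuff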

One issue with this solution, as I wrote earlier, is that all snippets
have access to all services in the Configuration, not just their declared
dependencies (which are actually superfluous) :-(

/Jeppe

David Pollak

Sep 2, 2009, 1:39:28 PM
to lif...@googlegroups.com
On Tue, Sep 1, 2009 at 7:57 PM, Chris Lewis <burning...@gmail.com> wrote:

Ok, I had never looked at the source for S or LiftRules, but just poked
around in S and some dots connected. Assign different functions to the S
var members and you change functionality. Cool! Different than what my
mind defaults to, but so simple.

(You can see I have some baggage, and I am happy to let it go.)

Forgive my ignorance, and this issue probably has more to do with my
newness to scala rather than lift, but I don't see how your trait
example allows one to "construct" the snippet. Take a payment service
example. I start off with PayPal and some months later I switch my
processor to CyberSource. I don't want to tie the snippet to a specific
processor, so my mind, transposing java, says to write a payment service
interface and implementation (of course there's a DI container that
manages the wiring). In scala, if I mixin a trait that provides the
payment service implementation, I still have to change the trait being
mixed in if I want a different implementation. That, or have the trait
itself resolve the implementation, which is plumbing that would have to
be repeated per dependency-bearing trait.

This sounds messy and like a maintenance headache - I feel like I'm
missing your point here. I get the answer to "mocking" lift internals,
but hot-swapping service implementations without incurring a maintenance
hit is still unclear. Thanks again!


Assuming you have an interface defined for the payment gateway (I think you'd have to do this in Guice-land)... let's call that PaymentGatewayIntf.

So, you've got a PayPal implementation (we'll assume a singleton for right now):

object PayPalGateway extends PaymentGatewayIntf {
....
}


You could define a trait that you could mix into your snippets:

trait PGTrait {
  def paymentGateway: PaymentGatewayIntf = PayPalGateway
}

So, your trait does a little abstraction, but it defaults to a hard-coded singleton.

Let's say you want to have a little more abstraction:

object MyAppRules {
  // a function that calculates the payment gateway with a default to the PayPal gateway
  var paymentGateway: () => PaymentGatewayIntf = () => PayPalGateway
}

And we update the trait (but none of the code that the trait is mixed into):

trait PGTrait {
  // def becomes lazy val so that the calculation is done once per instance
  lazy val paymentGateway: PaymentGatewayIntf = MyAppRules.paymentGateway()
}

But let's say that we want a session-specific gateway:

object sessionPaymentGateway extends SessionVar[Box[PaymentGatewayIntf]](Empty)

and the trait becomes:

trait PGTrait {
  lazy val paymentGateway: PaymentGatewayIntf = sessionPaymentGateway.is openOr MyAppRules.paymentGateway()
}

Let's say we're running in test mode, in Boot.scala:

if (Props.testMode) {
  MyAppRules.paymentGateway = () => MockPaymentGateway
}

Now, we want to internationalize, so let's change the default rules for calculating a payment gateway (note that this does not impact the trait):

object MyAppRules {
  // a function that calculates the payment gateway with a default to the PayPal gateway
  var paymentGateway: () => PaymentGatewayIntf = () => S.locale match {
    // java.util.Locale has no RU constant, so match on the language code
    case Locale.US => PayPalGateway
    case l if l.getLanguage == "ru" => RussianPaymentGateway
    case _ => PayPalGateway
  }
}
 

I hope this helps put the "Scala way" (or more specifically "DPP's Scala way") into perspective.

Thanks,

David

Timothy Perrett

Sep 2, 2009, 2:19:12 PM
to Lift
Chris,

I read your comments with interest - just to clarify, are you against
changing code / would prefer a configuration file? I sort of got that
vibe from some of your posts... Personally, I'm not down with
configuration files and prefer code that configures code.

Some of the systems I've got actually use a pattern very similar to
what DPP detailed - it works fine for me as I use specs and mokkito to
do testing - have you seen those?

Cheers, Tim

Kris Nuttycombe

Sep 2, 2009, 4:27:01 PM
to lif...@googlegroups.com
I think that the following really misses the point of dependency injection:

On Wed, Sep 2, 2009 at 11:39 AM, David
Pollak<feeder.of...@gmail.com> wrote:
>
> Let's say we're running in test mode, in Boot.scala:
> if (Props.testMode) {
>   MyAppRules.paymentGateway = () => MockPaymentGateway
> }

In order to test in isolation, production code should never have to
have any idea that mock classes might exist. In most cases, they don't
- the mock is a dynamic proxy that has expectations configured on it
*in the test case*.

Dependency injection can be used to do configuration at any level of
granularity, not just at the "global config" level that is Boot.scala.
This is the whole reason the enterprise world has rejected singletons,
because any code that uses such a singleton cannot be tested in
isolation without messing with a class that may be largely irrelevant
to the functionality being tested. If the only dependencies that an
object has are provided through constructor parameters, any and all
external state that the object depends upon can be trivially mocked
simply by passing in different parameters.
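
For example (hypothetical names, no framework involved -- the test just passes a hand-rolled fake):

trait Gateway { def charge(cents: Long): Boolean }

class Checkout(gateway: Gateway) {
  def order(cents: Long): String =
    if (gateway.charge(cents)) "charged" else "declined"
}

object CheckoutTest {
  def main(args: Array[String]): Unit = {
    val alwaysDeclines = new Gateway { def charge(cents: Long) = false }
    assert(new Checkout(alwaysDeclines).order(100) == "declined")
  }
}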

With respect to Tim's comment, with Guice you usually don't use a
configuration file; your configuration is in code. In a test case, you
create an Injector using a set of modules that have the "rules" for
object creation (specifications for what type or instance of object is
to be injected in any given position) and then you use this Injector
as your factory. In a production system, the process is exactly the
same - but you create the Injector with a different set of modules.
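
A rough Scala sketch of that Guice flow (module and binding names are mine, but the AbstractModule/Injector calls are standard Guice API):

import com.google.inject.{AbstractModule, Guice}

trait PaymentGateway { def charge(cents: Long): Boolean }
class PayPalGateway extends PaymentGateway { def charge(cents: Long) = true }

// production wiring
class ProductionModule extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[PaymentGateway]).to(classOf[PayPalGateway])
}

// test wiring: same interface, a hand-rolled fake instance
class TestModule extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[PaymentGateway]).toInstance(new PaymentGateway {
      def charge(cents: Long) = false
    })
}

object Wiring {
  def main(args: Array[String]): Unit = {
    val real = Guice.createInjector(new ProductionModule).getInstance(classOf[PaymentGateway])
    val mock = Guice.createInjector(new TestModule).getInstance(classOf[PaymentGateway])
    println(real.charge(100) + " / " + mock.charge(100))
  }
}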

In reading this thread, I can't help but to wonder... how extensively
have those of you who are purporting traits and partial functions to
be a replacement for DI actually used a modern dependency injection
framework?

Kris

David Pollak

Sep 2, 2009, 6:30:03 PM
to lif...@googlegroups.com
On Wed, Sep 2, 2009 at 1:27 PM, Kris Nuttycombe <kris.nu...@gmail.com> wrote:

I think that the following really misses the point of dependency injection:

On Wed, Sep 2, 2009 at 11:39 AM, David
Pollak<feeder.of...@gmail.com> wrote:
>
> Let's say we're running in test mode, in Boot.scala:
> if (Props.testMode) {
>   MyAppRules.paymentGateway = () => MockPaymentGateway
> }

In order to test in isolation, production code should never have to
have any idea that mock classes might exist. In most cases, they don't
- the mock is a dynamic proxy that has expectations configured on it
*in the test case*.

At some point, the concrete implementation has to be specified, DI or no.  At some point there needs to be a definition (in a config file, in an annotation, in Boot, in the current session, on the current call stack) of the concrete class.  Having a factory function that can be changed means that you can define how an instance is created, that's all.
 

Dependency injection can be used to do configuration at any level of
granularity, not just at the "global config" level that is Boot.scala.

And my example above allowed for configuration at any level (well... the example didn't include 'current request' but that's a change from SessionVar to RequestVar)
 
This is the whole reason the enterprise world has rejected singletons,
because any code that uses such a singleton cannot be tested in
isolation without messing with a class that may be largely irrelevant
to the functionality being tested.

I guess this is where our philosophies diverge.  I believe in integration tests and unit tests of things that deal with untyped data (Strings).  Most other forms of testing tend in my experience to be pointless: they take lots of time to write and run and yield very few delta defects.

Additionally, the S pattern looks like a global, but is in fact a front end to thread-specific state.  Scala's DynamicVar (S sits on top of a DynamicVar style pattern) gives you a lot of latitude to have a concrete symbol for something with a dynamic meaning for that symbol.
 
If the only dependencies that an
object has are provided through constructor parameters, any and all
external state that the object depends upon can be trivially mocked
simply by passing in different parameters.

I don't understand the difference between having a parameter magically passed because of an annotation and making a method call to get a parameter that satisfies an interface, other than that the call is explicit and the annotation-based mechanism is something that happens by magic, where I regard magic as bad.
 

With respect to Tim's comment, with Guice you usually don't use a
configuration file; your configuration is in code. In a test case, you
create an Injector using a set of modules that have the "rules" for
object creation (specifications for what type or instance of object is
to be injected in any given position) and then you use this Injector
as your factory. In a production system, the process is exactly the
same - but you create the Injector with a different set of modules.

So, you've got code that makes a determination about how to be a factory for a given interface under a specific circumstance, which sounds a lot like what I'm proposing.
 

In reading this thread, I can't help but to wonder... how extensively
have those of you who are purporting traits and partial functions to
be a replacement for DI actually used a modern dependency injection
framework?

I'm all for a discussion and even disagreement that's based on code and code examples.  I'm not keen on insulting other people because their views and experience differ from yours.

Thanks,

David
 

Kris


Kris Nuttycombe

Sep 3, 2009, 12:06:47 AM
to lif...@googlegroups.com
On Wed, Sep 2, 2009 at 4:30 PM, David
Pollak<feeder.of...@gmail.com> wrote:
>
>
> On Wed, Sep 2, 2009 at 1:27 PM, Kris Nuttycombe <kris.nu...@gmail.com>
> wrote:
>>
>> I think that the following really misses the point of dependency
>> injection:
>>
>> On Wed, Sep 2, 2009 at 11:39 AM, David
>> Pollak<feeder.of...@gmail.com> wrote:
>> >
>> > Let's say we're running in test mode, in Boot.scala:
>> > if (Props.testMode) {
>> >   MyAppRules.paymentGateway = () => MockPaymentGateway
>> > }
>>
>> In order to test in isolation, production code should never have to
>> have any idea that mock classes might exist. In most cases, they don't
>> - the mock is a dynamic proxy that has expectations configured on it
>> *in the test case*.
>
> At some point, the concrete implementation has to be specified, DI or no.
> At some point there needs to be a definition (in a config file, in an
> annotation, in Boot, in the current session, on the current call stack) of
> the concrete class.  Having a factory function that can be changed means
> that you can define how an instance is created, that's all.

My point is that Boot is part of the production codebase, and as such
it should be entirely ignorant of the test harness.

>> Dependency injection can be used to do configuration at any level of
>> granularity, not just at the "global config" level that is Boot.scala.
>
> And my example above allowed for configuration at any level (well... the
> example didn't include 'current request' but that's a change from SessionVar
> to RequestVar)

But can you test a snippet in the absence of references to RequestVar,
SessionVar, S, and the rest of Lift if the snippet makes calls to such
objects? I don't want to have to set up the state of a Req having been
processed through a RewriteRequest and so on to create an environment
for my snippet to run in.

>> This is the whole reason the enterprise world has rejected singletons,
>> because any code that uses such a singleton cannot be tested in
>> isolation without messing with a class that may be largely irrelevant
>> to the functionality being tested.
>
> I guess this is where our philosophies diverge.  I believe in integration
> tests and unit tests of things that deal with untyped data (Strings).  Most
> other forms of testing tend in my experience to be pointless: they take lots
> of time to write and run and yield very few delta defects.

I guess I'm just not that good; I miss boundary cases in my algorithms
on a not-too infrequent basis, particularly when there's a large state
space that the configurations of my persistent data can occupy. I find
unit tests to be extremely helpful, particularly with how often the
requirements I'm trying to satisfy grow and force me to refactor.

What's strange to me is that in my experience, unit tests are quick to
write and run, while integration tests are the ones that are a
nightmare to set up for.

> Additionally, the S pattern looks like a global, but is in fact a front end
> to thread-specific state.  Scala's DynamicVar (S sits on top of a DynamicVar
> style pattern) gives you a lot of latitude to have a concrete symbol for
> something with a dynamic meaning for that symbol.

I understand this, but to me thread-local state is little better than
global state, because when you come down to it RequestVar and
SessionVar instances behave like globals within the context of the
request or the session, respectively. If I have multiple snippets on a
page that both happen to mutate the state of a RequestVar without
checking it, code that's ignorant of the order of snippet calls cannot
reliably make any assumptions about said state. This has caused me
bugs that took serious time to track down and in some cases still
aren't fully resolved.

>> If the only dependencies that an
>> object has are provided through constructor parameters, any and all
>> external state that the object depends upon can be trivially mocked
>> simply by passing in different parameters.
>
> I don't understand the difference between having a parameter magically
> passed because on an annotation and making a method call to get a parameter
> that satisfies an interface other than the call is explicit and the
> annotation based mechanism is something that happens by magic where I regard
> magic to be bad.

I guess I feel like dependency injection is a declarative approach,
which I prefer to the imperative method call. Ultimately, the
significant question is what is allowed to configure how that method
call responds; if there are several layers of framework (Boot, S,
RequestVar, etc.) between the configurer and the configuree I don't
have confidence that the state I'm trying to establish won't get mucked
up along the way. DI is hardly magic; it's just a matter of having a
piece of code that will calculate a dependency graph for you then find
the correct objects to plug in from a flat scope to establish the
state of the object you request.

>> With respect to Tim's comment, with Guice you usually don't use a
>> configuration file; your configuration is in code. In a test case, you
>> create an Injector using a set of modules that have the "rules" for
>> object creation (specifications for what type or instance of object is
>> to be injected in any given position) and then you use this Injector
>> as your factory. In a production system, the process is exactly the
>> same - but you create the Injector with a different set of modules.
>
> So, you've got code that makes a determination about how to be a factory for
> a given interface under a specific circumstance, which sounds a lot like
> what I'm proposing.

The difference is that in your example, it eventually all devolves to
the MyAppRules object, and if you want to make a change to the gateway
that is used you have to change MyAppRules itself rather than supply a
different instance of a class conforming to the MyAppRules interface.
There are other issues - take the example of choosing the payment
gateway based upon locale. What if there are other factors you want to
choose based upon - say the part of the application that the gateway
is being called from? With the DI solution, that part of the
application simply instantiates its own injector with the correct
configuration; with your solution, MyAppRules then becomes coupled to
some notion about who its caller is.

>> In reading this thread, I can't help but to wonder... how extensively
>> have those of you who are purporting traits and partial functions to
>> be a replacement for DI actually used a modern dependency injection
>> framework?
>
> I'm all for a discussion and even disagreement that's based on code and code
> examples.  I'm not keen on insulting other people because their views and
> experience differs from yours.

I'm surprised you found my question insulting, and if you took it as a
slight I apologize, but I was actually honestly wanting to know, not
trying to score some sort of point. I know that my understanding of
the utility of DI was seriously flawed until I had used Guice in a
large project and discovered how simple it made testing for me. I
realize now based upon what you said above that you don't really find
unit testing to be of value, so I can see how these issues are not as
important to you.

Kris

Chris Lewis

Sep 3, 2009, 1:05:44 AM
to lif...@googlegroups.com


Timothy Perrett wrote:
> Chris,
>
> I read your comments with interest - just to clarify, are you against
> changing code / would prefer a configuration file? I sort of got that
> vibe from some of your posts... Personally, im not down with
> configuration files and prefer code that configures code.

No, I prefer "wiring" configurations to be in code. Certain artifacts
however, like localized strings or environmental configurations that
depend on the server (dev/qa/prod), seem better off externalized.

I suppose you got that vibe from the comment I made a bit earlier, about
needing to recompile to reflect a change in a dependency configuration
upstream (in a trait). That was a vague comment rooted in my ignorance
of the apparently late binding of calls on trait methods. More on that
later.

>
> Some of the systems i've got actually use a pattern very similar to
> what DPP detailed - it works fine for me as I use specs and mokkito to
> do testing - have you seen those?

DPP's explanation of how to mock infrastructure code (bound to S, etc)
made sense, but it still feels a bit sketchy. Again, this may be my
misunderstanding, but he's saying to do something like replace the value
of the function S.redirectTo, so I can test as needed. So here we go:

S.redirectTo = () => { println("redirect received"); }

Now that value is overwritten. What if I was unit testing a bunch of
snippets, some of those snippets call the same global function, but I
need to do per-snippet recordings/inspections of those calls? Must I
reconfigure the values under S each time? What if I forget one?

I've seen Bill Venner's specs - haven't used it but it looks cool. I've
not heard of mokkito (and didn't see a relevant link on google), so I
don't know how these tools might help here. Do share :-)

sincerely,
chris

>
> Cheers, Tim
> >
>

Kris Nuttycombe

Sep 3, 2009, 11:08:51 AM
to lif...@googlegroups.com
On Wed, Sep 2, 2009 at 11:05 PM, Chris Lewis<burning...@gmail.com> wrote:
>
> DPP's explanation of how to mock infrastructure code (bound to S, etc)
> made sense, but it still feels a bit sketchy. Again, this may be my
> misunderstanding, but he's saying to do something like replace the value
> of the function S.redirectTo, so I can test as needed. So here we go:
>
> S.redirectTo = () => { println("redirect received"); }
>
> Now that value is overwritten. What if I was unit testing a bunch of
> snippets, some of those snippets call the same global function, but I
> need to do per-snippet recordings/inspections of those calls? Must I
> reconfigure the values under S each time? What if I forget one?
>
> I've seen Bill Venner's specs - haven't used it but it looks cool. I've
> not heard of mokkito (and didn't see a relevant link on google), so I
> don't know how these tools might help here. Do share :-)

It's "mockito" - http://mockito.org/ which looks like EasyMock (what
I've used most) except that the interfaces are a bit more fluent and
mockito allows mocking of concrete classes, not just interfaces.
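
Roughly, usage looks like this - a hedged sketch with a made-up
PaymentGateway class, calling Mockito's Java API from Scala:

import org.mockito.Mockito.{mock, verify, when}

// A made-up concrete collaborator that a snippet might depend on.
class PaymentGateway {
  def charge(cents: Long): Boolean =
    throw new RuntimeException("would talk to the real gateway")
}

object MockitoSketch {
  def main(args: Array[String]): Unit = {
    // Mockito can stub the concrete class directly - no interface required.
    val gateway = mock(classOf[PaymentGateway])
    when(gateway.charge(500L)).thenReturn(true)

    // The code under test would be handed `gateway` instead of the real thing.
    assert(gateway.charge(500L))
    verify(gateway).charge(500L)
  }
}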

Kris

Chris Shorrock

unread,
Sep 3, 2009, 12:02:59 PM9/3/09
to lif...@googlegroups.com
Let me preface this by saying that this has been a brilliant conversation with some obviously smart people, and it has been enjoyable to follow over the past little bit.

I'll argue (as briefly as possible, although I'm notoriously wordy) that one of the biggest gaps between the two sides of this conversation is the concept of IoC and how it's being applied within the context of Lift. If we define dependency injection as the ability to retrieve a reference to a "depended upon" object, it's been shown that we can achieve this through various patterns that the language itself makes viable.

It sounds like the bigger stumbling point is the concept of IoC, that is, how we retrieve a reference to these objects. While dependency injection and IoC may be synonyms in some contexts, I want to define them differently here: IoC differs from DI in that it changes how things are referenced within the execution graph of an application. DI can obviously be used to achieve IoC.

In a previous message David asks:

How is:
class Foo(snippetConstructors: XX) extends Snippet {}

Any more abstract than:
class Foo extends MyProjectState {}

where:
trait MyProjectState { def snippetConstructor: XX}

And in this case I would say the difference is IoC. When testing Foo in the first instance, it's explicitly clear what you need to "mock" out to test Foo, whereas in the latter example what you require is a little less clear. Of course this is a pretty trivial example, so let's examine further the differences between:

def foo(state:S) = { ... } 
vs
def state = S
def foo() = {  /* uses state */ }

Again, pretty similar. But the difference is that foo in the first case has declared precisely what it requires to perform its operation. Its contract is very clear. foo in the latter case looks like it can be called without any parameters, but it implicitly needs a reference to S via the state method, so you need to understand how things are used inside the method to be able to test or use it. Again, a simple example, but as the complexity of a method grows this problem compounds until testing becomes very difficult.

Is this a huge deal? Maybe not, but when this type of thing is repeated over time, with many developers on a project, I think it could get hard to manage. Finally, the other thing that I don't believe anybody has mentioned is how this all relates to the Law of Demeter (LoD). In the example above, if we need to do something like:

def state = S
def foo() = { state.servletRequest.getCookies() }

In order to test, not only do we need to understand foo() and how it uses S via the state method, but we also need to understand that S calls servletRequest, which in turn exposes the cookies, further complicating testing.

def foo(cookies:Array[Cookie]) = { ... }

Would be a much preferable method signature. Anyways, I'll wrap up my thoughts there. Most of my opinions here come from having led development on a large SOA system where we made TONS of architectural mistakes, which really made testing a pain in the ass. In the past 6 or so we've started to employ some of the techniques discussed here and it's really made things much easier and cleaner. With that said, this was all Java based, and while I've been using Scala on personal projects for some time now, only recently did we start to roll it out within the company, so it's possible my opinions may be deprecated due to a lack of hands-on unit-testing experience with some of the patterns mentioned above :)
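
One last concrete illustration of the LoD point before I go - a tiny sketch
(hypothetical names, plain javax.servlet Cookie, nothing Lift-specific) of
how the cookie-taking version of foo tests with no mocks and no S at all:

import javax.servlet.http.Cookie

// Hypothetical snippet logic that depends only on what it declares.
object CookieGreeting {
  def foo(cookies: Array[Cookie]): String =
    cookies.find(_.getName == "username") match {
      case Some(c) => "Hello, " + c.getValue
      case None    => "Hello, guest"
    }
}

object CookieGreetingTest {
  def main(args: Array[String]): Unit = {
    // No session, no mock objects: just pass in plain values.
    assert(CookieGreeting.foo(Array(new Cookie("username", "chris"))) == "Hello, chris")
    assert(CookieGreeting.foo(Array.empty[Cookie]) == "Hello, guest")
  }
}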

(love the framework by the way)

Kris Nuttycombe

unread,
Sep 3, 2009, 12:48:27 PM9/3/09
to lif...@googlegroups.com
This is a great analysis, Chris, thank you. I'll be saving this one
away for my next discussion of DI and IoC - the Law of Demeter point
is a particularly salient one that had been implicit in my thinking
but really needs to be discussed.

As usual, when people have different axioms, communication is difficult,
because those axioms tend to be assumed to be shared when they probably
are not.

Kris

Timothy Perrett

unread,
Sep 3, 2009, 2:22:43 PM9/3/09
to Lift
Guys,

Can I direct this thread to a previous discussion on DI / IoC that
took place on EPFL scala-user list:

http://www.nabble.com/Dependency-injection-in-Scala--ts15229956.html

Some interesting thoughts on there that appear to be most relevant to
this thread :-)

Cheers, Tim

David Pollak

unread,
Sep 3, 2009, 8:10:32 PM9/3/09
to lif...@googlegroups.com
If you've got a dependency on something, you're going to have to set it up.  Whether that setup is implicit because you've got some annotations on some classes or explicit by creating some setup method that wraps the call to a test or a set of tests, you've got setup.

There's no practical difference between the setup required to do:

@inject
class Snippet(someState: State) {
  def transform(in: NodeSeq): NodeSeq = ...
}

Set up test injections

testSnippet

And 

class Snippet {
  lazy val someState = myState.is
}

S.init(mockSession) {
  myState.doWith(mockValue) {
    testSnippet
  }
}
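
To be concrete about the second style, here's a self-contained sketch of a
DynamicVariable-backed holder. It's just the shape of the thing, not the
real RequestVar/S machinery:

import scala.util.DynamicVariable

// Rough stand-in for a RequestVar-style holder, mirroring the myState
// example above; not Lift's real API.
object myState {
  private val holder = new DynamicVariable[String]("production-value")
  def is: String = holder.value
  def doWith[T](value: String)(body: => T): T = holder.withValue(value)(body)
}

class Snippet {
  lazy val someState = myState.is
}

object SnippetTest {
  def main(args: Array[String]): Unit = {
    // The mock value is scoped to this block and this thread only.
    myState.doWith("mock-value") {
      assert(new Snippet().someState == "mock-value")
    }
    assert(new Snippet().someState == "production-value")
  }
}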
 

> Additionally, the S pattern looks like a global, but is in fact a front end
> to thread-specific state.  Scala's DynamicVar (S sits on top of a DynamicVar
> style pattern) gives you a lot of latitude to have a concrete symbol for
> something with a dynamic meaning for that symbol.

I understand this, but to me thread-local state is little better than
global state, because when you come down to it RequestVar and
SessionVar instances behave like globals within the context of the
request or the session, respectively. If I have multiple snippets on a
page that both happen to mutate the state of a RequestVar without
checking it, code that's ignorant of the order of snippet calls cannot
reliably  make any assumptions about said state. This has caused me
bugs that took serious time to track down and in some cases still
aren't fully resolved.

Yeah... this is a problem with state.  I'm not sure how DI or IoC addresses it, however.  Statefulness leads to messiness.
 

>> If the only dependencies that an
>> object has are provided through constructor parameters, any and all
>> external state that the object depends upon can be trivially mocked
>> simply by passing in different parameters.
>
> I don't understand the difference between having a parameter magically
> passed because of an annotation and making a method call to get a parameter
> that satisfies an interface, other than that the call is explicit and the
> annotation-based mechanism is something that happens by magic, where I regard
> magic to be bad.

I guess I feel like dependency injection is a declarative approach,
which I prefer to the imperative method call. Ultimately, the
significant question is what is allowed to configure how that method
call responds; if there are several layers of framework (Boot, S,
RequestVar, etc.) between the configurer and the configuree I don't
have confidence that the state I'm trying to establish won't get mucked
up along the way. DI is hardly magic; it's just a matter of having a
piece of code that will calculate a dependency graph for you and then find
the correct objects to plug in from a flat scope to establish the
state of the object you request.
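
As a rough sketch of what I mean (made-up PaymentGateway and CheckoutSnippet
names, plain Guice, nothing Lift-specific):

import com.google.inject.{AbstractModule, Guice, Inject}

trait PaymentGateway { def charge(cents: Long): Boolean }

class MockGateway extends PaymentGateway { def charge(cents: Long) = true }

// The snippet's only knowledge of wiring is its constructor signature.
class CheckoutSnippet @Inject() (val gateway: PaymentGateway)

// The module is the "piece of code that calculates the dependency graph".
class TestModule extends AbstractModule {
  override def configure(): Unit =
    bind(classOf[PaymentGateway]).to(classOf[MockGateway])
}

object WiringSketch {
  def main(args: Array[String]): Unit = {
    val injector = Guice.createInjector(new TestModule)
    val snippet = injector.getInstance(classOf[CheckoutSnippet])
    assert(snippet.gateway.charge(500L))
    // In production, the same code runs with a different module installed.
  }
}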

And I prefer to have that calculation be explicit.
 

>> With respect to Tim's comment, with Guice you usually don't use a
>> configuration file; your configuration is in code. In a test case, you
>> create an Injector using a set of modules that have the "rules" for
>> object creation (specifications for what type or instance of object is
>> to be injected in any given position) and then you use this Injector
>> as your factory. In a production system, the process is exactly the
>> same - but you create the Injector with a different set of modules.
>
> So, you've got code that makes a determination about how to be a factory for
> a given interface under a specific circumstance, which sounds a lot like
> what I'm proposing.

The difference is that in your example, it eventually all devolves to
the MyAppRules object, and if you want to make a change to the gateway
that is used you have to change MyAppRules itself rather than supply a
different instance of a class conforming to the MyAppRules interface.

But the instance of the class has to be supplied somewhere.  At some point, there is a global because at some point the actual class needs to be determined.  In the examples I've been giving, the location of the calculation is something we can point to: the thread, then the session, then a global.  It's no different in Guice (I'm not familiar with all the ways that you can specify how to find the thing that vends the instance, but I'm betting the list and order of consultation is about the same as for my examples.)
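
The shape I have in mind is roughly this (a sketch with made-up names, not
Lift's actual factory code):

import scala.util.DynamicVariable

// Consultation order: thread override, then session override, then global.
class Vendor[T](globalDefault: () => T) {
  private val threadOverride = new DynamicVariable[Option[() => T]](None)
  private var sessionOverride: Option[() => T] = None // stand-in for session scope

  def vend: T =
    threadOverride.value.orElse(sessionOverride).getOrElse(globalDefault).apply()

  def doWithThread[A](f: () => T)(body: => A): A =
    threadOverride.withValue(Some(f))(body)

  def setSessionDefault(f: () => T): Unit = sessionOverride = Some(f)
}

object VendorExample {
  val gatewayVendor = new Vendor[String](() => "global-gateway")

  def main(args: Array[String]): Unit = {
    assert(gatewayVendor.vend == "global-gateway")
    gatewayVendor.setSessionDefault(() => "session-gateway")
    assert(gatewayVendor.vend == "session-gateway")
    gatewayVendor.doWithThread(() => "test-gateway") {
      assert(gatewayVendor.vend == "test-gateway")
    }
    assert(gatewayVendor.vend == "session-gateway")
  }
}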
 
There are other issues - take the example of choosing the payment
gateway based upon locale. What if there are other factors you want to
choose based upon - say the part of the application that the gateway
is being called from? With the DI solution, that part of the
application simply instantiates its own injector with the correct
configuration; with your solution, MyAppRules then becomes coupled to
some notion about who its caller is.

It's coupled to the notion of what the call stack is, which is no different from "that part of the application".
 

>> In reading this thread, I can't help but to wonder... how extensively
>> have those of you who are purporting traits and partial functions to
>> be a replacement for DI actually used a modern dependency injection
>> framework?
>
> I'm all for a discussion and even disagreement that's based on code and code
> examples.  I'm not keen on insulting other people because their views and
> experience differs from yours.

I'm surprised you found my question insulting; if you took it as a
slight, I apologize. I honestly wanted to know, not to score a point. I
know that my understanding of
the utility of DI was seriously flawed until I had used Guice in a
large project and discovered how simple it made testing for me.

Thanks,

David
 

