I'm trying to build a pipeline in Scala that can be fed data from either of two structured data sources: a database or JSON files. I'm using ScalikeJdbc to read from the database, and it, like PlayJSON, maps data onto case classes.

Design considerations dictate that I must use the same case classes for both frameworks: ScalikeJdbc and PlayJSON.

ScalikeJdbc requires me to create a companion object for every case class, with an apply() method, as shown below:
import scalikejdbc._

case class Db(
  id: Option[Int],
  name: String,
  shouldCopy: Option[Boolean] = None
)

object Db extends SQLSyntaxSupport[Db] {
  // for ScalikeJdbc
  override val tableName: String = "dbs"

  def apply(rs: WrappedResultSet): Db = Db(
    rs.intOpt("id"),
    rs.string("name"),
    rs.booleanOpt("should_copy")
  )
}
The above code works fine in isolation, but compilation breaks as soon as I add PlayJSON's Reads[T] and Writes[T] macros to the companion object:
object Db extends SQLSyntaxSupport[Db] {
  ...
  // for PlayJSON
  implicit val reads = Json.reads[Db]
  implicit val writes = Json.writes[Db]
}
This fails to compile with the following error:
[error] path/Db.scala:74:34: ambiguous reference to overloaded definition,
[error] both method apply in object Db of type (id: Option[Int], name: String, shouldCopy: Option[Boolean])com.package.Db
[error] and method apply in object Db of type (rs: scalikejdbc.WrappedResultSet)com.package.Db
[error] match expected type ?
[error] implicit val reads = Json.reads[Db]
[error] ^
I'm new to Scala and the Play Framework, but I have a feeling that the Reads[T] and Writes[T] macros can be done away with in favour of something else that averts this situation [correct me if I'm wrong]. Alternatively, precluding the need for a companion object for ScalikeJdbc might help me overcome it.
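For illustration, one workaround I'm considering is to give the result-set extractor a name other than apply, so that the case class's synthetic apply is the only overload left for the macro to find (a sketch only, not tested; fromResultSet is just a name I picked):

import scalikejdbc._
import play.api.libs.json._

case class Db(
  id: Option[Int],
  name: String,
  shouldCopy: Option[Boolean] = None
)

object Db extends SQLSyntaxSupport[Db] {
  override val tableName: String = "dbs"

  // renamed from apply() so it no longer overloads the case-class apply
  def fromResultSet(rs: WrappedResultSet): Db = Db(
    rs.intOpt("id"),
    rs.string("name"),
    rs.booleanOpt("should_copy")
  )

  implicit val reads: Reads[Db] = Json.reads[Db]
  implicit val writes: Writes[Db] = Json.writes[Db]
}

This forces me to write Db.fromResultSet(rs) instead of Db(rs) at every call site, though, which is why I'm wondering whether there is a cleaner approach.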
Please add any helpful insights on this issue. Feel free to report any discrepancies in my overall design and library choices.
Please find the corresponding question on StackOverflow here.