Just a basic question: is there a way that we can use the backend of Hadoop or Spark and run Rattle on top of it today?
--
You received this message because you are subscribed to the Google Groups "rattle-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rattle-users+unsubscribe@googlegroups.com.
To post to this group, send email to rattle...@googlegroups.com.
Visit this group at https://groups.google.com/group/rattle-users.
For more options, visit https://groups.google.com/d/optout.
Not yet, as such. If R is installed on the instance you could run Rattle there, but you would not be able to use the Spark/Hadoop-specific model functions.

One approach that comes very close is to use the RevoScaleR (now Microsoft R) functions. Initial support for its tree and forest models is already in Rattle for the local compute context. RevoScaleR allows the compute context to be changed to Hadoop or Spark with a couple of lines of code; all of the remaining code then stays as it is and runs unchanged on the remote compute context, whether that is a Hadoop or a Spark server. You can then run Rattle on your laptop and target the computation at Hadoop/Spark.

Native support is being looked at.

For general open source support, contributions of code, even sample code based on Rattle's Log tab, are always useful. That is, how would you change the code exposed in the Log tab to use Hadoop or Spark? That could then be incorporated into Rattle's code generator.

Looking forward to any contributions!
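As a rough sketch of the compute context switch described above (assuming Microsoft R Server with the RevoScaleR package installed; the formula, variable names, data file, and RxSpark() connection details are hypothetical placeholders for your own):

```r
## Hedged sketch: assumes RevoScaleR (Microsoft R) is available and an
## .xdf data source exists; formula and file name are placeholders.
library(RevoScaleR)

myData <- RxXdfData("myData.xdf")

## Develop locally -- this is the kind of call Rattle's Log tab exposes.
rxSetComputeContext(RxLocalSeq())
local_model <- rxDForest(target ~ var1 + var2, data = myData)

## Retarget the identical modelling code at a Spark cluster: only the
## compute context changes; the rxDForest() call itself is untouched.
rxSetComputeContext(RxSpark())   # cluster connection details omitted
spark_model <- rxDForest(target ~ var1 + var2, data = myData)
```

The point being that code generated for the local context would need no changes beyond the rxSetComputeContext() line to run on the remote cluster.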
Regards,
Graham
On 15 October 2016 at 02:13, ramkumar nimmakayala <ramkumar.n...@gmail.com> wrote:
Just a basic question: is there a way that we can use the backend of Hadoop or Spark and run Rattle on top of it today?