SparkR Developers
Conversations (1–30 of 256)
raji sadhu, uttam gogineni · 3 messages · 9/25/19
execution time
function(){ start.time <- Sys.time() ```your spark r code for naive bayes ``` end.time <- Sys.
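The timing pattern in the snippet above can be wrapped in a small base-R helper. This is a sketch: the expression being timed is a placeholder (a SparkR naive Bayes fit, or anything else, could go in its place).

```r
# Time an arbitrary expression with Sys.time(), as in the snippet above.
# The timed expression is a stand-in; substitute e.g. a spark.naiveBayes() fit.
time_it <- function(expr) {
  start.time <- Sys.time()
  result <- force(expr)   # evaluate the lazily passed expression now
  end.time <- Sys.time()
  list(result = result,
       elapsed_secs = as.numeric(difftime(end.time, start.time, units = "secs")))
}

out <- time_it(sum(as.numeric(1:1e6)))
out$result   # 500000500000
```

Because R passes `expr` as an unevaluated promise, `force()` triggers the computation between the two timestamps, so the elapsed time measures only the wrapped expression.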
Ermutarra · 8/14/17
modulo operation with sparkR?
Hi all, How can I perform the modulo operation with sparkR? I can't find the function in the API
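In plain R the modulo operator is `%%` (with the result taking the sign of the divisor). On a SparkR DataFrame the same operation can be pushed down as Spark SQL via `selectExpr`; the `df`/`value` names below are hypothetical.

```r
# Base R modulo: the result has the sign of the divisor.
7 %% 3      # 1
(-7) %% 3   # 2

# Hedged SparkR sketch (needs a running Spark session; df and its
# "value" column are made-up names):
# library(SparkR)
# sparkR.session()
# df2 <- selectExpr(df, "value % 3 AS mod3")   # or Spark SQL's pmod(value, 3)
```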
sg...@mozilla.com · 7/20/17
Cancelling a running job
Hello, If I submitted a job through SparkR (interactive), how can I cancel it? E.g. write.parquet(ms3,
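One approach to the question above is SparkR's job-group helpers, which let you cancel everything submitted under a named group. This is a sketch, not run here: it needs a live Spark session, `ms3` comes from the question, and the exact signatures vary by Spark version (1.x took the Spark context as the first argument).

```r
# Sketch: cancel a running SparkR job via job groups (requires a Spark session).
library(SparkR)
sparkR.session()

setJobGroup("my-group", "long parquet write", interruptOnCancel = TRUE)
write.parquet(ms3, "out.parquet")   # the long-running job from the question

# From another console / interrupt handler:
cancelJobGroup("my-group")          # cancels all jobs started in the group
```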
Marco Biglieri · 2/1/17
warning netlib integration
Hello, I have used glm to create a linear regression model but there are these warning messages:
Saurabh Bhatt, …, Jeff van Geete · 3 messages · 1/27/17
Error in SparkR installation from Github
Not sure I understand. How do I access the SparkR library in R - using lib.loc in the call to library
prasan...@customercentria.com · 12/5/16
using SparkR how to build stepwise regression model
In R, the step(glm(formula, data, family), direction = "forward") function is there for stepwise
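SparkR's glm does not offer stepwise selection; in plain R the pattern from the question looks like the sketch below (note the spelling is "forward", and the built-in mtcars data is used only for illustration).

```r
# Base R forward stepwise selection with stats::step().
null <- glm(mpg ~ 1, data = mtcars, family = gaussian)
full <- glm(mpg ~ wt + hp + disp, data = mtcars, family = gaussian)

# Start from the null model and add terms while AIC improves.
fwd <- step(null, scope = formula(full), direction = "forward", trace = 0)
coef(fwd)   # the selected terms
```

On a SparkR DataFrame the closest equivalent would be refitting `spark.glm` with candidate formulas in a loop and comparing fit metrics yourself.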
Info Cim · 12/2/16
Spark training in bay area
Spark is a fast and general processing engine compatible with Hadoop data. Apache Spark is a fast,
Shilp · 11/18/16
How to perform inner join in SparkR 2.0.0
Hi, I have two SparkR data frames say table1 with 2 columns "EmailID" and "Count"
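A hedged sketch for the inner-join question above, using the SparkR 2.0 `join()` API (requires a local Spark installation and session; the sample data is invented, with "EmailID" taken from the question):

```r
# Sketch: inner join of two SparkDataFrames in SparkR 2.0.0 (needs a Spark session).
library(SparkR)
sparkR.session()

table1 <- createDataFrame(data.frame(EmailID = c("a", "b"), Count = c(1, 2)))
table2 <- createDataFrame(data.frame(EmailID = c("b", "c"), City  = c("SF", "LA")))

joined <- join(table1, table2, table1$EmailID == table2$EmailID, "inner")
collect(joined)   # one matching row, EmailID "b" (key column appears twice)
```

`merge(table1, table2, by = "EmailID")` is an alternative that avoids the duplicated key column.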
prasan...@customercentria.com · 10/18/16
using SparkR in RStudio (Windows) / getting error while reading large csv file / want to build glm model
* R code: csv file contains 30 columns and 200000 records / Please suggest code to build a glm model
Shilp · 10/14/16
Replace Certain Values of A column in SparklyR
Hi, I have a sparkR Data frame and I want to Replace certain Rows of a Column which satisfy certain
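For the conditional-replacement question above, SparkR overloads `ifelse()` for Columns, so the usual vectorized idiom carries over. A sketch (needs a Spark session; data and the cap value are invented):

```r
# Sketch: conditionally replace values in a SparkDataFrame column.
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(name = c("x", "y"), score = c(10, 99)))

# Rows where score > 50 get 50; all others keep their value.
df$score <- ifelse(df$score > 50, 50, df$score)
collect(df)
```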
guangw...@gmo.jp, Shivaram Venkataraman · 2 messages · 9/23/16
How to use lead() in SparkR
This mailing list is no longer active as the SparkR project is now a part of Apache Spark. Please
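For reference on the `lead()` question above: in SparkR 2.x, `lead()` is a window function and must be applied `over()` a window specification. A sketch (Spark session required; the data is invented):

```r
# Sketch: lead() over an ordered window in SparkR 2.x.
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(t = 1:4, v = c(10, 20, 30, 40)))

ws <- windowOrderBy("t")                 # order rows by t
df$next_v <- over(lead(df$v, 1), ws)     # value from the following row; NA on the last
collect(df)
```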
Info Cim · 9/6/16
Spark training in bay area
Spark is a top-level project of the Apache Software Foundation, designed to be used with a range of
greatsingh...@gmail.com, Shivaram Venkataraman · 2 messages · 8/23/16
Book or step by step guide on SparkR for an absolute beginner
I am not aware of such a book for SparkR. You can also email the Spark user mailing list user@spark.
infoc...@gmail.com · 7/28/16
Big Data Analyst Training
The amount of data produced across the globe has been increasing quickly and will continue
Info Cim · 7/27/16
Hadoop Training In Bay Area
Hadoop is a very common and powerful platform for working with data, but it
pvill...@stratio.com · 7/5/16
Call to new JObject sometimes returns an empty R environment
Hi all, I have recently moved from SparkR 1.5.2 to 1.6.0. I am doing some experiments using SparkR:::
Sean Farrell, …, Gonzalo Andrés Moreno Gómez · 3 messages · 7/3/16
Building a Random Forest model with SparkR
Hi Sean: Where do I find the function includePackage(sc, randomforest)? Regards, Gonzalo. On Thursday, 19
David Cabanillas, Shivaram Venkataraman · 2 messages · 6/12/16
saveAsTextFile
For Hadoop operations to work on Windows you might need the Hadoop binary package installed (See the
Devansh Doshi, …, Karthik Venkatesan · 4 messages · 4/29/16
Not able to create a Spark context from RStudio
Has anyone solved this problem? Thanks, Karthik. On Monday, November 2, 2015 at 11:28:09 AM UTC-8,
Saurabh Bhatt · 2 messages · 3/11/16
Running UDF in SparkR
UDF Functions = functions created in R On Friday, March 11, 2016 at 4:58:12 PM UTC+5:30, Saurabh
Duttatreya Sadhu · 12/10/15
read a csv file in Spark R
Currently I read the csv file through read.csv and then convert it into a Spark dataframe. Is there a
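On the csv question above: later SparkR releases can read a csv directly into a SparkDataFrame, skipping the read.csv + createDataFrame round trip. A sketch (Spark session required; "data.csv" is a placeholder path):

```r
# Sketch: read a csv straight into a SparkDataFrame (Spark 2.x).
library(SparkR)
sparkR.session()

df <- read.df("data.csv", source = "csv",
              header = "true", inferSchema = "true")

# On Spark 1.x (current at the time of the question) the same needs the
# external spark-csv package:
#   df <- read.df(sqlContext, "data.csv",
#                 source = "com.databricks.spark.csv", header = "true")
```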
Sebastian YEPES · 12/9/15
SparkR - Not distributing SparkR module in YARN
Hello, Has anyone else also encountered this issue? https://issues.apache.org/jira/browse/SPARK-12239
armen donigian, Sebastian YEPES · 2 messages · 12/7/15
SparkR ORC file read
You need to create a Hive context to read ORC files: hiveContext <- sparkRHive.init(sc) d <-
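The reply above uses the SparkR 1.x Hive-context API; for reference, the Spark 2.x equivalent is a session with Hive support. A sketch ("data.orc" is a placeholder path):

```r
# Sketch: reading ORC files in SparkR.
library(SparkR)

# Spark 2.x: a session with Hive support replaces sparkRHive.init(sc).
sparkR.session(enableHiveSupport = TRUE)
d <- read.orc("data.orc")                    # dedicated ORC reader
# equivalently: d <- read.df("data.orc", source = "orc")
```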
Jaikumar Krishna · 11/25/15
SparkR dataframe
Hi, My input is stored in HDFS without a header, comma separated (sl.no,name,class,age). I can read
Dmitriy Selivanov · 11/23/15
SparkR naming convention
Hi, SparkR developers. Thanks for all your work. I'm a newbie here, but googled a little bit before
Radhika Parik · 11/2/15
How to partition an RDD by key in SparkR?
Hi, I am fetching data from Hive in my SparkR program. I use map to convert this dataframe to a
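On the partition-by-key question above: the RDD-level API is private in released SparkR, but at the DataFrame level `repartition()` by a column achieves a similar co-location of rows sharing a key. A sketch (Spark session required; data invented):

```r
# Sketch: repartition a SparkDataFrame by a key column (SparkR 2.x signature).
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(key = c("a", "b", "a"), val = 1:3))

# Hash-partition so rows with the same key land in the same partition.
df2 <- repartition(df, col = df$key)
```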
Shiva Ram, Shivaram Venkataraman · 4 messages · 9/10/15
createSparkContext on edu.berkeley.cs.amplab.sparkr.RRDD failed --> SparkR (SparkR-pkg)
Unfortunately I don't know much about CDH version mapping. Please email the Spark user mailing
Daniel Dean, Shivaram Venkataraman · 2 messages · 8/21/15
SparkR external datasource
The best way to do this would be to implement a Spark SQL data source http://spark.apache.org/docs/
Selcuk Korkmaz (JIRA) · 8/7/15
[JIRA] (SPARKR-246) SparkR installation error: Error: Invalid or corrupt jarfile sbt/sbt-launch-0.13.6.jar
Selcuk Korkmaz created an issue SparkR / Bug SPARKR-246 SparkR installation error: Error: Invalid or
Ashish Dutt · 8/5/15
How to connect to remote HDFS programmatically to retrieve data, analyse it and then write the data back to HDFS?
Use Case: To automate the process of data extraction (HDFS), data analysis (pySpark/sparkR) and