How to achieve low latency and low memory usage when filtering a large number of MongoDB collections in Apache Spark


Sanjay E

Mar 13, 2017, 10:47:19 PM3/13/17
to mongodb-user
Hi Everyone

I am new to MongoDB

I want to know how the native Spark MongoDB connector works with big data, where its dataset is stored, and how it is processed. Could anyone please explain this in detail?

Thanks,
Sanjay

Wan Bachtiar

Mar 20, 2017, 1:31:35 AM3/20/17
to mongodb-user

where it’s dataset is Stored and how it is Processed

Hi Sanjay,

Your questions are quite broad, so I'll try to answer at a high level.

The MongoDB Connector for Spark can read data from and write data to MongoDB deployments. Datasets in Apache Spark can be stored in any Spark-compatible storage system; for example, Spark can read data from HDFS and write the processed results into MongoDB.
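As a rough illustration, here is a minimal PySpark sketch of that read/process/write flow. It assumes the MongoDB Spark Connector (the 2.x/3.x series, submitted with the `org.mongodb.spark:mongo-spark-connector` package), a MongoDB instance at `mongodb://127.0.0.1`, and a hypothetical database `test` with collections `events` and `results`; names and URIs are placeholders, not from the thread.

```python
# Sketch only: requires a running Spark environment, a local MongoDB,
# and the MongoDB Spark Connector package on the Spark classpath, e.g.
#   spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1 ...
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("mongo-connector-example")
    # Default collections to read from and write to (hypothetical names)
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/test.events")
    .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/test.results")
    .getOrCreate()
)

# Read the collection as a DataFrame; the connector splits the collection
# into partitions so Spark workers can read slices in parallel.
df = spark.read.format("mongo").load()

# Filters and column selections are pushed down to MongoDB as
# aggregation-pipeline stages, so only matching documents cross the network.
filtered = df.filter(df["status"] == "active").select("name", "status")

# Write the processed results back to MongoDB.
filtered.write.format("mongo").mode("append").save()
```

The predicate pushdown in the `filter`/`select` step is what keeps latency and memory usage down when the source collections are large: MongoDB does the filtering, and Spark only materializes the documents it actually needs.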

To find out more about the MongoDB Spark Connector, I would recommend reviewing the following resources:

Regards,

Wan.
