Using these queries, we cut the complete dataset down to just the subset we need.
Ideally, I would like to reuse the existing queries with the mongo-spark-connector.
So my question is: can I keep using find()-style queries with the connector? I've read that "generally, you'll use an aggregate to calculate something specific, whereas you'll be using find() to retrieve documents". I basically want to pre-filter the data being passed from MongoDB to Spark, without adding any extra fields or doing group-bys or the like. Thanks
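For what it's worth, one way to reuse a find()-style filter with the connector is to wrap it in a single-stage aggregation pipeline consisting of just a $match, which MongoDB applies server-side before any data reaches Spark. A minimal sketch in Python, assuming the mongo-spark-connector's "pipeline" read option (option names vary by connector version, so check the docs for yours; the URI and field names here are made up):

```python
import json

def match_pipeline(query):
    """Wrap a find()-style query document in a single $match stage.

    The resulting pipeline filters documents on the MongoDB server,
    so Spark only ever sees the matching subset."""
    return json.dumps([{"$match": query}])

# Reuse an existing find() filter, e.g. db.coll.find({"status": "active"})
pipeline = match_pipeline({"status": "active"})

# Hypothetical Spark read (needs a running Spark session and MongoDB;
# format/option names assume connector v2.x and may differ in your version):
# df = (spark.read
#       .format("com.mongodb.spark.sql.DefaultSource")
#       .option("uri", "mongodb://host/db.coll")
#       .option("pipeline", pipeline)
#       .load())
```

Since $match takes the same query document syntax as find(), existing filters can usually be dropped in unchanged.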
--
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
For other MongoDB technical support options, see: https://docs.mongodb.com/manual/support/