I suspect some documents in MongoDB are corrupted.
Hi Ayan,
It’s possible that a few documents in the collection don’t have the value types you expect (inconsistent values).
You can use the $type query operator to find those documents.
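As a minimal sketch of the idea, the snippet below mimics what a `$type` filter matches using plain Python over a few sample documents (the `price` field and the sample values are hypothetical, just for illustration); the equivalent MongoDB filter is shown in a comment:

```python
# Sample documents, one of which stores "price" with an inconsistent type.
docs = [
    {"_id": 1, "price": 9.99},
    {"_id": 2, "price": "9.99"},  # stored as a string, not a double
    {"_id": 3, "price": 4.25},
]

# Equivalent MongoDB filter to find documents whose "price" is NOT a double:
#   {"price": {"$not": {"$type": "double"}}}
# Here we emulate it with an isinstance check.
bad_ids = [d["_id"] for d in docs if not isinstance(d["price"], float)]
print(bad_ids)  # -> [2]
```

With pymongo you would pass the commented filter directly to `collection.find(...)` and inspect the returned documents.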
I tried to exclude the column from a custom select statement, but it did not work. Is that possible?
You can exclude certain fields from being mapped by defining a schema. See also: Explicitly declare a schema.
Is there any way to tolerate errors up to a certain amount? I do not want to stall a load of 1M records because one record is bad.
If you’re referring to errors where a field has inconsistent value types, you can try defining that field as nullable in the schema.
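As a quick sketch of why nullability helps, the snippet below shows that reading a nullable field simply yields a null where the value is missing instead of raising an error (sample documents and the `price` field are hypothetical); the commented line shows the corresponding PySpark field definition:

```python
# Rough PySpark equivalent (assumes the MongoDB Spark Connector):
#   StructField("price", DoubleType(), nullable=True)

# A nullable field tolerates documents where the value is absent or null:
docs = [{"price": 9.99}, {"price": None}, {}]
values = [d.get("price") for d in docs]  # None where absent; no error raised
print(values)  # -> [9.99, None, None]
```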
Although this is more related to Apache Spark itself, I would suggest posting a question on Stack Overflow under the apache-spark tag to reach a wider audience.
Regards,
Wan.
--
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
For other MongoDB technical support options, see: https://docs.mongodb.com/manual/support/
To unsubscribe from this group and stop receiving emails from it, send an email to mongodb-user+unsubscribe@googlegroups.com.
To post to this group, send email to mongod...@googlegroups.com.
Visit this group at https://groups.google.com/group/mongodb-user.
To view this discussion on the web visit https://groups.google.com/d/msgid/mongodb-user/b29a4a4c-734f-426a-be3b-373c3db492d0%40googlegroups.com.