There have been several occasions over the last six months when Atlas has performed some maintenance or network configuration change and my production servers simply did NOT recover properly.
In this case, with Kubernetes pods running ReactiveMongo 1.0 against Atlas instances via a mongodb+srv connection URI, the event did not seem to take connectivity/services fully “down”, but because of the constant spinning of the exception below, our systems were very slow and never recovered, and never stopped logging these exceptions.
I have tried setting networkaddress.cache.ttl to something like 10 seconds, thinking it might be a cached DNS resolution, but that just didn’t seem to affect anything.
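For anyone trying the same thing, a minimal sketch of one way to apply that setting (the exact place does not matter much, as long as it runs before the first DNS lookup):

import java.security.Security

// networkaddress.cache.ttl is a security property, not a system property,
// so it is set via java.security.Security (or in the JVM's java.security
// file) and must be applied before the first DNS lookup is performed.
object DnsCacheTtl {
  def configure(): Unit = {
    Security.setProperty("networkaddress.cache.ttl", "10")          // successful lookups: 10s
    Security.setProperty("networkaddress.cache.negative.ttl", "5")  // failed lookups: 5s
  }
}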
The spinning and spewing of this stack trace created a CPU overload on the production servers.
2020-10-30 23:59:14,050 [ERROR] r.c.a.MongoDBSystem - [Supervisor-1/surchx] Fails to connect channel #d2661b82
java.nio.channels.ClosedChannelException: null
at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(AbstractChannel.java:976)
at reactivemongo.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:237)
at reactivemongo.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342)
at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548)
at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61)
at reactivemongo.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538)
at reactivemongo.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at reactivemongo.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at reactivemongo.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
Are you suggesting that I should set “r.c.a.MongoDBSystem” to “OFF”, since the log messages were ERROR-level logs?
I would certainly expect some pool/actor termination based on a network error, and logs related to that. However, these kept repeating (many per second) for hours after the event (I still don’t know exactly what the Atlas event was, other than that it appeared somewhat severe).
I would expect that actors which were active, or ready-to-be-active to make DB calls, might or would fail. However, after those actors failed and were replaced by newly created actors, I would expect their calls to succeed once the mongodb+srv URI resolved to a successful connection.
In my case, I had GBs of exception logs until I restarted the pods, and then everything went back to normal.
I mention actors here, but it’s probably relevant that this INFO message kept occurring too (perhaps one for each exception; 12-15 per second, every 5 seconds):
"[Supervisor-1/db] Fails to connect channel #7e90b17f"
Thanks for responding. I’m just looking for some tips on handling anything to do with Atlas (or MongoDB server) outages, so that my Play 2.8 app keeps on going after the errors/events have gone away.
The stack trace I posted above (identical except for the timestamp and the channel #id) is the only one I see in my logs, over and over again, many per second, for hours.
The code is normal collection reads and writes. Unfortunately, there is no hint of my code in any of the output.
I am not really sure how to explain this any more clearly. I have a production system using ReactiveMongo 1.0.0 against an Atlas primary/secondary/secondary MongoDB backend. Atlas did some cluster reboot, and my ReactiveMongo “system” seemed to go crazy and never repaired or corrected whatever state it was in. This has happened to me several times. I guess I can try to replicate it with some local networking hiccups.
What I was really trying to ask is: if a MongoDB node goes down (and perhaps the hostname(s) map to a different IP address), should the ReactiveMongo system recover and still satisfy database requests?
From: reacti...@googlegroups.com <reacti...@googlegroups.com>
On Behalf Of Cédric Chantepie
Sent: Monday, November 2, 2020 1:29 PM
To: ReactiveMongo - http://reactivemongo.org <reacti...@googlegroups.com>
Subject: Re: anyone seen abnormal reactivemongo recovery when Atlas does some hosted db maint or network changes?
And stacktrace
Yes, correct, no stack traces (or long term interruption of our service(s)) except for these repeating exceptions. I’ll re-include them together since I posted in different messages… Of course this is just 4 of the thousands but I wanted to include the timestamps to show how frequently they are coming.
Info 2020-10-30 18:00:04.863 MDT "[Supervisor-1/db] Fails to connect channel #4f04841e"
2020-10-31 00:00:04,863 [ERROR] r.c.a.MongoDBSystem - [Supervisor-1/db] Fails to connect channel #4f04841e
Error 2020-10-30 18:00:04.863 MDT java.nio.channels.ClosedChannelException: null at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(AbstractChannel.java:976) at reactivemongo.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:237) at reactivemongo.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538) at reactivemongo.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at reactivemongo.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at reactivemongo.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
Info 2020-10-30 18:00:04.863 MDT "[Supervisor-1/db] Fails to connect channel #8206dbce"
2020-10-31 00:00:04,863 [ERROR] r.c.a.MongoDBSystem - [Supervisor-1/db] Fails to connect channel #8206dbce
Error 2020-10-30 18:00:04.864 MDT java.nio.channels.ClosedChannelException: null at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(AbstractChannel.java:976) at reactivemongo.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:237) at reactivemongo.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538) at reactivemongo.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at reactivemongo.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at reactivemongo.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
Info 2020-10-30 18:00:04.864 MDT "[Supervisor-1/db] Fails to connect channel #a1fe6459"
Error 2020-10-30 18:00:04.864 MDT 2020-10-31 00:00:04,864 [ERROR] r.c.a.MongoDBSystem - [Supervisor-1/db] Fails to connect channel #a1fe6459
Error 2020-10-30 18:00:04.864 MDT java.nio.channels.ClosedChannelException: null at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(AbstractChannel.java:976) at reactivemongo.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:237) at reactivemongo.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538) at reactivemongo.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at reactivemongo.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at reactivemongo.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
Error 2020-10-30 18:00:04.864 MDT 2020-10-31 00:00:04,864 [ERROR] r.c.a.MongoDBSystem - [Supervisor-1/db] Fails to connect channel #8e1cde89
2020-10-30 18:00:04.864 MDT java.nio.channels.ClosedChannelException: null at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957) at reactivemongo.io.netty.channel.AbstractChannel$AbstractUnsafe.ensureOpen(AbstractChannel.java:976) at reactivemongo.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:237) at reactivemongo.io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1342) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:548) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext.access$1000(AbstractChannelHandlerContext.java:61) at reactivemongo.io.netty.channel.AbstractChannelHandlerContext$9.run(AbstractChannelHandlerContext.java:538) at reactivemongo.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) at reactivemongo.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) at reactivemongo.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
Info 2020-10-30 18:00:04.864 MDT "[Supervisor-1/db] Fails to connect channel #8e1cde89"
So when there are network failures like this, there is no way to have the production system operate properly other than “rebooting the server”? That just seems really wonky.
Is there anything you can share about the internals of the mongodb+srv protocol or bootstrapping that might suggest caching of hostnames? Or any host caching you can think of that might not be honoring the networkaddress.cache.ttl JVM setting?
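To see what the JVM itself resolves for the SRV bootstrap record (independent of the driver), here is a small check using the JDK’s JNDI DNS provider; the cluster host below is a placeholder, substitute the host from your own connection string:

import java.util.Hashtable
import javax.naming.Context
import javax.naming.directory.InitialDirContext

// Resolve the SRV record that the mongodb+srv bootstrap is based on and print
// the advertised host:port entries, to compare against what Atlas currently
// returns (e.g. via `dig SRV _mongodb._tcp.<cluster host>` from the same pod).
object SrvCheck {
  def main(args: Array[String]): Unit = {
    val env = new Hashtable[String, String]()
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory")

    val ctx = new InitialDirContext(env)
    val srv = ctx.getAttributes("_mongodb._tcp.cluster0.example.mongodb.net", Array("SRV"))
    println(srv.get("SRV"))
    ctx.close()
  }
}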
There must be someone using reactivemongo with Kubernetes with Atlas – and having similar issues. Anyone?
Anyway, thanks for your time. Unfortunately, if I can’t hack through this somehow it’s probably the end of the road.
From: reacti...@googlegroups.com <reacti...@googlegroups.com>
On Behalf Of Cédric Chantepie
Sent: Monday, November 2, 2020 3:36 PM
To: reacti...@googlegroups.com
Subject: Re: anyone seen abnormal reactivemongo recovery when Atlas does some hosted db maint or network changes?
Then it means that the pool is doing its job, handling network signals through Netty.
You can try to optimize the network options, but there is no driver issue as far as I can see.
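For example, such options can be passed through the connection string. A sketch with a placeholder host and purely illustrative values; check the option names against the connection-options documentation for the driver version in use:

// Example connection string with tuning options (values are illustrative).
val uri =
  "mongodb+srv://user:password@cluster0.example.mongodb.net/my-db" +
  "?connectTimeoutMS=10000" +      // TCP connect timeout
  "&heartbeatFrequencyMS=10000" +  // node-set monitor refresh interval
  "&rm.nbChannelsPerNode=10" +     // channels (connections) per node
  "&rm.failover=remote"            // more lenient failover strategy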
On Mon, 2 Nov 2020 at 22:48, Brad Rust <br...@interpayments.com> wrote:
Here is what I am seeing as I am trying to replicate this.
How I am trying to replicate it (and I seem to have reproduced *something* that I think is abnormal) is described in the steps listed below.
I am not sure what ”DB resolution in the document” section you are referring to but I would be happy and willing to try different options but I simply don’t know where to start.
From: reacti...@googlegroups.com <reacti...@googlegroups.com>
On Behalf Of Cédric Chantepie
Sent: Tuesday, November 3, 2020 10:18 AM
To: ReactiveMongo - http://reactivemongo.org <reacti...@googlegroups.com>
Subject: Re: anyone seen abnormal reactivemongo recovery when Atlas does some hosted db maint or network changes?
On Tuesday, 3 November 2020 at 06:41:16 UTC+1 br...@interpayments.com wrote:
This is also happening to us at Talenteca.com... The application is OK after the MongoDB cluster maintenance, but those logs appear like crazy, tons per second, killing the hard disk and the CPU and then crashing the application. This has been happening for us since ReactiveMongo version 0.16, so every time there is a maintenance we have to restart the application.
- Start the Play app, do all the happy-path reads and writes with success
- Atlas pause-cluster
- As expected, the Play app logs exceptions about connecting to the primary and/or secondaries
- Atlas resume-cluster
- The Play app recovers (mostly) and my happy-path reads and writes succeed again
- MongoDBSystem continues to log the following …
- r.c.a.MongoDBSystem - [Supervisor-6/my-db] Fails to connect channel #<SOME_CHANNEL_ID>
- In MongoDBSystem, it goes through the connectAll(nodeSet) code, where the updateNode block is called and the Netty connect fails, yielding the exception logged above (#6) (MongoDBSystem:1528). The nodes of the nodeSet have “Disconnected” status entries that *never* get removed. So while the application is still functional, from this point on I will have Disconnected channels in the node *forever*, until a server restart.
- Looking at this a different way: at MongoDBSystem:1572, updateNode(node, node.connections, Vector.empty), the node.connections Vector has connections which stay in a Disconnected state and never get removed or cleaned up. If you look at the toShortString of these, mine look like this:
- Node[testcluster-shard-00-00…...mongodb.net:27017: Unknown (9/9/10 available connections), latency=9223372036854775807ns, authenticated={}]
- Node[testcluster-shard-00-01…...mongodb.net:27017: Unknown (8/8/10 available connections), latency=9223372036854775807ns, authenticated={}]
- Node[testcluster-shard-00-02…...mongodb.net:27017: Primary (9/9/10 available connections), latency=430148298551800ns, authenticated={.....@admin}]
- I guess my assumption is that eventually you would expect all of those Nodes to come back to 10/10/10 as things recover.
I am not sure what ”DB resolution in the document” section you are referring to. I would be happy and willing to try different options, but I simply don’t know where to start.
… val (even to lazy val), as it’s better to get a fresh reference each time, to automatically recover from any previous issues (e.g. network failure).
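My understanding of that “fresh reference” advice, as a minimal sketch against the ReactiveMongo 1.0 BSON API (the database and collection names are placeholders; this is just one way to resolve them per use instead of caching them in a val):

import scala.concurrent.{ ExecutionContext, Future }
import reactivemongo.api.{ DB, MongoConnection }
import reactivemongo.api.bson.collection.BSONCollection

// Resolve the database and collection per call, so that after a failover the
// next request starts from a fresh node-set view instead of a stale reference.
object Collections {
  def database(con: MongoConnection)(implicit ec: ExecutionContext): Future[DB] =
    con.database("my-db")

  def orders(con: MongoConnection)(implicit ec: ExecutionContext): Future[BSONCollection] =
    database(con).map(_.collection("orders"))
}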
If there is a branch or some reproduction you need me to try, I haven’t built the driver from source before, but I am willing to do that to help out.
If there is a particular part of the codebase that you want me to try to figure out, I can try that too. It’s just a bit overwhelming to start looking around.
From: reacti...@googlegroups.com <reacti...@googlegroups.com>
On Behalf Of Cédric Chantepie
Sent: Thursday, November 5, 2020 6:10 AM
To: ReactiveMongo - http://reactivemongo.org <reacti...@googlegroups.com>
Subject: Re: anyone seen abnormal reactivemongo recovery when Atlas does some hosted db maint or network changes?
I would hypothesize that some nodes are removed from the replica set by the cluster restart, are no longer part of it afterwards, but are still reachable over the network.
I can’t seem to run the RM-SBT-Playground in my WSL2 Ubuntu shell. Any thoughts or suggestions?
SBT command: sbt
[info] welcome to sbt 1.3.13 (AdoptOpenJDK Java 11.0.8)
[info] loading settings for project global-plugins from sbt-updates.sbt ...
[info] loading global plugins from /home/brust/.sbt/1.0/plugins
[info] loading project definition from /home/brust/src/RM-SBT-Playground/project
[info] loading settings for project rm-sbt-playground from build.sbt ...
[info] set current project to RM-SBT-Playground (in build file:/home/brust/src/RM-SBT-Playground/)
[info] Compiling 1 Scala source to /home/brust/src/RM-SBT-Playground/target/scala-2.12/classes ...
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:10:28: object bson is not a member of package reactivemongo.api
[error] import reactivemongo.api.bson.BSONDocument
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:21:22: value close is not a member of reactivemongo.api.MongoConnection
[error] con.foreach(_._1.close()(5.seconds))
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:24:24: value fromStringWithDB is not a member of object reactivemongo.api.MongoConnection
[error] (MongoConnection.fromStringWithDB(uri).flatMap { dbUri =>
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:73:14: not found: value BSONDocument
[error] find(BSONDocument.empty).one[BSONDocument].map(_.isDefined), timeout))
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:73:38: not found: type BSONDocument
[error] find(BSONDocument.empty).one[BSONDocument].map(_.isDefined), timeout))
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:96:48: not found: value BSONDocument
[error] case Some(db) => db.collection("bar").find(BSONDocument.empty).
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:97:35: not found: type BSONDocument
[error] tailable.awaitData.cursor[BSONDocument]().fold({}) { (_, doc) =>
[error] ^
[error] /home/brust/src/RM-SBT-Playground/src/main/scala/Playground.scala:98:29: not found: value BSONDocument
[error] println(s"doc = ${BSONDocument pretty doc}")
[error] ^
[error] 8 errors found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 4 s, completed 2020 Nov 9 15:02:39
From: reacti...@googlegroups.com <reacti...@googlegroups.com>
On Behalf Of Cédric Chantepie
Sent: Thursday, November 5, 2020 4:07 PM
To: ReactiveMongo - http://reactivemongo.org <reacti...@googlegroups.com>
Subject: Re: anyone seen abnormal reactivemongo recovery when Atlas does some hosted db maint or network changes?
On Thursday, 5 November 2020 at 17:50:25 UTC+1 br...@interpayments.com wrote:
I tried with a Debian (stretch) Docker image with Java 1.8 and sbt 1.4.1 from here (https://github.com/mozilla/docker-sbt)... just in case someone else wants a ready-to-go Docker image to bootstrap from.
I received exactly the same errors. I didn’t see it yesterday, but RM_VERSION is 0.17.1 in build.sbt.
So, after setting `export RM_VERSION=1.0.0`, everything is bootstrapping and running OK.
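For anyone else hitting the same compile errors: they are consistent with compiling 1.0-style code (reactivemongo.api.bson, MongoConnection.fromStringWithDB) against an older 0.17.x artifact. A sketch of the kind of build.sbt arrangement this implies, assuming the build reads RM_VERSION from the environment with an older default (I haven’t checked the playground’s actual build definition):

// build.sbt (sketch): pick the driver version from the environment,
// falling back to an old default when RM_VERSION is not exported.
val rmVersion = sys.env.getOrElse("RM_VERSION", "0.17.1")

libraryDependencies += "org.reactivemongo" %% "reactivemongo" % rmVersion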
We changed the database and collection references from val to var and the problems did not appear again for more than two months. Hope this could help. Thanks!