Hi there,
I'm having much the same problem with this behavior.
I'm using the advised configuration for proper streaming:
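For reference, this is the kind of configuration I mean, a sketch based on the Slick documentation's recommendation for streaming from PostgreSQL (`db` and `myQuery` are hypothetical placeholders):

```scala
import slick.jdbc.PostgresProfile.api._
import slick.jdbc.{ResultSetConcurrency, ResultSetType}

// Forward-only, read-only result set, a pinned session (.transactionally)
// and an explicit fetchSize, so the PostgreSQL driver streams rows instead
// of buffering the whole result set in memory.
val publisher = db.stream(
  myQuery.result
    .withStatementParameters(
      rsType = ResultSetType.ForwardOnly,
      rsConcurrency = ResultSetConcurrency.ReadOnly,
      fetchSize = 1000)
    .transactionally)
```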
Despite the reactive stream being correctly signalled for termination, the underlying SQL statement is not cancelled.
What I've seen happening is that Slick only cancels the statement between fetches.
So the problem arises when, for whatever reason, the streaming query I send to the DB takes a long time to even start returning data to the client.
If the stream is cancelled on the Slick side, I can see the process still running on the DB until the first batch of data is sent over the (still open) JDBC connection.
Only afterwards are all resources closed.
In fact, with Postgres, if we are not careful to configure the action for proper streaming and mark it with a pinned session,
we will be asking for the full data set and may end up exhausting all of the application's memory even after cancelling the stream, because the
JDBC layer keeps consuming the whole result set.
To overcome this, I'm thinking of bringing the actual SQL statement used by Slick into application scope by means of the "statementInit" param:
def withStatementParameters(rsType: ResultSetType = null,
                            rsConcurrency: ResultSetConcurrency = null,
                            rsHoldability: ResultSetHoldability = null,
                            statementInit: Statement => Unit = null,
                            fetchSize: Int = 0)
The "statementInit" param exposes the statement, which I can capture by saving it in a small wrapper and then use in my source like so:
theAkkaSource.watchTermination() { (_, f) =>
  f.onComplete { _ =>
    statementWrapper.cancelStatement()
  }
  NotUsed
}
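The wrapper itself could be a minimal thread-safe holder along these lines (a sketch; `StatementWrapper` and its method names are hypothetical):

```scala
import java.sql.Statement
import java.util.concurrent.atomic.AtomicReference

// Hypothetical holder: statementInit stores the live Statement here, and the
// stream's termination callback cancels it if one is still in flight.
final class StatementWrapper {
  private val current = new AtomicReference[Statement](null)

  // Passed as the statementInit: Statement => Unit parameter.
  def setStatement(st: Statement): Unit = current.set(st)

  def cancelStatement(): Unit = {
    val st = current.getAndSet(null)
    // JDBC allows Statement.cancel() to be invoked from another thread.
    if (st != null && !st.isClosed) st.cancel()
  }
}
```

It would then be wired in via `withStatementParameters(statementInit = statementWrapper.setStatement, ...)` on the streaming action.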
I think this might work.
However, I don't understand why Slick doesn't handle this itself.
Any thoughts or help?