Backpressure for several subsequent API calls


Ronald van Raaphorst

Jun 19, 2025, 3:03:51 AM
to vert.x
Hi all,

I'm looking for a setup for the following problem:

An external application calls our API with an id of some sort.
That id is used to call another external API, which results in a list of other ids. A jobId is returned to the caller.
Those ids are fed one by one or as a list to another external API, and again a list of ids is returned.
This is repeated for a third system and these results are returned with the jobId given earlier.

We can't predict how long each API will take, so some form of backpressure is required so as not to overload our systems or the external ones. While I could argue that this calls for a Kafka flow, we can't use that (yet).

I'm familiar with Kotlin's Flow (https://kotlinlang.org/docs/flow.html#flows), with Vert.x's pipeTo and with Vert.x's reactive streams (https://vertx.io/docs/vertx-reactive-streams/java/).
I also found Vert.x's InboundBuffer in the io.vertx.core.streams package; am I right that a message consumer actually uses streams and/or an InboundBuffer?

I imagine an InboundBuffer connected to (or used by) a streaming reader implementation that reads the buffer, makes an API request, and passes the result to the next step.
Is there something built in for this already? It's pretty hard to figure out concrete steps to take based on the code and the examples.

Is there a processor handler in the streams flow that kind of 'maps' the input to the result of the external API call and sends that to a writer?

Hope the problem is understandable; I can't imagine others haven't solved this before :)

TIA
Ronald




Thomas SEGISMONT

Jun 30, 2025, 10:57:06 AM
to ve...@googlegroups.com
Hi,

What do you mean by not overloading the backends? Limiting the number of concurrent requests? Or limiting the rate of requests? Both?

Thomas


Ronald van Raaphorst

Jul 2, 2025, 9:09:38 AM
to vert.x
Hi Thomas,

My first bet would be to limit the number of concurrent requests for each service individually.

We don't know the performance characteristics of these external services.
Each service might respond more quickly with fewer concurrent requests, or its response time might level out above a certain number of concurrent requests. We might also constrain the number of concurrent requests per service based on, for example, that service's average response time over the last five minutes.
The response times might also depend on external factors, like the time of day, as these services might be used by other applications as well.
Each service should be maxed out, but the total throughput depends on the performance of the slowest service, of course.

So I'm looking for clues as how to implement this:

Request -> |Queue1 -> Call service 1| -> |Queue2 -> Call service2| -> |Queue3 -> Call service 3| -> Return response

When Queue 2 is below a threshold, it can instruct the previous block to start emptying Queue 1 (i.e. calling service 1).
The same goes for Queue 3. Each service can be called with, say, at most 5 concurrent requests.
When a service responds, another call can be made.
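The shape described above can be sketched with plain java.util.concurrent primitives. This is only an illustration, not Vert.x code: each stage caps its in-flight calls with a semaphore (the caller blocks when the stage is saturated, which is the backpressure), the services are simulated, and for simplicity the stages run one after another rather than overlapping as the queue picture suggests.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.function.IntFunction;

public class BoundedPipeline {
    static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    // One pipeline stage: fan each id out to a (simulated) service,
    // with a semaphore capping the number of in-flight calls.
    static List<Integer> stage(List<Integer> input, int maxConcurrent,
                               IntFunction<List<Integer>> service) throws Exception {
        Semaphore permits = new Semaphore(maxConcurrent);
        List<Future<List<Integer>>> inFlight = new ArrayList<>();
        for (int id : input) {
            permits.acquire(); // backpressure: block while maxConcurrent calls are in flight
            inFlight.add(POOL.submit(() -> {
                try { return service.apply(id); } finally { permits.release(); }
            }));
        }
        List<Integer> out = new ArrayList<>();
        for (Future<List<Integer>> f : inFlight) out.addAll(f.get());
        return out;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> ids = List.of(1, 2, 3);
        // Three simulated services; each call returns a small list of new ids.
        List<Integer> afterS1 = stage(ids, 5, id -> List.of(id * 10, id * 10 + 1));
        List<Integer> afterS2 = stage(afterS1, 5, id -> List.of(id * 10));
        List<Integer> afterS3 = stage(afterS2, 5, id -> List.of(id + 1));
        System.out.println(afterS3); // prints [101, 111, 201, 211, 301, 311]
        POOL.shutdown();
    }
}
```

The semaphore plays the role of the "max 5 concurrent requests" rule; a bounded queue between stages would add the "start emptying the previous queue" signalling.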

Hope this helps explaining the problem.

Ronald

Thomas SEGISMONT

Jul 10, 2025, 1:32:51 PM
to ve...@googlegroups.com
If HTTP/1 is the transport between your app and the external services, and pipelining is disabled, the connection pool size will be the concurrency limit.
But this is fixed at startup, so if you must adapt to dynamic constraints like the time of day, you can't rely on the HTTP client pool's internal queues and pool sizes.
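For the fixed-pool approach, the limit is configured on the client. A minimal sketch, assuming the Vert.x 4.x HttpClientOptions API (newer versions move pool sizing to a separate PoolOptions object); it requires the Vert.x dependency on the classpath:

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;

public class LimitedClient {
    public static void main(String[] args) {
        // With HTTP/1.1 and pipelining disabled, the pool size is effectively
        // the per-client concurrency limit: at most 5 requests are in flight,
        // the rest wait in the pool's internal queue.
        HttpClientOptions options = new HttpClientOptions()
            .setMaxPoolSize(5)
            .setPipelining(false);
        HttpClient client = Vertx.vertx().createHttpClient(options);
    }
}
```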

Vert.x doesn't have built-in, general-purpose stream/flow implementations, but we aim to make it work well with Kotlin coroutines, virtual threads, RxJava and Mutiny.
If you're familiar with Kotlin flows, you could build your pipeline with them.


Bruno F

Jul 16, 2025, 4:13:59 AM
to vert.x
I'm not familiar with Kotlin Flows but with regular Kotlin Coroutines you can do something like this:

The producers will avoid putting too many things in memory: they "yield" data on demand.
The cool thing with Vert.x is that you don't need the Dispatchers.IO I had to use in my example: a single event-loop thread is able to achieve the concurrency.
I'm using that pattern in production and it's working very well.
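(Bruno's snippet was not preserved in the archive. As a rough approximation of the demand-driven pattern he describes, here is a plain-Java sketch, with names of my own choosing: a small bounded buffer where the producer blocks as soon as the buffer is full, so it only emits data when the consumer has made room.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DemandDrivenProducer {
    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer: put() blocks once 4 items are waiting,
        // so production is driven by consumer demand (backpressure).
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
        final int POISON = -1; // sentinel marking the end of the stream

        Thread producer = new Thread(() -> {
            try {
                for (int id = 1; id <= 10; id++) {
                    buffer.put(id); // blocks when the buffer is full
                }
                buffer.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int id = buffer.take(); id != POISON; id = buffer.take()) {
            sum += id; // stand-in for the downstream API call
        }
        producer.join();
        System.out.println("processed sum = " + sum); // prints "processed sum = 55"
    }
}
```

With coroutines the same shape needs no extra threads; the blocking put/take become suspending sends and receives on a Channel.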


Thomas SEGISMONT

Jul 16, 2025, 4:16:26 AM
to ve...@googlegroups.com
Thanks for the feedback, Bruno!
