[EVENTING] How to manage burst requests that cause a lot of failures in knative serving?


JinWoo Ahn

Apr 4, 2022, 8:19:38 AM
to Knative Users
Hello community. 

I want to call a Knative service as a subscriber using the broker-filter model. However, when a burst of events occurs, the broker sends too many events to the Knative service at the same time, causing many failures.

For example, let's assume the Knative service is set to max-scale: 10 and concurrency: 10, and each request runs for about 1 minute. If I generate 10,000 events at once, Knative Serving can run at most 100 requests at the same time (10 pods x 10 concurrent requests each). However, since the broker sends many more requests to Knative Serving simultaneously, most of the remaining requests fail. Even though I configure delivery (retries), failures keep occurring and it takes too long to process all the events.
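
As a sketch, the scaling settings described above would look something like this (the service name and image are placeholders, not from my actual setup):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-consumer                       # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/max-scale: "10"   # at most 10 pods
    spec:
      containerConcurrency: 10               # hard cap of 10 in-flight requests per pod
      containers:
        - image: example.com/event-consumer  # placeholder image
```

With these settings the revision can absorb at most 10 x 10 = 100 concurrent requests; anything beyond that queues at the activator or is rejected.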

To solve this problem, I want to set a rate limit on calls to the Knative service on the subscriber side of the event. Is there a way to configure something like this?
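
For context, my delivery configuration is along these lines (a minimal sketch; the trigger and service names are placeholders):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: consumer-trigger           # placeholder name
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-consumer         # placeholder name
  delivery:
    retry: 5                       # retry each failed delivery up to 5 times
    backoffPolicy: exponential
    backoffDelay: PT1S             # ISO 8601 duration for the initial backoff
```

Retries help individual events eventually succeed, but they don't limit how many deliveries the broker attempts concurrently, which is the part I'm missing.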

Thanks, 
Jinwoo Ahn.




Pierangelo Di Pilato

Apr 4, 2022, 8:28:55 AM
to JinWoo Ahn, Knative Users
Hi Jinwoo,

Which broker or channel implementation are you using? Depending on the implementation, it may have configuration options for applying back-pressure.

InMemoryChannel won't handle these cases well, since it's not meant for production use.

Thanks,



--

Pierangelo Di Pilato
Software Engineer
Red Hat, Inc
https://www.redhat.com/

JinWoo Ahn

Apr 4, 2022, 9:08:32 AM
to Knative Users
Hello Pierangelo. Thank you for your reply.

I used the multi-tenant channel-based broker with the in-memory channel.

I have also looked into the Kafka channel, but I couldn't find such a setting.


On Monday, April 4, 2022 at 9:28:55 PM UTC+9, pdip...@redhat.com wrote:

Evan Anderson

Apr 6, 2022, 9:48:18 AM
to JinWoo Ahn, Knative Users
I suspect that for the Kafka channel, controlling the topic parallelism will be sufficient to bound the max in-flight event deliveries. For other broker implementations which don't use the "max N outstanding deliveries per partition" model (RabbitMQ, GCP PubSub, In-Memory), you might need a feature request.

However, I want to spin this in a slightly different direction -- do you *want* to have to configure the trigger given your scaling settings, or would you prefer to have the Trigger use something like slow start to "learn" the concurrent delivery rate?

JinWoo Ahn

Apr 11, 2022, 5:12:34 AM
to Knative Users
I guess that for the Kafka channel, I can control global topic parallelism by changing the number of partitions.
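
For example, something like this (a minimal sketch; the channel name and values are placeholders):

```yaml
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: burst-channel    # placeholder name
spec:
  numPartitions: 10      # roughly bounds parallel deliveries, since the dispatcher
                         # processes a limited number of messages per partition
  replicationFactor: 1
```

But this is a per-channel (global) knob, not a per-trigger rate limit.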

First, I want to configure each trigger's rate limit. My revisions use the Knative Pod Autoscaler (KPA), and each revision's concurrency and upper bound (max scale) are different.
I think that on the eventing side, the number of concurrent requests to a revision should be at most <concurrency> * <upper bound>, since that is the revision's limit for handling concurrent requests.

Second, I think that if learning the concurrent delivery rate is possible, that would be the better solution!

On Wednesday, April 6, 2022 at 10:48:18 PM UTC+9, evan.k....@gmail.com wrote: