Reason: Connection timed out



May 1, 2021, 10:13:52 AM
to sdc-user
I am getting this error once every couple of days.
The same URL returns data from Postman or a browser. It works every time after SDC is restarted, until it fails again.
This happens only with pipelines that use the HTTP Client as an origin; the rest of the pipelines do not have any issues.

Pipeline Status: RUNNING_ERROR: HTTP_32 - Error executing request. HTTP-Status: NULL Reason: Connection timed out (Connection timed out)

Thank you in advance

Ales Cervenka

May 1, 2021, 12:22:33 PM
to sdc-user, MT
I've been running into this issue as well, MT. I don't know what causes it or how to resolve it; like you, I've only found that restarting SDC helps and the HTTP Clients start working again.

I am curious to know if others managed to address this.


May 4, 2021, 10:21:07 AM
to sdc-user, MT
I think our issue was resolved by increasing heap memory; we are going to watch it closely for a few weeks before considering it closed. I'm not sure why it surfaces as a timeout message.
I think it is related to how the HTTP Client uses memory for large pulls/batches.
Hope this helps

Mitch Barnett

May 4, 2021, 1:42:55 PM
to MT, sdc-user
Hi All,

If the issue is indeed caused by heap memory utilization, symptoms of a GC pause would certainly include HTTP connection timeouts.
Check your SDC instance's GC logs to verify whether or not you're coming up against the upper boundaries of your available heap memory - a tool like gceasy can be pretty useful for parsing and visual analysis of your GC logs.
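If GC logging isn't already enabled on your instance, a minimal sketch of turning it on, assuming JVM options are passed via the SDC_JAVA_OPTS variable in sdc-env.sh (the file path and log destination below are assumptions; adjust them to your install, and use the older -Xloggc/-XX:+PrintGCDetails flags if you're still on Java 8):

```shell
# Sketch: enable GC logging for SDC (assumes Java 9+ unified logging).
# Add to $SDC_CONF/sdc-env.sh (path is an assumption for a typical install):
export SDC_JAVA_OPTS="${SDC_JAVA_OPTS} -Xlog:gc*:file=/var/log/sdc/gc.log:time,uptime:filecount=5,filesize=20m"
```

After a restart, the rotated gc.log files can be fed to a tool like gceasy to look for long pauses or a steadily shrinking reclaim after each collection.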

You can also check GC memory usage in real-time from the SDC UI by going to Administration (gear icon, top-right) > SDC Metrics > Heap Memory Usage graph.

A restart of your SDC resolves the issue because it flushes all in-use memory, bringing up a JVM with little to no heap memory pressure.
I'm not sure which HTTP Client stages (origin, processor, destination) your pipelines use, but we do have configurations in place to help control the amount of memory used by a given pipeline (Max Batch Size, Batch Wait Time, Max Object Length). Check out our documentation for additional clarification on those configurations.

Ultimately the resolution may be to simply increase the heap size for your SDC JVM because your usage or average record size has increased.
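As a reference, a minimal sketch of raising the heap, again assuming the SDC_JAVA_OPTS variable in sdc-env.sh (the 4g value is illustrative, not a recommendation; size it from what your GC logs show):

```shell
# Sketch: raise the SDC JVM heap (values are illustrative, not recommendations).
# Add to $SDC_CONF/sdc-env.sh, then restart SDC:
export SDC_JAVA_OPTS="${SDC_JAVA_OPTS} -Xms4g -Xmx4g"
# Setting -Xms equal to -Xmx avoids heap-resize pauses at runtime.
```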

Mitch Barnett
Software Engineer - Engineering Productivity
