Max. Throughput and Min. Sending Interval


miljenko.j...@gmail.com

Nov 8, 2017, 3:04:02 AM11/8/17
to open62541
Hello!

I am doing a benchmark study of the open62541 master version on a 400 MHz microcontroller, comparing it to freeopcua. The server application uses the variable-data-source semantics and runs in blocking mode.

Compared to freeopcua, open62541 performs better by about a factor of 10 on my CPU. However, there is a caveat: open62541 achieves much lower data rates. For example, sending a 16-byte message every 1 ms with open62541 yields a data rate of only 4 KB/s, which means that a message is sent at most every 4 ms.

I have set the minimumSamplingInterval and the requestedPublishingInterval parameters to their minimum (1 ms). This sped things up a little, but a 256 KB message is still sent at most every 36 ms when it should be sent every 1 ms. (The network capacity is available.)
For this use case, freeopcua only has problems sending messages at intervals below 10 ms, and it uses about 20% CPU load. open62541 has a negligible 3% CPU load, but the messages are sent at a 36 ms interval at best.
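For reference, setting those two parameters on the client side looks roughly like this (a sketch against a recent open62541 client API; exact function names vary between versions, and `client` and `nodeId` are assumed to already exist):

```c
/* Sketch only: assumes an established UA_Client *client and a target
 * UA_NodeId nodeId; API names follow a recent open62541 client API. */
UA_CreateSubscriptionRequest subReq = UA_CreateSubscriptionRequest_default();
subReq.requestedPublishingInterval = 1.0; /* ms */
UA_CreateSubscriptionResponse subResp =
    UA_Client_Subscriptions_create(client, subReq, NULL, NULL, NULL);

/* Request a 1 ms sampling interval for the monitored item as well. */
UA_MonitoredItemCreateRequest monReq =
    UA_MonitoredItemCreateRequest_default(nodeId);
monReq.requestedParameters.samplingInterval = 1.0; /* ms */
```

Note that the server may revise both values upwards, which is what the observed intervals suggest is happening here.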

I have noticed that in open62541, the larger the message, the larger the actual sending interval. For example, a 4096-byte array message can be sent at most every 86 ms. Since the CPU load with open62541 is relatively low, I assume there may be some calculation inside the protocol stack that limits the effective data rate so as not to burden the CPU too much. Is this correct?

Is there any way to really push the protocol to its limits to see how well it performs under stress? Are there other parameters besides the two mentioned above that can be modified to achieve greater throughput?

With best regards!
M.J.


Stefan Profanter

Nov 8, 2017, 3:08:01 AM11/8/17
to open62541
Hi Miljenko,
you could also try the fixed network buffer, which is currently in a PR, and see if it makes any difference in your performance measurements:

https://github.com/open62541/open62541/pull/1129

BR
Stefan

Julius Pfrommer

Nov 8, 2017, 4:15:01 AM11/8/17
to open62541
A single malloc per message probably cannot account for a 20 ms delay.
That is a long time, even on a 400 MHz CPU!

I have not seen the effect you mentioned on other systems.
Do you use an (embedded) OS?
I suggest you add more logging to pinpoint the source of the delay.

For example before/after calling "send" here: https://github.com/open62541/open62541/blob/master/plugins/ua_network_tcp.c#L137

Log messages have the following format:
UA_LOG_INFO(UA_Log_Stdout, UA_LOGCATEGORY_NETWORK, "message");

And the default logger prints a high-precision timestamp.

Best regards,
Julius

Miljenko Jakovljevic

Nov 8, 2017, 5:24:57 AM11/8/17
to open62541
Hello Stefan,
Hello Julius,

Yes, the default logger timestamps are very useful. I am using an embedded OS. In the end, the solution was to adjust the default parameters; the implementation itself looks very efficient.

After looking at the source code, I changed two additional server initialization variables in UA_ServerConfig, publishingIntervalLimits and samplingIntervalLimits, and set them to 10 ms. This resulted in a significant increase in data rate. The default values of UA_VariableAttributes_default and UA_ServerConfig_new_default might be worth mentioning in the documentation.
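As a sketch, the change amounts to something like the following (field and function names as they appear on the master branch at the time; the exact shape of the limit fields, assumed here to be min/max ranges, should be checked against your headers):

```c
UA_ServerConfig *config = UA_ServerConfig_new_default();

/* Lower the server-side floor for publishing and sampling intervals
 * to 10 ms; the defaults are considerably higher, which caps the
 * effective data rate regardless of what the client requests. */
config->publishingIntervalLimits.min = 10.0; /* ms */
config->samplingIntervalLimits.min = 10.0;   /* ms */

UA_Server *server = UA_Server_new(config);
```

With these limits relaxed, the requested 1 ms client-side intervals are no longer revised up to the defaults, which explains the data rate increase.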

Best regards!
M.J.