Hello!
I am doing a benchmark study of the open62541 master branch on a 400 MHz microcontroller and comparing it to freeopcua. The server application uses the variable-data-source semantics and runs in blocking mode.
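For reference, this is roughly how the server exposes the variable: a data-source variable whose value is produced by a read callback instead of being stored in the node. The node IDs, names, and the 16-byte payload below are illustrative placeholders, not my actual application code.

```c
#include <open62541/server.h>

/* Illustrative 16-byte payload served from application memory. */
static UA_Byte payload[16];

static UA_StatusCode
readPayload(UA_Server *server, const UA_NodeId *sessionId, void *sessionContext,
            const UA_NodeId *nodeId, void *nodeContext,
            UA_Boolean sourceTimeStamp, const UA_NumericRange *range,
            UA_DataValue *value) {
    /* Copy the current payload into the DataValue on every read. */
    UA_ByteString bs = {sizeof(payload), payload};
    UA_Variant_setScalarCopy(&value->value, &bs, &UA_TYPES[UA_TYPES_BYTESTRING]);
    value->hasValue = true;
    return UA_STATUSCODE_GOOD;
}

static void addDataSourceVariable(UA_Server *server) {
    UA_VariableAttributes attr = UA_VariableAttributes_default;
    attr.displayName = UA_LOCALIZEDTEXT("en-US", "Payload");
    attr.minimumSamplingInterval = 1.0; /* ms */

    UA_DataSource ds;
    ds.read = readPayload;
    ds.write = NULL; /* read-only variable */

    UA_Server_addDataSourceVariableNode(server,
        UA_NODEID_STRING(1, "payload"),
        UA_NODEID_NUMERIC(0, UA_NS0ID_OBJECTSFOLDER),
        UA_NODEID_NUMERIC(0, UA_NS0ID_ORGANIZES),
        UA_QUALIFIEDNAME(1, "Payload"),
        UA_NODEID_NUMERIC(0, UA_NS0ID_BASEDATAVARIABLETYPE),
        attr, ds, NULL, NULL);
}
```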
Compared to freeopcua, open62541 performs better on my CPU by a factor of 10. However, there is a caveat: open62541 achieves much lower data rates. For example, sending a 16-byte message every 1 millisecond with open62541 yields a data rate of only 4 KB/s, which means a message is sent at most every 4 ms.
I have set the minimumSamplingInterval and the requestedPublishingInterval parameters to their minimum (1 ms). This sped things up a little, but still, a 256 KB message is sent at most every 36 ms when it should be sent every 1 ms. (The network capacity is available.)
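On the client side I set these two parameters roughly as in the sketch below (a simplified fragment, not my exact code; the callback and queue size are placeholders). Note that the server is free to revise the requested values, which is why I print the revised publishing interval back out.

```c
#include <open62541/client_highlevel.h>
#include <open62541/client_subscriptions.h>
#include <stdio.h>

static void configureSubscription(UA_Client *client, UA_NodeId monitoredNode) {
    /* Request a 1 ms publishing interval; the server may revise it upwards. */
    UA_CreateSubscriptionRequest subReq = UA_CreateSubscriptionRequest_default();
    subReq.requestedPublishingInterval = 1.0; /* ms */
    UA_CreateSubscriptionResponse subResp =
        UA_Client_Subscriptions_create(client, subReq, NULL, NULL, NULL);

    /* Request a 1 ms sampling interval on the monitored item. */
    UA_MonitoredItemCreateRequest monReq =
        UA_MonitoredItemCreateRequest_default(monitoredNode);
    monReq.requestedParameters.samplingInterval = 1.0; /* ms */
    monReq.requestedParameters.queueSize = 10; /* placeholder */
    UA_Client_MonitoredItems_createDataChange(client,
        subResp.subscriptionId, UA_TIMESTAMPSTORETURN_BOTH,
        monReq, NULL, NULL /* dataChangeCallback omitted in this sketch */, NULL);

    /* Check what the server actually granted. */
    printf("revisedPublishingInterval: %f ms\n",
           subResp.revisedPublishingInterval);
}
```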
For this use case, freeopcua only has problems sending messages at intervals below 10 ms, and it uses about 20% CPU load. open62541 has a negligible 3% CPU load, but the messages are sent at a 36 ms interval at best.
I have noticed that with open62541, the larger the message, the larger the actual sending interval. For example, a 4096-byte byte-array message can be sent at most every 86 ms. Since the CPU load with open62541 is relatively low, I assume there may be some sort of calculation inside the stack that limits the effective data rate so as not to burden the CPU too much. Is this correct?
Are there any possibilities to really push the protocol to its limits to see how well it performs under stress? Are there other parameters besides the two mentioned above that can be modified to achieve greater throughput?
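One thing I am wondering about in particular: if the server configuration clamps the requested intervals to a minimum (I believe recent open62541 versions have interval-limit fields on UA_ServerConfig, but I have not verified this against master), then relaxing those limits might be necessary before the 1 ms request takes effect. Something along these lines (field names are my assumption):

```c
#include <open62541/server_config_default.h>

/* Sketch: relax the server-side interval clamps so that a client's
 * 1 ms request is not silently revised upwards. The field names
 * publishingIntervalLimits / samplingIntervalLimits are assumed from
 * recent open62541 versions; please verify against your checkout. */
static void relaxIntervalLimits(UA_ServerConfig *config) {
    config->publishingIntervalLimits.min = 1.0; /* ms */
    config->samplingIntervalLimits.min = 1.0;   /* ms */
}
```

Is this the right place to look, or is the limit elsewhere?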
With best regards!
M.J.