High CPU Usage on latest OpENer


Richard Wu

Aug 24, 2024, 9:45:33 AM
to EIP Stack Group OpENer Developers
Hi all,

We have an application developed on OpENer 1.2 which we recently ported to the latest OpENer 2.3 master. The application exchanges I/O data with a Rockwell PLC over 16 connections at a 50 ms RPI. With the 1.2-based version the CPU usage is around 20%, while the 2.3-based version runs at 100%. We compared 1.2 and 2.3 and found no significant refactoring of the network event handling, so any comments on this would be welcome. Our application extends the input and output assemblies up to 32 instances. Flame graphs for both versions are in the attachment.
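For context, here is roughly how we register the extra assembly instances (a minimal sketch; the instance IDs, sizes, and helper name are illustrative, assuming OpENer's CreateAssemblyObject() from cipassembly.h):

/* Sketch: registering 32 extended assembly instances. Instance IDs
 * (100..131) and the 32-byte size are illustrative only. */
#include "cipassembly.h"

#define NUM_EXTENDED_ASSEMBLIES 32

static EipByte s_assembly_data[NUM_EXTENDED_ASSEMBLIES][32];

void RegisterExtendedAssemblies(void) {
  for (int i = 0; i < NUM_EXTENDED_ASSEMBLIES; ++i) {
    /* CreateAssemblyObject() binds a data buffer to an assembly instance. */
    CreateAssemblyObject(100 + i, s_assembly_data[i],
                         sizeof(s_assembly_data[i]));
  }
}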

Thanks in advance for any comments and advice.

Richard



perf23.svg
perf_cpu_good12.svg

martin...@gmail.com

Sep 2, 2024, 4:41:18 AM
to EIP Stack Group OpENer Developers
Hi,

v1.2 is more than 10 years old. Since then a major refactoring of OpENer has been performed, various changes were made to match the EIP Spec, and additional functionality has been added. I have used the released versions on plugfests without ever noticing a high CPU load, so unfortunately I cannot say what causes your system to go to 100% CPU.

If I read the graph right, most of the time is spent inside the select loop, which makes sense at first glance, because a lot of timeout functionality has been added over the years.
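For anyone reading along, the pattern looks roughly like this (a generic sketch of a select() loop with a timeout, not OpENer's actual code; the names and the 10 ms tick are made up):

/* Generic select() loop with a timeout: each pass waits for socket
 * activity or the timeout, then runs periodic timer handlers.
 * Illustrative pattern only, not OpENer's implementation. */
#include <sys/select.h>

void EventLoop(int highest_socket, const fd_set *master_set) {
  for (;;) {
    fd_set read_set = *master_set;
    struct timeval timeout = { .tv_sec = 0, .tv_usec = 10000 }; /* 10 ms */

    int ready = select(highest_socket + 1, &read_set, NULL, NULL, &timeout);
    if (ready > 0) {
      /* ... check each socket with FD_ISSET() and drain its data ... */
    }
    /* ... run connection timeout / RPI bookkeeping on every pass ... */
  }
}

With a timeout, select() wakes up at least once per tick even when no data arrives, so the timer bookkeeping runs on every pass.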

One change which could perhaps cause this (a wild guess) is that in the current version a stack variable is instantiated for both the received and the resulting message. In older versions the answer was written directly into the memory of the received message; however, that design made it impossible to keep hold of the information needed to build the answer.
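Roughly, the difference between the two designs (a simplified sketch with made-up names and sizes, not the actual code):

/* Old style: one shared global buffer; the reply overwrites the
 * received request in place. */
static unsigned char g_message_buffer[512]; /* used by old-style handler (not shown) */

/* New style: separate stack buffers, so the request stays intact while
 * the reply is built from it. */
void HandleMessageNewStyle(void) {
  unsigned char request[512] = { 0 }; /* zeroed on every call */
  unsigned char reply[512] = { 0 };
  /* ... parse request, build reply from it, send reply ... */
  (void) request;
  (void) reply;
}

If every received message now zero-initializes large stack buffers, that is extra per-packet work the old in-place design did not do, which could add up at a 50 ms RPI across 16 connections.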

Cheers

Richard Wu

Sep 4, 2024, 12:52:19 PM
to EIP Stack Group OpENer Developers
Hi Martin,

Thank you for the replies. In the older version of OpENer we noticed that a global variable is used as the receiving buffer, whereas it is on the stack in the latest version. We will run some experiments to see whether that makes a difference. Another observation: in the high-CPU test case the PLC uses a 50 ms RPI; if we relax it to 100 ms, the CPU usage seems to come back down to normal. The flame graph is in the attachment.

Thanks,

Richard
perf100newstack.svg

Richard Wu

Sep 10, 2024, 10:30:58 AM
to EIP Stack Group OpENer Developers
We actually found the root cause of the high CPU usage: in CheckAndHandleConsumingUdpSocket(), recvfrom() was called on the global UDP socket instead of the per-connection socket that select() had flagged as readable. The data pending on the connection's socket was therefore never drained, so select() returned immediately on every pass and the loop spun continuously, which would also explain why the 50 ms RPI made it worse than 100 ms. The corrected code:

void CheckAndHandleConsumingUdpSocket(void) {
  DoublyLinkedListNode *iterator = connection_list.first;

  CipConnectionObject *current_connection_object = NULL;

  /* Check whether a message has been received on a registered UDP socket. */
  while(NULL != iterator) {
    current_connection_object = (CipConnectionObject *) iterator->data;
    iterator = iterator->next; /* advance first, as the close function may invalidate the entry */

    if( (kEipInvalidSocket !=
         current_connection_object->socket[kUdpCommuncationDirectionConsuming])
        && ( true ==
             CheckSocketSet(current_connection_object->socket[
                              kUdpCommuncationDirectionConsuming]) ) ) {
      struct sockaddr_in from_address = { 0 };
      socklen_t from_address_length = sizeof(from_address);
      CipOctet incoming_message[PC_OPENER_ETHERNET_BUFFER_SIZE] = { 0 };
      /* CHANGE[DB]: Bug - must read from the current connection's socket
       * (the one select() flagged) instead of the global variable. */
      // int received_size = recvfrom(g_network_status.udp_io_messaging,
      //                              NWBUF_CAST incoming_message,
      //                              sizeof(incoming_message),
      //                              0,
      //                              (struct sockaddr *) &from_address,
      //                              &from_address_length);
      int received_size = recvfrom(current_connection_object->socket[
                                     kUdpCommuncationDirectionConsuming],
                                   NWBUF_CAST incoming_message,
                                   sizeof(incoming_message),
                                   0,
                                   (struct sockaddr *) &from_address,
                                   &from_address_length);
      /* ... remainder of the message handling is unchanged ... */

martin...@gmail.com

Nov 21, 2024, 2:51:29 AM
to EIP Stack Group OpENer Developers
Oh, very cool!

Thanks Richard! Could you open a pull request? Then I can merge your fix into the code base.

Cheers,
Martin
