DpdkDevice::getAmountOfFreeMbufs() ==> returns the number of free mbufs in the pool
DpdkDevice::getAmountOfMbufsInUse() ==> returns the number of mbufs currently in use (i.e. not in the pool)
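To illustrate what these two getters expose, here is a minimal self-contained sketch of the pool accounting they report (this models the invariant free + in-use = pool size; it is not real DPDK or PcapPlusPlus code, and all names and sizes are illustrative):

    #include <cassert>
    #include <cstdint>

    // Minimal model of an mbuf pool's accounting, mirroring what
    // DpdkDevice::getAmountOfFreeMbufs() / getAmountOfMbufsInUse() report.
    struct MbufPoolModel
    {
        uint32_t poolSize;
        uint32_t inUse = 0;

        explicit MbufPoolModel(uint32_t size) : poolSize(size) {}

        // Take one mbuf out of the pool; fails when the pool is exhausted,
        // which is when send/receive code starts seeing allocation errors.
        bool allocate()
        {
            if (inUse == poolSize)
                return false;
            ++inUse;
            return true;
        }

        // Return one mbuf to the pool.
        void free()
        {
            if (inUse > 0)
                --inUse;
        }

        uint32_t getAmountOfFreeMbufs() const { return poolSize - inUse; }
        uint32_t getAmountOfMbufsInUse() const { return inUse; }
    };

    int main()
    {
        MbufPoolModel pool(4095);  // 2^12 - 1, a typical DPDK pool size
        pool.allocate();
        pool.allocate();
        assert(pool.getAmountOfMbufsInUse() == 2);
        assert(pool.getAmountOfFreeMbufs() == 4093);
        pool.free();
        assert(pool.getAmountOfFreeMbufs() + pool.getAmountOfMbufsInUse() == pool.poolSize);
        return 0;
    }

Sampling the two values periodically is a cheap way to spot a slow mbuf leak: in-use should return to a baseline between bursts.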
DpdkDeviceList::startDpdkWorkerThreads()

    newMBuf->data_len = rawPacket->getRawDataLen();

    mBufArr[packetsToSendInThisIteration] = newMBuf;
    packetIndex++;
    packetsToSendInThisIteration++;
    numOfSendFailures = 0;

Please let me know what you think.
Thanks,

Yeah, I do both in the same iteration.
I actually noticed that in the code, and tested both ways (receiving and sending in the same iteration / sending in the next iteration). It did not produce significant differences in the results. But I would like to ask: why do they do it that way in the example? What's the reason for this?
Yes, I could make my own buffer, but since DPDK has these functions, I thought it would be better to use them instead of reinventing the wheel. As I said, I actually changed the PcapPlusPlus code to use rte_eth_tx_buffer, but I haven't had time to implement the timer, so I have only run tests with a buffer size of 1 (which makes no sense for comparing performance). I will let you know the results when I get back to this.
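For reference, the pattern that rte_eth_tx_buffer() / rte_eth_tx_buffer_flush() implements — accumulate packets until the buffer fills, plus a periodic timer flush so a partially filled buffer never waits indefinitely — can be modeled in a few lines. This is only a self-contained sketch of the batching logic, not DPDK code; the class, its names, and the batch size are illustrative:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Self-contained model of DPDK's tx-buffer pattern: packets accumulate
    // until the buffer reaches capacity, and a timer periodically flushes
    // whatever is pending so small batches don't sit forever.
    class TxBufferModel
    {
    public:
        explicit TxBufferModel(size_t capacity) : m_Capacity(capacity) {}

        // Queue one packet; flush automatically when the buffer fills up.
        // Returns the number of packets actually sent by this call.
        size_t buffer(int packet)
        {
            m_Pending.push_back(packet);
            if (m_Pending.size() >= m_Capacity)
                return flush();
            return 0;
        }

        // Send everything pending (called by the periodic timer or at shutdown).
        size_t flush()
        {
            size_t sent = m_Pending.size();
            m_TotalSent += sent;   // stand-in for the actual burst-send call
            m_Pending.clear();
            return sent;
        }

        size_t totalSent() const { return m_TotalSent; }
        size_t pending() const { return m_Pending.size(); }

    private:
        size_t m_Capacity;
        std::vector<int> m_Pending;
        size_t m_TotalSent = 0;
    };

    int main()
    {
        TxBufferModel txBuf(32);           // batch size of 32 packets
        for (int i = 0; i < 70; ++i)
            txBuf.buffer(i);               // automatic flushes at 32 and 64
        assert(txBuf.totalSent() == 64);
        assert(txBuf.pending() == 6);
        txBuf.flush();                     // the timer would do this periodically
        assert(txBuf.totalSent() == 70);
        return 0;
    }

This also shows why testing with a buffer size of 1 tells you nothing about the batching gain: every buffer() call degenerates into an immediate flush.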
Anyway, I'm not sure DPDK 18 can help you with that: if the hash function can be configured in the PMD, it may be supported in earlier versions as well. And if it can only be configured in the vmxnet3 driver, it's a VMware issue rather than a DPDK one.
I can help you check this out if you want.
Thanks,
portConf.rxmode.mq_mode = DPDK_CONFIG_MQ_MODE;
portConf.rx_adv_conf.rss_conf.rss_key = DpdkDevice::m_RSSKey;
portConf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6;
#define VMXNET3_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
ETH_RSS_IPV6 | \
ETH_RSS_NONFRAG_IPV6_TCP)
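If the goal is to request exactly the hash types vmxnet3 advertises, the rss_hf assignment above could be pointed at that macro instead of the bare IPv4/IPv6 flags. This is only a sketch of the config change, assuming the macro is visible where portConf is built; whether PcapPlusPlus exposes a hook for this is a separate question:

    // Sketch: request only the hash types vmxnet3 supports, instead of
    // the bare ETH_RSS_IPV4 | ETH_RSS_IPV6 set used above.
    portConf.rx_adv_conf.rss_conf.rss_hf = VMXNET3_RSS_OFFLOAD_ALL;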
LOG_ERROR("Couldn't set new allocated mBuf size to %d bytes", rawPacket->getRawDataLen());
rte_pktmbuf_free(newMBuf);
printf("tailroom is: %d\n", rte_pktmbuf_tailroom(newMBuf));
printf("headroom is: %d\n", rte_pktmbuf_headroom(newMBuf));
-parallel=1 -time=10: No MBuf errors, no crashes, no problems
-parallel=4 -time=10: Between 2 and 10 MBuf errors. Some of the connections transmit 0 bytes (but only on some iterations, see below **)
-parallel=1 -time=40: No MBuf errors, no crashes, no problems
-parallel=8 -time=10: Between 2 and 10 MBuf errors. More connections transmit 0 bytes, and more consistently between iterations. After a few runs, the app crashes with PANIC (see below *)
-parallel=1 -time=80: No MBuf errors, no crashes, no problems
-parallel=16 -time=10: App won't even start; crashes with panic immediately
-parallel=1 -time=160: No MBuf errors, no crashes, no problems
PANIC in vmxnet3_unmap_pkt():
EOP desc does not point to a valid mbuf
11: [/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7effa3c8d41d]]
10: [/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7effa45796ba]]
[  4]   7.00-8.00  sec   258 MBytes  2.16 Gbits/sec   20   195 KBytes
[  6]   7.00-8.00  sec  0.00 Bytes   0.00 bits/sec     0   202 KBytes
[  8]   7.00-8.00  sec   107 MBytes   900 Mbits/sec   85   103 KBytes
[ 10]   7.00-8.00  sec   181 MBytes  1.52 Gbits/sec   45   264 KBytes
[SUM]   7.00-8.00  sec   546 MBytes  4.58 Gbits/sec  150
I don't believe the two issues are related, so my suggestion is to focus on the first one.
From the information you provided — the tailroom being a constant 2048 with 1 connection but 534 with more than 1 connection — there's clearly something going on here. This is also the reason you're getting the "Couldn't set new allocated mBuf size to *** bytes" error.
Let me elaborate on that:
The mbuf structure is described in the DPDK documentation:
http://dpdk.org/doc/guides/prog_guide/mbuf_lib.html
As you can read in this doc, the tailroom is the number of bytes left in the mbuf after the packet data.
I'm not sure where you put the rte_pktmbuf_tailroom() call, but from your results with 1 connection (a value of 2048) it seems you placed it right after allocating the mbuf from the mbuf pool, when the packet data is still empty.
Assuming you didn't move the rte_pktmbuf_tailroom() call when going from 1 connection to multiple connections, something strange is happening: it seems that when RSS is enabled, a newly allocated mbuf from the pool is not really empty — a tailroom of 534 indicates packet data with a length of 1514 (2048 - 534). Something here doesn't make sense.
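The arithmetic behind that inference can be checked directly. This is a self-contained sketch using only the two tailroom values reported in this thread (the 2048-byte empty-mbuf tailroom is taken from the observations above, not measured here), with no DPDK dependency:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        // Observed in this thread: an empty mbuf reports a tailroom of
        // 2048 bytes (the full data room), and with RSS enabled a freshly
        // allocated mbuf reports a tailroom of only 534 bytes.
        const uint16_t emptyTailroom = 2048;
        const uint16_t observedTailroom = 534;

        // Tailroom shrinks by exactly data_len, so the implied length of
        // the data already sitting in the mbuf is:
        const uint16_t impliedDataLen = emptyTailroom - observedTailroom;

        // 1514 bytes is a full Ethernet frame without FCS (1500-byte MTU
        // + 14-byte header) — which is why it looks like the mbuf still
        // holds a previous full-size packet.
        assert(impliedDataLen == 1514);
        return 0;
    }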
To verify this theory, I'd add a call to rte_pktmbuf_data_len() and check whether the packet data length is indeed 1514.
Please add it to your code and let me know if that's the value you're getting.
I'm not sure why that happens, but I have a few questions that may shed some light and help us investigate:
I'd appreciate it if you could investigate in those directions and let me know your findings.
Thanks,