Can't benchmark memcached with DPDK on AWS EC2 instances

Steven Cheng <yucheng871011@gmail.com>
Sep 15, 2022, 11:06:35 AM
to seastar-dev
Hi everyone,

I'm new to Seastar/DPDK. After trying for several days, I think I'm pretty close to running memcached on an AWS EC2 instance.

Here are my build steps and instance configuration (networking configuration):

# Build seastar
$ sudo ./install-dependencies.sh
$ git submodule update --init
$ ./configure.py --mode=release --enable-dpdk
$ ninja -C build/release

# Setup EC2 instances
1. I chose t2.micro as the client instance, which will run memtier_benchmark to benchmark memcached
2. I chose c5n.xlarge as the server instance, which supports the ENA driver
3. I created a new NIC and attached it to the server (both NICs are in the same security group)
4. Since there are two NICs in the server instance, I allocated an Elastic IP (3.130.X.X) for SSH usage
5. I added an in-bound rule to the server's security group, which allows TCP traffic on port 11212 (the port memcached runs on); a rough AWS CLI sketch of these steps follows
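
For reference, this is roughly what steps 3 and 5 look like with the AWS CLI; the subnet, security-group, ENI, and instance IDs below are placeholders, not the ones I actually used:
$ aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --groups sg-xxxxxxxx --description "DPDK data NIC" # create the second ENI in the server's subnet/security group
$ aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1 # attach as device index 1 (it shows up as ens6 in my case)
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 11212 --cidr 0.0.0.0/0 # open memcached's port (restrict the CIDR as needed)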

Then I do the system configuration as follows:
$ sudo sh -c "echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages" # Reserve hugepages
$ sudo modprobe uio && sudo insmod /usr/lib/modules/5.15.0-1019-aws/extra/dpdk/igb_uio.ko wc_activate=1 # Install kernel modules
$ sudo mkdir /mnt/huge && sudo mount -t hugetlbfs nodev /mnt/huge
$ sudo ./dpdk/usertools/dpdk-devbind.py --bind=igb_uio ens6
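
To double-check this part of the setup, the hugepages, kernel modules, and device binding can be verified with standard commands like:
$ grep -i hugepages /proc/meminfo # HugePages_Total/Free should reflect the 1024 reserved pages
$ lsmod | grep uio # uio and igb_uio should both be loaded
$ sudo ./dpdk/usertools/dpdk-devbind.py --status # ens6 should be listed under "using DPDK-compatible driver"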


I first test memcached with the POSIX network stack:
$ sudo ./memcached --smp 4 --max-slab-size 100 --network-stack posix --host-ipv4-addr 3.130.35.164 --port 11212

and run memtier_benchmark as:
$ memtier_benchmark -s 3.130.35.164 -p 11212 --protocol=memcache_text --clients=10 --ratio=1:1 --key-pattern=R:R --key-minimum=16 --key-maximum=16 --data-size=128 --test-time=20
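
(As a quick sanity check independent of memtier_benchmark, a plain port probe from the client instance should also succeed:)
$ nc -vz 3.130.35.164 11212 # should report that the connection to port 11212 succeeded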

Everything works fine. However, if I launch memcached with DPDK as:
$ sudo ./memcached --network-stack native --dpdk-pmd --dhcp 0 --host-ipv4-addr 3.130.35.164 --netmask-ipv4-addr 255.0.0.0 --smp 4 --port 11212

(I'm not sure whether the netmask has anything to do with the client's IP, which is random after the instance restarts.)
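
For what it's worth, the address and netmask that the second ENI actually carries can be checked before binding it to igb_uio, or afterwards via the AWS CLI (eni-xxxxxxxx is a placeholder for the second ENI's ID):
$ ip addr show dev ens6 # only works while the kernel still owns ens6, i.e. before running dpdk-devbind.py
$ aws ec2 describe-network-interfaces --network-interface-ids eni-xxxxxxxx --query 'NetworkInterfaces[0].PrivateIpAddress' # works even after the NIC is bound to DPDK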

The benchmark tool can't establish a connection; the following are the error messages from memtier_benchmark:
```
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
Connection error: Connection refused
Connection error: Connection refused
Connection error: Connection refused
```

There seems to be no error message (including in dmesg) on the server side:
```
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1d0f:ec20 net_ena
EAL: PCI device 0000:00:06.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1d0f:ec20 net_ena
PMD: Placement policy: Low latency
INFO 2022-09-15 12:19:44,261 [shard 0] seastar - Created fair group io-queue-0, capacity rate 2147483:2147483, limit 12582912, rate 16777216 (factor 1), threshold 2000
INFO 2022-09-15 12:19:44,262 [shard 0] seastar - IO queue uses 0.75ms latency goal for device 0
INFO 2022-09-15 12:19:44,262 [shard 0] seastar - Created io group dev(0), length limit 4194304:4194304, rate 2147483647:2147483647
INFO 2022-09-15 12:19:44,262 [shard 0] seastar - Created io queue dev(0) capacities: 512:2000:2000 1024:3000:3000 2048:5000:5000 4096:9000:9000 8192:17000:17000 16384:33000:33000 32768:65000:65000 65536:129000:129000 131072:257000:257000
ports number: 1
Port 0: max_rx_queues 8 max_tx_queues 8
Port 0: using 4 queues
Port 0: RSS table size is 128
LRO is off
RX checksum offload supported
TX ip checksum offload supported
TX TCP&UDP checksum offload supported
Port 0 init ... done:
Creating Tx mbuf pool 'dpdk_pktmbuf_pool0_tx' [2048 mbufs] ...
Creating Rx mbuf pool 'dpdk_pktmbuf_pool0_rx' [2048 mbufs] ...
Creating Tx mbuf pool 'dpdk_pktmbuf_pool3_tx' [2048 mbufs] ...
Creating Tx mbuf pool 'dpdk_pktmbuf_pool1_tx' [2048 mbufs] ...
Creating Tx mbuf pool 'dpdk_pktmbuf_pool2_tx' [2048 mbufs] ...
Creating Rx mbuf pool 'dpdk_pktmbuf_pool3_rx' [2048 mbufs] ...
Creating Rx mbuf pool 'dpdk_pktmbuf_pool1_rx' [2048 mbufs] ...
Creating Rx mbuf pool 'dpdk_pktmbuf_pool2_rx' [2048 mbufs] ...
Port 0: Changing HW FC settings is not supported

Checking link status
Created DPDK device
done
Port 0 Link Up - speed 0 Mbps - full-duplex
seastar memcached v1.0
```

Also, I have observed that the utilization of all four cores is 100%. I'm not sure whether I missed anything when building Seastar. I also suspect I made some mistakes in the network configuration, but honestly, I have no idea.

Any suggestions will be appreciated. Thanks in advance!