Not understanding MAC address learning in SD-Fabric


David Lake

Nov 18, 2022, 12:15:07 PM
to sdfabr...@opennetworking.org

Hello

 

More help please

 

I have simplified my system to a single Stratum EdgeCore switch with two ports:

 

10.0.100.254 is the Access port on the Stratum switch

10.0.100.1 is the gNB UERANSIM.

 

10.0.200.254 is the Core (N6) port on the Stratum switch

 

The N3 address is 10.0.100.100

 

There are Layer 2 switches (two separate switches): one between the Stratum Access port and the UERANSIM, and one between the N6 interface and the general network.

 

Everything looks OK and I see the IP addresses added to ONOS.

 

BUT: when I try to ping 10.0.100.254 from 10.0.100.1, or to ping 10.0.200.254 from a host on 10.0.200.0/24, I see the ‘hosts’ populate in ONOS, but I don’t see ICMP replies or ARPs coming back, so the MAC address tables on the switches between the hosts and the EdgeCore don’t populate.
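(For what it's worth, this is how I am checking on the gNB side whether anything comes back at all; the interface name below is just a placeholder for the UERANSIM host's NIC.)

# capture ARP and ICMP on the UERANSIM host while pinging 10.0.100.254
sudo tcpdump -eni <iface> 'arp or icmp'
# if the fabric were answering, I would expect ARP replies sourced from the routerMac (00:AA:00:00:00:01)
# and ICMP echo replies from 10.0.100.254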

 

Also, how does the gNB know how to get to 10.0.100.100?   What sets up the ARP/MAC learning?

 

This is my netcfg.json:

 

{
    "devices": {
        "device:5g1": {
            "segmentrouting": {
                "ipv4NodeSid": 101,
                "ipv4Loopback": "192.168.1.1",
                "routerMac": "00:AA:00:00:00:01",
                "isEdgeRouter": true,
                "adjacencySids": []
            },
            "basic": {
                "name": "5g1",
                "managementAddress": "grpc://10.5.23.21:9339?device_id=1",
                "driver": "stratum-tofino",
                "pipeconf": "org.stratumproject.fabric-upf.montara_sde_9_7_0"
            }
        }
    },
    "ports": {
        "device:5g1/0": {
            "interfaces": [{
                "name": "5g1-0",
                "ips": ["10.0.100.254/24"],
                "vlan-untagged": 100
            }]
        },
        "device:5g1/2": {
            "interfaces": [{
                "name": "5g1-2",
                "ips": ["10.0.200.254/24"],
                "vlan-untagged": 200
            }]
        }
    },
    "apps": {
        "org.omecproject.up4": {
            "up4": {
                "devices": ["device:5g1"],
                "n3Addr": "10.0.100.100",
                "uePools": ["10.250.0.0/16"],
                "sliceId": 0,
                "pscEncapEnabled": false
            }
        }
    }
}
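(In case it is useful: I load this file through the standard ONOS netcfg REST endpoint; the controller address and the onos/rocks credentials below are the defaults and just placeholders for my setup.)

# push (or re-push) the netcfg; 8181 is the default ONOS REST port
curl -sS --user onos:rocks -X POST -H 'Content-Type: application/json' \
     http://<onos-ip>:8181/onos/v1/network/configuration -d @netcfg.json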

 

Thanks

 

David Lake

 

Tel: +44 (0)7711 736784


5G & 6G Innovation Centres

Institute for Communication Systems (ICS)
University of Surrey
Guildford
GU2 7XH

 

Ventre, Pier

Nov 18, 2022, 3:32:09 PM
to David Lake, sdfabr...@opennetworking.org

Hi David,

Are you using the Helm charts to deploy SD-Fabric?

Can you get the list of the active applications in ONOS (apps -a -s)?
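(If you don't have the CLI handy, something along these lines should work; the namespace and service name are guesses and depend on your Helm values, while port 8101 and onos/rocks are the ONOS defaults.)

# forward the ONOS karaf SSH port from the cluster, then log in and list the apps
kubectl -n sdfabric port-forward svc/onos-classic-hs 8101:8101 &
ssh -o StrictHostKeyChecking=no -p 8101 onos@localhost   # default password is rocks
apps -a -s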

 

Thanks

Pier



Arcaduino

Nov 18, 2022, 4:09:10 PM
to Ventre, Pier, David Lake, sdfabr...@opennetworking.org
Hi David,
In addition to Pier's request, the output of the "devices" command in ONOS, as well as "flows -s" on the involved devices, can give some extra information to help with troubleshooting.

Thanks,
Rubén
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

David Lake

Nov 18, 2022, 6:57:39 PM
to Arcaduino, Ventre, Pier, sdfabr...@opennetworking.org

Hi Rubén and Pier

 

karaf@root > apps -s -a

*   3 org.onosproject.gui                  2.5.8.SNAPSHOT ONOS Legacy GUI

*   4 org.onosproject.drivers              2.5.8.SNAPSHOT Default Drivers

*   6 org.onosproject.protocols.grpc       2.5.8.SNAPSHOT gRPC Protocol Subsystem

*   7 org.onosproject.route-service        2.5.8.SNAPSHOT Route Service Server

*   8 org.onosproject.fpm                  2.5.8.SNAPSHOT FIB Push Manager (FPM) Route Receiver

*   9 org.onosproject.dhcprelay            2.5.8.SNAPSHOT DHCP Relay Agent

*  11 org.onosproject.protocols.gnmi       2.5.8.SNAPSHOT gNMI Protocol Subsystem

*  12 org.onosproject.generaldeviceprovider 2.5.8.SNAPSHOT General Device Provider

*  13 org.onosproject.protocols.p4runtime  2.5.8.SNAPSHOT P4Runtime Protocol Subsystem

*  14 org.onosproject.p4runtime            2.5.8.SNAPSHOT P4Runtime Provider

*  15 org.onosproject.drivers.p4runtime    2.5.8.SNAPSHOT P4Runtime Drivers

*  16 org.onosproject.pipelines.basic      2.5.8.SNAPSHOT Basic Pipelines

*  17 org.onosproject.pipelines.fabric     2.5.8.SNAPSHOT Fabric Pipeline

*  18 org.onosproject.protocols.gnoi       2.5.8.SNAPSHOT gNOI Protocol Subsystem

*  19 org.onosproject.drivers.gnoi         2.5.8.SNAPSHOT gNOI Drivers

*  20 org.onosproject.netcfghostprovider   2.5.8.SNAPSHOT Network Config Host Provider

*  21 org.onosproject.hostprobingprovider  2.5.8.SNAPSHOT Host Probing Provider

*  22 org.onosproject.drivers.gnmi         2.5.8.SNAPSHOT gNMI Drivers

*  23 org.onosproject.drivers.stratum      2.5.8.SNAPSHOT Stratum Drivers

*  26 org.onosproject.lldpprovider         2.5.8.SNAPSHOT LLDP Link Provider

*  27 org.onosproject.portloadbalancer     2.5.8.SNAPSHOT Port Load Balance Service

*  28 org.onosproject.drivers.barefoot     2.5.8.SNAPSHOT Barefoot Drivers

*  29 org.onosproject.hostprovider         2.5.8.SNAPSHOT Host Location Provider

*  31 org.onosproject.mcast                2.5.8.SNAPSHOT Multicast traffic control

*  32 org.omecproject.up4                  1.2.0.SNAPSHOT UP4

*  33 org.onosproject.segmentrouting       3.3.0.SNAPSHOT Trellis Control App

*  34 org.stratumproject.fabric-tna        1.2.0.SNAPSHOT Fabric-TNA Pipeconf

 

 

karaf@root > devices

id=device:5g1, available=true, local-status=connected 1m42s ago, role=MASTER, type=SWITCH, mfr=Barefoot Networks, hw=Tofino, sw=Stratum, serial=unknown, chassis=0, driver=stratum-tofino:org.stratumproject.fabric-upf.montara_sde_9_7_0, locType=none, managementAddress=grpc://10.5.23.21:9339?device_id=1, name=5g1, p4DeviceId=1, protocol=P4Runtime, gNMI, gNOI

karaf@root >  

 

karaf@root > flows -s

deviceId=device:5g1, flowRuleCount=56

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.dscp_rewriter.rewriter, priority=100, selector=[eg_port=0x3], treatment=[immediate=[FabricEgress.dscp_rewriter.rewrite()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.dscp_rewriter.rewriter, priority=100, selector=[eg_port=0x1], treatment=[immediate=[FabricEgress.dscp_rewriter.rewrite()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff03], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff02], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff00], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff01], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=0, selector=[VLAN_VID:100, eg_port=0x0], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=0, selector=[VLAN_VID:200, eg_port=0x2], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.pkt_io_egress.switch_info, priority=100, selector=[], treatment=[immediate=[FabricEgress.pkt_io_egress.set_switch_info(cpu_port=0xfffffffd)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricEgress.upf.gtpu_encap, priority=0, selector=[], treatment=[immediate=[FabricEgress.upf.gtpu_only()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:192.168.1.1/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:10.0.200.254/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:10.0.100.254/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:bddp], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:lldp], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=30000, selector=[ETH_TYPE:arp], treatment=[immediate=[FabricIngress.acl.copy_to_cpu()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967042, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967041, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967043, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967040, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x4)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x4)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967040, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967042, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967043, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967293, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967041, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:3, VLAN_VID:4090, vlan_is_valid=0x1], treatment=[immediate=[FabricIngress.filtering.permit(port_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:1, VLAN_VID:4090, vlan_is_valid=0x1], treatment=[immediate=[FabricIngress.filtering.permit(port_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:3, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:1, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967293, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967042, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967040, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967041, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967043, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.forwarding.bridging, priority=5, selector=[VLAN_VID:100], treatment=[immediate=[FabricIngress.forwarding.set_next_id_bridging(next_id=0x7d1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.forwarding.bridging, priority=5, selector=[VLAN_VID:200], treatment=[immediate=[FabricIngress.forwarding.set_next_id_bridging(next_id=0x7d2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.next.multicast, priority=0, selector=[next_id=0x7d1], treatment=[immediate=[FabricIngress.next.set_mcast_group_id(group_id=0x7d1)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.next.multicast, priority=0, selector=[next_id=0x7d2], treatment=[immediate=[FabricIngress.next.set_mcast_group_id(group_id=0x7d2)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.pre_next.next_vlan, priority=0, selector=[next_id=0x7d1], treatment=[immediate=[FabricIngress.pre_next.set_vlan(vlan_id=0x64)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.pre_next.next_vlan, priority=0, selector=[next_id=0x7d2], treatment=[immediate=[FabricIngress.pre_next.set_vlan(vlan_id=0xc8)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.qos.default_tc, priority=10, selector=[slice_tc=0x0&&&0x3c, tc_unknown=0x1], treatment=[immediate=[FabricIngress.qos.set_default_tc(tc=0x0)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.qos.queues, priority=10, selector=[slice_tc=0x0], treatment=[immediate=[FabricIngress.qos.set_queue(qid=0x0)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.slice_tc_classifier.classifier, priority=100, selector=[IN_PORT:1], treatment=[immediate=[FabricIngress.slice_tc_classifier.trust_dscp()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.slice_tc_classifier.classifier, priority=100, selector=[IN_PORT:3], treatment=[immediate=[FabricIngress.slice_tc_classifier.trust_dscp()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.interfaces, priority=128, selector=[ipv4_dst_addr=0xafa0000/16, gtpu_is_valid=0x0], treatment=[immediate=[FabricIngress.upf.iface_core(slice_id=0x0)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.interfaces, priority=128, selector=[ipv4_dst_addr=0xa006464/32, gtpu_is_valid=0x1], treatment=[immediate=[FabricIngress.upf.iface_access(slice_id=0x0)]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.uplink_recirc_rules, priority=138, selector=[IPV4_SRC:10.250.0.0/16, IPV4_DST:10.250.0.0/16], treatment=[immediate=[FabricIngress.upf.recirc_allow()]]

    PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.uplink_recirc_rules, priority=128, selector=[IPV4_DST:10.250.0.0/16], treatment=[immediate=[FabricIngress.upf.recirc_deny()]]
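(As a sanity check, the hex ipv4_dst_addr values in the FabricIngress.upf.interfaces entries decode back to the n3Addr and the uePools prefix from the netcfg; quick bash arithmetic:)

# 0x0a006464 -> 10.0.100.100 (n3Addr), 0x0afa0000 -> 10.250.0.0 (uePools)
printf '%d.%d.%d.%d\n' $((0x0a)) $((0x00)) $((0x64)) $((0x64))
printf '%d.%d.%d.%d\n' $((0x0a)) $((0xfa)) $((0x00)) $((0x00))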

 

karaf@root > hosts

id=24:8A:07:B8:71:DA/None, mac=24:8A:07:B8:71:DA, locations=[device:5g1/[1/2](3)], auxLocations=null, vlan=None, ip(s)=[10.0.200.1], innerVlan=None, outerTPID=unknown, provider=of:org.onosproject.provider.host, configured=false

id=24:8A:07:B8:71:DB/None, mac=24:8A:07:B8:71:DB, locations=[device:5g1/[1/0](1)], auxLocations=null, vlan=None, ip(s)=[10.0.100.1], innerVlan=None, outerTPID=unknown, provider=of:org.onosproject.provider.host, configured=false

 

(Note that 10.0.100.1 and 10.0.200.1 are both on the UERANSIM system.)

 

If I set ONOS logging to WARN, I see this error when I try to ping 10.0.200.254 from 10.0.200.1:

 

23:55:31.436 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)

23:55:32.445 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)

23:55:33.471 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)
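(To get more context around these warnings, I can raise the log level for the segment routing app from the karaf shell; these are the standard karaf logging commands.)

# raise the segment routing logger, watch the log, then put it back
log:set DEBUG org.onosproject.segmentrouting
log:tail
log:set INFO org.onosproject.segmentrouting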

 

 

David

Ventre, Pier

Nov 19, 2022, 3:25:18 AM
to David Lake, Arcaduino, sdfabr...@opennetworking.org

Hi David,

A few interesting things:

  1. It looks like the interfaces in ONOS are not correctly configured. We need to figure out which ports are configured in the dataplane. Can you provide your chassis config?
  2. I noticed the flows are in PENDING_ADD. Do they stay in that state forever, or was it just temporary? If they are stuck in that state, can you collect and attach the onos-diagnostics*?

 

Thanks

Pier

 

* https://docs.sd-fabric.org/master/troubleshooting.html#onos-diagnostics

David Lake

Nov 19, 2022, 5:00:09 AM
to Ventre, Pier, Arcaduino, sdfabr...@opennetworking.org

Hi Pier

 

  1. I’ve not loaded any chassis configuration into the switch – the Stratum config guide states this:

 

“By default, Stratum tries to load a default Chassis Config file on startup for supported platforms. This file controls which ports are configured, and by default, we configure all ports in their non-channelized, maximum speed configuration.”

 

When I look at the ONOS GUI, I see the switch with three ports active at 40 Gbit/s.

 

  2. The flows stay in PENDING_ADD even if I try to ping the interfaces from the intermediate switch (10.0.200.99). I can ping the host (10.0.200.1) from the switch, and the port towards the EdgeCore switch is showing as ‘Up’ at 40G:

 

 

UERANSIM (10.0.200.1) ------   Layer 2 switch (one VLAN, 10.0.200.99) ------ EdgeCore switch (10.0.200.254).

 

ONOS Diag attached!

 

Thank you!

 

David

onos-diags.tar.gz

Ventre, Pier

Nov 19, 2022, 4:07:42 PM
to David Lake, Arcaduino, sdfabr...@opennetworking.org

Well, I am not familiar with the Stratum config guide that you are quoting, but I believe our documentation suggests always provisioning a chassis config. Anyway, as long as the default speed is fine for your intermediate switches, it should be OK.

 

These ONOS logs are not very useful and are messy (have you enabled the root log?). Can you check whether there is anything useful in the Stratum logs that suggests what’s going on between ONOS and Stratum?

 

Also, the ports are not correctly configured in ONOS: you should use the port number, not the port name. It should be something like:

 

  "ports" : {

    "device:5g1/1" : {

      "interfaces" : [ {

        "name" : "5g1-0",

        "ips" : [ "10.0.100.254/24" ],

        "vlan-untagged" : 100,

        "mac" : "00:AA:00:00:00:01"

      } ]

    },

    "device:5g1/3" : {

      "interfaces" : [ {

        "name" : "5g1-2",

        "ips" : [ "10.0.200.254/24" ],

        "vlan-untagged" : 200,

        "mac" : "00:AA:00:00:00:01"

      } ]

    }

  },

 

Port 2 looks disabled.
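(For reference, the port numbers ONOS actually discovered, and whether they are enabled, can be checked directly from the karaf CLI:)

# list the ports ONOS knows about on the device, with their numbers and enabled state
ports device:5g1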

Ruben

Nov 20, 2022, 3:51:40 AM
to Ventre, Pier, David Lake, sdfabr...@opennetworking.org
Pier, the guide David mentioned is this one:

https://github.com/stratum/stratum/blob/main/stratum/hal/bin/barefoot/README.run.md#chassis-config

To me, the PENDING_ADD issue with the loaded flows must be related to a device or port misconfiguration, but I haven't found it yet.

Cheers,
Rubén

David Lake

Nov 20, 2022, 6:25:03 AM
to Ruben, Ventre, Pier, sdfabr...@opennetworking.org
Thanks both.  The config is very simple with just two ports, 1:0 and 1:2 active.

I’ll change the port configurations as Pier suggests and see what happens.

I’m not sure how to load the chassis configuration; the documentation mentions this and points to the Stratum documents, but that doesn’t give details. The ports do all seem to be up correctly at L1/L2, though.

I’m trying to find the Stratum logs at the moment…
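(In case it helps anyone else, these are the places I am looking; the container name and log path are guesses that depend on how stratum_bf was started on the switch.)

# if Stratum runs as a container on the switch
docker logs -f stratum
# if it runs natively, glog output often lands under /var/log/stratum (path is an assumption)
tail -f /var/log/stratum/stratum_bf.INFO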

David

Sent from Outlook for iOS


Ventre, Pier

Nov 20, 2022, 6:50:26 AM
to David Lake, sdfabr...@opennetworking.org

David Lake

Nov 20, 2022, 6:56:20 AM
to Ventre, Pier, sdfabr...@opennetworking.org
Using the Helm charts and instructions here:


David

Sent from Outlook for iOS


Ventre, Pier

Nov 20, 2022, 7:13:46 AM
to David Lake, sdfabr...@opennetworking.org

David Lake

Nov 20, 2022, 7:07:16 PM
to Ventre, Pier, sdfabr...@opennetworking.org

OK, looking at the Stratum logs, it does appear to be a problem with the port configuration; I see this repeated every 4 seconds:

 

E20221120 12:43:54.025368  5432 bfrt_p4runtime_translator.cc:698] StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.

E20221120 12:43:54.025424  5432 bfrt_p4runtime_translator.cc:264] Return Error: TranslateValue(field_match.exact().value(), *uri, to_sdk, to_bit_width) at stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:264

E20221120 12:43:54.025444  5432 bfrt_table_manager.cc:540] Return Error: bfrt_p4runtime_translator_->TranslateTableEntry( result, false) at stratum/hal/lib/barefoot/bfrt_table_manager.cc:540

E20221120 12:43:54.025470  5432 bfrt_table_manager.cc:595] Return error: ReadAllTableEntries(session, wanted_table_entry, writer) failed with StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.

E20221120 12:43:54.025514  5432 bfrt_table_manager.cc:595] StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.Failed to read all table entries for request table_id: 40271115 counter_data { }.

E20221120 12:43:54.025830  5432 bfrt_node.cc:366] StratumErrorSpace::ERR_AT_LEAST_ONE_OPER_FAILED: One or more read operations failed.

E20221120 12:43:54.025869  5432 p4_service.cc:352] Failed to read forwarding entries from node 1: One or more read operations failed.

 

I’ll change the netcfg.json file and reload.
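(To make sure the stale port entries are gone, I plan to wipe the old config and re-post it through the same netcfg endpoint; addresses and credentials are placeholders/defaults as before.)

# delete all network config (fine on a lab controller), then re-post the corrected file
curl -sS --user onos:rocks -X DELETE http://<onos-ip>:8181/onos/v1/network/configuration
curl -sS --user onos:rocks -X POST -H 'Content-Type: application/json' \
     http://<onos-ip>:8181/onos/v1/network/configuration -d @netcfg.json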

David Lake

Nov 21, 2022, 9:06:15 AM
to Ventre, Pier, sdfabr...@opennetworking.org

Hello

 

As you correctly predicted (!), it was a configuration issue with the port numbering in the netcfg.json file.

 

I now have the Stratum switch working with ONOS.  I am now working on the UP4 side to understand how to make that work with UERANSIM.

 

Thank you SO SO much for the help!

 

David

Ventre, Pier

Nov 21, 2022, 10:18:47 AM
to David Lake, sdfabr...@opennetworking.org

Noyce! Let’s reuse the other thread with Tomasz and Daniele for UP4 troubleshooting.

 

Pier

David Lake

Nov 25, 2022, 2:00:48 PM
to Ventre, Pier, sdfabr...@opennetworking.org

So the REALLY good news is that I have the UP4 UPF working PERFECTLY in both directions using pfcpsim to add the tunnels!

 

I thought that was going to be the difficult piece!!!

Ventre, Pier

Nov 25, 2022, 2:05:47 PM
to David Lake, sdfabr...@opennetworking.org

Very Noyce 😊.
