Hello
More help please ☹
I have simplified my system to a single Stratum EdgeCore switch with two ports:
10.0.100.254 is the Access port on the Stratum switch.
10.0.100.1 is the UERANSIM gNB.
10.0.200.254 is the Core (N6) port on the Stratum switch.
The N3 address is 10.0.100.100.
There are two separate layer 2 switches: one between the Stratum Access port and the UERANSIM, and one between the N6 interface and the general network.
Everything looks OK and I see the IP addresses added to ONOS.
But when I try to ping 10.0.100.254 from 10.0.100.1, or 10.0.200.254 from a host on 10.0.200.0/24, I see the 'hosts' populate in ONOS but no ICMP replies or ARP responses come back, so the MAC address tables on the switches between the hosts and the EdgeCore don't populate.
Also, how does the gNB know how to get to 10.0.100.100? What sets up the ARP/MAC learning?
This is my netcfg.json:
{
  "devices": {
    "device:5g1": {
      "segmentrouting": {
        "ipv4NodeSid": 101,
        "ipv4Loopback": "192.168.1.1",
        "routerMac": "00:AA:00:00:00:01",
        "isEdgeRouter": true,
        "adjacencySids": []
      },
      "basic": {
        "name": "5g1",
        "managementAddress": "grpc://10.5.23.21:9339?device_id=1",
        "driver": "stratum-tofino",
        "pipeconf": "org.stratumproject.fabric-upf.montara_sde_9_7_0"
      }
    }
  },
  "ports": {
    "device:5g1/0": {
      "interfaces": [{
        "name": "5g1-0",
        "ips": ["10.0.100.254/24"],
        "vlan-untagged": 100
      }]
    },
    "device:5g1/2": {
      "interfaces": [{
        "name": "5g1-2",
        "ips": ["10.0.200.254/24"],
        "vlan-untagged": 200
      }]
    }
  },
  "apps": {
    "org.omecproject.up4": {
      "up4": {
        "devices": ["device:5g1"],
        "n3Addr": "10.0.100.100",
        "uePools": [],
        "sliceId": 0,
        "pscEncapEnabled": false
      }
    }
  }
}
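(A minimal sketch for sanity-checking that a netcfg fragment parses and for listing the port keys it configures; the relevant fragment is inlined here purely for illustration.)

```python
import json

# Sketch: parse a netcfg "ports" fragment (inlined for illustration)
# and list each configured device port with its interface IPs.
NETCFG = """
{
  "ports": {
    "device:5g1/0": {"interfaces": [{"ips": ["10.0.100.254/24"], "vlan-untagged": 100}]},
    "device:5g1/2": {"interfaces": [{"ips": ["10.0.200.254/24"], "vlan-untagged": 200}]}
  }
}
"""

netcfg = json.loads(NETCFG)
configured = {}
for key, cfg in netcfg["ports"].items():
    # Port keys have the form "<device-id>/<port-number>".
    device, _, port = key.rpartition("/")
    configured[(device, port)] = [
        ip for intf in cfg["interfaces"] for ip in intf.get("ips", [])
    ]

for (device, port), ips in sorted(configured.items()):
    print(device, port, ips)
```

The port numbers printed here should match what the device actually reports to ONOS, otherwise flows will reference non-existent ports.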
Thanks
David Lake
Tel: +44 (0)7711 736784
5G & 6G Innovation Centres
Institute for Communication Systems (ICS)
University of Surrey
Guildford
GU2 7XH
Hi David,
Are you using the Helm charts to deploy SD-Fabric?
Can you get the list of the active applications in ONOS (apps -a -s)?
Thanks
Pier
--
You received this message because you are subscribed to the Google Groups "SDFABRIC-Dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
sdfabric-dev...@opennetworking.org.
To view this discussion on the web visit
https://groups.google.com/a/opennetworking.org/d/msgid/sdfabric-dev/DBAPR06MB68551A637E9EFA17DF228F98B5099%40DBAPR06MB6855.eurprd06.prod.outlook.com.
Hi Reuben and Pier
karaf@root > apps -s -a
* 3 org.onosproject.gui 2.5.8.SNAPSHOT ONOS Legacy GUI
* 4 org.onosproject.drivers 2.5.8.SNAPSHOT Default Drivers
* 6 org.onosproject.protocols.grpc 2.5.8.SNAPSHOT gRPC Protocol Subsystem
* 7 org.onosproject.route-service 2.5.8.SNAPSHOT Route Service Server
* 8 org.onosproject.fpm 2.5.8.SNAPSHOT FIB Push Manager (FPM) Route Receiver
* 9 org.onosproject.dhcprelay 2.5.8.SNAPSHOT DHCP Relay Agent
* 11 org.onosproject.protocols.gnmi 2.5.8.SNAPSHOT gNMI Protocol Subsystem
* 12 org.onosproject.generaldeviceprovider 2.5.8.SNAPSHOT General Device Provider
* 13 org.onosproject.protocols.p4runtime 2.5.8.SNAPSHOT P4Runtime Protocol Subsystem
* 14 org.onosproject.p4runtime 2.5.8.SNAPSHOT P4Runtime Provider
* 15 org.onosproject.drivers.p4runtime 2.5.8.SNAPSHOT P4Runtime Drivers
* 16 org.onosproject.pipelines.basic 2.5.8.SNAPSHOT Basic Pipelines
* 17 org.onosproject.pipelines.fabric 2.5.8.SNAPSHOT Fabric Pipeline
* 18 org.onosproject.protocols.gnoi 2.5.8.SNAPSHOT gNOI Protocol Subsystem
* 19 org.onosproject.drivers.gnoi 2.5.8.SNAPSHOT gNOI Drivers
* 20 org.onosproject.netcfghostprovider 2.5.8.SNAPSHOT Network Config Host Provider
* 21 org.onosproject.hostprobingprovider 2.5.8.SNAPSHOT Host Probing Provider
* 22 org.onosproject.drivers.gnmi 2.5.8.SNAPSHOT gNMI Drivers
* 23 org.onosproject.drivers.stratum 2.5.8.SNAPSHOT Stratum Drivers
* 26 org.onosproject.lldpprovider 2.5.8.SNAPSHOT LLDP Link Provider
* 27 org.onosproject.portloadbalancer 2.5.8.SNAPSHOT Port Load Balance Service
* 28 org.onosproject.drivers.barefoot 2.5.8.SNAPSHOT Barefoot Drivers
* 29 org.onosproject.hostprovider 2.5.8.SNAPSHOT Host Location Provider
* 31 org.onosproject.mcast 2.5.8.SNAPSHOT Multicast traffic control
* 32 org.omecproject.up4 1.2.0.SNAPSHOT UP4
* 33 org.onosproject.segmentrouting 3.3.0.SNAPSHOT Trellis Control App
* 34 org.stratumproject.fabric-tna 1.2.0.SNAPSHOT Fabric-TNA Pipeconf
karaf@root > devices
id=device:5g1, available=true, local-status=connected 1m42s ago, role=MASTER, type=SWITCH, mfr=Barefoot Networks, hw=Tofino, sw=Stratum, serial=unknown, chassis=0, driver=stratum-tofino:org.stratumproject.fabric-upf.montara_sde_9_7_0, locType=none, managementAddress=grpc://10.5.23.21:9339?device_id=1, name=5g1, p4DeviceId=1, protocol=P4Runtime, gNMI, gNOI
karaf@root >
karaf@root > flows -s
deviceId=device:5g1, flowRuleCount=56
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.dscp_rewriter.rewriter, priority=100, selector=[eg_port=0x3], treatment=[immediate=[FabricEgress.dscp_rewriter.rewrite()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.dscp_rewriter.rewriter, priority=100, selector=[eg_port=0x1], treatment=[immediate=[FabricEgress.dscp_rewriter.rewrite()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff03], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff02], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff00], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=100, selector=[vlan_id=0xffe, eg_port=0xffffff01], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=0, selector=[VLAN_VID:100, eg_port=0x0], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.egress_next.egress_vlan, priority=0, selector=[VLAN_VID:200, eg_port=0x2], treatment=[immediate=[FabricEgress.egress_next.pop_vlan()]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.pkt_io_egress.switch_info, priority=100, selector=[], treatment=[immediate=[FabricEgress.pkt_io_egress.set_switch_info(cpu_port=0xfffffffd)]]
PENDING_ADD, bytes=0, packets=0, table=FabricEgress.upf.gtpu_encap, priority=0, selector=[], treatment=[immediate=[FabricEgress.upf.gtpu_only()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:192.168.1.1/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:10.0.200.254/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:ipv4, IPV4_DST:10.0.100.254/32], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:bddp], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=40000, selector=[ETH_TYPE:lldp], treatment=[immediate=[FabricIngress.acl.punt_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.acl.acl, priority=30000, selector=[ETH_TYPE:arp], treatment=[immediate=[FabricIngress.acl.copy_to_cpu()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967042, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967041, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967043, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=110, selector=[IN_PORT:4294967040, ETH_TYPE:mpls_unicast, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=101, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, eth_type=0x8847&&&0xffff, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x4)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x86dd], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x4)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:1, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:3, ETH_DST_MASKED:00:AA:00:00:00:01/FF:FF:FF:FF:FF:FF, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967040, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967042, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967043, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967293, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.fwd_classifier, priority=100, selector=[IN_PORT:4294967041, ip_eth_type=0x800], treatment=[immediate=[FabricIngress.filtering.set_forwarding_type(fwd_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:3, VLAN_VID:4090, vlan_is_valid=0x1], treatment=[immediate=[FabricIngress.filtering.permit(port_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:1, VLAN_VID:4090, vlan_is_valid=0x1], treatment=[immediate=[FabricIngress.filtering.permit(port_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:3, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:1, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967293, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967042, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967040, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967041, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.filtering.ingress_port_vlan, priority=100, selector=[IN_PORT:4294967043, vlan_is_valid=0x0], treatment=[immediate=[FabricIngress.filtering.permit_with_internal_vlan(vlan_id=0xffe, port_type=0x3)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.forwarding.bridging, priority=5, selector=[VLAN_VID:100], treatment=[immediate=[FabricIngress.forwarding.set_next_id_bridging(next_id=0x7d1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.forwarding.bridging, priority=5, selector=[VLAN_VID:200], treatment=[immediate=[FabricIngress.forwarding.set_next_id_bridging(next_id=0x7d2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.next.multicast, priority=0, selector=[next_id=0x7d1], treatment=[immediate=[FabricIngress.next.set_mcast_group_id(group_id=0x7d1)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.next.multicast, priority=0, selector=[next_id=0x7d2], treatment=[immediate=[FabricIngress.next.set_mcast_group_id(group_id=0x7d2)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.pre_next.next_vlan, priority=0, selector=[next_id=0x7d1], treatment=[immediate=[FabricIngress.pre_next.set_vlan(vlan_id=0x64)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.pre_next.next_vlan, priority=0, selector=[next_id=0x7d2], treatment=[immediate=[FabricIngress.pre_next.set_vlan(vlan_id=0xc8)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.qos.default_tc, priority=10, selector=[slice_tc=0x0&&&0x3c, tc_unknown=0x1], treatment=[immediate=[FabricIngress.qos.set_default_tc(tc=0x0)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.qos.queues, priority=10, selector=[slice_tc=0x0], treatment=[immediate=[FabricIngress.qos.set_queue(qid=0x0)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.slice_tc_classifier.classifier, priority=100, selector=[IN_PORT:1], treatment=[immediate=[FabricIngress.slice_tc_classifier.trust_dscp()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.slice_tc_classifier.classifier, priority=100, selector=[IN_PORT:3], treatment=[immediate=[FabricIngress.slice_tc_classifier.trust_dscp()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.interfaces, priority=128, selector=[ipv4_dst_addr=0xafa0000/16, gtpu_is_valid=0x0], treatment=[immediate=[FabricIngress.upf.iface_core(slice_id=0x0)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.interfaces, priority=128, selector=[ipv4_dst_addr=0xa006464/32, gtpu_is_valid=0x1], treatment=[immediate=[FabricIngress.upf.iface_access(slice_id=0x0)]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.uplink_recirc_rules, priority=138, selector=[IPV4_SRC:10.250.0.0/16, IPV4_DST:10.250.0.0/16], treatment=[immediate=[FabricIngress.upf.recirc_allow()]]
PENDING_ADD, bytes=0, packets=0, table=FabricIngress.upf.uplink_recirc_rules, priority=128, selector=[IPV4_DST:10.250.0.0/16], treatment=[immediate=[FabricIngress.upf.recirc_deny()]]
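(For what it's worth, the raw hex match values in the FabricIngress.upf entries decode back to the configured addresses; a small sketch assuming the usual big-endian IPv4 encoding of the match fields:)

```python
import ipaddress

def decode_match(value: int, prefix_len: int) -> str:
    """Turn a 32-bit ipv4_dst_addr match value from the flow dump into CIDR form."""
    return f"{ipaddress.IPv4Address(value)}/{prefix_len}"

# Values taken from the FabricIngress.upf.interfaces entries above.
print(decode_match(0x0a006464, 32))  # iface_access match -> 10.0.100.100/32 (the n3Addr)
print(decode_match(0x0afa0000, 16))  # iface_core match   -> 10.250.0.0/16
```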
karaf@root > hosts
id=24:8A:07:B8:71:DA/None, mac=24:8A:07:B8:71:DA, locations=[device:5g1/[1/2](3)], auxLocations=null, vlan=None, ip(s)=[10.0.200.1], innerVlan=None, outerTPID=unknown, provider=of:org.onosproject.provider.host, configured=false
id=24:8A:07:B8:71:DB/None, mac=24:8A:07:B8:71:DB, locations=[device:5g1/[1/0](1)], auxLocations=null, vlan=None, ip(s)=[10.0.100.1], innerVlan=None, outerTPID=unknown, provider=of:org.onosproject.provider.host, configured=false
(Note that 10.0.100.1 and 10.0.200.1 are the UERANSIM system.)
If I set ONOS logging to WARN, I see this error when I try to ping 10.0.200.254 from 10.0.200.1:
23:55:31.436 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)
23:55:32.445 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)
23:55:33.471 WARN [SegmentRoutingManager] Received unexpected ARP packet on device:5g1/[1/2](3)
David
Hi David,
A few interesting things:
Thanks
Pier
* https://docs.sd-fabric.org/master/troubleshooting.html#onos-diagnostics
Hi Pier
“ By default, Stratum tries to load a default Chassis Config file on startup for supported platforms. This file controls which ports are configured, and by default, we configure all ports in their non-channelized, maximum speed configuration.”
When I look at the ONOS gui, I see the switch with three ports active at 40Gbit/s.
UERANSIM (10.0.200.1) ------ Layer 2 switch (one VLAN, 10.0.200.99) ------ EdgeCore switch (10.0.200.254).
ONOS Diag attached!
Thank you!
David
Well, I am not familiar with the Stratum config guide you are quoting, but I believe our documentation suggests always provisioning a chassis config. Anyway, as long as the default speed is fine for your intermediate switches it should be OK.
These ONOS logs are not very useful and are messy (have you enabled the root log?). Can you check whether the Stratum logs contain anything that suggests what's going on between ONOS and Stratum?
Also, the ports are not correctly configured in ONOS: you should use the port number, not the port name. It should be something like:
"ports" : {
"device:5g1/1" : {
"vlan-untagged" : 100,
"mac" : "00:AA:00:00:00:01"
} ]
},
"device:5g1/3" : {
"vlan-untagged" : 200,
"mac" : "00:AA:00:00:00:01"
} ]
}
},
Port 2 looks disabled.
How do you deploy SD-Fabric?
You can get the stratum logs from kubectl. Chassis config can be provisioned through the helm chart values.
OK, it does appear to be a problem with the port configuration. Looking at the Stratum logs, I see this repeated every 4 seconds:
E20221120 12:43:54.025368 5432 bfrt_p4runtime_translator.cc:698] StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.
E20221120 12:43:54.025424 5432 bfrt_p4runtime_translator.cc:264] Return Error: TranslateValue(field_match.exact().value(), *uri, to_sdk, to_bit_width) at stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:264
E20221120 12:43:54.025444 5432 bfrt_table_manager.cc:540] Return Error: bfrt_p4runtime_translator_->TranslateTableEntry( result, false) at stratum/hal/lib/barefoot/bfrt_table_manager.cc:540
E20221120 12:43:54.025470 5432 bfrt_table_manager.cc:595] Return error: ReadAllTableEntries(session, wanted_table_entry, writer) failed with StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.
E20221120 12:43:54.025514 5432 bfrt_table_manager.cc:595] StratumErrorSpace::ERR_INVALID_PARAM: RET_CHECK failure (stratum/hal/lib/barefoot/bfrt_p4runtime_translator.cc:698) 'sdk_port_to_singleton_port_.contains(sdk_port_id)' is false. Could not find singleton port for sdk port 0.Failed to read all table entries for request table_id: 40271115 counter_data { }.
E20221120 12:43:54.025830 5432 bfrt_node.cc:366] StratumErrorSpace::ERR_AT_LEAST_ONE_OPER_FAILED: One or more read operations failed.
E20221120 12:43:54.025869 5432 p4_service.cc:352] Failed to read forwarding entries from node 1: One or more read operations failed.
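(Those "Could not find singleton port" errors are consistent with table entries referencing a port id that is missing from the chassis config. For reference, a singleton-port entry in a Stratum chassis config looks roughly like the sketch below; the ids, slot/port numbers, platform, and speeds all depend on the actual switch, so treat this as illustrative only.)

```proto
description: "Illustrative chassis config sketch - adapt platform and ports"
chassis {
  platform: PLT_GENERIC_BAREFOOT_TOFINO
  name: "5g1"
}
nodes {
  id: 1
  slot: 1
  index: 1
}
singleton_ports {
  id: 1
  name: "1/0"
  slot: 1
  port: 1
  speed_bps: 40000000000
  config_params {
    admin_state: ADMIN_STATE_ENABLED
  }
  node: 1
}
singleton_ports {
  id: 3
  name: "3/0"
  slot: 1
  port: 3
  speed_bps: 40000000000
  config_params {
    admin_state: ADMIN_STATE_ENABLED
  }
  node: 1
}
```

With ports 1 and 3 declared here and the netcfg keyed on the same numbers, the singleton-port lookup should resolve.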
I’ll change the netcfg.json file and reload.
Hello
As you correctly predicted (!), it was a configuration issue with the port naming in the netcfg.json file.
I now have the Stratum switch working with ONOS. I am now working on the UP4 side to understand how to make that work with UERANSIM.
Thank you SO SO much for the help!
David
Noyce! Let’s reuse the other thread with Tomasz and Daniele for UP4 troubleshooting.
Pier
So the REALLY good news is that I have the UP4 UPF working PERFECTLY in both directions using pfcpsim to add the tunnels!
I thought that was going to be the difficult piece!!!
Very Noyce 😊.