Community scale test bed(s)


Manue...@telekom.de

Jul 6, 2022, 6:45:57 AM
to voltha-...@opencord.org, amit....@radisys.com, mahir....@netsia.com, andrea.c...@intel.com

Hi All,

We (DT) would like to suggest taking stock of the current open community scale-testing testbeds and capacities, and the needs going forward:

 

1. Status quo of community testbeds and scale test capacity

   a. Existing community testbeds (hosting taken over by Radisys from ONF Menlo)

      Get an overview of current testbeds and assignments.

   b. Berlin VOLTHA community testbed (hosted by DT) - use of servers

      The VOLTHA community testbed that DT is hosting in Berlin is currently equipped with six (6) servers. Should we use some of these for scale testing?

2. Community scale testing needs and plans towards VOL2.11 and beyond

   Considering that there is a plan to introduce a new VOLTHA Micro-Service Controller (while maintaining support for ONOS at least for some time), there would probably be a need, at least temporarily, for more testing capacity.

 

Thoughts, comments?

We would like to discuss this here on the list, as well as in upcoming TST meeting(s).

(Andrea, we know / suppose you are off to new adventures, but any feedback from your end is still very welcome.)

 

Best regards,

Manuel

Girish Gowdra

Jul 6, 2022, 10:30:11 PM
to Manuel Paul, VOLTHA Discuss, Amit Ghosh, Mahir Gunyel, Campanella, Andrea, Gowdra, Girish
Hi Manuel,
I can share some feedback on this. 

There are three different kinds of test beds:
1. Hardware OLT- and ONU-based test beds, used to run functional/error/failure/dataplane tests with 2 or 3 ONUs attached to an OLT.
2. Scale test beds, used to run scale tests with thousands of subscribers using emulated OLTs and ONUs (via BBSim). These tests run on real servers.
3. Periodic tests that run on AWS against master and previous releases of VOLTHA; these also use BBSim for OLT/ONU emulation.

Hardware OLT- and ONU-based test beds
At the Radisys Hillsboro site we have the following pods. By pod we mean a collection of server(s), ONU(s), OLT(s) and, optionally, a switch.
1. Three Radisys OLT test pods - one each for the 3200G, 1600G and 1600x OLTs. Each of these pods is a single-node (single-server) cluster, with a few Sercomm ONUs attached to each OLT. These pods mainly run DT workflow tests.
2. Two other pods based on Edgecore ASXvOLT16 OLTs. Each of these pods is a three-node cluster, with Alpha and/or Iskratel ONUs attached to the OLTs. These run a combination of DT and ATT workflow tests periodically.

We have another test pod at a colo facility in Santa Clara. This pod is based on an Edgecore ASXvOLT16 and has a three-node cluster. A few Alpha ONUs are attached to this pod, and it runs TT and ATT workflow tests periodically.

We have another test pod at DT Berlin based on four different OLTs - Edgecore ASGvOLT64, Edgecore ASXvOLT16, ADTRAN 6320 and Zyxel SDA3016SS. These OLTs are attached to a three-node cluster, and we periodically run DT workflow based jobs there.
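To keep track of all this, the hardware pods described above can be summarized as a small data structure. This is an illustrative snapshot paraphrased from this thread, not an authoritative inventory; the site labels and groupings are approximations:

```python
from collections import Counter

# Illustrative snapshot of the hardware test pods described in this thread.
# Sites, OLT models and workflows are paraphrased from the email text.
hardware_pods = [
    {"site": "Radisys Hillsboro", "olt": "Radisys 3200G",      "nodes": 1, "workflows": ["DT"]},
    {"site": "Radisys Hillsboro", "olt": "Radisys 1600G",      "nodes": 1, "workflows": ["DT"]},
    {"site": "Radisys Hillsboro", "olt": "Radisys 1600x",      "nodes": 1, "workflows": ["DT"]},
    {"site": "Radisys Hillsboro", "olt": "Edgecore ASXvOLT16", "nodes": 3, "workflows": ["DT", "ATT"]},
    {"site": "Radisys Hillsboro", "olt": "Edgecore ASXvOLT16", "nodes": 3, "workflows": ["DT", "ATT"]},
    {"site": "Santa Clara colo",  "olt": "Edgecore ASXvOLT16", "nodes": 3, "workflows": ["TT", "ATT"]},
    {"site": "DT Berlin",
     "olt": "Edgecore ASGvOLT64 / ASXvOLT16 / ADTRAN 6320 / Zyxel SDA3016SS",
     "nodes": 3, "workflows": ["DT"]},
]

# Pods per site, as described in the thread
print(Counter(p["site"] for p in hardware_pods))
```

A structured list like this makes it easier to answer Manuel's first question (which pods exist and how they are assigned) at a glance.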

Scale test pods
There are currently three scale test pods that run scale tests on emulated OLTs and ONUs (using BBSim). Each of these clusters has three nodes.
Two of the scale clusters are hosted at the Radisys site. One of them (voltha-scale-1, as we name it) is used for scale testing with a single VOLTHA stack and periodically tests a scale of 4096 subscribers (2 OLTs, 16 PON ports each, 32 subscribers per PON) for all three workflows - ATT, DT and TT. The other scale cluster at this site (voltha-scale-2) was used for scale testing with 10 VOLTHA stacks, targeting 10k subscribers across all workflows (1 OLT per stack, 16 PON ports per OLT, 32 subscribers per PON).

The third scale cluster is at the DT Berlin site (we name it berlin-community-pod-2). This was used to test the new LWC from Radisys. I think Teo used it for some initial experiments, before it was open-sourced, to see how it performed at scale.
I think there is enough capacity for the scale tests: given three scale clusters, and scale jobs that do not take long to execute (a few tens of minutes at most), multiple flavors of jobs can be scheduled on these pods every day.
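The scale-pod layout above can likewise be captured as data. Again, this is an illustrative snapshot paraphrased from this thread; the subscriber figures are the ones stated in the email and may not reflect the current configurations:

```python
# Illustrative snapshot of the BBSim-based scale test pods described in this thread.
scale_pods = {
    "voltha-scale-1": {
        "site": "Radisys", "nodes": 3, "voltha_stacks": 1,
        "subscribers": 4096, "workflows": ["ATT", "DT", "TT"],
    },
    "voltha-scale-2": {
        "site": "Radisys", "nodes": 3, "voltha_stacks": 10,
        "subscribers": 10_000, "workflows": ["ATT", "DT", "TT"],
    },
    "berlin-community-pod-2": {
        "site": "DT Berlin", "nodes": 3, "voltha_stacks": 1,
        "subscribers": None,  # used for early LWC scale experiments
        "workflows": [],
    },
}

for name, pod in scale_pods.items():
    print(f"{name}: {pod['voltha_stacks']} stack(s) on {pod['nodes']} nodes at {pod['site']}")
```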

Jobs on AWS using emulated OLTs and ONUs
There are multiple jobs that periodically run on AWS against master and previous versions of VOLTHA, running various flavors of tests. See https://jenkins.opencord.org/search/?q=periodic-voltha for all these jobs.
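If you want to pull that job list programmatically rather than through the search UI, the standard Jenkins JSON API (`/api/json?tree=jobs[name]`) can be filtered by name. A minimal sketch, using a hypothetical sample payload in place of a live call; the job names below are made up for illustration:

```python
import json

# Hypothetical sample of what https://jenkins.opencord.org/api/json?tree=jobs[name]
# might return; the real job list will differ.
sample = """
{"jobs": [
  {"name": "periodic-voltha-test-bbsim-master"},
  {"name": "verify_voltha-go_unit-test"},
  {"name": "periodic-voltha-dt-test-bbsim-2.11"}
]}
"""

jobs = json.loads(sample)["jobs"]

# Keep only the periodic VOLTHA jobs, sorted for stable output
periodic = sorted(j["name"] for j in jobs if j["name"].startswith("periodic-voltha"))
print(periodic)
```

For a live query, replace the sample string with the response body fetched from the Jenkins instance (assuming it exposes the JSON API anonymously).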

Hope this helps.
Andrea/Teo/Hardik, please feel free to chime in if you would like to add more details.
=====
PS:
The https://gerrit.opencord.org/ci-management repo has all the details about the jobs currently configured to run periodically for the VOLTHA project.
The https://gerrit.opencord.org/voltha-system-tests repo has details about the tests that run.
The https://gerrit.opencord.org/pod-configs repo has details about the configurations needed for hardware based test pods.

- girish
