Review: Extending SMP to CNCF Community Infrastructure Lab


Lee Calcote

Jan 14, 2022, 10:42:05 AM
to Service Mesh Performance Community, maint...@smp-spec.io, us...@getnighthawk.dev, GetNighthawk Maintainers, comm...@meshery.io
All,

The first draft of the "Extending SMP to CNCF Community Infrastructure Lab" proposal is complete and ready for your commentary.

Xin Huang of Intel will provide an overview of the "Extending SMP to CNCF Community Infrastructure Lab" proposal in today's community call, starting in 20 minutes. Please join for a discussion (community calendar).

Regards,
Lee

Lee Calcote

Jan 20, 2022, 9:09:55 AM
to Service Mesh Performance Community, maint...@smp-spec.io, us...@getnighthawk.dev, GetNighthawk Maintainers, Service Mesh Performance Maintainers, Ganguli, Mrittika, Frederick Kautz
All,

For those interested and available, a discussion with the CNCF Cluster / community lab administrator has been scheduled for today at 2:30PM Central. Meeting invitation is attached. Everyone is welcome to join the discussion. Context here - https://github.com/cncf/cluster/issues/115#issuecomment-1010020520.


Regards,
Lee

Service Mesh Performance: Using the CNCF Cluster.ics

Lee Calcote

Jan 21, 2022, 6:43:20 PM
to Service Mesh Performance Community, us...@getnighthawk.dev, GetNighthawk Maintainers, Service Mesh Performance Maintainers, Ganguli, Mrittika, Frederick Kautz
Hi All,

The recording of this meeting is now available.

The gist of the discussion is that the Service Mesh Performance project is clear to use the CNCF’s labs (see server configs - https://metal.equinix.com/product/servers/) and should be able to leverage the existing automation in the SMP GitHub Action in the Equinix environment.

Next steps involve access to the lab management portal and integrating with the existing bare metal provisioning capabilities of Equinix Metal.
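
For context, Equinix Metal's bare metal provisioning is driven by a REST API (device creation under a project). A minimal sketch of what the SMP automation's request body might look like is below; the project ID, plan, metro, and OS slugs are illustrative placeholders, not the configuration the project settled on.

```python
import json

# Equinix Metal v1 device-creation endpoint; {project_id} is a placeholder.
API_URL = "https://api.equinix.com/metal/v1/projects/{project_id}/devices"

def build_device_request(hostname, plan="n2.xlarge.x86", metro="da",
                         operating_system="ubuntu_20_04"):
    """Build the JSON body for provisioning one bare metal node.

    The plan/metro/OS values are illustrative defaults only.
    """
    return {
        "hostname": hostname,
        "plan": plan,
        "metro": metro,
        "operating_system": operating_system,
        # Tag nodes so a cleanup job can later find and deprovision strays.
        "tags": ["smp-benchmark", "github-actions"],
    }

body = build_device_request("smp-bench-node-1")
print(json.dumps(body, indent=2))
```

In practice this body would be POSTed with an `X-Auth-Token` header carrying the project API key; a GitHub Actions workflow step could issue the call before registering the node as a self-hosted runner.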

Regards,
Lee

--
Visit and engage with other Service Mesh Performance Community members in the community forum at https://discuss.layer5.io or on Slack at https://slack.layer5.io.
---
You received this message because you are subscribed to the Google Groups "Service Mesh Performance Community" group.
To unsubscribe from this group and stop receiving emails from it, send an email to community+...@smp-spec.io.
To view this discussion on the web visit https://groups.google.com/a/smp-spec.io/d/msgid/community/84865E6D-0DFF-444D-ABBF-0867E092029B%40layer5.io.

Lee Calcote

Jan 21, 2022, 6:45:00 PM
to Service Mesh Performance Community, us...@getnighthawk.dev, GetNighthawk Maintainers, Service Mesh Performance Maintainers, Ganguli, Mrittika, Frederick Kautz
Also, meeting minutes available here (and pasted below):

  • Scheduling performance benchmark tests
    • [Sunku/Xin] Defining benchmark test configurations
      • [Xin] Readiness for CNCF labs. Can we use a self-hosted runner?
      • Self-hosted runner: yes, though setup/tear-down needs to be considered.
      • Bare metal provisioning of Kubernetes
        • Terraform, Ansible, and cloud-init are available options.
        • Equinix portal/web console provides access to a lab project in which multiple users can have access to the resources assigned to the project.
        • SSH keys are provided by the authorized users and those keys are included in the 
        • Access is available over the standard network interface, and a serial-over-SSH interface can tap into the console port (IPMI level of control) on the node.
        • Node power settings and OS installations are possible from the management console, as well as via Equinix's API.
        • Could potentially be done within the GitHub workflow that contains the GitHub Action.
        • Quick-install Ubuntu nodes are available (as is CentOS 8); Ubuntu (quick) sounds great.
        • Internet access: available for pulling images; software artifacts?
        • There is a provisioning option in the management console to prevent nodes from being orphaned and left unused.
        • The deprovisioning cycle for a given node can take longer than provisioning it in the first place. A tenant that rapidly runs test after test can become an accidental bad actor, temporarily consuming all available nodes.

      • We do desire to run these tests and publish results on a regular basis.
      • # of nodes:
      • Profile of nodes: some with 4 NICs, most dual-NIC'ed. From 2x10Gb to 2x25Gb.
        • NSM’s Telco use cases helped with quad NIC design. “N2 server”
        • Next-gen N3 server in the works.
      • Consistent profile of nodes to start.
      • Start small; start single data center.
      • Can have a long-lived control node.
      • iPXE supported; some BIOS/firmware configuration is possible (e.g., latest NIC drivers)
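
The orphaned-node and deprovisioning concerns above suggest the automation should include a cleanup pass. The sketch below is a hypothetical policy, not part of the SMP tooling: field names, the idle budget, and the control-node tag are all assumed for illustration. It exempts the long-lived control node and flags only benchmark nodes idle past a threshold, so rapid test cycles don't strand capacity for other tenants.

```python
from datetime import datetime, timedelta, timezone

MAX_IDLE = timedelta(hours=2)   # assumed idle budget per benchmark node
CONTROL_TAG = "smp-control"     # long-lived control node is exempt

def nodes_to_deprovision(nodes, now):
    """Return hostnames of benchmark nodes idle longer than MAX_IDLE.

    `nodes` is a list of dicts with "hostname", "tags", and
    "last_active" keys -- an assumed shape, not Equinix Metal's schema.
    """
    stale = []
    for node in nodes:
        if CONTROL_TAG in node["tags"]:
            continue  # never reclaim the control node
        if now - node["last_active"] > MAX_IDLE:
            stale.append(node["hostname"])
    return stale

now = datetime(2022, 1, 21, 18, 0, tzinfo=timezone.utc)
fleet = [
    {"hostname": "smp-control", "tags": ["smp-control"],
     "last_active": now - timedelta(days=3)},
    {"hostname": "smp-bench-1", "tags": ["smp-benchmark"],
     "last_active": now - timedelta(hours=5)},
    {"hostname": "smp-bench-2", "tags": ["smp-benchmark"],
     "last_active": now - timedelta(minutes=30)},
]
print(nodes_to_deprovision(fleet, now))  # → ['smp-bench-1']
```

Since deprovisioning can take longer than provisioning, a policy like this would run as a periodic job rather than inline in each test workflow.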


