Queries regarding Google's experience of TNA and other orchestration/management platforms for their Content Data Networks #442
Replies: 2 comments
-
@tliron @s3wong @kaushikgoa-google perhaps you can provide some insight?
-
This topic is still in its infancy.
Not at present. I have proposed a scalability taskforce that would look at this as well as other bandwidth and storage scalability topics.
This topic is a high priority for me personally. In Nephio R1 and R2 we have not shown this ability quite yet, as our scaling/healing scenarios are handled in the management cluster based on user/external triggers. The best place to develop these capabilities is likely in the ongoing work on a Service Assurance use case, in the SIG#1 workgroup. Achieving this could involve proper observability (see above) as well as integration with a policy framework, such as Open Policy Agent (OPA). However, my personal opinion is that we can move forward with a PoC even if we do not have clear technology choices yet. I will propose such a PoC for Nephio R3.
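To make the OPA idea a bit more concrete, below is a minimal sketch (not something Nephio R1/R2 does today) of how a closed-loop controller in the management cluster might evaluate a Rego policy against observed metrics to decide whether to scale a network function out. The package name, rule names, input shape, and thresholds are all illustrative assumptions, and it assumes a recent OPA release that accepts `import rego.v1` syntax.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/open-policy-agent/opa/rego"
)

// Illustrative Rego policy: scale out when observed session load exceeds
// the configured threshold. Rule and field names are assumptions for this
// sketch, not a Nephio API.
const scalingPolicy = `
package scaling

import rego.v1

default scale_out := false

scale_out if input.metrics.session_load > input.thresholds.max_session_load
`

func main() {
	ctx := context.Background()

	// Compile the policy once; a real controller would do this at startup.
	query, err := rego.New(
		rego.Query("data.scaling.scale_out"),
		rego.Module("scaling.rego", scalingPolicy),
	).PrepareForEval(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Metrics would come from the observability stack; hard-coded here.
	input := map[string]interface{}{
		"metrics":    map[string]interface{}{"session_load": 0.92},
		"thresholds": map[string]interface{}{"max_session_load": 0.80},
	}

	results, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		log.Fatal(err)
	}

	if len(results) > 0 && results[0].Expressions[0].Value == true {
		// In a real closed loop this would trigger a package update in the
		// management cluster rather than printing.
		fmt.Println("policy decision: scale out")
	} else {
		fmt.Println("policy decision: no action")
	}
}
```

The point of the sketch is only the shape of the loop: metrics in, policy decision out, with the actuation left to the normal CaD machinery in the management cluster. Whether OPA is the right engine is exactly the kind of question a PoC should answer.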
Not at present, though there is interest in this and potential for contribution from Google and other parties.
As stated above, this is the purview of the observability taskforce. The general intent in Nephio is to not reinvent wheels but to rely on "bread and butter" technologies from CNCF, LFN, and the greater Kubernetes ecosystem.
"Configuration" is front and center for Nephio. Specifically let's divide it to what I call "pre-stand-up" and "post-stand-up" configuration. So far Nephio has been focused on the former, in the Configuration as Data (CaD) paradigm. More specifically, network function packaging provides a unified frontend for both network function and infrastructure requirements. This allows for controllers to configure the infrastructure before and during (Day 2) the workload operation.
As stated, this has been the core of Nephio development thus far. The best way to appreciate the scope is by going through our example use cases.
Yes, yes, and yes. :) As stated, the focus is on the special challenge that network functions are strongly vertically integrated between software and hardware, with the cloud infrastructure (Kubernetes, CNI, Linux OS and drivers, GPU/DPU, ToR switches, etc.) in between. I would interpret "application level" here as what I called "post-stand-up configuration", e.g. NETCONF and gNMI interactions with the network function after it has been deployed. We have not addressed this yet, but there is a lot of interest in this area and we plan some integrated solutions in the future.
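For readers unfamiliar with that kind of interaction, here is a minimal sketch of a post-stand-up gNMI Set against a deployed network function. This is not something Nephio does today; the target address, path, and value are placeholders, and a real deployment would use TLS rather than an insecure connection.

```go
package main

import (
	"context"
	"log"
	"time"

	gpb "github.com/openconfig/gnmi/proto/gnmi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Placeholder target; a real deployment would use TLS and discovery.
	conn, err := grpc.Dial("upf-1.example.internal:9339",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := gpb.NewGNMIClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Illustrative Set: push a single leaf to the deployed network function.
	// The path and value are placeholders, not a real NF's data model.
	req := &gpb.SetRequest{
		Update: []*gpb.Update{{
			Path: &gpb.Path{Elem: []*gpb.PathElem{
				{Name: "system"}, {Name: "config"}, {Name: "hostname"},
			}},
			Val: &gpb.TypedValue{
				Value: &gpb.TypedValue_StringVal{StringVal: "upf-regional-1"},
			},
		}},
	}

	resp, err := client.Set(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("gNMI SetResponse: %v", resp)
}
```

The open question for Nephio is not the transport itself but how such post-stand-up interactions get driven from the same declarative packages that handle pre-stand-up configuration.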