2.5 docs #1158

Merged (3 commits) on Dec 10, 2024
4 changes: 4 additions & 0 deletions SUMMARY.md
@@ -108,6 +108,8 @@
* [Spot Checklist](using-kubecost/navigating-the-kubecost-ui/savings/spot-checklist.md)
* [Spot Commander](using-kubecost/navigating-the-kubecost-ui/savings/spot-commander.md)
* [Persistent Volume Right-Sizing Recommendations](using-kubecost/navigating-the-kubecost-ui/savings/pv-right-sizing-rec.md)
* [GPU Optimization](using-kubecost/navigating-the-kubecost-ui/savings/gpu-optimization.md)
* [Turbonomic Actions](using-kubecost/navigating-the-kubecost-ui/savings/turbonomic-actions.md)
* [Budgets](using-kubecost/navigating-the-kubecost-ui/budgets.md)
* [Audits](using-kubecost/navigating-the-kubecost-ui/audits.md)
* [Anomaly Detection](using-kubecost/navigating-the-kubecost-ui/anomaly-detection.md)
@@ -161,6 +163,7 @@
* [Container Request Right Sizing Recommendation API (V2)](apis/savings-apis/api-request-right-sizing-v2.md)
* [Container Request Recommendation Apply/Plan APIs](apis/savings-apis/api-request-recommendation-apply.md)
* [Abandoned Workloads API](apis/savings-apis/api-abandoned-workloads.md)
* [Turbonomic Actions APIs](apis/savings-apis/api-turbonomic-actions.md)
* [Filter Parameters (v2)](apis/filters-api.md)

## Architecture
@@ -186,6 +189,7 @@
* [Importing Kubecost Data into Microsoft Power BI](integrations/import-kubecost-data-into-microsoft-power-bi.md)
* [Integrating Kubecost with Datadog](integrations/integrating-kubecost-with-datadog.md)
* [Using Custom Webhook to Create a Kubecost Stage in Spinnaker](integrations/spinnaker-custom-webhook.md)
* [Kubecost Turbonomic Integration](integrations/turbonomic-integration.md)

## Troubleshooting

174 changes: 174 additions & 0 deletions apis/savings-apis/api-turbonomic-actions.md
@@ -0,0 +1,174 @@
# Turbonomic Actions

{% swagger method="get" path="turbonomic/resizeWorkloadControllers" baseUrl="http://<kubecost-address>/model/savings/" summary="Turbonomic Actions: Resize Workload Controllers" %}
{% swagger-description %}
The Resize Workload Controllers API returns workloads for which request resizing has been recommended by Turbonomic. The list of results returned should align with those in the Turbonomic Actions Center.
{% endswagger-description %}

{% swagger-parameter in="path" name="filter" type="string" required="false" %}
Filter your results by cluster, namespace and/or controller.
{% endswagger-parameter %}

{% swagger-response status="200: OK" description="" %}
```json
{
  "code": 200,
  "data": {
    "numResults": 1,
    "totalSavings": 2.00,
    "actions": [
      {
        "action": {
          "cluster": "standard-cluster-1",
          "namespace": "kubecost",
          "controller": "kubecost-cost-analyzer",
          "replicaCount": 1,
          "compoundActions": {
            "cost-model": [
              {
                "target": "VCPURequest",
                "unit": "mCores",
                "oldValue": 200,
                "newValue": 100
              }
            ]
          },
          "available": true,
          "targetId": "11111111111111"
        },
        "currentMonthlyRate": 4.00,
        "predictedMonthlyRate": 2.00,
        "predictedSavings": 2.00
      }
    ]
  }
}
```
{% endswagger-response %}
{% endswagger %}
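The endpoint above can be exercised from the command line. The sketch below is illustrative rather than taken from the original docs: the address is a placeholder, and the filter value assumes Kubecost's v2 filter parameter syntax.

```shell
# Query the Resize Workload Controllers endpoint, narrowed to one namespace.
# KUBECOST_ADDRESS is a placeholder; point it at your own deployment.
KUBECOST_ADDRESS="${KUBECOST_ADDRESS:-localhost:9090}"
curl -sG "http://${KUBECOST_ADDRESS}/model/savings/turbonomic/resizeWorkloadControllers" \
  --data-urlencode 'filter=namespace:"kubecost"'
```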

{% swagger method="get" path="turbonomic/suspendContainerPods" baseUrl="http://<kubecost-address>/model/savings/" summary="Turbonomic Actions: Suspend Container Pods" %}
{% swagger-description %}
The Suspend Container Pods API returns pods that Turbonomic recommends for suspension. The list of results returned should align with those in the Turbonomic Actions Center.
{% endswagger-description %}

{% swagger-parameter in="path" name="filter" type="string" required="false" %}
Filter your results by cluster, namespace, controller and/or pod.
{% endswagger-parameter %}

{% swagger-response status="200: OK" description="" %}
```json
{
  "code": 200,
  "data": {
    "numResults": 1,
    "totalSavings": 12.37,
    "actions": [
      {
        "action": {
          "cluster": "standard-cluster-1",
          "namespace": "infra-cost",
          "controller": "infra-cost-agent",
          "pod": "infra-cost-agent-xdj34",
          "available": true,
          "targetId": "11111111111111"
        },
        "currentMonthlyRate": 12.37,
        "predictedMonthlyRate": 0,
        "predictedSavings": 12.37
      }
    ]
  }
}
```
{% endswagger-response %}
{% endswagger %}

{% swagger method="get" path="turbonomic/suspendVirtualMachines" baseUrl="http://<kubecost-address>/model/savings/" summary="Turbonomic Actions: Suspend Virtual Machines" %}
{% swagger-description %}
The Suspend Virtual Machines API returns virtual machines that Turbonomic recommends for suspension. The list of results returned should align with those in the Turbonomic Actions Center.
{% endswagger-description %}

{% swagger-parameter in="path" name="filter" type="string" required="false" %}
Filter your results by cluster.
{% endswagger-parameter %}

{% swagger-response status="200: OK" description="" %}
```json
{
  "code": 200,
  "data": {
    "numResults": 1,
    "totalSavings": 9.03,
    "actions": [
      {
        "action": {
          "cluster": "standard-cluster-1",
          "node": "gke-standard-cluster-1-spotpool-b4a02c44-1001",
          "available": true,
          "targetId": "11111111111111"
        },
        "currentMonthlyRate": 9.03,
        "predictedMonthlyRate": 0,
        "predictedSavings": 9.03
      }
    ]
  }
}
```
{% endswagger-response %}
{% endswagger %}

{% swagger method="get" path="turbonomic/moveContainerPods" baseUrl="http://<kubecost-address>/model/savings/" summary="Turbonomic Actions: Move Container Pods" %}
{% swagger-description %}
The Move Container Pods API returns pods that Turbonomic recommends to be moved from one node to another. The list of results returned should align with those in the Turbonomic Actions Center.
{% endswagger-description %}

{% swagger-parameter in="path" name="filter" type="string" required="false" %}
Filter your results by cluster, namespace, controller and/or pod.
{% endswagger-parameter %}

{% swagger-response status="200: OK" description="" %}
```json
{
  "code": 200,
  "data": {
    "numResults": 2,
    "totalSavings": 30.0,
    "actions": [
      {
        "action": {
          "cluster": "standard-cluster-1",
          "namespace": "turbo-server",
          "controller": "db",
          "pod": "db-ffbdfb97b-aroxf",
          "originNode": "gke-standard-cluster-1-pool-1-b4a02c44-1001",
          "destinationNode": "gke-standard-cluster-1-pool-2-91dc432d-1002",
          "available": true,
          "targetId": "11111111111111"
        },
        "currentMonthlyRate": 27.90,
        "predictedMonthlyRate": 0,
        "predictedSavings": 27.90
      },
      {
        "action": {
          "cluster": "standard-cluster-1",
          "namespace": "infra-kubecost",
          "controller": "infra-kubecost-cost-analyzer",
          "pod": "infra-kubecost-cost-analyzer-566b488b69-1001a",
          "originNode": "gke-standard-cluster-1-pool-2-91dc432d-1002",
          "destinationNode": "gke-standard-cluster-1-pool-3-57364626-1003",
          "available": true,
          "targetId": "11111111111112"
        },
        "currentMonthlyRate": 2.10,
        "predictedMonthlyRate": 0,
        "predictedSavings": 2.10
      }
    ]
  }
}
```
{% endswagger-response %}
{% endswagger %}
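Once any of the responses above are saved to disk, the per-action savings can be totaled with standard tools. The sketch below uses a trimmed copy of the Move Container Pods sample response; a JSON-aware tool such as jq would be the more typical choice, but awk keeps it dependency-free.

```shell
# Sum predictedSavings across actions in a saved Turbonomic Actions response.
# The JSON is the Move Container Pods sample from above, trimmed to the
# fields this calculation reads.
cat > /tmp/turbo-actions.json <<'EOF'
{
  "code": 200,
  "data": {
    "numResults": 2,
    "totalSavings": 30.0,
    "actions": [
      { "predictedSavings": 27.90 },
      { "predictedSavings": 2.10 }
    ]
  }
}
EOF
awk -F': ' '/"predictedSavings"/ { gsub(/[,} ]/, "", $2); sum += $2 }
            END { printf "%.2f\n", sum }' /tmp/turbo-actions.json
# Prints: 30.00
```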
Binary file added images/gpu-savings-optimize-dashboard.png
Binary file added images/gpu-savings-optimize-modal.png
Binary file added images/savings-turbo-actions-mcp.png
Binary file added images/savings-turbo-actions-rwc.png
Binary file added images/savings-turbo-actions-scp.png
Binary file added images/savings-turbo-actions-svm.png
Binary file added images/savings-turbo-actions.png
35 changes: 35 additions & 0 deletions install-and-configure/advanced-configuration/gpu.md
@@ -348,3 +348,38 @@ kubectl -n kubecost port-forward svc/kubecost-prometheus-server 8080:80
Open the Prometheus web interface in your browser by navigating to `http://localhost:8080`. In the search box, begin typing the prefix for a metric, for example `DCGM_FI_DEV_POWER_USAGE`. Click Execute to view the returned query and verify that there is data present. An example is shown below.

![Prometheus query showing DCGM Exporter metric](/images/gpu-prometheus-query.png)

## Shared GPU Support

Kubecost supports NVIDIA GPU sharing using either the CUDA [time-slicing](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/gpu-sharing.html) or [Multi-Process Service (MPS)](https://docs.nvidia.com/deploy/mps/index.html) methods. MIG is currently unsupported but is being evaluated for a future release. When employing either time-slicing or MPS, you must set the `renameByDefault=true` option in the [NVIDIA device plugin's](https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file#shared-access-to-gpus) configuration stanza. This parameter instructs the device plugin to advertise the resource `nvidia.com/gpu.shared` on nodes where GPU sharing is enabled. Without it, the device plugin will instead advertise `nvidia.com/gpu`, leaving Kubecost unable to disambiguate an "exclusive" GPU access request from a shared GPU access request. As a result, Kubecost's cost information will be inaccurate.

{% hint style="warning" %}
Prior to enabling GPU sharing in your cluster, view the [Limitations](#limitations) section to determine if this is right for you.
{% endhint %}

The following is an example of a time-slicing configuration which sets the `renameByDefault` parameter.

```yaml
version: v1
sharing:
  timeSlicing:
    renameByDefault: true
    failRequestsGreaterThanOne: true
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```

With this configuration saved and applied to nodes, they will begin to advertise the `nvidia.com/gpu.shared` device with a quantity equal to the replica count, defined in the configuration, multiplied by the number of physical GPUs inside the node. For example, a node with four (4) physical NVIDIA GPUs which uses this configuration will advertise sixteen (16) shared GPU devices.

```sh
$ kubectl describe node mynodename
...
Capacity:
  nvidia.com/gpu.shared: 16
...
```
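As a sanity check on the arithmetic described above, the advertised shared-device count is simply the configured replica count multiplied by the node's physical GPUs. Using the example numbers:

```shell
# Advertised shared-GPU devices = time-slicing replicas x physical GPUs.
# Values mirror the example above: 4 replicas, 4 physical GPUs per node.
replicas=4
physical_gpus=4
echo $(( replicas * physical_gpus ))
# Prints: 16
```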

### Limitations

There are limitations to be aware of when using NVIDIA GPU sharing with either time-slicing or MPS. Because [NVIDIA does not support](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/gpu-sharing.html#limitations) providing utilization metrics via DCGM Exporter for containers using shared GPUs, Kubecost will display a GPU cost of zero for these workloads. However, the [GPU Savings Optimization](/using-kubecost/navigating-the-kubecost-ui/savings/gpu-optimization.md) card (Kubecost Enterprise) can indicate in the utilization table which containers are configured for GPU sharing, providing some visibility.
56 changes: 56 additions & 0 deletions integrations/turbonomic-integration.md
@@ -0,0 +1,56 @@
# Kubecost Turbonomic Integration

{% hint style="info" %}
This integration is currently in beta. Please read the documentation carefully.
{% endhint %}

The Turbonomic Integration feature enables users to obtain supplemental cost information on actions recommended by Turbonomic. This integration is required to display the [Turbonomic Actions Savings Cards](../using-kubecost/navigating-the-kubecost-ui/savings/turbonomic-actions.md).

## Usage

Prerequisites:

- A running Turbonomic client

Kubecost requires network access to your Turbonomic installation via an OAuth 2.0 client. The OAuth client must be created with the following settings:
- Role: `ADVISOR`
- ClientAuthenticationMethods: `client_secret_post`

See the [IBM Turbonomic documentation](https://www.ibm.com/docs/en/tarm/8.14.3?topic=cookbook-authenticating-oauth-20-clients-api#cookbook_administration_oauth_authentication__title__4) for instructions on how to create an OAuth 2.0 client.

### Step 1: Configure Helm values

The below YAML is an example of how to configure the Turbonomic integration in your Helm values file.

```yaml
global:
  integrations:
    turbonomic:
      enabled: true
      clientId: "" # REQUIRED. OAuth 2.0 client ID
      clientSecret: "" # REQUIRED. OAuth 2.0 client secret
      role: "ADVISOR" # REQUIRED. OAuth 2.0 client role
      host: "" # REQUIRED. URL to the Turbonomic API (e.g. "https://turbonomic.example.com")
      insecureClient: false # Set to true to skip TLS certificate verification. Default false.
```

### Step 2: Apply and validate your changes

If deploying changes via Helm, you will be able to run a command similar to:

```sh
helm upgrade -i kubecost cost-analyzer \
--repo https://kubecost.github.io/cost-analyzer/ \
--namespace kubecost \
-f values.yaml
```

Once you've applied your changes, validate that the integration is successful by checking the Aggregator pod logs. You should see logs similar to the following:

```sh
kubectl logs statefulset/kubecost-aggregator -n kubecost | grep -i "Turbonomic"
```

```txt
DBG Turbonomic: Ingestor: completed run with 32 turbonomic actions ingested
```
@@ -0,0 +1,44 @@
# GPU Optimization

The GPU optimization page, a Kubecost Enterprise feature, shows you details on your workloads (containers and their relatives) which are using GPUs and proactively identifies ways in which you can save money on them. Kubecost collects and processes [GPU utilization metrics](/install-and-configure/advanced-configuration/gpu.md) to power the contents of this page. The page is broken down into two main sections: a workload utilization table and recommendation cards.

{% hint style="info" %}
If the GPU Optimization savings card appears to be greyed out, click the meatballs menu in the upper right and select "Unarchive".
{% endhint %}

![GPU Optimization dashboard](/images/gpu-savings-optimize-dashboard.png)

## Utilization Table

The utilization table displays the GPU-related workloads in your Kubecost environment and provides many details that can help you understand what is going on. Unlike other pages in Kubecost, which display all workloads, the utilization table on this page is constrained to workloads requesting some amount of GPU; it is not an extract of the Allocation page. Aggregations which do not feature a GPU in some way are intentionally absent from this table. For example, if your Kubecost estate has three (3) clusters but only one (1) of them has GPUs, only the cluster with GPUs will display content in this table.

Depending on the aggregation, there will be information presented specific to that aggregation that may not be found on others. For example, aggregating by cluster shows the number of nodes containing at least one GPU as well as the number of containers requesting at least one GPU during the given time window. The container and pod aggregations show, among other columns, the node on which the workload ran or is running, whether it is using [shared GPUs](/install-and-configure/advanced-configuration/gpu.md#shared-gpu-support), and its average and maximum utilization of those GPUs.

A utilization threshold slider at the top of this table lets you constrain the returned results to workloads whose GPU utilization, either maximum or average, is at or below the chosen value. This makes it easier to identify underutilized GPU workloads across your estate. For example, to view workloads with a maximum GPU utilization of up to 80%, set the slider to 80% and Kubecost filters from view any workloads above that number.

## Recommendations

The bottom half of the page presents recommendations on where and how to save money on GPU workloads. Based on the time window defined at the top of the page, Kubecost displays one card per container for which it has identified a possible savings opportunity.

Kubecost provides proactive recommendations on how to save money on GPU workloads in three different categories: Optimize, Remove, and Share.

- **Optimize**: Containers which request more than one GPU but are not using at least one of those GPUs will trigger the Optimize recommendation. In this card, Kubecost shows the container which can be optimized by reconfiguring it to remove the number of unused GPUs observed during the time window selected. This can be useful, for example, in cases where the application in the container was either not written to make use of multiple GPUs or where use of multiple GPUs is not achieved due to the nature of the workload. The possible savings displayed on this tile is the cost of only the unused GPUs over the course of a month.
- **Remove**: Containers which request a single GPU but are found not to use it are flagged for removal. In this card, Kubecost shows the container which can be removed from the cluster, thereby freeing up its GPU. You may see this card if, for example, a workload was created which requests a GPU but never uses it due to a misconfiguration, or a workload used a GPU for a period of time but that use has ended while the container continues to run. In any case, containers which request but do not use a GPU prevent other workloads, such as pending jobs, from being scheduled due to "GPU squatting." The possible savings displayed on this tile is the cost of removing this container entirely from the cluster over the course of a month.
- **Share**: Containers which request a single GPU but are using somewhere between zero and 100% of it are identified as candidates for GPU sharing. In this card, Kubecost shows the container which is not fully utilizing a GPU and can potentially request access to a shared GPU instead. GPU sharing is a technique whereby multiple containers, each of which needs some GPU resources, execute concurrently on a single GPU, potentially reducing costs by requiring fewer total GPUs. See the section on GPU sharing [here](/install-and-configure/advanced-configuration/gpu.md#shared-gpu-support) for more details on whether this is right for you. Because reconfiguring a workload to request access to a shared GPU is highly variable and depends on many factors, Kubecost does not show a possible savings number for this recommendation type. This does not mean, however, that no savings are likely to result from configuring your cluster and appropriate workloads for GPU sharing.
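To make the Optimize card's savings figure concrete, here is a rough sketch of the underlying arithmetic: the cost of only the unused GPUs over a month. The GPU price and hours-per-month values are illustrative assumptions, not figures Kubecost uses internally.

```shell
# Sketch of the Optimize savings figure: cost of the unused GPUs only.
# All numeric inputs are illustrative assumptions.
requested_gpus=4          # GPUs the container requests
used_gpus=1               # GPUs observed in use during the window
gpu_hourly_cost=0.35      # assumed on-demand price per GPU hour (USD)
hours_per_month=730
awk -v r="$requested_gpus" -v u="$used_gpus" \
    -v c="$gpu_hourly_cost" -v h="$hours_per_month" \
    'BEGIN { printf "Possible monthly savings: $%.2f\n", (r - u) * c * h }'
# Prints: Possible monthly savings: $766.50
```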

Clicking each recommendation tile displays a window with further details designed to help you identify exactly which workload Kubecost has flagged and to explain why the recommendation was made, with the goal of helping you gain confidence in its accuracy. The window contains a utilization graph over the selected time window, details on the container and its location in the cluster, and an explanation with more details on the recommendation.

![GPU Optimization savings modal](/images/gpu-savings-optimize-modal.png)

## Known Limitations

In the first version of the GPU Optimization Savings Insights card there are a few limitations of which to be aware.

- Multiple containers with the same name and running on the same cluster, node, and namespace combination (i.e., "identical" containers) might result in the following effects:
  - The savings number provided on Optimize and Remove cards may be an implicit sum of the total cost of these containers.
  - Recommendations will only be provided for one of them.
  - The utilization table may not show these identical containers.
- GPU nodes must be running or have run at least one container utilizing a GPU for it to be represented on the utilization table in either the Cluster aggregation’s GPU nodes column or on the Node aggregation.
- Optimize recommendations may not be as accurate as possible in certain cases, since Kubecost currently infers utilization of all GPUs from a single averaged utilization number.
- For upgrades from prior versions to 2.5.0, there may be cases where Max. GPU Utilization shows a smaller percentage than Avg. GPU Utilization. This will self-correct once the chosen window size is smaller than the time the 2.5.0 instance has been collecting the new maximum GPU utilization metric.
- The GPU Optimization card on the Savings Insights screen may initially appear greyed out. Click the meatballs icon in the upper right and choose "Unarchive" to make the card appear as the others.