feat: get official field doc (#457)
* fix(deps): update module github.com/aws/aws-sdk-go to v1.44.267 (#451)

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* feat: get official field doc

Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* feat: use schema from server

Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* feat: add configuration api route (#459)

* feat: add configuration api route

Signed-off-by: Matthis Holleville <matthish29@gmail.com>

* feat: rename cache methods

Signed-off-by: Matthis Holleville <matthish29@gmail.com>

---------

Signed-off-by: Matthis Holleville <matthish29@gmail.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix(deps): update module github.com/aws/aws-sdk-go to v1.44.269 (#458)

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix: updated list.go to handle k8sgpt cache list crashing issue (#455)

* Update list.go

Signed-off-by: Krishna Dutt Panchagnula <krishnadutt123@gmail.com>

* fix: updated list.go to handle k8sgpt cache list crashing issue

Signed-off-by: Krishna Dutt Panchagnula <krishnadutt123@gmail.com>

---------

Signed-off-by: Krishna Dutt Panchagnula <krishnadutt123@gmail.com>
Co-authored-by: Alex Jones <alexsimonjones@gmail.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* chore(main): release 0.3.5 (#452)

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* chore(deps): update google-github-actions/release-please-action digest to 51ee8ae (#464)

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix: name of sa reference in deployment (#468)

Signed-off-by: Johannes Kleinlercher <johannes@kleinlercher.at>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix(deps): update module github.com/aws/aws-sdk-go to v1.44.270 (#465)

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix: typo (#463)

Signed-off-by: Rakshit Gondwal <rakshitgondwal3@gmail.com>
Co-authored-by: Thomas Schuetz <38893055+thschue@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* fix(deps): update module github.com/aws/aws-sdk-go to v1.44.271 (#469)

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* feat: Add with-doc flag to enable/disable kubernetes doc

Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* use fmt.Sprintf in apireference.go

Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

* add --with-doc to readme

Signed-off-by: David Sabatie <david.sabatie@notrenet.com>

---------

Signed-off-by: Renovate Bot <bot@renovateapp.com>
Signed-off-by: David Sabatie <david.sabatie@notrenet.com>
Signed-off-by: Matthis Holleville <matthish29@gmail.com>
Signed-off-by: Krishna Dutt Panchagnula <krishnadutt123@gmail.com>
Signed-off-by: Johannes Kleinlercher <johannes@kleinlercher.at>
Signed-off-by: Rakshit Gondwal <rakshitgondwal3@gmail.com>
Signed-off-by: golgoth31 <golgoth31@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Matthis <99146727+matthisholleville@users.noreply.github.com>
Co-authored-by: Krishna Dutt Panchagnula <krishnadutt123@gmail.com>
Co-authored-by: Alex Jones <alexsimonjones@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Johannes Kleinlercher <johannes@kleinlercher.at>
Co-authored-by: Rakshit Gondwal <98955085+rakshitgondwal@users.noreply.github.com>
Co-authored-by: Thomas Schuetz <38893055+thschue@users.noreply.github.com>
9 people authored May 31, 2023
1 parent 6052a5b commit f9621af
Showing 18 changed files with 309 additions and 53 deletions.
20 changes: 11 additions & 9 deletions README.md
@@ -128,6 +128,7 @@ _This mode of operation is ideal for continuous monitoring of your cluster and c
* Run `k8sgpt filters` to manage the active filters used by the analyzer. By default, all filters are executed during analysis.
* Run `k8sgpt analyze` to run a scan.
* And use `k8sgpt analyze --explain` to get a more detailed explanation of the issues.
* You can also run `k8sgpt analyze --with-doc` (with or without the explain flag) to get the official Kubernetes documentation for the fields involved in each issue.

## Analyzers

@@ -163,6 +164,7 @@ _Run a scan with the default analyzers_
k8sgpt generate
k8sgpt auth add
k8sgpt analyze --explain
k8sgpt analyze --explain --with-doc
```

_Filter on resource_
@@ -279,7 +281,7 @@ curl -X GET "http://localhost:8080/analyze?namespace=k8sgpt&explain=false"
<details>
<summary> LocalAI provider </summary>

To run local models, it is possible to use OpenAI compatible APIs, for instance [LocalAI](https://github.com/go-skynet/LocalAI) which uses [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml) to run inference on consumer-grade hardware. Models supported by LocalAI for instance are Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J and koala.


To run local inference, you need to download the models first, for instance you can find `ggml` compatible models in [huggingface.com](https://huggingface.co/models?search=ggml) (for example vicuna, alpaca and koala).
@@ -309,16 +311,16 @@ k8sgpt analyze --explain --backend localai

<em>Prerequisites:</em> an Azure OpenAI deployment is needed, please visit MS official [documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource) to create your own.

To authenticate with k8sgpt, you will need the Azure OpenAI endpoint of your tenant `"https://your Azure OpenAI Endpoint"`, the api key to access your deployment, the deployment name of your model and the model name itself.


To run k8sgpt, run `k8sgpt auth` with the `azureopenai` backend:
```
k8sgpt auth add --backend azureopenai --baseurl https://<your Azure OpenAI endpoint> --engine <deployment_name> --model <model_name>
```
Lastly, enter your Azure API key, after the prompt.

Now you are ready to analyze with the azure openai backend:
```
k8sgpt analyze --explain --backend azureopenai
```
@@ -395,31 +397,31 @@ The Kubernetes system is trying to scale a StatefulSet named fake-deployment usi

Config file locations:
| OS | Path |
| ------- | ------------------------------------------------ |
| MacOS | ~/Library/Application Support/k8sgpt/k8sgpt.yaml |
| Linux | ~/.config/k8sgpt/k8sgpt.yaml |
| Windows | %LOCALAPPDATA%/k8sgpt/k8sgpt.yaml |
</details>

<details>
There may be scenarios where caching remotely is preferred.
In these scenarios K8sGPT supports AWS S3 Integration.

<summary> Remote caching </summary>

_As a prerequisite `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are required as environmental variables._

_Adding a remote cache_
Note: this will create the bucket if it does not exist
```
k8sgpt cache add --region <aws region> --bucket <name>
```

_Listing cache items_
```
k8sgpt cache list
```

_Removing the remote cache_
Note: this will not delete the bucket
```
5 changes: 4 additions & 1 deletion cmd/analyze/analyze.go
@@ -32,6 +32,7 @@ var (
namespace string
anonymize bool
maxConcurrency int
withDoc bool
)

// AnalyzeCmd represents the problems command
@@ -45,7 +46,7 @@ var AnalyzeCmd = &cobra.Command{

// AnalysisResult configuration
config, err := analysis.NewAnalysis(backend,
language, filters, namespace, nocache, explain, maxConcurrency)
language, filters, namespace, nocache, explain, maxConcurrency, withDoc)
if err != nil {
color.Red("Error: %v", err)
os.Exit(1)
@@ -91,4 +92,6 @@ func init() {
AnalyzeCmd.Flags().StringVarP(&language, "language", "l", "english", "Languages to use for AI (e.g. 'English', 'Spanish', 'French', 'German', 'Italian', 'Portuguese', 'Dutch', 'Russian', 'Chinese', 'Japanese', 'Korean')")
// add max concurrency
AnalyzeCmd.Flags().IntVarP(&maxConcurrency, "max-concurrency", "m", 10, "Maximum number of concurrent requests to the Kubernetes API server")
// kubernetes doc flag
AnalyzeCmd.Flags().BoolVarP(&withDoc, "with-doc", "d", false, "Give me the official documentation of the involved field")
}
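
For reference, here is a minimal sketch of calling the analysis package directly with the new argument, mirroring what `AnalyzeCmd` does above. The backend name and the assumption that a provider was already configured via `k8sgpt auth add` are illustrative, not part of this change:

```go
package main

import (
	"fmt"
	"os"

	"github.com/k8sgpt-ai/k8sgpt/pkg/analysis"
)

func main() {
	// Mirrors AnalyzeCmd: the trailing boolean is the new withDoc flag, which
	// makes RunAnalysis pull the OpenAPI schema from the API server.
	config, err := analysis.NewAnalysis(
		"openai",  // backend; assumes `k8sgpt auth add` was run for it
		"english", // language
		nil,       // filters: nil means all active filters
		"",        // namespace: empty means all namespaces
		false,     // noCache
		true,      // explain
		10,        // maxConcurrency
		true,      // withDoc
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Errors and results accumulate on config for later output.
	config.RunAnalysis()
}
```
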
2 changes: 1 addition & 1 deletion go.mod
@@ -75,7 +75,7 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/gnostic v0.6.9 // indirect
github.com/google/gnostic v0.6.9
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/go-containerregistry v0.14.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
25 changes: 20 additions & 5 deletions pkg/analysis/analysis.go
@@ -23,6 +23,7 @@ import (
"sync"

"github.com/fatih/color"
openapi_v2 "github.com/google/gnostic/openapiv2"
"github.com/k8sgpt-ai/k8sgpt/pkg/ai"
"github.com/k8sgpt-ai/k8sgpt/pkg/analyzer"
"github.com/k8sgpt-ai/k8sgpt/pkg/cache"
@@ -45,6 +46,7 @@ type Analysis struct {
Explain bool
MaxConcurrency int
AnalysisAIProvider string // The name of the AI Provider used for this analysis
WithDoc bool
}

type AnalysisStatus string
@@ -63,7 +65,7 @@ type JsonOutput struct {
Results []common.Result `json:"results"`
}

func NewAnalysis(backend string, language string, filters []string, namespace string, noCache bool, explain bool, maxConcurrency int) (*Analysis, error) {
func NewAnalysis(backend string, language string, filters []string, namespace string, noCache bool, explain bool, maxConcurrency int, withDoc bool) (*Analysis, error) {
var configAI ai.AIConfiguration
err := viper.UnmarshalKey("ai", &configAI)
if err != nil {
@@ -128,6 +130,7 @@ func NewAnalysis(backend string, language string, filters []string, namespace st
Explain: explain,
MaxConcurrency: maxConcurrency,
AnalysisAIProvider: backend,
WithDoc: withDoc,
}, nil
}

@@ -136,11 +139,23 @@ func (a *Analysis) RunAnalysis() {

coreAnalyzerMap, analyzerMap := analyzer.GetAnalyzerMap()

// we get the openapi schema from the server only if required by the flag "with-doc"
openapiSchema := &openapi_v2.Document{}
if a.WithDoc {
var openApiErr error

openapiSchema, openApiErr = a.Client.Client.Discovery().OpenAPISchema()
if openApiErr != nil {
a.Errors = append(a.Errors, fmt.Sprintf("[KubernetesDoc] %s", openApiErr))
}
}

analyzerConfig := common.Analyzer{
Client: a.Client,
Context: a.Context,
Namespace: a.Namespace,
AIClient: a.AIClient,
OpenapiSchema: openapiSchema,
}

semaphore := make(chan struct{}, a.MaxConcurrency)
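
The `withDoc` guard above is what keeps the extra round trip to the API server optional. As a rough standalone sketch of the same discovery call (the kubeconfig-based client construction here is an assumption; k8sgpt builds its client through its own wrapper):

```go
package main

import (
	"fmt"
	"log"

	openapi_v2 "github.com/google/gnostic/openapiv2"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// fetchSchema returns the cluster's OpenAPI v2 document, or an empty one when
// the caller did not ask for field docs (mirroring the withDoc guard above).
func fetchSchema(withDoc bool) (*openapi_v2.Document, error) {
	if !withDoc {
		return &openapi_v2.Document{}, nil
	}
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return nil, err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// The discovery client serves the aggregated OpenAPI v2 schema.
	return clientset.Discovery().OpenAPISchema()
}

func main() {
	doc, err := fetchSchema(true)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("definitions:", len(doc.GetDefinitions().GetAdditionalProperties()))
}
```
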
3 changes: 3 additions & 0 deletions pkg/analysis/output.go
@@ -78,6 +78,9 @@ func (a *Analysis) textOutput() ([]byte, error) {
color.YellowString(result.Name), color.CyanString(result.ParentObject)))
for _, err := range result.Error {
output.WriteString(fmt.Sprintf("- %s %s\n", color.RedString("Error:"), color.RedString(err.Text)))
if err.KubernetesDoc != "" {
output.WriteString(fmt.Sprintf(" %s %s\n", color.RedString("Kubernetes Doc:"), color.RedString(err.KubernetesDoc)))
}
}
output.WriteString(color.GreenString(result.Details + "\n"))
}
24 changes: 21 additions & 3 deletions pkg/analyzer/cronjob.go
@@ -18,16 +18,26 @@ import (
"time"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
cron "github.com/robfig/cron/v3"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)

type CronJobAnalyzer struct{}

func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "CronJob"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "batch",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -43,8 +53,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
for _, cronJob := range cronJobList.Items {
var failures []common.Failure
if cronJob.Spec.Suspend != nil && *cronJob.Spec.Suspend {
doc := apiDoc.GetApiDocV2("spec.suspend")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s is suspended", cronJob.Name),
Text: fmt.Sprintf("CronJob %s is suspended", cronJob.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
@@ -59,8 +72,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
} else {
// check the schedule format
if _, err := CheckCronScheduleIsValid(cronJob.Spec.Schedule); err != nil {
doc := apiDoc.GetApiDocV2("spec.schedule")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s has an invalid schedule: %s", cronJob.Name, err.Error()),
Text: fmt.Sprintf("CronJob %s has an invalid schedule: %s", cronJob.Name, err.Error()),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
@@ -78,9 +94,11 @@ func (analyzer CronJobAnalyzer) Analyze(a common.Analyzer) ([]common.Result, err
if cronJob.Spec.StartingDeadlineSeconds != nil {
deadline := time.Duration(*cronJob.Spec.StartingDeadlineSeconds) * time.Second
if deadline < 0 {
doc := apiDoc.GetApiDocV2("spec.startingDeadlineSeconds")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("CronJob %s has a negative starting deadline", cronJob.Name),
Text: fmt.Sprintf("CronJob %s has a negative starting deadline", cronJob.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: cronJob.Namespace,
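
`GetApiDocV2` itself lives in `pkg/kubernetes/apireference.go`, which is not expanded in this view. Purely as a hypothetical sketch of the idea (resolving a dotted field path such as `spec.suspend` against the OpenAPI v2 definitions), the lookup could be written along these lines; the helper names and the `$ref` handling are assumptions, not the actual implementation:

```go
package kubernetesdoc // hypothetical stand-in for pkg/kubernetes

import (
	"strings"

	openapi_v2 "github.com/google/gnostic/openapiv2"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

type K8sApiReference struct {
	Kind          string
	ApiVersion    schema.GroupVersion
	OpenapiSchema *openapi_v2.Document
}

// lookup finds a definition whose name ends with the given suffix, e.g.
// ".batch.v1.CronJob" matches "io.k8s.api.batch.v1.CronJob".
func (k K8sApiReference) lookup(suffix string) *openapi_v2.Schema {
	for _, d := range k.OpenapiSchema.GetDefinitions().GetAdditionalProperties() {
		if strings.HasSuffix(d.GetName(), suffix) {
			return d.GetValue()
		}
	}
	return nil
}

// FieldDoc walks a dotted path such as "spec.suspend" and returns the
// description of the final field, following one $ref per intermediate step.
func (k K8sApiReference) FieldDoc(path string) string {
	cur := k.lookup("." + k.ApiVersion.Group + "." + k.ApiVersion.Version + "." + k.Kind)
	parts := strings.Split(path, ".")
	for i, part := range parts {
		if cur == nil {
			return ""
		}
		var next *openapi_v2.Schema
		for _, p := range cur.GetProperties().GetAdditionalProperties() {
			if p.GetName() == part {
				next = p.GetValue()
				break
			}
		}
		if next == nil {
			return ""
		}
		if i == len(parts)-1 {
			return next.GetDescription()
		}
		// Intermediate fields are usually $refs to other definitions, e.g.
		// "#/definitions/io.k8s.api.batch.v1.CronJobSpec"; resolve them.
		if ref := next.GetXRef(); ref != "" {
			cur = k.lookup(strings.TrimPrefix(ref, "#/definitions/"))
		} else {
			cur = next
		}
	}
	return ""
}
```

With a reference built as in the hunk above, `apiDoc.GetApiDocV2("spec.suspend")` would then return the upstream description of `CronJob.spec.suspend`, which is what ends up in `KubernetesDoc`.
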
15 changes: 14 additions & 1 deletion pkg/analyzer/deployment.go
@@ -18,8 +18,10 @@ import (
"fmt"

v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
)

@@ -31,6 +33,14 @@ type DeploymentAnalyzer struct {
func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "Deployment"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "apps",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -45,8 +55,11 @@ func (d DeploymentAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error)
for _, deployment := range deployments.Items {
var failures []common.Failure
if *deployment.Spec.Replicas != deployment.Status.Replicas {
doc := apiDoc.GetApiDocV2("spec.replicas")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("Deployment %s/%s has %d replicas but %d are available", deployment.Namespace, deployment.Name, *deployment.Spec.Replicas, deployment.Status.Replicas),
Text: fmt.Sprintf("Deployment %s/%s has %d replicas but %d are available", deployment.Namespace, deployment.Name, *deployment.Spec.Replicas, deployment.Status.Replicas),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: deployment.Namespace,
20 changes: 18 additions & 2 deletions pkg/analyzer/hpa.go
@@ -17,17 +17,27 @@ import (
"fmt"

"github.com/k8sgpt-ai/k8sgpt/pkg/common"
"github.com/k8sgpt-ai/k8sgpt/pkg/kubernetes"
"github.com/k8sgpt-ai/k8sgpt/pkg/util"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)

type HpaAnalyzer struct{}

func (HpaAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {

kind := "HorizontalPodAutoscaler"
apiDoc := kubernetes.K8sApiReference{
Kind: kind,
ApiVersion: schema.GroupVersion{
Group: "autoscaling",
Version: "v1",
},
OpenapiSchema: a.OpenapiSchema,
}

AnalyzerErrorsMetric.DeletePartialMatch(map[string]string{
"analyzer_name": kind,
@@ -76,8 +86,11 @@ func (HpaAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
}

if podInfo == nil {
doc := apiDoc.GetApiDocV2("spec.scaleTargetRef")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("HorizontalPodAutoscaler uses %s/%s as ScaleTargetRef which does not exist.", scaleTargetRef.Kind, scaleTargetRef.Name),
Text: fmt.Sprintf("HorizontalPodAutoscaler uses %s/%s as ScaleTargetRef which does not exist.", scaleTargetRef.Kind, scaleTargetRef.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: scaleTargetRef.Name,
@@ -94,8 +107,11 @@ func (HpaAnalyzer) Analyze(a common.Analyzer) ([]common.Result, error) {
}

if containers <= 0 {
doc := apiDoc.GetApiDocV2("spec.scaleTargetRef.kind")

failures = append(failures, common.Failure{
Text: fmt.Sprintf("%s %s/%s does not have resource configured.", scaleTargetRef.Kind, a.Namespace, scaleTargetRef.Name),
Text: fmt.Sprintf("%s %s/%s does not have resource configured.", scaleTargetRef.Kind, a.Namespace, scaleTargetRef.Name),
KubernetesDoc: doc,
Sensitive: []common.Sensitive{
{
Unmasked: scaleTargetRef.Name,