Enable misspell, nestif golang linter #2240

Merged 44 commits on Dec 21, 2023
Changes from all commits
44 commits
41261a0
Enable gocyclo linter
ankitjain235 Aug 3, 2023
118bc8a
Enable gocritic linter
ankitjain235 Aug 3, 2023
9fe8ef0
Fix typo for linters-settings, fix gocyclo lint failures
ankitjain235 Aug 3, 2023
88f1097
Add lll linter
ankitjain235 Aug 3, 2023
cf599c9
Add nakedret linter
ankitjain235 Aug 3, 2023
4c91b50
Enable dupl linter
ankitjain235 Aug 3, 2023
54d1d5e
Enable exportloopref linter
ankitjain235 Aug 3, 2023
ad04108
Enable importas linter
ankitjain235 Aug 3, 2023
953121f
Enable misspell linter
ankitjain235 Aug 3, 2023
605f039
Enable nestif linter
ankitjain235 Aug 3, 2023
31e838f
Fix minor typo
ankitjain235 Aug 7, 2023
b4cf462
Merge master, resolve conflicts
ankitjain235 Oct 3, 2023
2f1bbe4
Update new lint misses, add TODO for lint disables
ankitjain235 Oct 3, 2023
ef78a07
Merge master, resolve conflicts
ankitjain235 Oct 3, 2023
62b8959
Merge master, resolve conflicts
ankitjain235 Oct 3, 2023
41933d9
Merge master, resolve conflicts
ankitjain235 Oct 3, 2023
9c3b9a1
Merge master, resolve conflicts
ankitjain235 Oct 3, 2023
44a39cb
Fix misspell lint failure
ankitjain235 Oct 3, 2023
a874943
Merge master, resolve conflicts
ankitjain235 Nov 29, 2023
612154b
Apply linter to new changes
ankitjain235 Nov 29, 2023
0007316
Merge master, resolve conflicts
ankitjain235 Nov 29, 2023
7a3945a
Fix merge issues
ankitjain235 Nov 29, 2023
75b4f9d
Merge parent, resolve conflicts
ankitjain235 Nov 29, 2023
bc30421
Merge parent, resolve conflicts
ankitjain235 Nov 29, 2023
6695929
Merge master, resolve conflicts
ankitjain235 Dec 4, 2023
8c38dc0
Merge branch 'enable-linters-2' into enable-linters-3
ankitjain235 Dec 4, 2023
d522754
Add missing newlint
ankitjain235 Dec 4, 2023
32f0cae
Merge parent, resolve conflicts
ankitjain235 Dec 4, 2023
fded60e
Add explanation for ignoring dupl linter
ankitjain235 Dec 4, 2023
3f68dab
Merge branch 'enable-linters-2' into enable-linters-3
ankitjain235 Dec 4, 2023
08e0240
Merge branch 'enable-linters-3' into enable-linters-4
ankitjain235 Dec 4, 2023
e30c1ce
Remove unnecessary comment
ankitjain235 Dec 4, 2023
5b6234c
Merge branch 'master' into enable-linters-2
ankitjain235 Dec 6, 2023
5e97339
Address review comments
ankitjain235 Dec 18, 2023
da05972
Address review comments - move args
ankitjain235 Dec 19, 2023
a8795b2
Merge branch 'master' into enable-linters-2
ankitjain235 Dec 19, 2023
95d4b08
Merge parent, resolve conflicts
ankitjain235 Dec 19, 2023
7e7f3bf
Merge parent, resolve conflicts
ankitjain235 Dec 19, 2023
446c437
Temporarily disable depguard linter, lint fix
ankitjain235 Dec 19, 2023
8727b06
Merge branch 'enable-linters-2' into enable-linters-3
ankitjain235 Dec 19, 2023
a435fdf
Merge branch 'enable-linters-3' into enable-linters-4
ankitjain235 Dec 19, 2023
74180ac
Fix merge error
ankitjain235 Dec 19, 2023
722b3f8
Merge master, resolve conflicts
ankitjain235 Dec 21, 2023
b7c7b3a
Fix error message in test
ankitjain235 Dec 21, 2023
6 changes: 6 additions & 0 deletions .golangci.yml
@@ -22,6 +22,8 @@ linters:
- dupl
- exportloopref
- importas
- misspell
- nestif

run:
timeout: 10m # golangci-lint run's timeout.
@@ -37,6 +39,8 @@ issues:
- unparam # Tests might have unused function parameters.
- lll
- dupl
- misspell
Contributor:

I think it is fine to enable misspell in test files if it doesn't require many changes.

Contributor Author:

It does require changes. Enabling these linters for tests would need a lot of extra work. I'll create a separate issue for that?

- nestif

- text: "`ctx` is unused" # Context might not be in use in places, but for consistency, we pass it.
linters:
@@ -60,3 +64,5 @@ linters-settings:
alias: metav1
- pkg: github.com/kanisterio/kanister/pkg/apis/cr/v1alpha1
alias: crv1alpha1
nestif:
min-complexity: 6
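For context, nestif reports if statements whose nested branches exceed the configured complexity threshold (min-complexity: 6 above), and the additions under issues extend the existing test exclusions so neither new linter runs against test code for now, per the review thread above. A rough, hypothetical sketch of the shape nestif flags and the early-return form that usually clears it (the names are illustrative, not from this repository):

package example

import "errors"

// Hypothetical illustration: each additional level of nesting inside an if
// chain raises the nestif complexity score, and chains at or above the
// configured min-complexity are reported.
func validateNested(cfg map[string]string) error {
    if cfg != nil {
        if region, ok := cfg["region"]; ok {
            if region != "" {
                if len(region) > 2 {
                    return nil
                }
            }
        }
    }
    return errors.New("invalid config")
}

// The usual remediation is to flatten with early returns, which keeps the
// behavior identical while removing the nesting nestif penalizes.
func validateFlat(cfg map[string]string) error {
    if cfg == nil {
        return errors.New("invalid config")
    }
    region, ok := cfg["region"]
    if !ok || region == "" || len(region) <= 2 {
        return errors.New("invalid config")
    }
    return nil
}

The exact score nestif assigns depends on its implementation, so treat the nesting levels above as indicative; golangci-lint run reports the real findings under this configuration.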
4 changes: 2 additions & 2 deletions pkg/app/app.go
@@ -23,7 +23,7 @@ import (

// App represents an application we can install into a namespace.
type App interface {
// Init instantiates the app based on the environemnt configuration,
// Init instantiates the app based on the environment configuration,
// including environement variables and state in the Kubernetes cluster. If
// any required configuration is not discoverable, Init will return an
// error.
@@ -48,7 +48,7 @@ type App interface {
type DatabaseApp interface {
App
// Ping will issue trivial request to the database to see if it is
// accessable.
// accessible.
Ping(context.Context) error
// Insert adds n entries to the database.
Insert(ctx context.Context) error
4 changes: 2 additions & 2 deletions pkg/app/cassandra.go
@@ -123,7 +123,7 @@ func (cas *CassandraInstance) Object() crv1alpha1.ObjectReference {
}
}

// Uninstall us used to remove the datbase application
// Uninstall us used to remove the database application
func (cas *CassandraInstance) Uninstall(ctx context.Context) error {
log.Print("Uninstalling application.", field.M{"app": cas.name})
cli, err := helm.NewCliClient()
@@ -142,7 +142,7 @@ func (cas *CassandraInstance) GetClusterScopedResources(ctx context.Context) []c
return nil
}

// Ping is used to ping the application to check the datbase connectivity
// Ping is used to ping the application to check the database connectivity
func (cas *CassandraInstance) Ping(ctx context.Context) error {
log.Print("Pinging the application.", field.M{"app": cas.name})

2 changes: 1 addition & 1 deletion pkg/app/elasticsearch.go
@@ -174,7 +174,7 @@ func (esi *ElasticsearchInstance) Insert(ctx context.Context) error {
addDocumentToIndexCMD := []string{"sh", "-c", esi.curlCommandWithPayload("POST", esi.indexname+"/_doc/?refresh=true", "'{\"appname\": \"kanister\" }'")}
_, stderr, err := esi.execCommand(ctx, addDocumentToIndexCMD)
if err != nil {
// even one insert failed we will have to return becasue
// even one insert failed we will have to return because
// the count wont match anyway and the test will fail
return errors.Wrapf(err, "Error %s inserting document to an index %s.", stderr, esi.indexname)
}
2 changes: 1 addition & 1 deletion pkg/app/mongodb-deploymentconfig.go
@@ -184,7 +184,7 @@ func (mongo *MongoDBDepConfig) execCommand(ctx context.Context, command []string
return "", "", err
}
stdout, stderr, err := kube.Exec(mongo.cli, mongo.namespace, podName, containerName, command, nil)
log.Print("Executing the command in pod and contianer", field.M{"pod": podName, "container": containerName, "cmd": command})
log.Print("Executing the command in pod and container", field.M{"pod": podName, "container": containerName, "cmd": command})

return stdout, stderr, errors.Wrapf(err, "Error executing command in the pod")
}
2 changes: 1 addition & 1 deletion pkg/blockstorage/awsefs/awsefs.go
@@ -66,7 +66,7 @@ var allowedMetadataKeys = map[string]bool{
"newFileSystem": true,
}

// NewEFSProvider retuns a blockstorage provider for AWS EFS.
// NewEFSProvider returns a blockstorage provider for AWS EFS.
func NewEFSProvider(ctx context.Context, config map[string]string) (blockstorage.Provider, error) {
awsConfig, region, err := awsconfig.GetConfig(ctx, config)
if err != nil {
2 changes: 1 addition & 1 deletion pkg/blockstorage/azure/auth.go
@@ -11,7 +11,7 @@ import (

const ActiveDirectory = "activeDirectory"

// currently avaialble types: https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization
// currently available types: https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization
// to be available with azidentity: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#readme-credential-types
// determine if the combination of creds are client secret creds
func isClientCredsAvailable(config map[string]string) bool {
28 changes: 14 additions & 14 deletions pkg/blockstorage/gcepd/gcepd.go
@@ -479,20 +479,20 @@ func (s *GpdStorage) SetTags(ctx context.Context, resource interface{}, tags map
if err != nil {
return err
}
} else {
vol, err := s.service.Disks.Get(s.project, res.Az, res.ID).Context(ctx).Do()
if err != nil {
return err
}
tags = ktags.AddMissingTags(vol.Labels, ktags.GetTags(tags))
slr := &compute.ZoneSetLabelsRequest{
LabelFingerprint: vol.LabelFingerprint,
Labels: blockstorage.SanitizeTags(tags),
}
op, err = s.service.Disks.SetLabels(s.project, res.Az, vol.Name, slr).Do()
if err != nil {
return err
}
return s.waitOnOperation(ctx, op, res.Az)
}
vol, err := s.service.Disks.Get(s.project, res.Az, res.ID).Context(ctx).Do()
if err != nil {
return err
}
tags = ktags.AddMissingTags(vol.Labels, ktags.GetTags(tags))
slr := &compute.ZoneSetLabelsRequest{
LabelFingerprint: vol.LabelFingerprint,
Labels: blockstorage.SanitizeTags(tags),
}
op, err = s.service.Disks.SetLabels(s.project, res.Az, vol.Name, slr).Do()
if err != nil {
return err
}
return s.waitOnOperation(ctx, op, res.Az)
}
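The SetTags change above shows the usual way such findings get resolved: when the first branch of an if/else ends by returning, the else body can move out to function scope, removing one level of nesting without changing behavior. A schematic sketch of that pattern with placeholder names, not the actual GCE PD types:

package example

import "fmt"

// Before: the else adds an indentation level even though the if branch returns.
func setLabelsNested(isSnapshot bool) error {
    if isSnapshot {
        fmt.Println("label the snapshot")
        return nil
    } else {
        fmt.Println("label the disk")
        return nil
    }
}

// After: with the if branch returning, the else body moves to function scope,
// which is the shape SetTags now has.
func setLabelsFlat(isSnapshot bool) error {
    if isSnapshot {
        fmt.Println("label the snapshot")
        return nil
    }
    fmt.Println("label the disk")
    return nil
}

Both forms behave identically; the flattened one is simply easier to keep under the nestif threshold.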
2 changes: 1 addition & 1 deletion pkg/blockstorage/getter/getter.go
@@ -34,7 +34,7 @@ var _ Getter = (*getter)(nil)

type getter struct{}

// New retuns a new Getter
// New returns a new Getter
func New() Getter {
return &getter{}
}
2 changes: 1 addition & 1 deletion pkg/blockstorage/vmware/vmware.go
@@ -795,7 +795,7 @@ func (ge govmomiError) Format() string {
return fmt.Sprintf("[%s]", strings.Join(msgs, "; "))
}

//nolint:gocognit
//nolint:gocognit,nestif
func (ge govmomiError) ExtractMessages() []string {
err := ge.err

2 changes: 1 addition & 1 deletion pkg/blockstorage/zone/zone.go
@@ -57,7 +57,7 @@ func FromSourceRegionZone(ctx context.Context, m Mapper, kubeCli kubernetes.Inte
}
}
if len(newZones) == 0 {
return nil, errors.Errorf("Unable to find valid availabilty zones for region (%s)", sourceRegion)
return nil, errors.Errorf("Unable to find valid availability zones for region (%s)", sourceRegion)
}
var zones []string
for z := range newZones {
4 changes: 2 additions & 2 deletions pkg/blockstorage/zone/zone_test.go
@@ -562,14 +562,14 @@ func (s ZoneSuite) TestFromSourceRegionZone(c *C) {
inZones: []string{"us-west-2a"},
inCli: nil,
outZones: nil,
outErr: fmt.Errorf(".*Unable to find valid availabilty zones for region.*"),
outErr: fmt.Errorf(".*Unable to find valid availability zones for region.*"),
},
{ // Kubernetes provided zones are invalid use valid sourceZones
inRegion: "us-west-2",
inZones: []string{"us-west-2a", "us-west-2b", "us-west-2d"},
inCli: nil,
outZones: []string{"us-west-2a", "us-west-2b"},
outErr: fmt.Errorf(".*Unable to find valid availabilty zones for region.*"),
outErr: fmt.Errorf(".*Unable to find valid availability zones for region.*"),
},
{ // Source zone not found but other valid zones available
inRegion: "us-west-2",
2 changes: 1 addition & 1 deletion pkg/config/helpers.go
@@ -39,7 +39,7 @@ func GetClusterName(cli kubernetes.Interface) (string, error) {
func GetEnvOrSkip(c *check.C, varName string) string {
v := os.Getenv(varName)
if v == "" {
reason := fmt.Sprintf("Test %s requires the environemnt variable '%s'", c.TestName(), varName)
reason := fmt.Sprintf("Test %s requires the environment variable '%s'", c.TestName(), varName)
c.Skip(reason)
}
return v
2 changes: 1 addition & 1 deletion pkg/controllers/repositoryserver/repository.go
@@ -37,7 +37,7 @@ func (h *RepoServerHandler) connectToKopiaRepository() error {
MetadataCacheLimitMB: *cacheSizeSettings.Metadata,
},
Username: h.RepositoryServer.Spec.Repository.Username,
// TODO(Amruta): Generate path for respository
// TODO(Amruta): Generate path for repository
RepoPathPrefix: h.RepositoryServer.Spec.Repository.RootPath,
Location: h.RepositoryServerSecrets.storage.Data,
}
2 changes: 1 addition & 1 deletion pkg/controllers/repositoryserver/secrets_manager.go
@@ -35,7 +35,7 @@ type repositoryServerSecrets struct {
// getSecretsFromCR fetches all the secrets in the RepositoryServer CR
func (h *RepoServerHandler) getSecretsFromCR(ctx context.Context) error {
// TODO: For now, users should make sure all the secrets and the RepositoryServer CR are present in the
// same namespace. This namespace field can be overriden when we start creating secrets using 'kanctl' utility
// same namespace. This namespace field can be overridden when we start creating secrets using 'kanctl' utility
repositoryServer := h.RepositoryServer
h.Logger.Info("Fetching secrets from all the secret references in the CR")
storage, err := h.fetchSecret(ctx, &repositoryServer.Spec.Storage.SecretRef)
2 changes: 1 addition & 1 deletion pkg/customresource/customresource.go
@@ -153,7 +153,7 @@ func createCRD(context Context, resource CustomResource) error {
}

func rawCRDFromFile(path string) ([]byte, error) {
// yamls is the variable that has embeded custom resource manifest. More at `embed.go`
// yamls is the variable that has embedded custom resource manifest. More at `embed.go`
return yamls.ReadFile(path)
}

2 changes: 1 addition & 1 deletion pkg/customresource/embed.go
@@ -5,7 +5,7 @@ import "embed"
// embed.go embeds the CRD yamls (actionset, profile, blueprint) with the
// controller binary so that we can read these manifests in runtime.

// We need these manfiests at two places, at `pkg/customresource/` and at
// We need these manifests at two places, at `pkg/customresource/` and at
// `helm/kanister-operator/crds`. To make sure we are not duplicating the
// things we have original files at `pkg/customresource` and have soft links
// at `helm/kanister-operator/crds`.
57 changes: 33 additions & 24 deletions pkg/function/create_rds_snapshot.go
@@ -95,30 +95,9 @@ func createRDSSnapshot(ctx context.Context, instanceID string, dbEngine RDSDBEng
// Create Snapshot
snapshotID := fmt.Sprintf("%s-%s", instanceID, rand.String(10))

log.WithContext(ctx).Print("Creating RDS snapshot", field.M{"SnapshotID": snapshotID})
if !isAuroraCluster(string(dbEngine)) {
dbSnapshotOutput, err := rdsCli.CreateDBSnapshot(ctx, instanceID, snapshotID)
if err != nil {
return nil, errors.Wrap(err, "Failed to create snapshot")
}

// Wait until snapshot becomes available
log.WithContext(ctx).Print("Waiting for RDS snapshot to be available", field.M{"SnapshotID": snapshotID})
if err := rdsCli.WaitUntilDBSnapshotAvailable(ctx, snapshotID); err != nil {
return nil, errors.Wrap(err, "Error while waiting snapshot to be available")
}
if dbSnapshotOutput.DBSnapshot != nil && dbSnapshotOutput.DBSnapshot.AllocatedStorage != nil {
allocatedStorage = *(dbSnapshotOutput.DBSnapshot.AllocatedStorage)
}
} else {
if _, err := rdsCli.CreateDBClusterSnapshot(ctx, instanceID, snapshotID); err != nil {
return nil, errors.Wrap(err, "Failed to create cluster snapshot")
}

log.WithContext(ctx).Print("Waiting for RDS Aurora snapshot to be available", field.M{"SnapshotID": snapshotID})
if err := rdsCli.WaitUntilDBClusterSnapshotAvailable(ctx, snapshotID); err != nil {
return nil, errors.Wrap(err, "Error while waiting snapshot to be available")
}
allocatedStorage, err = createSnapshot(ctx, rdsCli, snapshotID, instanceID, string(dbEngine))
if err != nil {
return nil, err
}

// Find security group ids
@@ -160,6 +139,36 @@ func createRDSSnapshot(ctx context.Context, instanceID string, dbEngine RDSDBEng
return output, nil
}

func createSnapshot(ctx context.Context, rdsCli *rds.RDS, snapshotID, dbEngine, instanceID string) (int64, error) {
log.WithContext(ctx).Print("Creating RDS snapshot", field.M{"SnapshotID": snapshotID})
var allocatedStorage int64
if !isAuroraCluster(dbEngine) {
dbSnapshotOutput, err := rdsCli.CreateDBSnapshot(ctx, instanceID, snapshotID)
if err != nil {
return allocatedStorage, errors.Wrap(err, "Failed to create snapshot")
}

// Wait until snapshot becomes available
log.WithContext(ctx).Print("Waiting for RDS snapshot to be available", field.M{"SnapshotID": snapshotID})
if err := rdsCli.WaitUntilDBSnapshotAvailable(ctx, snapshotID); err != nil {
return allocatedStorage, errors.Wrap(err, "Error while waiting snapshot to be available")
}
if dbSnapshotOutput.DBSnapshot != nil && dbSnapshotOutput.DBSnapshot.AllocatedStorage != nil {
allocatedStorage = *(dbSnapshotOutput.DBSnapshot.AllocatedStorage)
}
return allocatedStorage, nil
}
if _, err := rdsCli.CreateDBClusterSnapshot(ctx, instanceID, snapshotID); err != nil {
return allocatedStorage, errors.Wrap(err, "Failed to create cluster snapshot")
}

log.WithContext(ctx).Print("Waiting for RDS Aurora snapshot to be available", field.M{"SnapshotID": snapshotID})
if err := rdsCli.WaitUntilDBClusterSnapshotAvailable(ctx, snapshotID); err != nil {
return allocatedStorage, errors.Wrap(err, "Error while waiting snapshot to be available")
}
return allocatedStorage, nil
}

func (crs *createRDSSnapshotFunc) Exec(ctx context.Context, tp param.TemplateParams, args map[string]interface{}) (map[string]interface{}, error) {
// Set progress percent
crs.progressPercent = progress.StartedPercent
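The other technique used in this PR is visible above: the engine-specific branches move into a helper (createSnapshot), so the calling function keeps one flat call site and each branch inside the helper can return early. A reduced, hypothetical sketch of that shape; the interface is a stand-in, not the real rds.RDS client:

package example

import (
    "context"
    "fmt"
)

// snapshotter is a hypothetical stand-in for the RDS client used in the diff.
type snapshotter interface {
    CreateDBSnapshot(ctx context.Context, instanceID, snapshotID string) error
    CreateDBClusterSnapshot(ctx context.Context, instanceID, snapshotID string) error
}

// createSnapshotSketch keeps one early-returning branch per engine type, so the
// caller replaces a nested if/else with a single call plus an error check.
func createSnapshotSketch(ctx context.Context, cli snapshotter, snapshotID, instanceID string, isAurora bool) error {
    if !isAurora {
        if err := cli.CreateDBSnapshot(ctx, instanceID, snapshotID); err != nil {
            return fmt.Errorf("failed to create snapshot: %w", err)
        }
        return nil
    }
    if err := cli.CreateDBClusterSnapshot(ctx, instanceID, snapshotID); err != nil {
        return fmt.Errorf("failed to create cluster snapshot: %w", err)
    }
    return nil
}

The findSecurityGroupIDs helper added in pkg/function/restore_rds_snapshot.go further down follows the same idea.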
2 changes: 1 addition & 1 deletion pkg/function/restore_data_using_kopia_server.go
@@ -36,7 +36,7 @@ import (

const (
RestoreDataUsingKopiaServerFuncName = "RestoreDataUsingKopiaServer"
// SparseRestoreOption is the key for specifiying whether to do a sparse restore
// SparseRestoreOption is the key for specifying whether to do a sparse restore
SparseRestoreOption = "sparseRestore"
)

14 changes: 8 additions & 6 deletions pkg/function/restore_rds_snapshot.go
@@ -199,11 +199,7 @@ func restoreRDSSnapshot(
// If securityGroupID arg is nil, we will try to find the sgIDs by describing the existing instance
// Find security group ids
if sgIDs == nil {
if !isAuroraCluster(string(dbEngine)) {
sgIDs, err = findSecurityGroups(ctx, rdsCli, instanceID)
} else {
sgIDs, err = findAuroraSecurityGroups(ctx, rdsCli, instanceID)
}
sgIDs, err = findSecurityGroupIDs(ctx, rdsCli, instanceID, string(dbEngine))
if err != nil {
return nil, errors.Wrapf(err, "Failed to fetch security group ids. InstanceID=%s", instanceID)
}
@@ -236,6 +232,12 @@ func restoreRDSSnapshot(
RestoreRDSSnapshotEndpoint: dbEndpoint,
}, nil
}
func findSecurityGroupIDs(ctx context.Context, rdsCli *rds.RDS, instanceID, dbEngine string) ([]string, error) {
if !isAuroraCluster(dbEngine) {
return findSecurityGroups(ctx, rdsCli, instanceID)
}
return findAuroraSecurityGroups(ctx, rdsCli, instanceID)
}

//nolint:unparam
func postgresRestoreCommand(pgHost, username, password string, backupArtifactPrefix, backupID string, profile []byte, dbEngineVersion string) ([]string, error) {
@@ -336,7 +338,7 @@ func restoreAuroraFromSnapshot(ctx context.Context, rdsCli *rds.RDS, instanceID,
}

log.WithContext(ctx).Print("Creating DB instance in the cluster")
// After Aurora cluster is created, we will have to explictly create the DB instance
// After Aurora cluster is created, we will have to explicitly create the DB instance
dbInsOp, err := rdsCli.CreateDBInstance(
ctx,
nil,
2 changes: 1 addition & 1 deletion pkg/kube/pod.go
@@ -376,7 +376,7 @@ func checkPVCAndPVStatus(ctx context.Context, vol corev1.Volume, p *corev1.Pod,

switch pvc.Status.Phase {
case corev1.ClaimLost:
return errors.Errorf("PVC %s assoicated with pod %s has status: %s", pvcName, p.Name, corev1.ClaimLost)
return errors.Errorf("PVC %s associated with pod %s has status: %s", pvcName, p.Name, corev1.ClaimLost)
case corev1.ClaimPending:
pvName := pvc.Spec.VolumeName
if pvName == "" {
2 changes: 1 addition & 1 deletion pkg/kube/pod_controller.go
@@ -57,7 +57,7 @@ type PodController interface {

// podController keeps Kubernetes Client and PodOptions needed for creating a Pod.
// It implements the PodControllerProcessor interface.
// All communication with kubernetes API are done via PodControllerProcessor interface, which could be overriden for testing purposes.
// All communication with kubernetes API are done via PodControllerProcessor interface, which could be overridden for testing purposes.
type podController struct {
cli kubernetes.Interface
podOptions *PodOptions
2 changes: 1 addition & 1 deletion pkg/objectstore/objectstore.go
@@ -63,7 +63,7 @@ type Directory interface {
// DeleteDirectory deletes the current directory
DeleteDirectory(context.Context) error

// DeleteAllWithPrefix deletes all directorys and objects with a provided prefix
// DeleteAllWithPrefix deletes all directories and objects with a provided prefix
DeleteAllWithPrefix(context.Context, string) error

// ListDirectories lists all the directories rooted in