
artifact: protect against unbounded artifact decompression (1.5.0) #16151

Merged
2 commits merged on Feb 14, 2023

Conversation

@shoenig (Member) commented on Feb 13, 2023

This is a forward-port of #16126.

Targets main/1.5.0 only.

Starting with 1.5.0, we set default values for artifact decompression limits:

- artifact.decompression_size_limit (default "100GB") - the maximum amount of data that will be decompressed before triggering an error and cancelling the operation.
- artifact.decompression_file_count_limit (default 4096) - the maximum number of files that will be decompressed before triggering an error and cancelling the operation.

Note: these limits simply "seem reasonable". Download sizes are already limited to 100GB by default, and 4096 seems like enough files for most use cases without being unbounded.
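For illustration only, here is a minimal, hypothetical Go sketch of the general technique: count decompressed bytes and files while walking a gzip-compressed tar archive, and abort as soon as either limit would be exceeded. This is not the code from this PR; the function name extractWithLimits is made up, and the sketch discards the extracted data rather than writing files, to keep it short.

```go
// Hypothetical sketch of bounded decompression; not Nomad's implementation.
package main

import (
	"archive/tar"
	"compress/gzip"
	"errors"
	"fmt"
	"io"
	"os"
)

var (
	errSizeLimit  = errors.New("decompression exceeded size limit")
	errCountLimit = errors.New("decompression exceeded file count limit")
)

// extractWithLimits stops as soon as either limit would be exceeded,
// mirroring the "trigger an error and cancel the operation" behavior
// described above. sizeLimit is in bytes.
func extractWithLimits(archive io.Reader, sizeLimit int64, fileCountLimit int) error {
	gz, err := gzip.NewReader(archive)
	if err != nil {
		return err
	}
	defer gz.Close()

	tr := tar.NewReader(gz)
	var written int64
	var files int

	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil // end of archive, both limits respected
		}
		if err != nil {
			return err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue
		}
		files++
		if files > fileCountLimit {
			return errCountLimit
		}
		// Read at most the remaining byte budget plus one byte, so an
		// overrun is detected without decompressing the whole entry.
		n, err := io.Copy(io.Discard, io.LimitReader(tr, sizeLimit-written+1))
		if err != nil {
			return err
		}
		written += n
		if written > sizeLimit {
			return errSizeLimit
		}
	}
}

func main() {
	f, err := os.Open("artifact.tar.gz") // placeholder archive path
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	// 100 GB and 4096 files, matching the defaults described in this PR.
	fmt.Println(extractWithLimits(f, 100*1024*1024*1024, 4096))
}
```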

@tgross (Member) left a comment:

LGTM. I've left one question but that's just for my understanding and not a blocker

@@ -166,6 +186,22 @@ func (a *ArtifactConfig) Validate() error {
 		return fmt.Errorf("s3_timeout must be > 0")
 	}
 
+	if a.DecompressionFileCountLimit == nil {
@tgross (Member) commented on the diff:

Can this value (and DecompressionSizeLimit on line 192) ever be nil in 1.5.0? If it's not set in the user's configuration, we should be falling back to the default value. Is this just belt-and-suspenders against a NPE when we ParseBytes later?

@shoenig (Member, Author) replied:

Indeed, it "can't" ever be nil, because we only create one of these from a DefaultConfig() helper. I think the validation here should not set a default at all, and should simply fail if the value is nil.

Good catch!
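A hedged sketch of the direction discussed above, assuming the limit fields are pointers (as the nil checks in the diff suggest) and that the size string is parsed with go-humanize's ParseBytes; the struct layout, error messages, and main function are assumptions for illustration, not the code merged here.

```go
// Hypothetical sketch, not the merged code: Validate() rejects a nil limit
// outright instead of silently falling back to a default, since the config
// is expected to be constructed via a DefaultConfig() helper.
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

type ArtifactConfig struct {
	DecompressionSizeLimit      *string // human-readable size, e.g. "100GB" (assumed field type)
	DecompressionFileCountLimit *int    // e.g. 4096 (assumed field type)
}

func (a *ArtifactConfig) Validate() error {
	if a.DecompressionSizeLimit == nil {
		return fmt.Errorf("decompression_size_limit must not be nil")
	}
	// Parsing here also guards against a nil-pointer panic when the value
	// is converted to bytes later on.
	if _, err := humanize.ParseBytes(*a.DecompressionSizeLimit); err != nil {
		return fmt.Errorf("decompression_size_limit is invalid: %w", err)
	}
	if a.DecompressionFileCountLimit == nil {
		return fmt.Errorf("decompression_file_count_limit must not be nil")
	}
	if *a.DecompressionFileCountLimit <= 0 {
		return fmt.Errorf("decompression_file_count_limit must be > 0")
	}
	return nil
}

func main() {
	size := "100GB"
	count := 4096
	cfg := &ArtifactConfig{
		DecompressionSizeLimit:      &size,
		DecompressionFileCountLimit: &count,
	}
	fmt.Println(cfg.Validate()) // <nil> with the 1.5.0 defaults
}
```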
