Simplify cloud credentials mounting #548
Conversation
Thanks @PhilippeMoussalli!
docs/pipeline.md (Outdated)

```diff
@@ -98,11 +98,12 @@ during component development.

 The local runner will try to check if the `base_path` of the pipeline is a local or remote storage. If it's local, the `base_path` will be mounted as a bind volume on every service/component.

-If you want to use remote paths (GCS, S3, etc.) you can use the `--extra_volumes` argument to mount extra credentials. This volume will be mounted to every component/service of the docker-compose spec.
+If you want to use remote paths (GCS, S3, etc.) you can use the `--auth-gcp`, `--auth-aws` or `--auth-azure`.
+This will mount your default local cloud credentials to the pipeline. You can also use the `--extra_volumes` argument to mount extra credentials or additional files.
```
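The `--auth-*` behaviour described in the diff boils down to mapping each provider's default local credential directory to a bind volume on every service in the docker-compose spec. A minimal sketch of that mapping is below; the flag names come from the diff, but the host paths and the read-only volume format are illustrative assumptions based on common cloud SDK defaults, not the runner's actual implementation.

```python
# Illustrative sketch: map an --auth-* provider to a docker-compose volume
# spec that mounts the SDK's default credential directory read-only.
# The host paths below are common SDK defaults (assumptions, not actual source).

DEFAULT_CREDENTIAL_PATHS = {
    "gcp": "~/.config/gcloud",   # gcloud CLI default config dir
    "aws": "~/.aws",             # AWS CLI credentials/config dir
    "azure": "~/.azure",         # Azure CLI profile dir
}

def auth_volume(provider: str) -> str:
    """Return a docker-compose short-syntax volume spec mounting the
    provider's default credential directory read-only into the container."""
    host_path = DEFAULT_CREDENTIAL_PATHS[provider]
    # Mount at the same relative location under /root inside the container.
    return f"{host_path}:{host_path.replace('~', '/root')}:ro"

print(auth_volume("gcp"))  # ~/.config/gcloud:/root/.config/gcloud:ro
```

Mounting read-only (`:ro`) keeps the container from modifying host credentials; `--extra_volumes` would simply append further specs of the same shape.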
Maybe we need to make it more explicit that users should make sure to authenticate correctly locally beforehand?
Good call, added it.
Addresses #256