Copying data to s3 from a named pipe using process substitution fails #6145
Comments
Hi @boltronics, Thanks for bringing this to our attention. I'll review with the team to get their thoughts on this.
Hi @boltronics, Thanks for your patience. I was looking through our docs and I probably just missed it, but can you provide a link to where it specifically says FIFO queues are skipped? I was able to find that we do support downloading to FIFOs, so that at least explains some of the behavior. This is something I'd like to have documented better, if possible. In terms of expectations, I understand the point you're making based on the examples above, but the only documented (and guaranteed to work) examples are the ones that reference the addition of file streaming capability in the s3 …
Hi @stobrien89, No problem; it's certainly not urgent since there is a work-around. I was trying to look up exit codes to handle in my script, which led me to this page: … It is there that it says: …
I don't normally have to reference the docs at all to use … The …
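The work-around itself isn't quoted in the thread; a likely candidate, given the CLI's documented support for streaming from stdin, is piping the data to `aws s3 cp -` instead of using process substitution. The bucket and key below are the ones from the report; `cat` is only a local stand-in for the upload, since no AWS credentials are assumed here:

```shell
# Documented streaming form: a lone `-` as the source makes `aws s3 cp`
# read the object body from stdin, so no FIFO path is ever opened:
#
#   echo "Test" | aws s3 cp - s3://my-bucket/test.txt
#
# The pipeline shape is plain stdin streaming; `cat` stands in for the
# upload so this sketch runs without AWS credentials:
echo "Test" | cat
```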
Hi @boltronics, Thanks for the additional info! That definitely clears things up. Like @kdaily mentioned in #6160, having a stricter compatibility mode with Unix commands/processes would be useful, but we do have a large number of Windows users, so that's something else we have to take into consideration. I'll mark this as a feature request for now, but I can't make any guarantees as to when/if something like this would be implemented.
Any news on it?
This is possible:
This is possible:
This is not possible:
and yet, this is possible:
This is inconsistent behaviour and thus not aligned with user expectations. The docs say FIFO queues are skipped, but that's clearly not always the case, since the last example above works. Strangely, even using `--include '*'` doesn't help matters.
SDK version number
aws-cli/1.19.1 Python/3.7.3 Linux/5.10.0-0.bpo.5-amd64 botocore/1.20.0
Platform/OS/Hardware/Device
Debian GNU/Linux 10, x86_64
To Reproduce (observed behavior)
$ aws s3 cp <(echo "Test") s3://my-bucket/test.txt
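For context on why this command trips the FIFO handling: process substitution hands `aws` a `/dev/fd` path backed by a pipe rather than a regular file, so the CLI's file-type check sees a FIFO. A quick way to confirm this (assuming bash on Linux, as in the report):

```shell
# <(...) expands to a /dev/fd/NN path whose target is a pipe, so a
# file-type test sees a FIFO, not a regular file.
ls -l <(echo "Test")                          # shows a /dev/fd/NN entry
[ -p <(echo "Test") ] && echo "pipe"          # prints: pipe
[ -f <(echo "Test") ] || echo "not a regular file"
```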
Expected behavior
I expect s3://my-bucket/test.txt to exist as an S3 object containing the text "Test".