S3 progress verbosity should be optional #519
Comments
Is there anybody out there?
I suppose this has been put on the back, back burner? The original comment is from 2013...
have you tried
This seems like something you should be able to solve with the Unix toolchain.
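For what it's worth, the toolchain approach presumably looks something like the sketch below; the sample output line is invented for illustration, and it assumes the progress updates are carriage-return terminated:

```shell
# Simulated "aws s3 sync" output: a \r-terminated progress update followed
# by a transfer record (both lines invented for illustration).
printf 'Completed 1 part(s) with 1 file(s) remaining\rupload: ./a.txt to s3://bucket/a.txt\n' |
  tr '\r' '\n' |        # progress updates end in \r; split them onto lines
  grep -v '^Completed'  # drop progress lines, keep transfer records
```

One caveat: grep -v exits non-zero when every line gets filtered out, which can make a healthy run look like a failure.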
Nice idea @beauhoyt but the inverted
It would be nice to see an option such as
@ddarbyson just make sure nothing goes wrong and you're good to go :)
@SirZach if software development could be so easy :)
Fixing it with a grep wrapper is a horrible, horrible solution. I'd find such an option very useful for reporting transmitted files, without having to embed grep wrappers into cron jobs.
I have a very long-running sync that has to be restarted. Is there any way to have s3 sync report where it is while verifying objects that have already been synced? I am only getting the "Completed 0 part(s) with ... file(s) remaining" message.
Has anyone taken a stab at fixing this? I have a cron job backing up a 2.5 GB file to S3, and it generates nearly 10,000 lines of this in my logs, which I would love to get rid of:

Completed 1.0 MiB/2.5 GiB (4.9 MiB/s) with 1 file(s) remaining

--quiet and --only-show-errors don't seem to make any difference in hiding the progress.
I'm not seeing this problem. --only-show-errors is doing what I expect.
Is your awscli up to date?
@nkadel-skyhook I believe what is expected is to show the success messages, without the progress.

observed:

expected:
Ok, I was able to find a workaround, but it's not the most elegant... You get the actual copy message in syslog, successful or not, without the progress information.
@jrottenberg I wouldn't consider feeding the output of an aws s3 sync to "logger" to be a reasonable approach. System logs are not necessarily visible to non-root users. I would like an option similar to rsync's "-v" setting, which provides a simple record of each file as it is transferred, with no churning report of the progress of each transfer. awscli does not currently have such an option, as best I can tell. It's an all-or-nothing setting.
Ah, the logger part is just to avoid redirecting to a file (which can fill up the disk, has no timestamps, etc.: issues that syslog addresses for me, and I have awslogs for my users); the main point is the tr and grep -v.
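A sketch of the workaround as described (tr converts the carriage-return progress updates to newlines, grep -v drops them; the aws invocation, paths, and syslog tag are illustrative and shown only as a comment):

```shell
# Progress filter as described above: tr splits the \r-terminated progress
# updates onto their own lines, grep -v drops them.
strip_progress() { tr '\r' '\n' | grep -v '^Completed'; }

# Intended use (bucket, path, and tag are illustrative):
#   aws s3 sync /data s3://my-bucket/data 2>&1 | strip_progress | logger -t s3sync

# Demonstration on simulated sync output:
printf 'Completed 3 of 9 part(s)\rupload: a.txt to s3://my-bucket/a.txt\n' | strip_progress
```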
+1, would like this implemented. I work for a fairly large org and we use AWS services extensively. Scheduling backups to S3 is a pain in the ass because awscli s3 sync generates unreadable garbage output when redirected to a file, making it impossible to verify or detect when something goes wrong. For us
The option of piping the output through the scripting reported above is useful. As a pure matter of form, I'd urge using ' or " consistently, rather than mixing them, so I'd use this below.
This also avoids the case where grep fails to see any text that does not say '^Completed', and therefore reports an error, as with the original tool. The remaining danger is that this pipes the output through a 'sed' command and can return the exit status of the 'sed' command, not the status of a failed 'aws s3 sync' command. To avoid missing error reports, precede the command in a 'bash' shell script with 'set -o pipefail'. Something like this:
Autocorrect kept trying to change "pipefail" to "pipefile"!
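A sketch of the pipefail point, with a simulated failing command standing in for the real aws s3 sync (the real invocation, shown in a comment, is illustrative):

```shell
#!/bin/bash
# Without pipefail, the pipeline's exit status is grep's (0 here), masking
# the failure of the first command; pipefail propagates the failure.
set -o pipefail

# Simulated failing sync: one progress line, one transfer line, exit 1.
# A real invocation would replace this subshell, e.g.:
#   aws s3 sync /data s3://my-bucket/data 2>&1
( printf 'Completed 1 part(s)\rupload: b.txt to s3://my-bucket/b.txt\n'; exit 1 ) |
  tr '\r' '\n' |
  grep -v '^Completed'
echo "pipeline status: $?"
```

With pipefail set, the echoed status is 1 (the simulated sync's failure) rather than grep's 0.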
Another +1 for this. |
Totally necessary IMO. I run aws s3 sync in a daily cron job, and the progress output makes monitoring it via cron's mail a nightmare.
+1 for this |
+1. I can't believe that 4 years later someone has picked this up, only to have it stalled with "needs-review" status on pull request #2747. It's rather tragic that it took so long for someone to do the work, and now we wait (again) for someone to review and approve. I was going to pull down the code and attempt it myself.
Yes please, +1 |
Implemented in #2747. You should now be able to specify the --no-progress option.
It looks like progress messages are sent to stdout and errors to stderr. Sending stdout to /dev/null works for me: aws s3 cp a b > /dev/null |
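The stdout/stderr split described there can be sanity-checked with a stand-in function (invented for illustration; it only mimics the behavior described, it is not the real CLI):

```shell
# Stand-in for "aws s3 cp": progress goes to stdout, an error to stderr
# (mimics the split described above; not the real CLI).
s3cp_sim() {
  echo 'Completed 1.0 MiB/2.5 GiB (4.9 MiB/s) with 1 file(s) remaining'
  echo 'upload failed: simulated error' >&2
}

# Discard the progress stream, keep only errors on stderr,
# as in: aws s3 cp a b > /dev/null
s3cp_sim > /dev/null
```

Only the error line survives the redirect, which is why this works for quiet cron logs.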
When using aws s3 sync, the verbose output will provide updates like "Completed 101 of 109 part(s) with 4 file(s) remaining". When piping the output to a log file, these lines are written into it as shown. Log files would read much better if the "Completed ... of ... part(s)" lines were not displayed.
Having a new --[no-]progress option for the S3 commands would be nice.