Don't panic when stdout doesn't exist #1014
Conversation
Signed-off-by: Peter Atashian <retep998@gmail.com>
# Detailed design
Change the internal implementation of `println!`, `print!`, `panic!`, and `assert!` to not `panic!` when `stdout` or `stderr` doesn't exist. When getting `stdout` or `stderr` through the `std::io` methods, those versions should continue to return an error if `stdout` or `stderr` doesn't exist.
To clarify, does this RFC propose ignoring all errors or just those related to a missing stdout/stderr? It sounds like the latter, but I just wanted to make sure.
Also, do you think that the `stdin` behavior should remain the same as it is today, or act as if it were an `io::empty` instance?
It proposes ignoring only a missing `stdout`/`stderr`.

I currently don't hold an opinion on what to do with `stdin`, considering there is no convenience macro for it, but if an argument can be made for something better than the current behavior, I'll update the RFC.
Right now there is an `unwrap` for Windows when getting a handle to the stdin of the process. This RFC would probably alter that to return something akin to `io::Empty`, though.
Why not return `Result` from those methods instead?
It's certainly an alternative! It breaks the signatures as-is today, however.
I’d find it pretty nice if … On the other hand, it becomes too easy to ignore other I/O errors that might arise.
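As a point of reference for that alternative: a program that wants an error value today can already bypass the macro and write to the handle directly. This is only a minimal sketch using the current `std::io` API, not something proposed by the RFC:

```rust
use std::io::{self, Write};

fn main() {
    // `println!` panics if the write to stdout fails; writing to the handle
    // directly surfaces the failure as an `io::Result` instead.
    if let Err(e) = writeln!(io::stdout(), "hello") {
        // The caller decides: log somewhere else, or deliberately ignore it.
        let _ = writeln!(io::stderr(), "could not write to stdout: {}", e);
    }
}
```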
What does "stdout does not exist" mean on Unix? Does it mean that there is no open file with fd 1? The problem of this situation is that newly opened files will be assigned fd 1 automatically (at least on Linux with both open() and fopen()), and then accidentally get used as stdout if stdout missing is checked more than once, or if C libraries are called, or if the files are not CLOEXEC and processes are spawned. So it might be worth thinking whether automatically opening /dev/null in fd 1 (and likewise fd 0 and 2) makes sense. There are problems though with this, namely the fact that it would need to be done at process init to be effective, automatically altering the fd table, and requiring /dev to be present (if /dev is not mounted, one could instead use pipe() and spawn a thread to read from the other end, although this is pretty horrible). This has potential security implications as well, since running a setuid program with no fd 0/1/2 could make it write data to random opened files (maybe something already mitigates this?). |
Signed-off-by: Peter Atashian <retep998@gmail.com>
Signed-off-by: Peter Atashian <retep998@gmail.com>
An alternative would be to provide some functions in the standard library to put the standard streams into a "known good" configuration. I don't think panicking on std* not being there is too problematic, provided there's a reasonably simple way to stop it from happening. Preferably one that works cross-platform and can dynamically detect the bad condition.

To bring up the situation with Python: I recall jumping through some rather questionable hoops to automatically detect missing standard streams.
This does not hold true if our process is daemonized.

My two cents: doing something automatically is un-rusty, because it will probably be the wrong thing in some situations. Thus leaving a possible (though unlikely) error to the user to handle (as in returning an `IoResult`) would be the most workable solution. We should also note that this could trigger an `unused_result` (or some such) warning depending on lint settings – perhaps we should suppress those by default.
I'm against having every single print call return an error that the user has to handle.

Perhaps a method can be added that tells you the state of stdout, stderr, and stdin, and another that lets you redirect them. This way, if they point to nothingness, you can choose to redirect the output to a file or something. Of course, they'd still default to what this RFC proposes, to prevent programs from panicking and then aborting when stdout and stderr aren't there.
Thumbs up for this RFC. Even stderr and stdin can be missing in some situations.
I don't think this is a problem. If the program is launched without stdout or stderr, then the user does not intend to see backtraces on the console. If such backtrace info is important, the program should write it to a file manually.
Wrong. A daemon has standard streams until it daemonizes. Also, on POSIX, an attacker may try to disturb a process by closing its stdio handles from the outside (those are available from /proc).
That's for the user to decide. Perhaps log a message to a file. Or do nothing. Or stop the world, because printing a result is the sole reason for this program to run. As I said, there is no right default solution. Splitting the print call and the …
Not necessarily – as I have outlined above, there may be other interference leading to loss of stdout/stdin/etc. Also, library code may print to stdout for some reason – while we certainly do not want it to crash, we should allow it to do something if an error was silenced.
If your program is being attacked by rogue processes that are trying to cut out your stdout, I think you have bigger problems than worrying about stdout.
Probably. My point was that …
Case in point: I recently had a long-running process (written in Java, if you must know) that I had started for the weekend. My colleague needed to change some settings, but it was running in my shell. So he hijacked my console (he has root access to the server) and sent some commands to the process. However, in doing so he killed stdout by mistake. The process threw a ClosedChannelException. There went my weekend.
Perhaps a combination of these things?
For …, I think it is always true that stdout is easy to use but not reliable. Even `printf(3)` can return a negative value to indicate an error (but it does not abort your program).
@retep998 Sounds good.
We also need to answer the question of whether there are any programs in the wild that depend on …
Well, there are three options for that.
It seems there is already a way to redirect stdio via …

I'm not sure how conditional redirects would work, since the only way to tell whether redirecting is needed is when an attempt is made to get …
The docs on …

And I think having the fallback install permanently on failure would be acceptable. This could also be implemented by a `Write` handler that attempts to write to the default stdout and, on failure, sets up the fallback channel and repeats the write (note that we may also want to extend this behavior to processes started from Rust but not necessarily written in Rust).
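A rough sketch of that fallback-writer idea, assuming it is written as an ordinary wrapper type (`FallbackStdout` is an illustrative name, not an existing API): it tries the real stdout first and, after the first failure, permanently routes writes to `io::sink()`:

```rust
use std::io::{self, Write};

// Writes to the real stdout until the first failure, then permanently falls
// back to `io::sink()`, which swallows the bytes but still reports success.
struct FallbackStdout {
    failed: bool,
}

impl Write for FallbackStdout {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        if !self.failed {
            match io::stdout().write(buf) {
                Ok(n) => return Ok(n),
                // Install the fallback for good and retry the write below.
                Err(_) => self.failed = true,
            }
        }
        io::sink().write(buf)
    }

    fn flush(&mut self) -> io::Result<()> {
        if self.failed { Ok(()) } else { io::stdout().flush() }
    }
}

fn main() {
    let mut out = FallbackStdout { failed: false };
    // Never panics: if stdout is unusable, the bytes are silently dropped.
    let _ = writeln!(out, "hello");
}
```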
To extend the behavior to processes merely started from a Rust process would involve piping their output to our Rust process (so the Rust process would have to live longer than the child), plus it would no longer be a genuine console handle, which on Windows is a significant difference that would impair the child process's ability to use fancy console features.
Yes, a piped handle is distinctly non-interactive (also under Linux). Isn't that a well-known tradeoff? I suppose one could write a terminal emulation in Rust, but it should not reside in `std::io`.
Note: this PR is in its Final Comment Period as of yesterday.
I'd really rather avoid modifying, by default, the behavior of the stdio we pass to child processes for the sake of protecting them. Child processes can protect themselves, and they should be responsible for cases where they inherit a missing stdout; otherwise we're just getting into the realm of overly obtrusive hand-holding.

As for modifying the return value of …

I think I'm pretty happy with this RFC just stopping your program from panicking (and then aborting) because someone decided to run your process without a console on Windows or as a daemon. If someone wants to handle the errors, they already have the option of using …
👍 from me then.
👍 from me. Even if the process has stdio set up, there's a small window of time during process shutdown when retrieval of the global handle will fail: https://github.com/rust-lang/rust/blob/master/src/libstd/io/stdio.rs#L241. It seems like it'd make sense to return a sink there as well instead of panicking.
This RFC looks like it has fairly broad consensus at this point, and it fits into the model of picking a reasonable set of defaults for behavior in the standard library. The standard I/O streams are already locked and buffered, so there's a good deal of machinery happening behind the scenes to start out with, and this is not necessarily adding an undue amount of extra burden. Furthermore, raw access to the stdio streams is still planned, which will bypass all of these points, allowing for finer-grained control of the output of a program. Thanks again for the RFC @retep998!
Rendered