Add clippy.toml to disallow methods #204
Conversation
Apparently some of this code wasn't formatted before. I'll leave it in the newly formatted state.
```diff
@@ -40,7 +40,7 @@ pub struct CompressionSessionConfig {

 impl<C: Send> CompressionSession<C> {
     pub fn new(config: CompressionSessionConfig) -> Result<Self, OSStatus> {
-        let (tx, rx) = mpsc::channel();
+        let (tx, rx) = mpsc::sync_channel(1024);
```
---
The best thing I can say about the previous behavior is that the unbounded growth probably wouldn't happen. The only use of this I see is through `VideoEncoder`, which enforces alternating `send` and `try_recv` calls. (Also, it requests real-time and no frame reordering, which should minimize how many frames Video Toolbox buffers.) So the only way I can see excessive growth is surprisingly high latency within Video Toolbox followed by a bunch of frames completing at once, not just the caller receiving less often than it encodes.
As for the new behavior...
- 1024 frames is better than unlimited, but it's still a lot: 17 seconds even at 60 fps. Like, 1 second should be plenty when we're aiming for real-time. Note the `VideoEncoder` caller knows the expected frame rate.
- What happens if the `callback` blocks? Apple's docs mention "the system may call this function asynchronously, on a different thread from the one that calls VTCompressionSessionEncodeFrame", but that's not as specific as I was hoping. Do you see anything that talks more about this? I wonder if we should be using `try_send` and storing an error status if it fails, or something, then looking for that status later (sketched below).
Similar questions about the decode session code.
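To make the `try_send` idea concrete, here's a minimal sketch of what I have in mind; `CallbackContext`, `deliver`, and the field names are made up for illustration, not this crate's actual API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc;

// Hypothetical sketch: the Video Toolbox callback never blocks; a failed
// send is recorded so the receiving side can surface an error later.
struct CallbackContext<F> {
    tx: mpsc::SyncSender<F>,
    send_failed: AtomicBool,
}

impl<F> CallbackContext<F> {
    fn deliver(&self, frame: F) {
        if self.tx.try_send(frame).is_err() {
            // Channel full or disconnected: drop the frame rather than
            // block whatever thread Video Toolbox called us on.
            self.send_failed.store(true, Ordering::Relaxed);
        }
    }
}
```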
---
Blocking behavior is totally unspecified in the docs. I was definitely a bit overzealous with 1024; how about 120? I know it's still pretty large, but I'd prefer to avoid blocking or dropping data if at all possible, and I think blocking is probably preferable to dropping data.
---
I hesitate because I'm not sure what the "different thread" might be, or what the consequences are of blocking it. Does it handle other video streams? Is it some general-purpose async thread with non-video responsibilities? We could try to answer this experimentally; it shouldn't be hard to hack the callback to abort, run the test under a debugger, and see what thread died. But the answer could change between calls or between OS versions anyway.

I think it might actually be better to cause the compression session to fail (all subsequent receive operations return an error), because that's a clear, well-understood failure mode. Something like the sketch below.
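With std's `mpsc` channels that failure mode falls out naturally: once the sending side is dropped, every subsequent receive reports `Disconnected`. A standalone demonstration (not this crate's code):

```rust
use std::sync::mpsc::{sync_channel, TryRecvError};

fn main() {
    let (tx, rx) = sync_channel::<u32>(1);
    tx.send(1).unwrap();

    // Simulate the failure path: the callback side gives up and drops
    // its sender instead of blocking.
    assert!(tx.try_send(2).is_err()); // buffer full; send() would block
    drop(tx);

    // The receiver drains what was buffered, then sees a clear,
    // well-understood error on every subsequent receive.
    assert_eq!(rx.try_recv(), Ok(1));
    assert_eq!(rx.try_recv(), Err(TryRecvError::Disconnected));
}
```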
---
Sure, that makes sense to me. I'll author a commit changing to that behavior.
---
On the `decompression_session` side, I noticed the channel can only ever hold one value, as it's effectively a oneshot channel. So I swapped it out for an `Arc<Mutex<Option<_>>>` to reflect that, as sketched below.
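A minimal sketch of that slot pattern, assuming the callback only ever stores one value per decode call; all names here are illustrative, not the crate's actual API:

```rust
use std::sync::{Arc, Mutex};

// One-value "slot" shared between the decode callback and the caller.
type FrameSlot<F> = Arc<Mutex<Option<F>>>;

// Called from the decompression callback: store the single result.
fn deliver<F>(slot: &FrameSlot<F>, frame: F) {
    *slot.lock().unwrap() = Some(frame);
}

// Called by the caller after decoding: take the result, leaving None.
fn take<F>(slot: &FrameSlot<F>) -> Option<F> {
    slot.lock().unwrap().take()
}
```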
```rust
}));
if r.is_err() {
    // Send correct data, or send nothing at all. Do not block.
    ctx.frames = None;
```
---
It doesn't look like `VideoEncoder::next_video_encoder_trait_frame` will return an error when this happens:

```rust
fn next_video_encoder_trait_frame(&mut self) -> Result<Option<VideoEncoderOutput<F>>, OSStatus> {
    // `.ok()` discards the `TryRecvError`, so a disconnected channel is
    // indistinguishable from an empty one here.
    Ok(match self.sess.frames().try_recv().ok().transpose()? {
        Some(frame) => { /* ... */ }
        None => None,
    })
}
```

Shouldn't it distinguish `TryRecvError::Empty` (expected) from `TryRecvError::Disconnected` (this new error)?
And though the `.frames()` interface here isn't directly used anywhere else that I can see, we should still probably document what it means when the caller observes a disconnect before dropping the `VideoEncoder`. The sketch below shows the distinction I mean.
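For concreteness, a sketch of the distinction; `SessionError` and the channel payload type are illustrative stand-ins, not this crate's actual types:

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

// Illustrative error type: the existing OS-status case plus the new
// disconnect case.
enum SessionError {
    Os(i32),      // an OSStatus reported by Video Toolbox
    Disconnected, // the callback side failed and dropped its sender
}

fn next_frame<F>(rx: &Receiver<Result<F, i32>>) -> Result<Option<F>, SessionError> {
    match rx.try_recv() {
        Ok(Ok(frame)) => Ok(Some(frame)),
        Ok(Err(status)) => Err(SessionError::Os(status)),
        // Expected: nothing ready yet; the caller should try again later.
        Err(TryRecvError::Empty) => Ok(None),
        // New: the session failed; surface it instead of swallowing it.
        Err(TryRecvError::Disconnected) => Err(SessionError::Disconnected),
    }
}
```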
---
Hm, yeah, you're right. Making this distinction will require breaking the API, because the current Error type doesn't account for errors that aren't from the OS. I'll add a commit that does it anyway.
---
Resolved in the two most recent commits.
---
Had to look a couple of times at the decode stuff: I thought it was asynchronous like the encode path. I see now that we are not setting `kVTDecodeFrame_EnableAsynchronousDecompression`, so it is guaranteed to be synchronous. I suppose even having a mutex there isn't strictly necessary then, but it seems fine to me and may avoid a lifetime fight with the compiler. The sketch below shows why the synchronous guarantee makes the slot pattern work.
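A standalone sketch of that reasoning, assuming the callback runs before the decode call returns when `kVTDecodeFrame_EnableAsynchronousDecompression` is unset; everything except that flag is illustrative:

```rust
use std::sync::{Arc, Mutex};

// `decode` stands in for the VT decode call: with asynchronous
// decompression disabled, the callback has stored into `slot` by the
// time it returns.
fn decode_one<F>(
    slot: &Arc<Mutex<Option<F>>>,
    decode: impl FnOnce() -> Result<(), i32>, // i32 stands in for OSStatus
) -> Result<Option<F>, i32> {
    decode()?;
    // Safe to take immediately: no other thread is racing us here, which
    // is why the Mutex is belt-and-suspenders rather than load-bearing.
    Ok(slot.lock().unwrap().take())
}
```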