
Add clippy.toml to disallow methods #204

Merged
merged 10 commits into main from clippy on Aug 20, 2024
Conversation

Xaeroxe (Contributor) commented Aug 13, 2024

No description provided.

Xaeroxe (Contributor Author) commented Aug 15, 2024

Apparently some of this code wasn't already formatted. I'll leave it in the newly formatted state.

@Xaeroxe Xaeroxe requested a review from scottlamb August 15, 2024 18:25
@@ -40,7 +40,7 @@ pub struct CompressionSessionConfig {

 impl<C: Send> CompressionSession<C> {
     pub fn new(config: CompressionSessionConfig) -> Result<Self, OSStatus> {
-        let (tx, rx) = mpsc::channel();
+        let (tx, rx) = mpsc::sync_channel(1024);
scottlamb (Collaborator):

The best thing I can say about the previous behavior is that the unbounded growth probably wouldn't happen. The only use of this I see is through VideoEncoder, which enforces alternating send and try_recv calls. (Also, it requests real-time and no frame reordering, which should minimize how many frames Video Toolbox buffers.) So the only way I can see excessive growth is surprisingly high latency within Video Toolbox followed by a bunch of frames completing at once, not just the caller failing to call receive as often as it calls encode.

As for the new behavior...

  • 1024 frames is better than unlimited but is still a lot: 17 seconds even at 60 fps. Like, 1 second should be plenty when we're aiming for real-time. Note the VideoEncoder caller knows the expected frame rate.
  • What happens if the callback blocks? Apple's docs mention "the system may call this function asynchronously, on a different thread from the one that calls VTCompressionSessionEncodeFrame", but that's not as specific as I was hoping. Do you see anything that talks more about this? I wonder if we should be using try_send and storing an error status if it fails, or something along those lines, and checking for that later (see the sketch below).

Similar questions about the decode session code.
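
(For concreteness, a rough sketch of the try_send idea mentioned above; the names CallbackState and failed are illustrative and not from this crate.)

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::mpsc::{SyncSender, TrySendError};
    use std::sync::Arc;

    // Illustrative only: deliver output from the Video Toolbox callback without
    // ever blocking it, and record a flag the caller can check later.
    struct CallbackState<T> {
        tx: SyncSender<T>,
        failed: Arc<AtomicBool>,
    }

    impl<T> CallbackState<T> {
        fn deliver(&self, frame: T) {
            match self.tx.try_send(frame) {
                Ok(()) => {}
                // Full: the bounded channel overflowed. Disconnected: the receiver
                // is gone. Either way, don't block; just remember that we failed.
                Err(TrySendError::Full(_)) | Err(TrySendError::Disconnected(_)) => {
                    self.failed.store(true, Ordering::Relaxed);
                }
            }
        }
    }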

Xaeroxe (Contributor Author):

Blocking behavior is totally unspecified in the docs. I was definitely a bit overzealous with 1024; how about 120? I know it's still pretty large, but I'd prefer to avoid blocking or dropping data if at all possible. I think blocking is probably preferable to dropping data.

scottlamb (Collaborator):

I hesitate because I'm not sure what the "different thread" might be, or what the consequences are of blocking it. Does it handle other video streams? Is it some general-purpose async thread with non-video responsibilities? We could try to answer this experimentally—shouldn't be hard to hack the callback to abort, run the test under a debugger, and see what thread died. But the answer could change between calls or between OS versions or whatever anyway.

I think it actually might be better to cause the compression session to fail (all subsequent receive operations receive an error), because that will be a clear, well-understood failure mode.
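
(That failure mode maps directly onto std::sync::mpsc semantics: once the sending half is dropped, every later receive reports a disconnect. A minimal sketch, not the crate's actual code:)

    use std::sync::mpsc::{self, TryRecvError};

    fn main() {
        let (tx, rx) = mpsc::sync_channel::<u32>(4);
        tx.send(1).unwrap();
        // Simulate the session failing: drop the sender on the callback side.
        drop(tx);

        // Anything already buffered is still delivered...
        assert_eq!(rx.try_recv(), Ok(1));
        // ...and every receive after that is a clear, well-understood error.
        assert_eq!(rx.try_recv(), Err(TryRecvError::Disconnected));
    }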

Xaeroxe (Contributor Author):

Sure, that makes sense to me. I'll author a commit changing to that behavior.

Xaeroxe (Contributor Author):

On the decompression_session side, I noticed the channel can only ever hold one value, as it's effectively a oneshot channel. So I swapped it out for an Arc<Mutex<Option<_>>> to reflect that.
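
(Roughly the shape of that substitution, assuming the callback fires at most once per decode call; the names below are illustrative, not the crate's.)

    use std::sync::{Arc, Mutex};

    // A shared slot standing in for a oneshot channel: the decompression
    // callback writes the single result, and the caller takes it afterwards.
    type ResultSlot<T> = Arc<Mutex<Option<T>>>;

    fn store_result<T>(slot: &ResultSlot<T>, value: T) {
        // Called from the decode callback; the decode is synchronous, so this
        // lock is uncontended in practice.
        *slot.lock().unwrap() = Some(value);
    }

    fn take_result<T>(slot: &ResultSlot<T>) -> Option<T> {
        slot.lock().unwrap().take()
    }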

}));
if r.is_err() {
    // Send correct data, or send nothing at all. Do not block.
    ctx.frames = None;
scottlamb (Collaborator):

It doesn't look like VideoEncoder::next_video_encoder_trait_frame will return an error when this happens:

    fn next_video_encoder_trait_frame(&mut self) -> Result<Option<VideoEncoderOutput<F>>, OSStatus> {
        Ok(match self.sess.frames().try_recv().ok().transpose()? {
            Some(frame) => { /* ... */ }
            None => None,
        })
    }

Shouldn't it distinguish TryRecvError::Empty (expected) from TryRecvError::Disconnected (this new error)?

Also, though the .frames() interface here isn't directly used anywhere else that I can see, we should still probably document what it means when the caller observes a disconnect before dropping the VideoEncoder.
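
(For illustration, one way the distinction could be surfaced; VideoEncoderOutput and the error type here are stand-ins rather than the crate's exact signatures.)

    use std::sync::mpsc::{Receiver, TryRecvError};

    // Stand-in types for illustration only.
    struct VideoEncoderOutput;
    enum EncoderError {
        Os(i32),
        Disconnected,
    }

    fn next_output(
        rx: &Receiver<Result<VideoEncoderOutput, i32>>,
    ) -> Result<Option<VideoEncoderOutput>, EncoderError> {
        match rx.try_recv() {
            // A frame, or an OS error reported through the channel.
            Ok(Ok(frame)) => Ok(Some(frame)),
            Ok(Err(status)) => Err(EncoderError::Os(status)),
            // Nothing ready yet: the expected, non-error case.
            Err(TryRecvError::Empty) => Ok(None),
            // The sending side gave up: surface it instead of swallowing it.
            Err(TryRecvError::Disconnected) => Err(EncoderError::Disconnected),
        }
    }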

Xaeroxe (Contributor Author):

Hm, yeah you're right. Making this distinction will require breaking the API because the current Error type doesn't account for errors that aren't from the OS. I'll add a commit that does it anyways.
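
(A hedged guess at what a widened error type could look like; the actual commits may differ.)

    // Illustrative only: an error type that can represent both OS-level failures
    // and a torn-down session.
    #[derive(Debug, Clone, Copy)]
    pub struct OSStatus(pub i32); // stand-in for the crate's OSStatus type

    #[derive(Debug)]
    pub enum Error {
        Os(OSStatus),
        /// The callback could not deliver output and the session was shut down.
        Disconnected,
    }

    // A From impl lets existing `?` conversions from OSStatus keep compiling.
    impl From<OSStatus> for Error {
        fn from(status: OSStatus) -> Self {
            Error::Os(status)
        }
    }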

Xaeroxe (Contributor Author):

Resolved in the two most recent commits

@Xaeroxe Xaeroxe requested a review from scottlamb August 19, 2024 20:12
scottlamb (Collaborator) left a comment:

Had to look a couple times at the decode stuff: I thought it was asynchronous like the encode. I see now that we are not setting kVTDecodeFrame_EnableAsynchronousDecompression so it is guaranteed to be sync. I suppose even having a mutex there isn't strictly necessary then, but it seems fine to me and may avoid a lifetime fight with the compiler.

@Xaeroxe Xaeroxe merged commit 89c92f9 into main Aug 20, 2024
5 checks passed
@Xaeroxe Xaeroxe deleted the clippy branch August 20, 2024 17:02