The problem of drawing frame images on the canvas #6389
Comments
HLS.js does not provide methods for interacting with HTMLMediaElement that are already available as part of the Web API.
Is there any other way for hls.js to play on browsers that do not support MSE? For example, using canvas and AudioContext?
HLS.js only uses MSE.
Drawing a VideoFrame via canvas and playing audio frames (PCM data) via WebAudio is not directly related to hls.js (or any other MSE-based streaming library), and it is achievable in various ways depending on your actual needs. The only problem is that you get the YUV data only when the frame is being rendered, which is suitable for postprocessing.
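To illustrate the kind of postprocessing mentioned above, here is a minimal sketch of a per-pixel YUV-to-RGB conversion. The function name and the choice of BT.601 full-range coefficients are illustrative assumptions, not part of hls.js or any Web API:

```javascript
// Convert one 8-bit YUV pixel (BT.601, full range) to RGB.
// Illustrative helper only; not part of hls.js or the Web API.
function yuvToRgb(y, u, v) {
  const clamp = (x) => Math.max(0, Math.min(255, Math.round(x)));
  return {
    r: clamp(y + 1.402 * (v - 128)),
    g: clamp(y - 0.344136 * (u - 128) - 0.714136 * (v - 128)),
    b: clamp(y + 1.772 * (u - 128)),
  };
}

// Neutral gray stays gray:
// yuvToRgb(128, 128, 128) → { r: 128, g: 128, b: 128 }
```

In practice you would run a loop like this (or a WebGL shader) over the planes of each decoded frame before writing the result to a canvas.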
Can I obtain a buffer for processing during hls.js decoding?
First of all, hls.js and similar libraries do not provide "decoding" functionality; video decoding is not normally exposed to the JS context. MSE, likewise, is not a standalone decoder: think of it as a source provider that lets web developers customize the way media data is "streamed" to the browser. From your previous description I think you are on the wrong track. If the device/OS does not support MSE, it likely does not support any of the newer decoding/rendering APIs either, so relying on native HLS support is probably your only choice.

And if you really just want to control the decoding/rendering process:

Simple solution: no.

Complex solution: yes. You can build a custom MSE and a custom HTMLVideoElement using WebCodecs or even WASM; as long as you follow the MSE spec, you can then modify hls.js to use your custom modules. In that case hls.js will push remuxed fMP4 segments to your MSE interface, and you can do your work after that (e.g. use WebCodecs for decoding, output YUV data, and manage your own frame buffer before sending frames to a canvas).
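As a sketch of the "manage your own frame buffer" step above, a decoded-frame queue might keep frames sorted by presentation timestamp and release whichever frame is due for the current playback clock. The class and field names here are illustrative assumptions, not hls.js or WebCodecs API:

```javascript
// Minimal decoded-frame queue. Assumes each frame object carries a
// `timestamp` in microseconds (as a WebCodecs VideoFrame does).
// Illustrative sketch only; not part of hls.js.
class FrameQueue {
  constructor() {
    this.frames = [];
  }

  // Insert a frame, keeping the queue sorted by timestamp
  // (decoders may emit frames out of presentation order).
  push(frame) {
    this.frames.push(frame);
    this.frames.sort((a, b) => a.timestamp - b.timestamp);
  }

  // Return the latest frame whose timestamp is <= clockUs, dropping
  // any older frames that arrived too late to display. Real code
  // should call close() on dropped VideoFrame objects to free memory.
  takeDue(clockUs) {
    let due = null;
    while (this.frames.length && this.frames[0].timestamp <= clockUs) {
      due = this.frames.shift();
    }
    return due;
  }
}
```

A caller would typically invoke `takeDue` once per requestAnimationFrame tick and draw the returned frame to a canvas.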
Is your feature request related to a problem? Please describe.
None
Describe the solution you'd like
hls.js displays video using MediaSource combined with a video element, but is there a way to obtain the YUV data for each frame, and then play the video using Canvas and AudioContext?
Additional context
No response
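For the audio side of the pipeline discussed above: if decoding yields interleaved 16-bit PCM, it has to be converted to the planar Float32 samples in the [-1, 1] range that Web Audio's AudioBuffer expects. A minimal sketch (the function name is an illustrative assumption):

```javascript
// Deinterleave signed 16-bit PCM into per-channel Float32 planes,
// scaled to the [-1, 1] range used by Web Audio's AudioBuffer.
// Illustrative helper only; not part of hls.js.
function int16ToPlanarFloat32(int16, channels) {
  const framesPerChannel = int16.length / channels;
  const planes = [];
  for (let ch = 0; ch < channels; ch++) {
    const plane = new Float32Array(framesPerChannel);
    for (let i = 0; i < framesPerChannel; i++) {
      plane[i] = int16[i * channels + ch] / 32768;
    }
    planes.push(plane);
  }
  return planes;
}
```

In a browser, each plane would then be copied into an AudioBuffer with `copyToChannel` and scheduled for playback through an AudioBufferSourceNode.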