System info
Operating System: Ubuntu Focal
Shaka Packager Version: v2.6.1
Issue and steps to reproduce the problem
Packager Command:
packager 'input=opus_96.opus.webm,stream=0,init_segment=http://localhost:35559/http/init.webm,segment_template=http://localhost:35559/http/$Number%05d$.webm'
Extra steps to reproduce the problem?
(1) Spin up a local HTTP server that supports chunked transfer encoding (a minimal test server sketch follows below).
(2) Any supported input file will probably do, but I have only tested with Opus in WebM so far.
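For step (1), a minimal sketch of such a test server, using plain POSIX sockets (this is an illustrative stand-in, not a server used by the project; the port number is taken from the packager command above):
```c++
// Minimal repro server: accepts HTTP/1.1 PUT requests with
// Transfer-Encoding: chunked and logs how many body bytes each request
// carried. POSIX sockets, no error recovery; illustrative only.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdio>
#include <cstdlib>
#include <string>

// Reads a single byte; returns false on EOF or error.
static bool ReadByte(int fd, char* c) { return read(fd, c, 1) == 1; }

// Reads up to and including "\r\n"; returns the line without the CRLF.
static std::string ReadLine(int fd) {
  std::string line;
  char c;
  while (ReadByte(fd, &c)) {
    if (c == '\n') break;
    if (c != '\r') line.push_back(c);
  }
  return line;
}

int main() {
  int server = socket(AF_INET, SOCK_STREAM, 0);
  int opt = 1;
  setsockopt(server, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = htons(35559);  // Port from the packager command above.
  if (bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0 ||
      listen(server, 8) != 0) {
    perror("bind/listen");
    return 1;
  }
  for (;;) {
    int client = accept(server, nullptr, nullptr);
    if (client < 0) continue;
    std::string request = ReadLine(client);  // e.g. "PUT /http/init.webm ..."
    bool chunked = false;
    bool expect_continue = false;
    for (std::string h = ReadLine(client); !h.empty(); h = ReadLine(client)) {
      if (h.find("chunked") != std::string::npos) chunked = true;
      if (h.find("100-continue") != std::string::npos) expect_continue = true;
    }
    if (expect_continue)
      write(client, "HTTP/1.1 100 Continue\r\n\r\n", 25);
    unsigned long body_bytes = 0;
    if (chunked) {
      // Chunk wire format: <hex size>\r\n<data>\r\n, ended by a 0-size chunk.
      for (;;) {
        unsigned long chunk = strtoul(ReadLine(client).c_str(), nullptr, 16);
        if (chunk == 0) break;
        char c;
        for (unsigned long i = 0; i < chunk && ReadByte(client, &c); ++i)
          ++body_bytes;
        ReadLine(client);  // CRLF that follows the chunk data.
      }
      ReadLine(client);  // Final CRLF after the terminating 0-size chunk.
    }
    std::printf("%s -> %lu body bytes\n", request.c_str(), body_bytes);
    write(client, "HTTP/1.1 201 Created\r\nContent-Length: 0\r\n\r\n", 43);
    close(client);
  }
}
```
A server like this is enough to observe the bug: the logged PUT requests for the init segment arrive with 0 body bytes.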
What is the expected result?
Expected Shaka Packager to produce the init segment and the media segments and upload them to the local HTTP server.
What happens instead?
The local HTTP server does receive a couple of PUT requests for the init segment path, but they contain 0 bytes of data.
Shaka Packager then appears to hang, but with more verbose logging enabled one can observe an infinite internal loop of write attempts for the init segment. See attached log.
What happens is that when the HttpFile::Write call is made, the upload_cache_ is already closed, so it returns 0 bytes written; this is retried over and over but will never succeed. I think the reason the upload_cache_ is closed is that the Flush happening in ThreadedIoFile::Seek causes the HttpFile::Flush method to close the upload cache. Somewhere, someone needs to Reopen the upload cache to break out of the infinite retry loop.
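For illustration, a minimal self-contained sketch of the failure mode (the names are simplified stand-ins, not Shaka Packager's actual classes): once the cache has been closed by a flush, every retried write returns 0 and the loop can never make progress.
```c++
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <vector>

// Simplified stand-in for the packager's upload cache (names are
// illustrative, not the real API).
class UploadCache {
 public:
  // Accepts bytes while open; returns 0 once the cache has been closed.
  std::size_t Write(const uint8_t* data, std::size_t size) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (closed_) return 0;
    buffer_.insert(buffer_.end(), data, data + size);
    return size;
  }

  // Models the flush-triggered close described above.
  void Close() {
    std::lock_guard<std::mutex> lock(mutex_);
    closed_ = true;
  }

 private:
  std::mutex mutex_;
  bool closed_ = false;
  std::vector<uint8_t> buffer_;
};

int main() {
  UploadCache cache;
  const uint8_t payload[] = {0x1A, 0x45, 0xDF, 0xA3};  // WebM EBML magic.

  cache.Close();  // The seek-induced flush closes the cache first...

  // ...so the retry loop can never make progress: every attempt writes
  // 0 bytes and nothing ever reopens the cache. Capped so the demo ends.
  for (int attempt = 1; attempt <= 5; ++attempt) {
    std::size_t written = cache.Write(payload, sizeof(payload));
    std::cout << "attempt " << attempt << ": wrote " << written << " bytes\n";
  }
  return 0;
}
```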
Closing the upstream on flush effectively terminates the ongoing curl connection. This means we would need to re-establish the connection in order to resume writing, which is not what we want. In the spirit of the documentation of File::Flush:
```c++
/// Flush the file so that recently written data will survive an
/// application crash (but not necessarily an OS crash). For
/// instance, in LocalFile the data is flushed into the OS but not
/// necessarily to disk.
```
We will instead wait for the curl thread to finish consuming whatever might be in the upload cache, but leave the connection open for subsequent writes.
Fixes #1196
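A minimal sketch of that drain-and-wait behaviour, using the same hypothetical simplified cache as above (illustrative names only, not the actual patch): flush blocks until the consumer thread has emptied the cache, but leaves it open so subsequent writes still succeed.
```c++
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>

// Simplified stand-in for the upload cache (illustrative names only).
class UploadCache {
 public:
  // Producer side: what a write call would feed into the cache.
  void Write(uint8_t byte) {
    std::lock_guard<std::mutex> lock(mutex_);
    buffer_.push_back(byte);
  }

  // Consumer side: the curl thread pulls one byte if available.
  bool Consume(uint8_t* out) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (buffer_.empty()) return false;
    *out = buffer_.front();
    buffer_.pop_front();
    if (buffer_.empty()) drained_.notify_all();  // Wake a pending flush.
    return true;
  }

  // The fix: do not close the cache here (that would kill the curl
  // connection). Block until the consumer has drained everything, then
  // return with the cache still open so subsequent writes succeed.
  void WaitUntilEmpty() {
    std::unique_lock<std::mutex> lock(mutex_);
    drained_.wait(lock, [this] { return buffer_.empty(); });
  }

 private:
  std::mutex mutex_;
  std::condition_variable drained_;
  std::deque<uint8_t> buffer_;
};
```
The design point is that flush durability comes from draining the buffer, not from tearing down the transport, which matches the File::Flush contract quoted above.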