- Investigating #40, using a copy of scraperlib's URLItem with verbose details to identify which URL causes the issue
- not crashing on resource duplicates (duplicate content in different node IDs)
- fixed the `succeeded` boolean that caused a ZIM to be created even on exception
- [debug] raising first exception
- updated scraperlib to 2.0
- Duplicated and modified the URLProvider:
  - reading from the source until we reach the specified size (/!\ risk of being stuck)
  - clearly returning an empty Blob at the end (might have been the reason)
- Added a new feature to URLItem to not use URLProvider for content under 2MiB
In this run, a kolibri2zim over the full khan-academy channel in English crashed with
This is due to an assert inside libzim's writer. The code has been modified since (see https://github.com/openzim/libzim/blob/3a9f574d1aa2f722257f195fcdd6874e3517b8c6/src/writer/cluster.cpp#L246) and would now raise a `RuntimeError` instead, but the problem is the same: the size written to the ZIM differs from the size returned by the Provider's `get_size()`.

Given kolibri2zim only prints debug output after addition to the creator, we don't know which Entry caused the issue.
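The consistency check can be modeled in a few lines. This is a toy model of the invariant, not libzim's API (`write_blob` and `SizeMismatch` are made-up names): the bytes a provider actually produces must equal its advertised `get_size()`, otherwise the writer fails at cluster-write time.

```python
class SizeMismatch(RuntimeError):
    """Stand-in for the RuntimeError libzim's writer now raises."""

def write_blob(get_size, chunks):
    """Model of the writer-side check: actual bytes must match get_size().

    `chunks` stands for the blobs a provider feeds to the writer.
    """
    written = sum(len(c) for c in chunks)
    if written != get_size():
        raise SizeMismatch(
            f"provider wrote {written} bytes but advertised {get_size()}"
        )
    return written
```

A truncated download is exactly this failure mode: the provider advertises the size from the initial tiny request, but the bytes actually streamed fall short, and the mismatch only surfaces inside the writer.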
My investigation points to a funneled file, as other types of content are added via string and their size is automatically calculated. Funneled ones, on the other hand, are files that we download directly from Studio into the ZIM using scraperlib's `URLItem`.

Looking at the KA DB, I found a single file reported to have the expected size: c142275210f3f6dec3dfbdb1d9836e7b.mp4.
It works as expected when tested individually, so my guess is that a network/server error caused the downloaded content to differ. Note that we make an initial tiny request to find the size, to decide whether we need to download to disk or not.

We could re-run this and hope it was fixed on its own, but this sounds like it could happen again given the large size of the content.

Fixing this would be difficult though; the issue happens on a different, libzim-handled thread long after we've added the item, so we can't catch the (libzim8+ only) exception and retry.
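Why the exception can't be caught at add time can be shown with a toy producer/consumer model (this is not libzim's code; the queue and worker are illustrative). The "add" returns immediately; the size check happens later on the writer's own thread, outside any try/except wrapped around the addition.

```python
import queue
import threading

errors = []  # errors surface here, on the writer side, not at the caller

def writer_loop(q):
    """Toy stand-in for libzim's background writer thread."""
    while True:
        item = q.get()
        if item is None:  # sentinel: shut down
            return
        advertised, data = item
        if len(data) != advertised:
            # in real libzim this raises far from the add_item() call site
            errors.append(RuntimeError("size mismatch"))

q = queue.Queue()
t = threading.Thread(target=writer_loop, args=(q,))
t.start()

try:
    q.put((10, b"short"))  # the "add" succeeds instantly; nothing to catch here
finally:
    q.put(None)
    t.join()
```

By the time the mismatch is detected, the try/except around the `put` has long since exited, which mirrors why a retry at add time is not possible.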