I think what happens is that when we get a short download we still save it to disk. Then when we gunzip it, gzip notices that the input is short and fails, but by that point we have already been writing the output file, so the part that was written out remains. If the user does not notice the error, they'll carry on and we end up trying to parse the truncated tar file.
So there are three cases of bad error handling here:
1. The HTTP library is not notifying us that the download was shorter than expected. Perhaps we're expected to check this manually (a sketch of such a check follows this list).
2. When we decompress the .tar.gz, we should do it into a temp file and only overwrite the target if the decompression succeeds.
3. The tar code needs auditing to check that it handles corrupt tar files correctly. There's newer tar code in the hackage-server that we could test with and steal if necessary.
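For point 1, here is a minimal sketch of the kind of manual check that would catch a short download, assuming the HTTP package's simpleHTTP interface; the name downloadChecked and the error wording are illustrative, not actual cabal-install code:

```haskell
import Network.HTTP
import Control.Monad (when)

-- Compare the bytes we actually received against the Content-Length
-- header and fail loudly on a short download.
downloadChecked :: String -> IO String
downloadChecked url = do
  result <- simpleHTTP (getRequest url)
  rsp <- either (fail . show) return result
  let body = rspBody rsp
  case lookupHeader HdrContentLength (rspHeaders rsp) of
    Just len -> when (read len /= length body) $
                  fail ("short download: got " ++ show (length body)
                     ++ " bytes, expected " ++ len)
    Nothing  -> return ()  -- no Content-Length; nothing to check against
  return body
```

Whether the library is supposed to do this for us is exactly the open question in point 1.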
Fixed points 2 and 3 above. This should be enough to give much better behaviour in the case of a truncated download.
Sat Aug 23 00:00:33 BST 2008 Duncan Coutts <duncan@haskell.org>
- Decompress the repo index atomically.
So if decompression fails (eg if the index is corrupt) then
the decompressed file does not get (partially) written.
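The shape of that patch is roughly the following sketch, assuming the zlib package's Codec.Compression.GZip; decompressAtomically is an illustrative name, not the actual cabal-install code:

```haskell
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as BS
import Control.Exception (onException)
import System.Directory (removeFile, renameFile)
import System.FilePath (takeDirectory)
import System.IO (hClose, openTempFile)

-- Decompress into a temp file in the target's directory and rename it
-- over the target only once the whole stream has been written, so a
-- truncated or corrupt .tar.gz never leaves a partial index behind.
decompressAtomically :: FilePath -> FilePath -> IO ()
decompressAtomically gzFile target = do
  compressed <- BS.readFile gzFile
  (tmpPath, hnd) <- openTempFile (takeDirectory target) "index.tmp"
  (do BS.hPut hnd (GZip.decompress compressed)
      -- hPut forces the whole lazy decompressed stream, so a truncated
      -- gzip stream throws here, before the rename below ever happens
      hClose hnd
      renameFile tmpPath target)
    `onException` (hClose hnd >> removeFile tmpPath)
```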
Sun Aug 24 19:05:01 BST 2008 Duncan Coutts <duncan@haskell.org>
- Use updated tar code
Much more robust. Correctly detects truncated archives.
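As a rough illustration of what detecting truncated archives means, here is a sketch against the streaming interface of the tar package (which, as far as I know, descends from that hackage-server code); checkTarFile is an illustrative name:

```haskell
import qualified Codec.Archive.Tar as Tar
import qualified Data.ByteString.Lazy as BS

-- Walk every entry in the archive; Tar.read produces a lazy Entries
-- stream whose Fail case reports corruption, including truncation.
checkTarFile :: FilePath -> IO ()
checkTarFile path = do
  entries <- fmap Tar.read (BS.readFile path)
  Tar.foldEntries (\_ rest -> rest) (return ()) (fail . show) entries
```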
(Imported from Trac #269, reported by guest on 2008-04-11)
Not having an up-to-date package list causes
(.. which is a bad error message.)