
Update the docker image to use the latest odc-stats version 1.0.33 #87

Closed
vikineema opened this issue Feb 17, 2023 · 9 comments

@vikineema
Contributor

The docker image tagged `latest` uses odc-stats version 1.0.32 instead of the latest odc-stats version, which is 1.0.33:

```shell
$ docker container run -it opendatacube/datacube-statistician:latest pip list | grep odc-stats
odc-stats 1.0.32
```

@fangfy
Contributor

fangfy commented Feb 20, 2023

Hi @emmaai @SpacemanPaul @omad, it looks like the dockerise test failed (https://github.com/opendatacube/odc-stats/actions/runs/4191291551/jobs/7265577644). Any idea why?

@emmaai
Contributor

emmaai commented Feb 20, 2023

The connection was reset by S3. A rerun will fix it.

@vikineema
Contributor Author

Hi @emmaai @spac @omad. I've tested the latest published image from the rerun workflow and the odc-stats version is still 1.0.32. What could be causing this?

@emmaai
Contributor

emmaai commented Feb 21, 2023

Had a look: the build used the cache available in ECR, since nothing had changed in the files involved in building the docker image.
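A common way to avoid a stale layer cache pinning an old package version is a cache-busting build argument just before the install step. The fragment below is a generic sketch, not the project's actual Dockerfile; the `ODC_STATS_VERSION` argument and the `pip install` line are illustrative assumptions:

```dockerfile
# Changing this build argument invalidates the layer cache from this
# point onward, forcing pip to install the requested version instead of
# reusing a cached layer with the old one.
ARG ODC_STATS_VERSION=1.0.33
RUN pip install --no-cache-dir "odc-stats==${ODC_STATS_VERSION}"
```

Alternatively, `docker build --no-cache` forces a full rebuild with no layer reuse, at the cost of longer build times.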

@emmaai emmaai self-assigned this Feb 21, 2023
@fangfy
Contributor

fangfy commented Feb 27, 2023

Hi @emmaai, do you have an estimate of when this can be fixed? We urgently need this new image to generate S2 gm products.

@jmettes
Contributor

jmettes commented Feb 27, 2023

I've been noticing a lot of intermittent failures in tests using external S3 data in odc-tools - especially those using s3://sentinel-s2-l2a-cogs. I wonder if that's what's happening here too. I wonder if that bucket recently changed to a less highly-available storage class. Ideally these tests would be replaced with mocks, or locally stored data.

Maybe in the meantime, it's worth trying to set `GDAL_HTTP_MAX_RETRY` somewhere? rasterio/rasterio#2119 (comment)

Apparently `max_retry` default is 0: https://gdal.org/user/virtual_file_systems.html
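As a sketch of that suggestion, the GDAL HTTP retry options can be exported in the job environment before any `/vsis3/` or `/vsicurl/` reads happen (the value `3` for the retry delay is an arbitrary choice here, not a project setting):

```shell
# GDAL_HTTP_MAX_RETRY defaults to 0 (no retries); GDAL_HTTP_RETRY_DELAY
# is the base delay in seconds between retry attempts.
export GDAL_HTTP_MAX_RETRY=5
export GDAL_HTTP_RETRY_DELAY=3
echo "GDAL_HTTP_MAX_RETRY=$GDAL_HTTP_MAX_RETRY"
```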

@emmaai
Contributor

emmaai commented Feb 27, 2023

> Hi @emmaai, do you have an estimate of when this can be fixed? We urgently need this new image to generate S2 gm products.

Within this week.

@emmaai
Contributor

emmaai commented Feb 27, 2023

> I've been noticing a lot of intermittent failures in tests using external S3 data in odc-tools - especially those using s3://sentinel-s2-l2a-cogs. I wonder if that's what's happening here too. I wonder if that bucket recently changed to a less highly-available storage class. Ideally these tests would be replaced with mocks, or locally stored data.
>
> Maybe in the meantime, it's worth trying to set `GDAL_HTTP_MAX_RETRY` somewhere? rasterio/rasterio#2119 (comment)
>
> Apparently `max_retry` default is 0: https://gdal.org/user/virtual_file_systems.html

For the test, `GDAL_HTTP_MAX_RETRY` is set here:

```yaml
- GDAL_HTTP_MAX_RETRY=5
```

and here:

```yaml
GDAL_HTTP_MAX_RETRY: 5
```

Still, the connection can be reset by the S3 server, and that failure cannot be caught or retried by GDAL.

@emmaai
Contributor

emmaai commented Mar 1, 2023

Could you have a look at PR #88? It should resolve the issue.
