When we send a subsetting request to Harmony with too many granule IDs, we get the error "413 Request entity too large". After discussing this with the Harmony team, they suggested switching to a POST request (instead of the current GET request), which would move the granule IDs into the form body and allow us to send many of them. However, harmony-py does not currently support this, so we'd need to construct the Harmony request manually in that case. A ticket, HARMONY-1721, was created for harmony-py to add support for this use case.
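Until harmony-py gains POST support, a workaround is to build the request by hand. Below is a minimal sketch using the `requests` library, assuming the Harmony OGC Coverages rangeset endpoint accepts the same `granuleId` and `subset` parameters as form fields in a POST body; the collection concept ID, granule IDs, and token are placeholders, not values from this issue:

```python
import requests

HARMONY_ROOT = "https://harmony.earthdata.nasa.gov"
collection = "C0000000000-POCLOUD"  # placeholder collection concept ID
url = (
    f"{HARMONY_ROOT}/{collection}"
    "/ogc-api-coverages/1.0.0/collections/all/coverage/rangeset"
)

granule_ids = ["G0000000001-POCLOUD", "G0000000002-POCLOUD"]  # placeholders

# Send the parameters as form fields rather than query-string parameters,
# so the (potentially very long) granule ID list lives in the request body
# instead of the URL, avoiding the 413 from an oversized request URL.
form_fields = [("granuleId", gid) for gid in granule_ids] + [
    ("subset", "lat(-30:20)"),
    ("subset", "lon(120:160)"),
]

resp = requests.post(
    url,
    data=form_fields,
    headers={"Authorization": "Bearer <EDL token>"},  # placeholder token
)
resp.raise_for_status()
print(resp.json())  # async job status, including a job ID to poll
```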
Note: this only impacts the data subscriber and not the data downloader. When submitting Harmony requests with the data downloader, we don't submit granule IDs but rather forward the spatiotemporal bounds the user provided.
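For context, a rough sketch of the two request shapes in harmony-py (collection and granule concept IDs are placeholders): the downloader-style request only carries the user's bounds, while the subscriber-style request carries an explicit granule ID list, which ends up in the GET query string and triggers the 413 once it grows large enough.

```python
from datetime import datetime
from harmony import BBox, Client, Collection, Request

client = Client()  # reads Earthdata Login credentials from .netrc by default
collection = Collection(id="C0000000000-POCLOUD")  # placeholder concept ID

# Downloader-style request: only the spatiotemporal bounds are forwarded,
# so the request URL stays small no matter how many granules match.
bounds_request = Request(
    collection=collection,
    spatial=BBox(120, -30, 160, 20),  # west, south, east, north
    temporal={
        "start": datetime(2023, 5, 1),
        "stop": datetime(2024, 12, 21),
    },
)

# Subscriber-style request: every matched granule ID is passed explicitly
# and serialized into the GET query string, which can overflow URL limits.
granule_request = Request(
    collection=collection,
    granule_id=["G0000000001-POCLOUD", "G0000000002-POCLOUD"],  # placeholders
)

job_id = client.submit(bounds_request)
```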
The issue can be reproduced as follows:
```
podaac-data-subscriber -c SWOT_L2_LR_SSH_BASIC_2.0 -d ./data --start-date 2023-05-01T00:00:00Z --end-date 2024-12-21T00:00:00Z --subset -b="120,-30,160,20"
```