Update ollama_dl.py - for larger files and continuous download #1
base: master
Conversation
Thank you! A couple of comments within, and could you ensure the code is properly formatted with ruff? It looks like there are some spurious changes that remove newlines and make it non-compliant with the Ruff code style. I just added a pre-commit workflow to master, if that helps.
headers = {}
mode = "wb"

if temp_path.exists():
Since the temporary path is a new one every time we enter this function, and it embeds the current time (fractional seconds and all), there's practically zero chance that this will ever be true. 🤔
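For illustration, a per-call temp name like the following (the exact construction is an assumption, not code from this PR) can essentially never pre-exist:

import time
from pathlib import Path

dest_path = Path("model.blob")  # hypothetical destination
# time.time() has sub-second resolution, so two calls virtually never
# produce the same name, and temp_path.exists() is effectively always False.
temp_path = dest_path.with_name(f"{dest_path.name}.tmp-{time.time()}")
assert not temp_path.exists()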
else:
    temp_path.unlink()
I'm not sure why we'd want to remove the tempfile and start anew if it already looks complete?
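One alternative, a sketch of the direction this comment points at rather than code from the PR, would be to promote an already-complete partial file instead of deleting it:

# If the partial file already has the expected size, finish up instead of
# re-downloading from scratch. total_size and dest_path follow the naming
# used elsewhere in this PR; the check itself is an assumption.
if temp_path.stat().st_size == total_size:
    temp_path.rename(dest_path)
    return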
log.debug(f"Content-Length: {content_length}, Expected total size: {total_size}")

with temp_path.open(mode) as f:
    async for chunk in resp.aiter_bytes(8192):
8192 bytes is tiny for multi-gigabyte files. The large buffer size was chosen on purpose to reduce the number of OS read calls. See e.g. pytorch/pytorch#116536 (comment) for an explanation.
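For comparison, a larger buffer (an illustrative value, not necessarily the one the original code used) cuts the iteration count dramatically:

# 1 MiB chunks mean roughly 128x fewer loop iterations and OS-level reads
# than 8 KiB chunks for the same file, at the cost of coarser progress updates.
CHUNK_SIZE = 1 << 20  # 1 MiB, illustrative
async for chunk in resp.aiter_bytes(CHUNK_SIZE):
    f.write(chunk)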
async def download(*, registry: str, name: str, version: str, dest_dir: str):
    with Progress() as progress:
-        async with httpx.AsyncClient() as client:
+        async with httpx.AsyncClient(timeout=httpx.Timeout(None)) as client:
I think there should be some timeout.
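A possible middle ground (my sketch of what "some timeout" could look like, not code from either side of this PR) is to drop the total deadline but still fail on a stalled connection:

import httpx

# No overall deadline, since large downloads can legitimately take hours,
# but give up if connecting or a single read stalls for too long.
timeout = httpx.Timeout(None, connect=30.0, read=120.0)
async with httpx.AsyncClient(timeout=timeout) as client:
    ...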
Note that I was getting errors for models such as qwen2.5:14b and other large models. Here are the changes we made:
Updated format_size function:
Now it can handle sizes in GB, which is necessary for files larger than 4GB.
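A minimal sketch of such a formatter (the PR's actual implementation may differ, e.g. in using binary rather than decimal units):

def format_size(num_bytes: float) -> str:
    # Scale through GB so multi-gigabyte blobs are not reported as
    # enormous MB values; decimal thresholds match the "GB" wording above.
    for unit in ("B", "KB", "MB", "GB"):
        if num_bytes < 1000:
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1000
    return f"{num_bytes:.1f} TB"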
New download_with_resume function:
This function handles the actual downloading, including the ability to resume partial downloads. It uses a temporary file and renames it only after a successful download.
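Roughly, the resume logic looks like this (a sketch assembled from the snippets quoted above; the helper's exact signature is an assumption, and progress reporting is omitted):

async def download_with_resume(client, url, temp_path, dest_path, total_size):
    headers = {}
    mode = "wb"
    if temp_path.exists():
        existing = temp_path.stat().st_size
        if existing < total_size:
            # Ask the server for just the remaining bytes and append them.
            headers["Range"] = f"bytes={existing}-"
            mode = "ab"
    async with client.stream("GET", url, headers=headers) as resp:
        resp.raise_for_status()
        with temp_path.open(mode) as f:
            async for chunk in resp.aiter_bytes(8192):
                f.write(chunk)
    # Rename only after the stream completed without raising.
    temp_path.rename(dest_path)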
Improved download_blob function:
Increased max_retries to 5 by default.
Implemented exponential backoff for retries.
Uses the new download_with_resume function.
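A sketch of that retry wrapper (the function name and the max_retries default come from this description; the body is my assumption):

import asyncio
import httpx

async def download_blob(client, url, temp_path, dest_path, total_size,
                        max_retries: int = 5):
    for attempt in range(max_retries):
        try:
            await download_with_resume(client, url, temp_path, dest_path, total_size)
            return
        except (httpx.HTTPError, OSError) as exc:
            if attempt == max_retries - 1:
                raise
            delay = 2 ** attempt  # exponential backoff: 1s, 2s, 4s, 8s...
            log.warning("Download failed (%s); retrying in %ss", exc, delay)
            await asyncio.sleep(delay)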
Timeout adjustment:
In the download function, we set timeout=httpx.Timeout(None) for the httpx client, allowing for very long downloads without timing out.
Chunk size adjustment:
In the download_with_resume function, we use a chunk size of 8192 bytes (8KB) to read the response. This smaller chunk size allows for more frequent progress updates and potentially better memory management for very large files.
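For reference, a hypothetical end-to-end invocation with one of the models mentioned above (the registry host and destination directory are placeholders):

import asyncio

asyncio.run(download(registry="registry.ollama.ai", name="qwen2.5",
                     version="14b", dest_dir="./models"))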