This repository has been archived by the owner on Jun 24, 2024. It is now read-only.

Update to latest upstream LLaMA implementation #210

Closed
philpax opened this issue May 10, 2023 · 1 comment
Labels
issue:enhancement New feature or request model:llama LLaMA model topic:model-support Support for new models
Comments


philpax commented May 10, 2023

We're a couple of weeks out of date with the current implementation of LLaMA in llama.cpp. There are quite a few changes (including always generating the BOS token at the start!) that we should update to handle.
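The BOS behaviour mentioned above could be sketched roughly as follows. This is a minimal illustration, not the library's actual tokenizer: the function name, the token ids, and the BOS id (1 in LLaMA's SentencePiece vocabulary) are all assumptions for the example.

```rust
// Illustrative BOS token id for LLaMA's SentencePiece vocabulary.
// The real id lives in the model's vocabulary data.
const BOS_TOKEN_ID: u32 = 1;

/// Hypothetical helper: unconditionally prepend BOS to the prompt's
/// token ids, mirroring upstream llama.cpp's behaviour.
fn tokenize_with_bos(prompt_tokens: Vec<u32>) -> Vec<u32> {
    let mut tokens = Vec::with_capacity(prompt_tokens.len() + 1);
    tokens.push(BOS_TOKEN_ID); // always emit BOS first
    tokens.extend(prompt_tokens);
    tokens
}

fn main() {
    // Example prompt token ids (made up for illustration).
    let toks = tokenize_with_bos(vec![15043, 3186]);
    println!("{:?}", toks); // prints "[1, 15043, 3186]"
}
```

The point of doing this in the library rather than leaving it to callers is that the model was trained with BOS at sequence start, so omitting it silently degrades generation quality.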

@philpax philpax added issue:enhancement New feature or request model:llama LLaMA model labels May 10, 2023

AshleySchaeffer commented May 14, 2023

There's also this change that's happened since:

ggerganov/llama.cpp#1412

which I believe would help with GPU-related issues.

@philpax philpax added this to the 0.2 milestone May 18, 2023
@philpax philpax added the topic:model-support Support for new models label May 22, 2023