'./quantize' is not recognized as the name of a cmdlet, function, script file, or operable program #241
Same error here.
In my case, there was an earlier error while running CMake in llama: for some reason it expected Visual Studio 15 2017 (and couldn't find it). So I cleared CMakeCache and CMakeFiles and ran it manually. Then I reran the npx command, but I got stuck on ./quantize anyway, this time with the error: Cannot create process, error code: 267
Same problem here (Windows 10). I had no issues with installing alpaca, though.
Same problem here.
I just fixed this on Windows Server 2019 (it also works on Windows 11); I had to quantize the model manually. In a command line run as administrator, cd to C:\Users\YOURUSER\dalai\llama\build\bin\Release, then run ./quantize C:\Users\YOURUSER\dalai\llama\models\7B\ggml-model-f16.bin C:\Users\YOURUSER\dalai\llama\models\7B\ggml-model-q4_0.bin 2. This will quantize the llama model. The model now shows in the dropdown for both llama and alpaca on Windows.
@Moralizing Thanks, this worked like a charm. However, there's now another issue, #245: basically, nothing happens when you give it a prompt.
I had the same issue on both Windows 10 and 11 (different machines, both VS 2022). I think it's an environment variable issue, or that it should have changed directories to where the binary is. Basically, it appears to have had trouble finding quantize.exe (also, I believe "./quantize" is invalid syntax for Windows). I just searched for "quantize" in the "%userprofile%\dalai" directory, replaced ./quantize with the full path to the EXE, and everything worked.
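A minimal sketch of that search-and-replace idea, assuming a POSIX shell and a mock directory layout (in practice the root would be %userprofile%\dalai, and the found path would replace ./quantize in the install command):

```shell
# Build a mock dalai tree standing in for %userprofile%\dalai;
# the real quantize.exe location varies with the VS generator.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/llama/build/bin/Release"
touch "$ROOT/llama/build/bin/Release/quantize.exe"

# Locate the binary and capture its full path, wherever the build put it.
QUANTIZE=$(find "$ROOT" -name quantize.exe | head -n 1)
echo "$QUANTIZE"
```

Using the full path sidesteps the question of which directory the install script happens to be running from.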
@taaalha This sounds like a bug in the install scripts, so I think the issue should remain open until it's resolved; just because there are workarounds doesn't mean there isn't a problem.
I find it only under "%userprofile%\dalai\llama\.devops\tools.sh".
Same error here.
PS C:\dalai\llama\build\Release> [System.Console]::OutputEncoding=[System.Console]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize c:\dalai\llama\models\7B\ggml-model-f16.bin c:\dalai\llama\models\7B\ggml-model-q4_0.bin 2
I tried just 'quantize' and it couldn't find the binary, but providing the full path worked. Now I'm having a new (related) issue with the WebUI. With "Debug" enabled, it appears that the backslashes in the Windows paths are being removed when PowerShell is called. I looked at the code but can't track down where this is happening; the regex and so on seems fine. I've had this issue on two machines, and I'm going to check whether it happens on a friend's PC. I might just move everything to Linux, which is fine and probably for the best.
./command is actually how PowerShell prefers commands; it will not run them otherwise.
Navigate to the 'bin/Release' folder and then either open a terminal there or copy the necessary files to the 'Release' folder. This will fix the issue.
Had this problem for llama. Alpaca worked fine for me, and I was able to play with it.
It's a bug in the install script. The install script runs the command from C:\Users\XXX\dalai\llama\build\Release, but quantize.exe was built in C:\Users\XXX\dalai\llama\build\bin\Release (Microsoft Visual Studio 2022). I opened a PowerShell in C:\Users\XXX\dalai and ran the command. It worked.
For me, what solved the problem via cmd was: right after the build process, and while the model is being downloaded, do these two steps:
The reason is that some versions of VC++ build the output to .\llama\build\bin\Release\ instead of .\llama\build\Release\. Once the model is downloaded, quantize.exe and the subsequent steps will run without issues. Finally: $> npx dalai serve --home .
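The copy-the-binaries workaround described in this thread can be sketched as follows, using a temporary mock of the .\llama\build tree (the file names here are illustrative; on Windows you would copy the real files between the two Release folders):

```shell
# Mock of the mismatch: Visual Studio emitted binaries into build/bin/Release,
# but the install script expects them in build/Release.
BUILD=$(mktemp -d)
mkdir -p "$BUILD/bin/Release" "$BUILD/Release"
touch "$BUILD/bin/Release/quantize.exe" "$BUILD/bin/Release/main.exe"

# The workaround: copy everything from bin/Release into Release
# before the install script reaches the quantize step.
cp "$BUILD/bin/Release/"* "$BUILD/Release/"
ls "$BUILD/Release"
```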
Can't quantize the ggml-model-q4_0.bin file for the 13B version: llama_model_quantize: loading model from 'ggml-model-q4_0.bin' But the others, ggml-model-f16.bin and ggml-model-f16.bin.1, work when quantized. I'm trying to quantize the models in this folder:
I also have VS 2022, but I don't have quantize.
To fix the issue, in CMakeLists.txt:
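The comment above does not include the change itself. One plausible fix (an assumption on my part, not necessarily what the author did) is to pin CMake's runtime output directory so that multi-config generators like Visual Studio stop inserting the extra bin\ level:

```cmake
# Hypothetical change: emit executables directly under the build directory.
# Visual Studio's multi-config generator then appends the configuration name,
# producing build\Release rather than build\bin\Release.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR})
```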
I found a very simple way using just the file manager: once you've launched npx ... and it is downloading 7B, copy the files that are in C:\Users\Utilisateur\dalai\llama\build\bin\Release and paste them into C:\Users\Utilisateur\dalai\llama\build\Release, and the rest of the installation will continue without any problems ;)
After the quantization runs, the model is still not quantized,
Found the bug and fixed it on my end on a Windows machine: change from to
[System.Console]::OutputEncoding=[System.Console]::InputEncoding=[System.Text.Encoding]::UTF8; ./quantize
The term './quantize' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
What could be the solution to this?
I was trying to install the 7B model with
npx dalai llama install 7B
on Windows 10.