
Keep getting an error about "no such file or directory", I don't understand? #209

Open
nich2198 opened this issue Mar 22, 2023 · 41 comments · May be fixed by #373

Comments

@nich2198

C:\Users\nicho>npx dalai alpaca install 7B
mkdir C:\Users\nicho\dalai
{ method: 'install', callparams: [ '7B' ] }
ERROR [Error: ENOENT: no such file or directory, rename 'C:\Users\nicho\dalai\alpaca\models' -> 'C:\Users\nicho\dalai\tmp\models'] {
errno: -4058,
code: 'ENOENT',
syscall: 'rename',
path: 'C:\Users\nicho\dalai\alpaca\models',
dest: 'C:\Users\nicho\dalai\tmp\models'
}
I had already installed some of it, but it stops here, and every time I run it again it stops at the same point.

@sudarshan-koirala

sudarshan-koirala commented Mar 22, 2023

@nich2198 Got the same error message, but the following steps solved it:

  1. Remove the folder dalai created automatically.
  2. Create a folder named dalai at the root directory (in your case, C:\Users\nicho).
  3. Create another folder alpaca inside the dalai folder.
  4. Create another folder models inside the alpaca folder.
  5. Go back to the root directory and run the command.
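The steps above can be sketched as POSIX shell (on Windows, create the folders in Explorer or with mkdir in cmd.exe instead). A scratch directory stands in for the home directory here so it is safe to try:

```shell
# Sketch of steps 1-4; HOME_DIR stands in for C:\Users\nicho or $HOME.
HOME_DIR="$(mktemp -d)"
cd "$HOME_DIR"
rm -rf dalai                   # 1. remove the auto-created folder
mkdir -p dalai/alpaca/models   # 2-4. nested dalai/alpaca/models
# 5. from the real home directory, re-run: npx dalai alpaca install 7B
```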

@haller33

haller33 commented Mar 22, 2023

I have the same error too.

$ npx dalai llama install 7B
Need to install the following packages:
  dalai@0.2.52
Ok to proceed? (y)
mkdir /home/meta/dalai
{ method: 'install', callparams: [ '7B' ] }
ERROR [Error: ENOENT: no such file or directory, rename '/home/meta/dalai/llama/models' -> '/home/meta/dalai/tmp/models'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'rename',
  path: '/home/meta/dalai/llama/models',
  dest: '/home/meta/dalai/tmp/models'
}

the solution from @sudarshan-koirala is a few lines of bash

$ cd ~
$ rm -rf dalai
$ mkdir -p dalai/alpaca/models
$ npx dalai llama install 7B

but then I get the error again

$ npx dalai llama install 7B
mkdir /home/meta/dalai
{ method: 'install', callparams: [ '7B' ] }
ERROR [Error: ENOENT: no such file or directory, rename '/home/meta/dalai/llama/models' -> '/home/meta/dalai/tmp/models'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'rename',
  path: '/home/meta/dalai/llama/models',
  dest: '/home/meta/dalai/tmp/models'
}

@sudarshan-koirala

sudarshan-koirala commented Mar 22, 2023

@haller33 Thanks for the commands. You created a folder named alpaca and used llama model. To use llama models, rename alpaca folder to llama.
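Sketched as a single mv (shown in a scratch directory for safety; paths assume the default ~/dalai layout):

```shell
# Rename the existing alpaca folder to llama so the folder matches the
# engine being installed (ROOT stands in for the home directory).
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/dalai/alpaca/models"
mv "$ROOT/dalai/alpaca" "$ROOT/dalai/llama"   # alpaca -> llama
```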

@MikePrograms

MikePrograms commented Mar 22, 2023

@haller33 Thanks for the commands. You created a folder named alpaca and used llama model. To use llama models, rename alpaca folder to llama.

Facing the same issue on my end. Here is the log:

mikeprograms@Mikes-MacBook-Pro llama % npx dalai llama install 7B
mkdir /Users/mikeprograms/dalai
{ method: 'install', callparams: [ '7B' ] }
mkdir /Users/mikeprograms/dalai/llama
try fetching /Users/mikeprograms/dalai/llama https://github.com/candywrap/llama.cpp.git
[E] Pull TypeError: Cannot read properties of null (reading 'split')
at new GitConfig (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:1604:30)
at GitConfig.from (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:1627:12)
at GitConfigManager.get (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:1750:22)
at async _getConfig (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:5397:18)
at async normalizeAuthorObject (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:5407:19)
at async Object.pull (/Users/mikeprograms/gits/llama/node_modules/isomorphic-git/index.cjs:11682:20)
at async Dalai.add (/Users/mikeprograms/gits/llama/node_modules/dalai/index.js:364:7)
at async Dalai.install (/Users/mikeprograms/gits/llama/node_modules/dalai/index.js:316:5) {
caller: 'git.pull'
}
try cloning /Users/mikeprograms/dalai/llama https://github.com/candywrap/llama.cpp.git
next llama [AsyncFunction: make]
make
exec: make in /Users/mikeprograms/dalai/llama
make
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run chsh -s /bin/zsh.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ make
I llama.cpp build info:
I UNAME_S: Darwin
I UNAME_P: arm
I UNAME_M: arm64
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS: -framework Accelerate
I CC: Apple clang version 14.0.0 (clang-1400.0.29.202)
I CXX: Apple clang version 14.0.0 (clang-1400.0.29.202)

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -DGGML_USE_ACCELERATE -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread main.cpp ggml.o utils.o -o main -framework Accelerate
run ./main -h for help
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize -framework Accelerate
bash-3.2$ exit
exit
ERROR [Error: ENOTEMPTY: directory not empty, rename '/Users/mikeprograms/dalai/tmp/models' -> '/Users/mikeprograms/dalai/llama/models'] {
errno: -66,
code: 'ENOTEMPTY',
syscall: 'rename',
path: '/Users/mikeprograms/dalai/tmp/models',
dest: '/Users/mikeprograms/dalai/llama/models'
}
mikeprograms@Mikes-MacBook-Pro llama %

@haller33

haller33 commented Mar 22, 2023

@MikePrograms it looks like it's just a question of running the right command; I messed up the last two commands.

the correct solution is:

for LLaMA 7B

$ cd ~
$ rm -rf dalai
$ mkdir -p dalai/llama/models
$ npx dalai llama install 7B

for Alpaca 7B

$ cd ~
$ rm -rf dalai
$ mkdir -p dalai/alpaca/models
$ npx dalai alpaca install 7B

see if it works for you

@MikePrograms

Alpaca installs just fine from scratch. I get to the same part on both installation options, but this error only shows up when trying to do the LLaMA version:


next llama [AsyncFunction: make]
make
exec: make in /Users/mikeprograms/dalai/llama
make
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ make
I llama.cpp build info:
I UNAME_S:  Darwin
I UNAME_P:  arm
I UNAME_M:  arm64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 14.0.0 (clang-1400.0.29.202)
I CXX:      Apple clang version 14.0.0 (clang-1400.0.29.202)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread main.cpp ggml.o utils.o -o main  -framework Accelerate
run ./main -h for help
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize  -framework Accelerate
bash-3.2$ exit
exit
ERROR [Error: ENOTEMPTY: directory not empty, rename '/Users/mikeprograms/dalai/tmp/models' -> '/Users/mikeprograms/dalai/llama/models'] {
  errno: -66,
  code: 'ENOTEMPTY',
  syscall: 'rename',
  path: '/Users/mikeprograms/dalai/tmp/models',
  dest: '/Users/mikeprograms/dalai/llama/models'
}
mikeprograms@Mikes-MacBook-Pro ~ %

@nich2198 (Author)

@nich2198 Got the same error message, but the following steps solved it:

  1. Remove the folder dalai created automatically.
  2. Create a folder named dalai at the root directory (in your case, C:\Users\nicho).
  3. Create another folder alpaca inside the dalai folder.
  4. Create another folder models inside the alpaca folder.
  5. Go back to the root directory and run the command.

This worked, thank you!

@api-haus

api-haus commented Mar 22, 2023

Can I change the root folder? I want to host my models from another drive on Windows.

UPD: Found in source code. npx dalai llama install 65B --home "D:\DALAI"

I think it could be really useful to treat the CWD as the root folder.

@gavn8r

gavn8r commented Mar 22, 2023

Using this solution, I've run into a new problem, also experienced by MikePrograms. I'm now getting an 'ENOTEMPTY' error.

Here's what I get:

next llama [AsyncFunction: make]
 make
 exec: make in /Users/gavn8r/dalai/llama
make
exit

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
bash-3.2$ make
I llama.cpp build info: 
I UNAME_S:  Darwin
I UNAME_P:  i386
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:   -framework Accelerate
I CC:       Apple clang version 14.0.0 (clang-1400.0.29.202)
I CXX:      Apple clang version 14.0.0 (clang-1400.0.29.202)

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -pthread -mf16c -mfma -mavx -mavx2 -DGGML_USE_ACCELERATE   -c ggml.c -o ggml.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread -c utils.cpp -o utils.o
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread main.cpp ggml.o utils.o -o main  -framework Accelerate
run ./main -h for help
c++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread quantize.cpp ggml.o utils.o -o quantize  -framework Accelerate
bash-3.2$ exit
exit
ERROR [Error: ENOTEMPTY: directory not empty, rename '/Users/gavn8r/dalai/tmp/models' -> '/Users/gavn8r/dalai/llama/models'] {
  errno: -66,
  code: 'ENOTEMPTY',
  syscall: 'rename',
  path: '/Users/gavn8r/dalai/tmp/models',
  dest: '/Users/gavn8r/dalai/llama/models'
}
gavn8r@Gavins-Mac-mini ~ % 

@nich2198 Got the same error message, but the following steps solved it:

1. Remove the folder dalai created automatically.
2. Create a folder named `dalai` at the root directory (in your case `C:\Users\nicho`).
3. Create another folder alpaca inside dalai folder.
4. Create another folder models inside alpaca folder.
5. Go back to root directory and run the command.

@ivanstepanovftw

@nich2198 Got the same error message, but the following steps solved it:

  1. Remove the folder dalai created automatically.
  2. Create a folder named dalai at the root directory (in your case, C:\Users\nicho).
  3. Create another folder alpaca inside the dalai folder.
  4. Create another folder models inside the alpaca folder.
  5. Go back to the root directory and run the command.

Could you please open a pull request?

@luisee

luisee commented Mar 22, 2023

Using this solution, I ran into a new problem, also experienced by MikePrograms. I'm now getting an 'ENOTEMPTY' error. (Same log as @gavn8r's comment above.)

@nich2198 Got the same error message but the following steps solved it.

1. Remove the folder dalai created automatically.
2. Create a folder named `dalai` at the root directory (in your case `C:\Users\nicho`).
3. Create another folder alpaca inside the dalai folder.
4. Create another folder models inside the alpaca folder.
5. Go back to the root directory and run the command.

I get the same error only in Llama, not in Alpaca.

@hristo-dinkov

This bug was introduced by ggerganov/llama.cpp#355.
It seems that you cannot move tmp/ to models/ because models/ is populated with a new file added by that commit. I am not really sure how to fix this issue because I am not a JS expert.

Can somebody help, with that in mind?
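For anyone debugging this, the failure mode can be reproduced outside dalai: a rename() onto a non-empty directory fails with ENOTEMPTY, while merging the contents does not. A sketch (GNU mv -T mimics Node's fs.rename; "some-file" is only a stand-in for whichever file that commit added under models/):

```shell
# Reproduce: rename() onto a non-empty directory fails with ENOTEMPTY.
cd "$(mktemp -d)"
mkdir -p tmp/models llama/models
touch llama/models/some-file            # stand-in for the added file
mv -T tmp/models llama/models 2>/dev/null \
  && echo "rename succeeded" \
  || echo "rename failed: directory not empty"
# Workaround sketch: merge the contents instead of renaming the dir.
cp -R tmp/models/. llama/models/
rm -rf tmp/models
```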

@rosx27

rosx27 commented Mar 22, 2023

Having the same exact issue. I've tried the solution suggested by sudarshan-koirala with no luck. Prior to the latest update I did not get this error, but it filled up my storage so I had to abort; when I tried installing it on another computer it gave me a different error that is out of scope for this thread, so I still haven't tried this yet.

@Theknight2015

I'm having the same issue as @MikePrograms: the Alpaca installation goes fine (I should note that I have to manually set up the directory as root/dalai/alpaca/models for it to work), but the LLaMA installation stops with the errors below.

I'm using root because I thought my issue was permission-based at first, but it still happens under the root account, so I'm guessing not. I've tried both accounts, and I've even wiped and started over from scratch with a clean OS install.

With the directory manually set up as root/dalai/llama/models OR root/dalai/llama/models/7B:
ERROR [Error: ENOTEMPTY: directory not empty, rename '/root/dalai/tmp/models' -> '/root/dalai/llama/models'] { errno: -39, code: 'ENOTEMPTY', syscall: 'rename', path: '/root/dalai/tmp/models', dest: '/root/dalai/llama/models' }

I used these commands to remove the entire dalai directory after previous failure and followed the below guide.

$ cd ~
$ rm -rf dalai
$ mkdir -p dalai/llama/models
$ npx dalai llama install 7B

I get the exact same error:
ERROR [Error: ENOTEMPTY: directory not empty, rename '/root/dalai/tmp/models' -> '/root/dalai/llama/models'] { errno: -39, code: 'ENOTEMPTY', syscall: 'rename', path: '/root/dalai/tmp/models', dest: '/root/dalai/llama/models' }

If I simply go to my root directory and run the following command without any manual directories setup and without using the above mentioned prep steps on a brand new install...
npx dalai llama install 7B

I get this error:
ERROR [Error: ENOENT: no such file or directory, rename '/root/dalai/llama/models' -> '/root/dalai/tmp/models'] { errno: -2, code: 'ENOENT', syscall: 'rename', path: '/root/dalai/llama/models', dest: '/root/dalai/tmp/models' }

I'm running Debian 11 Bullseye
Python V 3.9.2
Node.js installed from nodesource with curl & apt commands

  • Node.js V 18.15.0
  • npm V 9.5.0

I'm not sure what I'm doing wrong here.
Any and all help would be greatly appreciated.

@5k1ttl3

5k1ttl3 commented Mar 22, 2023

Same issue here:

ERROR [Error: ENOTEMPTY: directory not empty, rename '/home/dalai3/dalai/tmp/models' -> '/home/dalai3/dalai/llama/models']

nodejs 18.13.0
npm 8.19.3

ubuntu desktop 22.10

Edit: this used to work a couple of days ago. A fresh install of the OS in both cases.

@RiccaDS

RiccaDS commented Mar 22, 2023

I seem to have resolved the same issue on my system. Now I get a make error, but that is another story. In my case I was installing in a folder outside my home directory with the command
npx dalai alpaca install 7B --home /mnt/Storage/software/
I tried adding the /dalai/alpaca/models/ folders, but that alone didn't work. I also needed to change the npx command as follows:
npx dalai alpaca install 7B --home /mnt/Storage/software/dalai

@hatlem

hatlem commented Mar 22, 2023

Same issue here. I thought for a moment that it was only me.

@Acro88

Acro88 commented Mar 22, 2023

Same problem here!
Can someone say whether there is a way to install the model in another directory, not on C:?

@hatlem

hatlem commented Mar 22, 2023

I tried what is mentioned here, but I'm still getting:

Installing collected packages: sentencepiece, mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.2 certifi-2022.12.7 charset-normalizer-3.1.0 filelock-3.10.1 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.0 numpy-1.24.2 pillow-9.4.0 requests-2.28.2 sentencepiece-0.1.97 sympy-1.11.1 torch-2.0.0 torchaudio-2.0.1 torchvision-0.15.1 typing-extensions-4.5.0 urllib3-1.26.15
bash-3.2$ exit
exit
ERROR [Error: ENOENT: no such file or directory, rename '/Users/me/dalai/llama/models' -> '/Users/me/dalai/tmp/models'] {
errno: -2,
code: 'ENOENT',
syscall: 'rename',
path: '/Users/me/dalai/llama/models',
dest: '/Users/me/dalai/tmp/models'
}
I retried deleting everything and having only an empty folder. I tried doing it fresh. I checked whether there is something in the tmp folder or the llama folder, but there isn't even anything there from the last time I ran it; there was during the first runs. And GPT is not even sure what to do :)

@xamox

xamox commented Mar 22, 2023

I think once this is resolved we need to get a Docker container built so it's consistent, and stop messing around with the "works on my machine" problem.

@neiz

neiz commented Mar 22, 2023

@RiccaDS I think that the current issue is related to llama and not alpaca

@kapilkd13

Is there any solution that resolves this issue on Mac?

@RiccaDS

RiccaDS commented Mar 22, 2023

Same problem here! Can someone answer if there is a way to install the model in another directory? Not on C

Check my reply above to install in another directory and make it work. Also, always check that there is a "models" folder in your alpaca folder.

@RiccaDS I think that the current issue is related to llama and not alpaca

It depends which current issue you mean. The OP posted about Alpaca, and I had the same ENOENT error on Alpaca. ENOTEMPTY, on the other hand, can be regarded as a separate issue, I think.

@neiz

neiz commented Mar 22, 2023

Agreed. I believe issue 209 currently covers:

  1. a workaround for Alpaca (your suggestion) (ENOENT)
  2. an issue with llama (no workaround yet) (ENOTEMPTY)

@quinm0

quinm0 commented Mar 22, 2023

Oh lol, seems everyone is hanging out here. Is this only a problem if you're using the --home param?

@neiz

neiz commented Mar 22, 2023

I'm getting the llama ENOTEMPTY without any additional params (ubuntu 22.10);

npx dalai llama install 7B

So far, it seems that the "dangling file" preventing the directory from being empty was introduced here: https://github.com/ggerganov/llama.cpp/pull/355/files#diff-7696a3039a95b41e744a08272f14d2a4345fddbd06ac482deb37f03a3afad2b5R142

@n8jadams

n8jadams commented Mar 22, 2023

I got it working by setting up all the needed directories manually (I'm on an m1 macOS w/ node v18.15.0)

rm -rf ~/dalai
mkdir -p ~/dalai/alpaca/models ~/dalai/tmp/models
npx dalai alpaca install 7B
rm -rf ~/dalai/tmp
mkdir -p ~/dalai/llama/models ~/dalai/tmp/models
npx dalai llama install 7B 13B

The installs are in progress. Crossing my fingers...

EDIT: Didn't work.

EDIT 2: Trying this other solution where I re-create the tmp directory between installing the different models. :/

EDIT 3: Holding off on using this project until this stuff gets sorted out.

@quinm0

quinm0 commented Mar 22, 2023

I tried that and it didn't work out for me. My lunch time is up, so I'll be back here later to see if I can help at all.

@tleers

tleers commented Mar 22, 2023

Can you try #223? Fixed the issue for me.

@Acro88

Acro88 commented Mar 22, 2023

I am a total noob! How do I try this? xD

@quinm0

quinm0 commented Mar 22, 2023

#224 (comment)

@ivanstepanovftw

It does not work anyway; download the model manually. Also, it will eat up all your RAM unless you have swap.

@neiz

neiz commented Mar 22, 2023

Using the new 0.3.1 fixed it for me -

npx dalai@0.3.1 llama install 65b

(insert your model of choice)

@deter3

deter3 commented Mar 23, 2023

npx dalai@0.3.1 llama install 65b

worked for me for the LLaMA installation.

@mouchourider

@neiz's solution solved it for me, thanks!

@drwootton

drwootton commented Mar 23, 2023

I finally got the alpaca model installed after solving several problems, including updating to get the fix for #223.
I ran into other problems along the way:

  • Root is required to install on a real system (not a container), since the installer runs dnf and apt commands.
  • I had to install apt, since the apt installer is not part of a normal Fedora install.
  • The download step of the installer is flaky; I had to run it many times, since the connection would drop and not recover most times I ran it, which meant starting the download over. This was on a 300 Mbit/s or faster cable internet connection.
  • I filled the disk partition while attempting to install the llama model. There was an error message complaining about that, but the installer did not stop.

@tleers

tleers commented Mar 23, 2023

@drwootton do you still experience these issues with the latest version of dalai? @cocktailpeanut seems to have already fixed any remaining issues, so simply updating your repo to the latest version will likely fix any remaining issues.

@drwootton

I got dalai to run this morning after manually updating dalai as described in #223 and changing ownership of all files so I could run under my non-root account. I saved my llama quantized model, deleted the dalai directory, and ran npx dalai alpaca install 7B --home /shared/dalai from my non-root account, which completed successfully. I put back my llama model and things seem to be working.

I don't know if the requirement to run as root is removed now, or if the install script discovered the dependencies were already installed, making the dnf and apt commands unnecessary.

I am also not sure whether I installed dalai at the latest level, since my dalai directory is not a git repository, so I wasn't able to run a git pull. I assume the npx dalai alpaca install 7B command did get me the latest version.

@rosx27

rosx27 commented Mar 23, 2023

Using the new 0.3.1 fixed it for me -

npx dalai@0.3.1 llama install 65b

(insert your model of choice)

This works for me on two different Windows 10 computers. Thanks!

@shtyftu

shtyftu commented Mar 23, 2023

I had a similar error on Windows 10 using the "--home" flag. The flag seems broken right now, but it's possible to replace it with a soft link, a.k.a. an NTFS junction. This command solved my issue and helped me download the models successfully:
mklink /J "C:\Users\myUserName\dalai" "E:\dalai"

@sagardesai90

I'm resorting to DMing people on Twitter who have these amazing demos, like using llama + whisper.
