
Add ollama service #972

Open · wants to merge 15 commits into master
Conversation

@Velnbur commented Jun 11, 2024

Closes: #971

@Velnbur changed the title feat: Add ollama service → Add ollama service Jun 11, 2024
@Velnbur (Author) commented Jun 11, 2024

I'm pretty new to Nix and the language itself, so I may have made some mistakes. I made this by example: I copied modules/services/emacs.nix, replaced the package and some of the options, and changed the command for launchd.
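As a rough illustration of that approach, a minimal nix-darwin service module following the emacs.nix pattern might look like this. This is a sketch only; the option names and defaults here are assumptions, not the PR's actual code:

```nix
# Sketch of a nix-darwin service module following the emacs.nix pattern.
# Illustrative only: not the PR's actual code.
{ config, lib, pkgs, ... }:

let
  cfg = config.services.ollama;
in
{
  options.services.ollama = {
    enable = lib.mkEnableOption "Ollama, a local LLM runner";

    package = lib.mkOption {
      type = lib.types.package;
      default = pkgs.ollama;
      description = "The Ollama package to use.";
    };
  };

  config = lib.mkIf cfg.enable {
    launchd.user.agents.ollama = {
      serviceConfig = {
        ProgramArguments = [ "${cfg.package}/bin/ollama" "serve" ];
        KeepAlive = true;
        RunAtLoad = true;
      };
    };
  };
}
```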

@Samasaur1 (Contributor) commented:

This looks good to me. I assume you've tested it and it works on your computer?

@Velnbur (Author) commented Jun 13, 2024

> This looks good to me. I assume you've tested it and it works on your computer?

Yes. Recently I found that I had to add the service file to the modules list (see the last commit), but after that it worked in my personal configuration with:

services.ollama.enable = true;

Then, pull the model with:

ollama pull zephyr # for example

and the model is downloaded and ready.

@Samasaur1 (Contributor) commented:

Ah, one other thing. If you interact with the service using the ollama binary, then I think generally you'd want enabling the service to add the binary to your PATH as well. I'm not sure how to handle this, though, since this is a user-scoped launch agent and I'm not sure how to add a program to the path of the "current user" in nix-darwin

@Samasaur1 (Contributor) commented:

Hopefully one of @Enzime @emilazy will have the answer for you

@emilazy (Collaborator) commented Jun 13, 2024

Thank you for this PR!

I believe you’d need to know the username to use users.users.<name>.packages, which is a bit awkward. I think the best option here would just be to use launchd.agents (or even launchd.daemons) to apply it to all users and add ollama to environment.systemPackages; “activation‐user” agents are somewhat awkward and I’d like to get rid of them at some point.

NixOS already has a services.ollama module. Generally it’s best if options with the same name are compatible with NixOS, and it seems to have more functionality; would you be willing to consider basing this module on that one? There’d be some tweaks necessary – sandbox/writablePaths/openFirewall aren’t directly applicable, acceleration probably doesn’t apply either, and the option defaults would need porting to launchd syntax – but otherwise I think it should be possible to offer substantially the same interface, which lets us share maintenance effort with NixOS and helps people reuse configurations between the two. If that (understandably) seems like too much fuss, though, I think we could achieve basic compatibility by removing the exec option and ensuring that we match NixOS’s defaults (e.g. binding to 127.0.0.1:11434, unless that’s already the upstream default).
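To make that suggestion concrete, a system-wide variant along those lines might look roughly like this. The option plumbing is illustrative (assumed `enable`/`package` options, not the PR's actual code), with the bind address matching the NixOS module's 127.0.0.1:11434 default mentioned above:

```nix
# Illustrative sketch of the suggested approach: a launchd daemon for all
# users plus the CLI on everyone's PATH. Not the PR's actual code.
{ config, lib, ... }:

let
  cfg = config.services.ollama;
in
{
  config = lib.mkIf cfg.enable {
    # Make the `ollama` binary available to every user.
    environment.systemPackages = [ cfg.package ];

    launchd.daemons.ollama = {
      serviceConfig = {
        ProgramArguments = [ "${cfg.package}/bin/ollama" "serve" ];
        EnvironmentVariables = {
          # Match the NixOS module's default bind address.
          OLLAMA_HOST = "127.0.0.1:11434";
        };
        KeepAlive = true;
        RunAtLoad = true;
      };
    };
  };
}
```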


models = mkOption {
type = types.str;
default = "%S/ollama/models";
@Velnbur (Author) commented on the diff:

Need some help here. I didn't get why in nixpkgs:

https://github.com/NixOS/nixpkgs/blob/be45e3445c7fd559f99f0581d807b1e32b381129/nixos/modules/services/misc/ollama.nix#L35

the authors used the string-formatting specifier %S by default, without substituting anything into it, passing it directly into an environment variable:

https://github.com/NixOS/nixpkgs/blob/be45e3445c7fd559f99f0581d807b1e32b381129/nixos/modules/services/misc/ollama.nix#L137

In the ollama code, I can't find any string formatting related to the models destination path. Am I missing something?

Also, ollama actually uses $HOME/.ollama/models by default:

https://github.com/ollama/ollama/blob/89c79bec8cf7a7b04f761fcc5306d2edf47a4164/envconfig/config.go#L294

@emilazy (Collaborator) commented:

This is systemd‐specific syntax, documented in systemd.unit(5). I believe it would ordinarily expand to /var/lib, but because of the DynamicUser sandboxing it acts slightly differently (per systemd.exec(5)):

> If DynamicUser= is used, the logic for CacheDirectory=, LogsDirectory= and StateDirectory= is slightly altered: the directories are created below /var/cache/private, /var/log/private and /var/lib/private, respectively, which are host directories made inaccessible to unprivileged users, which ensures that access to these directories cannot be gained through dynamic user ID recycling. Symbolic links are created to hide this difference in behaviour. Both from perspective of the host and from inside the unit, the relevant directories hence always appear directly below /var/cache, /var/log and /var/lib.

Since we don’t have the fancy DynamicUser sandboxing on macOS, you can just read this as /var/lib.
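For reference, the systemd mechanics described above boil down to roughly this on the NixOS side (a sketch, not the exact nixpkgs code):

```nix
# Sketch of the systemd mechanics described above (not the exact
# nixpkgs code). StateDirectory makes systemd create the state
# directory, and %S expands to the state root (/var/lib), so
# "%S/ollama/models" resolves to /var/lib/ollama/models.
systemd.services.ollama = {
  environment = {
    OLLAMA_MODELS = "%S/ollama/models";
  };
  serviceConfig = {
    DynamicUser = true;            # data actually lives under /var/lib/private
    StateDirectory = [ "ollama" ]; # creates /var/lib/ollama
  };
};
```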

So basically we have a choice here. The thing that would best match NixOS is to define a user for this to run as (see modules/services/gitlab-runner.nix, modules/services/hercules-ci-agent/default.nix, modules/services/ofborg/default.nix, and modules/services/buildkite-agents.nix for examples), set the daemon to run as that user, and use paths under /var/lib here (although I don’t think that directory is commonly used on macOS). (I believe the current state of the PR would run Ollama as root, which is definitely not what we want.)

However, defining users on macOS is a bit annoying – they show up in System Settings and can potentially cause fuss on system upgrades unless you use a ~100 UID range that Apple keeps encroaching on and that Nix already puts 32 users in. So an alternative would be to define a launchd.agent instead and default this stuff to $HOME/.ollama or $HOME/Library/Application Support/ollama or something. That would isolate Ollama less than the NixOS module does, and mean that every interactive user gets their own copy of the Ollama data, but I don’t know if that really matters.

Ideally we’d have a better story for defining daemon users, but for now the latter approach may be more practical.

@Velnbur (Author) commented:

Okay, it seems I'll bring launchd.user.agents back and use the same default as ollama: $HOME/.ollama/models.


home = lib.mkOption {
type = types.str;
default = "%S/ollama";
@Velnbur (Author) commented:

The same here

@Velnbur (Author) commented Jun 16, 2024

First of all, thank you all for your responses!

> would you be willing to consider basing this module on that one?

Yeah, as mentioned in issue #972, I was looking for something similar to what I found in the NixOS docs, so it makes total sense.

> If that (understandably) seems like too much fuss, though, I think we could achieve basic compatibility by removing the exec option and ensuring that we match NixOS’s defaults

I did so by adding some of the options. With others, like sandbox, writablePaths, and openFirewall, I would need some help, as I couldn't find how other services in nix-darwin achieved similar functionality.

Maybe I should add stub options that emit warnings and aren't used, to keep this compatible with the existing NixOS module for now.

@emilazy (Collaborator) commented Jun 16, 2024

Thanks! I think you can just omit sandbox, writablePaths, and openFirewall. macOS doesn’t provide the flexible service sandboxing that systemd does on Linux, and I don’t know if we can even control the macOS firewall programmatically like that. Generally when we can’t provide functionality from a NixOS module we don’t offer the options at all, especially in this case where they may be security‐relevant. Sometimes we use mkRemovedOptionModule (you can grep for equivalent to this NixOS option to see examples), but I wouldn’t bother blocking merge on that.
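For completeness, the mkRemovedOptionModule pattern mentioned above looks roughly like this (option paths and messages here are examples only, not taken from the PR):

```nix
# Illustrative use of lib.mkRemovedOptionModule for NixOS options that
# have no Darwin equivalent. Paths and wording are examples only.
{ lib, ... }:

{
  imports = [
    (lib.mkRemovedOptionModule [ "services" "ollama" "sandbox" ]
      "There is no equivalent to this NixOS option on Darwin.")
    (lib.mkRemovedOptionModule [ "services" "ollama" "openFirewall" ]
      "There is no equivalent to this NixOS option on Darwin.")
  ];
}
```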

@Velnbur force-pushed the master branch 2 times, most recently from 6dbf183 to dd4e6fb on June 28, 2024
@Velnbur (Author) commented Jun 28, 2024

@emilazy Currently I'm a bit confused: directly setting models to $HOME/.ollama/models didn't work (as I did in 6891635), but setting it to a specific path did. As I understand it, launchd doesn't expand shell variables in .plist config files, or maybe I'm missing something.

I can't find an example in the nix-darwin repo where $HOME was used in EnvironmentVariables. So I suggest doing the same as for ipfs.dataDir: make it null by default, so OLLAMA_MODELS won't be specified at all and ollama will fall back to its default value by itself.
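A sketch of that null-by-default approach (illustrative, mirroring the ipfs.dataDir pattern rather than the PR's final code):

```nix
# Illustrative sketch: with a null default, OLLAMA_MODELS is simply not
# set, and ollama falls back to its own default ($HOME/.ollama/models).
models = lib.mkOption {
  type = lib.types.nullOr lib.types.str;
  default = null;
  description = ''
    Path to the models directory. When null, OLLAMA_MODELS is left
    unset and ollama uses its built-in default.
  '';
};
```

On the launchd side, the variable would then only be emitted when set, e.g. `serviceConfig.EnvironmentVariables = lib.optionalAttrs (cfg.models != null) { OLLAMA_MODELS = cfg.models; };`.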

P.S.: If the final approach is okay, do I need to squash the commits?

rgruyters added a commit to rgruyters/dotfiles that referenced this pull request Sep 20, 2024
Required due to missing services enable option on Darwin

Ref: LnL7/nix-darwin#972
@jyp commented Nov 21, 2024

Is this PR still current? Any alternative for an ollama service?

@tqwewe commented Dec 20, 2024

@jyp you can add this to your system flake by copying the contents of ollama.nix (in this PR) to a local ollama.nix file, and importing it into your system flake.
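Concretely, that workaround might look like this in a flake. The hostname and file layout are placeholders:

```nix
# flake.nix sketch: import a local copy of this PR's module.
# "my-mac" and the file paths are placeholders.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    nix-darwin.url = "github:LnL7/nix-darwin";
    nix-darwin.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { self, nix-darwin, ... }: {
    darwinConfigurations."my-mac" = nix-darwin.lib.darwinSystem {
      modules = [
        ./ollama.nix                       # copied from this PR
        { services.ollama.enable = true; }
        ./configuration.nix
      ];
    };
  };
}
```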

@tqwewe commented Dec 20, 2024

Is it possible for ollama to run on the GPU on Intel Macs? I tried simply setting OLLAMA_INTEL_GPU = "true"; but that didn't seem to work.

@Velnbur (Author) commented Dec 20, 2024

@tqwewe It seems to be experimental:

https://github.com/ollama/ollama/blob/290cf2040af072812b4b29e61aee773310917f62/envconfig/config.go#L164

Also, I'm on M2, so I can't figure it out either

Successfully merging this pull request may close these issues.

Add ollama service
5 participants