
docs: add execution permission to the llama-server when running tabby… #2549

Merged
merged 1 commit into TabbyML:main on Jun 29, 2024

Conversation

Contributor

@Srkl Srkl commented Jun 29, 2024

… on Linux

The user should give execution permission to llama-server when running Tabby as a standalone install on Linux. Otherwise, the application will crash with a permission error:

```sh
The application panicked (crashed).
Message:  Failed to start llama-server <embedding> with command Command { std: "/home/<user>/tabby/dist/tabby/llama-server" "-m" "/home/<user>/.tabby/models/TabbyML/Nomic-Embed-Text/ggml/model.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true }: Permission denied (os error 13)
Location: crates/llama-cpp-server/src/supervisor.rs:80
```

Summary by CodeRabbit

  • Documentation
    • Updated installation instructions for Linux to include making llama-server executable alongside tabby.
Contributor

coderabbitai bot commented Jun 29, 2024

Walkthrough

The recent change updates the installation instructions for Tabby on Linux. Specifically, it modifies the command for making a file executable by adding llama-server to the chmod +x tabby command. This change enhances clarity and ensures the correct files have executable permissions when setting up Tabby.

Changes

| File(s) | Change Summary |
| --- | --- |
| website/docs/quick-start/installation/linux/index.mdx | Updated the Linux installation instructions to include llama-server in the chmod +x tabby command. |
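In practice, the documented step now amounts to something along these lines after extracting the standalone release; the directory below mirrors the path in the panic message above and may differ on other installs:

```sh
# Make both the tabby and llama-server binaries executable
# (directory is illustrative; use wherever the standalone files were extracted).
cd ~/tabby/dist/tabby
chmod +x tabby llama-server
```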

Poem

In the realm of code, where servers play,
A change took root, clear as day,
The llama joined, in mode so grand,
Executable now, with a simple command.
A step was shared, for all to see,
Linux setup, now hassle-free. 🌟



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between commits 6ccbcdc and 1518c89.

Files selected for processing (1)
  • website/docs/quick-start/installation/linux/index.mdx (2 hunks)
Additional comments not posted (1)
website/docs/quick-start/installation/linux/index.mdx (1)

34-34: LGTM!

The addition of llama-server to the chmod +x command ensures that both tabby and llama-server files are made executable, preventing permission errors.

However, please verify that the llama-server file exists in the repository.

@wsxiaoys
Member

Thanks for the PR

@wsxiaoys wsxiaoys merged commit 984d522 into TabbyML:main Jun 29, 2024
1 check passed