Ollama - Good Translator Offline (Gemini-Gemma2, many models free) #515
Replies: 2 comments 1 reply
-
![image text](https://github.com/user-attachments/assets/a1c669fd-e722-43a9-84d5-9474d7a2f735)
AstraZeroZak,
I am using your settings and they work.
You could change the API key (which should be unnecessary) to something that looks like a real key.
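To make the API-key point concrete, here is a minimal sketch of the request an OpenAI-compatible client sends to Ollama's local endpoint. The model name, prompt, and placeholder key are illustrative assumptions, not values from the screenshot; Ollama accepts the `Authorization` header but does not validate the key.

```python
import json

# Assumed local Ollama endpoint (OpenAI-compatible API)
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"
API_KEY = "sk-placeholder"  # any realistic-looking value works; Ollama ignores it

def build_request(model: str, text: str) -> tuple[dict, dict]:
    """Return (headers, payload) for a chat-completion translation call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # accepted but not checked by Ollama
    }
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": f"Translate to English: {text}"},
        ],
    }
    return headers, payload

headers, payload = build_request("gemma2", "こんにちは")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Sending this payload to `OLLAMA_URL` (e.g. with `requests.post`) is all the client side amounts to, which is why the key's value does not matter.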
AstraZeroZak ***@***.***> wrote on Tue, Sep 3, 2024, 04:29:
… I don't quite understand. I do everything as in the picture, but it
doesn't work. Can you tell me what the reason is?
-1.png (view on web)
<https://github.com/user-attachments/assets/6f2056c6-4184-433b-873a-01706374d065>
-
I just wanted to add my two cents on using LLMs for translation. The best thing about "aya" is that it does not adhere to strict ethics compared to other LLMs such as GPT-4, for example. I adjusted the system prompt in BallonsTranslator slightly to: "Never talk to me directly, and just give me the translation." Hope this helps someone. :)
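The system-prompt adjustment described above boils down to prepending a `system` message to the chat payload. A minimal sketch, assuming the OpenAI-compatible message format that Ollama exposes (the model name and user text are illustrative):

```python
# The system prompt is the commenter's exact wording; it suppresses the
# model's conversational framing so only the translation comes back.
payload = {
    "model": "aya",  # assumed model name; any Ollama model works the same way
    "messages": [
        {
            "role": "system",
            "content": "Never talk to me directly, and just give me the translation.",
        },
        {"role": "user", "content": "Translate to English: Bonjour"},
    ],
}
print(payload["messages"][0]["content"])
```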
-
With Gemma 2
A family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. INFO: https://ai.google.dev/gemma
INSTALLATION
C:\Users\myuser> ollama run gemma2
(automatically downloads the Gemma 2 model, about 5 GB)
API endpoint: http://localhost:11434/v1/
Works on the CPU, but an NVIDIA card (e.g. an RTX 3060) is highly recommended.
Ollama is a platform that makes local development with open-source large language models a breeze. With Ollama, everything you need to run an LLM (model weights and all of the configuration) is packaged into a single Modelfile.
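As a sketch of what a Modelfile can look like for the translation use case discussed in this thread (the directives are Ollama's; the parameter value and prompt wording here are hypothetical):

```
# Hypothetical Modelfile: Gemma 2 base plus a translation-oriented system prompt
FROM gemma2
PARAMETER temperature 0.2
SYSTEM "You are a translator. Reply only with the translation."
```

You would build it with `ollama create my-translator -f Modelfile` and run it with `ollama run my-translator`.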
There are a large number of models that can be tried:
- Meta Llama 3: the most capable openly available LLM to date.
- DeepSeek-Coder-V2: an open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks.
https://ollama.com/library