Commit
minors
JimmyZou committed Apr 30, 2024
1 parent 8f8628b commit 773bf36
Showing 2 changed files with 35 additions and 2 deletions.
9 changes: 7 additions & 2 deletions src/blog_multimodal_fundation.md
@@ -15,9 +15,14 @@
- knowledge distill

---
## Some tools
## Some Popular Papers

#### Efficiently Modeling Long Sequences with Structured State Spaces [[pdf]](https://arxiv.org/pdf/2111.00396)
_Albert Gu, Karan Goel, and Christopher Ré_

#### Mamba: Linear-Time Sequence Modeling with Selective State Spaces [[pdf]](https://arxiv.org/pdf/2312.00752)
_Albert Gu and Tri Dao_

- [run llama2 locally](https://www.linkedin.com/pulse/three-steps-run-llama-2-7b-chat-model-any-cpu-machine-nirmal-patel-3pw9f/)

---
## Foundation Models
28 changes: 28 additions & 0 deletions src/running_with_llm_win11.md
@@ -0,0 +1,28 @@
## Install Docker + WSL + Ubuntu + Nvidia on Win11
[link](https://blog.csdn.net/godblesstao/article/details/135893429)
1. Install Docker Desktop
2. Install WSL2 (Ubuntu 22.04)
3. Update the Win11 Nvidia driver and CUDA (>= 12.0) via Nvidia GeForce Experience
4. Install MobaXterm and add an Ubuntu 22.04 session
5. Install Docker inside Ubuntu 22.04
6. Install the Nvidia Container Toolkit
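
After step 6, the GPU passthrough can be verified from inside WSL with a throwaway CUDA container (a quick sanity check; the `nvidia/cuda:12.0.0-base-ubuntu22.04` tag is just one example, any recent CUDA base image works):

```shell
# Run nvidia-smi inside a disposable CUDA container. If the driver, WSL2,
# and the Nvidia Container Toolkit are wired up correctly, this prints the
# GPU table and exits; the container is removed afterwards (--rm).
sudo docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```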

## Both run in Docker
### [Ollama](https://ollama.com/)
```shell
sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
sudo docker exec -it ollama ollama run llama3
```
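
Once the container is up, Ollama also exposes a REST API on the mapped port 11434, so the model can be queried without the interactive shell (a minimal sketch; the prompt here is only an example):

```shell
# Query the llama3 model via Ollama's /api/generate endpoint.
# "stream": false returns one JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```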
### [OpenWebUI](https://github.com/open-webui/open-webui)
```shell
sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
Then open http://localhost:3000 in a browser.

## [Llama3](https://github.com/meta-llama/llama3)
```shell
torchrun --nproc_per_node 1 example_chat_completion.py \
--ckpt_dir Meta-Llama-3-8B-Instruct/ \
--tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
--max_seq_len 512 --max_batch_size 6
```
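
The torchrun command above assumes the repository is installed and the weights are already in place; a sketch of that setup, assuming access to the weights has been granted on Meta's site:

```shell
# Clone the repo and install its dependencies.
git clone https://github.com/meta-llama/llama3
cd llama3
pip install -e .
# download.sh prompts for the signed download URL from Meta's approval
# email and fetches the requested checkpoint (e.g. Meta-Llama-3-8B-Instruct)
# into the working directory.
./download.sh
```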
