llm.rs

Migration of Karpathy's llm.c project into Rust

Development Process

The following steps were taken to migrate llm.c into Rust:

1. Utilizing c2rust

Using c2rust, train_gpt2.c from Karpathy's llm.c project was transpiled into Rust.
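
For illustration, c2rust output tends to look like the sketch below: raw pointers, C-style signatures, and explicit index arithmetic carried over directly from the C source. This is a hypothetical example written in the style of the transpiler's output, not an excerpt from the generated train_gpt2.rs.

    // Hypothetical example in the style of c2rust output (not taken from the
    // generated sources): an element-wise add expressed exactly as in C,
    // with raw pointers and a manually incremented index.
    pub unsafe extern "C" fn vec_add(out: *mut f32, a: *const f32, b: *const f32, n: i32) {
        let mut i: i32 = 0;
        while i < n {
            *out.offset(i as isize) = *a.offset(i as isize) + *b.offset(i as isize);
            i += 1;
        }
    }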

2. Utilizing GPT-4

Although the c2rust transpilation succeeded, it turned every for loop into a while loop.

Using GPT-4, we converted these while loops back into for loops.
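
Applied to a c2rust-style loop such as the hypothetical vec_add above, the rewrite looks roughly like this (again illustrative only, not taken from the actual sources):

    // The same hypothetical vec_add after the while loop has been rewritten
    // as a for loop over the index range. Illustrative only.
    pub unsafe extern "C" fn vec_add_for(out: *mut f32, a: *const f32, b: *const f32, n: i32) {
        for i in 0..n as isize {
            *out.offset(i) = *a.offset(i) + *b.offset(i);
        }
    }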

3. Utilizing Mate

Furthermore, using Mate, we converted some of these for loops into iterator calls parallelized with the Rayon library.
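
As a sketch of what such a conversion looks like, a safe, slice-based version of the element-wise loop above can be parallelized with Rayon as follows (hypothetical function names; assumes rayon is listed in Cargo.toml):

    // Illustrative only: a safe, slice-based element-wise add, first as a
    // plain for loop and then as a Rayon parallel iterator that splits the
    // work across worker threads.
    use rayon::prelude::*;

    fn vec_add_loop(out: &mut [f32], a: &[f32], b: &[f32]) {
        for i in 0..out.len() {
            out[i] = a[i] + b[i];
        }
    }

    fn vec_add_rayon(out: &mut [f32], a: &[f32], b: &[f32]) {
        out.par_iter_mut()
            .zip(a.par_iter().zip(b.par_iter()))
            .for_each(|(o, (&x, &y))| *o = x + y);
    }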

4. Manual Updates

Currently, the project is undergoing manual updates to find further performance improvements.

Performance

Currently, this implementation is still slower than the C version on some machines, based on the following benchmarks:

Machine                           C        Rust     C++      Mojo
Intel Core i7-9700 (8-core)       2.447s   1.251s   -        -
Intel Xeon E5-2690 v3 (12-core)   2.110s   2.439s   1.037s   6.190s

LLM Training Results

Quick Start

Install the Python dependencies, produce the tokenized dataset, and load in the weights:

make setup

Run the training script:

make train

This will run cargo build --release in the llm-rs cargo project, after which the binary is copied into the main project folder.

TODO

  • Fix types to remove unnecessary casts
  • Restructure the training script for improved readability
  • Implement the latest version of the tokenizer
  • Implement the latest version of the data loader
  • Improve speed to match the performance of the C implementation
  • Migrate the testing script
  • Fix tinystories dataset download
