Is it possible to add an implementation of the GPT-3 Tokenizer? #62

Open

OneSeven opened this issue Feb 8, 2023 · 11 comments
Labels
enhancement New feature or request

Comments

OneSeven commented Feb 8, 2023

It would be great to have a Go implementation of this functionality: https://platform.openai.com/tokenizer

sashabaranov (Owner)

Related: https://github.com/openai/tiktoken

OneSeven (Author) commented Feb 8, 2023

> Related: https://github.com/openai/tiktoken

Thanks, but I think I need a library that can be called from Go.

sashabaranov (Owner)

@OneSeven Sure, I mean we would either need to embed this library (via cgo or otherwise) or translate it from Rust to Go.

OneSeven (Author) commented Feb 8, 2023

> @OneSeven Sure, I mean we would either need to embed this library (via cgo or otherwise) or translate it from Rust to Go.

Do you have plans to add this functionality to the current SDK?
I would love to contribute, but my skill level is far from sufficient, sorry.

sashabaranov (Owner)

There's no plan for that right now, but we are open to contributions 😄

I guess you can also call github.com/openai/tiktoken as a separate binary from Go.
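For anyone who wants to try that route, here's a rough sketch. tiktoken itself ships no CLI, so this assumes a hypothetical count_tokens.py wrapper that reads text on stdin and prints a token count using tiktoken's actual Python API, e.g. print(len(tiktoken.get_encoding("cl100k_base").encode(sys.stdin.read()))):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strconv"
	"strings"
)

// countTokens pipes text into the hypothetical count_tokens.py wrapper
// around tiktoken and parses the token count it prints to stdout.
func countTokens(text string) (int, error) {
	cmd := exec.Command("python3", "count_tokens.py")
	cmd.Stdin = strings.NewReader(text)
	out, err := cmd.Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, err := countTokens("Hello, world!")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("tokens:", n)
}

Spawning a process per call is slow, though, so a long-running worker process or a proper in-process port would be preferable in practice.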

ealvar3z (Contributor) commented Feb 9, 2023

> @OneSeven Sure, I mean we would either need to embed this library (via cgo or otherwise) or translate it from Rust to Go.

Isn't this library in Python? And if we're porting it, how would you prefer the port to be scaffolded in your repo? Would it live in a separate repo that go-gpt3 then imports? In other words, I'm trying to understand your vision for whether porting it from Python to Go is feasible.

marcel commented Feb 9, 2023

There's a go library already: https://github.com/samber/go-gpt-3-encoder
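A minimal usage sketch, assuming the NewEncoder/Encode API shown in that repo's README (the exact signatures may have drifted since):

package main

import (
	"fmt"
	"log"

	tokenizer "github.com/samber/go-gpt-3-encoder"
)

func main() {
	// NewEncoder loads the GPT-3 BPE vocabulary bundled with the library.
	encoder, err := tokenizer.NewEncoder()
	if err != nil {
		log.Fatal(err)
	}
	encoded, err := encoder.Encode("This is a test sentence.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token count:", len(encoded))
}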

OneSeven (Author) commented Feb 9, 2023

> There's a go library already: https://github.com/samber/go-gpt-3-encoder

This library only works for English text; it doesn't produce correct results for other languages.

sashabaranov (Owner)

@ealvar3z It's Rust wrapped in Python https://github.com/openai/tiktoken/blob/main/src/lib.rs

If it's possible to bring in tokenization with zero (or minimal) dependencies, I'm all for merging it. Otherwise, I think it makes sense to implement it in a separate repo.

vvatanabe (Collaborator)

Good example of how to count tokens:

GwynethLlewelyn commented Mar 30, 2024

Since the original issue was opened, there has been some progress!

The documentation on the official OpenAI repository currently points to pkoukk/tiktoken-go as the Go library for tokenizing (no endorsements, just a link).

You can see from its test script that it handles tokens in different languages and alphabets. It might still get things wrong, but at least it's wrong in the same ways as the official OpenAI Python version!
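For illustration, here's a minimal usage sketch based on pkoukk/tiktoken-go's README at the time of writing (GetEncoding, then Encode with allowed/disallowed special-token lists); treat the exact signatures as an assumption:

package main

import (
	"fmt"
	"log"

	"github.com/pkoukk/tiktoken-go"
)

func main() {
	// cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4.
	tke, err := tiktoken.GetEncoding("cl100k_base")
	if err != nil {
		log.Fatal(err)
	}
	// The two nil arguments are the allowed / disallowed special-token lists.
	tokens := tke.Encode("お誕生日おめでとう", nil, nil)
	fmt.Println("token count:", len(tokens))
}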

Dependencies currently listed by its go.mod:

module github.com/pkoukk/tiktoken-go

go 1.19

require (
	github.com/dlclark/regexp2 v1.10.0
	github.com/google/uuid v1.3.0
	github.com/stretchr/testify v1.8.2
)

require (
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)

It's not "zero" dependencies as you'd prefer, but close! I haven't looked into the code very deeply.

The dependency on google/uuid is pretty standard; one wonders why the Go core developers haven't incorporated it into the Go standard library yet (it has a few quirks, but since it comes from Google itself, I guess it's OK to use).

The inclusion of dlclark/regexp2, as opposed to the standard regexp package built on Google's RE2 engine, is very likely because the former closely follows the .NET regex behaviour, which may be a requirement for the tokenizer to produce the same results as tiktoken.

And stretchr/testify is evidently only used for the testing bits; it has no relevance to the overall tokenizer code itself.

Performance, according to the published benchmarks (e.g., those included in its test suite), seems to be the same as the original Python code.

I think you've got your tiktokenizer candidate! 😀
