Release 1.2.0 #11
Merged
Conversation
…ion.
- Created the Generator class.
- Created the Model_Type enum class.
- Created the Translator class.

model.py:
- Removed Config and reload_config.
- Extracted the generate_text method into the Generator class.
- Added the model_type and AutoModelForSeq2SeqLM imports.
- Updated __init__ to receive a parameter specifying the model's type.
- Updated the _load and _download methods to use either AutoModelForCausalLM or AutoModelForSeq2SeqLM, depending on the _model_type attribute's value.

server.py:
- Added the Generator and Model_Type imports.
- Renamed model to generator.
- Replaced the Model class with the Generator class.
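The Generator/Model_Type split described above can be sketched roughly as follows. This is a minimal sketch, not the PR's actual code: the method _auto_class_name and the enum member names are assumptions, and where the real _load/_download methods would instantiate transformers' AutoModelForCausalLM or AutoModelForSeq2SeqLM, the sketch only records which class applies so it runs without the library.

```python
from enum import Enum


class Model_Type(Enum):
    """Distinguishes decoder-only models from encoder-decoder models."""
    CAUSAL_LM = "causal_lm"      # loaded with AutoModelForCausalLM
    SEQ2SEQ_LM = "seq2seq_lm"    # loaded with AutoModelForSeq2SeqLM


class Generator:
    def __init__(self, model_name, model_type):
        self._model_name = model_name
        self._model_type = model_type

    def _auto_class_name(self):
        # The real _load/_download would pick a transformers Auto class
        # here; the sketch only reports which one the enum selects.
        if self._model_type is Model_Type.SEQ2SEQ_LM:
            return "AutoModelForSeq2SeqLM"
        return "AutoModelForCausalLM"
```

Dispatching on an enum keeps the model-type decision in one place instead of scattering string comparisons through _load and _download.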
- Added the sentencepiece pip package.
request.py:
- Added the TEXT_TRANSLATION request.

server.py:
- Imported translator.
- Added a translator object.
- Updated the handler function.
- Replaced the return p_data inside each if branch with a single return at the end.
- Added global translator.
- Added the ability to load either a generation or a translation model.
- Added a TEXT_TRANSLATION case.
- Replaced translator with from_eng_translator and to_eng_translator, since the server needs to translate in both directions.
- Added a translate function.
- Fixed shutdown_server's indentation.
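The two-translator setup could be sketched as below. How the real server chooses a direction is not stated in the commit message, so the to_english flag and the stub translator are assumptions.

```python
class _StubTranslator:
    """Stand-in for the real Translator so the sketch is runnable."""
    def __init__(self, tag):
        self._tag = tag

    def translate_text(self, text):
        return self._tag + ": " + text


from_eng_translator = _StubTranslator("from_eng")  # English -> other language
to_eng_translator = _StubTranslator("to_eng")      # other language -> English


def translate(text, to_english):
    # Pick the translator matching the requested direction.
    translator = to_eng_translator if to_english else from_eng_translator
    return translator.translate_text(text)
```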
translator:
- translate_text now returns a string instead of a list.

server:
- Renamed translate to translate_text.
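The list-to-string change likely amounts to unwrapping a batch decode result, which yields a list of strings even for a single input. A minimal sketch, assuming the decoded list comes from something like tokenizer.batch_decode:

```python
def translate_text(decoded):
    """decoded is a batch-decode style result: a list of strings.
    For a single input sentence, return the one translation as a
    string instead of returning the whole list."""
    return decoded[0]
```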
- Removed sentencepiece.
- Bumped torch from 1.10.2 to 1.11.
- Bumped transformers from 4.16.2 to 4.18.
Downgraded transformers from 4.18 to 4.16, since 4.16 is the latest version available on the conda-forge channel.
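The resulting pins could be captured in a conda environment file along these lines. The file name and channel layout are assumptions; the versions are the ones named above.

```yaml
# environment.yml (hypothetical) - transformers pinned to 4.16,
# the newest build available on conda-forge at the time.
channels:
  - conda-forge
  - pytorch
dependencies:
  - pytorch=1.11
  - transformers=4.16
```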
server:
- Updated handle_request to add more debug messages and to use the use_gpu value for both the generator and the translator.

config:
- Set LOG_FILEMODE's default value to 'w'.
- Set LOG_LEVEL's default value to DEBUG (experimental branch only).
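The logging defaults above could be wired up as in this sketch. The config variable names come from the commit message; the log file name is an assumption.

```python
import logging

LOG_FILEMODE = "w"           # truncate the log file on every start
LOG_LEVEL = logging.DEBUG    # verbose default, experimental branch only

# Apply the defaults; "server.log" is a hypothetical file name.
logging.basicConfig(filename="server.log",
                    filemode=LOG_FILEMODE,
                    level=LOG_LEVEL)
```

With filemode 'w' each server run starts from an empty log, which suits short debugging sessions better than the append default.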
Merge Develop into Auto-Translation
…_Server into Auto_Translation
generator.py:
- Updated generate_text to support the GPU once again.
- Simplified the script by merging all the model inputs into a single dict, model_input.

translator.py:
- Updated translate_text to support CUDA if it is enabled.
- Set use_cache to True.
Auto translation
No description provided.