
Release 1.2.0 #11

Merged
merged 16 commits into from
Jun 19, 2022
Conversation

Lyaaaaaaaaaaaaaaa
Member

No description provided.

…ion.

- Created the Generator class
- Created the Model_Type enum class
- Created the Translator class
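The three new classes above might be sketched as follows. Only the class and method names come from the PR; the enum members and method signatures are assumptions for illustration:

```python
from enum import Enum

class Model_Type(Enum):
    # Member names are hypothetical; the PR only names the enum class.
    GENERATION  = 0
    TRANSLATION = 1

class Generator:
    # generate_text was extracted out of the Model class into Generator.
    def generate_text(self, p_prompt: str) -> str:
        raise NotImplementedError

class Translator:
    def translate_text(self, p_text: str) -> str:
        raise NotImplementedError
```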

model.py:
- Removed Config and reload_config.
- Extracted generate_text method to generator class.
- Added the model_type and AutoModelForSeq2SeqLM imports.
- Updated __init__ to receive a parameter specifying the model's type.
- Updated the _load and _download methods to use either AutoModelForCausalLM
or AutoModelForSeq2SeqLM depending on the _model_type attribute's value.
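The _load/_download dispatch described above can be sketched as a small selector. In model.py the real classes would be transformers.AutoModelForCausalLM and transformers.AutoModelForSeq2SeqLM; their names stand in as strings here so the sketch runs without the transformers package, and the enum members are hypothetical:

```python
from enum import Enum

class Model_Type(Enum):
    GENERATION  = 0  # causal LM, e.g. GPT-style text generation
    TRANSLATION = 1  # seq2seq LM, e.g. MarianMT translation models

def auto_class_name(p_model_type: Model_Type) -> str:
    # Real code would return the transformers class itself and call
    # .from_pretrained() on it inside _load/_download.
    if p_model_type is Model_Type.GENERATION:
        return "AutoModelForCausalLM"
    return "AutoModelForSeq2SeqLM"
```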

server.py:
- Added the Generator and Model_Type imports.
- Renamed model to generator.
- Replaced the Model class with the Generator class.
- Added the sentencepiece pip package.
request.py:
- Added TEXT_TRANSLATION request.
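The new request type might look like this in request.py. Only TEXT_TRANSLATION is named by the PR; the other members and values are assumptions:

```python
from enum import Enum

class Request(Enum):
    TEXT_GENERATION  = 0  # assumed pre-existing member
    TEXT_TRANSLATION = 1  # added in this release
    SHUTDOWN         = 2  # assumed pre-existing member
```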

server.py:
- Imported translator
- Added translator object
- Updated handler function.
  - Replaced the return p_data in each if branch with a single return at the end.
  - Added global translator.
- Added the possibility to load either a generation or translation
model.
  - Added a TEXT_TRANSLATION case.
- Replaced translator with from_eng_translator and to_eng_translator, as
the server needs to translate in both directions.
- Added a translate function.
- Fixed shutdown_server's indentation.
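The handler restructuring described above (branch-local returns collapsed into one, plus a TEXT_TRANSLATION case) might be sketched like this. The placeholder outputs and string request names are hypothetical; the real handler would call the generator and translator objects:

```python
def handle_request(p_data: dict) -> dict:
    # Each branch used to `return p_data` on its own; now every branch
    # falls through to the single return at the end.
    request = p_data.get("request")
    if request == "TEXT_GENERATION":
        p_data["text"] = "generated"   # placeholder for generator.generate_text()
    elif request == "TEXT_TRANSLATION":
        p_data["text"] = "translated"  # placeholder for the translate function
    return p_data
```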
translator:
- translate_text now returns a string instead of a list.

server:
- Renamed translate to translate_text.
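A likely shape for the list-to-string change in translate_text: transformers decoding typically yields a list of strings, and the function now joins it into one string. The join logic is an assumption, not confirmed by the PR:

```python
def translate_text(p_output) -> str:
    # Previously the raw decoded list was returned; now a plain string is.
    if isinstance(p_output, list):
        return " ".join(str(part) for part in p_output)
    return str(p_output)
```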
Removed sentencepiece
torch from 1.10.2 to 1.11
transformers from 4.16.2 to 4.18
transformers then reverted from 4.18 to 4.16 because 4.16 is the latest
version available in the conda-forge channel.
server:
- Updated handle_request to add more debug messages and to use the
use_gpu value for both the generator and translator.

config:
- Set LOG_FILEMODE default value to 'w'
- Set LOG_LEVEL default value to DEBUG (Experimental branch only)
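The two config defaults above might look like this in the config module. The module layout and the basicConfig usage note are assumptions; only the two values are from the PR:

```python
import logging

LOG_FILEMODE = 'w'            # overwrite the log file on every start
LOG_LEVEL    = logging.DEBUG  # experimental branch only

# The server would then pass these to logging, e.g.:
# logging.basicConfig(filename=..., filemode=LOG_FILEMODE, level=LOG_LEVEL)
```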
Merge Develop into Auto-Translation
generator.py:
- Updated generate_text to support the GPU once again. Simplified the
script by merging all the model inputs into a single dict, "model_input".

translator.py:
- Updated translate_text to support cuda if it is enabled.
- Set use_cache to True.
@Lyaaaaaaaaaaaaaaa Lyaaaaaaaaaaaaaaa merged commit b21c634 into main Jun 19, 2022
@Lyaaaaaaaaaaaaaaa Lyaaaaaaaaaaaaaaa deleted the Develop branch August 13, 2022 07:14