Add way of skipping pretrained weights download #5172
Merged
Fixes #4599.
Changes proposed in this pull request:
- Adds a `load_weights: bool` parameter (default = `True`) to `cached_transformers.get()` and all higher-level modules that call this function, such as `PretrainedTransformerEmbedder` and `PretrainedTransformerMismatchedEmbedder`. Setting this parameter to `False` avoids downloading and loading the pretrained transformer weights, so only the architecture is instantiated. This means you can set the parameter to `False` via the `overrides` parameter when loading an AllenNLP model/predictor from an archive to avoid an unnecessary download.

For example, suppose your training config looks something like this:
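(A minimal sketch, assuming a text-classification setup with a `pretrained_transformer` embedder under the `tokens` key; the dataset reader, model type, transformer name, and data paths here are placeholders.)

```jsonnet
{
  "dataset_reader": {
    "type": "text_classification_json",
    "tokenizer": {
      "type": "pretrained_transformer",
      "model_name": "bert-base-cased"
    },
    "token_indexers": {
      "tokens": {
        "type": "pretrained_transformer",
        "model_name": "bert-base-cased"
      }
    }
  },
  "train_data_path": "path/to/train.jsonl",
  "model": {
    "type": "basic_classifier",
    "text_field_embedder": {
      "token_embedders": {
        "tokens": {
          "type": "pretrained_transformer",
          "model_name": "bert-base-cased"
        }
      }
    },
    "seq2vec_encoder": {
      "type": "cls_pooler",
      "embedding_dim": 768
    }
  },
  "data_loader": {
    "batch_size": 8,
    "shuffle": true
  },
  "trainer": {
    "optimizer": {
      "type": "adam",
      "lr": 1e-5
    },
    "num_epochs": 3
  }
}
```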
Now suppose you have an archive, `model.tar.gz`, from training this model. Then you can load the trained model into a predictor like so:
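A minimal sketch, assuming the hypothetical config above; the dotted override key follows the `model.text_field_embedder.token_embedders.tokens` path from that config, so adjust it to wherever the `pretrained_transformer` embedder sits in your own config:

```python
from allennlp.predictors import Predictor

# Setting the embedder's `load_weights` flag to false via `overrides` means the
# transformer architecture is instantiated without downloading the pretrained
# checkpoint; the trained weights still come from the archive itself.
predictor = Predictor.from_path(
    "model.tar.gz",
    overrides='{"model.text_field_embedder.token_embedders.tokens.load_weights": false}',
)
```

The weights stored in `model.tar.gz` are loaded as usual; only the redundant download of the original pretrained checkpoint is skipped.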