All notable changes to the project are documented in this file.
Version numbers are of the form `1.0.0`.
Any version bump in the last digit is backwards-compatible, in that a model trained with the previous version can still
be used for translation with the new version.
Any bump in the second digit indicates a backwards-incompatible change,
e.g. due to changing the architecture or simply modifying model parameter names.
Note that Sockeye has checks in place to not translate with an old model that was trained with an incompatible version.
Each version section may have subsections for: Added, Changed, Removed, Deprecated, and Fixed.
- Removed use of `expand_dims` in favor of `reshape` to save memory.
- Fixed default setting of source factor combination to be 'concat' for backwards compatibility.
- Sockeye now outputs fields found in a JSON input object, if they are not overwritten by Sockeye. This behavior can be enabled by selecting `--json-input` (to read input as a JSON object) and `--output-type json` (to write a JSON object to output).
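  A rough illustration of this passthrough behavior (the field names besides the text itself are hypothetical, and the exact output schema is not specified here):

  ```python
  import json

  # Hypothetical input object for --json-input; field names other than "text"
  # are illustrative assumptions, not Sockeye's documented schema.
  input_obj = {"text": "das ist ein Test", "id": 42, "domain": "news"}

  # Conceptually, the output object starts from the input object and Sockeye
  # overwrites or adds its own fields (e.g. the translation):
  output_obj = dict(input_obj)
  output_obj["translation"] = "this is a test"  # illustrative field name
  print(json.dumps(output_obj))  # "id" and "domain" are passed through untouched
  ```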
- Source factors can now be added to the embeddings instead of concatenated with `--source-factors-combine sum` (default: concat).
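  The difference between the two combination modes can be sketched as follows; note that the requirement that factor embedding sizes match the word embedding size for `sum` is an inference of this sketch, not a statement from the changelog:

  ```python
  import numpy as np

  num_embed, factor_embed = 512, 8
  word_vec = np.random.rand(num_embed)

  # concat (default): factor embeddings extend the embedding dimension.
  combined = np.concatenate([word_vec, np.random.rand(factor_embed)])  # shape (520,)

  # sum: factor embeddings are added element-wise, so here they are sized
  # to match the word embedding dimension.
  combined = word_vec + np.random.rand(num_embed)                      # shape (512,)
  ```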
- Fixed training crashes with the `--learning-rate-decay-optimizer-states-reset initial` option.
- Added `fertility` as a further type of attention coverage.
- Added an option for training to keep the initializations of the model via `--keep-initializations`. When set, the trainer will avoid deleting the params file for the first checkpoint, no matter what `--keep-last-params` is set to.
- Fix to the argument names that are allowed to differ when resuming training.
- More informative error message about an inconsistent `--shared-vocab` setting.
- Added translation sampling via `--sample [N]`. This causes the decoder to sample each next word from the target distribution probabilities at each timestep. An optional value of `N` causes the decoder to sample only from the top `N` vocabulary items for each hypothesis at each timestep (the default is 0, meaning to sample from the entire vocabulary).
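  A minimal sketch of the idea (not Sockeye's actual implementation): at each timestep, the next word id is drawn from the model's distribution, optionally restricted to the top `N` items:

  ```python
  import numpy as np

  def sample_next(probs: np.ndarray, n: int = 0) -> int:
      """Sample the next target word id from a probability vector.
      With n > 0, sample only among the n most probable items."""
      if n > 0:
          top = np.argpartition(probs, -n)[-n:]   # indices of the n largest probs
          p = probs[top] / probs[top].sum()       # renormalize over the top-n
          return int(np.random.choice(top, p=p))
      return int(np.random.choice(len(probs), p=probs))
  ```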
- The checkpoint decoder and nvidia-smi subprocess are now launched from a forkserver, allowing for a better separation between processes.
- Added an option to make `TranslatorInput` objects directly from a dict.
- Updated to MXNet 1.3.1. Removed `requirements/requirements.gpu-cu{75,91}.txt` as CUDA 7.5 and 9.1 are deprecated.
- Performance optimization to skip the softmax operation for single model greedy decoding is now only applied if no translation scores are required in the output.
- Full training state is now returned from `EarlyStoppingTrainer`'s `fit()`.
- Training state cleanup will not be performed for training runs that did not converge yet.
- Switched to portalocker for locking files (Windows compatibility).
- Added nbest translation, exposed as `--nbest-size`. Nbest translation means that not only the most probable translation according to a model is output, but the top n most probable hypotheses. If `--nbest-size > 1` and the option `--output-type` is not explicitly specified, the output type will be changed to one JSON list of nbest translations per line. `--nbest-size` can never be larger than `--beam-size`.
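  A hypothetical consumer of this output format; the changelog only specifies one JSON list of nbest translations per line, so the file name and any richer schema are assumptions of this sketch:

  ```python
  import json

  # Assumed: each output line is a JSON list, most probable hypothesis first.
  with open("translations.nbest") as f:          # hypothetical file name
      for line in f:
          hypotheses = json.loads(line)
          best, alternatives = hypotheses[0], hypotheses[1:]
  ```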
- Changed the `sockeye.rerank` CLI to be compatible with the nbest translation JSON output format.
- Added a `sockeye.score` CLI for quickly scoring existing translations (documentation).
- Entry-point clean-up after the `contrib/` rename.
- Updated to MXNet 1.3.0.post0.
- Renamed `contrib` to the less generic `sockeye_contrib`.
- `--source-factor-vocabs` can be set to provide source factor vocabularies.
- Softmax is now always skipped during greedy decoding by default, but only for single models.
- Added option `--skip-topk` for greedy decoding.
- Fixed bug in constrained decoding to make sure the best hypothesis satisfies all constraints.
- Added a CLI for reranking of an nbest list of translations.
- The check for equivalence of training and validation source factors was incorrectly indented.
- Removed dependence on the nvidia-smi tool. The number of GPUs is now determined programmatically.
- `Translator.max_input_length` now reports the correct maximum input length for `TranslatorInput` objects, independent of the internal representation, where an additional EOS gets added.
- translate CLI: no longer relies on an external, user-given input id for sorting translations. String ids for sentences are now also allowed.
- Fixed issue with `--num-words 0:0` in image captioning and another issue related to loading all features to memory with variable length.
- Added an 8 layer LSTM model similar (but not exactly identical) to the 'GNMT' architecture to autopilot.
- Fixed an issue with `--max-num-epochs` causing training to stop before the update/batch that actually completes the epoch was made.
- `<s>` is now supported as the first token in a multi-word negative constraint (e.g., `<s> I think` to prevent a sentence from starting with `I think`).
- Bugfix in resetting the state of a multiple-word negative constraint
- Simplified gluon blocks for length calculation
- Require numpy 1.14 or later to avoid MKL conflicts between numpy and mxnet-mkl.
- Fixed bad check for existence of negative constraints.
- Resolved conflict for phrases that are both positive and negative constraints.
- Fixed softmax temperature at inference time.
- Image Captioning now supports constrained decoding.
- Image Captioning: zero padding of features now allows input features of different shape for each image.
- Fixed issue with the incorrect order of translations when empty inputs are present and translating in chunks.
- Determining the max output length for each sentence in a batch by the bucket length rather than the actual length, in order to match the behavior of single-sentence translation.
- Updated to MXNet 1.2.1
- ROUGE scores are now available in `sockeye-evaluate`.
- Enabled CHRF as an early-stopping metric.
- Added support for `--beam-search-stop first` for decoding jobs with `--batch-size > 1`.
- Now supports negative constraints, which are phrases that must not appear in the output.
- Global constraints can be listed in a (pre-processed) file, one per line: `--avoid-list FILE`.
- Per-sentence constraints are passed using the `avoid` keyword in the JSON object, with a list of strings as its field value (see the sketch below).
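  For example, a per-sentence negative constraint might be passed like this; the `avoid` field name is from the changelog, while the `text` field and surrounding usage are assumptions of this sketch:

  ```python
  import json

  # Hypothetical JSON input carrying per-sentence negative constraints.
  sentence = {
      "text": "das ist ein Test",          # assumed field name for the source
      "avoid": ["I think", "much better"]  # phrases that must not appear
  }
  print(json.dumps(sentence))  # one such object per input line
  ```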
- Added option to pad the vocabulary to a multiple of x: e.g. `--pad-vocab-to-multiple-of 16`.
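  Padding to a multiple is plain round-up arithmetic; a one-function sketch:

  ```python
  def pad_vocab_size(vocab_size: int, multiple: int = 16) -> int:
      """Round the vocabulary size up to the next multiple, e.g. of 16."""
      return ((vocab_size + multiple - 1) // multiple) * multiple

  assert pad_vocab_size(50_001, 16) == 50_016
  ```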
- Pre-training the RNN decoder. Usage:
  - Train with the flag `--decoder-only`.
  - Feed identical source/target training data.
- Preserving the max output length for each sentence, to allow identical translations both with and without batching.
- No longer restrict the vocabulary to 50,000 words by default, but rather create the vocabulary from all words which occur at least `--word-min-count` times. Specifying `--num-words` explicitly will still lead to a restricted vocabulary.
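  Conceptually, vocabulary construction now works like this minimal sketch (illustration only, not Sockeye's actual code):

  ```python
  from collections import Counter
  from itertools import chain

  def build_vocab(sentences, word_min_count=1, num_words=None):
      """Keep all words occurring at least `word_min_count` times; an explicit
      `num_words` still truncates to the most frequent words (sketch)."""
      counts = Counter(chain.from_iterable(s.split() for s in sentences))
      words = [w for w, c in counts.most_common() if c >= word_min_count]
      return words[:num_words] if num_words is not None else words
  ```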
- Temporarily fixing the pyyaml version to 3.12 as version 4.1 introduced some backwards incompatible changes.
- Fix silent failing of NDArray splits during inference by using a version that always returns a list. This was causing incorrect behavior when using lexicon restriction and batch inference with a single source factor.
- ROUGE score evaluation. It can be used as the stopping criterion for tasks such as summarization.
- Update requirements to use MKL versions of MXNet for fast CPU operation.
- Dockerfiles and convenience scripts for running `fast_align` to generate lexical tables. These tables can be used to create top-K lexicons for faster decoding via vocabulary selection (documentation).
- Updated default top-K lexicon size from 20 to 200.
- Correctly create the convolutional embedding layers when the encoder is set to `transformer-with-conv-embed`. Previously no convolutional layers were added, so a standard Transformer model was trained instead.
- Make sure the default bucket is large enough with word-based batching when the source is longer than the target. (Previously there was an edge case where memory usage was sub-optimal with word-based batching and source sentences longer than the target.)
- Constrained decoding was missing a crucial cast
- Fixed test cases that should have caught this
- Transformer parametrization flags (model size, # of attention heads, feed-forward layer size) can now optionally be defined separately for encoder & decoder. For example, to use a different transformer model size for the encoder, pass `--transformer-model-size 1024:512`.
- LHUC is now supported in transformer models
- [Experimental] Introducing the image captioning module. Type of models supported: ConvNet encoder with Sockeye NMT decoders. It also includes a feature extraction script, an image-text iterator that loads features, training and inference pipelines, and a visualization script that loads images and captions. See this tutorial for its usage. This module is experimental, therefore its maintenance is not fully guaranteed.
- Updated to MXNet 1.2
- Use of the new LayerNormalization operator to save GPU memory.
- Removed summation of gradient arrays when logging gradients. This clogged the memory on the primary GPU device over time when many checkpoints were done. Gradient histograms are now logged to Tensorboard separated by device.
- Added decoding with target-side lexical constraints (documentation in `tutorials/constraints`).
- Introduced Sockeye Autopilot for single-command end-to-end system building. See the Autopilot documentation and run with: `sockeye-autopilot`. Autopilot is a `contrib` module with its own tests that are run periodically. It is not included in the comprehensive tests run for every commit.
- Fixed two bugs with training resumption:
- removed overly strict assertion in the data iterator for model states before the first checkpoint.
- removed deletion of Tensorboard log directory.
- Added support for config files. Command line parameters have precedence over the values read from the config file.
  Minimal working example: `python -m sockeye.train --config config.yaml` with contents of `config.yaml` as follows:
  ```yaml
  source: source.txt
  target: target.txt
  output: out
  validation_source: valid.source.txt
  validation_target: valid.target.txt
  ```
  The full set of arguments is serialized to `out/args.yaml` at the beginning of training (previously JSON was used).
- All source side sequences now have an additional end-of-sentence (EOS) symbol appended. This change is backwards compatible, meaning that inference with older models will still work without the EOS symbol.
- Default training parameters have been changed to reflect the setup used in our arXiv paper. Specifically, the default
is now to train a 6 layer Transformer model with word based batching. The only difference to the paper is that weight
tying is still turned off by default, as there may be use cases in which tying the source and target vocabularies is
not appropriate. Turn it on using `--weight-tying --weight-tying-type=src_trg_softmax`. Additionally, BLEU scores from a checkpoint decoder are now monitored by default.
- Re-allow early stopping w.r.t. BLEU.
- Fixed a problem with LHUC boolean flags passed as `None`.
- Reorganized beam search. Normalization is applied only to completed hypotheses, and pruning of hypotheses (logprob against highest-scoring completed hypothesis) can be specified with `--beam-prune X`.
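  A sketch of the pruning rule, under the assumption that hypothesis scores are log probabilities where higher is better; this is an illustration, not the actual beam search code:

  ```python
  def prune_beam(active, best_completed_logprob, beam_prune):
      """Keep only active hypotheses whose log probability is within
      `beam_prune` of the highest-scoring completed hypothesis."""
      return [(logprob, hyp) for logprob, hyp in active
              if logprob >= best_completed_logprob - beam_prune]
  ```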
- Enabled stopping at the first completed hypothesis with `--beam-search-stop first` (default is 'all').
- Removed tensorboard logging of embedding & output parameters at every checkpoint. This used a lot of disk space.
- Added support for LHUC in RNN models (David Vilar, "Learning Hidden Unit Contribution for Adapting Neural Machine Translation Models" NAACL 2018)
- Fixed word-based batching with very small batch sizes.
- Fixed a problem with learning rate scheduler not properly being loaded when resuming training.
- Fixed a problem with trainer not waiting for the last checkpoint decoder (#367).
- Added options to control training length w.r.t. number of updates/batches or number of samples: `--min-updates`, `--max-updates`, `--min-samples`, `--max-samples`.
- Training now supports training and validation data that contains empty segments. If a segment is empty, it is skipped during loading and a warning message including the number of empty segments is printed.
- Removed combined linear projection of keys & values in source attention transformer layers for performance improvements.
- The topk operator is performed in a single operation during batch decoding instead of running in a loop over each sentence, bringing speed benefits in batch decoding.
- Added Tensorboard logging for all parameter values and gradients as histograms/distributions. The logged values correspond to the current batch at checkpoint time.
- Tensorboard logging is now done with the MXNet-compatible 'mxboard', which supports logging of all kinds of events (scalars, histograms, embeddings, etc.). If installed, training events are written out to Tensorboard-compatible event files automatically.
- Removed the `--use-tensorboard` argument from `sockeye.train`. Tensorboard logging is now enabled by default if `mxboard` is installed.
- Changed the default target vocab name in the model folder to `vocab.trg.0.json`.
- Changed serialization format of top-k lexica to pickle/Numpy instead of JSON. `sockeye-lexicon` now supports two subcommands: create & inspect. The former provides the same functionality as the previous CLI. The latter allows users to pass source words to the top-k lexicon to inspect the set of allowed target words.
- Added ability to choose a smaller `k` at decoding runtime for lexicon restriction.
- Added a flag `--strip-unknown-words` to `sockeye.translate` to remove any `<unk>` symbols from the output strings.
- Added a flag `--fixed-param-names` to prevent certain parameters from being optimized during training. This is useful if you want to keep pre-trained embeddings fixed during training.
- Added a flag `--dry-run` to `sockeye.train` to not perform any actual training, but print statistics about the model and mode of operation.
- `sockeye.evaluate` can now handle multiple hypotheses files by simply specifying `--hypotheses file1 file2...`. For each metric the mean and standard deviation will be reported across files.
- Optionally store the beam search history to a `json` output using the `beam_store` output handler.
- Use the `stack` operator instead of `expand_dims` + `concat` in the RNN decoder. Reduces memory usage.
- Updated to MXNet 1.1.0
- Source factors, as described in Linguistic Input Features Improve Neural Machine Translation (Sennrich & Haddow, WMT 2016) (PDF, bibtex).
  Additional source factors are enabled by passing `--source-factors file1 [file2 ...]` (`-sf`), where file1, etc. are token-parallel to the source (`-s`). An analogous parameter, `--validation-source-factors`, is used to pass factors for validation data. The flag `--source-factors-num-embed D1 [D2 ...]` denotes the embedding dimensions and is required if source factor files are given. Factor embeddings are concatenated to the source embeddings dimension (`--num-embed`).
  At test time, the input sentence and its factors can be passed in via STDIN or command-line arguments (a parsing sketch follows below).
  - For STDIN, the input and factors should be in a token-based factored format, e.g., `word1|factor1|factor2|... w2|f1|f2|... ...`.
  - You can also use file arguments, which mirrors training: `--input` takes the path to a file containing the source, and `--input-factors` a list of files containing token-parallel factors. At test time, an exception is raised if the number of expected factors does not match the factors passed along with the input.
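  A sketch of how such a factored line could be parsed into a word sequence plus token-parallel factor streams (illustrative only; Sockeye's own parsing may differ):

  ```python
  def parse_factored_line(line: str):
      """Split 'word1|factor1|factor2 w2|f1|f2 ...' into words and
      token-parallel factor streams."""
      tokens = [tok.split("|") for tok in line.split()]
      words = [t[0] for t in tokens]
      factors = list(zip(*(t[1:] for t in tokens)))  # one tuple per factor stream
      return words, factors

  words, factors = parse_factored_line("das|ART ist|VERB gut|ADJ")
  # words == ['das', 'ist', 'gut']; factors == [('ART', 'VERB', 'ADJ')]
  ```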
- Removed bias parameters from multi-head attention layers of the transformer.
- Loading/Saving auxiliary parameters of the models. Previously, auxiliary parameters were not saved or used for initialization; therefore the parameters of certain layers (e.g., BatchNorm) were ignored and randomly initialized. This change enables properly loading, saving, and initializing the layers which use auxiliary parameters.
- Device locking: Only one process will be acquiring GPUs at a time. This will lead to consecutive device ids whenever possible.
- Internal change: Standardized all data to be batch-major both at training and at inference time.
- When a device lock file exists and the process has no write permissions for the lock file, we assume that the device is locked. Previously this led to a permission-denied exception. Please note that in this scenario we cannot detect whether the original Sockeye process shut down gracefully. This is not an issue when the Sockeye process has write permissions on existing lock files, as in that case locking is based on file system locks, which cease to exist when a process exits.
- Changed to a custom speedometer that tracks samples/sec AND words/sec. The original MXNet speedometer did not take variable batch sizes due to word-based batching into account.
- Fixed entry points in `setup.py`.
- Updated to MXNet 1.0.0, which adds more advanced indexing features, benefiting the beam search implementation.
- `--kvstore` now accepts the value 'nccl'. Only works if MXNet was compiled with `USE_NCCL=1`.
- `--gradient-compression-type` and `--gradient-compression-threshold` flags to use gradient compression. See the MXNet FAQ on Gradient Compression.
- Taking the BOS and EOS tag into account when calculating the maximum input length at inference.
- Fixed a problem with the `--num-samples-per-shard` flag not being parsed as int.
- New CLI `sockeye.prepare_data` for preprocessing the training data only once before training, potentially splitting large datasets into shards. At training time only one shard is loaded into memory at a time, limiting the maximum memory usage.
- Instead of using the `--source` and `--target` arguments, `sockeye.train` now accepts a `--prepared-data` argument pointing to the folder containing the preprocessed and sharded data. Using the raw training data is still possible and now consumes less memory.
- Optionally apply query, key and value projections to the source and target hidden vectors in the CNN model
before applying the attention mechanism. CLI parameter: `--cnn-project-qkv`.
- A warning will be printed if the checkpoint decoder slows down training.
- Exposing the Xavier random number generator through `--weight-init-xavier-rand-type`.
- Exposing MXNet's Nesterov Accelerated Gradient, Adadelta, and Adagrad optimizers.
- A tool that initializes embedding weights with pretrained word representations, `sockeye.init_embedding`.
- Added support for Swish-1 (SiLU) activation to transformer models (Ramachandran et al. 2017: Searching for Activation Functions; Elfwing et al. 2017: Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning). Use `--transformer-activation-type swish1`.
- Added support for GELU activation to transformer models (Hendrycks and Gimpel 2016: Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units). Use `--transformer-activation-type gelu`.
- Fast decoding for transformer models. Caches keys and values of self-attention before softmax.
- Changed decoding flag `--bucket-width` to apply only to source length.
- Gradient norm clipping (`--gradient-clipping-type`) and monitoring.
- Changed `--clip-gradient` to `--gradient-clipping-threshold` for consistency.
- Sorting sentences during decoding before splitting them into batches.
- Default chunk size: The default chunk size when batching is enabled is now `batch_size * 500` during decoding, to avoid users accidentally forgetting to increase the chunk size.
- Downscaled fixed positional embeddings for CNN models.
- Renamed the `--monitor-bleu` flag to `--decode-and-evaluate` to illustrate that it computes other metrics in addition to BLEU.
- `--decode-and-evaluate-use-cpu` flag to use the CPU for decoding validation data.
- `--decode-and-evaluate-device-id` flag to use a separate GPU device for validation decoding. If not specified, the existing and still default behavior is to use the last acquired GPU for training.
- A tool that extracts specified parameters from `params.x` into a `.npz` file for downstream applications or analysis.
- Added the chrF metric (Popovic 2015: chrF: character n-gram F-score for automatic MT evaluation) to Sockeye. `sockeye.evaluate` now accepts `bleu` and `chrf` as values for `--metrics`.
- Transformer models do not ignore `--num-embed` anymore as they did silently before. As a result there is an error thrown if `--num-embed` != `--transformer-model-size`.
- Fixed the attention in upper layers (`--rnn-attention-in-upper-layers`), which was previously not passed correctly to the decoder.
- Removed RNN parameter (un-)packing and support for FusedRNNCells (removed the `--use-fused-rnns` flag). These were not used, not correctly initialized, and performed worse than regular RNN cells. Moreover, they made the code much more complex. RNN models trained with previous versions are no longer compatible.
- Removed the lexical biasing functionality (Arthur et al. '16) (removed arguments `--lexical-bias` and `--learn-lexical-bias`).
- Updated to MXNet 0.12.1, which includes an important bug fix for CPU decoding.
- Removed dependency on the sacrebleu pip package. Now imports directly from `contrib/`.
- Transformers now always use the linear output transformation after combining attention heads, even if input & output depth do not differ.
- Fixed a bug where vocabulary slice padding was defaulting to CPU context. This was affecting decoding on GPUs with very small vocabularies.
- Fixed an issue with the use of `ignore` in `CrossEntropyMetric::cross_entropy_smoothed`. This was affecting runs with the Eve optimizer and label smoothing. Thanks @kobenaxie for reporting.
- Lexicon-based target vocabulary restriction for faster decoding. New CLI for top-k lexicon creation, `sockeye.lexicon`. New translate CLI argument `--restrict-lexicon`.
- Bleu computation based on Sacrebleu.
- Fixed yet another bug with the data iterator.
- Fixed a bug with the revised data iterator not correctly appending EOS symbols for variable-length batches. This reverts part of the commit added in 1.10.1 but is now correct again.
- Fixed a bug with `max_observed_{source,target}_len` being computed on the complete data set, not only on the sentences actually added to the buckets based on `--max_seq_len`.
- `--max-num-epochs` flag to train for a maximum number of passes through the training data.
- Reduced memory footprint when creating data iterators: integer sequences are streamed from disk when being assigned to buckets.
- Updated MXNet dependency to 0.12 (w/ MKL support by default).
- Changed `--smoothed-cross-entropy-alpha` to `--label-smoothing`. Label smoothing should now require significantly less memory due to its addition to MXNet's `SoftmaxOutput` operator.
- `--weight-normalization` now applies not only to convolutional weight matrices, but to output layers of all decoders. It is also independent of weight tying.
- Transformers now use `--embed-dropout`. Before they were using `--transformer-dropout-prepost` for this.
- Transformers now scale their embedding vectors before adding fixed positional embeddings. This turns out to be crucial for effective learning.
- `.param` files now use 5 digit identifiers to reduce the risk of overflowing with many checkpoints.
- Added CUDA 9.0 requirements file.
- `--loss-normalization-type`. Added a new flag to control loss normalization. The new default is to normalize by the number of valid, non-PAD tokens instead of the batch size.
- `--weight-init-xavier-factor-type`. Added a new flag to control the Xavier factor type when `--weight-init=xavier`.
- `--embed-weight-init`. Added a new flag for initialization of embedding matrices.
- `--smoothed-cross-entropy-alpha` argument. See above.
- `--normalize-loss` argument. See above.
- Batch decoding. New options for the translate CLI: `--batch-size` and `--chunk-size`. `Translator.translate()` now accepts and returns lists of inputs and outputs.
- Exposing the MXNet KVStore through the `--kvstore` argument, potentially enabling distributed training.
- Optional smart rollback of parameters and optimizer states after updating the learning rate
if not improved for x checkpoints. New flags:
`--learning-rate-decay-param-reset`, `--learning-rate-decay-optimizer-states-reset`.
- The RNN variational dropout mask is now independent of the input (previously any zero initial state led to the first state being canceled).
- Correctly pass `self.dropout_inputs` float to `mx.sym.Dropout` in `VariationalDropoutCell`.
- Instead of truncating sentences exceeding the maximum input length, they are now translated in chunks.
- Convolutional decoder.
- Weight normalization (for CNN only so far).
- Learned positional embeddings for the transformer.
- `--attention-*` CLI params renamed to `--rnn-attention-*`.
- `--transformer-no-positional-encodings` generalized to `--transformer-positional-embedding-type`.