This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

v1.1.0 release bump.
scttl committed Oct 30, 2015
1 parent 5271175 commit 7a56fa9
Showing 3 changed files with 59 additions and 4 deletions.
55 changes: 55 additions & 0 deletions ChangeLog
@@ -1,5 +1,60 @@
# ChangeLog

## v1.1.0 (2015-10-30):

### Modifications

* Sentiment analysis support (LSTM lookupTable based), new IMDB example
* Support for merge and branch layer stacks via LayerContainers
  * Sequential, Tree, MergeBroadcast, MergeMultiStream
* Support for freezing layer stacks
* Adagrad optimizer support
* New GPU kernels for fast compounding batch norm, conv and pooling engine updates, and a new kernel build system and flags
* Modifications for Caffe support
  * conv, pooling, P/Q updates, and dropout layer normalization are now more in line with the Caffe approach. NOTE: this breaks backwards compatibility with some strided conv/pool related models serialized using older versions of neon, as the output sizes may now be different. See the FAQ for more info.
  * serialization enhancements to make Caffe model import/export easier
  * use per-channel mean subtraction instead of a single global mean. NOTE: this breaks backwards compatibility with ImgMaster saved datasets prior to this revision. To correct, please use the included `update_dataset_cache.py` script in the util directory.
* Default training cost display during the progress bar is now calculated on a rolling-window basis rather than from the beginning of each epoch
* Separate Layer configuration and initialization steps
* YAML-based alexnet example
* Callback enhancements
  * now pass args instead of having to spell out callbacks in each example
  * Changed the validation callback to a loss callback; validation_frequency is now evaluation_frequency
  * Generic metric callback
* Various bug fixes
  * non-contiguous array get for GPUTensors
  * 1D slicing returns 2D matrices
  * bin/neon serialization fixes for RNNs
  * 3D conv fixes for fprop, bprop
  * batch norm inference fix
  * bias layer size fix
* Documentation updates and improvements

## v1.0.0 (2015-09-15):

### Modifications

Primarily bug fixes:

* Ensure root logging handler setup [#82]
* C++ utility for CUDA compatibility checking [#83]
* Add predict function to models [#86]
* Fix bug in learning rate schedule impacting deserialization
* Speed up batch norm computation
* Average gradients in OpTree, fix tests
* Use inference mode for fprop during validation
* Add top-k misclassification metric
* Simplify maxas install, make vis requirements optional, doc updates.

## v1.0.0rc1 (2015-09-08):

### Modifications
6 changes: 3 additions & 3 deletions doc/source/faq.rst
@@ -90,10 +90,10 @@ both ends of the dimension instead of just one end.
I'm getting an error loading a serialized model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Starting with release 1.0.1 there was a change to the way that the
+ Starting with release 1.1.0 there was a change to the way that the
output sizes of convolution and pooling layers are calculated. This change
could make this new version incompatible with models saved using previous
- versions of neon. With release 1.0.1, the calculation of the output size
+ versions of neon. With release 1.1.0, the calculation of the output size
of convolution and pooling layers stops before the kernel runs over the edge
of the input data plus the padding. The formula for this is:
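
The exact formula appears in the part of the FAQ not included in this hunk; as an illustrative sketch only (hypothetical helper name, not neon's API), a floor-based calculation that stops before the kernel runs over the padded edge of the input looks like::

    def conv_pool_output_size(input_size, kernel_size, padding, stride):
        # Floor division ensures the kernel window never runs past input + padding
        return (input_size + 2 * padding - kernel_size) // stride + 1

    # For example: a 224-wide input, 7-wide kernel, padding 3, stride 2 -> 112 outputs
    assert conv_pool_output_size(224, 7, 3, 2) == 112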

@@ -111,7 +111,7 @@ neon.
I'm getting an error when I try to use ImgMaster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- In release 1.0.1, the format of the mean image saved in the
+ In release 1.1.0, the format of the mean image saved in the
`dataset_cache.pkl` file has changed. Previous versions of neon were storing
a mean value for each pixel of the input image whereas in the new version only
a single mean value is stored for each input channel. So now, for an RGB
image there is just one mean value per channel.
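
As a minimal sketch of the per-pixel versus per-channel distinction (plain NumPy, with illustrative shapes; neon's actual `dataset_cache.pkl` layout is not shown in this excerpt)::

    import numpy as np

    # A batch of RGB images laid out as (N, C, H, W)
    batch = np.random.rand(128, 3, 32, 32)

    per_pixel_mean = batch.mean(axis=0)            # older caches: one mean per pixel, shape (3, 32, 32)
    per_channel_mean = batch.mean(axis=(0, 2, 3))  # new caches: one mean per channel, shape (3,)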
2 changes: 1 addition & 1 deletion setup.py
@@ -19,7 +19,7 @@
import subprocess

# Define version information
- VERSION = '1.0.0rc1'
+ VERSION = '1.1.0'
FULLVERSION = VERSION
write_version = True

