SELU_Keras_Tutorial

Self-Normalizing Neural Networks

The Self-Normalizing Neural Networks paper (Klambauer et al., 2017) introduces methods to significantly improve feed-forward network performance using three design choices (a minimal Keras sketch of all three follows the list):

1. A new activation function called SELU (scaled exponential linear unit).
2. A new dropout variant called AlphaDropout, which preserves SELU's self-normalizing property (a dropout rate around 0.1 is a reasonable default).
3. A matching weight initialization, lecun_normal.
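
Below is a minimal sketch of how the three pieces fit together in Keras. The layer widths, input shape, dropout rate, and optimizer are illustrative assumptions, not the exact settings used in this repo's notebooks.

```python
# Minimal sketch of a self-normalizing MLP (SELU + AlphaDropout + lecun_normal).
# Layer widths, input shape, and dropout rate are illustrative assumptions.
from tensorflow.keras import layers, models

def build_snn_mlp(input_dim=784, num_classes=10):
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        # SELU activation paired with lecun_normal init keeps activations
        # close to zero mean / unit variance across layers.
        layers.Dense(512, activation='selu', kernel_initializer='lecun_normal'),
        # AlphaDropout preserves the self-normalizing property; ~0.1 is a
        # common default rate.
        layers.AlphaDropout(0.1),
        layers.Dense(512, activation='selu', kernel_initializer='lecun_normal'),
        layers.AlphaDropout(0.1),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```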

SNNs with more than 4 layers outperform both random forests and SVMs on the paper's benchmarks. SELUs with the fixed constants α = α01 and λ = λ01, combined with the proposed dropout technique and initialization strategy, appear to outperform traditional ReLU-based feed-forward networks.
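
For reference, here is a plain NumPy sketch of the SELU function with the fixed constants α01 and λ01 from the paper (the constant values are from the paper; everything else is just an illustrative implementation):

```python
import numpy as np

# Fixed constants from the paper (alpha_01, lambda_01).
ALPHA_01 = 1.6732632423543772
LAMBDA_01 = 1.0507009873554805

def selu(x):
    # SELU(x) = lambda * x                     for x > 0
    #           lambda * alpha * (exp(x) - 1)  for x <= 0
    return LAMBDA_01 * np.where(x > 0, x, ALPHA_01 * (np.exp(x) - 1.0))
```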

NOTE: In the few experiments I have conducted, I have also observed a small bump in scores just from switching the dense fully connected layers to SELU/SNN.

[Image: SELU HOT]

Step #1

Take a look at the benchmark comparison to understand the SELU-based MLP:

https://github.com/bigsnarfdude/SELU_Keras_Tutorial/blob/master/Basic_MLP_combined_comparison.ipynb

[Image: SELU COMPARISON]

Step #2

Here is a comparison of the final FC layer using ReLU vs SELU:

https://github.com/bigsnarfdude/SELU_Keras_Tutorial/blob/master/FC_Layer_Comparison.ipynb
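
The sketch below shows the kind of head swap that comparison is about: keeping the rest of the network fixed and changing only the final fully connected block. Layer sizes and dropout rates are hypothetical, not the notebook's exact code.

```python
from tensorflow.keras import layers

def fc_head(features, units=256, num_classes=10, use_selu=False):
    """Final fully connected block; swap ReLU for SELU (plus matching
    lecun_normal init and AlphaDropout) to compare the two variants."""
    if use_selu:
        x = layers.Dense(units, activation='selu',
                         kernel_initializer='lecun_normal')(features)
        x = layers.AlphaDropout(0.1)(x)
    else:
        x = layers.Dense(units, activation='relu')(features)
        x = layers.Dropout(0.5)(x)
    return layers.Dense(num_classes, activation='softmax')(x)
```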
