Update applications.md #7428

Merged · 1 commit · Jul 25, 2017
96 changes: 90 additions & 6 deletions docs/templates/applications.md
@@ -14,10 +14,12 @@ Weights are downloaded automatically when instantiating a model. They are stored
- [VGG19](#vgg19)
- [ResNet50](#resnet50)
- [InceptionV3](#inceptionv3)
- [MobileNet](#mobilenet)

All of these architectures (except Xception and MobileNet) are compatible with both TensorFlow and Theano, and upon instantiation the models will be built according to the image data format set in your Keras configuration file at `~/.keras/keras.json`. For instance, if you have set `image_data_format=channels_last`, then any model loaded from this repository will get built according to the TensorFlow data format convention, "Height-Width-Depth".

The Xception model is only available for TensorFlow, due to its reliance on `SeparableConvolution` layers.
The MobileNet model is only available for TensorFlow, due to its reliance on `DepthwiseConvolution` layers.
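The data format lives in the Keras configuration file; a typical `~/.keras/keras.json` for the TensorFlow backend looks like this (field values shown are the usual defaults):

```json
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
```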

-----

@@ -168,6 +170,7 @@ model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=T
- [VGG19](#vgg19)
- [ResNet50](#resnet50)
- [InceptionV3](#inceptionv3)
- [MobileNet](#mobilenet)

-----

@@ -176,7 +179,7 @@ model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=T


```python
keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

Xception V1 model, with weights pre-trained on ImageNet.
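As a minimal instantiation sketch (not part of the original page; `weights=None` gives random initialization and skips the ImageNet download, and the TensorFlow backend with `channels_last` is assumed):

```python
from keras.applications.xception import Xception

# Full Xception classifier: 299x299 RGB inputs, 1000-way softmax.
# Pass weights='imagenet' instead to get the pre-trained filters.
model = Xception(include_top=True, weights=None)
print(model.output_shape)  # (None, 1000)
```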
@@ -235,7 +238,7 @@ These weights are trained by ourselves and are released under the MIT license.
## VGG16

```python
keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

VGG16 model, with weights pre-trained on ImageNet.
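A minimal feature-extraction sketch using the updated signature (an illustration, not from the original page; `weights=None` skips the ImageNet download, `channels_last` assumed):

```python
from keras.applications.vgg16 import VGG16

# Headless VGG16 with global max pooling: each image maps to a
# 512-d vector (the depth of the last convolutional block).
model = VGG16(include_top=False, weights=None,
              input_shape=(224, 224, 3), pooling='max')
print(model.output_shape)  # (None, 512)
```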
@@ -290,7 +293,7 @@ These weights are ported from the ones [released by VGG at Oxford](http://www.ro


```python
keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```
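The `input_tensor` argument lets the model be built on top of an existing Keras tensor; a minimal sketch (`weights=None` skips the download, `channels_last` assumed):

```python
from keras.layers import Input
from keras.applications.vgg19 import VGG19

# Build VGG19 on an explicit input tensor instead of letting the
# model create its own Input layer.
input_tensor = Input(shape=(224, 224, 3))
model = VGG19(input_tensor=input_tensor, weights=None, include_top=True)
print(model.output_shape)  # (None, 1000)
```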


@@ -347,7 +350,7 @@ These weights are ported from the ones [released by VGG at Oxford](http://www.ro


```python
keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```


@@ -404,7 +407,7 @@ These weights are ported from the ones [released by Kaiming He](https://github.c


```python
keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

Inception V3 model, with weights pre-trained on ImageNet.
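A minimal sketch of the updated signature as a feature extractor (an illustration; `weights=None` skips the ImageNet download, `channels_last` assumed):

```python
from keras.applications.inception_v3 import InceptionV3

# Headless InceptionV3 with global average pooling: one 2048-d
# feature vector per 299x299 image.
model = InceptionV3(include_top=False, weights=None,
                    input_shape=(299, 299, 3), pooling='avg')
print(model.output_shape)  # (None, 2048)
```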
@@ -453,3 +456,84 @@ A Keras model instance.
### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).

-----

## MobileNet


```python
keras.applications.mobilenet.MobileNet(input_shape=None, alpha=1.0, depth_multiplier=1, dropout=1e-3, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
```

MobileNet model, with weights pre-trained on ImageNet.

Note that this model is currently only supported with the TensorFlow backend,
so it only works with the data format
`image_data_format='channels_last'` set in your Keras config at `~/.keras/keras.json`.
To load a saved MobileNet model via `load_model`, import the custom objects `relu6` and `DepthwiseConv2D` and pass them to the `custom_objects` parameter.

E.g.

```python
from keras.models import load_model
from keras.applications import mobilenet

model = load_model('mobilenet.h5', custom_objects={
    'relu6': mobilenet.relu6,
    'DepthwiseConv2D': mobilenet.DepthwiseConv2D})
```


The default input size for this model is 224x224.
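As a minimal instantiation sketch (an illustration; TensorFlow backend with `channels_last` assumed, and `weights=None` gives random initialization instead of downloading the ImageNet weights):

```python
from keras.applications.mobilenet import MobileNet

# Half-width MobileNet (alpha=0.5) as a feature extractor: global
# average pooling yields one int(1024 * 0.5) = 512-d vector per image.
model = MobileNet(input_shape=(224, 224, 3), alpha=0.5,
                  include_top=False, weights=None, pooling='avg')
print(model.output_shape)  # (None, 512)
```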

### Arguments

- input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` with `channels_last` data format
or `(3, 224, 224)` with `channels_first` data format).
It should have exactly 3 input channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
- alpha: controls the width of the network.
- If `alpha` < 1.0, proportionally decreases the number
of filters in each layer.
- If `alpha` > 1.0, proportionally increases the number
of filters in each layer.
- If `alpha` = 1, the default number of filters from the paper
is used at each layer.
- depth_multiplier: depth multiplier for depthwise convolution
(also called the resolution multiplier).
- dropout: dropout rate
- include_top: whether to include the fully-connected
layer at the top of the network.
- weights: `None` (random initialization) or
`imagenet` (ImageNet weights)
- input_tensor: optional Keras tensor (i.e. output of
`layers.Input()`)
to use as image input for the model.
- pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
be applied.
- classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
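To make the `alpha` width multiplier concrete, here is a pure-Python sketch of the scaling rule (an assumption for illustration: that it mirrors the `int(filters * alpha)` rounding of the reference implementation):

```python
def scale_filters(filters, alpha):
    """Scale a layer's filter count by the width multiplier alpha."""
    return int(filters * alpha)

# Filter counts of the first few conv layers at full width (alpha=1).
base_widths = [32, 64, 128, 128, 256, 256, 512]

print([scale_filters(f, 1.0) for f in base_widths])   # unchanged
print([scale_filters(f, 0.25) for f in base_widths])  # quarter-width
```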

### Returns

A Keras model instance.

### References

- [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/pdf/1704.04861.pdf)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).