This repository has been archived by the owner on Nov 3, 2022. It is now read-only.

Migration of keras-team/keras-contrib to tensorflow/addon #519

Open

gabrieldemarmiesse opened this issue Sep 21, 2019 · 27 comments

@gabrieldemarmiesse
Contributor

Hello all,

Keras-contrib hosts many good features, and we'd like to keep them. Since we're focusing on TensorFlow and will deprecate the multi-backend Keras, we need to think about what will happen to the other Keras-related projects.

The goals of keras-contrib and tensorflow/addons are very similar, but tensorflow/addons benefits from better support (read: workforce) from Google, with many processes already in place to make maintenance easier.

As such, we'll deprecate keras-contrib. People who would like to keep a feature and are ready to maintain it should open a pull request against tensorflow/addons.

We don't have a timeline yet for when we'll move this repository out of github.com/keras-team, but it will happen eventually.

If you have any questions concerning the migration or if you need help with it, feel free to comment below.

@lingvisa

Hi, I would like to be able to do `from keras_contrib.layers import CRF`, as before.

@gabrieldemarmiesse
Contributor Author

The CRF is available in TF Addons. See tensorflow/addons#314
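For anyone who needs it in the meantime, the function-level CRF ops are already exposed under tfa.text. A minimal sketch of computing a training loss and decoding, assuming tensorflow-addons is installed (shapes and variable names are illustrative):

```python
import tensorflow as tf
import tensorflow_addons as tfa

batch, max_len, num_tags = 2, 5, 4

# Emission scores would come from an upstream model; random here for illustration.
potentials = tf.random.normal([batch, max_len, num_tags])
tags = tf.random.uniform([batch, max_len], maxval=num_tags, dtype=tf.int32)
lengths = tf.constant([5, 3])

# Log-likelihood of the gold tag sequences; also returns the transition
# matrix (created internally when transition_params is None).
log_likelihood, transition_params = tfa.text.crf_log_likelihood(
    potentials, tags, lengths)
loss = -tf.reduce_mean(log_likelihood)

# Viterbi decoding of the most likely tag sequence per example.
decoded_tags, best_score = tfa.text.crf_decode(
    potentials, transition_params, lengths)
```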

@lingvisa

Hi @gabrieldemarmiesse, why isn't there a CRF layer implementation in tfa.text? The original keras-contrib has one, as you can see: https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/layers/crf.py

@seanpmorgan

Hi @lingvisa, we currently have a CRF layer under review:
https://github.com/tensorflow/addons/pull/377/files. But as I mentioned in the other thread, the implementation under review seems overly complicated because it uses ported tf.contrib CRF functions. I've made a note in that PR to consider using the keras-contrib version.

Separately, TF Addons is happy to review any keras-contrib functionality that has proved useful and well tested, as it will most likely align with our API formats. We're looking forward to merging a lot of the great functionality of this repo, along with the ability to write custom TF ops/kernels in our library.

@lingvisa

@gabrieldemarmiesse That would be great if the layer can be added. Thanks.

@gabrieldemarmiesse
Contributor Author

@lingvisa We would need to find a maintainer for the CRF layer in TF Addons before adding it. I'm too busy with Keras to do that at the moment. Hopefully, someone else will volunteer.

@RaphaelMeudec
Contributor

@gabrieldemarmiesse Is there a list of features in keras-contrib that should be moved to tensorflow/addons? I can dedicate some time to migrating things over the next few weeks.

@gabrieldemarmiesse
Contributor Author

I think the problem is that none of us knows exactly which functions/classes are in keras-contrib and which are in tensorflow-addons. I haven't had the time to look into it. If we had the two lists, we could compare them easily and then know what to migrate. Sadly, that's not super fun to do :(

@seanpmorgan

Generated components using the __init__ files of each repo:

TensorFlow Addons:

activations.gelu
activations.hardshrink
activations.lisht
activations.softshrink
activations.sparsemax
activations.tanhshrink
activations.rrelu

image.dense_image_warp
image.interpolate_bilinear
image.euclidean_dist_transform
image.adjust_hsv_in_yiq
image.random_hsv_in_yiq
image.mean_filter2d
image.median_filter2d
image.rotate
image.transform
image.sparse_image_warp
image.interpolate_spline
image.translate
image.connected_components

layers.GeLU
layers.Maxout
layers.GroupNormalization
layers.InstanceNormalization
layers.CorrelationCost
layers.PoincareNormalize
layers.Sparsemax
layers.WeightNormalization

losses.ContrastiveLoss
losses.SigmoidFocalCrossEntropy
losses.LiftedStructLoss
losses.NpairsLoss
losses.NpairsMultilabelLoss
losses.SparsemaxLoss
losses.TripletSemiHardLoss

metrics.CohenKappa
metrics.F1Score
metrics.FBetaScore
metrics.RSquare
metrics.MultiLabelConfusionMatrix

optimizers.ConditionalGradient
optimizers.LazyAdam
optimizers.Lookahead
optimizers.MovingAverage
optimizers.RectifiedAdam
optimizers.AdamW
optimizers.SGDW

rnn.LayerNormLSTMCell
rnn.NASCell

seq2seq.AttentionMechanism
seq2seq.AttentionWrapper
seq2seq.AttentionWrapperState
seq2seq.BahdanauAttention
seq2seq.BahdanauMonotonicAttention
seq2seq.LuongAttention
seq2seq.LuongMonotonicAttention
seq2seq.hardmax
seq2seq.monotonic_attention
seq2seq.safe_cumprod
seq2seq.BasicDecoder
seq2seq.BasicDecoderOutput
seq2seq.BeamSearchDecoder
seq2seq.BeamSearchDecoderOutput
seq2seq.BeamSearchDecoderState
seq2seq.FinalBeamSearchDecoderOutput
seq2seq.gather_tree
seq2seq.gather_tree_from_array
seq2seq.tile_batch
seq2seq.BaseDecoder
seq2seq.Decoder
seq2seq.dynamic_decode
seq2seq.SequenceLoss
seq2seq.sequence_loss
seq2seq.CustomSampler
seq2seq.GreedyEmbeddingSampler
seq2seq.InferenceSampler
seq2seq.SampleEmbeddingSampler
seq2seq.Sampler
seq2seq.ScheduledEmbeddingTrainingSampler
seq2seq.ScheduledOutputTrainingSampler
seq2seq.TrainingSampler

text.crf_binary_score
text.crf_decode
text.crf_decode_backward
text.crf_decode_forward
text.crf_forward
text.crf_log_likelihood
text.crf_log_norm
text.crf_multitag_sequence_score
text.crf_sequence_score
text.crf_unary_score
text.viterbi_decode
text.skip_gram_sample
text.skip_gram_sample_with_text_vocab
text.parse_time

Keras-Contrib

activations.squash

applications.DenseNet
applications.ResNet
applications.ResNet18
applications.ResNet34
applications.ResNet50
applications.ResNet101
applications.ResNet152
applications.WideResidualNetwork
applications.NASNet
applications.NASNetLarge
applications.NASNetMobile

callbacks.SnapshotCallbackBuilder
callbacks.SnapshotModelCheckpoint
callbacks.DeadReluDetector
callbacks.CyclicLR
callbacks.TensorBoardGrouped

constraints.Clip

initializers.ConvolutionAware

layers.PELU
layers.SReLU
layers.Swish
layers.SineReLU
layers.CosineConv2D
layers.CosineConvolution2D
layers.SubPixelUpscaling
layers.CosineDense
layers.CRF
layers.Capsule
layers.InstanceNormalization
layers.GroupNormalization

losses.DSSIMObjective
losses.jaccard_distance
losses.crf_loss
losses.crf_nll

metrics.crf_accuracy
metrics.crf_marginal_accuracy
metrics.crf_viterbi_accuracy

optimizers.FTML
optimizers.Padam
optimizers.Yogi
optimizers.LARS
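
In case it's useful to reproduce or update this, here is a minimal sketch of one way such a list can be generated by walking each package's submodules (the helper is hypothetical and assumes both packages are importable):

```python
import importlib
import pkgutil

def list_components(package_name):
    """Print 'submodule.Name' for every public name each submodule exports
    (hypothetical helper for illustration; some submodules may fail to import)."""
    package = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(package.__path__):
        submodule = importlib.import_module(f"{package_name}.{info.name}")
        # Prefer the explicit export list; fall back to public attributes.
        exported = getattr(submodule, "__all__", None) or [
            name for name in dir(submodule) if not name.startswith("_")
        ]
        for name in exported:
            print(f"{info.name}.{name}")

list_components("tensorflow_addons")
list_components("keras_contrib")
```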

@gabrieldemarmiesse
Contributor Author

Thanks a bunch @seanpmorgan. I'll go over each of the items and then make a list proposing what is still relevant, since there are a lot of classes in keras-contrib that are not relevant anymore (e.g. TensorBoardGrouped).

@gabrieldemarmiesse
Contributor Author

Can be migrated and have maintainers

callbacks.SnapshotCallbackBuilder      -> @titu1994

activations.squash                     -> @SriRangaTarun

layers.Capsule                         -> @SriRangaTarun
layers.SubPixelUpscaling               -> @titu1994
layers.Swish                           -> @gabrieldemarmiesse
layers.SineReLU (can be an activation) -> @wilderrodrigues

optimizers.Padam                       -> @MFreidank
optimizers.Yogi                        -> @MarcoAndreaBuchmann

Can be migrated but no maintainers

callbacks.DeadReluDetector
callbacks.CyclicLR

initializers.ConvolutionAware

layers.PELU
layers.SReLU
layers.CosineConv2D
layers.CosineDense
layers.CRF

optimizers.FTML
optimizers.LARS

losses.DSSIMObjective
losses.jaccard_distance
losses.crf_loss
losses.crf_nll

metrics.crf_accuracy
metrics.crf_marginal_accuracy
metrics.crf_viterbi_accuracy

@seanpmorgan do you want to make a selection? What do we do about features without maintainers?

@RaphaelMeudec
Contributor

@gabrieldemarmiesse imo, CyclicLR shouldn't be migrated as a callback, but rather as a LearningRateScheduler. It could also be split into different schedulers to avoid the mode argument (see the sketch below). What's your opinion on this?
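For illustration, a minimal sketch of what one split-out scheduler might look like, assuming the triangular mode becomes its own tf.keras.optimizers.schedules.LearningRateSchedule (the class name and defaults are hypothetical, not a final API):

```python
import tensorflow as tf

class TriangularCyclicalLearningRate(
        tf.keras.optimizers.schedules.LearningRateSchedule):
    """Triangular cyclic LR: ramps linearly between two bounds each cycle
    (one scheduler per mode instead of a single `mode` argument)."""

    def __init__(self, initial_learning_rate, maximal_learning_rate, step_size):
        self.initial_learning_rate = initial_learning_rate
        self.maximal_learning_rate = maximal_learning_rate
        self.step_size = step_size  # half a cycle, in optimizer steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        cycle = tf.floor(1.0 + step / (2.0 * self.step_size))
        x = tf.abs(step / self.step_size - 2.0 * cycle + 1.0)
        return self.initial_learning_rate + (
            self.maximal_learning_rate - self.initial_learning_rate
        ) * tf.maximum(0.0, 1.0 - x)

# A schedule plugs straight into any optimizer's learning_rate argument.
optimizer = tf.keras.optimizers.SGD(
    learning_rate=TriangularCyclicalLearningRate(1e-4, 1e-2, step_size=2000))
```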

@gabrieldemarmiesse
Contributor Author

Yup, I agree with the splitting. Seems good.

@seanpmorgan

@seanpmorgan do you want to make a selection? What do we do about features without maintainers?

It would probably be best to create an issue for each maintained piece so the community can comment. In general, though, those all look pretty useful, so if the maintainers are willing to maintain them in TF Addons, I see no reason we can't move them.

For the ones without maintainers, maybe we can open an issue to see if anyone is interested in maintaining them, provided there is a community need for them.

@bhack

bhack commented Nov 7, 2019

Doesn't it make sense to freeze the main Keras repo and evaluate migrating Keras issues and decoupling them from the tf.keras tickets on the TF repo before migrating contrib to addons?
See also the governance thread at keras-team/governance#2

@seanpmorgan

@bhack I agree that the Keras repo needs to decouple where tickets go... but I'm not sure that's preventing the existing functionality in contrib from being migrated? Could you explain that point further?

@bhack

bhack commented Nov 8, 2019

Take activations.gelu, which appears in the list above on the addons side. It was in addons, and now it is entering Keras, but through a TensorFlow (tf.keras) PR; meanwhile there are open issues in Keras, rejected PRs in Keras, comments suggesting to forward things to contrib, and so on.
So it is really a mess and very confusing for newcomers.
Just as an example: tensorflow/addons#550 (comment)

@bhack

bhack commented Dec 11, 2019

We are still getting triplicate bug reports, with the risk of cases becoming quadruplicate:
keras-team/keras#13443
tensorflow/addons#179
tensorflow/tensorflow#24387

@rajveershringi

Hi, I am not able to import the tf version of keras_contrib:
from tf_keras_contrib.layers import CRF
I have already done as described in the repo:
Install keras_contrib for tensorflow.keras

git clone https://www.github.com/keras-team/keras-contrib.git
cd keras-contrib
python convert_to_tf_keras.py
USE_TF_KERAS=1 python setup.py install

I can see the module in my pip list as
tf-keras-contrib 2.0.8

I have restarted my jupyter-notebook server and it is still not able to pick up the module. Any help would be great.
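Not an official fix, but a quick diagnostic worth trying: check that the notebook's interpreter can actually see the package (the package name tf_keras_contrib here is taken from the import above, not verified against the install):

```python
import importlib.util
import sys

# Jupyter sometimes runs a different interpreter than the one pip/setup.py
# installed into; compare this path against `which python` in your shell.
print(sys.executable)

spec = importlib.util.find_spec("tf_keras_contrib")
print(spec.origin if spec else "tf_keras_contrib not found on sys.path")
```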

@soumayan

@rajveershringi did you solve this problem?

@rajveershringi

@soumayan unfortunately not.

@soumayan

@rajveershringi did you follow this link?
https://libraries.io/pypi/tf-crf-layer
It mentions there that some versions of tensorflow will not support this.

@wilderrodrigues
Contributor

Hi,

I will migrate the SineReLU activation function this week.

Cheers,
Wilder
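
For context, a minimal sketch of the function being migrated, following the keras-contrib layer's forward pass (the epsilon default here is illustrative):

```python
import tensorflow as tf

def sine_relu(x, epsilon=0.0025):
    """SineReLU: identity for positive inputs, a small sin/cos wave for
    non-positive inputs (sketch of the keras-contrib forward pass)."""
    x = tf.convert_to_tensor(x)
    return tf.where(x > 0, x, epsilon * (tf.sin(x) - tf.cos(x)))
```

As a plain tensor-in/tensor-out function, it would slot in next to the existing tfa.activations entries rather than as a layer.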

@bhack

bhack commented Jun 23, 2021

Will we have a keras-contrib repo again, or something similar? /cc @qlzh727 @fchollet

@qlzh727
Member

qlzh727 commented Jun 23, 2021

I think tf/addons is probably my preference, to reduce the management overhead. A lot of the components in tf/addons are already based on Keras classes such as layers. @fchollet, what's your thought?

@bhack

bhack commented Jun 23, 2021

We need to think carefully about this, as we now also have a threshold in Addons for accepting new contributions (>50 citations).

So it is no longer a matter of "whatever is too experimental goes in addons":

https://github.com/tensorflow/addons/blob/master/CONTRIBUTING.md#requirements-for-new-contributions-to-the-repository

As we depend heavily on Keras, we could end up with overlaps/duplicates of components that you maintain in the new standalone Keras repo or in keras-cv/keras-nlp (even if those are currently just model-garden sub-mirrors).

As for custom ops, I hope we can reduce the number of custom ops we maintain, because they carry a fairly large maintainership overhead and restricted hardware compatibility.

/cc @seanpmorgan @yarri-oss @thealamkin

@yinciki

yinciki commented Apr 24, 2022

Hi, any plans to migrate FTML?
