MDL-CW: A Multimodal Deep Learning Framework with Cross Weights

This is the official repository of our CVPR 2016 paper. In this paper, we aim to exploit the complementary information of different modalities to learn a more informative representation of multi-modal data. You can find our paper here:
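The sketch below illustrates the cross-weights idea with the Keras functional API: each modality has its own encoder, and additional "cross" dense layers map each modality's hidden code into the other modality's hidden space so the two representations can exchange complementary information. This is only an illustrative sketch, not the code in this repository; the layer sizes, the fusion by addition, and the softmax classifier are assumptions made to keep the example self-contained.

```python
# Illustrative sketch of the cross-weights idea (not the authors' implementation).
from keras import layers, models

dim_a, dim_b, hidden, n_classes = 128, 64, 32, 10   # assumed sizes

x_a = layers.Input(shape=(dim_a,), name="modality_a")
x_b = layers.Input(shape=(dim_b,), name="modality_b")

# Modality-specific encoders.
h_a = layers.Dense(hidden, activation="relu", name="enc_a")(x_a)
h_b = layers.Dense(hidden, activation="relu", name="enc_b")(x_b)

# Cross weights: each modality's hidden code is projected into the other
# modality's hidden space, letting each representation borrow
# complementary information from the other modality.
cross_a_to_b = layers.Dense(hidden, activation="relu", name="cross_a_to_b")(h_a)
cross_b_to_a = layers.Dense(hidden, activation="relu", name="cross_b_to_a")(h_b)

# Fuse within-modality and cross-modality codes (here by addition) and
# concatenate the two fused streams into a joint representation.
fused_a = layers.Add(name="fuse_a")([h_a, cross_b_to_a])
fused_b = layers.Add(name="fuse_b")([h_b, cross_a_to_b])
joint = layers.Concatenate(name="joint")([fused_a, fused_b])

out = layers.Dense(n_classes, activation="softmax", name="classifier")(joint)
model = models.Model(inputs=[x_a, x_b], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```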

Dependencies

MDL-CW is built on the Keras framework.
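A quick check that the dependency is available (the exact Keras version required by the original code is not stated here, so treat this only as a sanity check):

```python
# Verify that Keras is installed and report its version.
import keras
print(keras.__version__)
```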

Datasets

In this paper, we use the following multimodal benchmarks:

  • PASCAL-Sentence and SUN-Attribute

We also use a toy dataset to showcase our model's abilities (a construction sketch follows the list):

  • Multi-modal MNIST
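As a rough illustration of how such a toy set can be built (the repository's actual preprocessing may differ), one common construction splits each MNIST digit into its left and right halves and treats the two halves as two modalities:

```python
# Illustrative sketch only: build a two-modality toy set by splitting each
# 28x28 MNIST digit into left and right 28x14 halves.
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype("float32") / 255.0

left = x_train[:, :, :14].reshape(len(x_train), -1)    # modality A: left half
right = x_train[:, :, 14:].reshape(len(x_train), -1)   # modality B: right half
print(left.shape, right.shape)  # (60000, 392) (60000, 392)
```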

Citation

If you use this code in your research, please consider citing our paper:

@inproceedings{rastegar2016mdl,
  title={{MDL-CW}: A multimodal deep learning framework with cross weights},
  author={Rastegar, Sarah and Soleymani, Mahdieh and Rabiee, Hamid R and Shojaee, Seyed Mohsen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2601--2609},
  year={2016}
}
