# TextGAN

## Overview

Generative Adversarial Networks (GANs) struggle with generating discrete data such as text: sampling discrete tokens is non-differentiable, so gradients cannot flow from the discriminator back to the generator. This non-differentiability can be addressed with gradient estimators. The bias and variance of these estimators have been analyzed in theory, but little work has tested them on GAN-based text generation. In this work, we analyze the bias and variance of two gradient estimators, Gumbel-Softmax and REBAR, on GAN-based text generation, proposing two sets of experiments that vary sentence length and vocabulary size to probe bias and variance, respectively. We also build a novel GAN-based text generation model on top of RelGAN by replacing Gumbel-Softmax with REBAR, and evaluate the new model against RelGAN using BLEU score.
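For intuition, here is a minimal NumPy sketch of the Gumbel-Softmax trick; it is illustrative only, not this project's implementation (which operates on batched generator logits):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Relaxed (differentiable) sample from a categorical distribution.

    Perturbs the logits with Gumbel(0, 1) noise and applies a
    temperature-scaled softmax. As tau -> 0 the output approaches a
    one-hot sample at the cost of higher gradient variance; larger tau
    gives smoother, lower-variance but more biased samples.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))   # Gumbel(0, 1) draws
    y = (logits + gumbel_noise) / tau
    y = y - y.max()                      # numerical stability
    return np.exp(y) / np.exp(y).sum()

# Example: soft sample over a 5-token vocabulary
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0])
print(gumbel_softmax_sample(logits, tau=0.5))
```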

The project was originally forked from here.

## Selected Experiment Results

### Bias-Variance Analysis

Across all tested sequence lengths and vocabulary sizes, RebarGAN has lower average bias than GumbelGAN, while GumbelGAN has lower average log variance than RebarGAN.
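These quantities can be measured by comparing many Monte Carlo draws from an estimator against a ground-truth gradient. Below is a sketch of one way to do so; the function and its inputs are illustrative assumptions, not the evaluation code used for the results above:

```python
import numpy as np

def bias_and_log_variance(estimator, exact_grad, n_samples=1000):
    """Monte Carlo estimate of a gradient estimator's bias and log variance.

    `estimator()` returns one stochastic gradient sample (a 1-D array);
    `exact_grad` is the ground-truth gradient it approximates.
    """
    samples = np.stack([estimator() for _ in range(n_samples)])
    bias = np.abs(samples.mean(axis=0) - exact_grad).mean()
    log_variance = np.log(samples.var(axis=0)).mean()  # mean per-dim log variance
    return bias, log_variance

# Toy check: a noisy but unbiased estimator of grad = [1.0, 2.0]
rng = np.random.default_rng(0)
noisy = lambda: np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=2)
print(bias_and_log_variance(noisy, np.array([1.0, 2.0])))
```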





### REBAR-Based RelGAN Evaluation

We trained both RelGAN and ReLbarGAN on the Image COCO dataset with 5 MLE pretraining epochs and a batch size of 16.
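The BLEU comparison can be computed with NLTK's `corpus_bleu`; here is a self-contained sketch where the toy token lists stand in for the real COCO reference captions and generator samples:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Toy stand-ins: per-hypothesis reference lists and generator samples,
# both tokenized.
references = [[["a", "man", "rides", "a", "bike"]],
              [["a", "dog", "sits", "on", "the", "grass"]]]
hypotheses = [["a", "man", "rides", "a", "bicycle"],
              ["a", "dog", "sits", "on", "grass"]]

smooth = SmoothingFunction().method1
bleu2 = corpus_bleu(references, hypotheses,
                    weights=(0.5, 0.5), smoothing_function=smooth)
print(f"BLEU-2: {bleu2:.3f}")
```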



More details can be found in this report.