I've applied the WGAN algorithm implemented in torchsde/examples/sde_gan.py to a sine function (deterministic, with fixed initial conditions). After 30000 learning epochs the algorithm still struggles to capture the periodic structure of the signal:
The sine function was implemented as:
```python
class PeriodicSDE(torch.nn.Module):
    sde_type = 'ito'
    noise_type = 'diagonal'
```
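For context, a minimal sketch of how such a deterministic sine can be written as an SDE with zero diffusion (the drift and diffusion bodies below are illustrative, not the exact code from my run):

```python
import torch

class PeriodicSDE(torch.nn.Module):
    sde_type = 'ito'
    noise_type = 'diagonal'

    def f(self, t, y):
        # Drift dy/dt = cos(t), so the trajectory is y(t) = sin(t) + y(0).
        return torch.cos(t).expand_as(y)

    def g(self, t, y):
        # Zero diffusion: the "SDE" degenerates to a deterministic ODE.
        return torch.zeros_like(y)
```

Training paths are then sampled with torchsde.sdeint(PeriodicSDE(), y0, ts) for a fixed y0, so every sample follows the same sine curve.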
In my opinion, the poor performance is caused by vanishing/exploding gradients in the discriminator network due to weight clipping. Below are histograms of the weights for the input and output layers of the "f" function of the NCDE discriminator:
Most weights are stuck at the limits imposed by clipping, and once this happens the learning process for the discriminator effectively stops. Could this be fixed by using a gradient penalty instead of weight clipping?
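For reference, the standard WGAN-GP variant would look roughly like the sketch below (my own sketch, not code from sde_gan.py): drop the clamp_ step on the discriminator parameters and instead add a penalty on the gradient norm at points interpolated between real and generated samples. In the NCDE setting, `real` and `fake` here would be the path tensors that are fed into the discriminator.

```python
import torch

def gradient_penalty(discriminator, real, fake, gp_weight=10.0):
    # WGAN-GP: penalise deviation of the critic's gradient norm from 1
    # at points interpolated between real and generated samples.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake.detach()).requires_grad_(True)
    score = discriminator(interp)
    grads, = torch.autograd.grad(score.sum(), interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()
```

The discriminator loss would then be fake_score.mean() - real_score.mean() + gradient_penalty(...), with the weight clipping removed. One caveat is that the penalty needs a double backward through the NCDE (create_graph=True), which can make each discriminator step noticeably slower.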