are there some errors in the code? #3

Open
MoWangDaRen opened this issue Nov 13, 2024 · 1 comment

Comments

@MoWangDaRen

def forward(self, x):
    # Compute FFT to get amplitude and phase
    fft_x = torch.fft.fft2(x)
    amp = torch.real(fft_x)
    pha = torch.imag(fft_x)

    # Apply Dynamic Amplitude Scaling and Phase Continuity
    amp_scaled = self.amplitude_scaling(amp)
    pha_continuous = self.phase_continuity(pha)

    # Processing with complex convolution
    complex_input = torch.complex(amp_scaled, pha_continuous)
    complex_processed = self.complex_conv(complex_input)

    # Separate processed amplitude and phase
    processed_amp = torch.real(complex_processed)
    processed_pha = torch.imag(complex_processed)

    # Process amplitude and phase with Mamba models
    processed_amp = self.model_amp(amp_scaled)
    processed_pha = self.model_pha(pha_continuous)

    # Combine processed amplitude and phase, and apply inverse FFT
    combined_fft = torch.complex(processed_amp, processed_pha)
    output = torch.fft.ifft2(combined_fft).real

    # Apply final smoothing convolution
    output = self.smooth(output)

    # Applying HDR processing after frequency modulation
    x = self.hdr_layer(x)
    return output

In the step commented "# Process amplitude and phase with Mamba models", why is it processed_amp = self.model_amp(amp_scaled) rather than processed_amp = self.model_amp(processed_amp)? As written, processed_amp and processed_pha from the complex convolution are computed and then immediately overwritten, so self.complex_conv never affects the final output.
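
For reference, a minimal sketch of the alternative wiring the question is pointing at, where the Mamba branches consume the complex-convolution outputs (processed_amp / processed_pha) instead of the pre-convolution tensors. It reuses the module names from the snippet above (amplitude_scaling, phase_continuity, complex_conv, model_amp, model_pha, smooth); the hdr_layer step is left out for brevity, and whether this rewiring is the behavior the authors intended is exactly what this issue is asking.

import torch

# Sketch of a forward method for the FSSB module; same steps as the snippet
# above, except the Mamba models run on the complex-convolution outputs.
def forward(self, x):
    # Compute FFT and split into real and imaginary parts
    fft_x = torch.fft.fft2(x)
    amp = torch.real(fft_x)
    pha = torch.imag(fft_x)

    # Apply Dynamic Amplitude Scaling and Phase Continuity
    amp_scaled = self.amplitude_scaling(amp)
    pha_continuous = self.phase_continuity(pha)

    # Processing with complex convolution
    complex_input = torch.complex(amp_scaled, pha_continuous)
    complex_processed = self.complex_conv(complex_input)

    # Separate processed amplitude and phase
    processed_amp = torch.real(complex_processed)
    processed_pha = torch.imag(complex_processed)

    # Hypothetical change: feed the convolved tensors (not amp_scaled /
    # pha_continuous) into the Mamba models
    processed_amp = self.model_amp(processed_amp)
    processed_pha = self.model_pha(processed_pha)

    # Combine processed amplitude and phase, and apply inverse FFT
    combined_fft = torch.complex(processed_amp, processed_pha)
    output = torch.fft.ifft2(combined_fft).real

    # Apply final smoothing convolution
    output = self.smooth(output)
    return output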

@MoWangDaRen
Author

In exposemamba2d.py, class FSSB.
