
WIP: filtered source #1134

Closed · wants to merge 10 commits

Conversation

@smartalecH (Collaborator) commented Feb 25, 2020:

Adds a filtered time-source object that "convolves" an impulse response with a user-provided source time profile (e.g., a conventional Gaussian). Useful for adjoint calculations.

Estimates the time-domain impulse response from the frequency response using a simple, brute-force least-squares fit.

Ran a simple test that computes the filtered response in both the frequency domain and the time domain (using the FilteredSource), over a simple waveguide structure. The test is currently failing.
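
The brute-force least-squares idea can be sketched in a few lines (this is an illustration, not the PR's actual implementation; all numbers below are made up): given a desired frequency response H at a set of frequencies, solve for FIR taps x minimizing ||A x − H||, where each row of A evaluates the taps' DTFT at one frequency.

```python
import numpy as np

# Hedged sketch of a brute-force least-squares fit of FIR taps to a
# desired frequency response. A[f, n] = exp(-2j*pi*f*n*dt) evaluates the
# DTFT of the taps at frequency f; lstsq then minimizes ||A x - H||.
dt = 0.1
num_taps = 100
freqs = np.linspace(0.5, 1.0, 8)  # made-up frequency grid

# made-up target: flat magnitude with linear phase (a pure delay)
H = np.exp(-2j * np.pi * freqs * (num_taps / 2) * dt)

n = np.arange(num_taps)
A = np.exp(-2j * np.pi * np.outer(freqs, n) * dt)
taps, *_ = np.linalg.lstsq(A, H, rcond=None)

print(np.linalg.norm(A @ taps - H))  # small residual => good fit
```

With more frequency points than the time-bandwidth product supports, A becomes ill-conditioned — which foreshadows the fitting problems discussed later in this thread.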

[figure: filter_ob]

[figure: filtered]

import meep as mp
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# --------------------------- #
# Design filter
# --------------------------- #

freq_min = 1/1.6
freq_max = 1/1.5
nfreq = 100
freqs = np.linspace(freq_min,freq_max,nfreq)

run_time = 1000

# --------------------------- #
# Run normal simulation
# --------------------------- #

cell = [16,8,0]
geometry = [mp.Block(mp.Vector3(mp.inf,1,mp.inf),
                     center=mp.Vector3(),
                     material=mp.Medium(epsilon=12))]
fcen = 1/1.55
gauss_src = mp.GaussianSource(frequency=fcen,fwidth=0.1/1.55)
sources = [mp.EigenModeSource(gauss_src,
                     eig_band=1,
                     size=[0,8],
                     center=[-6,0])]
pml_layers = [mp.PML(1.0)]
resolution = 10
sim = mp.Simulation(cell_size=cell,
                    boundary_layers=pml_layers,
                    geometry=geometry,
                    sources=sources,
                    resolution=resolution)
mon = sim.add_dft_fields([mp.Ez],freq_min,freq_max,nfreq,center=[0,0],size=[0,1])

sim.run(until=run_time)

fields = np.zeros((nfreq,),dtype=np.complex128)
# just store one spatial point for each freq
for f in range(nfreq):
    fields[f] = sim.get_dft_array(mon,mp.Ez,f)[0]
    print(fields[f])

# build a simple FIR bandpass filter
dt = sim.fields.dt
fs = 1/dt
freqs_scipy = freqs * 2 * np.pi / fs  # convert to rad/sample for freqz
num_taps = 320
taps = signal.firwin(num_taps,[0.8*freq_min/(fs/2), freq_max/(fs/2)], pass_zero=False,window='boxcar')
w,h = signal.freqz(taps,worN=freqs_scipy)

# frequency domain calculation
desired_fields = h * fields

# --------------------------- #
# Run filtered simulation
# --------------------------- #

filtered_src = mp.FilteredSource(fcen,freqs,h,num_taps,fs,gauss_src)

sources = [mp.EigenModeSource(filtered_src,
                     eig_band=1,
                     size=[0,8],
                     center=[-6,0])]

sim.reset_meep()
sim.change_sources(sources)

mon = sim.add_dft_fields([mp.Ez],freq_min,freq_max,nfreq,center=[0,0],size=[0,1])

sim.run(until=run_time)

fields_filtered = np.zeros((nfreq,),dtype=np.complex128)
# just store one spatial point for each freq
for f in range(nfreq):
    fields_filtered[f] = sim.get_dft_array(mon,mp.Ez,f)[0]

# --------------------------- #
# Compare results
# --------------------------- #

plt.figure()
plt.subplot(2,1,1)
plt.semilogy(freqs,np.abs(desired_fields)**2,'o',label='Frequency Domain')
plt.semilogy(freqs,np.abs(fields_filtered)**2,'--',label='Time Domain')
plt.grid()
plt.xlabel('Frequency')
plt.ylabel('Power')
plt.legend()

plt.subplot(2,1,2)
plt.plot(freqs,np.unwrap(np.angle(desired_fields)),'o',label='Frequency Domain')
plt.plot(freqs,np.unwrap(np.angle(fields_filtered)),'--',label='Time Domain')
plt.grid()
plt.xlabel('Frequency')
plt.ylabel('Angle')
plt.legend()

plt.tight_layout()
plt.savefig('filtered.png')

plt.show()

@stevengj (Collaborator):

Be careful about the normalization — the DTFT in Meep is normalized to approximate a continuous Fourier transform. I think this is defined in the manual, and is also in the Meep paper — I think it is a factor of Δt/2π or maybe sqrt(2π).
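
One way to sanity-check a normalization convention outside of Meep (illustrative only; the exact factor Meep applies should be taken from the manual, as suggested above) is to compare a Δt-weighted discrete sum against the continuous unitary Fourier transform of a Gaussian, which is known in closed form:

```python
import numpy as np

# Hedged sketch: a dt-weighted discrete sum over a sampled Gaussian,
# scaled by 1/sqrt(2*pi), approximates the continuous unitary Fourier
# transform F(w) = sigma * exp(-sigma^2 * w^2 / 2). The 1/sqrt(2*pi)
# convention here is an assumption, not necessarily Meep's.
sigma, dt, omega = 1.0, 0.01, 1.0
t = np.arange(-10, 10, dt)
f = np.exp(-t**2 / (2 * sigma**2))

numeric = dt / np.sqrt(2 * np.pi) * np.sum(f * np.exp(1j * omega * t))
analytic = sigma * np.exp(-sigma**2 * omega**2 / 2)
print(abs(numeric - analytic))  # tiny for fine dt
```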

@smartalecH (Collaborator, author):

I've figured out the scaling and opted to fit my impulse response to the product of the input waveform and the filter transfer function (in the Fourier domain, of course). I fit it using a Gaussian-kernel RBF. The fitting mechanism looks really good:

[figure: fit_results]

When I timestep using the new source, the frequencies with relatively low signal have trouble matching what I would expect:

[figure: filtered]

I'm assuming this is a consequence of timestepping and summing the DFT fields at each step?

@stevengj (Collaborator) commented Mar 4, 2020:

Note that there is a slight error you are introducing here by assuming that the Fourier transform of a Gaussian is a Gaussian. The issue is that Meep is not doing the Fourier transform, it is doing a DTFT (whose inverse is a Fourier-series calculation).

(The same thing applies to computing the Fourier transform of the input current. If you really want the "exact" result you need the DTFT — for example, the LDOS calculation does this.)

(I'm guessing you can compute the inverse DTFT of a Gaussian analytically in terms of an error function.)

Of course, as the resolution increases, the DTFT approaches the continuous Fourier transform, so maybe we can just ignore this if it converges with increasing resolution.
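
That convergence claim is easy to verify numerically. Using a unitary 1/sqrt(2π) convention (an assumption here, for illustration), the DTFT of a sampled Gaussian equals the continuous transform plus aliased copies centered at multiples of 2π/Δt, and those copies die off rapidly as Δt shrinks:

```python
import numpy as np

# Illustrative sketch (not Meep's actual normalization): the DTFT of a
# sampled Gaussian differs from its continuous Fourier transform by
# aliased copies at multiples of 2*pi/dt, which vanish as dt -> 0.
sigma, omega = 1.0, 3.0
analytic = sigma * np.exp(-sigma**2 * omega**2 / 2)  # continuous unitary FT

for dt in (2.0, 1.0, 0.5):
    t = np.arange(-20, 20, dt)
    dtft = dt / np.sqrt(2 * np.pi) * np.sum(
        np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega * t))
    print(f"dt={dt}: |DTFT - FT| = {abs(dtft - analytic):.3e}")  # shrinks with dt
```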

@smartalecH (Collaborator, author) commented Mar 5, 2020:

I've opted to estimate the Gaussian input pulse's frequency response using a DTFT, rather than the approximate Fourier Transform in the codebase.

I'm now just trying to recreate the Gaussian pulse using my filter estimation routine (which is simplified). In other words, I'm not multiplying by a filter profile. This should be an easy test case.

I get two similar issues as before:

  1. I'm only able to match the magnitude and phase responses when is_integrated=False.
  2. There's still a "noise floor".

[screenshot dated 2020-03-04]

This is the routine I use to calculate the DTFT and then estimate its time-domain impulse response:

https://github.com/smartalecH/meep/blob/eeb1796194bfd7e100847008d4c91f4bc6772689/python/source.py#L85-L117

@stevengj (Collaborator) commented Mar 5, 2020:

Set is_integrated = False for the filter source, even if the source you are trying to match is integrated.

It should be possible to design a filter source for is_integrated = True, but you'll need to include a correction factor in the spectrum you are designing since it is now for the dipole moment and not the current.

@smartalecH (Collaborator, author):

Recent changes use an analytic form of the DTFT. I also fixed the is_integrated bug. The noise floor seems to be practically eliminated. It's not perfect, but it is probably within the tolerance of the RBF fit. Increasing the resolution helps slightly.

[figure: res_sweep]

Next steps are to build more tests and clean up the code (vectorize where needed).

@smartalecH (Collaborator, author) commented Mar 12, 2020:

The above method was much less robust on other tests than anticipated. Rather than stick with the Gaussians and error functions, I determined the best RBF kernel would be a DTFT sinc (i.e., a Dirichlet kernel — not to be confused with the traditional sin(pi * x)/(pi * x)).

The nice thing about using a DTFT sinc as my basis function is that its "width" is already determined by the length of the time domain's rect window (just the run time of the simulation). Shifting the sinc's center frequency is simply a phase modulation in the time domain.

This way I can (a) avoid the ugly DTFT integrals from the Gaussian and (b) incorporate the natural turn-on and turn-off of the source. In fact, any Meep source is automatically "windowed" by a rect anyway (an issue I found hard to overcome with Gaussians).
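
For reference, the "DTFT sinc" of a length-N rect window has the closed form of a linear-phase term times the Dirichlet kernel sin(Nω/2)/sin(ω/2), which a few lines of numpy can confirm (standalone check, not the PR code):

```python
import numpy as np

# Verify the closed-form DTFT of a length-N rect window:
# sum_{n=0}^{N-1} exp(-i*w*n) = exp(-i*w*(N-1)/2) * sin(N*w/2)/sin(w/2),
# i.e. a Dirichlet kernel ("periodic sinc"), not sin(pi*x)/(pi*x).
N = 50
omega = np.linspace(0.05, 3.0, 200)  # avoid omega = 0 (0/0 in closed form)

# direct DTFT sum of the rect window
n = np.arange(N)
direct = np.exp(-1j * np.outer(omega, n)).sum(axis=1)

# closed form: linear-phase term times the Dirichlet kernel
closed = np.exp(-1j * omega * (N - 1) / 2) * np.sin(N * omega / 2) / np.sin(omega / 2)

print(np.max(np.abs(direct - closed)))  # near machine precision
```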

In summary, this new implementation is cleaner, more accurate, converges faster, and doesn't require the code to guess an optimal kernel width. Since it's still an RBF network, we don't require a uniform frequency grid from the user. This could be very useful for minimax problems.

That being said, I've found that the time-stepping process itself still induces a "noise floor" on the resulting fields. It can be mitigated by increasing the number of frequency points. Obviously, if you increase the frequency density too much without increasing the duration of your simulation, you'll start aliasing in the time domain. So there is certainly an "optimal" number of frequency points to record (it's rather simple to calculate, actually).

Here are some results illustrating the "noise floor" (sorry for the small figures):

[screenshot dated 2020-03-11]

I still find it interesting that Meep induces this noise floor on the measured fields... I'd like to know why. But from an implementation perspective, it fits the Meep philosophy to expect the user to choose the correct number of frequency points given their accuracy requirements.

Edit:
Part of the problem seems to be that the DTFT is unable to resolve frequency points that are too close together. This would be a natural consequence if we were clipping the signal too early (i.e., the dual of aliasing), but we are purposefully using temporal basis functions that terminate at the end of our simulation (a phase-modulated rect).

Here is a simple numerical experiment that illustrates this point. We try to fit a sum of sincs in the frequency domain to an adjoint profile. We then timestep (not with meep, just synthetically for speed) and take the DTFT of said time profile. If the rects aren't long enough, the results don't match, even if their corresponding sincs fit well in the frequency domain.

[figure: dtft_problem]
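
A self-contained version of that resolution limit (synthetic, no Meep, with made-up numbers): two tones spaced df = 0.01 apart merge in a DTFT taken over T = 50 (since 1/T = 0.02 > df) but separate cleanly over T = 500 (1/T = 0.002 < df), regardless of how finely the frequency axis is sampled.

```python
import numpy as np

# Hedged sketch of the DTFT frequency-resolution limit: a transform over
# a finite window of duration T cannot separate frequencies spaced much
# closer than ~1/T. We call two tones "resolved" if the DTFT magnitude
# dips between them (a Rayleigh-style criterion).
dt = 0.05
f1, f2 = 1.00, 1.01
fmid = 0.5 * (f1 + f2)

def dtft_mag(sig, t, f):
    """Magnitude of the dt-weighted DTFT of sig at frequency f."""
    return abs(dt * np.sum(sig * np.exp(-2j * np.pi * f * t)))

for T in (50.0, 500.0):
    t = np.arange(0, T, dt)
    sig = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    a, b, m = (dtft_mag(sig, t, f) for f in (f1, f2, fmid))
    resolved = m < min(a, b)   # a dip between the two peaks => resolved
    print(f"T={T}: resolved={resolved}")
```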

@stevengj (Collaborator):

I would just stick with the Gaussians, and use the fit tolerance to determine where they are cut off in the time domain.

The reason is that you in general need to run the time-domain simulation until the fields (not the sources) decay away, and having long high-frequency tails is a problem for this because high-frequency waves are not nice in FDTD (you get lots of discretization effects: PML works badly, group velocity goes to zero, …)

@stevengj (Collaborator) commented Mar 17, 2020:

Try it with k = (0.5, 0.5, 0.5), which is anti-periodic (using the same source and symmetries) — I'm guessing that the instabilities from even and odd symmetries will switch.

I think there is an implied boundary condition at the edge of the cell that we are not getting right. (If you have k=0 and even/odd symmetry, it is implicitly even/odd around the boundary; if you have k=0.5 and even/odd symmetry it is implicitly odd/even around the boundary.)

@smartalecH (Collaborator, author) commented Mar 20, 2020:

I swapped the rect basis function for a Nuttall window:

[figure: wavelet]

This window behaves like the Gaussian, but it has a closed-form DTFT with low high-frequency sidelobes.
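
The sidelobe contrast is easy to measure directly (a standalone sketch, not the PR code): compare the peak sidelobe level of a rect window against scipy's Nuttall window via a zero-padded FFT.

```python
import numpy as np
from scipy.signal import windows

# Hedged sketch comparing peak sidelobe levels of the rect and Nuttall
# windows, the property motivating the swap above: the Nuttall window
# injects far less high-frequency content than the rect.
M, pad = 128, 1 << 14

def peak_sidelobe_db(w):
    """Peak sidelobe level (dB relative to the main lobe)."""
    spec = np.abs(np.fft.rfft(w, pad))
    spec /= spec.max()
    # step past the main lobe: walk down to the first local minimum
    i = 1
    while spec[i] > spec[i + 1]:
        i += 1
    return 20 * np.log10(spec[i:].max())

print("rect   :", peak_sidelobe_db(np.ones(M)))          # about -13 dB
print("nuttall:", peak_sidelobe_db(windows.nuttall(M)))  # well below -90 dB
```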

The results actually look really good:

[figure: success]

Provided two conditions are met:

  1. The simulation runs long enough for the fields to decay (of course)
  2. The basis functions' temporal widths must be at least as long as 1/df, where df is the minimum frequency spacing. This condition is required even when the basis functions successfully fit the desired profile, as seen in the previous post.

That last condition isn't a dealbreaker... it just doesn't make sense mathematically. There might be aliasing occurring somewhere, but I don't see where.

Update: I checked the real part, and it still manages to fit really well, but the DTFT is still off:

[figure: Figure_2]

@smartalecH (Collaborator, author):

Okay, after some deep investigation and thorough testing, I've determined the issue is numerical in nature. If the basis functions (Nuttall-window DTFTs) overlap too much, the corresponding Vandermonde matrix is severely ill-conditioned. Consequently, the fitted weights are extraordinarily large.

This is totally expected with Vandermonde matrices, and it normally wouldn't be a problem as long as the fit converged and our only goal were to interpolate. However, we run through several numerical operations before the intended interpolation, so the resulting fields don't converge to what we would expect.

We can limit the widths of the specified window functions to prevent overlap. Unfortunately, this requires much more simulation time than is often needed. Alternatively, we can treat the problem as a non-convex optimization and recenter/resize the basis functions as needed.
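
The conditioning effect is easy to reproduce with a toy design matrix (Gaussians standing in for the window-DTFT basis; all numbers here are made up): as the basis functions widen and overlap, the condition number of the least-squares matrix blows up even though each fit may still look fine.

```python
import numpy as np

# Hedged sketch of the ill-conditioning described above: the design
# matrix of heavily overlapping basis functions becomes near-singular,
# so the fitted weights can grow enormous. Gaussians stand in for the
# window DTFTs; all grid values are illustrative.
freqs = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.2, 0.8, 10)

for width in (0.05, 0.15, 0.5):   # wider basis => more column overlap
    A = np.exp(-((freqs[:, None] - centers[None, :]) / width) ** 2)
    print(f"width={width}: cond(A) = {np.linalg.cond(A):.2e}")
```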

@smartalecH (Collaborator, author):

Closing, as #1167 implements the same thing within the adjoint code.

@smartalecH closed this Apr 15, 2020