WIP: filtered source #1134
Conversation
Be careful about the normalization — the DTFT in Meep is normalized to approximate a continuous Fourier transform. I think this is defined in the manual, and is also in the Meep paper — I think it is a factor of Δt/2π or maybe sqrt(2π).
I've figured out the scaling and opted to fit my impulse response to the product of the input waveform and the filter transfer function (in the Fourier domain, of course). I fit it using a Gaussian RBF kernel. The fitting mechanism looks really good. When I timestep using the new source, however, the frequencies with relatively low signal have trouble matching what I would expect. I'm assuming this is a consequence of timestepping and summing the DFT fields at each step?
Note that there is a slight error you are introducing here by assuming that the Fourier transform of a Gaussian is a Gaussian. The issue is that Meep is not doing the Fourier transform, it is doing a DTFT (whose inverse is a Fourier-series calculation). (The same thing applies to computing the Fourier transform of the input current. If you really want the "exact" result you need the DTFT — for example, the LDOS calculation does this.) (I'm guessing you can compute the inverse DTFT of a Gaussian analytically in terms of an error function.) Of course, as the resolution increases, the DTFT approaches the continuous Fourier transform, so maybe we can just ignore this if it converges with increasing resolution.
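The DTFT-vs-Fourier-transform distinction above is easy to illustrate. Here is a minimal numpy sketch (all parameters are illustrative, not from the PR): the dt-scaled DTFT of a sampled Gaussian converges to its continuous Fourier transform as the time step shrinks.

```python
import numpy as np

def dtft(samples, dt, freqs):
    """Brute-force DTFT, scaled by dt so that it approximates the
    continuous Fourier transform of the sampled signal as dt -> 0."""
    n = np.arange(len(samples))
    return dt * (np.exp(-2j * np.pi * freqs[:, None] * n[None, :] * dt) @ samples)

# hypothetical Gaussian pulse: width w, center t0, total simulated time T
w, t0, T = 0.1, 5.0, 10.0
freqs = np.linspace(-2.0, 2.0, 101)
# |continuous Fourier transform| of exp(-(t - t0)^2 / (2 w^2));
# the time shift t0 only contributes phase, so it drops out of the magnitude
exact = w * np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * w**2 * freqs**2)

errs = []
for dt in (0.2, 0.1, 0.05):
    t = np.arange(0.0, T, dt)
    g = np.exp(-((t - t0) ** 2) / (2 * w**2))
    errs.append(np.max(np.abs(np.abs(dtft(g, dt, freqs)) - exact)))
# the discrepancy shrinks rapidly as dt decreases
```

The residual error at coarse dt is exactly the aliasing the comment warns about: the DTFT is periodic in 1/dt, so spectral images leak back into the band of interest until the sampling rate is high enough.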
I've opted to estimate the Gaussian input pulse's frequency response using a DTFT, rather than the approximate Fourier transform in the codebase. I'm now just trying to recreate the Gaussian pulse using my (simplified) filter-estimation routine; in other words, I'm not multiplying by a filter profile. This should be an easy test case. I run into the same two issues as before:
This is the routine I use to calculate the DTFT and then estimate its time-domain impulse response:
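The routine itself isn't reproduced in the thread, but the idea can be sketched as a plain least-squares problem in numpy (a hypothetical stand-in, not the PR's code): sample the target DTFT on a frequency grid and solve for FIR taps whose DTFT matches it.

```python
import numpy as np

def fit_impulse_response(target, freqs, dt, num_taps):
    """Least-squares FIR fit: find taps h[n] whose DTFT matches `target`
    on the given frequency grid (a stand-in for the PR's routine)."""
    n = np.arange(num_taps)
    A = np.exp(-2j * np.pi * freqs[:, None] * n[None, :] * dt)  # DTFT matrix
    h, *_ = np.linalg.lstsq(A, target, rcond=None)
    return h

# hypothetical target: the DTFT of a sampled Gaussian pulse
dt, w, t0 = 0.05, 0.3, 3.0
t = np.arange(0.0, 6.0, dt)
g = np.exp(-((t - t0) ** 2) / (2 * w**2))
freqs = np.linspace(0.1, 2.0, 200)
target = np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ g

h = fit_impulse_response(target, freqs, dt, num_taps=len(t))
A = np.exp(-2j * np.pi * freqs[:, None] * np.arange(len(h))[None, :] * dt)
resid = np.max(np.abs(A @ h - target)) / np.max(np.abs(target))
# resid is tiny here because an exact solution (the Gaussian itself) exists
```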
It should be possible to design a filter source for
Recent changes use an analytic form of the DTFT. Next steps are to build more tests and clean up the code (vectorizing where needed).
The above method was much less robust to other tests than anticipated. Rather than stick with the Gaussians and error functions, I determined the best RBF kernel would be a DTFT sinc (not to be confused with the traditional sin(πx)/(πx)). The nice thing about using a DTFT sinc as my basis function is that its "width" is already determined by the time domain's rect window length (just the duration of the simulation), and shifting the sinc's center frequency is simply a phase modulation in the time domain. This way I can (a) avoid the ugly DTFT integrals from the Gaussian and (b) incorporate the natural turn-on and turn-off of the source. In fact, any meep source is automatically "windowed" by a rect anyway (an issue I found hard to overcome with Gaussians).

In summary, this new implementation is cleaner, more accurate, converges faster, and doesn't require the code to guess an optimal kernel width. Since it's still an RBF network, we don't require a uniform frequency grid from the user, which could be very useful for minimax problems.

That being said, I've found that the time-stepping process itself still induces a "noise floor" on the resulting fields. It can be mitigated by increasing the number of frequency points. Obviously, if you increase the frequency density too much without increasing the length of your simulation domain, you'll start aliasing in the time domain, so there is certainly an "optimal" number of frequency points to record (it's rather simple to calculate, actually). Here are some results illustrating the "noise floor" (sorry for the small figures). I still find it interesting that meep induces this noise floor on the measured fields... I'd like to know why this is. But from an implementation perspective, it fits with the meep philosophy to expect the user to choose the correct number of frequency points given their accuracy requirements.

Edit: Here is a simple numerical experiment that illustrates this point. We try to fit a sum of sincs in the frequency domain to an adjoint profile. We then timestep (not with meep, just synthetically, for speed) and take the DTFT of that time profile. If the rects aren't long enough, the results don't match, even if their corresponding sincs fit well in the frequency domain.
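The two key claims about the DTFT sinc can be checked directly. A sketch (illustrative parameters, not the PR's code): the brute-force DTFT of a finite rect, phase-modulated in time, equals the closed-form aliased-sinc ("DTFT sinc") shifted to the modulation frequency.

```python
import numpy as np

def dtft_sinc(f, fc, N, dt):
    """Closed-form DTFT of an N-sample rect modulated to center frequency
    fc: the aliased-sinc ("DTFT sinc") basis function described above."""
    x = np.pi * (f - fc) * dt
    num, den = np.sin(N * x), np.sin(x)
    safe = np.abs(den) > 1e-12            # dodge the 0/0 at x = 0, limit is N
    mag = np.where(safe, num / np.where(safe, den, 1.0), float(N))
    return mag * np.exp(-1j * (N - 1) * x)

# hypothetical rect length, time step, and center frequency
N, dt, fc = 400, 0.05, 1.3
f = np.linspace(0.5, 2.0, 301)
n = np.arange(N)
# shifting the center frequency = phase-modulating the rect in time
sig = np.exp(2j * np.pi * fc * n * dt)
brute = np.exp(-2j * np.pi * f[:, None] * n[None, :] * dt) @ sig
err = np.max(np.abs(brute - dtft_sinc(f, fc, N, dt)))
```

The closed form is just the geometric-series sum of the modulated rect, so the "width" of each basis function is pinned to the simulation length N·dt, as described above.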
I would just stick with the gaussians, and just use the fit tolerance to determine where they are cut off in time domain. The reason is that you in general need to run the time-domain simulation until the fields (not the sources) decay away, and having long high-frequency tails is a problem for this because high-frequency waves are not nice in FDTD (you get lots of discretization effects: PML works badly, group velocity goes to zero, …)
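The cutoff suggested here has a closed form. A small sketch (hypothetical width, center, and tolerance) solving the Gaussian envelope equation for the truncation time:

```python
import numpy as np

# hypothetical Gaussian source: width w, center t0; cut it off once the
# envelope exp(-(t - t0)^2 / (2 w^2)) falls below the fit tolerance `tol`
w, t0, tol = 0.3, 3.0, 1e-6
t_cut = t0 + w * np.sqrt(-2.0 * np.log(tol))
# sanity check: the envelope at t_cut equals tol by construction
env = np.exp(-((t_cut - t0) ** 2) / (2 * w**2))
```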
Try it with k = (0.5, 0.5, 0.5), which is anti-periodic (using the same source and symmetries) — I'm guessing that the instabilities from even and odd symmetries will switch. I think there is an implied boundary condition at the edge of the cell that we are not getting right. (If you have k=0 and even/odd symmetry, it is implicitly even/odd around the boundary; if you have k=0.5 and even/odd symmetry it is implicitly odd/even around the boundary.)
I swapped the rect basis function with a Nuttall window. This window behaves like the Gaussian, but has a closed-form DTFT with low high-frequency sidelobes. The results actually look really good, provided two conditions are met:
That last condition isn't a dealbreaker... it just doesn't make sense mathematically. There might be aliasing occurring somewhere, but I don't see where. Update: I checked the real part and it still manages to fit really well, but the DTFT is still off:
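The closed-form DTFT mentioned above follows from writing the Nuttall window as a sum of cosines: each cosine splits into two complex exponentials, and each exponential contributes a frequency-shifted Dirichlet kernel. A sketch verifying this against a brute-force DTFT (the coefficients are one common 4-term variant and may differ from the PR's):

```python
import numpy as np

# one common 4-term Nuttall coefficient set (the PR's may differ)
A = np.array([0.355768, 0.487396, 0.144232, 0.012604])

def dirichlet(w, N):
    """D_N(w) = sum_{n=0}^{N-1} e^{-i w n}: the DTFT of an N-sample rect."""
    safe = np.abs(np.sin(w / 2)) > 1e-12   # limit at w = 0 is N
    mag = np.where(safe, np.sin(N * w / 2) / np.where(safe, np.sin(w / 2), 1.0), float(N))
    return mag * np.exp(-1j * w * (N - 1) / 2)

def nuttall_dtft(w, N):
    """Closed-form DTFT of the Nuttall window: each cosine term contributes
    a pair of frequency-shifted Dirichlet kernels."""
    out = np.zeros(len(w), dtype=complex)
    for k, a in enumerate(A):
        shift = 2 * np.pi * k / (N - 1)
        out += (-1) ** k * (a / 2) * (dirichlet(w - shift, N) + dirichlet(w + shift, N))
    return out

N = 256
n = np.arange(N)
win = sum((-1) ** k * a * np.cos(2 * np.pi * k * n / (N - 1)) for k, a in enumerate(A))
w = np.linspace(-1.0, 1.0, 401)  # digital frequency, rad/sample
brute = np.exp(-1j * w[:, None] * n[None, :]) @ win
err = np.max(np.abs(brute - nuttall_dtft(w, N)))
```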
Okay, after some deep investigation and thorough testing, I've determined the issue is numerical in nature. If the basis functions (Nuttall window DTFTs) overlap too much, the corresponding Vandermonde matrix is severely ill-conditioned. Consequently, the fitted coefficients are extraordinarily large. This is totally expected with Vandermonde matrices, and normally wouldn't be a problem as long as the fit converged and our only goal was to interpolate. However, we are going through several numerical operations before the intended interpolation, so the resulting fields don't converge to what we would expect. We can limit the widths of the specified window functions to prevent overlapping; unfortunately, this requires much more simulation time than is often needed. Alternatively, we can treat this as a non-convex optimization problem and recenter/resize the basis functions as needed.
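The ill-conditioning described here is easy to reproduce with any overlapping RBF basis. A small sketch (Gaussian kernels standing in for the Nuttall-window DTFTs; nodes and widths are hypothetical) showing the condition number exploding as the overlap grows:

```python
import numpy as np

# hypothetical frequency nodes, spaced 0.1 apart
freqs = np.linspace(1.0, 2.0, 11)
conds = []
for width in (0.05, 0.2, 0.8):  # kernel width relative to the 0.1 node spacing
    # Vandermonde-like matrix: basis function k evaluated at node j
    G = np.exp(-((freqs[:, None] - freqs[None, :]) ** 2) / (2 * width**2))
    conds.append(np.linalg.cond(G))
# narrow kernels -> nearly diagonal, well-conditioned;
# wide, heavily overlapping kernels -> nearly rank-deficient
```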
Closing, as #1167 implements the same thing within the adjoint code.
Filtered time source object used to "convolve" an impulse response with a user-provided source time profile (e.g. a conventional Gaussian). Useful for adjoint calculations.
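The "convolution" can be pictured in plain numpy (an illustration of the idea, not Meep's FilteredSource API): convolving an impulse response with a Gaussian time profile is equivalent to multiplying their DTFTs.

```python
import numpy as np

def dtft(x, freqs, dt):
    """Brute-force DTFT of a finite sequence x on a physical frequency grid."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * freqs[:, None] * n[None, :] * dt) @ x

dt = 0.05
t = np.arange(0.0, 8.0, dt)
gauss = np.exp(-((t - 3.0) ** 2) / (2 * 0.3**2))  # hypothetical source profile
h = np.exp(-t / 0.5) * np.cos(2 * np.pi * t)      # hypothetical impulse response

conv = np.convolve(gauss, h)                      # full linear convolution in time
freqs = np.linspace(0.2, 1.8, 33)
# convolution theorem for the DTFT: the spectra simply multiply
err = np.max(np.abs(dtft(conv, freqs, dt) - dtft(gauss, freqs, dt) * dtft(h, freqs, dt)))
```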
Estimates the time-domain impulse response from the frequency response using a simple, brute-force least-squares fit.
Ran a simple test that computes the filtered response in the frequency domain and the time domain (using the FilteredSource). The test is over a simple waveguide structure. The current test is failing.