
bug in get_epsilon_point for points on the cell boundary #1836

Closed
oskooi opened this issue Dec 1, 2021 · 5 comments · Fixed by #1849
oskooi commented Dec 1, 2021

There seems to be a bug in get_epsilon_point for parallel simulations. This can be demonstrated using the simple test below, which compares the output of get_epsilon_point and get_epsilon_grid over a set of grid points in a 2d cell of vacuum. The expected result is 1.0 everywhere. get_epsilon_point produces the correct result in the serial case but fails in the parallel case.

import unittest
import numpy as np
import meep as mp

class TestGetEpsilonGrid(unittest.TestCase):

    def setUp(self):
        resolution = 10
        self.cell_size = mp.Vector3(1.0,1.0,0)

        self.sim = mp.Simulation(resolution=resolution,
                                 cell_size=self.cell_size)

        self.sim.init_sim()


    def test_get_epsilon_grid(self):
        grid_resolution = 2
        Nx = int(grid_resolution*self.cell_size.x) + 1
        Ny = int(grid_resolution*self.cell_size.y) + 1
        xtics = np.linspace(-0.5*self.cell_size.x,0.5*self.cell_size.x,Nx)
        ytics = np.linspace(-0.5*self.cell_size.y,0.5*self.cell_size.y,Ny)

        eps_grid = self.sim.get_epsilon_grid(xtics,ytics,np.array([0]))
        for xi, xv in enumerate(xtics):
            for yi, yv in enumerate(ytics):
                eps_pt = self.sim.get_epsilon_point(mp.Vector3(xv,yv))
                print("eps:, ({:.4f}, {:.4f}), {}, {}".format(xv,yv,eps_pt,eps_grid[xi,yi]))
                self.assertAlmostEqual(np.real(eps_grid[xi,yi]), np.real(eps_pt), places=6)
                self.assertAlmostEqual(np.imag(eps_grid[xi,yi]), np.imag(eps_pt), places=6)


if __name__ == '__main__':
    unittest.main()

serial simulation

$ mpirun -np 1 python test.py
Using MPI version 3.1, 1 processes
-----------
Initializing structure...
time for choose_chunkdivision = 9.837e-06 s
Working in 2D dimensions.
Computational cell is 1 x 1 x 0 with resolution 10
time for set_epsilon = 0.000194357 s
-----------
eps:, (-0.5000, -0.5000), (1+0j), (1+0j)
eps:, (-0.5000, 0.0000), (1+0j), (1+0j)
eps:, (-0.5000, 0.5000), (1+0j), (1+0j)
eps:, (0.0000, -0.5000), (1+0j), (1+0j)
eps:, (0.0000, 0.0000), (1+0j), (1+0j)
eps:, (0.0000, 0.5000), (1+0j), (1+0j)
eps:, (0.5000, -0.5000), (1+0j), (1+0j)
eps:, (0.5000, 0.0000), (1+0j), (1+0j)
eps:, (0.5000, 0.5000), (1+0j), (1+0j)
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

Elapsed run time = 0.0032 s

parallel simulation

$ mpirun -np 2 python test.py
Using MPI version 3.1, 2 processes
-----------
Initializing structure...
Splitting into 2 chunks by voxels
time for choose_chunkdivision = 2.9331e-05 s
Working in 2D dimensions.
Computational cell is 1 x 1 x 0 with resolution 10
time for set_epsilon = 0.000185343 s
-----------
eps:, (-0.5000, -0.5000), (0.5+0j), (1+0j)
FF
Elapsed run time = 0.0035 s

======================================================================
FAIL: test_get_epsilon_grid (__main__.TestGetEpsilonGrid)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_get_epsilon_grid_bug.py", line 29, in test_get_epsilon_grid
    self.assertAlmostEqual(np.real(eps_grid[xi,yi]), np.real(eps_pt), places=6)
AssertionError: 1.0 != 0.5 within 6 places

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
oskooi added the bug label Dec 1, 2021
stevengj commented Dec 1, 2021

This function

meep/src/monitor.cpp

Lines 166 to 182 in f9bd757

complex<double> fields::get_chi1inv(component c, direction d, const ivec &origloc, double frequency,
                                    bool parallel) const {
  ivec iloc = origloc;
  complex<double> aaack = 1.0;
  locate_point_in_user_volume(&iloc, &aaack);
  for (int sn = 0; sn < S.multiplicity(); sn++)
    for (int i = 0; i < num_chunks; i++)
      if (chunks[i]->gv.owns(S.transform(iloc, sn))) {
        signed_direction ds = S.transform(d, sn);
        complex<double> val =
            chunks[i]->get_chi1inv(S.transform(c, sn), ds.d, S.transform(iloc, sn), frequency) *
            complex<double>(ds.flipped ^ S.transform(component_direction(c), sn).flipped ? -1 : 1, 0);
        return parallel ? sum_to_all(val) : val;
      }
  return d == component_direction(c) ? 1.0 : 0; // default to vacuum outside computational cell
}

should return a nonzero value on exactly one process for each origloc — i.e., from the process that owns the chunk that owns that point — and it should return 0.0 on other processes. I would check that this is the case.
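
A minimal way to check that is a temporary diagnostic pasted into fields::get_chi1inv, for example right before the fallback return. This is only a sketch: it reuses the S, num_chunks, and chunks members from the function above, and assumes meep's my_rank() helper and plain printf are usable in monitor.cpp.

// temporary diagnostic (not part of Meep): report, per process, whether any
// local chunk claims ownership of the symmetry-transformed point
bool owned = false;
for (int sn = 0; sn < S.multiplicity(); sn++)
  for (int i = 0; i < num_chunks; i++)
    if (chunks[i]->gv.owns(S.transform(iloc, sn))) owned = true;
printf("rank %d: point is %sowned by a local chunk\n", my_rank(), owned ? "" : "NOT ");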

oskooi commented Dec 2, 2021

I think I have tracked down the cause of the problem: whenever the input ivec parameter origloc contains a coordinate on the cell boundary (i.e., x, y = ±0.5), the line if (chunks[i]->gv.owns(S.transform(iloc, sn))) in the function above always evaluates to false and the if-statement body is never entered. This is a bug because exactly one chunk must own any given boundary voxel. As a result, fields::get_chi1inv returns a value of 1 on every process via a line that should never be reached:

return d == component_direction(c) ? 1.0 : 0; // default to vacuum outside computational cell
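
(A rough consistency check, assuming the vec-level caller sums each process's contribution with sum_to_all before inverting: if both processes fall through to this vacuum fallback, the summed chi1inv is 1.0 + 1.0 = 2.0, so get_epsilon_point reports ε = 1/2.0 = 0.5, which is exactly the value printed in the two-process run above.)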

There is something about Meep's default metallic boundaries that is causing the problem here, because no such error occurs if the cell is defined with periodic boundaries:

        self.sim = mp.Simulation(resolution=resolution,
                                 cell_size=self.cell_size,
                                 k_point=mp.Vector3())

oskooi commented Dec 3, 2021

Actually, it turns out that the serial case also fails whenever get_epsilon_point is passed a position anywhere on the cell boundary.

import unittest
import numpy as np
import meep as mp

class TestGetEpsilonGrid(unittest.TestCase):

    def setUp(self):
        resolution = 10
        self.cell_size = mp.Vector3(1.0,1.0,0)

        self.sim = mp.Simulation(resolution=resolution,
                                 cell_size=self.cell_size,
                                 default_material=mp.Medium(epsilon=12.25))

        self.sim.init_sim()

    def test_get_epsilon_grid(self):
        pt = mp.Vector3(-0.5,0.5,0)
        eps_grid = self.sim.get_epsilon_grid(np.array([pt.x]),np.array([pt.y]),np.array([pt.z]))
        eps_pt = self.sim.get_epsilon_point(pt)
        print("eps:, ({},{}), {}, {}".format(pt.x,pt.y,eps_pt,eps_grid))
        self.assertAlmostEqual(np.real(eps_grid), np.real(eps_pt), places=6)
        self.assertAlmostEqual(np.imag(eps_grid), np.imag(eps_pt), places=6)


if __name__ == '__main__':
    unittest.main()
$ mpirun -np 1 python test.py
Using MPI version 3.1, 1 processes
-----------
Initializing structure...
time for choose_chunkdivision = 1.027e-05 s
Working in 2D dimensions.
Computational cell is 1 x 1 x 0 with resolution 10
time for set_epsilon = 0.00018729 s
-----------
eps:, (-0.5,0.5), (1.180722891566265+0j), (12.25+0j)
F
======================================================================
FAIL: test_get_epsilon_grid (__main__.TestGetEpsilonGrid)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_get_epsilon_grid_bug.py", line 37, in test_get_epsilon_grid
    self.assertAlmostEqual(np.real(eps_grid), np.real(eps_pt), places=6)
AssertionError: array(12.25) != 1.180722891566265 within 6 places

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

oskooi changed the title from "bug in get_epsilon_point for parallel simulations" to "bug in get_epsilon_point for points on the cell boundary" Dec 3, 2021
stevengj commented Dec 8, 2021

It sounds like a fix would be to change this line:

return d == component_direction(c) ? 1.0 : 0; // default to vacuum outside computational cell

to something like

return d == component_direction(c) && (parallel || am_master()) ? 1.0 : 0; // default to vacuum outside computational cell
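
A sketch of how that reads in context, with the presumed rationale spelled out in comments (whether am_master() is the right guard for every non-parallel call path is an assumption here):

// Fallback for points not owned by any chunk (e.g. exactly on the cell boundary).
// When parallel == false, the caller is presumed to sum the per-process return
// values with sum_to_all, so only one process should contribute the vacuum
// default of 1.0 and the rest should return 0.
return d == component_direction(c) && (parallel || am_master()) ? 1.0 : 0; // default to vacuum outside computational cell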

stevengj commented Dec 8, 2021

(Right now, the structure class has no way of knowing what material you might want for a point outside the computational cell; it has no knowledge of default_material. We could change that, of course, by adding an additional parameter to the structure class that we could set from default_material. It's not clear to me whether we care very much about what value of ε is returned by get_epsilon_point for such points, however.)
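
For illustration only, such a change might boil down to replacing the hard-coded vacuum default in fields::get_chi1inv with a stored value; the member name default_chi1inv below is made up, and how it would be initialized from default_material is left open:

// hypothetical: default_chi1inv would hold 1/ε of default_material, stored on
// the structure (or fields) object when the simulation is initialized
return d == component_direction(c) ? default_chi1inv : 0.0; // default material outside computational cell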
