Thermalized CPU LB produces unphysical currents #3772

Closed · christophlohrmann opened this issue Jun 24, 2020 · 0 comments · Fixed by #3847

christophlohrmann (Contributor) commented Jun 24, 2020

When using the thermalized CPU LB with boundaries that form corners, unphysical currents occur; see the example below. With the CPU implementation, there are steady currents in the y and z directions without any external forces or velocity boundary conditions. With the GPU implementation, this does not happen.

from espressomd import System, lb, lbboundaries, shapes
import matplotlib.pyplot as plt
import numpy as np

impl = 'CPU'  # or 'GPU'

box_l = [40, 15, 40]
LB_params = {'agrid': 1,
             'tau': 1,
             'dens': 1,
             'visc': 0.2,
             'kT': 1e-8,
             'seed': 41}
padding = 4
system = System(box_l=box_l)
system.cell_system.skin = 1
system.time_step = LB_params['tau']

if impl == 'GPU':
    lbf = lb.LBFluidGPU(**LB_params)
elif impl == 'CPU':
    lbf = lb.LBFluid(**LB_params)

system.actors.add(lbf)

# four no-slip walls bounding the fluid in x and z (periodic in y),
# which creates corners where the walls meet
top_wall = shapes.Wall(dist=-(box_l[2] - padding), normal=[0, 0, -1])
bottom_wall = shapes.Wall(dist=padding, normal=[0, 0, 1])
left_wall = shapes.Wall(dist=padding, normal=[1, 0, 0])
right_wall = shapes.Wall(dist=-(box_l[0] - padding), normal=[-1, 0, 0])
for wall in [top_wall, bottom_wall, left_wall, right_wall]:
    noslip_wall = lbboundaries.LBBoundary(shape=wall)
    system.lbboundaries.add(noslip_wall)

# equilibrate before sampling
system.integrator.run(100)

# lbf.print_vtk_velocity(f'./velocity_{impl}.vtk')

# sample the interpolated velocity along a line through the channel center
probe_xs = np.linspace(0, box_l[0], num=40)
vels_over_time = []
n_samples = 100
for _ in range(n_samples):
    probe_vels = np.array([lbf.get_interpolated_velocity([x, 0, box_l[2] / 2.])
                           for x in probe_xs])
    vels_over_time.append(np.mean(probe_vels, axis=0))
    system.integrator.run(10)
vels_over_time = np.array(vels_over_time)
mean_vel = np.mean(vels_over_time, axis=0)
std_vel = np.std(vels_over_time, axis=0)
SEM_vel = std_vel / np.sqrt(n_samples)

print(f'velocity = {mean_vel} +- {SEM_vel}')
print(np.all(np.abs(mean_vel) < std_vel))
# I know we should compare to the standard error of the mean (divide by
# sqrt(n_samples)), but that takes forever to produce True on the GPU;
# see the sketch after this script.

# fig = plt.figure()
# for direction in [0, 1, 2]:
#     plt.plot(probe_xs, probe_vels[:, direction], label=f'v_{direction}')
# plt.xlabel('x')
# plt.legend()
# plt.show()
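
For reference, the stricter version of the final check (against the standard error of the mean rather than the standard deviation) could look like the sketch below. Here `vels_over_time` is the array from the script above; the helper name and the 3-sigma threshold are illustrative choices, not part of the original report:

import numpy as np

def consistent_with_zero(samples, n_sigma=3):
    """Return True if each column mean of `samples` (n_samples x 3)
    lies within n_sigma standard errors of the mean of zero."""
    samples = np.asarray(samples)
    mean = samples.mean(axis=0)
    sem = samples.std(axis=0) / np.sqrt(samples.shape[0])
    return bool(np.all(np.abs(mean) < n_sigma * sem))

# usage with the script above:
# print(consistent_with_zero(vels_over_time))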
kodiakhq bot closed this as completed in #3847 on Sep 30, 2020
kodiakhq bot added a commit that referenced this issue on Sep 30, 2020:
Fixes #3804, fixes #3772

The issue was actually a regression introduced with the switch to philox in commit
[f3cc4ba](f3cc4ba): random numbers formerly drawn from the interval (-0.5, 0.5] were replaced by random numbers in (0, 1].
With this fix, `lb_pressure_tensor_acf.py` runs successfully for both CPU and GPU. However, as @KaiSzuttor correctly mentioned in PR #3831, the CPU part of the test takes a while to execute (on my machine, on a single core, the whole test takes 136 s). I could try to make it faster, which, however, would require tweaking the tolerance limits `tol_node` and `tol_global`. @RudolfWeeber, what was your reasoning behind the chosen limits? Or are they semi-arbitrary choices?

Further, this PR corrects the comparison of the off-diagonal elements `avg_ij` vs. `avg_ji` in the test.
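
The effect of the interval mismatch is easy to see in isolation: uniform variates on (0, 1] average to 0.5 instead of 0, so using them where zero-mean noise on (-0.5, 0.5] is expected injects a constant bias that acts like a spurious force on the fluid. A minimal, ESPResSo-independent sketch (using NumPy in place of the philox generator):

import numpy as np

rng = np.random.default_rng(41)
n = 1_000_000

# zero-mean uniform noise, as the thermalized LB originally assumed
centered = rng.uniform(-0.5, 0.5, size=n)
# uniform noise near (0, 1], as effectively produced after the philox
# switch when the re-centering shift is missing
shifted = rng.uniform(0.0, 1.0, size=n)

print(centered.mean())  # ~ 0: fluctuations average out
print(shifted.mean())   # ~ 0.5: a systematic offset, not thermal noise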
jngrad pushed a commit to jngrad/espresso that referenced this issue on Oct 13, 2020