
Nbody structs #31

Merged
merged 18 commits into master from nbody-structs on Jun 6, 2018

Conversation

@Mikhail-Vaganov (Contributor) commented May 18, 2018

  • The structures should fit any n-body simulation: gravitational, electromagnetic, MD, etc. On the other hand, such a generic approach is also the main difficulty. An attempt at it was made with CustomPotentialParameters and in test/nbody_custom_potential_test.jl.
  • Several examples are presented in the test files.
  • One of the difficulties I have faced is the choice of a system of units.
  • There is a simulation of liquid argon. Next, I can simulate one of the simple models of water.

@coveralls commented May 18, 2018

Coverage Status

Coverage increased (+12.07%) to 82.44% when pulling ca402ef on Mikhail-Vaganov:nbody-structs into 493d018 on JuliaDiffEq:master.

@codecov bot commented May 18, 2018

Codecov Report

Merging #31 into master will increase coverage by 12.07%.
The diff coverage is 86.54%.

Impacted file tree graph

@@             Coverage Diff             @@
##           master      #31       +/-   ##
===========================================
+ Coverage   70.37%   82.44%   +12.07%     
===========================================
  Files           4       10        +6     
  Lines         108      336      +228     
===========================================
+ Hits           76      277      +201     
- Misses         32       59       +27
Impacted Files                      Coverage Δ
src/nbody_problem.jl                100% <ø> (ø)
src/nbody_boundary_conditions.jl    100% <100%> (ø)
src/nbody_bodies.jl                 100% <100%> (ø)
src/nbody_basic_potentials.jl       100% <100%> (ø)
src/nbody_simulation.jl             100% <100%> (ø)
src/nbody_simulation_result.jl      78.5% <78.5%> (ø)
src/nbody_to_ode.jl                 86.59% <86.59%> (ø)
src/nbody_system.jl                 96% <96%> (ø)
... and 5 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 493d018...ca402ef.

using StaticArrays, DiffEqBase
# Structures for bodies and systems under simulation

# Fields for position, velocity and mass of a particle should be required for every descendant
Member
This might be a good use of https://github.com/tbreloff/ConcreteAbstractions.jl . We can fork and register if it makes sense. Or @def

@Mikhail-Vaganov (Contributor Author) May 18, 2018

That is actually cool! Good, I'll check what I can do with @def first.

function ode_system!(du, u, p, t)
    du[:, 1:n] = @view u[:, n+1:2n];
    @inbounds for i=1:n
        du[:, n+i] .= acceleration(u[:, 1:n], i, n, simulation.sysmem, simulation.external_electric_field);
Member

Make acceleration non-allocating.

external_gravitational_field)
end
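A minimal sketch of the non-allocating pattern suggested above, mirroring the snippet; the in-place acceleration! and the way n is derived from the state shape are illustrative assumptions, not the PR's actual API:

# Sketch only: `acceleration!` is a hypothetical in-place variant of the PR's
# `acceleration`, writing into a preallocated view instead of returning an array.
function ode_system!(du, u, p, t)
    n = div(size(u, 2), 2)
    @views du[:, 1:n] .= u[:, n+1:2n]            # copy velocities without a temporary
    @inbounds for i in 1:n
        # fill the i-th acceleration column in place
        @views acceleration!(du[:, n+i], u[:, 1:n], i, n,
                             simulation.sysmem,
                             simulation.external_electric_field)
    end
end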

# A function specifically designed for a system of ChargedParticles
Member

I don't get this part. What's going on here? Are you using different acceleration functions based on the existence of some fields and not others?

Contributor Author

This is unclear, I agree. No, it is not based on the presence or absence of external fields.
Actually, the second implementation is wrong - I want to pass the NBodySystem into the acceleration function and decide there which forces act on the i-th particle. In addition to the forces from the other particles, the external fields also affect the i-th particle. But then I realized that I also need the positions of the bodies. So it is a mess here, I agree.
A better definition of this function would be something like this:

function acceleration(
    u,
    i,
    system::NBodySystem,
    external_electric_field,
    external_magnetic_field,
    external_gravitational_field)
end

Member

Looks like passing the NBodySimulation could make sense?

Contributor Author

I already pass almost all of its contents, yes. Right now passing NBodySimulation does make sense, indeed. I'll think about it: there might be more properties and fields for a simulation. By "simulation" I mean the conditions of an experiment, not the properties of the system under test/experiment/simulation.

end

# A function like this one can be used in a force-oriented approach to the calculations
function acceleration(
@ChrisRackauckas (Member) May 18, 2018

Is there a way at the top level to have the potential be fully user-definable?

Contributor Author

The potential of what? Just to make it clear :)
As I understand it, during an arbitrary n-body simulation a user will have a set of bodies/particles with some physical properties that are significant in their situation. Then, for the i-th body in the system, the user can define a potential function which can be used to calculate the force acting on that i-th body (as the gradient of the potential).

Right now the suggestion is to write a custom acceleration function and pass it into one of the NBodySimulation constructors. To handle this, in the code where NBodySimulation is converted into a DEProblem, it is necessary to choose either the standard acceleration or a custom one, in case the user has defined it.

Though it doesn't make a big difference whether we work with forces or potentials, I understand that potentials might be more convenient in some situations. I'll prepare interfaces for them too.
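As a rough usage sketch of the suggestion above - the acceleration keyword, the callback signature, and the previously constructed bodies collection are assumptions for discussion, not the merged interface:

# Hypothetical sketch of a user-defined acceleration passed into the simulation.
# The keyword name and the callback signature are assumptions, not the final API.
function my_acceleration!(a_i, u, i, system, t)
    # the user fills `a_i` with the acceleration of the i-th body,
    # e.g. minus the gradient of a custom potential; here: free particles
    a_i .= 0.0
end

simulation = NBodySimulation(bodies, (0.0, 1.0e-11);
                             acceleration = my_acceleration!)
result = run_simulation(simulation, VelocityVerlet(), dt = 1.0e-14)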

@dextorious

A general comment before I go through the code. Fundamentally, any N-body system is driven by a set of potentials (which determine the forces). In the general case (which we should support) potentials can depend on (in order of increasing rarity) coordinates, time, velocities and accelerations. What the potentials depend on fundamentally restricts our choice of solvers. For this package to be an integral part of DiffEq, we need the API to communicate these restrictions. Do the existing DiffEq solvers have traits for this sort of thing?

@ChrisRackauckas (Member)

There aren't traits, but traits could be added to the solvers so that they error when a solver assumption is violated. Right now they just pass through. For example, many RKN methods make a documented velocity-independence assumption in some part, but if you violate it the solve just runs with order loss. Expanding the trait system we have is never bad, it just takes man-hours.
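As a purely hypothetical illustration (none of these functions exist in OrdinaryDiffEq today), such a trait could be a plain function with a safe default and per-algorithm methods:

using OrdinaryDiffEq

# Hypothetical trait sketch - not an existing DiffEq API.
assumes_velocity_independent_force(alg) = false        # conservative default
assumes_velocity_independent_force(::DPRKN6) = true    # e.g. a documented RKN assumption

function warn_on_violated_assumptions(alg, velocity_dependent::Bool)
    if velocity_dependent && assumes_velocity_independent_force(alg)
        @warn "Algorithm assumes velocity-independent forces; expect order reduction."
    end
end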

@dextorious

Most of the existing RKNs will still work, though, in the sense of producing results that are correct, just at a lower order of accuracy, right?

@ChrisRackauckas (Member) commented May 18, 2018

> Most of the existing RKNs will still work, though, in the sense of producing results that are correct, just at a lower order of accuracy, right?

Yes. Actually, all of them will, likely with order 1.

@dextorious

Ok, then any traits we introduce for this should probably only produce warnings, not outright error out.

@dextorious

Okay, having gone through the code I think the general idea of it is okay, although many of the examples probably bear little resemblance to how they're going to look in the end (MDSystem, for instance). I think the reasonable thing to do here is to get a simple working example (like a Lennard-Jones fluid or any other simple N-body system that comes to mind) fully implemented (meaning that it produces and solves an appropriate DiffEq ODE system), discuss it and use it to iterate on the API itself, then move to a slightly more complicated example with more features, and so on. The idea being that a gradual growth in features should expose any glaring issues in the core API before it's grown too large to refactor easily.

@Mikhail-Vaganov changed the title from "Nbody structs" to "[WIP] Nbody structs" on May 23, 2018
+ Periodic boundary conditions
+ Lennard-Jones potential
@Mikhail-Vaganov (Contributor Author)

@ChrisRackauckas, I haven't corrected all the places after your review yet, but I am surely keeping them in mind.

@ChrisRackauckas (Member)

No worries

- Prepared functions for temperature as the average kinetic energy of the particles.
- Wrote a plot recipe for simulation results (right now it is a scatter plot).
- Next step of refactoring: some file rearrangement.
- Added tests.
@@ -207,7 +185,41 @@ function pairwise_lennard_jones_acceleration(rs,
return -24*system.lj_parameters.ϵ/system.lj_parameters.σ*accel
end

function temperature(result :: SimulationResult, time :: Float64)
function get_velocity(result :: SimulationResult, time, i = 0)
Member

this doesn't look right

Member

oh, probably not done here yet.

@Mikhail-Vaganov (Contributor Author) May 25, 2018

I am confused - what did you mean? I see two possibilities:

  1. time and i in get_velocity have no type assertions.
  2. i in get_velocity denotes the particle whose velocity is obtained, and if i = 0 the velocities of all the particles in the system are returned - probably it is better to have two specific functions (see the sketch below).
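A small sketch of the two-function variant; the solution field and the column layout assumed here are illustrative, not necessarily how SimulationResult stores its data:

# Sketch: replace the `i = 0` sentinel with two explicit methods.
# Assumes the ODE solution is stored in `sr.solution`, with positions in the
# first n columns and velocities in columns n+1:2n (this layout is an assumption).
function get_velocity(sr::SimulationResult, time, i)
    n = size(sr.solution(time), 2) ÷ 2
    return sr.solution(time)[:, n + i]
end

function get_velocities(sr::SimulationResult, time)
    n = size(sr.solution(time), 2) ÷ 2
    return sr.solution(time)[:, n+1:2n]
end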

Contributor Author

Also, the name result for a function argument is a bad choice.

- Coded an iterator for animating simulation results
- Added representation of some objects via Base.show()
- Changed implementation of PBC according to Allen
- created a file structure for the interface types
- refactored code
- changed the gravitational simulation types for the new interface
- added example of a custom potential usage
end

# This should be a method for integrators designed for SecondOrderODEProblem (it would be worth separating them somehow from the other algorithms)
function run_simulation(s::NBodySimulation, alg_type::Union{VelocityVerlet,DPRKN6,Yoshida6}, args...; kwargs...)
Contributor Author

There are too many algorithms in JuliaDiffEq :) It is cool, but I am not sure which of them would be good for N-body simulations. VelocityVerlet is the most frequently mentioned one. I am just not acquainted with the others yet, which is why I've chosen these three for now.

Member

Don't choose any. Just leave it open without a type parameter. One of the interesting things will be to test different algorithms. This has no effect on performance anyways.

@Mikhail-Vaganov (Contributor Author) May 30, 2018

In fact, I put some algorithms here in order to distinguish, via the multiple dispatch mechanism, when to transform an N-body system into an ODEProblem and when into a SecondOrderODEProblem. I could group all the algorithms appropriate for SecondOrderODEProblem under a particular parent type, though I am not sure whether that fits the current structure of JuliaDiffEq. Another approach is to add an argument like transformation_order::Integer = 2.
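A rough sketch of the dispatch-based variant; the SecondOrderODEProblem(s) / ODEProblem(s) conversions are assumed to exist for the sake of the example:

# Hypothetical sketch: group the "second-order" algorithms in a Union and
# dispatch on it; any other algorithm falls back to the first-order form.
const SecondOrderAlgs = Union{VelocityVerlet,DPRKN6,Yoshida6}

function run_simulation(s::NBodySimulation, alg::SecondOrderAlgs, args...; kwargs...)
    solve(SecondOrderODEProblem(s), alg, args...; kwargs...)
end

function run_simulation(s::NBodySimulation, alg, args...; kwargs...)
    solve(ODEProblem(s), alg, args...; kwargs...)
end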

end

struct LennardJonesParameters <: PotentialParameters
ϵ::AbstractFloat


All fields in a struct must have a concrete type. You can do this by parametrising the type.

Contributor Author

You are right, it would be better for performance (I have just read the Performance tips again).
Ok.
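A minimal sketch of the parametrised version (only ϵ is shown in the snippet above; the other fields here are assumptions and the PR's actual set may differ):

# Parametrising on the float type keeps every field concrete (type-stable
# field access) while still allowing Float64, Float32, BigFloat, etc.
struct LennardJonesParameters{pType<:AbstractFloat} <: PotentialParameters
    ϵ::pType
    σ::pType
    R::pType   # interaction (cut-off) radius - assumed field, per the discussion above
end

# Convenience constructor that promotes mixed inputs to a common float type.
LennardJonesParameters(ϵ, σ, R) = LennardJonesParameters(float.(promote(ϵ, σ, R))...)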

@Mikhail-Vaganov (Contributor Author) commented Jun 1, 2018

I have run into the following behavior of the systems under simulation. Sometimes the calculations run fast, but sometimes they slow down dramatically.
For example, the interactions of 10 particles of liquid argon took 10 hours to calculate. Algorithm: VelocityVerlet, timestep 1e-13 s, 300 steps, σ = 3.4e-10 m, interaction radius R = 2.25σ, box size L = 4.6σ with PBC. (Here I didn't stick to the density of liquid argon.)
[Animation: 10 particles with growing energy]

In the animation the particles are not shown at their real size: the markers just show the centres of the particles.
The system does not obey the law of energy conservation.
Probably I need to implement a thermostat, or choose the integration step more carefully. Chris advised me to check projections of the solution via callbacks. I haven't built the figure for the total energy, because Plots crashed along with Julia and I lost the data. Nevertheless, here is the figure for the kinetic energy over time:
[Plot: kinetic energy over time]

Another suggestion is to run the simulation in non-dimensional (reduced) units, so as to avoid working with very small numbers.
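For reference, a small sketch of the standard Lennard-Jones reduced units, using approximate textbook argon values (these numbers are not taken from the PR's test files):

# Reduced (dimensionless) LJ units: lengths in units of σ, energies in ϵ, masses in m.
const σ_ar = 3.4e-10     # m
const ϵ_ar = 1.65e-21    # J   (ϵ/k_B ≈ 120 K)
const m_ar = 6.63e-26    # kg  (mass of an argon atom)

const τ_ar = σ_ar * sqrt(m_ar / ϵ_ar)   # LJ time unit, ≈ 2.2e-12 s

r_reduced(r) = r / σ_ar                  # position in units of σ
t_reduced(t) = t / τ_ar                  # time in units of τ
T_reduced(T) = T * 1.38e-23 / ϵ_ar       # temperature in units of ϵ/k_B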

@ChrisRackauckas (Member)

Can you do that last plot in log scale?

@Mikhail-Vaganov (Contributor Author)

Ok, but it will have to be for another round of simulations: I lost the previous data in a sudden crash.

@ChrisRackauckas (Member)

What you're seeing is instability. The cause of this instability depends on the speed of the growth, which would be visible in log scale.

- reworked PBC handling
- tried to add ManifoldProjection constraints
@Mikhail-Vaganov (Contributor Author) commented Jun 5, 2018

After having found what slowed the calculations down, I still struggle with instabilities. Here I tried to simulate the behavior of 4 particles with zero initial velocities, placed at distances corresponding to the average distance between molecules of liquid argon:
[Animation: 4 particles in a line, 224 steps]

As you can see, at the beginning the particles start to repel each other, but then the upper particle crosses the cell border and its image appears at the bottom of the cell. This image pushes away the particle lying above it, and then a chain reaction takes place.
The same picture occurs in simulations with different time steps: the simulations always stop with WARNING: Instability detected. Aborting.
Here is the plot of the total energy growth:
[Plot: total energy of the 4 particles, log scale]

I tried to use ManifoldProjection callbacks to enforce the energy conservation law while solving the ODEProblem with Tsit5(), but got a solution with only one timestep.
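A rough sketch of the kind of callback I mean; the g(resid, u, p, t) residual signature is the one documented for recent DiffEqCallbacks releases (older versions used a different argument order), and total_energy stands in for the package's energy function:

using OrdinaryDiffEq, DiffEqCallbacks

E0 = total_energy(u0)   # energy of the initial state (placeholder helper)

# Residual: distance of the current state from the initial energy level set.
function energy_manifold!(resid, u, p, t)
    resid .= 0
    resid[1] = total_energy(u) - E0
end

cb  = ManifoldProjection(energy_manifold!)
sol = solve(prob, Tsit5(), callback = cb)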

@ChrisRackauckas, @dextorious, can you advise me what to try next? My ideas are thermostatting and/or controlling the velocities.

@ChrisRackauckas (Member)

How many steps is it taking before diverging like that?

@Mikhail-Vaganov (Contributor Author)

There are 224 steps in the solution. I tried to calculate it in both real and reduced units: the result is the same.
It takes about 200 steps before diverging, at around 2.0e-12 s.

@ChrisRackauckas (Member)

Try it on this small time span with arbitrary-precision numbers.

@Mikhail-Vaganov (Contributor Author)

If I understood correctly: here is the total energy over the time span (2.0e-12, 3.0e-12), starting from the 200th time step:
[Plot: total energy, steps 200-300, log scale]

@Mikhail-Vaganov (Contributor Author)

@ChrisRackauckas, probably I didn't understand you correctly. Did you mean simulating the system over the time span beginning where the divergence happens?

If I run the simulation from, say, the 200th time step, it breaks after 224 steps again. So it seems that the starting time of the simulation makes no difference.

@dextorious

What happens if you increase the initial inter-particle distance slightly (say, by 20% or so) and increase the size of the box by 2-3x? Does the instability only appear after the periodic boundary conditions are applied?

Thermostatting is not a prerequisite of stable LJ fluid dynamics. An intrusive thermostat might well hide the instability, but that's just masking whatever issue is causing this - we need to find the root cause.

@ChrisRackauckas (Member)

Do the simulation from the beginning but using BigFloats for the state and time. There is a chance that this is really slow (since it's arbitrary precision) and doesn't diverge, so make sure you solve it on a short timespan (but long enough to find issues). That would rule out catastrophic cancellation as the issue. Also reduce the tolerances a lot, to like 1e-8 on abstol and reltol. If you're still having issues, then the problem is somewhere in your field specification. ~200 steps is not enough for energy drift to come into play (that's more like 10,000+ steps), at least not to this extent. So it's either a numerical accuracy issue (in which case the tolerance decrease helps), a floating point issue (in which case BigFloats will get rid of it), or it's an issue in the model/potential.
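Concretely, something along these lines (a sketch: u0 and ode_system! stand in for the actual state and right-hand side):

using OrdinaryDiffEq

# Sketch: rerun a short span in arbitrary precision with tight tolerances,
# to separate catastrophic cancellation from model/potential errors.
u0_big = big.(u0)                          # state (positions and velocities) as BigFloat
tspan  = (big"0.0", big"3.0e-12")          # short, but long enough to show the issue

prob_big = ODEProblem(ode_system!, u0_big, tspan)
sol_big  = solve(prob_big, Tsit5(), abstol = 1e-8, reltol = 1e-8)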

@jgreener64

Are you sure you are calculating the interaction between particle i and the closest image of particle j, accounting for the PBCs?

From a quick scan of the code I am not sure that line 99 of src/nbody_simulation_result.jl changes anything of note, so particles only start to affect particles on the other side of the box when they jump over.

Hope this helps.
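For reference, the usual minimum-image convention for a cubic box of side L looks roughly like this (a generic sketch, not the PR's code):

# Minimum-image displacement between particles i and j in a cubic box of side L:
# each component of the separation is wrapped back into (-L/2, L/2], so particle i
# always interacts with the nearest periodic image of particle j.
function minimum_image(ri, rj, L)
    rij = ri .- rj
    return rij .- L .* round.(rij ./ L)
end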

@Mikhail-Vaganov (Contributor Author)

@jgreener64, you are right! Right now it works incorrectly. Thanks.

@Mikhail-Vaganov (Contributor Author) commented Jun 6, 2018

Ok, the problem was in the PBC. I think I have fixed it. Here is the plot of the temperature of a system of 1000 particles (T = 120.0 K, σ = 3.4e-10 m, τ = 1e-14 s) over 400 steps:
[Plot: temperature of 1000 particles, ≈110 K]

And here is the energy plot:
[Plot: energy of 1000 particles]

@ChrisRackauckas, @dextorious, I would like to finish and close this pull request as an accomplished step. What do you think - what else should be done for that?

@ChrisRackauckas (Member)

LGTM. The only reservation I have is that the old NBodyProblem was deleted but the replacement isn't feature-complete yet (i.e. it cannot take an arbitrary potential function), so there's currently no deprecation path. Getting some replacement into this newer API should be high on the priority list.

I'll give @dextorious some time to review and then we can merge.

@Mikhail-Vaganov (Contributor Author)

@ChrisRackauckas, actually NBodyProblem is still there, in the nbody_problem.jl file - I renamed it from nbody.jl. I deleted my old NBodyGravProblem, but that is not so important.
Nevertheless, I understood your comment about the arbitrary potential.

@ChrisRackauckas changed the title from "[WIP] Nbody structs" to "Nbody structs" on Jun 6, 2018
system::sType
tspan::Tuple{Float64,Float64}
boundary_conditions::bcType
external_electric_field
Member

The way these are applied doesn't cause performance issues? If it's just a single modifying call then it's fine. From what I can see it looks fine?

Contributor Author

Right now it isn't implemented: I left it here just to show where I want to set these fields up. In the future, I think, these fields will depend on time and coordinates.

@dextorious

Okay, I've gone through the code and profiled the argon test a bit. I didn't find any major bugs, but it's pretty clear that further optimization will be in order before we can run this on real systems of appreciable size. The periodic boundaries seem to work fine now, good job on solving that.

On what comes next, my personal view is that it probably makes sense to implement one of the simpler water models (SPC/E or TIP3, for instance) before investing serious effort in optimization because: 1) it involves considerably more functionality and may expose some API issues that would require refactoring and 2) it's much easier to benchmark, since every major MD package out there has published performance numbers on standard water boxes. Still, this is debatable. If you find that performance issues inhibit your productivity too much, it may make sense to optimize sooner. Thermostats would then be the next piece on the agenda, but the simpler thermostats don't really have a significant impact on the performance, so those can be added at any point.

As for the merge decision, that's up to whatever standard @ChrisRackauckas wants to maintain for the master branches of DiffEq packages. It's pretty clear that this API shouldn't be considered stable and some elements will probably change substantially as we work towards more complex problems, but it does work and, in my view, represents a great start.

@ChrisRackauckas ChrisRackauckas merged commit 6481257 into SciML:master Jun 6, 2018
@ChrisRackauckas (Member)

I think it's always best to do smaller PRs and iterate over time. This is fine on master. Lots of unfinished stuff can be in master and it's only "released" when documented.
