Fields dump load #1674
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1674      +/-   ##
==========================================
- Coverage   73.20%   73.17%   -0.03%
==========================================
  Files          13       13
  Lines        4515     4519       +4
==========================================
+ Hits         3305     3307       +2
- Misses       1210     1212       +2
It's fine to write everything to separate files (one per process, say). You'll want to do a couple of things: …
Basically, to use HDF5 parallel I/O with a single file, you have to create the file and the dataset collectively, and then each process can call …
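The simpler one-file-per-process alternative might look like the following sketch. The helper names are hypothetical, `pickle` stands in for Meep's HDF5 output, and in a real run `rank` would come from the MPI communicator:

```python
import os
import pickle


def dump_fields_chunk(fields_chunk, rank, dirname):
    """Write this process's portion of the fields to its own file.

    Hypothetical helper: `rank` would come from MPI (e.g. comm.Get_rank()),
    and a real implementation would use HDF5 rather than pickle.
    """
    os.makedirs(dirname, exist_ok=True)
    path = os.path.join(dirname, f"fields-rank{rank:04d}.pkl")
    with open(path, "wb") as f:
        pickle.dump(fields_chunk, f)
    return path


def load_fields_chunk(rank, dirname):
    """Read back the chunk that this rank previously dumped."""
    path = os.path.join(dirname, f"fields-rank{rank:04d}.pkl")
    with open(path, "rb") as f:
        return pickle.load(f)
```

Because each rank touches only its own file, no collective HDF5 calls are needed; the cost is that restarting on a different number of processes requires redistributing the chunks.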
Possible Python API: to re-start a simulation, run the same simulation script, creating the …
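One possible shape for that restart workflow, as pseudocode — the names `dump_simulation` and `load_simulation` are hypothetical, not the API this PR settles on:

```
# pseudocode sketch of a restart workflow
sim = Simulation(...)            # same script as the original run
sim.run(until=T1)
sim.dump_simulation("ckpt/")     # writes structure + fields state

# later, in a fresh invocation of the same script:
sim = Simulation(...)
sim.load_simulation("ckpt/")     # restores structure + fields state
sim.run(until=T2)                # continues from where the dump left off
```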
Closed by #1738?
First cut of a PR to add support for dumping and (re)loading the 'fields' state.
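In the spirit of that fields dump/load, a minimal serial sketch of the checkpoint round-trip. The file layout and helper names are hypothetical, with NumPy's `.npz` standing in for the HDF5 dumps Meep actually writes:

```python
import numpy as np


def dump_fields_state(path, components, t):
    """Save field component arrays plus the current simulation time.

    `components` is a dict like {"ex": array, "hy": array}; this is an
    illustrative layout, not the format the PR actually writes.
    """
    np.savez(path, _time=np.array(t), **components)


def load_fields_state(path):
    """Restore the component dict and simulation time from a dump."""
    with np.load(path) as data:
        t = float(data["_time"])
        components = {k: data[k] for k in data.files if k != "_time"}
    return components, t
```

The key point the PR's "fields" state captures, beyond the structure, is the time-dependent data: the field arrays themselves and the current time step, so a reloaded run can continue rather than restart from t=0.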