refactor docs (deepmodeling#952)
* refactor docs

* Update type-embedding.md

* fix typos

* Update lammps.md

* Update model-deviation.md

* fix typos

* fix links in readme; rewrite training

* refactor model part

* fix several typos; move sections

* Update doc/third-party/lammps-command.md

Co-authored-by: tuoping <80671886+tuoping@users.noreply.github.com>

* create markdown files

* revert api_cc

* update developer toctree

* remove unexpected api_cc.rst

Co-authored-by: Han Wang <amcadmus@gmail.com>
Co-authored-by: tuoping <80671886+tuoping@users.noreply.github.com>
3 people authored Aug 16, 2021
1 parent 6d8c31c commit 820b3ed
Showing 70 changed files with 1,383 additions and 1,242 deletions.
87 changes: 66 additions & 21 deletions README.md
@@ -24,10 +24,10 @@ DeePMD-kit is a package written in Python/C++, designed to minimize the effort r
For more information, check the [documentation](https://deepmd.readthedocs.io/).

# Highlights in DeePMD-kit v2.0
* [Model compression](doc/getting-started.md#compress-a-model). Accelerate the efficiency of model inference for 4-15 times.
* [New descriptors](doc/getting-started.md#write-the-input-script). Including [`se_e2_r`](doc/train-se-e2-r.md) and [`se_e3`](doc/train-se-e3.md).
* [Hybridization of descriptors](doc/train-hybrid.md). Hybrid descriptor constructed from concatenation of several descriptors.
* [Atom type embedding](doc/train-se-e2-a-tebd.md). Enable atom type embedding to decline training complexity and refine performance.
* [Model compression](doc/freeze/compress.md). Speeds up model inference by a factor of 4-15.
* [New descriptors](doc/model/overall.md), including [`se_e2_r`](doc/model/train-se-e2-r.md) and [`se_e3`](doc/model/train-se-e3.md).
* [Hybridization of descriptors](doc/model/train-hybrid.md). A hybrid descriptor constructed by concatenating several descriptors.
* [Atom type embedding](doc/model/train-se-e2-a-tebd.md). Embedding the atom types reduces training complexity and improves performance.
* Training and inference of the dipole (vector) and polarizability (matrix).
* Split of training and validation datasets.
* Optimized training on GPUs.
@@ -55,28 +55,66 @@ In addition to building up potential energy models, DeePMD-kit can also be used

# Download and install

Please follow our [github](https://github.com/deepmodeling/deepmd-kit) webpage to download the [latest released version](https://github.com/deepmodeling/deepmd-kit/tree/master) and [development version](https://github.com/deepmodeling/deepmd-kit/tree/devel).
Please follow our [GitHub](https://github.com/deepmodeling/deepmd-kit) webpage to download the [latest released version](https://github.com/deepmodeling/deepmd-kit/tree/master) and [development version](https://github.com/deepmodeling/deepmd-kit/tree/devel).

DeePMD-kit offers multiple installation methods. It is recommend using easily methods like [offline packages](doc/install.md#offline-packages), [conda](doc/install.md#with-conda) and [docker](doc/install.md#with-docker).
DeePMD-kit offers multiple installation methods. It is recommended to use easy methods like [offline packages](doc/install/easy-install.md#offline-packages), [conda](doc/install/easy-install.md#with-conda) and [docker](doc/install/easy-install.md#with-docker).

One may manually install DeePMD-kit by following the instuctions on [installing the python interface](doc/install.md#install-the-python-interface) and [installing the C++ interface](doc/install.md#install-the-c-interface). The C++ interface is necessary when using DeePMD-kit with LAMMPS and i-PI.
One may manually install DeePMD-kit by following the instructions on [installing the Python interface](doc/install/install-from-source.md#install-the-python-interface) and [installing the C++ interface](doc/install/install-from-source.md#install-the-c-interface). The C++ interface is necessary when using DeePMD-kit with LAMMPS and i-PI.


# Use DeePMD-kit

The typical procedure of using DeePMD-kit includes the following steps

1. [Prepare data](doc/getting-started.md#prepare-data)
2. [Train a model](doc/getting-started.md#train-a-model)
3. [Analyze training with Tensorboard](doc/tensorboard.md)
4. [Freeze the model](doc/getting-started.md#freeze-a-model)
5. [Test the model](doc/getting-started.md#test-a-model)
6. [Compress the model](doc/getting-started.md#compress-a-model)
7. [Inference the model in python](doc/getting-started.md#model-inference) or using the model in other molecular simulation packages like [LAMMPS](doc/getting-started.md#run-md-with-lammps), [i-PI](doc/getting-started.md#run-path-integral-md-with-i-pi) or [ASE](doc/getting-started.md#use-deep-potential-with-ase).

A quick-start on using DeePMD-kit can be found [here](doc/getting-started.md).

A full [document](doc/train-input-auto.rst) on options in the training input script is available.
A quick start on using DeePMD-kit can be found in the following guides:

- [Prepare data with dpdata](doc/data/dpdata.md)
- [Training a model](doc/train/training.md)
- [Freeze a model](doc/freeze/freeze.md)
- [Test a model](doc/test/test.md)
- [Running MD with LAMMPS](doc/third-party/lammps.md)

A full [document](doc/train/train-input-auto.rst) on options in the training input script is available.

# Advanced

- [Installation](doc/install/index.md)
- [Easy install](doc/install/easy-install.md)
- [Install from source code](doc/install/install-from-source.md)
- [Install LAMMPS](doc/install/install-lammps.md)
- [Install i-PI](doc/install/install-ipi.md)
- [Building conda packages](doc/install/build-conda.md)
- [Data](doc/data/index.md)
- [Data conversion](doc/data/data-conv.md)
- [Prepare data with dpdata](doc/data/dpdata.md)
- [Model](doc/model/index.md)
- [Overall](doc/model/overall.md)
- [Descriptor `"se_e2_a"`](doc/model/train-se-e2-a.md)
- [Descriptor `"se_e2_r"`](doc/model/train-se-e2-r.md)
- [Descriptor `"se_e3"`](doc/model/train-se-e3.md)
- [Descriptor `"hybrid"`](doc/model/train-hybrid.md)
- [Fit energy](doc/model/train-energy.md)
- [Fit `tensor` like `Dipole` and `Polarizability`](doc/model/train-fitting-tensor.md)
- [Train a Deep Potential model using `type embedding` approach](doc/model/train-se-e2-a-tebd.md)
- [Training](doc/train/index.md)
- [Training a model](doc/train/training.md)
- [Advanced options](doc/train/training-advanced.md)
- [Parallel training](doc/train/parallel-training.md)
- [TensorBoard Usage](doc/train/tensorboard.md)
- [Known limitations of using GPUs](doc/train/gpu-limitations.md)
- [Training Parameters](doc/train/train-input-auto.rst)
- [Freeze and Compress](doc/freeze/index.rst)
- [Freeze a model](doc/freeze/freeze.md)
- [Compress a model](doc/freeze/compress.md)
- [Test](doc/test/index.rst)
- [Test a model](doc/test/test.md)
- [Calculate Model Deviation](doc/test/model-deviation.md)
- [Inference](doc/inference/index.rst)
- [Python interface](doc/inference/python.md)
- [C++ interface](doc/inference/cxx.md)
- [Integrate with third-party packages](doc/third-party/index.rst)
- [Use deep potential with ASE](doc/third-party/ase.md)
- [Running MD with LAMMPS](doc/third-party/lammps.md)
- [LAMMPS commands](doc/third-party/lammps-command.md)
- [Run path-integral MD with i-PI](doc/third-party/ipi.md)


# Code structure
@@ -101,7 +139,14 @@ The code is organized as follows:

# Troubleshooting

See the [troubleshooting page](doc/troubleshooting/index.md).
- [Model compatibility](doc/troubleshooting/model-compatability.md)
- [Installation](doc/troubleshooting/installation.md)
- [The temperature undulates violently during early stages of MD](doc/troubleshooting/md-energy-undulation.md)
- [MD: cannot run LAMMPS after installing a new version of DeePMD-kit](doc/troubleshooting/md-version-compatibility.md)
- [Do we need to set rcut < half boxsize?](doc/troubleshooting/howtoset-rcut.md)
- [How to set sel?](doc/troubleshooting/howtoset-sel.md)
- [How to control the number of nodes used by a job?](doc/troubleshooting/howtoset_num_nodes.md)
- [How to tune Fitting/embedding-net size?](doc/troubleshooting/howtoset_netsize.md)


# Contributing
10 changes: 5 additions & 5 deletions doc/conf.py
@@ -106,8 +106,8 @@ def classify_index_TS():
# -- Project information -----------------------------------------------------

project = 'DeePMD-kit'
copyright = '2020, Deep Potential'
author = 'Deep Potential'
copyright = '2017-2021, Deep Modeling'
author = 'Deep Modeling'

def run_doxygen(folder):
"""Run the doxygen make command in the designated folder"""
@@ -148,9 +148,9 @@ def setup(app):
# 'sphinx.ext.autosummary'
# ]

mkindex("troubleshooting")
mkindex("development")
classify_index_TS()
#mkindex("troubleshooting")
#mkindex("development")
#classify_index_TS()

extensions = [
"sphinx_rtd_theme",
52 changes: 52 additions & 0 deletions doc/data/data-conv.md
@@ -0,0 +1,52 @@
# Data conversion

One needs to provide the following information to train a model: the atom types, the simulation box, the atom coordinates, the atom forces, the system energy and the virial. A snapshot of a system that contains this information is called a **frame**. We use the following convention of units:


Property | Unit
---|---
Time | ps
Length | Å
Energy | eV
Force | eV/Å
Virial | eV
Pressure | bar


The frames of the system are stored in two formats. A raw file is a plain text file in which each property is written to its own file and each frame occupies one line. The default files that provide the box, coordinates, forces, energies and virials are `box.raw`, `coord.raw`, `force.raw`, `energy.raw` and `virial.raw`, respectively. *We recommend you use these file names*. Here is an example of `force.raw`:
```bash
$ cat force.raw
-0.724 2.039 -0.951 0.841 -0.464 0.363
6.737 1.554 -5.587 -2.803 0.062 2.222
-1.968 -0.163 1.020 -0.225 -0.789 0.343
```
This `force.raw` contains 3 frames, each holding the forces of 2 atoms, so it has 3 lines and 6 columns. Each line provides all 3 force components of the 2 atoms in one frame. The first three numbers are the force components of the first atom, while the last three are those of the second atom. The coordinate file `coord.raw` is organized similarly. In `box.raw`, the 9 components of the box vectors should be provided on each line. In `virial.raw`, the 9 components of the virial tensor should be provided on each line in the order `XX XY XZ YX YY YZ ZX ZY ZZ`. The number of lines in all raw files should be identical.
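
As a quick sanity check, one can load such a raw file with NumPy and reshape it into a `(nframes, natoms, 3)` array. The sketch below assumes the 3-frame, 2-atom `force.raw` shown above:
```python
import numpy as np

# One frame per line, 3 force components per atom.
forces = np.loadtxt("force.raw")          # shape: (nframes, natoms * 3)
nframes, ncols = forces.shape
natoms = ncols // 3

# forces[i, j] now gives the force vector of atom j in frame i.
forces = forces.reshape(nframes, natoms, 3)
print(forces.shape)                       # (3, 2, 3) for the example above
```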

We assume that the atom types do not change across frames. They are provided in `type.raw`, which has one line with the types of the atoms written one by one. The atom types should be integers. For example, the `type.raw` of a system that has 2 atoms of types 0 and 1 reads:
```bash
$ cat type.raw
0 1
```

Sometimes one needs to map the integer types to atom names. The mapping can be given by the file `type_map.raw`. For example
```bash
$ cat type_map.raw
O H
```
The type `0` is named `"O"` and the type `1` is named `"H"`.
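
For scripting purposes, the two files can be combined to recover the element name of every atom. A minimal sketch, assuming the two-atom O/H example above:
```python
import numpy as np

# Integer type of each atom (one line in type.raw).
types = np.loadtxt("type.raw", dtype=int, ndmin=1)    # e.g. [0, 1]

# Names of the types, in order (one line in type_map.raw).
with open("type_map.raw") as f:
    type_map = f.read().split()                       # e.g. ["O", "H"]

names = [type_map[t] for t in types]                  # ["O", "H"]
print(names)
```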

The second format is a set of NumPy binary data files that are used directly by the training program. One can use the script `$deepmd_source_dir/data/raw/raw_to_set.sh` to convert the prepared raw files to data sets. For example, if the raw files contain 6000 frames,
```bash
$ ls
box.raw coord.raw energy.raw force.raw type.raw virial.raw
$ $deepmd_source_dir/data/raw/raw_to_set.sh 2000
nframe is 6000
nline per set is 2000
will make 3 sets
making set 0 ...
making set 1 ...
making set 2 ...
$ ls
box.raw coord.raw energy.raw force.raw set.000 set.001 set.002 type.raw virial.raw
```
It generates three sets `set.000`, `set.001` and `set.002`, with each set containing 2000 frames. One does not need to handle the binary data files inside the `set.*` directories directly. The directory containing `set.*` and `type.raw` is called a *system*.
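
If one wants to double-check a converted system, the set contents can be inspected with NumPy. The file names below follow the usual DeePMD-kit layout inside each `set.*` directory (`coord.npy`, `force.npy`, `energy.npy`, `box.npy`):
```python
import numpy as np

coord = np.load("set.000/coord.npy")     # one row of 3 * natoms coordinates per frame
energy = np.load("set.000/energy.npy")   # one energy per frame
print(coord.shape, len(energy))          # 2000 frames per set in the example above
```
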
19 changes: 2 additions & 17 deletions doc/data-conv.md → doc/data/dpdata.md
@@ -1,21 +1,6 @@
# Data
# Prepare data with dpdata


In this example we will convert the DFT labeled data stored in VASP `OUTCAR` format into the data format used by DeePMD-kit. The example `OUTCAR` can be found in the directory.
```bash
$deepmd_source_dir/examples/data_conv
```


## Definition

The DeePMD-kit organize data in **`systems`**. Each `system` is composed by a number of **`frames`**. One may roughly view a `frame` as a snap short on an MD trajectory, but it does not necessary come from an MD simulation. A `frame` records the coordinates and types of atoms, cell vectors if the periodic boundary condition is assumed, energy, atomic forces and virial. It is noted that the `frames` in one `system` share the same number of atoms with the same type.



## Data conversion

It is conveninent to use [dpdata](https://github.com/deepmodeling/dpdata) to convert data generated by DFT packages to the data format used by DeePMD-kit.
One can use the convenient tool [`dpdata`](https://github.com/deepmodeling/dpdata) to convert data directly from the output of first-principles packages to the DeePMD-kit format.

To install it, one can execute
```bash
pip install dpdata
```
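
Once installed, a typical conversion takes only a few lines of Python. The sketch below is illustrative (the `OUTCAR` path and the output directory are assumptions, not part of this repository):
```python
import dpdata

# Parse a VASP OUTCAR and dump it in the DeePMD-kit numpy format,
# grouping the frames into sets of 2000.
system = dpdata.LabeledSystem("OUTCAR", fmt="vasp/outcar")
system.to_deepmd_npy("deepmd_data", set_size=2000)
print(system.get_nframes())
```
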
8 changes: 8 additions & 0 deletions doc/data/index.md
@@ -0,0 +1,8 @@
# Data

In this section, we will introduce how to convert DFT-labeled data into the data format used by DeePMD-kit.

DeePMD-kit organizes data in `systems`. Each `system` is composed of a number of `frames`. One may roughly view a `frame` as a snapshot of an MD trajectory, but it does not necessarily come from an MD simulation. A `frame` records the coordinates and types of the atoms, the cell vectors if periodic boundary conditions are assumed, the energy, the atomic forces and the virial. Note that the `frames` in one `system` share the same number of atoms of the same types.

- [Data conversion](data-conv.md)
- [Prepare data with dpdata](dpdata.md)
11 changes: 11 additions & 0 deletions doc/data/index.rst
@@ -0,0 +1,11 @@
Data
====
In this section, we will introduce how to convert DFT-labeled data into the data format used by DeePMD-kit.

DeePMD-kit organizes data in :code:`systems`. Each :code:`system` is composed of a number of :code:`frames`. One may roughly view a :code:`frame` as a snapshot of an MD trajectory, but it does not necessarily come from an MD simulation. A :code:`frame` records the coordinates and types of the atoms, the cell vectors if periodic boundary conditions are assumed, the energy, the atomic forces and the virial. Note that the :code:`frames` in one :code:`system` share the same number of atoms of the same types.

.. toctree::
:maxdepth: 1

data-conv
dpdata
File renamed without changes.
6 changes: 0 additions & 6 deletions doc/development/index.md

This file was deleted.

10 changes: 5 additions & 5 deletions doc/development/type-embedding.md
@@ -35,12 +35,12 @@ The difference between two variants above is whether using the information of ce
## How to use
A detailed introduction can be found at [`se_e2_a_tebd`](../train-se-e2-a-tebd.md). For a fast start-up, you can simply add a `type_embedding` section to the input JSON file as displayed below, and the atom type embedding algorithm will be adopted automatically.
An example of `type_embedding` is:
```json=
```json
"type_embedding":{
"neuron":Type[2, 4, 8],
"resnet_dt":Atomfalse,
"seed":Type1
}
"neuron": [2, 4, 8],
"resnet_dt": false,
"seed": 1
}
```


81 changes: 81 additions & 0 deletions doc/freeze/compress.md
@@ -0,0 +1,81 @@
# Compress a model

Once the frozen model is obtained from DeePMD-kit, we can get the neural network structure and its parameters (weights, biases, etc.) from the trained model, and compress it in the following way:
```bash
dp compress -i graph.pb -o graph-compress.pb
```
where `-i` gives the original frozen model and `-o` gives the compressed model. Several other command line options can be passed to `dp compress`, which can be checked with
```bash
$ dp compress --help
```
The following explanation will be provided:
```
usage: dp compress [-h] [-v {DEBUG,3,INFO,2,WARNING,1,ERROR,0}] [-l LOG_PATH]
[-m {master,collect,workers}] [-i INPUT] [-o OUTPUT]
[-s STEP] [-e EXTRAPOLATE] [-f FREQUENCY]
[-c CHECKPOINT_FOLDER]
optional arguments:
-h, --help show this help message and exit
-v {DEBUG,3,INFO,2,WARNING,1,ERROR,0}, --log-level {DEBUG,3,INFO,2,WARNING,1,ERROR,0}
set verbosity level by string or number, 0=ERROR,
1=WARNING, 2=INFO and 3=DEBUG (default: INFO)
-l LOG_PATH, --log-path LOG_PATH
set log file to log messages to disk, if not
specified, the logs will only be output to console
(default: None)
-m {master,collect,workers}, --mpi-log {master,collect,workers}
Set the manner of logging when running with MPI.
'master' logs only on main process, 'collect'
broadcasts logs from workers to master and 'workers'
means each process will output its own log (default:
master)
-i INPUT, --input INPUT
The original frozen model, which will be compressed by
the code (default: frozen_model.pb)
-o OUTPUT, --output OUTPUT
The compressed model (default:
frozen_model_compressed.pb)
-s STEP, --step STEP Model compression uses fifth-order polynomials to
interpolate the embedding-net. It introduces two
tables with different step size to store the
parameters of the polynomials. The first table covers
the range of the training data, while the second table
is an extrapolation of the training data. The domain
of each table is uniformly divided by a given step
size. And the step(parameter) denotes the step size of
the first table and the second table will use 10 *
step as it's step size to save the memory. Usually the
value ranges from 0.1 to 0.001. Smaller step means
higher accuracy and bigger model size (default: 0.01)
-e EXTRAPOLATE, --extrapolate EXTRAPOLATE
The domain range of the first table is automatically
detected by the code: [d_low, d_up]. While the second
table ranges from the first table's upper
boundary(d_up) to the extrapolate(parameter) * d_up:
[d_up, extrapolate * d_up] (default: 5)
-f FREQUENCY, --frequency FREQUENCY
The frequency of tabulation overflow check(Whether the
input environment matrix overflow the first or second
table range). By default do not check the overflow
(default: -1)
-c CHECKPOINT_FOLDER, --checkpoint-folder CHECKPOINT_FOLDER
path to checkpoint folder (default: .)
-t TRAINING_SCRIPT, --training-script TRAINING_SCRIPT
The training script of the input frozen model
(default: None)
```
**Parameter explanation**

Model compression consists of tabulating the embedding-net.
The table is composed of fifth-order polynomial coefficients and is assembled from two sub-tables. The first sub-table takes the stride (parameter) as its uniform stride, while the second sub-table takes 10 * stride as its uniform stride.
The range of the first table is automatically detected by DeePMD-kit, while the second table ranges from the first table's upper boundary (upper) to extrapolate (parameter) * upper.
Finally, we added a check frequency parameter. It indicates how often the program checks for overflow (i.e. whether the input environment matrix overflows the first or second table range) during MD inference.
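
As a rough illustration of how the step and extrapolate parameters determine the two table domains, consider the sketch below (the detected range `[d_low, d_up]` and all numbers are made up for illustration):
```python
# Hypothetical numbers: d_low/d_up are detected automatically by dp compress.
d_low, d_up = -1.0, 2.0      # detected range of the training data
step = 0.01                  # -s / --step: stride of the first table
extrapolate = 5              # -e / --extrapolate

first_table = (d_low, d_up)                   # uniform stride = step
second_table = (d_up, extrapolate * d_up)     # uniform stride = 10 * step
n_first = round((first_table[1] - first_table[0]) / step)            # ~300 intervals
n_second = round((second_table[1] - second_table[0]) / (10 * step))  # ~80 intervals
print(first_table, second_table, n_first, n_second)
```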

**Justification of model compression**

Model compression, with little loss of accuracy, can greatly speed up MD inference. Depending on the simulation system and training parameters, the speedup can exceed 10 times on both CPU and GPU devices. At the same time, model compression can greatly reduce memory usage, by as much as 20 times under the same hardware conditions.

**Acceptable original model version**

The model compression method requires that the version of DeePMD-kit used to generate the original model is 1.3 or above. If one has a frozen 1.2 model, one can first use the conversion interface of DeePMD-kit v1.2.4 to obtain a 1.3-compatible model (e.g. `dp convert-to-1.3 -i frozen_1.2.pb -o frozen_1.3.pb`).