Support Ewald #137


Merged: 44 commits, Dec 20, 2019
Commits
5056f2c
add ewald, passed ener and force test
Aug 28, 2019
b29499a
Merge branch 'devel' into ewald
Oct 16, 2019
eb4ca51
add description for the printed data
Oct 16, 2019
c98dd91
implement op ewald_recp, tested force
Oct 17, 2019
d329268
fix bug in ewald, tested virial
Oct 18, 2019
88dcb12
Merge branch 'devel' into ewald
Oct 18, 2019
5c25712
Merge branch 'devel' into ewald
Oct 22, 2019
fdad3ca
add test for ewald_recp python interface. add test for data type sel
Oct 23, 2019
8b729fe
fix bug of wrong index in prod virial. more strict virial test
Oct 27, 2019
c23adee
implement data modifier, subtract long-range interaction from data
Oct 28, 2019
8d5d3e8
batch evaluate ewald and correction
Nov 5, 2019
9481af7
Merge branch 'devel' into ewald
Nov 6, 2019
0639458
results of data statistics (davg, dstd) are private members of descrip…
Nov 6, 2019
a23ec38
rename dstat to input_stat. Refactor the stat data generation
Nov 6, 2019
afd400f
mv energy shift computation to output data stat of fitting classes
Nov 6, 2019
fd9cff2
modify data only when the data is asked. save computational cost
Nov 7, 2019
4da6fae
modify batch data on load
Nov 7, 2019
835140f
infer global polar
Nov 8, 2019
1c9c74d
use average eig value as bias of polar fitting
Nov 8, 2019
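The "average eig value as bias" idea in the commit above can be sketched as follows: take the mean eigenvalue of the training polarizability tensors and use it, times the identity, as the starting bias of the polar fit. A minimal NumPy sketch, assuming symmetric 3×3 tensors; the function name is illustrative, not the PR's actual API:

```python
import numpy as np

def polar_bias(polars):
    # polars: (nframes, 3, 3) symmetric polarizability tensors.
    # Use the average eigenvalue, as an isotropic tensor, for the fitting bias.
    avg_eig = np.linalg.eigvalsh(polars).mean()
    return avg_eig * np.eye(3)
```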
3668f19
fix bug. return to old convention of passing merged data to input stat, …
Nov 8, 2019
122007b
add missing test file
Nov 8, 2019
83509c4
add missing test data
Nov 8, 2019
acb29fa
multi-threading for ewald
Nov 8, 2019
361e7b1
Merge branch 'devel' into ewald
Nov 9, 2019
d7134fd
do not apply polar fitting bias, which has no effect
Nov 10, 2019
11e0fd2
provide stop learning rate and set the decay rate automatically
Nov 10, 2019
0257797
ewald reciprocal: build computational graph on initialization
Nov 11, 2019
4ebc65e
revise test_ewald according to new interface
Nov 11, 2019
b5be8fe
expand three-fold loop to enable better multi-threaded parallelization
Nov 11, 2019
151d07a
add c++ interface for DeepTensor, not tested
Nov 16, 2019
d34c055
fix bugs in DeepTensor. mv common methods of DeepPot and DeepTensor i…
Nov 20, 2019
72e9933
fix bug in data modifier: order of fcorr should be mapped back
Nov 21, 2019
44bb07a
first working version of force modification
Nov 22, 2019
06ea446
fix bug of over-boundary back map
Nov 22, 2019
98eef65
handle the case of nloc_real == 0
Nov 25, 2019
68f740d
fix bug of excluding empty from nlist
Nov 25, 2019
c655b6e
pass name_scope at the interface of DeepTensor and DataModifier
Nov 26, 2019
2a47c3e
fix bug of make_default_mesh
Dec 2, 2019
33f9e13
simplify atype for eval_modify. sort in eval_modify
Dec 2, 2019
76a4f85
fix bug of DataModifier: increase start index of descriptor slicing. …
Dec 3, 2019
9ec7cf4
fix bug of binary search. add source files pppm_dplr, fix_dplr
Dec 6, 2019
cd7bc37
remove tmp files after test
Dec 6, 2019
188c918
change the DataModification: freeze dipole part with name dipole_char…
Dec 8, 2019
bd879a0
merge with ewald
Dec 11, 2019
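The core of this PR is the `ewald_recp` op the commits above implement and test against energies, forces, and virials. For orientation, the standard reciprocal-space Ewald sum such an op evaluates can be sketched as follows — a minimal NumPy sketch assuming a cubic box of edge `box` and Gaussian splitting parameter `beta`; the function name and signature are illustrative, not the PR's actual interface:

```python
import itertools
import math

import numpy as np

def ewald_recp(coords, charges, box, beta=1.0, kmax=5):
    # Reciprocal-space Ewald energy for a cubic box of edge `box`:
    #   E = 1/(2V) * sum_{k != 0} (4*pi/k^2) * exp(-k^2 / (4*beta^2)) * |S(k)|^2
    # where S(k) = sum_i q_i * exp(i k . r_i) is the structure factor.
    volume = box ** 3
    energy = 0.0
    for n in itertools.product(range(-kmax, kmax + 1), repeat=3):
        if n == (0, 0, 0):
            continue
        k = 2.0 * math.pi * np.array(n, dtype=float) / box
        k2 = float(k @ k)
        s_k = np.sum(charges * np.exp(1j * coords @ k))  # structure factor S(k)
        energy += (4.0 * math.pi / k2) * math.exp(-k2 / (4.0 * beta ** 2)) * abs(s_k) ** 2
    return energy / (2.0 * volume)
```

Every term in the k-sum is nonnegative, and |S(k)|² is invariant under a rigid translation of all atoms — both are convenient sanity checks of the kind the energy/force/virial test commits describe.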
4 changes: 2 additions & 2 deletions examples/water/train/polar.json
@@ -31,9 +31,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.001,
"stop_lr": 3.51e-8,
"_comment": "that's all"
},

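For context on the `decay_rate` → `stop_lr` change in these diffs: with an exponential schedule, the decay rate can be derived from the start and stop learning rates once the total number of training steps is fixed — this is what the "set the decay rate automatically" commit refers to. A small sketch; the function name and the 1,000,000-step total are illustrative assumptions, while `start_lr`, `stop_lr`, and `decay_steps` come from the JSON above:

```python
def auto_decay_rate(start_lr, stop_lr, decay_steps, stop_steps):
    # Exponential schedule: lr(t) = start_lr * decay_rate ** (t / decay_steps).
    # Requiring lr(stop_steps) == stop_lr and solving for decay_rate gives:
    return (stop_lr / start_lr) ** (decay_steps / stop_steps)

# With the polar.json values and a (hypothetical) 1,000,000-step run:
rate = auto_decay_rate(0.001, 3.51e-8, 5000, 1_000_000)
```

For these inputs `rate` comes out at roughly 0.95, matching the `"decay_rate": 0.95` line the diff removes.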
8 changes: 4 additions & 4 deletions examples/water/train/polar_se_a.json
@@ -3,7 +3,7 @@
"_comment": " model parameters",
"model":{
"type_map": ["O", "H"],
"data_stat_nbatch": 1,
"data_stat_nbatch": 10,
"descriptor" :{
"type": "se_a",
"sel": [46, 92],
@@ -18,7 +18,7 @@
"fitting_net": {
"type": "polar",
"sel_type": [0],
"fit_diag": true,
"fit_diag": false,
"neuron": [100, 100, 100],
"resnet_dt": true,
"seed": 1,
@@ -29,9 +29,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.01,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.01,
"stop_lr": 3.51e-7,
"_comment": "that's all"
},

4 changes: 2 additions & 2 deletions examples/water/train/wannier.json
@@ -32,9 +32,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.001,
"stop_lr": 3.51e-8,
"_comment": "that's all"
},

5 changes: 3 additions & 2 deletions examples/water/train/water.json
@@ -3,6 +3,7 @@
"_comment": " model parameters",
"model":{
"type_map": ["O", "H"],
"data_stat_nbatch": 10,
"descriptor": {
"type": "loc_frame",
"sel_a": [16, 32],
@@ -28,9 +29,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.001,
"stop_lr": 3.51e-8,
"_comment": "that's all"
},

4 changes: 2 additions & 2 deletions examples/water/train/water_se_a.json
@@ -24,9 +24,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.001,
"stop_lr": 3.51e-8,
"_comment": "that's all"
},

4 changes: 2 additions & 2 deletions examples/water/train/water_se_ar.json
@@ -35,9 +35,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.005,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.005,
"stop_lr": 1.76e-7,
"_comment": "that's all"
},

5 changes: 3 additions & 2 deletions examples/water/train/water_se_r.json
@@ -23,9 +23,10 @@
},

"learning_rate" : {
"start_lr": 0.005,
"type": "exp",
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.005,
"stop_lr": 1.76e-7,
"_comment": " that's all"
},

4 changes: 2 additions & 2 deletions examples/water/train/water_srtab_example.json
@@ -32,9 +32,9 @@

"learning_rate" :{
"type": "exp",
"start_lr": 0.001,
"decay_steps": 5000,
"decay_rate": 0.95,
"start_lr": 0.001,
"stop_lr": 3.51e-8,
"_comment": "that's all"
},

50 changes: 50 additions & 0 deletions source/lib/include/DataModifier.h
@@ -0,0 +1,50 @@
#pragma once

#include "NNPInter.h"

class DataModifier
{
public:
DataModifier();
DataModifier(const string & model,
const int & gpu_rank = 0,
const string & name_scope = "");
~DataModifier () {};
void init (const string & model,
const int & gpu_rank = 0,
const string & name_scope = "");
void print_summary(const string &pre) const;
public:
void compute (vector<VALUETYPE> & dfcorr_,
vector<VALUETYPE> & dvcorr_,
const vector<VALUETYPE> & dcoord_,
const vector<int> & datype_,
const vector<VALUETYPE> & dbox,
const vector<pair<int,int>> & pairs,
const vector<VALUETYPE> & delef_,
const int nghost,
const LammpsNeighborList & lmp_list);
VALUETYPE cutoff () const {assert(inited); return rcut;};
int numb_types () const {assert(inited); return ntypes;};
vector<int> sel_types () const {assert(inited); return sel_type;};
private:
Session* session;
string name_scope, name_prefix;
int num_intra_nthreads, num_inter_nthreads;
GraphDef graph_def;
bool inited;
VALUETYPE rcut;
VALUETYPE cell_size;
int ntypes;
string model_type;
vector<int> sel_type;
template<class VT> VT get_scalar(const string & name) const;
template<class VT> void get_vector(vector<VT> & vec, const string & name) const;
void run_model (vector<VALUETYPE> & dforce,
vector<VALUETYPE> & dvirial,
Session * session,
const std::vector<std::pair<string, Tensor>> & input_tensors,
const NNPAtomMap<VALUETYPE> & nnpmap,
const int nghost);
};
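`compute` above returns the force correction `dfcorr_` in the caller's atom order, which is what the "order of fcorr should be mapped back" fix (commit 72e9933) concerns: internally atoms are sorted (e.g. by type), so per-atom results must be gathered back through the sort map before being returned. A language-neutral sketch in Python — the helper name and the map convention (`fwd_map[i]` gives the sorted position of original atom `i`) are illustrative assumptions:

```python
import numpy as np

def backmap_forces(fcorr_sorted, fwd_map):
    # fcorr_sorted: (natoms, 3) corrections computed on type-sorted atoms.
    # fwd_map[i] is the position of original atom i in the sorted arrays;
    # gather the sorted per-atom corrections back into the original order.
    return fcorr_sorted[np.asarray(fwd_map)]
```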

63 changes: 63 additions & 0 deletions source/lib/include/DeepTensor.h
@@ -0,0 +1,63 @@
#pragma once

#include "NNPInter.h"

class DeepTensor
{
public:
DeepTensor();
DeepTensor(const string & model,
const int & gpu_rank = 0,
const string &name_scope = "");
void init (const string & model,
const int & gpu_rank = 0,
const string &name_scope = "");
void print_summary(const string &pre) const;
public:
void compute (vector<VALUETYPE> & value,
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const int nghost = 0);
void compute (vector<VALUETYPE> & value,
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const int nghost,
const LammpsNeighborList & lmp_list);
VALUETYPE cutoff () const {assert(inited); return rcut;};
int numb_types () const {assert(inited); return ntypes;};
int output_dim () const {assert(inited); return odim;};
const vector<int> & sel_types () const {assert(inited); return sel_type;};
private:
Session* session;
string name_scope;
int num_intra_nthreads, num_inter_nthreads;
GraphDef graph_def;
bool inited;
VALUETYPE rcut;
VALUETYPE cell_size;
int ntypes;
string model_type;
int odim;
vector<int> sel_type;
template<class VT> VT get_scalar(const string & name) const;
template<class VT> void get_vector (vector<VT> & vec, const string & name) const;
void run_model (vector<VALUETYPE> & d_tensor_,
Session * session,
const std::vector<std::pair<string, Tensor>> & input_tensors,
const NNPAtomMap<VALUETYPE> & nnpmap,
const int nghost = 0);
void compute_inner (vector<VALUETYPE> & value,
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const int nghost = 0);
void compute_inner (vector<VALUETYPE> & value,
const vector<VALUETYPE> & coord,
const vector<int> & atype,
const vector<VALUETYPE> & box,
const int nghost,
const InternalNeighborList&lmp_list);
};
