Feature: add nearest neighbours analysis #53

Open — wants to merge 48 commits into base: master
d7a88c0
`HpCalculation`: add more exit codes
bastonero May 31, 2023
5b20e18
Workflows: add the `skip_relax_iterations` logic
bastonero May 31, 2023
9bf0ac5
`Parser`: fix typo in the q point regex matching
bastonero Jun 6, 2023
481a455
`HpCalculation`: add more exit codes
bastonero May 31, 2023
043fece
Workflows: change logic for relabeling +V case
bastonero May 31, 2023
9db7ced
Workflows: add the `skip_relax_iterations` logic
bastonero May 31, 2023
63e018d
Fix `break` wrong indent
bastonero Jun 1, 2023
cf50ce6
Feat: clean the workdirs at each iteration
bastonero Jul 5, 2023
be0363d
Merge branch 'feat/clean_iteration' of https://github.com/aiidateam/a…
bastonero Jul 12, 2023
e196fb3
Address some python warning on typing
bastonero Nov 14, 2023
3e5ba63
Merge branch 'master' of https://github.com/aiidateam/aiida-quantumes…
bastonero Nov 27, 2023
86df3dd
Merge branch 'new/skip_relax_iterations' of https://github.com/aiidat…
bastonero Nov 27, 2023
76f3eae
Merge branch 'new/exit_codes' of https://github.com/aiidateam/aiida-q…
bastonero Nov 27, 2023
800072b
`Feature`: add radial analysis during cycle
bastonero Nov 27, 2023
4e28e25
Fix overrides test
bastonero Nov 27, 2023
7aa81fd
Fix bug in regex for perturbed atoms
bastonero Dec 5, 2023
f9f949f
Fix bug in the hubbard outline
bastonero Dec 5, 2023
5bc0efc
Fix bug in the hubbard outline
bastonero Dec 6, 2023
beb60fb
Merge branch 'feat/voronoi' of https://github.com/aiidateam/aiida-qua…
bastonero Dec 6, 2023
3e10ae0
Merge branch 'master' of https://github.com/aiidateam/aiida-quantumes…
bastonero Dec 6, 2023
5aa1ebf
Fix convergence error handler to prevent exceeding niter_max of 500
Dec 6, 2023
1e1cd96
Fix exit status when convergence is not met
bastonero Dec 6, 2023
30edfdb
Merge branch 'fix/55' of https://github.com/aiidateam/aiida-quantumes…
bastonero Dec 6, 2023
14bc1fc
Fix exit code name
bastonero Dec 6, 2023
e687140
Merge branch 'fix/55' of https://github.com/aiidateam/aiida-quantumes…
bastonero Dec 6, 2023
6263392
Simplify `convergence_not_reached` handler
Dec 7, 2023
9aac7e8
Fix tests.
Dec 7, 2023
bcbd602
Merge branch '48-hpbaseworkchain-convergence-error-handler' of https:…
bastonero Dec 8, 2023
f15de60
Merge branch 'master' of https://github.com/aiidateam/aiida-quantumes…
bastonero Dec 22, 2023
7427431
Some fixes to protocol
bastonero Dec 22, 2023
a3cad55
HpParser: fix q-points/atoms output test case
bastonero Feb 13, 2024
2080fa5
Merge branch 'fix/parsing' of https://github.com/aiidateam/aiida-quan…
bastonero Feb 13, 2024
99a46e4
Merge branch 'master' of https://github.com/aiidateam/aiida-quantumes…
bastonero Feb 13, 2024
83bbbed
Merge branch 'master' of https://github.com/aiidateam/aiida-quantumes…
bastonero Apr 19, 2024
392a772
Fix logic compatibility with `main`
bastonero Apr 19, 2024
2c51276
Add exit code for relabelling failure
bastonero Apr 19, 2024
6ced0df
Merge branch 'fix/relabel/degenerate-spins' of https://github.com/aii…
bastonero Apr 19, 2024
3671a3c
Fix the convergence check in self-consistent workflow
bastonero Apr 22, 2024
ff76121
Merge branch 'fix/check-convergence' of https://github.com/aiidateam/…
bastonero Apr 22, 2024
2fe8e72
Fix the convergence check in self-consistent workflow
bastonero Apr 23, 2024
c65bc6d
Merge branch 'fix/check-convergence' of https://github.com/aiidateam/…
bastonero Apr 23, 2024
996ce64
Fix symbol matching pattern
bastonero May 7, 2024
729fc02
Change nearest-neighbour analysis and parser
bastonero Oct 2, 2024
5bb9e1b
Update dependencies
bastonero Jan 23, 2025
8291e35
Update python version and ci
bastonero Jan 23, 2025
250c6de
Fix pre-commit
bastonero Jan 23, 2025
2e5bbf8
Fix tests
bastonero Jan 23, 2025
2cb4a82
Update pylint deps
bastonero Jan 23, 2025
4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -23,7 +23,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
python-version: '3.9'

- name: Install python dependencies
run: pip install -e .[pre-commit,tests]
@@ -40,7 +40,7 @@

strategy:
matrix:
python-version: ['3.8', '3.9']
python-version: ['3.9']

services:
rabbitmq:
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -17,14 +17,14 @@

# Load the dummy profile even if we are running locally, this way the documentation will succeed even if the current
# default profile of the AiiDA installation does not use a Django backend.
from aiida.manage.configuration import load_documentation_profile
from aiida.manage.configuration import Profile, load_profile

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
import aiida_quantumespresso_hp

load_documentation_profile()
load_profile(Profile('docs', {'process_control': {}, 'storage': {}}))

# -- Project information -----------------------------------------------------

7 changes: 3 additions & 4 deletions pyproject.toml
@@ -18,16 +18,15 @@ classifiers = [
'Operating System :: POSIX :: Linux',
'Operating System :: MacOS :: MacOS X',
'Programming Language :: Python',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
]
keywords = ['aiida', 'workflows']
requires-python = '>=3.8'
dependencies = [
'aiida-core~=2.2',
'aiida-quantumespresso~=4.3',
'aiida-core>=2.3.0',
'aiida-quantumespresso>=4.8.0',
]

[project.urls]
@@ -49,7 +48,7 @@ docs = [
]
pre-commit = [
'pre-commit~=2.17',
'pylint~=2.12.2',
'pylint~=2.17.2',
'pylint-aiida~=0.1.1',
'toml'
]
@@ -64,7 +64,10 @@ def structure_relabel_kinds(
new_magnetization[kind_name] = old_magnetization[site['kind']]

site = sites[index]
relabeled.append_atom(position=site.position, symbols=symbol, name=kind_name)
try:
relabeled.append_atom(position=site.position, symbols=symbol, name=kind_name)
except ValueError as exc:
raise ValueError('cannot distinguish kinds with the given Hubbard input configuration') from exc

# Now add the non-Hubbard sites
for site in sites[len(relabeled.sites):]:
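The try/except added above maps a low-level `ValueError` from `append_atom` onto a clearer, Hubbard-specific message while preserving the original traceback via `from exc`. A minimal self-contained sketch of the same exception-chaining pattern (the helper and data layout below are illustrative, not the aiida API):

```python
def relabel_kinds(sites, new_names):
    """Assign new kind names to sites; reusing a name for a different species is ambiguous."""
    seen = {}
    relabeled = []
    for (symbol, position), name in zip(sites, new_names):
        try:
            if seen.get(name, symbol) != symbol:
                # same failure mode append_atom would report
                raise ValueError(f'kind name {name!r} already used for {seen[name]}')
            seen[name] = symbol
        except ValueError as exc:
            # re-raise with a domain-specific message, keeping the original as the cause
            raise ValueError('cannot distinguish kinds with the given Hubbard input configuration') from exc
        relabeled.append((symbol, position, name))
    return relabeled
```

The chained exception lets the workchain catch the domain-level `ValueError` and translate it into an exit code without losing the underlying cause.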
6 changes: 3 additions & 3 deletions src/aiida_quantumespresso_hp/calculations/hp.py
@@ -211,17 +211,17 @@ def filename_output_hubbard(cls): # pylint: disable=no-self-argument
return f'{cls.prefix}.Hubbard_parameters.dat'

@classproperty
def filename_input_hubbard_parameters(cls): # pylint: disable=no-self-argument,invalid-name, no-self-use
def filename_input_hubbard_parameters(cls): # pylint: disable=no-self-argument,invalid-name
"""Return the relative input filename for Hubbard parameters, for QuantumESPRESSO version below 7.1."""
return 'parameters.in'

@classproperty
def filename_output_hubbard_dat(cls): # pylint: disable=no-self-argument,invalid-name, no-self-use
def filename_output_hubbard_dat(cls): # pylint: disable=no-self-argument,invalid-name
"""Return the relative output filename for generalised Hubbard parameters, for QuantumESPRESSO v.7.2 onwards."""
return 'HUBBARD.dat'

@classproperty
def dirname_output(cls): # pylint: disable=no-self-argument, no-self-use
def dirname_output(cls): # pylint: disable=no-self-argument
"""Return the relative directory name that contains raw output data."""
return 'out'

33 changes: 27 additions & 6 deletions src/aiida_quantumespresso_hp/parsers/hp.py
@@ -5,7 +5,7 @@
from aiida import orm
from aiida.common import exceptions
from aiida.parsers import Parser
import numpy
import numpy as np

from aiida_quantumespresso_hp.calculations.hp import HpCalculation

@@ -114,8 +114,8 @@ def parse_stdout(self):
parsed_data, logs = parse_raw_output(stdout)
except Exception: # pylint: disable=broad-except
return self.exit_codes.ERROR_OUTPUT_STDOUT_PARSE
else:
self.out('parameters', orm.Dict(parsed_data))

self.out('parameters', orm.Dict(parsed_data))

# If the stdout was incomplete, most likely the job was interrupted before it could cleanly finish, so the
# output files are most likely corrupt and cannot be restarted from
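The hunk above drops the `else:` clause around `self.out`: because the `except` branch returns, any code after the try/except runs only on success, so the two forms are equivalent. A small stand-alone illustration of that control flow (names are made up):

```python
def parse(raw, error_code):
    try:
        data = int(raw)
    except ValueError:
        return error_code
    # reached only when no exception was raised -- same as the old `else:` block
    return {'parameters': data}
```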
@@ -202,17 +202,38 @@ def parse_hubbard_dat(self, folder_path):

:return: optional exit code in case of an error
"""
from aiida_quantumespresso.common.hubbard import Hubbard
from aiida_quantumespresso.utils.hubbard import HubbardUtils
filename = HpCalculation.filename_output_hubbard_dat

filepath = os.path.join(folder_path, filename)

hubbard_structure = self.node.inputs.hubbard_structure.clone()

intersites = None
if 'settings' in self.node.inputs:
if 'radial_analysis' in self.node.inputs.settings.get_dict():
kwargs = self.node.inputs.settings.dict.radial_analysis
intersites = HubbardUtils(hubbard_structure).get_intersites_list(**kwargs)

hubbard_structure.clear_hubbard_parameters()
hubbard_utils = HubbardUtils(hubbard_structure)
hubbard_utils.parse_hubbard_dat(filepath=filepath)

self.out('hubbard_structure', hubbard_utils.hubbard_structure)
if intersites is None:
self.out('hubbard_structure', hubbard_utils.hubbard_structure)
else:
hubbard_list = np.array(hubbard_utils.hubbard_structure.hubbard.to_list(), dtype='object')
parsed_intersites = hubbard_list[:, [0, 2, 5]].tolist()
selected_indices = []

for i, intersite in enumerate(parsed_intersites):
if intersite in intersites:
selected_indices.append(i)

hubbard = Hubbard.from_list(hubbard_list[selected_indices])
hubbard_structure.hubbard = hubbard
self.out('hubbard_structure', hubbard_structure)
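The new branch above keeps only the parsed Hubbard entries whose (atom index, neighbour index, translation) triple appears in the nearest-neighbour list, using the same column selection `[:, [0, 2, 5]]` as the PR. A sketch with a made-up row layout (the real rows come from `hubbard.to_list()`):

```python
import numpy as np

def filter_intersites(hubbard_list, intersites):
    """Keep only the rows whose columns 0, 2 and 5 match an allowed intersite triple."""
    hubbard_array = np.array(hubbard_list, dtype='object')
    parsed_intersites = hubbard_array[:, [0, 2, 5]].tolist()
    selected_indices = [i for i, triple in enumerate(parsed_intersites) if triple in intersites]
    return hubbard_array[selected_indices].tolist()
```

The object dtype keeps heterogeneous columns (indices, manifold strings, values, translation vectors) intact while still allowing numpy's column slicing.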

def get_hubbard_structure(self):
"""Set in output an ``HubbardStructureData`` with standard Hubbard U formulation."""
@@ -264,7 +285,7 @@ def parse_chi_content(self, handle):
for matrix_name in ('chi0', 'chi'):
matrix_block = blocks[matrix_name]
matrix_data = data[matrix_block[0]:matrix_block[1]]
matrix = numpy.array(self.parse_hubbard_matrix(matrix_data))
matrix = np.array(self.parse_hubbard_matrix(matrix_data))
result[matrix_name] = matrix

return result
@@ -377,4 +398,4 @@ def parse_hubbard_matrix(data):
if row:
matrix.append(row)

return numpy.array(matrix)
return np.array(matrix)
65 changes: 54 additions & 11 deletions src/aiida_quantumespresso_hp/workflows/hubbard.py
@@ -120,6 +120,8 @@ def define(cls, spec):
spec.input('skip_relax_iterations', valid_type=orm.Int, required=False, validator=validate_positive,
help=('The number of iterations for skipping the `relax` '
'step without performing check on parameters convergence.'))
spec.input('radial_analysis', valid_type=orm.Dict, required=False,
help='If specified, it performs a nearest-neighbour analysis and feeds the radius to hp.x')
spec.input('relax_frequency', valid_type=orm.Int, required=False, validator=validate_positive,
help='Integer value referring to the number of iterations to wait before performing the `relax` step.')
spec.expose_inputs(PwRelaxWorkChain, namespace='relax',
@@ -171,6 +173,9 @@ def define(cls, spec):

spec.exit_code(330, 'ERROR_FAILED_TO_DETERMINE_PSEUDO_POTENTIAL',
message='Failed to determine the correct pseudo potential after the structure changed its kind names.')
spec.exit_code(340, 'ERROR_RELABELLING_KINDS',
message='Failed to determine the kind names during the relabelling.')

spec.exit_code(401, 'ERROR_SUB_PROCESS_FAILED_RECON',
message='The reconnaissance PwBaseWorkChain sub process failed')
spec.exit_code(402, 'ERROR_SUB_PROCESS_FAILED_RELAX',
@@ -258,6 +263,7 @@ def get_builder_from_protocol(
builder.hubbard = hubbard
builder.tolerance_onsite = orm.Float(inputs['tolerance_onsite'])
builder.tolerance_intersite = orm.Float(inputs['tolerance_intersite'])
builder.radial_analysis = orm.Dict(inputs['radial_analysis'])
builder.max_iterations = orm.Int(inputs['max_iterations'])
builder.meta_convergence = orm.Bool(inputs['meta_convergence'])
builder.clean_workdir = orm.Bool(inputs['clean_workdir'])
@@ -428,9 +434,13 @@ def relabel_hubbard_structure(self, workchain) -> None:
if not is_intersite_hubbard(workchain.outputs.hubbard_structure.hubbard):
for site in workchain.outputs.hubbard.dict.sites:
if not site['type'] == site['new_type']:
result = structure_relabel_kinds(
self.ctx.current_hubbard_structure, workchain.outputs.hubbard, self.ctx.current_magnetic_moments
)
try:
result = structure_relabel_kinds(
self.ctx.current_hubbard_structure, workchain.outputs.hubbard,
self.ctx.current_magnetic_moments
)
except ValueError:
return self.exit_codes.ERROR_RELABELLING_KINDS
self.ctx.current_hubbard_structure = result['hubbard_structure']
if self.ctx.current_magnetic_moments is not None:
self.ctx.current_magnetic_moments = result['starting_magnetization']
@@ -552,12 +562,25 @@ def recon_scf(self):

bands = workchain.outputs.output_band
parameters = workchain.outputs.output_parameters.get_dict()
# number_electrons = parameters['number_of_electrons']
# is_insulator, _ = find_bandgap(bands, number_electrons=number_electrons)

fermi_energy = parameters['fermi_energy']
is_insulator, _ = find_bandgap(bands, fermi_energy=fermi_energy)
number_electrons = parameters['number_of_electrons']

# Due to uncertainty in the prediction of the Fermi energy, we try
# both options of this function. If either of the two gives an insulating
# state as a result, we then set fixed occupations, as it is likely that
# hp.x would crash otherwise.
is_insulator_1, _ = find_bandgap(bands, fermi_energy=fermi_energy)

if is_insulator:
# For some materials, e.g. those with anti-ferromagnetic ordering, the
# following function may crash, possibly due to the format of the
# BandsData; to double check whether this guard is actually needed.
try:
is_insulator_2, _ = find_bandgap(bands, number_electrons=number_electrons)
except: # pylint: disable=bare-except
is_insulator_2 = False

if is_insulator_1 or is_insulator_2:
self.report('after relaxation, system is determined to be an insulator')
self.ctx.is_insulator = True
else:
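The rewritten block above tries two independent insulator checks and guards the second, which may raise for some band structures; the system counts as insulating if either heuristic succeeds. The same control flow in isolation (plain callables stand in for `find_bandgap`):

```python
def detect_insulator(check_by_fermi_energy, check_by_electron_count):
    """Return True if either heuristic reports an insulator; the second may raise."""
    is_insulator_1 = check_by_fermi_energy()
    try:
        is_insulator_2 = check_by_electron_count()
    except Exception:  # the workchain uses a bare except around find_bandgap
        is_insulator_2 = False
    return is_insulator_1 or is_insulator_2
```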
@@ -569,6 +592,23 @@ def run_hp(self):
workchain = self.ctx.workchains_scf[-1]

inputs = AttributeDict(self.exposed_inputs(HpWorkChain, namespace='hubbard'))

if 'radial_analysis' in self.inputs:
kwargs = self.inputs.radial_analysis.get_dict()
hubbard_utils = HubbardUtils(self.ctx.current_hubbard_structure)
num_neigh = hubbard_utils.get_max_number_of_neighbours(**kwargs)

parameters = inputs.hp.parameters.get_dict()
parameters['INPUTHP']['num_neigh'] = num_neigh

settings = {'radial_analysis': self.inputs.radial_analysis.get_dict()}
if 'settings' in inputs.hp:
settings = inputs.hp.settings.get_dict()
settings['radial_analysis'] = self.inputs.radial_analysis.get_dict()

inputs.hp.parameters = orm.Dict(parameters)
inputs.hp.settings = orm.Dict(settings)

inputs.clean_workdir = self.inputs.clean_workdir
inputs.hp.parent_scf = workchain.outputs.remote_folder
inputs.hp.hubbard_structure = self.ctx.current_hubbard_structure
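The radial-analysis branch above injects `num_neigh` into the hp parameters and records the analysis inputs in the settings, preserving anything already there. The merge, sketched with plain dictionaries (the real code round-trips through `orm.Dict`, and the key names below follow the PR):

```python
def inject_radial_analysis(parameters, settings, radial_analysis, num_neigh):
    """Return updated (parameters, settings) without mutating the inputs."""
    parameters = {namelist: dict(values) for namelist, values in parameters.items()}
    parameters.setdefault('INPUTHP', {})['num_neigh'] = num_neigh
    settings = dict(settings) if settings is not None else {}
    settings['radial_analysis'] = dict(radial_analysis)
    return parameters, settings
```

Copying the nested namelist dict keeps the original inputs untouched, mirroring how aiida nodes are immutable once stored.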
@@ -577,7 +617,7 @@
running = self.submit(HpWorkChain, **inputs)

self.report(f'launching HpWorkChain<{running.pk}> iteration #{self.ctx.iteration}')
return ToContext(workchains_hp=append_(running))
self.to_context(**{'workchains_hp': append_(running)})

def inspect_hp(self):
"""Analyze the last completed HpWorkChain.
@@ -619,6 +659,9 @@ def check_convergence(self):
self.ctx.current_hubbard_structure = workchain.outputs.hubbard_structure
self.relabel_hubbard_structure(workchain)

# if not self.should_check_convergence():
# return

if not len(ref_params) == len(new_params):
self.report('The new and old Hubbard parameters have different lengths. Assuming to be at the first cycle.')
return
@@ -634,7 +677,7 @@
new = np.array(new_onsites, dtype='object')
diff = np.abs(old[:, 4] - new[:, 4])

if (diff > self.inputs.tolerance_onsite).all():
if (diff > self.inputs.tolerance_onsite).any():
check_onsites = False
self.report(f'Hubbard onsites parameters are not converged. Max difference is {diff.max()}.')

@@ -644,7 +687,7 @@
new = np.array(new_intersites, dtype='object')
diff = np.abs(old[:, 4] - new[:, 4])

if (diff > self.inputs.tolerance_intersite).all():
if (diff > self.inputs.tolerance_intersite).any():
check_onsites = False
self.report(f'Hubbard intersites parameters are not converged. Max difference is {diff.max()}.')
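The switch from `.all()` to `.any()` in the two hunks above is the substantive fix: parameters are unconverged as soon as one difference exceeds the tolerance, not only when all of them do. The corrected test in isolation (numpy, as in the workchain; function name is illustrative):

```python
import numpy as np

def parameters_converged(old_values, new_values, tolerance):
    diff = np.abs(np.asarray(old_values, dtype=float) - np.asarray(new_values, dtype=float))
    # unconverged if ANY parameter moved by more than the tolerance
    return not bool((diff > tolerance).any())
```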

@@ -659,7 +702,7 @@ def run_results(self):
if self.ctx.is_converged:
self.report(f'Hubbard parameters self-consistently converged in {self.ctx.iteration} iterations')
else:
self.report(f'Hubbard parameters did not converge at the last iteration #{self.ctx.iteration}.')
self.report(f'Hubbard parameters did not converge at the last iteration #{self.ctx.iteration}.')
return self.exit_codes.ERROR_CONVERGENCE_NOT_REACHED.format(iteration=self.ctx.iteration)

def should_clean_workdir(self):
Expand Down
8 changes: 8 additions & 0 deletions src/aiida_quantumespresso_hp/workflows/protocols/hubbard.yaml
@@ -4,6 +4,14 @@ default_inputs:
meta_convergence: True
tolerance_onsite: 0.1
tolerance_intersite: 0.01
radial_analysis:
nn_finder: 'crystal'
nn_inputs:
distance_cutoffs: null # in Angstrom
x_diff_weight: 0
porous_adjustment: False
radius_max: 10.0 # in Angstrom
thr: 0.01 # in Angstrom
scf:
kpoints_distance: 0.4
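The `radial_analysis` defaults above can be overridden at submission time; a plain-dict sketch of a shallow merge that keeps the nested `nn_inputs` defaults (the dictionary mirrors the YAML; this is not the aiida-quantumespresso override machinery):

```python
DEFAULT_RADIAL_ANALYSIS = {
    'nn_finder': 'crystal',
    'nn_inputs': {'distance_cutoffs': None, 'x_diff_weight': 0, 'porous_adjustment': False},
    'radius_max': 10.0,  # in Angstrom
    'thr': 0.01,  # in Angstrom
}

def radial_analysis_inputs(overrides=None):
    """Merge user overrides on top of the protocol defaults."""
    overrides = overrides or {}
    merged = {**DEFAULT_RADIAL_ANALYSIS, **overrides}
    # re-merge the nested mapping so partial nn_inputs overrides keep the other keys
    merged['nn_inputs'] = {**DEFAULT_RADIAL_ANALYSIS['nn_inputs'], **overrides.get('nn_inputs', {})}
    return merged
```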

10 changes: 8 additions & 2 deletions tests/conftest.py
@@ -58,7 +58,7 @@ def _fixture_code(entry_point_name):

try:
return load_code(label=label)
except exceptions.NotExistent:
except (exceptions.NotExistent, exceptions.MultipleObjectsError):
return InstalledCode(
label=label,
computer=fixture_localhost,
@@ -377,7 +377,7 @@ def _generate_structure(structure_id=None):
def generate_hubbard_structure(generate_structure):
"""Return a `HubbardStructureData` representing bulk silicon."""

def _generate_hubbard_structure(only_u=False, u_value=1e-5, v_value=1e-5):
def _generate_hubbard_structure(only_u=False, u_value=1e-5, v_value=1e-5, u_o_value=1e-5):
"""Return a `StructureData` representing bulk silicon."""
from aiida_quantumespresso.data.hubbard_structure import HubbardStructureData

@@ -386,9 +386,12 @@ def _generate_hubbard_structure(only_u=False, u_value=1e-5, v_value=1e-5):

if only_u:
hubbard_structure.initialize_onsites_hubbard('Co', '3d', u_value)
hubbard_structure.initialize_onsites_hubbard('O', '2p', u_o_value)
else:
hubbard_structure.initialize_onsites_hubbard('Co', '3d', u_value)
hubbard_structure.initialize_onsites_hubbard('O', '2p', u_o_value)
hubbard_structure.initialize_intersites_hubbard('Co', '3d', 'O', '2p', v_value)
hubbard_structure.initialize_intersites_hubbard('O', '2p', 'Co', '3d', u_o_value)

return hubbard_structure

@@ -494,6 +497,8 @@ def generate_inputs_hubbard(generate_inputs_pw, generate_inputs_hp, generate_hub

def _generate_inputs_hubbard(hubbard_structure=None):
"""Generate default inputs for a `SelfConsistentHubbardWorkChain."""
from aiida.orm import Bool

hubbard_structure = hubbard_structure or generate_hubbard_structure()
inputs_pw = generate_inputs_pw(structure=hubbard_structure)
inputs_relax = generate_inputs_pw(structure=hubbard_structure)
@@ -508,6 +513,7 @@ def _generate_inputs_hubbard(hubbard_structure=None):
inputs_hp.pop('parent_scf')

inputs = {
'meta_convergence': Bool(True),
'hubbard_structure': hubbard_structure,
'relax': {
'base': {