
Feature/max mags #72

Open

wants to merge 39 commits into base: main
Commits (39)
1da9b3b
add sub_solution support; update run_hazard ; update diagnostics;
chrisbc Dec 22, 2021
77b52b9
update branch in Dockerfile
chrisbc Dec 22, 2021
52eb443
fix solvis install prereqs;
chrisbc Dec 22, 2021
2062379
new Java API for regional min mags
chrisbc Dec 22, 2021
3eaa5e9
remove solvis from Docker for now - deps are big/slow to install
chrisbc Dec 22, 2021
b406f4f
setup runzi args for setScalingRelationship;
chrisbc Dec 22, 2021
3fc183e
removed MagRate curves
chrisbc Dec 23, 2021
abcbb67
added crustal support for initial solution
chrisbc Dec 23, 2021
8d4caf8
add init sol & scaling_c_vals to subduction;
chrisbc Dec 23, 2021
338664b
new branch for the Dockerfile
chrisbc Dec 23, 2021
74d9664
add support for config_version
chrisbc Jan 16, 2022
2bd86a1
new logic tree branch permutation generator
chrisbc Jan 16, 2022
514061b
fix up a couple of inversion runner problems
chrisbc Jan 16, 2022
4a6a2e0
update docs/dockerfile
chrisbc Jan 16, 2022
d310fdd
update doc
chrisbc Jan 21, 2022
bafcdc5
add cooling schedule and new permutation gen fn;
chrisbc Jan 21, 2022
7aaf2b4
removed short-circuit;
chrisbc Jan 21, 2022
9ec1491
fix composite args / tag setup
chrisbc Jan 26, 2022
8d3dc88
composite args must be complete
chrisbc Jan 26, 2022
0c765cf
make JVM_HEAP_MAX on AWS 2GB smaller
chrisbc Jan 26, 2022
b8b87e4
hazard task run in AWS batch
chrisbc Jan 31, 2022
ea1d2e3
remove matplot import
chrisbc Jan 31, 2022
7ae3bab
use the new get_factory helper
chrisbc Feb 1, 2022
9ad90c7
coulumb rupture sets and ruptset diags
chrisbc Feb 2, 2022
af61f78
set hazard task gateway port
chrisbc Feb 7, 2022
a8aad7c
latest configs
chrisbc Feb 14, 2022
43bc36e
API change for inversion_runner.setSlipRateUncertaintyConstraint
chrisbc Feb 16, 2022
a2ae8b3
add script to obtain hazard from API;
chrisbc Feb 18, 2022
bdeb213
add all the old configs
chrisbc Feb 18, 2022
4178004
add regional min mag to config and solution builder
Feb 18, 2022
39dbbe7
added notes on running AWS cli v2
Feb 19, 2022
6116dfd
add version 2.3
chrisdicaprio Feb 19, 2022
4fc277c
fixed missing :
chrisdicaprio Feb 19, 2022
f1c1284
added mfd table V2 to output
chrisdicaprio Feb 21, 2022
6bf6276
fixed iter though mfd tables
chrisdicaprio Feb 21, 2022
d58c09d
added support for max_mag
chrisdicaprio Feb 22, 2022
a0a8b45
bundled all min and max mag args into dict
chrisdicaprio Feb 22, 2022
8242bea
added max mag type support
chrisdicaprio Feb 22, 2022
ec8ad45
updated Docker instructions for AWS cli v2
chrisdicaprio Feb 23, 2022
48 changes: 40 additions & 8 deletions doc/process/AWS_docker_containers_setup.md
@@ -45,28 +45,39 @@ $ aws batch submit-job --cli-input-json "$(<task-specs/job-submit-002.json)"


### Build new container with no tag, forcing git pull etc
Make sure the Dockerfile has the correct runzi branch.


```
export FATJAR_TAG=95-modular-hazard
#EG
export FATJAR_TAG=165-filter-rupset-alpha2
docker build . --build-arg FATJAR_TAG=${FATJAR_TAG} --no-cache
```

### Tag new docker image

```
export RUNZI_GITREF=7ff3e1e
export NZOPENSHA_GITREF=${FATJAR_TAG}
export IMAGE_ID=e31137aa8e4e #from docker build
export CONTAINER_TAG=runzi-${RUNZI_GITREF}_nz_opensha-${NZOPENSHA_GITREF}
export RUNZI_GITREF=8242bea
export IMAGE_ID=b19b436212f2 #from docker build
export CONTAINER_TAG=runzi-${RUNZI_GITREF}_nz_opensha-${FATJAR_TAG}

docker tag ${IMAGE_ID} 461564345538.dkr.ecr.us-east-1.amazonaws.com/nzshm22/runzi-opensha:${CONTAINER_TAG}
```

### get credential, push image into AWS ECR

```
$(aws ecr get-login --no-include-email --region us-east-1)
docker push 461564345538.dkr.ecr.us-east-1.amazonaws.com/nzshm22/runzi-opensha:${CONTAINER_TAG}

```

### For AWS cli v2
```
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 461564345538.dkr.ecr.us-east-1.amazonaws.com
docker push 461564345538.dkr.ecr.us-east-1.amazonaws.com/nzshm22/runzi-opensha:${CONTAINER_TAG}

```

### Update AWS Job Definition with ${CONTAINER_TAG}
@@ -79,14 +90,27 @@ This assumes the command is being run from the folder containing `Dockerfile`
```
# set correct environment
set_tosh_test_env
```

### Local cli testing

```
wget https://nzshm-opensha-public-jars.s3.ap-southeast-2.amazonaws.com/nzshm-opensha-all-${FATJAR_TAG}.jar -P $(pwd)/nzshm-opensha/build/libs
export NZSHM22_FATJAR=$(pwd)/nzshm-opensha/build/libs/nzshm-opensha-all-${FATJAR_TAG}.jar
NZSHM22_SCRIPT_CLUSTER_MODE=LOCAL python3 ../../runzi/cli/cli.py
```

### AWS or Dockerised run

Run the docker container...
- use LOCAL to run on local docker host
- use AWS to run on AWS Batch

```
# -v $HOME/DEV/GNS/AWS_S3_DATA/WORKING:/WORKING \
export NZSHM22_SCRIPT_CLUSTER_MODE=AWS
docker run -it --rm --env-file environ \
-v $HOME/DEV/GNS/AWS_S3_DATA/WORKING:/WORKING \
-v $HOME/.aws/credentials:/root/.aws/credentials:ro \
-v $(pwd)/../../runzi/cli/config/saved_configs:/app/nzshm-runzi/runzi/cli/config/saved_configs \
-e AWS_PROFILE=toshi_batch_devops \
@@ -95,6 +119,7 @@ docker run -it --rm --env-file environ \
-e NZSHM22_SCRIPT_CLUSTER_MODE \
-e NZSHM22_S3_REPORT_BUCKET \
-e NZSHM22_REPORT_LEVEL=FULL \
-e NZSHM22_TOSHI_API_KEY \
-e NZSHM22_FATJAR=/app/nzshm-opensha/build/libs/nzshm-opensha-all-${FATJAR_TAG}.jar \
461564345538.dkr.ecr.us-east-1.amazonaws.com/nzshm22/runzi-opensha:${CONTAINER_TAG}
```
@@ -116,4 +141,11 @@ docker run -it --rm --env-file environ \
-s /app/container_task.sh
```



Note: in the ECS environment, this passing in of credentials is done via the Job Definition's `jobRoleArn`.
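For illustration only, a minimal sketch of a Batch job definition that carries the role (the role ARN, resource sizes, and image tag here are hypothetical, not taken from this PR):

```python
import json

# Sketch of an AWS Batch job definition supplying credentials via
# containerProperties.jobRoleArn instead of mounting ~/.aws/credentials.
# Role ARN, vcpus/memory and image tag are hypothetical placeholders.
job_definition = {
    "jobDefinitionName": "runzi-opensha",
    "type": "container",
    "containerProperties": {
        "image": "461564345538.dkr.ecr.us-east-1.amazonaws.com/nzshm22/runzi-opensha:CONTAINER_TAG",
        "vcpus": 4,
        "memory": 16384,
        "jobRoleArn": "arn:aws:iam::461564345538:role/toshi-batch-devops",  # hypothetical role
    },
}

print(json.dumps(job_definition, indent=2))
```

The resulting JSON could be saved and registered with `aws batch register-job-definition --cli-input-json file://job-def.json`.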


## Running Hazard

There's no CLI support yet.
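Until CLI support lands, the `runzi/automation/run_export_hazard.py` script added in this PR can be run directly. Its core loop, which selects tables of type HAZARD_SITES and streams their rows to CSV while writing the header only once, can be sketched against stubbed data (the solution/table IDs and values below are made up):

```python
import csv
import io

# Stubbed stand-ins for the ToshiApi responses used by run_export_hazard.py.
solutions = {
    "SOL-1": {"tables": [{"table_type": "HAZARD_SITES", "table_id": "T-1"}]},
    "SOL-2": {"tables": [{"table_type": "MFD", "table_id": "T-9"},
                         {"table_type": "HAZARD_SITES", "table_id": "T-2"}]},
}
tables = {
    "T-1": {"column_headers": ["site", "imt", "value"],
            "rows": [["WLG", "PGA", 0.41]]},
    "T-2": {"column_headers": ["site", "imt", "value"],
            "rows": [["AKL", "PGA", 0.13]]},
}

buf = io.StringIO()
writer = None
for solution_id, solution in solutions.items():
    for table in solution["tables"]:
        if table["table_type"] != "HAZARD_SITES":
            continue  # skip non-hazard tables, as the script does
        hazard = tables[table["table_id"]]
        if writer is None:  # header row is written once, from the first table
            writer = csv.writer(buf)
            writer.writerow(hazard["column_headers"])
        writer.writerows(hazard["rows"])

print(buf.getvalue())
```

The real script does the same, but fetches `solution` and `hazard` from the Toshi API and writes to `hazard.csv`.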
12 changes: 10 additions & 2 deletions docker/runzi-opensha/Dockerfile
@@ -8,7 +8,6 @@ ARG FATJAR_TAG
WORKDIR /app/nzshm-opensha/build/libs
ADD https://nzshm-opensha-public-jars.s3.ap-southeast-2.amazonaws.com/nzshm-opensha-all-${FATJAR_TAG}.jar .


WORKDIR /WORKING
WORKDIR /app

@@ -18,7 +17,16 @@ RUN git clone https://github.com/GNS-Science/nzshm-runzi.git

WORKDIR /app/nzshm-runzi
RUN git fetch
RUN git checkout feature/49_create_openquake_source-model_from_solution
RUN git checkout feature/69_new_inversion_configs

# FOR SOLVIS ... Sub Solutions
#
#RUN apk add --no-cache proj-dev geos-dev gdal-dev \
# make automake gcc g++ python3-dev \
# proj-util
#
#ENV PROJ_DIR=/usr
#pip install git+https://github.com/GNS-Science/solvis#egg=solvis geopandas

RUN pip3 install -r requirements.txt
RUN pip3 install -e .
2 changes: 0 additions & 2 deletions runzi/automation/build_manual_index.py
@@ -156,11 +156,9 @@ def inv_template(rgt, upload_folder, tui, display_keys=None):
return f"""<li>
<a href="{tui}Find/{rid}">{rid}</a> result: {result}&nbsp;
<a href="{tui}InversionSolution/{fid}">Inversion Solution detail</a>&nbsp;
<a href="{upload_folder}/{fid}/mag_rates/MAG_rates_log_fixed_yscale.png">Mag Rate overall</a>&nbsp;
{solution_diags}
{named_faults_link}


<br />
<div class="display_info">{display_info}</div>
<br />
25 changes: 10 additions & 15 deletions runzi/automation/run_coulomb_rupture_sets.py
@@ -27,15 +27,11 @@
INITIAL_GATEWAY_PORT = 26533 #set this to ensure that concurrent scheduled tasks won't clash

#If using API give this task a descriptive setting...
TASK_TITLE = "Build Coulomb CFM 0.9A D90 ruptsets for new experiments"
TASK_TITLE = "Build Coulomb full CFM 0.9C D90 with corrected rake orientation"

TASK_DESCRIPTION = """
- saved using simplified (non-descriptive) file naming.
- saved in non-modular, U3-style archive format.
- similar setup to RmlsZTozMDMuMEJCOVVY, but with the slimmed-down 9A Fault Model.
"""


def build_tasks(general_task_id, args):
"""
build the shell scripts 1 per task, based on all the inputs
@@ -49,20 +45,18 @@ def build_tasks(general_task_id, args):
jre_path=OPENSHA_JRE, app_jar_path=FATJAR,
task_config_path=WORK_PATH, jvm_heap_max=JVM_HEAP_MAX, jvm_heap_start=JVM_HEAP_START)

# task_factory = OpenshaTaskFactory(OPENSHA_ROOT, WORK_PATH, scaling.coulomb_rupture_set_builder_task,
# initial_gateway_port=25733,
# jre_path=OPENSHA_JRE, app_jar_path=FATJAR,
# task_config_path=WORK_PATH, jvm_heap_max=JVM_HEAP_MAX, jvm_heap_start=JVM_HEAP_START)

for ( model, min_sub_sects_per_parent,
min_sub_sections, max_jump_distance,
adaptive_min_distance, thinning_factor,
max_sections )\
max_sections,
# use_inverted_rake
)\
in itertools.product(
args['models'], args['min_sub_sects_per_parents'],
args['min_sub_sections_list'], args['jump_limits'],
args['adaptive_min_distances'], args['thinning_factors'],
args['max_sections']
args['max_sections'],
# args['use_inverted_rakes']
):

task_count +=1
@@ -75,7 +69,8 @@
max_jump_distance=max_jump_distance,
adaptive_min_distance=adaptive_min_distance,
thinning_factor=thinning_factor,
scaling_relationship='TMG_CRU_2017', #'SHAW_2009_MOD' TODO this is currently not a settable parameter!
scaling_relationship='SIMPLE_CRUSTAL', #TMG_CRU_2017, 'SHAW_2009_MOD' default
# use_inverted_rake=use_inverted_rake
)


@@ -121,13 +116,14 @@ def build_tasks(general_task_id, args):

args = dict(
##Test parameters
models = ["CFM_0_9A_SANSTVZ_D90"], #, "CFM_0_9_ALL_D90","CFM_0_9_SANSTVZ_2010"]
models = ["CFM_0_9C_SANSTVZ_D90"], #, "CFM_0_9_ALL_D90","CFM_0_9_SANSTVZ_2010"]
jump_limits = [15], #default is 15
adaptive_min_distances = [6,], #9] default is 6
thinning_factors = [0,], #5, 0.1, 0.2, 0.3] #, 0.05, 0.1, 0.2]
min_sub_sects_per_parents = [2], #3,4,5]
min_sub_sections_list = [2],
max_sections=[MAX_SECTIONS],
# use_inverted_rakes=[True]
)

args_list = []
@@ -151,7 +147,6 @@

print("GENERAL_TASK_ID:", GENERAL_TASK_ID)


pool = Pool(WORKER_POOL_SIZE)

scripts = []
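The `args` dict above drives `build_tasks`, which expands every parameter list into a cross product of task configurations via `itertools.product`. A minimal sketch of that expansion, with made-up parameter values:

```python
import itertools

# Hypothetical, cut-down version of the args dict used by build_tasks.
args = {
    "models": ["CFM_0_9C_SANSTVZ_D90"],
    "jump_limits": [15],
    "thinning_factors": [0.0, 0.1],
    "max_sections": [2000, 3000],
}

# One task per combination: 1 * 1 * 2 * 2 = 4 tasks.
tasks = [
    dict(model=model, max_jump_distance=jump,
         thinning_factor=thin, max_sections=max_sec)
    for model, jump, thin, max_sec in itertools.product(
        args["models"], args["jump_limits"],
        args["thinning_factors"], args["max_sections"])
]
print(len(tasks))  # 4
```

Adding one more value to any list multiplies the task count, which is why the real script pairs this with a `Pool(WORKER_POOL_SIZE)` to run the generated scripts concurrently.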
40 changes: 40 additions & 0 deletions runzi/automation/run_export_hazard.py
@@ -0,0 +1,40 @@
import csv
import datetime as dt
from scaling.toshi_api import ToshiApi#, CreateGeneralTaskArgs
from scaling.local_config import (API_KEY, API_URL, S3_URL)

if __name__ == "__main__":

t0 = dt.datetime.utcnow()

headers={"x-api-key":API_KEY}
toshi_api = ToshiApi(API_URL, S3_URL, None, with_schema_validation=True, headers=headers)

ids = ["SW52ZXJzaW9uU29sdXRpb246MjI4NjUuMFpXa3da",
"SW52ZXJzaW9uU29sdXRpb246MjI4NjguMG1hUlNN",
"SW52ZXJzaW9uU29sdXRpb246MjI4ODAuME1NdlVa",
"SW52ZXJzaW9uU29sdXRpb246MjI4ODYuMFhCY1J4",
"SW52ZXJzaW9uU29sdXRpb246MjI4NzUuMFZCWEJq",
"SW52ZXJzaW9uU29sdXRpb246MjI4OTMuMGJURnk4",
"SW52ZXJzaW9uU29sdXRpb246MjI4ODcuMGU2b1JM",
"SW52ZXJzaW9uU29sdXRpb246MjI5MDIuMGR5UGV2"]

csvfile = open('hazard.csv', 'w', newline='')
writer = None

for solution_id in ids:
solution = toshi_api.inversion_solution.get_solution(solution_id)
print(f"process solution: {solution_id}")
for table in solution.get('tables'):
if table.get('table_type') == "HAZARD_SITES":
table_id = table.get('table_id')

hazard = toshi_api.table.get_table(table_id)
if not writer:
writer = csv.writer(csvfile)
writer.writerow(hazard.get("column_headers"))

print(f"writing hazard for table {table_id}")
writer.writerows( hazard.get('rows'))

print("Done! in %s secs" % (dt.datetime.utcnow() - t0).total_seconds())