
Route choice #492

Merged
merged 46 commits, Feb 6, 2024
3a3488f
Removed `reached_first` from A*
Jake-Moss Jan 9, 2024
c32e91a
Add preliminary route choice
Jake-Moss Jan 9, 2024
7bf8b4e
Return noexcepts
Jake-Moss Jan 9, 2024
b80b6ea
Working RouteChoice generation
Jake-Moss Jan 11, 2024
3e37811
Removed unused variables and junk
Jake-Moss Jan 11, 2024
1a864c5
Notes
Jake-Moss Jan 11, 2024
291bdc0
fixup! Removed unused variables and junk
Jake-Moss Jan 11, 2024
93526c0
fixup! fixup! Removed unused variables and junk
Jake-Moss Jan 11, 2024
1c449df
Rename files
Jake-Moss Jan 11, 2024
08de182
Minor fixes
Jake-Moss Jan 12, 2024
a8b0c54
Use compressed graph
Jake-Moss Jan 12, 2024
a08ecc0
Avoid unnecessary work when we may fill the route set this iteration
Jake-Moss Jan 12, 2024
4f46026
Fix pointer issues, move custom imports
Jake-Moss Jan 15, 2024
c74179b
Spelling
Jake-Moss Jan 16, 2024
62b058d
Prevent memory leak when destination is unreachable
Jake-Moss Jan 16, 2024
5ddf8f4
Remove print out
Jake-Moss Jan 16, 2024
5e9aa21
Prevent infinite loop when depth is unlimited
Jake-Moss Jan 16, 2024
8d5cdc6
Prevent out-of-bounds access and excessive memory consumption (2 GB/s) from A*
Jake-Moss Jan 16, 2024
614b77b
Scratch testing on Arkansas
Jake-Moss Jan 16, 2024
e14809c
Remove working_set, PO1 makes this redundant, reduce memory usage
Jake-Moss Jan 17, 2024
129360c
Remove prints
Jake-Moss Jan 17, 2024
dc95778
Add parallelised batched method for running a list of od pairs
Jake-Moss Jan 17, 2024
82204ba
Remove dead code and fix A* test now that skimming is disabled
Jake-Moss Jan 18, 2024
1313f3c
Move `RouteChoice.run` to be wrapper around `RouteChoice.batched`
Jake-Moss Jan 18, 2024
bacc76c
Fix infinite loop in the case that all possible paths are exhausted
Jake-Moss Jan 18, 2024
4d6536f
Add comprehensive testing
Jake-Moss Jan 18, 2024
e55a433
Warning clean up
Jake-Moss Jan 18, 2024
e7647b9
Update commentary and fix memory leak
Jake-Moss Jan 18, 2024
80bd7f7
Typos
Jake-Moss Jan 18, 2024
b45fc9f
Rename and remove imports
Jake-Moss Jan 18, 2024
4e442f1
Disable initialisation and bounds checking
Jake-Moss Jan 18, 2024
4b65f66
Add docs and support blocking centroid flow with tests
Jake-Moss Jan 18, 2024
eb8800f
new matrix API (#496)
pedrocamargo Jan 29, 2024
6dc3427
linting (#498)
pedrocamargo Jan 30, 2024
f0b5cbd
Merge branch 'develop' into route_choice
Jake-Moss Jan 31, 2024
427f223
Update GMNS urls (#499)
Jake-Moss Jan 31, 2024
9552907
Remove debug method and fix small typos
Jake-Moss Feb 1, 2024
a25841a
Changes OSM downloader to accept a polygon instead of a bounding box …
pedrocamargo Feb 2, 2024
f89c19f
Select link cwf bfw bug (#500)
Jake-Moss Feb 2, 2024
0648ce3
Dead end removal (#494)
Jake-Moss Feb 2, 2024
4057cd7
Fix typo, used wrong matrix (#503)
Jake-Moss Feb 2, 2024
5253500
Merge branch 'develop' of github.com:AequilibraE/aequilibrae into rou…
Feb 2, 2024
9089fea
typo
Feb 2, 2024
92edd72
Allow switching path finding method
Jake-Moss Feb 2, 2024
007e934
Add debug method for display in tests
Jake-Moss Feb 2, 2024
488006f
Linting
Jake-Moss Feb 2, 2024
2 changes: 1 addition & 1 deletion .flake8
@@ -3,4 +3,4 @@ max-line-length = 120
ignore = E203, E266, E501, W503, F403, F401, C901, W605
max-complexity = 20
select = B,C,E,F,W,T4,B9
exclude = .idea,.git,__pycache__,sphinx,.venv*,.venv,venv,docs/*,benchmarks/*
exclude = .idea,.git,__pycache__,sphinx,.venv*,.venv,venv,docs/*,benchmarks/*,*.pyx,*pxd,*.pxi
2 changes: 1 addition & 1 deletion .github/build_artifacts_qgis.yml
@@ -19,7 +19,7 @@ jobs:
architecture: ['x64']
os: [windows-latest, macos-latest]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set Python environment
uses: actions/setup-python@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/build_linux.yml
@@ -9,7 +9,7 @@ jobs:
HAS_SECRETS: ${{ secrets.AWS_SECRET_ACCESS_KEY != '' }}
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python 3.9
uses: actions/setup-python@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/build_mac.yml
@@ -16,7 +16,7 @@ jobs:
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set Python environment
uses: actions/setup-python@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/build_windows.yml
@@ -13,7 +13,7 @@ jobs:
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
architecture: ['x64']
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set Python environment
uses: actions/setup-python@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/debug_tests.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-20.04
container: python:3.9
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Install dependencies
run: |
python -m pip install --upgrade pip
2 changes: 1 addition & 1 deletion .github/workflows/documentation.yml
@@ -15,7 +15,7 @@ jobs:
env:
HAS_SECRETS: ${{ secrets.AWS_SECRET_ACCESS_KEY != '' }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python 3.9
uses: actions/setup-python@v4
with:
2 changes: 1 addition & 1 deletion .github/workflows/test_linux_with_coverage.yml
@@ -11,7 +11,7 @@ jobs:
matrix:
python-version: [3.10]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Install dependencies
run: |
sudo apt update
8 changes: 4 additions & 4 deletions .github/workflows/unit_tests.yml
@@ -6,7 +6,7 @@ jobs:
linting:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set Python environment
uses: actions/setup-python@v4
with:
@@ -20,8 +20,8 @@ jobs:
pip install -r requirements_additional.txt
pip install -r tests/requirements_tests.txt

- name: Lint with flake8
run: flake8
- name: Lint with ruff
run: ruff .

- name: Check code format with Black
run: black --check .
@@ -36,7 +36,7 @@

max-parallel: 20
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set Python environment
uses: actions/setup-python@v4
with:
1 change: 1 addition & 0 deletions aequilibrae/distribution/gravity_calibration.py
@@ -4,6 +4,7 @@
The procedures implemented in this code are some of those suggested in
Modelling Transport, 4th Edition, Ortuzar and Willumsen, Wiley 2011
"""

from time import perf_counter

import numpy as np
2 changes: 1 addition & 1 deletion aequilibrae/distribution/synthetic_gravity_model.py
@@ -57,7 +57,7 @@ def load(self, file_name):
else:
raise ValueError("Model has unknown parameters: " + str(key))
except ValueError as err:
raise ValueError("File provided is not a valid Synthetic Gravity Model - {}".format(err.__str__()))
raise ValueError("File provided is not a valid Synthetic Gravity Model - {}".format(err.__str__())) from err

def save(self, file_name):
R"""Saves model to disk in yaml format. Extension is \*.mod"""
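The `from err` added in this hunk preserves the triggering exception as `__cause__`, so the traceback shows the root failure instead of hiding it behind the re-raise. A small illustration of the chaining behaviour:

```python
# Exception chaining: `raise ... from err` records the triggering error
# on the new exception's __cause__ attribute.
def load_model(key):
    try:
        if key != "alpha":
            raise ValueError("Model has unknown parameters: " + key)
    except ValueError as err:
        raise ValueError(f"File provided is not a valid Synthetic Gravity Model - {err}") from err

try:
    load_model("beta")
except ValueError as exc:
    print(type(exc.__cause__).__name__)  # ValueError
```

Without `from err`, the intermediate handler would still set `__context__` implicitly, but `from` makes the causal link explicit and survives `raise ... from None` style suppression elsewhere.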
11 changes: 3 additions & 8 deletions aequilibrae/matrix/aequilibrae_data.py
@@ -99,23 +99,18 @@ def create_empty(

if not isinstance(self.data_types, list):
raise ValueError('Data types, "data_types", needs to be a list')
# The check below is not working properly with the QGIS importer
# else:
# for dt in self.data_types:
# if not isinstance(dt, type):
# raise ValueError('Data types need to be Python or Numpy data types')

for field in self.fields:
if not type(field) is str:
raise TypeError(field + " is not a string. You cannot use it as a field name")
if not isinstance(field, str):
raise TypeError(f"{field} is not a string. You cannot use it as a field name")
if not field.isidentifier():
raise Exception(field + " is a not a valid identifier name. You cannot use it as a field name")
if field in object.__dict__:
raise Exception(field + " is a reserved name. You cannot use it as a field name")

self.num_fields = len(self.fields)

dtype = [("index", self.aeq_index_type)] + [(f, dt) for f, dt in zip(self.fields, self.data_types)]
dtype = [("index", self.aeq_index_type)] + list(zip(self.fields, self.data_types))

# the file
if self.memory_mode:
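The dtype simplification in this hunk relies on `list(zip(fields, data_types))` producing exactly the same `(name, dtype)` tuples as the comprehension it replaces. Using illustrative field names (not taken from the library):

```python
# The comprehension and the plain zip are equivalent ways to pair
# field names with their dtypes for a structured-array definition.
fields = ["flow", "capacity"]
data_types = ["float64", "int64"]

comprehension = [(f, dt) for f, dt in zip(fields, data_types)]
simplified = list(zip(fields, data_types))
assert comprehension == simplified

dtype = [("index", "int64")] + simplified
print(dtype)  # [('index', 'int64'), ('flow', 'float64'), ('capacity', 'int64')]
```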
60 changes: 45 additions & 15 deletions aequilibrae/matrix/aequilibrae_matrix.py
@@ -4,7 +4,9 @@
import tempfile
import uuid
import warnings
from copy import copy
from functools import reduce
from pathlib import Path
from typing import List

import numpy as np
@@ -90,14 +92,18 @@ def __init__(self):
self.omx_file = None # type: omx.File
self.__version__ = VERSION # Writes file version

def save(self, names=()) -> None:
def save(self, names=(), file_name=None) -> None:
"""Saves matrix data back to file.

If working with AEM file, it flushes data to disk. If working with OMX, requires new names.

:Arguments:
**names** (:obj:`tuple(str)`, `Optional`): New names for the matrices. Required if working with OMX files
"""
if file_name is not None:
cores = names if len(names) else self.names
self.__save_as(file_name, cores)
return

if not self.__omx:
self.__flush(self.matrices)
Expand All @@ -122,6 +128,37 @@ def save(self, names=()) -> None:
self.names = self.omx_file.list_matrices()
self.computational_view(names)

def __save_as(self, file_name: str, cores: List[str]):
if Path(file_name).suffix.lower() == ".aem":
mat = AequilibraeMatrix()
args = {
"zones": self.zones,
"matrix_names": cores,
"index_names": self.index_names,
"memory_only": False,
"file_name": file_name,
}
mat.create_empty(**args)
mat.indices[:, :] = self.indices[:, :]
for core in cores:
mat.matrix[core][:, :] = self.matrix[core][:, :]
mat.name = self.name
mat.description = self.description
mat.close()
del mat

elif Path(file_name).suffix.lower() == ".omx":
omx_mat = omx.open_file(file_name, "w")
for core in cores:
omx_mat[core] = self.matrix[core]

for index in self.index_names:
omx_mat.create_mapping(index, self.indices[index])

omx_mat.attrs.name = self.name
omx_mat.attrs.description = self.description
omx_mat.close()

def create_empty(
self,
file_name: str = None,
@@ -212,16 +249,13 @@ def create_empty(
if mat_name in object.__dict__:
raise ValueError(mat_name + " is a reserved name")
if len(mat_name) > CORE_NAME_MAX_LENGTH:
raise ValueError(
"Matrix names need to be be shorter "
"than {}: {}".format(CORE_NAME_MAX_LENGTH, mat_name)
)
raise ValueError(f"Matrix names need to be shorter than {CORE_NAME_MAX_LENGTH}: {mat_name}")
else:
raise ValueError("Matrix core names need to be strings: " + str(mat_name))
else:
raise Exception("Matrix names need to be provided as a list")

self.names = [x for x in matrix_names]
self.names = copy(matrix_names)
self.cores = len(self.names)
if self.zones is None:
return
@@ -344,8 +378,8 @@ def robust_name(input_name: str, max_length: int, forbiden_names: List[str]) ->
)
idx_names = functools.reduce(lambda acc, n: acc + [robust_name(n, INDEX_NAME_MAX_LENGTH, acc)], do_idx, [])
else:
core_names = [x for x in do_cores]
idx_names = [x for x in do_idx]
core_names = list(do_cores)
idx_names = list(do_idx)

self.create_empty(
file_name=file_path,
@@ -391,7 +425,7 @@ def create_from_trip_list(self, path_to_file: str, from_column: str, to_column:
trip_df = pd.read_csv(path_to_file)

# Creating zone indices
zones_list = sorted(list(set(list(trip_df[from_column].unique()) + list(trip_df[to_column].unique()))))
zones_list = sorted(set(list(trip_df[from_column].unique()) + list(trip_df[to_column].unique())))
zones_df = pd.DataFrame({"zone": zones_list, "idx": list(np.arange(len(zones_list)))})

trip_df = trip_df.merge(
@@ -570,9 +604,7 @@ def __write__(self):
np.memmap(self.file_path, dtype="uint8", offset=17, mode="r+", shape=1)[0] = data_size

# matrix name
np.memmap(self.file_path, dtype="S" + str(MATRIX_NAME_MAX_LENGTH), offset=18, mode="r+", shape=1)[
0
] = self.name
np.memmap(self.file_path, dtype=f"S{MATRIX_NAME_MAX_LENGTH}", offset=18, mode="r+", shape=1)[0] = self.name

# matrix description
offset = 18 + MATRIX_NAME_MAX_LENGTH
@@ -1095,9 +1127,7 @@ def setName(self, matrix_name: str):
if len(str(matrix_name)) > MATRIX_NAME_MAX_LENGTH:
matrix_name = str(matrix_name)[0:MATRIX_NAME_MAX_LENGTH]

np.memmap(self.file_path, dtype="S" + str(MATRIX_NAME_MAX_LENGTH), offset=18, mode="r+", shape=1)[
0
] = matrix_name
np.memmap(self.file_path, dtype=f"S{MATRIX_NAME_MAX_LENGTH}", offset=18, shape=1)[0] = matrix_name
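The f-string form `f"S{MATRIX_NAME_MAX_LENGTH}"` used in these memmap calls builds the same fixed-width bytes dtype string as the old `"S" + str(...)` concatenation. With an assumed constant value for illustration:

```python
MATRIX_NAME_MAX_LENGTH = 50  # assumed value, for illustration only

old_style = "S" + str(MATRIX_NAME_MAX_LENGTH)
new_style = f"S{MATRIX_NAME_MAX_LENGTH}"

# Both produce the NumPy dtype string for a 50-byte fixed-width bytes field.
assert old_style == new_style == "S50"
print(new_style)  # S50
```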

def setDescription(self, matrix_description: str):
"""
30 changes: 15 additions & 15 deletions aequilibrae/paths/AoN.pyx
@@ -11,7 +11,6 @@ include 'conical.pyx'
include 'inrets.pyx'
include 'parallel_numpy.pyx'
include 'path_file_saving.pyx'
include 'graph_building.pyx'

def one_to_all(origin, matrix, graph, result, aux_result, curr_thread):
# type: (int, AequilibraeMatrix, Graph, AssignmentResults, MultiThreadedAoN, int) -> int
@@ -242,19 +241,20 @@ def path_computation(origin, destination, graph, results):
original_b_nodes_view)

if a_star_bint:
w = path_finding_a_star(origin_index,
dest_index,
g_view,
b_nodes_view,
graph_fs_view,
nodes_to_indices_view,
lat_view,
lon_view,
predecessors_view,
ids_graph_view,
conn_view,
reached_first_view,
heuristic)
path_finding_a_star(
origin_index,
dest_index,
g_view,
b_nodes_view,
graph_fs_view,
nodes_to_indices_view,
lat_view,
lon_view,
predecessors_view,
ids_graph_view,
conn_view,
heuristic
)
else:
w = path_finding(origin_index,
dest_index if early_exit_bint else -1,
Expand All @@ -267,7 +267,7 @@ def path_computation(origin, destination, graph, results):
reached_first_view)


if skims > 0:
if skims > 0 and not a_star_bint:
skim_single_path(origin_index,
nodes,
skims,
5 changes: 3 additions & 2 deletions aequilibrae/paths/all_or_nothing.py
@@ -36,8 +36,6 @@ def __init__(self, matrix, graph, results):
self.graph = graph
self.results = results
self.aux_res = MultiThreadedAoN()
self.report = []
self.cumulative = 0

if results._graph_id != graph._id:
raise ValueError("Results object not prepared. Use --> results.prepare(graph)")
@@ -55,6 +53,9 @@ def doWork(self):
self.execute()

def execute(self):
self.report = []
self.cumulative = 0

if pyqt:
self.assignment.emit(["zones finalized", 0])

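Moving the `report`/`cumulative` initialisation from `__init__` into `execute` in this hunk makes the worker safe to run more than once: each call starts from a clean slate instead of accumulating state from earlier runs. A toy illustration of the pattern (the loop body is a stand-in, not the assignment logic):

```python
class Worker:
    def execute(self):
        # Reset per-run state here, not in __init__, so repeated runs
        # do not accumulate results from earlier executions.
        self.report = []
        self.cumulative = 0
        for zone in range(3):
            self.cumulative += 1
            self.report.append(f"zone {zone} done")
        return self.cumulative

w = Worker()
assert w.execute() == 3
assert w.execute() == 3  # second run starts fresh instead of reaching 6
```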
13 changes: 5 additions & 8 deletions aequilibrae/paths/assignment_paths.py
@@ -94,15 +94,12 @@ def _read_compressed_graph_correspondence(self) -> Dict:

def read_path_file(self, origin: int, iteration: int, traffic_class_id: str) -> (pd.DataFrame, pd.DataFrame):
possible_traffic_classes = list(filter(lambda x: x.id == traffic_class_id, self.classes))
assert (
len(possible_traffic_classes) == 1
), f"traffic class id not unique, please choose one of {list(map(lambda x: x.id, self.classes))}"
class_ids = [x.id for x in self.classes]
assert len(possible_traffic_classes) == 1, f"traffic class id not unique, please choose one of {class_ids}"
traffic_class = possible_traffic_classes[0]
base_dir = os.path.join(
self.path_base_dir, f"iter{iteration}", f"path_c{traffic_class.id}_{traffic_class.name}"
)
path_o_f = os.path.join(base_dir, f"o{origin}.feather")
path_o_index_f = os.path.join(base_dir, f"o{origin}_indexdata.feather")
b_dir = os.path.join(self.path_base_dir, f"iter{iteration}", f"path_c{traffic_class.id}_{traffic_class.name}")
path_o_f = os.path.join(b_dir, f"o{origin}.feather")
path_o_index_f = os.path.join(b_dir, f"o{origin}_indexdata.feather")
path_o = pd.read_feather(path_o_f)
path_o_index = pd.read_feather(path_o_index_f)
return path_o, path_o_index
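The refactor in this hunk keeps the filter-then-assert uniqueness check on traffic class ids while shortening the assertion message line. The pattern in isolation, with hypothetical class ids:

```python
class TrafficClass:
    def __init__(self, id_, name):
        self.id = id_
        self.name = name

classes = [TrafficClass("c1", "car"), TrafficClass("c2", "truck")]

def find_class(traffic_class_id):
    # Filter to the matching class; the assert fires both when the id is
    # missing and when it is duplicated (same behaviour as the original).
    matches = [c for c in classes if c.id == traffic_class_id]
    class_ids = [c.id for c in classes]
    assert len(matches) == 1, f"traffic class id not unique, please choose one of {class_ids}"
    return matches[0]

print(find_class("c2").name)  # truck
```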