Running OSRM
There are two pre-processing pipelines available:
- Contraction Hierarchies (CH) which best fits use-cases where query performance is key, especially for large distance matrices
- Multi-Level Dijkstra (MLD) which best fits use-cases where query performance still needs to be very good and live updates to the data need to be made, e.g. for regular traffic updates
Our efforts are shifting towards the MLD pipeline with some features seeing MLD-specific implementations (such as multiple alternative routes). We recommend using the MLD pipeline except for special use-cases such as very large distance matrices where the CH pipeline is still faster.
For the MLD pipeline we need to extract (osrm-extract) a graph out of the OpenStreetMap base map, then partition (osrm-partition) this graph recursively into cells, customize the cells (osrm-customize) by calculating routing weights for all cells, and then start the development HTTP server (osrm-routed) that responds to queries:
cd osrm-backend
wget http://download.geofabrik.de/europe/germany/berlin-latest.osm.pbf
osrm-extract berlin-latest.osm.pbf -p profiles/car.lua
osrm-partition berlin-latest.osrm
osrm-customize berlin-latest.osrm
osrm-routed --algorithm=MLD berlin-latest.osrm
For CH the partition and customize pipeline stages are replaced by adding shortcuts from the Contraction Hierarchies algorithm (osrm-contract):
cd osrm-backend
wget http://download.geofabrik.de/europe/germany/berlin-latest.osm.pbf
osrm-extract berlin-latest.osm.pbf -p profiles/car.lua
osrm-contract berlin-latest.osrm
osrm-routed berlin-latest.osrm
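Since the CH pipeline is the recommended choice for large distance matrices, a quick way to exercise such a setup is the table service. The following is a sketch that assumes the osrm-routed instance started above is listening on the default port 5000; it returns a duration matrix between the two example coordinates:
curl "http://127.0.0.1:5000/table/v1/driving/13.388860,52.517037;13.385983,52.496891"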
Exported OSM data files can be obtained from providers such as Geofabrik. OSM data comes in a variety of formats, including XML and PBF, and contains a plethora of data. The data includes information which is irrelevant to routing, such as the positions of public waste baskets. Also, the data does not conform to a hard standard and important information can be described in various ways. Thus it is necessary to extract the routing data into a normalized format. This is done by the OSRM extractor tool: it parses the contents of the exported OSM file and writes out routing metadata.
Profiles are used during this process to determine what can be routed along, and what cannot (private roads, barriers etc.).
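For example, the bicycle profile bundled with osrm-backend can be selected instead of the car profile; the relative path below assumes the command is run from the repository root:
osrm-extract berlin-latest.osm.pbf -p profiles/bicycle.lua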
It's best to provide a swapfile so that out-of-memory situations do not kill OSRM (the size depends on your map data):
fallocate -l 100G /path/to/swapfile
chmod 600 /path/to/swapfile
mkswap /path/to/swapfile
swapon /path/to/swapfile
Note: this does not write 100 GB of zeros. Instead, it allocates the blocks and marks them as uninitialized, so the command returns more or less immediately.
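To verify that the swap space is active before running a large extraction, the standard util-linux tools can be used (the exact output varies by distribution):
swapon --show
free -h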
External memory accesses are handled by the stxxl library. Although you can run the code without any special configuration, you might see a warning similar to [STXXL-ERRMSG] Warning: no config file found. Given you have enough free disk space, you can happily ignore the warning or create a config file called .stxxl in the same directory where the extractor tool sits. The following is taken from the stxxl manual:
You must define the disk configuration for an STXXL program in a file named '.stxxl' that must reside in the same directory where you execute the program. You can change the default file name for the configuration file by setting the environment variable STXXLCFG.
Each line of the configuration file describes a disk. A disk description uses the following format:
disk=full_disk_filename,capacity,access_method
Example: at the time of writing this (v4.9.1), the demo server uses a 250 GB stxxl file with the following configuration:
disk=/path/to/stxxl,250000,syscall
Check STXXL's config documentation.
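As a concrete sketch, the single-line configuration above can be written into a .stxxl file next to the extractor like this (adjust the path and capacity to your disk):
echo "disk=/path/to/stxxl,250000,syscall" > .stxxl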
The so-called Hierarchy is precomputed data that enables the routing engine to find shortest paths quickly.
osrm-contract map.osrm
where map.osrm is the extracted road network generated by the previous step. A nearest-neighbor data structure and a node map are created alongside the hierarchy.
We provide a demo HTTP server on top of the libosrm library, which does the heavy lifting. You can start it via:
osrm-routed map.osrm
You can access the API on localhost:5000. See the Server API for details on how to use it.
Here is how to run an example query:
curl "http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.385983,52.496891?steps=true"
Below is an example nginx.conf configuration that shows how to configure nginx as a reverse proxy to send different requests to different osrm-routed instances:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    # catch all v5 routes and send them to OSRM
    location /route/v1/driving { proxy_pass http://localhost:5000; }
    location /route/v1/walking { proxy_pass http://localhost:5001; }
    location /route/v1/cycling { proxy_pass http://localhost:5002; }

    # Everything else is a mistake and returns an error
    location / {
        add_header Content-Type text/plain;
        return 404 'Your request is bad and you should feel bad.';
    }
}
This setup assumes you have 3 different osrm-routed servers running, each using a different .osrm file set, and each running on a different HTTP port (5000, 5001, and 5002).
The driving, walking, and cycling parts of the URL are used by nginx to select the correct proxy backend, but after that, osrm-routed ignores them and just returns a route on whatever data that instance of osrm-routed is running with. It's up to you to ensure that the correct osrm-routed instance is using the correct datafiles so that /driving/ actually routes on a dataset generated from the car.lua profile.
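A minimal sketch of how the three backends behind this proxy could be started, assuming you have prepared three MLD datasets with the car, foot, and bicycle profiles (the directory names below are placeholders):
osrm-routed --algorithm mld --port 5000 car/berlin-latest.osrm &
osrm-routed --algorithm mld --port 5001 foot/berlin-latest.osrm &
osrm-routed --algorithm mld --port 5002 bicycle/berlin-latest.osrm &
With nginx in front, the profile segment of the URL then picks the backend, e.g.:
curl "http://localhost/route/v1/walking/13.388860,52.517037;13.385983,52.496891"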
I think this page is obsolete. I'll add a few comments since I tried to compile OSRM recently using Code::Blocks 20.03 on Windows 10. What I found was:
Several extra libraries have to be installed, such as BZip2, Lua, and zlib. I was using MSYS2, so I ran:
pacman -S mingw-w64-x86_64-bzip2
pacman -S mingw-w64-x86_64-lua
pacman -S mingw-w64-x86_64-zlib
Install Intel TBB from https://github.com/oneapi-src/oneTBB/releases and extract it to a folder (e.g., C:\tbb).
Make sure to pass the TBB path to the CMake command and that the bin folder of MinGW or MSYS2 (e.g., C:\msys64\mingw64\bin) is added to your system PATH environment variable.
Then in osrm-backend\build run
cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DTBB_INCLUDE_DIR="C:/tbb/include" -DTBB_LIBRARY="C:/tbb/lib/intel64/gcc4.8/libtbb.so"
cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-fno-lto -mconsole" -DCMAKE_EXE_LINKER_FLAGS="-Wl,-e,mainCRTStartup"
That may be enough to compile.
However, for some compilation problems I had to:
- remove "-Werror # Treat all warnings like error" from the CMakeLists.txt file
- add in shared_memory.hpp at line 208: (void)lock_file; // this explicitly marks lock_file as used to avoid an unused-variable warning
- to avoid a Link Time Optimization (LTO) error, run cmake -G "CodeBlocks - MinGW Makefiles" .. -DCMAKE_BUILD_TYPE=Release -DIPO=OFF
- put OFF in option(ENABLE_LTO "Use Link Time Optimisation" OFF) and add set(CMAKE_INTERPROCEDURAL_OPTIMIZATION OFF) in the CMake configuration
I finally gave up because of a Windows console incompatibility (WinMain not found) without finding the reason, even after setting the build target type to "Console application" in the Code::Blocks project properties.