OSRM data processing timings
On the Demo Server, the timings were as follows on 2/27/2013:
- osrm-extract 3h10
- osrm-prepare 4h50
- Plus a little overhead from copying / compiling / restarting
So just over 8h total.
At the moment of writing, at least 55GB of free RAM are necessary to complete the entire tool-chain.
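Since the tool-chain aborts with std::bad_alloc once RAM runs out, it is worth verifying free memory before starting a multi-hour run. A minimal sketch, using the 55 GB figure quoted above (adjust for newer planet files):

```shell
#!/bin/sh
# Minimal sketch: refuse to start the tool-chain unless enough RAM is free.
# The 55 GB threshold is the figure quoted above for the full planet.
required_gb=55

enough_ram() {
  # $1: currently free RAM in GB; succeeds (exit 0) if the tool-chain can run
  [ "$1" -ge "$required_gb" ]
}
```

On Linux the current figure can be read from `free -g` before invoking osrm-extract.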
Processing North-America on an m2.4xlarge EC2 instance:
- osrm-extract 2h15
- osrm-prepare 4h10
Using 8 cores (defined in extractor.ini and contractor.ini) and the following .stxxl configuration: disk=/osm/stxxl,40000,syscall
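For reference, STXXL reads this from a .stxxl file (typically in the working or home directory of the user running the tool-chain). A sketch using the values from this run; syscall selects STXXL's plain-syscall file I/O:

```
# .stxxl -- STXXL disk configuration (sketch, values from this run)
# format: disk=<path>,<size in MB>,<access method>
disk=/osm/stxxl,40000,syscall
```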
Note that this only describes a successful extraction run. The graph preparing step failed for lack of RAM.
Instance type: High-Memory Quadruple Extra Large — 68.4 GB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform.
The instance was started at 1:35 PM and finished at 4:20 PM. I allocated 400 GB of disk space to stxxl (just to avoid any risk from a low allocation; in the end only about 120 GB of it was used while my process was alive, though unfortunately the run did not get very far).
The extraction process took about 2 hours and 45 minutes to complete and generated two files, planet.osrm and planet.osrm.restrictions, which were 16 GB (15644 MB, to be precise) and 2 MB respectively. During the whole extraction process the peak memory usage was approximately 23% and CPU usage varied from 60% to 110%.
The following output of the extraction process shows per-step timings, to give an idea of how long each stage takes:
Parse Data Thread Finished
[extractor] parsing finished after 2968.81 seconds
[extractor] Sorting used nodes ... ok, after 87.5432s
[extractor] Erasing duplicate nodes ... ok, after 95.8619s
[extractor] Sorting all nodes ... ok, after 2061.31s
[extractor] Sorting used ways ... ok, after 24.7719s
[extractor] Sorting restrctns. by from... ok, after 24.8801s
[extractor] Fixing restriction starts ... ok, after 45.958s
[extractor] Sorting restrctns. by to ... ok, after 0.16124s
[extractor] Fixing restriction ends ... ok, after 22.0372s
[info extractor.cpp:312] usable restrictions: 112352
[extractor] Confirming/Writing used nodes ... ok, after 716.915s
[extractor] setting number of nodes ... ok
[extractor] Sorting edges by start ... ok, after 803.349s
[extractor] Setting start coords ... ok, after 958.393s
[extractor] Sorting edges by target ... ok, after 790.56s
[extractor] Setting target coords ... ok, after 1348.08s
[extractor] setting number of edges ... ok
[extractor] writing street name index ... ok, after 0.856781s
[info extractor.cpp:500] Processed 37298 nodes/sec and 39263.5 edges/sec
[info extractor.cpp:504] [extractor] finished.
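The per-step timings above can be totalled to compare runs. A sketch, assuming every duration in the log appears as "after &lt;seconds&gt;" as shown above:

```shell
#!/bin/sh
# Sketch: sum every "after <seconds>" duration in an osrm-extract log.
# Assumes the log format shown above; prints total seconds, rounded.
sum_step_seconds() {
  grep -o 'after [0-9.]*' "$1" | awk '{s += $2} END {printf "%.0f\n", s}'
}
```

Summing the log above gives about 9,950 seconds (≈ 2h46), consistent with the reported wall time of roughly 2 hours 45 minutes.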
This step is more CPU- and memory-intensive. osrm-prepare was started at about 4:23 PM; right after starting, it continuously used 100% CPU, and memory consumption climbed from 22% to as high as 79% (about 50 GB on this server).
At this point, i.e. about 15 minutes after osrm-prepare started, it was killed (core dumped) and the prompt showed:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
This error occurs when the machine runs out of RAM.
- The stxxl file had about 120 GB used by the end of the process, so you can size it around that figure (make it 150 GB if you have enough space, to stay out of risk).
- The OSRM extraction process itself was not especially memory- or CPU-intensive: peak memory usage was 22.5% (about 15 GB of RAM) and peak CPU usage was 110%, with an average around 80% — while a maximum of 800% was possible on this 8-core machine.
- Extracting the whole planet took approximately 3 hours on the above-mentioned server.
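A simple rule of thumb from the numbers above: size the stxxl file at the observed peak plus roughly 25% headroom. A sketch — the 120 GB peak and the resulting 150 GB match this run's observations:

```shell
#!/bin/sh
# Sketch: size the .stxxl disk at observed peak usage plus 25% headroom.
# 120 GB observed peak (above) -> 150 GB, expressed in MB for the disk= line.
stxxl_size_mb() {
  # $1: observed peak stxxl usage in GB
  echo $(( $1 * 125 / 100 * 1024 ))
}
```

For this run: stxxl_size_mb 120 prints 153600, i.e. the 150 GB recommended above for the disk= line.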