diff --git a/docs/source/files/assignments/08.rst b/docs/source/files/assignments/08.rst index c6b372cc..4bb8266b 100644 --- a/docs/source/files/assignments/08.rst +++ b/docs/source/files/assignments/08.rst @@ -1,15 +1,17 @@ +################ 8. Optimization -***************** +################ +********** 8.1 ARA -======== +********** .. figure:: https://wiki.uni-jena.de/download/attachments/22453005/IMG_7381_0p5.JPG?version=1&modificationDate=1625042348365&api=v2 HPC-Cluster ARA. Source: https://wiki.uni-jena.de/pages/viewpage.action?pageId=22453005 8.1.1 - Uploading and running the code ----------------------------------------- +======================================== First, we cloned our GitHub repository to "beegfs" and transferred the bathymetry and displacement data there with "wget https://cloud.uni-jena.de/s/CqrDBqiMyKComPc/download/data_in.tar.xz -O tsunami_lab_data_in.tar.xz". @@ -41,9 +43,10 @@ sbatch file: Since we only want to use one node, we set ``nodes`` and ``ntasks`` to 1 and ``cpus-per-task`` to 72. 8.1.2 - Visualizations --------------------------- +======================== -**Tohoku 5000** +Tohoku 5000 +----------- .. raw:: html @@ -52,7 +55,8 @@ Since we only want to use one node, we set ``nodes`` and ``ntasks`` to 1 and ``c -**Tohoku 1000** +Tohoku 1000 +----------- .. raw:: html @@ -62,7 +66,8 @@ Since we only want to use one node, we set ``nodes`` and ``ntasks`` to 1 and ``c -**Chile 5000** +Chile 5000 +----------- .. raw:: html @@ -70,7 +75,8 @@ Since we only want to use one node, we set ``nodes`` and ``ntasks`` to 1 and ``c -**Chile 1000** +Chile 1000 +----------- .. raw:: html @@ -82,7 +88,7 @@ Since we only want to use one node, we set ``nodes`` and ``ntasks`` to 1 and ``c Compared to the simulations from assignment 6, it is clear that all simulations behave identically. 8.1.3 - Private PC vs ARA ---------------------------- +=========================== .. 
note:: @@ -90,7 +96,7 @@ Compared to the simulations from assignment 6, it is clear that all simulations The benchmarking mode disables all file output (and also skips all imports of ````). Setups -^^^^^^^^^^ +------- If you are interested, you can view the used configurations here: @@ -103,7 +109,7 @@ If you are interested, you can view the used configurations here: :download:`tohoku1000.json <../../_static/text/tohoku1000.json>` Results -^^^^^^^^^^ +-------- .. list-table:: execution times on different devices :header-rows: 1 @@ -187,17 +193,18 @@ Results and stopped after the program has finished and all memory has been freed. Observations -^^^^^^^^^^^^^^ +-------------- In every scenario, ARA had a faster setup time but slower computation times. We conclude that ARA has faster data/file access (because the setup heavily depends on data reading speed from a file), while the private PC seems to have better single-core performance. +************** 8.2 Compilers -=============== +************** 8.2.1 - Generic compiler support ---------------------------------- +================================= We enabled generic compiler support by adding the following lines to our ``SConstruct`` file @@ -221,10 +228,10 @@ Now, scons can be invoked with a compiler of choice, for example by running CXX=icpc scons 8.2.2 & 8.2.3 - Test runs --------------------------- +=========================== Time measurements -^^^^^^^^^^^^^^^^^^^^^^^^^ +------------------ For each run, we used the following configuration: @@ -313,7 +320,7 @@ We therefore ended up using ``compiler/intel/2018-Update1`` and ``gcc (GCC) 4.8. This configuration was the only one that worked for us, as we did not manage to fix all the errors that were thrown at us. Observations from the table -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +---------------------------- As one would intuitively expect, the higher the optimization level, the quicker the process finishes. 
@@ -328,7 +335,7 @@ We would also need to ensure that there are no other intensive processes running Nonetheless, using the table as a rough estimate, it seems that ``g++`` is faster when using ``-O0`` and ``-Ofast``, while ``icpc`` is preferable for ``-O2``. 8.2.3 - Optimization flags ---------------------------- +=========================== To allow for an easy switch between optimization flags, we added the following code to our SConstruct: @@ -355,7 +362,7 @@ and env.Append( CXXFLAGS = [ env['opt'] ] ) The dangers of -Ofast -^^^^^^^^^^^^^^^^^^^^^^^ +---------------------- One of the options that ``-Ofast`` enables is ``-ffast-math``. This in turn activates many other options as well, such as @@ -386,7 +393,7 @@ and ``_ 8.2.4 - Compiler reports ------------------------- +========================= We added support for a compiler report flag with the following lines in our ``SConstruct`` @@ -435,7 +442,7 @@ This snippet refers to the loops that provide our solver with data from a setup: } F-Wave optimization report -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +--------------------------- The full report can be found :download:`here. <../../_static/text/task8-2-4_fwave_optrpt.txt>` @@ -484,7 +491,7 @@ For ``netUpdates``, the report tells us that We can conclude that the compiler is able to inline our calls to ``computeEigenvalues`` and ``computeEigencoefficients``. WavePropagation2d optimization report -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------------- The full report can be found :download:`here. <../../_static/text/task8-2-4_waveprop2d_optrpt.txt>` @@ -514,12 +521,12 @@ could not be vectorized: Lines 86 and 88 are the two for-loops for the y- and x-axis of the x-sweep, and lines 152 and 154 are the two for-loops for the y- and x-axis of the y-sweep. 
- +********************************************* 8.3 Instrumentation and Performance Counters -============================================== +********************************************* 8.3.1 to 8.3.4 - VTune ------------------------ +======================= First, we used the GUI of Intel VTune to configure our reports. @@ -542,7 +549,7 @@ Then the following batch script was used to run the hotspots measurement: /cluster/intel/vtune_profiler_2020.2.0.610396/bin64/vtune -collect hotspots -app-working-dir /beegfs/xe63nel/tsunami_lab/build -- /beegfs/xe63nel/tsunami_lab/build/tsunami_lab ../configs/config.json Hotspots -^^^^^^^^^^ +--------- .. image:: ../../_static/assets/task_8-3-1_hotspot_bottomUp.png @@ -564,7 +571,7 @@ It was interesting to see (although it should not come as a surprise) that the ` of the CPU time. Threads -^^^^^^^^^^ +-------- .. image:: ../../_static/assets/task_8-3-1_threads.png @@ -573,10 +580,10 @@ Threads The poor result for the thread report was also expected because we only compute sequentially. 8.3.5 - Code optimizations ---------------------------- +=========================== TsunamiEvent2d speedup -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- In order to increase the speed of this setup, we introduced a variable ``lastnegativeIndex`` for the X and Y direction for the bathymetry and displacement. The idea is the following: @@ -634,7 +641,7 @@ Code snippets of the implementation: } F-Wave solver optimization -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +---------------------------- In ``computeEigencoefficients``, we changed @@ -677,7 +684,7 @@ Furthermore, we established a constant for :code:`t_real(0.5) * m_g`: Coarse Output optimization -^^^^^^^^^^^^^^^^^^^^^^^^^^^ +---------------------------- Inside the ``write()`` function in ``NetCdf.cpp`` we calculated @@ -699,8 +706,9 @@ once and then reuse it wherever we need it: This way, the division only happens once. 
+************************ Individual phase ideas -======================== +************************ For the individual phase, we plan on building a graphical user interface using `ImGui `_. diff --git a/docs/source/files/assignments/09.rst b/docs/source/files/assignments/09.rst index e651c553..5697ff51 100644 --- a/docs/source/files/assignments/09.rst +++ b/docs/source/files/assignments/09.rst @@ -1,11 +1,13 @@ +================== 9. Parallelization -******************** +================== +************ 9.1 OpenMP -============ +************ 9.1.1 - Parallelization with OpenMP ----------------------------------------- +==================================== An easy way to parallelize our for loops is using @@ -22,7 +24,7 @@ example: ... 9.1.2 - Parallelization speedup ------------------------------------------- +==================================== We used the following batch script for ARA: @@ -42,7 +44,8 @@ We used the following batch script for ARA: And got the following results: -**Without parallelization** +Without parallelization +----------------------- .. code:: text @@ -54,7 +57,8 @@ And got the following results: = 1941.01 seconds = 32.3501 minutes -**With parallelization on 72 cores with 72 threads** +With parallelization on 72 cores with 72 threads +------------------------------------------------ .. code:: text @@ -72,7 +76,8 @@ And got the following results: Speedup: :math:`\frac{1941}{75.5} = 25.7` -**With parallelization on 72 cores with 144 threads** +With parallelization on 72 cores with 144 threads +------------------------------------------------- .. code:: text @@ -90,7 +95,7 @@ We can see that having twice the amount of threads resulted in a much slower com We conclude that using more threads than cores slows down performance. 9.1.3 - 2D for loop parallelization ------------------------------------------- +==================================== The results above used parallelization of the outer loop. 
Parallelizing the inner loop leads to the following time: @@ -105,9 +110,10 @@ Parallelizing the inner loop leads to the following time: It is clear that parallelizing the outer loop is more efficient. 9.1.4 - Pinning and Scheduling ------------------------------------------- +=============================== -**Scheduling** +Scheduling +---------- The implementation above used the default :code:`schedule(static)`. @@ -136,7 +142,8 @@ For :code:`schedule(auto)` we get: = 84.5467 seconds = 1.40911 minutes -**Pinning** +Pinning +------- Using :code:`OMP_PLACES={0}:36:1` we get: diff --git a/docs/source/files/assignments/project.rst b/docs/source/files/assignments/project.rst index 730e81c4..38f08206 100644 --- a/docs/source/files/assignments/project.rst +++ b/docs/source/files/assignments/project.rst @@ -1,11 +1,13 @@ +################### 10. Project Phase -******************** +################### In the project phase, we decided to implement a user-friendly GUI. The aim is to make the usage of our Tsunami solver as easy and interactive as possible. +********************* GUI (Client-side) -================== +********************* .. image:: ../../_static/assets/task-10-Gui_help.png @@ -28,20 +30,22 @@ After selecting, the simulation has to be recompiled with the corresponding button be The last tab contains further actions to interact with the simulation. First, the simulation can be started or killed here. Files for the bathymetry and displacement can also be chosen. In addition, the user can retrieve data such as the height from the simulation. +********************* Server-side -============= +********************* +********************* Libraries -============== +********************* Communicator -************** +===================== For communication between the simulation and the GUI, we implemented a communication library. The **Communicator.hpp** library can be used to easily create a client-server TCP connection and handle its communication and logging. 
Communicator API -****************** +===================== (**File: communicator_api.h**) diff --git a/lib/xlpmg/Communicator.hpp b/lib/xlpmg/Communicator.hpp index f229dbee..1be6fe5b 100644 --- a/lib/xlpmg/Communicator.hpp +++ b/lib/xlpmg/Communicator.hpp @@ -28,7 +28,7 @@ namespace xlpmg { private: // Timeout value for socket operations in seconds - int TIMEOUT = 2; + int TIMEOUT = 20; // Log data for storing communication logs std::string logData = ""; // Socket related variables @@ -265,7 +265,7 @@ namespace xlpmg */ bool checkServerResponse() { - return true; + } /** diff --git a/lib/xlpmg/communicator_api.h b/lib/xlpmg/communicator_api.h index 6ef74a2f..9f32a76a 100644 --- a/lib/xlpmg/communicator_api.h +++ b/lib/xlpmg/communicator_api.h @@ -21,48 +21,57 @@ namespace xlpmg /** * @brief Enum representing the different parts of a message. * - * The MessagePart enum defines the different parts of a message, including the type, key, and arguments. + * The MessagePart enum defines the different parts of a message, including the expectation, urgency, key, and arguments. */ enum MessagePart { - TYPE, + EXPECTATION, + URGENCY, KEY, ARGS }; - /** - * @brief Enum representing the different types of messages. - * - * The MessageType enum defines the different types of messages that can be sent or received. - * It includes server calls, function calls, server responses, and other types of messages. 
- */ - enum MessageType + enum MessageExpectation { - SERVER_CALL, - FUNCTION_CALL, - OTHER, - SERVER_RESPONSE + NO_RESPONSE, + EXPECT_RESPONSE }; + enum MessageUrgency + { + CRITICAl, + HIGH, + MEDIUM, + LOW + }; + + /** + * @brief Macro to briefly define a mapping between MessageExpectation enum and JSON + */ + NLOHMANN_JSON_SERIALIZE_ENUM(MessageExpectation, {{NO_RESPONSE, "no_response"}, + {EXPECT_RESPONSE, "expect_response"}}); + /** - * @brief Macro to briefly define a mapping between MessageType enum and JSON + * @brief Macro to briefly define a mapping between MessageUrgency enum and JSON */ - NLOHMANN_JSON_SERIALIZE_ENUM(MessageType, {{SERVER_CALL, "server_call"}, - {FUNCTION_CALL, "function_call"}, - {OTHER, "other"}, - {SERVER_RESPONSE, "server_response"}}); + NLOHMANN_JSON_SERIALIZE_ENUM(MessageUrgency, {{CRITICAl, "critical"}, + {HIGH, "high"}, + {MEDIUM, "medium"}, + {LOW, "low"}}); /** * Struct representing a message in the communicator API. * * # Description - * This struct contains information about the type, key, and arguments of a message. + * This struct contains information about the urgency, key, and arguments of a message. */ struct Message { - // The type of the message. - MessageType type = MessageType::OTHER; - // The key associated with the message + // The expectation of the message. + MessageExpectation expectation = MessageExpectation::NO_RESPONSE; + // The urgency of the message. + MessageUrgency urgency = MessageUrgency::MEDIUM; + // The key associated with the message. std::string key = "NONE"; // The arguments of the message. 
json args = ""; @@ -77,7 +86,8 @@ namespace xlpmg json messageToJson(Message i_message) { json msg; - msg[MessagePart::TYPE] = i_message.type; + msg[MessagePart::EXPECTATION] = i_message.expectation; + msg[MessagePart::URGENCY] = i_message.urgency; msg[MessagePart::KEY] = i_message.key; msg[MessagePart::ARGS] = i_message.args; return msg; @@ -103,98 +113,104 @@ namespace xlpmg Message jsonToMessage(json i_json) { Message l_message; - l_message.type = i_json.at(MessagePart::TYPE); + l_message.expectation = i_json.at(MessagePart::EXPECTATION); + l_message.urgency = i_json.at(MessagePart::URGENCY); l_message.key = i_json.at(MessagePart::KEY); l_message.args = i_json.at(MessagePart::ARGS); return l_message; } - //! Should not not induce any functionality and is only used to check if the other side responds - inline const Message CHECK = {MessageType::SERVER_CALL, "CHECK"}; + ///////////////////////////////// + // NO_RESPONSE // + ///////////////////////////////// - //! Will provide information on the simulation - inline const Message GET_SIMULATION_STATS = {MessageType::SERVER_CALL, "get_simulation_stats"}; + // CRITICAL + //! Server will stop the running simulation. + inline const Message KILL_SIMULATION = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "kill_simulation"}; + //! Tells the Simulator to write a checkpoint. + inline const Message WRITE_CHECKPOINT = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "write_checkpoint"}; + //! Pauses a simulation. + inline const Message PAUSE_SIMULATION = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "pause_simulation"}; //! Tells the server to shutdown. - inline const Message SHUTDOWN_SERVER = {MessageType::SERVER_CALL, "shutdown_server"}; + inline const Message SHUTDOWN_SERVER = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "shutdown_server"}; - //! Tells the server to restart. 
- inline const Message START_SIMULATION = {MessageType::SERVER_CALL, "start_simulation"}; + // HIGH - //! Server will stop the running simulation. - inline const Message KILL_SIMULATION = {MessageType::SERVER_CALL, "kill_simulation"}; + //! Should not induce any functionality and is only used to check if the other side responds + inline const Message CHECK = {MessageExpectation::NO_RESPONSE, MessageUrgency::HIGH, "CHECK"}; + //! Tells the server to start the simulator. + inline const Message START_SIMULATION = {MessageExpectation::NO_RESPONSE, MessageUrgency::HIGH, "start_simulation"}; + //! Continues a simulation. + inline const Message CONTINUE_SIMULATION = {MessageExpectation::NO_RESPONSE, MessageUrgency::HIGH, "continue_simulation"}; + //! Tells the Simulator to reset. + inline const Message RESET_SIMULATOR = {MessageExpectation::NO_RESPONSE, MessageUrgency::HIGH, "reset_simulator"}; + //! Tells the Simulator to toggle file i/o usage to the given argument. + inline const Message TOGGLE_FILEIO = {MessageExpectation::NO_RESPONSE, MessageUrgency::HIGH, "toggle_fileio"}; - //! Server will recompile with provided arguments. - inline const Message COMPILE = {MessageType::SERVER_CALL, "compile", ""}; + // MEDIUM + //! Server will recompile with provided arguments. + inline const Message COMPILE = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "compile", ""}; //! Server will recompile with provided arguments and run using a bash script. - inline const Message COMPILE_RUN_BASH = {MessageType::SERVER_CALL, "compile_run_bash", ""}; - + inline const Message COMPILE_RUN_BASH = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "compile_run_bash", ""}; //! Server will recompile with provided arguments and run using an sbatch script. - inline const Message COMPILE_RUN_SBATCH = {MessageType::SERVER_CALL, "compile_run_sbatch", ""}; - - //! 
For sending a file to the server - inline const Message SEND_FILE = {MessageType::SERVER_CALL, "send_file"}; + inline const Message COMPILE_RUN_SBATCH = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "compile_run_sbatch", ""}; + //! Deletes checkpoints. + inline const Message DELETE_CHECKPOINTS = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "delete_checkpoints"}; + //! Deletes stations. + inline const Message DELETE_STATIONS = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "delete_stations"}; + //! Sets the offset of the simulation. + inline const Message SET_OFFSET = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "set_offset"}; + //! Sets the cell amount of the simulation. + inline const Message SET_CELL_AMOUNT = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "set_cell_amount"}; - //! For receiving a file from the server - inline const Message RECV_FILE = {MessageType::SERVER_CALL, "recv_file"}; + // LOW + //! For sending a file to the server + inline const Message SEND_FILE = {MessageExpectation::NO_RESPONSE, MessageUrgency::LOW, "send_file"}; + //! Tells the Simulator to load config from json data. + inline const Message LOAD_CONFIG_JSON = {MessageExpectation::NO_RESPONSE, MessageUrgency::LOW, "load_config_json"}; + //! Tells the Simulator to load config from .json config file. + inline const Message LOAD_CONFIG_FILE = {MessageExpectation::NO_RESPONSE, MessageUrgency::LOW, "load_config_file"}; //! Tells the server to change the read buffer size. - inline const Message SET_READ_BUFFER_SIZE = {MessageType::SERVER_CALL, "set_read_buffer_size"}; - + inline const Message SET_READ_BUFFER_SIZE = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "set_read_buffer_size"}; //! Tells the server to change the send buffer size. - inline const Message SET_SEND_BUFFER_SIZE = {MessageType::SERVER_CALL, "set_send_buffer_size"}; - - //! Tells the Simulator to reset. 
- inline const Message RESET_SIMULATOR = {MessageType::FUNCTION_CALL, "reset_simulator"}; - - //! Tells the Simulator to write a checkpoint. - inline const Message WRITE_CHECKPOINT = {MessageType::FUNCTION_CALL, "write_checkpoint"}; + inline const Message SET_SEND_BUFFER_SIZE = {MessageExpectation::NO_RESPONSE, MessageUrgency::MEDIUM, "set_send_buffer_size"}; - //! Tells the Simulator to load config from json data. - inline const Message LOAD_CONFIG_JSON = {MessageType::FUNCTION_CALL, "load_config_json"}; - - //! Tells the Simulator to load config from .json config file. - inline const Message LOAD_CONFIG_FILE = {MessageType::FUNCTION_CALL, "load_config_file"}; + //////////////////////////////// + // EXPECT_RESPONSE // + //////////////////////////////// - //! Tells the Simulator to toggle file i/o usage to given argument. - inline const Message TOGGLE_FILEIO = {MessageType::FUNCTION_CALL, "toggle_fileio"}; + // CRITICAL //! Returns the current timestep from the simulator. - inline const Message GET_TIME_VALUES = {MessageType::FUNCTION_CALL, "get_time_values"}; - + inline const Message GET_TIME_VALUES = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::CRITICAl, "get_time_values"}; + //! Gets system info such as CPU and RAM usage. + inline const Message GET_SYSTEM_INFORMATION = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::CRITICAl, "get_system_information"}; //! Returns the current simulation sizes from the simulator. - inline const Message GET_SIMULATION_SIZES = {MessageType::FUNCTION_CALL, "get_simulation_sizes"}; + inline const Message GET_SIMULATION_SIZES = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::CRITICAl, "get_simulation_sizes"}; - //! Tells the server to start sending height data. (buffered) - inline const Message GET_HEIGHT_DATA = {MessageType::FUNCTION_CALL, "get_height_data"}; + // HIGH + //! Tells the server to start sending height data. 
(buffered) + inline const Message GET_HEIGHT_DATA = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::MEDIUM, "get_height_data"}; //! Tells the server to start sending bathymetry data. (buffered) - inline const Message GET_BATHYMETRY_DATA = {MessageType::FUNCTION_CALL, "get_bathymetry_data"}; - - //! Sets the cell amount of the simulation. - inline const Message SET_OFFSET = {MessageType::FUNCTION_CALL, "set_offset"}; - - //! Sets the offset of the simulation. - inline const Message SET_CELL_AMOUNT = {MessageType::FUNCTION_CALL, "set_cell_amount"}; - - //! Tells the client that a buffered sending operation has finished. - inline const Message BUFFERED_SEND_FINISHED = {MessageType::SERVER_RESPONSE, "buff_send_finished"}; + inline const Message GET_BATHYMETRY_DATA = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::MEDIUM, "get_bathymetry_data"}; - //! Deletes checkpoints. - inline const Message DELETE_CHECKPOINTS = {MessageType::SERVER_CALL, "delete_checkpoints"}; + // MEDIUM - //! Deletes stations. - inline const Message DELETE_STATIONS = {MessageType::SERVER_CALL, "delete_stations"}; - - //! Pauses a simulation. - inline const Message PAUSE_SIMULATION = {MessageType::SERVER_CALL, "pause_simulation"}; + //! For receiving a file from the server + inline const Message RECV_FILE = {MessageExpectation::EXPECT_RESPONSE, MessageUrgency::LOW, "recv_file"}; - //! Continues a simulation. - inline const Message CONTINUE_SIMULATION = {MessageType::SERVER_CALL, "continue_simulation"}; + //////////////////////////////// + // SERVER_RESPONSE // + //////////////////////////////// - //! Gets system info such as CPU and RAM usage. - inline const Message GET_SYSTEM_INFORMATION = {MessageType::SERVER_CALL, "get_system_information"}; + //! Tells the client that a buffered sending operation has finished. + inline const Message BUFFERED_SEND_FINISHED = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "buff_send_finished"}; + //! 
Server response template + inline const Message SERVER_RESPONSE = {MessageExpectation::NO_RESPONSE, MessageUrgency::CRITICAl, "server_response"}; } #endif \ No newline at end of file diff --git a/src/Server.cpp b/src/Server.cpp index 687fd596..9fee29e9 100644 --- a/src/Server.cpp +++ b/src/Server.cpp @@ -126,333 +126,379 @@ int main(int i_argc, char *i_argv[]) json l_parsedData = json::parse(l_rawData); xlpmg::Message l_message = xlpmg::jsonToMessage(l_parsedData); - xlpmg::MessageType l_type = l_message.type; + xlpmg::MessageExpectation l_expectation = l_message.expectation; + xlpmg::MessageUrgency l_urgency = l_message.urgency; std::string l_key = l_message.key; json l_args = l_message.args; - //-------------------------------------------// - //---------------SERVER CALLS----------------// - //-------------------------------------------// - if (l_type == xlpmg::SERVER_CALL) + ///////////////////////////////// + // NO_RESPONSE // + ///////////////////////////////// + if (l_expectation == xlpmg::NO_RESPONSE) { - if (l_key == xlpmg::CHECK.key) + // CRITICAL + if (l_urgency == xlpmg::CRITICAl) { - l_communicator.sendToClient("OK"); - } - else if (l_key == xlpmg::SHUTDOWN_SERVER.key) - { - m_EXIT = true; - exitSimulationThread(); - l_communicator.stopServer(); + if (l_key == xlpmg::KILL_SIMULATION.key) + { + exitSimulationThread(); + } + else if (l_key == xlpmg::PAUSE_SIMULATION.key) + { + std::cout << "Pause simulation" << std::endl; + simulator->setPausingStatus(true); + } + else if (l_key == xlpmg::SHUTDOWN_SERVER.key) + { + m_EXIT = true; + exitSimulationThread(); + l_communicator.stopServer(); + } } - else if (l_key == xlpmg::START_SIMULATION.key) + // HIGH + else if (l_urgency == xlpmg::HIGH) { - std::string l_config = l_args; - if (simulator->isPreparing() || simulator->isCalculating()) + if (l_key == xlpmg::CHECK.key) + { + l_communicator.sendToClient("OK"); + } + else if (l_key == xlpmg::START_SIMULATION.key) + { + std::string l_config = l_args; + if 
(simulator->isPreparing() || simulator->isCalculating()) + { + std::cout << "Warning: Did not start simulator because it is still preparing or calculating." << std::endl; + } + else + { + if (canRunThread()) + { + m_simulationThread = std::thread(&tsunami_lab::Simulator::start, simulator, l_config); + } + else + { + std::cout << "Warning: Did not start simulator because it is still running." << std::endl; + } + } + } + else if (l_key == xlpmg::CONTINUE_SIMULATION.key) { - std::cout << "Warning: Did not start simulator because it is still preparing or calculating." << std::endl; + std::cout << "Continue simulation" << std::endl; + simulator->setPausingStatus(false); } - else + else if (l_key == xlpmg::RESET_SIMULATOR.key) { + exitSimulationThread(); if (canRunThread()) { - m_simulationThread = std::thread(&tsunami_lab::Simulator::start, simulator, l_config); + m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator); } else { - std::cout << "Warning: Did not start simulator because it is still running." << std::endl; + std::cout << "Warning: Could not reset because the simulation is still running." 
<< std::endl; } } - } - else if (l_key == xlpmg::KILL_SIMULATION.key) - { - exitSimulationThread(); - } - else if (l_key == xlpmg::GET_SYSTEM_INFORMATION.key) - { - xlpmg::Message l_response = {xlpmg::SERVER_RESPONSE, "system_information"}; - json l_data; - l_data["USED_RAM"] = l_usedRAM; - l_data["TOTAL_RAM"] = l_totalRAM; - l_data["CPU_USAGE"] = l_cpuUsage; - l_response.args = l_data; - l_communicator.sendToClient(xlpmg::messageToJsonString(l_response), false); - } - else if (l_key == xlpmg::COMPILE.key) - { - // Shutdown server - m_EXIT = true; - l_communicator.stopServer(); - exitSimulationThread(); - - std::string env = l_args.value("ENV", ""); // environment var - std::string opt = l_args.value("OPT", ""); // compiler opt - - // compile - exec("chmod +x scripts/compile-bash.sh"); - exec("./scripts/compile-bash.sh \"" + env + "\" \"" + opt + "\" &"); - } - else if (l_key == xlpmg::COMPILE_RUN_BASH.key) - { - // Shutdown server - m_EXIT = true; - l_communicator.stopServer(); - exitSimulationThread(); - - std::string l_env = l_args.value("ENV", ""); // environment var - std::string l_opt = l_args.value("OPT", ""); // compiler opt - int l_port = l_args.value("POR", 8080); - - // compile - exec("chmod +x scripts/compile-bash.sh"); - exec("./scripts/compile-bash.sh \"" + l_env + "\" \"" + l_opt + "\""); - - // run - exec("chmod +x run-bash.sh"); - exec("./run-bash.sh " + std::to_string(l_port) + " &"); - } - else if (l_key == xlpmg::COMPILE_RUN_SBATCH.key) - { - // Shutdown server - m_EXIT = true; - l_communicator.stopServer(); - exitSimulationThread(); - - std::string l_env = l_args.value("ENV", ""); // environment var - std::string l_opt = l_args.value("OPT", ""); // compiler opt - int l_port = l_args.value("POR", 8080); - - std::string l_job = l_args.value("JOB", ""); - std::string l_out = l_args.value("OUT", ""); - std::string l_err = l_args.value("ERR", ""); - std::string l_tim = l_args.value("TIM", ""); - - // compile - exec("chmod +x 
scripts/compile-bash.sh");
-      exec("./scripts/compile-bash.sh \"" + l_env + "\" \"" + l_opt + "\"");
-
-      // generate sbatch
-      exec("chmod +x scripts/generateSbatch.sh");
-      exec("./scripts/generateSbatch.sh " + l_job + " " + l_out + " " + l_err + " " + l_tim + " > run-sbatch.sh");
-
-      // run
-      exec("chmod +x run-sbatch.sh");
-      exec("sbatch run-sbatch.sh " + std::to_string(l_port));
-    }
-    else if (l_key == xlpmg::SET_READ_BUFFER_SIZE.key)
-    {
-      l_communicator.setReadBufferSize(l_args);
-    }
-    else if (l_key == xlpmg::SET_SEND_BUFFER_SIZE.key)
-    {
-      l_communicator.setSendBufferSize(l_args);
-    }
-    else if (l_key == xlpmg::SEND_FILE.key)
-    {
-      std::vector<uint8_t> l_byteVector = l_args["data"]["bytes"];
-      auto l_writeFile = std::fstream(l_args.value("path", ""), std::ios::out | std::ios::binary);
-      l_writeFile.write((char *)&l_byteVector[0], l_byteVector.size());
-      l_writeFile.close();
-    }
-    else if (l_key == xlpmg::RECV_FILE.key)
-    {
-      std::string l_file = l_args.value("path", "");
-      std::string l_fileDestination = l_args.value("pathDestination", "");
-
-      if (l_file.length() > 0 && l_fileDestination.length() > 0)
+        else if (l_key == xlpmg::TOGGLE_FILEIO.key)
        {
-        xlpmg::Message l_response = {xlpmg::SERVER_RESPONSE, "file_data"};
-        json l_arguments;
-        l_arguments["path"] = l_fileDestination;
-
-        std::ifstream l_fileData(l_file, std::ios::binary);
-        l_fileData.unsetf(std::ios::skipws);
-        std::streampos l_fileSize;
-        l_fileData.seekg(0, std::ios::end);
-        l_fileSize = l_fileData.tellg();
-        l_fileData.seekg(0, std::ios::beg);
-        std::vector<uint8_t> vec;
-        vec.reserve(l_fileSize);
-        vec.insert(vec.begin(),
-                   std::istream_iterator<uint8_t>(l_fileData),
-                   std::istream_iterator<uint8_t>());
-        l_arguments["data"] = json::binary(vec);
-        l_response.args = l_arguments;
-        l_communicator.sendToClient(xlpmg::messageToJsonString(l_response));
+          if (l_args == "true")
+          {
+            simulator->toggleFileIO(true);
+          }
+          else if (l_args == "false")
+          {
+            simulator->toggleFileIO(false);
+          }
        }
      }
-    else if (l_key == xlpmg::CONTINUE_SIMULATION.key)
-    {
-      std::cout << "Continue simulation" << std::endl;
-      simulator->setPausingStatus(false);
-    }
-    else if (l_key == xlpmg::PAUSE_SIMULATION.key)
+      // MEDIUM
+      else if (l_urgency == xlpmg::MEDIUM)
      {
-      std::cout << "Pause simulation" << std::endl;
-      simulator->setPausingStatus(true);
-    }
-  }
-  //-------------------------------------------//
-  //--------------FUNCTION CALLS---------------//
-  //-------------------------------------------//
-  else if (l_type == xlpmg::FUNCTION_CALL)
-  {
+        if (l_key == xlpmg::COMPILE.key)
+        {
+          // Shutdown server
+          m_EXIT = true;
+          l_communicator.stopServer();
+          exitSimulationThread();
-    if (l_key == xlpmg::RESET_SIMULATOR.key)
-    {
-      exitSimulationThread();
-      if (canRunThread())
+          std::string env = l_args.value("ENV", ""); // environment var
+          std::string opt = l_args.value("OPT", ""); // compiler opt
+
+          // compile
+          exec("chmod +x scripts/compile-bash.sh");
+          exec("./scripts/compile-bash.sh \"" + env + "\" \"" + opt + "\" &");
+        }
+        else if (l_key == xlpmg::COMPILE_RUN_BASH.key)
        {
-        m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator);
+          // Shutdown server
+          m_EXIT = true;
+          l_communicator.stopServer();
+          exitSimulationThread();
+
+          std::string l_env = l_args.value("ENV", ""); // environment var
+          std::string l_opt = l_args.value("OPT", ""); // compiler opt
+          int l_port = l_args.value("POR", 8080);
+
+          // compile
+          exec("chmod +x scripts/compile-bash.sh");
+          exec("./scripts/compile-bash.sh \"" + l_env + "\" \"" + l_opt + "\"");
+
+          // run
+          exec("chmod +x run-bash.sh");
+          exec("./run-bash.sh " + std::to_string(l_port) + " &");
        }
-      else
+        else if (l_key == xlpmg::COMPILE_RUN_SBATCH.key)
        {
-        std::cout << "Warning: Could not reset because the simulation is still running." << std::endl;
+          // Shutdown server
+          m_EXIT = true;
+          l_communicator.stopServer();
+          exitSimulationThread();
+
+          std::string l_env = l_args.value("ENV", ""); // environment var
+          std::string l_opt = l_args.value("OPT", ""); // compiler opt
+          int l_port = l_args.value("POR", 8080);
+
+          std::string l_job = l_args.value("JOB", "");
+          std::string l_out = l_args.value("OUT", "");
+          std::string l_err = l_args.value("ERR", "");
+          std::string l_tim = l_args.value("TIM", "");
+
+          // compile
+          exec("chmod +x scripts/compile-bash.sh");
+          exec("./scripts/compile-bash.sh \"" + l_env + "\" \"" + l_opt + "\"");
+
+          // generate sbatch
+          exec("chmod +x scripts/generateSbatch.sh");
+          exec("./scripts/generateSbatch.sh " + l_job + " " + l_out + " " + l_err + " " + l_tim + " > run-sbatch.sh");
+
+          // run
+          exec("chmod +x run-sbatch.sh");
+          exec("sbatch run-sbatch.sh " + std::to_string(l_port));
        }
-    }
-    else if (l_key == xlpmg::GET_TIME_VALUES.key)
-    {
-      xlpmg::Message response = {xlpmg::SERVER_RESPONSE, "get_time_values"};
-      tsunami_lab::t_idx l_currentTimeStep, l_maxTimeStep;
-      tsunami_lab::t_real l_timePerTimeStep;
-      simulator->getTimeValues(l_currentTimeStep, l_maxTimeStep, l_timePerTimeStep);
-      json l_data;
-      l_data["currentTimeStep"] = l_currentTimeStep;
-      l_data["maxTimeStep"] = l_maxTimeStep;
-      l_data["timePerTimeStep"] = l_timePerTimeStep;
-      if (simulator->isCalculating())
+        else if (l_key == xlpmg::DELETE_CHECKPOINTS.key)
        {
-        l_data["status"] = "CALCULATING";
+          simulator->deleteCheckpoints();
        }
-      else if (simulator->isPreparing())
+        else if (l_key == xlpmg::DELETE_STATIONS.key)
        {
-        l_data["status"] = "PREPARING";
+          simulator->deleteStations();
        }
-      else if (simulator->isResetting())
+        else if (l_key == xlpmg::SET_OFFSET.key)
        {
-        l_data["status"] = "RESETTING";
+          tsunami_lab::t_real l_offsetX = l_args.value("offsetX", 0);
+          tsunami_lab::t_real l_offsetY = l_args.value("offsetY", 0);
+          simulator->setOffset(l_offsetX, l_offsetY);
        }
-      else
+        else if (l_key == xlpmg::SET_CELL_AMOUNT.key)
        {
-        l_data["status"] = "IDLE";
+          tsunami_lab::t_idx l_nCellsX = l_args["cellsX"];
+          tsunami_lab::t_idx l_nCellsY = l_args["cellsY"];
+          simulator->setCellAmount(l_nCellsX, l_nCellsY);
        }
-      response.args = l_data;
-      l_communicator.sendToClient(xlpmg::messageToJsonString(response), false);
      }
-    else if (l_key == xlpmg::TOGGLE_FILEIO.key)
+      // LOW
+      else if (l_urgency == xlpmg::LOW)
      {
-      if (l_args == "true")
+        if (l_key == xlpmg::SEND_FILE.key)
        {
-        simulator->toggleFileIO(true);
+          std::vector<uint8_t> l_byteVector = l_args["data"]["bytes"];
+          auto l_writeFile = std::fstream(l_args.value("path", ""), std::ios::out | std::ios::binary);
+          l_writeFile.write((char *)&l_byteVector[0], l_byteVector.size());
+          l_writeFile.close();
        }
-      else if (l_args == "false")
+        else if (l_key == xlpmg::LOAD_CONFIG_JSON.key)
        {
-        simulator->toggleFileIO(false);
+          simulator->loadConfigDataJson(l_args);
+          if (canRunThread())
+          {
+            m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator);
+          }
+          else
+          {
+            std::cout << "Warning: Could not reset because the simulation is still running." << std::endl;
+          }
        }
-    }
-    else if (l_key == xlpmg::GET_HEIGHT_DATA.key)
-    {
-      xlpmg::Message l_heightDataMsg = {xlpmg::SERVER_RESPONSE, "height_data", nullptr};
-
-      // get data from simulation
-      if (simulator->getWaveProp() != nullptr)
+        else if (l_key == xlpmg::LOAD_CONFIG_FILE.key)
        {
-        tsunami_lab::patches::WavePropagation *l_waveprop = simulator->getWaveProp();
-        const tsunami_lab::t_real *l_heightData = l_waveprop->getHeight();
-        const tsunami_lab::t_real *l_bathymetryData = l_waveprop->getBathymetry();
-        // calculate array size
-        tsunami_lab::t_idx l_ncellsX, l_ncellsY;
-        simulator->getCellAmount(l_ncellsX, l_ncellsY);
-        for (tsunami_lab::t_idx y = 0; y < l_ncellsY; y++)
+          simulator->loadConfigDataFromFile(l_args);
+          if (canRunThread())
          {
-          for (tsunami_lab::t_idx x = 0; x < l_ncellsX; x++)
-          {
-            l_heightDataMsg.args.push_back(l_heightData[x + l_waveprop->getStride() * y] + l_bathymetryData[x + l_waveprop->getStride() * y]);
-          }
+            m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator);
+          }
+          else
+          {
+            std::cout << "Warning: Could not reset because the simulation is still running." << std::endl;
          }
        }
-      l_communicator.sendToClient(xlpmg::messageToJsonString(l_heightDataMsg));
+        else if (l_key == xlpmg::SET_READ_BUFFER_SIZE.key)
+        {
+          l_communicator.setReadBufferSize(l_args);
+        }
+        else if (l_key == xlpmg::SET_SEND_BUFFER_SIZE.key)
+        {
+          l_communicator.setSendBufferSize(l_args);
+        }
      }
-    else if (l_key == xlpmg::GET_BATHYMETRY_DATA.key)
+    }
+    ////////////////////////////////
+    //      EXPECT_RESPONSE       //
+    ////////////////////////////////
+    else if (l_expectation == xlpmg::EXPECT_RESPONSE)
+    {
+      // CRITICAL
+      if (l_urgency == xlpmg::CRITICAl)
      {
-      xlpmg::Message l_bathyDataMsg = {xlpmg::SERVER_RESPONSE, "bathymetry_data", nullptr};
-
-      // get data from simulation
-      if (simulator->getWaveProp() != nullptr)
+        if (l_key == xlpmg::GET_TIME_VALUES.key)
        {
-        tsunami_lab::patches::WavePropagation *l_waveprop = simulator->getWaveProp();
-        const tsunami_lab::t_real *l_bathymetryData = l_waveprop->getBathymetry();
-        // calculate array size
-        tsunami_lab::t_idx l_ncellsX, l_ncellsY;
-        simulator->getCellAmount(l_ncellsX, l_ncellsY);
-        for (tsunami_lab::t_idx y = 0; y < l_ncellsY; y++)
+          xlpmg::Message response = xlpmg::SERVER_RESPONSE;
+          response.key = "time_values";
+          tsunami_lab::t_idx l_currentTimeStep, l_maxTimeStep;
+          tsunami_lab::t_real l_timePerTimeStep;
+          simulator->getTimeValues(l_currentTimeStep, l_maxTimeStep, l_timePerTimeStep);
+          json l_data;
+          l_data["currentTimeStep"] = l_currentTimeStep;
+          l_data["maxTimeStep"] = l_maxTimeStep;
+          l_data["timePerTimeStep"] = l_timePerTimeStep;
+          if (simulator->isCalculating())
          {
-          for (tsunami_lab::t_idx x = 0; x < l_ncellsX; x++)
-          {
-            l_bathyDataMsg.args.push_back(l_bathymetryData[x + l_waveprop->getStride() * y]);
-          }
+            l_data["status"] = "CALCULATING";
+          }
+          else if (simulator->isPreparing())
+          {
+            l_data["status"] = "PREPARING";
+          }
+          else if (simulator->isResetting())
+          {
+            l_data["status"] = "RESETTING";
          }
+          else
+          {
+            l_data["status"] = "IDLE";
+          }
+          response.args = l_data;
+          l_communicator.sendToClient(xlpmg::messageToJsonString(response), false);
        }
-      l_communicator.sendToClient(xlpmg::messageToJsonString(l_bathyDataMsg));
-    }
-    else if (l_key == xlpmg::LOAD_CONFIG_JSON.key)
-    {
-      simulator->loadConfigDataJson(l_args);
-      if (canRunThread())
+        else if (l_key == xlpmg::GET_SYSTEM_INFORMATION.key)
        {
-        m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator);
+          xlpmg::Message l_response = xlpmg::SERVER_RESPONSE;
+          l_response.key = "system_information";
+          json l_data;
+          l_data["USED_RAM"] = l_usedRAM;
+          l_data["TOTAL_RAM"] = l_totalRAM;
+          l_data["CPU_USAGE"] = l_cpuUsage;
+          l_response.args = l_data;
+          l_communicator.sendToClient(xlpmg::messageToJsonString(l_response), false);
        }
-      else
+
+        else if (l_key == xlpmg::GET_SIMULATION_SIZES.key)
        {
-        std::cout << "Warning: Could not reset because the simulation is still running." << std::endl;
+          xlpmg::Message l_msg = xlpmg::SERVER_RESPONSE;
+          l_msg.key = "simulation_sizes";
+          json l_data;
+          tsunami_lab::t_idx l_ncellsX, l_ncellsY;
+          tsunami_lab::t_real l_simulationSizeX, l_simulationSizeY, l_offsetX, l_offsetY;
+          simulator->getCellAmount(l_ncellsX, l_ncellsY);
+          simulator->getSimulationSize(l_simulationSizeX, l_simulationSizeY);
+          simulator->getSimulationOffset(l_offsetX, l_offsetY);
+          l_data["cellsX"] = l_ncellsX;
+          l_data["cellsY"] = l_ncellsY;
+          l_data["simulationSizeX"] = l_simulationSizeX;
+          l_data["simulationSizeY"] = l_simulationSizeY;
+          l_data["offsetX"] = l_offsetX;
+          l_data["offsetY"] = l_offsetY;
+          l_msg.args = l_data;
+          l_communicator.sendToClient(xlpmg::messageToJsonString(l_msg));
        }
-    }
-    else if (l_key == xlpmg::LOAD_CONFIG_FILE.key)
+      }
+      // HIGH
+      else if (l_urgency == xlpmg::HIGH)
      {
-      simulator->loadConfigDataFromFile(l_args);
-      if (canRunThread())
+        if (l_key == xlpmg::GET_HEIGHT_DATA.key)
        {
-        m_simulationThread = std::thread(&tsunami_lab::Simulator::resetSimulator, simulator);
+          xlpmg::Message l_heightDataMsg = xlpmg::SERVER_RESPONSE;
+          l_heightDataMsg.key = "height_data";
+          json l_data;
+
+          // get data from simulation
+          if (simulator->getWaveProp() != nullptr)
+          {
+            tsunami_lab::patches::WavePropagation *l_waveprop = simulator->getWaveProp();
+            const tsunami_lab::t_real *l_heightData = l_waveprop->getHeight();
+            const tsunami_lab::t_real *l_bathymetryData = l_waveprop->getBathymetry();
+            // calculate array size
+            tsunami_lab::t_idx l_ncellsX, l_ncellsY;
+            simulator->getCellAmount(l_ncellsX, l_ncellsY);
+            for (tsunami_lab::t_idx y = 0; y < l_ncellsY; y++)
+            {
+              for (tsunami_lab::t_idx x = 0; x < l_ncellsX; x++)
+              {
+                l_data.push_back(l_heightData[x + l_waveprop->getStride() * y] + l_bathymetryData[x + l_waveprop->getStride() * y]);
+              }
+            }
+          }
+          l_heightDataMsg.args = l_data;
+          l_communicator.sendToClient(xlpmg::messageToJsonString(l_heightDataMsg));
        }
-      else
+        else if (l_key == xlpmg::GET_BATHYMETRY_DATA.key)
        {
-        std::cout << "Warning: Could not reset because the simulation is still running." << std::endl;
+          xlpmg::Message l_bathyDataMsg = xlpmg::SERVER_RESPONSE;
+          l_bathyDataMsg.key = "bathymetry_data";
+          json l_data;
+
+          // get data from simulation
+          if (simulator->getWaveProp() != nullptr)
+          {
+            tsunami_lab::patches::WavePropagation *l_waveprop = simulator->getWaveProp();
+            const tsunami_lab::t_real *l_bathymetryData = l_waveprop->getBathymetry();
+            // calculate array size
+            tsunami_lab::t_idx l_ncellsX, l_ncellsY;
+            simulator->getCellAmount(l_ncellsX, l_ncellsY);
+            for (tsunami_lab::t_idx y = 0; y < l_ncellsY; y++)
+            {
+              for (tsunami_lab::t_idx x = 0; x < l_ncellsX; x++)
+              {
+                l_data.push_back(l_bathymetryData[x + l_waveprop->getStride() * y]);
+              }
+            }
+          }
+          l_bathyDataMsg.args = l_data;
+          l_communicator.sendToClient(xlpmg::messageToJsonString(l_bathyDataMsg));
        }
      }
-    else if (l_key == xlpmg::DELETE_CHECKPOINTS.key)
-    {
-      simulator->deleteCheckpoints();
-    }
-    else if (l_key == xlpmg::DELETE_STATIONS.key)
-    {
-      simulator->deleteStations();
-    }
-    else if (l_key == xlpmg::GET_SIMULATION_SIZES.key)
+      // MEDIUM
+      else if (l_urgency == xlpmg::MEDIUM)
      {
-      xlpmg::Message l_msg = {xlpmg::SERVER_RESPONSE, "simulation_sizes", nullptr};
-      tsunami_lab::t_idx l_ncellsX, l_ncellsY;
-      tsunami_lab::t_real l_simulationSizeX, l_simulationSizeY, l_offsetX, l_offsetY;
-      simulator->getCellAmount(l_ncellsX, l_ncellsY);
-      simulator->getSimulationSize(l_simulationSizeX, l_simulationSizeY);
-      simulator->getSimulationOffset(l_offsetX, l_offsetY);
-      l_msg.args["cellsX"] = l_ncellsX;
-      l_msg.args["cellsY"] = l_ncellsY;
-      l_msg.args["simulationSizeX"] = l_simulationSizeX;
-      l_msg.args["simulationSizeY"] = l_simulationSizeY;
-      l_msg.args["offsetX"] = l_offsetX;
-      l_msg.args["offsetY"] = l_offsetY;
-      l_communicator.sendToClient(xlpmg::messageToJsonString(l_msg));
-    }
-    else if (l_key == xlpmg::SET_OFFSET.key)
-    {
-      tsunami_lab::t_real l_offsetX = l_args.value("offsetX", 0);
-      tsunami_lab::t_real l_offsetY = l_args.value("offsetY", 0);
-      simulator->setOffset(l_offsetX, l_offsetY);
+        if (l_key == xlpmg::RECV_FILE.key)
+        {
+          std::string l_file = l_args.value("path", "");
+          std::string l_fileDestination = l_args.value("pathDestination", "");
+
+          if (l_file.length() > 0 && l_fileDestination.length() > 0)
+          {
+            xlpmg::Message l_response = xlpmg::SERVER_RESPONSE;
+            l_response.key = "file_data";
+            json l_arguments;
+            l_arguments["path"] = l_fileDestination;
+
+            std::ifstream l_fileData(l_file, std::ios::binary);
+            l_fileData.unsetf(std::ios::skipws);
+            std::streampos l_fileSize;
+            l_fileData.seekg(0, std::ios::end);
+            l_fileSize = l_fileData.tellg();
+            l_fileData.seekg(0, std::ios::beg);
+            std::vector<uint8_t> vec;
+            vec.reserve(l_fileSize);
+            vec.insert(vec.begin(),
+                       std::istream_iterator<uint8_t>(l_fileData),
+                       std::istream_iterator<uint8_t>());
+            l_arguments["data"] = json::binary(vec);
+            l_response.args = l_arguments;
+            l_communicator.sendToClient(xlpmg::messageToJsonString(l_response));
+          }
+        }
      }
-    else if (l_key == xlpmg::SET_CELL_AMOUNT.key)
+      // LOW
+      else if (l_urgency == xlpmg::LOW)
      {
-      tsunami_lab::t_idx l_nCellsX = l_args["cellsX"];
-      tsunami_lab::t_idx l_nCellsY = l_args["cellsY"];
-      simulator->setCellAmount(l_nCellsX, l_nCellsY);
      }
    }
  }