Make GLM include directory portable. #1074

Merged · 2 commits · Jul 31, 2023
5 changes: 3 additions & 2 deletions README.md
@@ -175,7 +175,7 @@ cmake --build . --target all
| `FLAMEGPU_RTC_DISK_CACHE` | `ON`/`OFF` | Enable/Disable caching of RTC functions to disk. Default `ON`. |
| `FLAMEGPU_VERBOSE_PTXAS` | `ON`/`OFF` | Enable verbose PTXAS output during compilation. Default `OFF`. |
| `FLAMEGPU_CURAND_ENGINE` | `XORWOW` / `PHILOX` / `MRG` | Select the CUDA random engine. Default `XORWOW` |
| `FLAMEGPU_ENABLE_GLM` | `ON`/`OFF` | Experimental feature for GLM type support in RTC models. Default `OFF`. |
| `FLAMEGPU_ENABLE_GLM` | `ON`/`OFF` | Experimental feature for GLM type support within models. Default `OFF`. |
| `FLAMEGPU_SHARE_USAGE_STATISTICS` | `ON`/`OFF` | Share usage statistics ([telemetry](https://docs.flamegpu.com/guide/telemetry)) to support evidencing usage/impact of the software. Default `ON`. |
| `FLAMEGPU_TELEMETRY_SUPPRESS_NOTICE` | `ON`/`OFF` | Suppress notice encouraging telemetry to be enabled, which is emitted once per binary execution if telemetry is disabled. Defaults to `OFF`, or the value of a system environment variable of the same name. |
| `FLAMEGPU_TELEMETRY_TEST_MODE` | `ON`/`OFF` | Submit telemetry values to the test mode of TelemetryDeck. Intended for use during development of FLAMEGPU rather than use. Defaults to `OFF`, or the value of a system environment variable of the same name.|
@@ -247,7 +247,8 @@ Several environmental variables are used or required by FLAME GPU 2.
| `FLAMEGPU_RTC_INCLUDE_DIRS` | A list of include directories that should be provided to the RTC compiler, these should be separated using `;` (Windows) or `:` (Linux). If this variable is not found, the working directory will be used as a default. |
| `FLAMEGPU_SHARE_USAGE_STATISTICS` | Enable / Disable sending of telemetry data, when set to `ON` or `OFF` respectively. |
| `FLAMEGPU_TELEMETRY_SUPPRESS_NOTICE` | Enable / Disable a once per execution notice encouraging the use of telemetry, if telemetry is disabled, when set to `ON` or `OFF` respectively. |
| `FLAMEGPU_TELEMETRY_TEST_MODE` | Enable / Disable sending telemetry data to a test endpoint, for FLAMEGPU develepoment to separate user statistics from developer statistics. Set to `ON` or `OFF`. |
| `FLAMEGPU_TELEMETRY_TEST_MODE` | Enable / Disable sending telemetry data to a test endpoint, for FLAMEGPU development to separate user statistics from developer statistics. Set to `ON` or `OFF`. |
| `FLAMEGPU_GLM_INC_DIR` | When RTC compilation is required and GLM support has been enabled, if the location of the GLM include directory cannot be found it must be specified using the `FLAMEGPU_GLM_INC_DIR` environment variable. |

## Running the Test Suite(s)

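The new `FLAMEGPU_GLM_INC_DIR` entry in the table above is easiest to see in use from Python. Below is a minimal sketch, assuming the installed module is named `pyflamegpu` and using placeholder paths, of overriding the GLM include directory before the module is imported; after this PR the variable is optional, because the wheel bundles GLM and `__init__.py` points the variable at the bundled copy when it is unset.

```python
import os

# Placeholder path: any directory whose layout contains glm/glm.hpp.
os.environ["FLAMEGPU_GLM_INC_DIR"] = "/opt/glm"
# Optional extra RTC include directories: ':'-separated on Linux, ';' on Windows.
os.environ["FLAMEGPU_RTC_INCLUDE_DIRS"] = "/opt/my_headers"

import pyflamegpu  # assumed module name; normally it sets FLAMEGPU_GLM_INC_DIR itself
```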
50 changes: 46 additions & 4 deletions src/flamegpu/detail/JitifyCache.cu
@@ -152,12 +152,12 @@ std::string getFLAMEGPUIncludeDir(std::string &env_var_used) {
break;
}
} catch (...) { }
// Throw if the value is not empty, but it does not exist. Outside the try catch excplicityly.
// Throw if the value is not empty, but it does not exist. Outside the try catch explicitly.
THROW flamegpu::exception::InvalidFilePath("Error environment variable %s (%s) does not contain flamegpu/flamegpu.h. Please correct this environment variable.", env_var.c_str(), env_value.c_str());
}
}

// If no appropriate environmental variables were found, check upwards for N levels (assuming the default filestructure is in use)
// If no appropriate environmental variables were found, check upwards for N levels (assuming the default file structure is in use)
if (include_dir_str.empty()) {
// Start with the current working directory
std::filesystem::path test_dir(".");
@@ -209,13 +209,54 @@ break_flamegpu_inc_dir_loop:
return include_dir_str;
}

#ifdef FLAMEGPU_USE_GLM
/**
* Get the GLM include directory via the environment variables.
* @return the GLM include directory.
*/
std::string getGLMIncludeDir() {
const std::string env_var = "FLAMEGPU_GLM_INC_DIR";
const std::string test_file = "glm/glm.hpp";
// Check the environment variable to see whether glm/glm.hpp exists
{
// If the environment variable exists
std::string env_value = std::getenv(env_var.c_str()) ? std::getenv(env_var.c_str()) : "";
// If it's a value, check if the path exists, and if any expected files are found.
if (!env_value.empty()) {
std::filesystem::path check_file = std::filesystem::path(env_value) / test_file;
// Use try catch to suppress file permission exceptions etc
try {
if (std::filesystem::exists(check_file)) {
return env_value;
}
}
catch (...) {}
// Throw if the value is not empty, but it does not exist. Outside the try catch explicitly.
THROW flamegpu::exception::InvalidFilePath("Error environment variable %s (%s) does not contain %s. Please correct this environment variable.", env_var.c_str(), env_value.c_str(), test_file.c_str());
}
}

// If no appropriate environmental variables were found, check the compile time path to GLM
std::filesystem::path check_file = std::filesystem::path(FLAMEGPU_GLM_PATH) / test_file;
// Use try catch to suppress file permission exceptions etc
try {
if (std::filesystem::exists(check_file)) {
return FLAMEGPU_GLM_PATH;
}
}
catch (...) {}
// Throw if header wasn't found. Outside the try catch explicitly.
THROW flamegpu::exception::InvalidAgentFunc("Error compiling runtime agent function: Unable to automatically determine location of GLM include directory and %s environment variable not set", env_var.c_str());
}
#endif

/**
* Confirm that include directory version header matches the version of the static library.
* This only compares up to the pre-release version number. Build metadata is only used for the RTC cache.
* @param flamegpuIncludeDir path to the flamegpu include directory to check.
* @return boolean indicator of success.
*/
bool confirmFLAMEGPUHeaderVersion(const std::string flamegpuIncludeDir, const std::string envVariable) {
bool confirmFLAMEGPUHeaderVersion(const std::string &flamegpuIncludeDir, const std::string &envVariable) {
static bool header_version_confirmed = false;

if (!header_version_confirmed) {
@@ -293,7 +334,8 @@ std::unique_ptr<KernelInstantiation> JitifyCache::compileKernel(const std::strin
#ifdef FLAMEGPU_USE_GLM
// GLM headers increase build time ~5x, so only enable glm if user is using it
if (kernel_src.find("glm") != std::string::npos) {
options.push_back(std::string("-I") + FLAMEGPU_GLM_PATH);
static const std::string glm_include_dir = getGLMIncludeDir();
options.push_back(std::string("-I") + glm_include_dir);
options.push_back(std::string("-DFLAMEGPU_USE_GLM"));
}
#endif
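To summarise the new `getGLMIncludeDir()` above: the `FLAMEGPU_GLM_INC_DIR` environment variable wins when it is set (and must actually contain `glm/glm.hpp`), otherwise the compile-time `FLAMEGPU_GLM_PATH` is tried, and failing both an exception is thrown. The same resolution order, sketched in Python purely as an illustration of the logic rather than as part of the change:

```python
import os
from pathlib import Path

def resolve_glm_include_dir(compile_time_glm_path: str) -> str:
    """Illustrative mirror of getGLMIncludeDir(): env var first, then build-time path."""
    env_value = os.environ.get("FLAMEGPU_GLM_INC_DIR", "")
    if env_value:
        # A set-but-invalid variable is an error, not a silent fallback.
        if (Path(env_value) / "glm/glm.hpp").exists():
            return env_value
        raise FileNotFoundError(f"FLAMEGPU_GLM_INC_DIR ({env_value}) does not contain glm/glm.hpp")
    # Fall back to the path baked in at build time (FLAMEGPU_GLM_PATH in the C++).
    if (Path(compile_time_glm_path) / "glm/glm.hpp").exists():
        return compile_time_glm_path
    raise FileNotFoundError("Unable to determine the GLM include directory; set FLAMEGPU_GLM_INC_DIR")
```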
56 changes: 51 additions & 5 deletions swig/python/CMakeLists.txt
@@ -156,20 +156,48 @@ set(PYTHON_CODEGEN_SRC_FILES
${CMAKE_CURRENT_SOURCE_DIR}/codegen/codegen.py
)

# cleanup the flamegpu include file paths, so they're relative (begin `include/`) and seperated by `", "`
# cleanup the flamegpu include file paths, so they're relative (begin `include/`) and separated by `", "`
foreach(FLAMEGPU_INC_FILE IN LISTS FLAMEGPU_INCLUDE)
file(RELATIVE_PATH FLAMEGPU_INC_FILE_CLEAN "${FLAMEGPU_ROOT}" "${FLAMEGPU_INC_FILE}")
set(FLAMEGPU_INCLUDE_CLEAN "${FLAMEGPU_INCLUDE_CLEAN}'${FLAMEGPU_INC_FILE_CLEAN}', ")
unset(FLAMEGPU_INC_FILE_CLEAN)
endforeach()

# cleanup code generator module files, so they're seperated by `", "` (file list is already using relative paths)
# cleanup code generator module files, so they're separated by `", "` (file list is already using relative paths)
foreach(PYTHON_CODEGEN_FILE IN LISTS PYTHON_CODEGEN_SRC_FILES)
file(RELATIVE_PATH PYTHON_CODEGEN_FILE_CLEAN "${CMAKE_CURRENT_SOURCE_DIR}" "${PYTHON_CODEGEN_FILE}")
set(FLAMEGPU_CODEGEN_INCLUDE_CLEAN "${FLAMEGPU_CODEGEN_INCLUDE_CLEAN}'${FPYTHON_CODEGEN_FILE_CLEAN}', ")
unset(PYTHON_CODEGEN_FILE_CLEAN)
endforeach()

# Locate and cleanup GLM include files, so they're separated by `", "`
if(FLAMEGPU_ENABLE_GLM)
FetchContent_GetProperties(glm POPULATED glm_POPULATED SOURCE_DIR glm_SOURCE_DIR)
if (glm_POPULATED)
# Locate the root header to find the header directory
find_path(glm_ROOT
NAMES
glm/glm.hpp
PATHS
${glm_SOURCE_DIR}
NO_CACHE
)
# Build a list of all files in include dir
FILE(GLOB_RECURSE glm_INC_FILES "${glm_ROOT}glm/*")
# Add license to that list
list(APPEND glm_INC_FILES "${glm_ROOT}copying.txt")
# Clean, add separator and setup file copy
unset(glm_POPULATED)
unset(glm_SOURCE_DIR)
foreach(glm_INC_FILE IN LISTS glm_INC_FILES)
file(RELATIVE_PATH glm_INC_FILE_CLEAN "${glm_ROOT}" "${glm_INC_FILE}")
set(GLM_INCLUDE_CLEAN "${GLM_INCLUDE_CLEAN}'glm/${glm_INC_FILE_CLEAN}', ") # This var is used by setup.py template
endforeach()
else()
message(FATAL_ERROR "Python cmake can't find glm")
endif()
endif()

# Build a list of OS specific python package_data entries.
set(FLAMEGPU_PYTHON_PACKAGE_DATA_OS_SPECIFIC "")
if (FLAMEGPU_VISUALISATION)
@@ -232,7 +260,7 @@ flamegpu_search_python_module(wheel)
flamegpu_search_python_module(build)

## ------
# Define custom commands to produce files in the current cmake directory, a custom target which the user invokes to build the python wheel with appropraite dependencies configured, and any post-build steps required.
# Define custom commands to produce files in the current cmake directory, a custom target which the user invokes to build the python wheel with appropriate dependencies configured, and any post-build steps required.
## ------
set(PYTHON_FLAMEGPU_LIB_OUTPUT_MODULE_DIR "${PYTHON_LIB_OUTPUT_DIRECTORY}/src/${PYTHON_MODULE_NAME}")
# Only expliclty create the directory under linux, msbuild emits warnings and is fine without.
@@ -258,7 +286,7 @@ foreach(FLAMEGPU_INC_FILE IN LISTS FLAMEGPU_INCLUDE)
endforeach()

# Create the codegen directory, and copy the codegen files in.
# Only expliclty create the directory under linux, msbuild emits warnings and is fine without.
# Only explicitly create the directory under linux, msbuild emits warnings and is fine without.
if(NOT WIN32)
add_custom_command(
OUTPUT "${PYTHON_FLAMEGPU_LIB_OUTPUT_MODULE_DIR}/codegen"
@@ -267,7 +295,7 @@ if(NOT WIN32)
list(APPEND PYTHON_MODULE_TARGET_NAME_DEPENDS "${PYTHON_FLAMEGPU_LIB_OUTPUT_MODULE_DIR}/codegen")
endif()

# Copy each codegen file into the pthon module directory, and append the filename to the list of python wheel dependencies.
# Copy each codegen file into the python module directory, and append the filename to the list of python wheel dependencies.
foreach(PYTHON_CODEGEN_FILE IN LISTS PYTHON_CODEGEN_SRC_FILES)
file(RELATIVE_PATH PYTHON_CODEGEN_FILE_CLEAN "${CMAKE_CURRENT_SOURCE_DIR}" "${PYTHON_CODEGEN_FILE}")
set(PYTHON_FLAMEGPU_LIB_OUTPUT_CODEGEN_FILE "${PYTHON_FLAMEGPU_LIB_OUTPUT_MODULE_DIR}/${PYTHON_CODEGEN_FILE_CLEAN}")
@@ -281,6 +309,24 @@ foreach(PYTHON_CODEGEN_FILE IN LISTS PYTHON_CODEGEN_SRC_FILES)
unset(PYTHON_FLAMEGPU_LIB_OUTPUT_CODEGEN_FILE)
endforeach()

# Copy GLM files into the python module directory and append the filename to the list of python wheel dependencies.
if(FLAMEGPU_ENABLE_GLM)
foreach(glm_INC_FILE IN LISTS glm_INC_FILES)
file(RELATIVE_PATH glm_INC_FILE_CLEAN "${glm_ROOT}" "${glm_INC_FILE}")
set(PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE "${PYTHON_FLAMEGPU_LIB_OUTPUT_MODULE_DIR}/glm/${glm_INC_FILE_CLEAN}")
add_custom_command(
OUTPUT "${PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE}"
DEPENDS ${glm_INC_FILE}
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${glm_INC_FILE} ${PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE}
COMMENT "Copying ${glm_INC_FILE} to ${PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE}"
)
list(APPEND PYTHON_MODULE_TARGET_NAME_DEPENDS "${PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE}")
unset(PYTHON_FLAMEGPU_LIB_OUTPUT_glm_FILE)
endforeach()
unset(glm_ROOT)
unset(glm_INC_FILES)
endif()

# Copy the visualisation dlls if required, this must occur before the wheel is built
if (FLAMEGPU_VISUALISATION)
if(COMMAND flamegpu_visualiser_get_runtime_depenencies)
14 changes: 11 additions & 3 deletions swig/python/__init__.py.in
@@ -4,7 +4,7 @@ if not "FLAMEGPU_INC_DIR" in os.environ or not "FLAMEGPU2_INC_DIR" in os.environ
os.environ["FLAMEGPU_INC_DIR"] = str(pathlib.Path(__file__).resolve().parent / "include")
else:
print("@PYTHON_MODULE_NAME@ warning: env var 'FLAMEGPU_INC_DIR' is present, RTC headers may be incorrect.", file=sys.stderr)

# Some Windows users have dll load failed, because Python can't find nvrtc
# It appears due to a combination of Python and Anaconda versions
# Python 3.8+ requires DLL loads to be manually specified with os.add_dll_directory()
@@ -31,7 +31,15 @@ if os.name == 'nt' and hasattr(os, 'add_dll_directory') and callable(getattr(os,
# module version
__version__ = '@FLAMEGPU_VERSION_PYTHON@'

del os, sys, pathlib, subprocess
# Normal module stuff
__all__ = ["@PYTHON_MODULE_NAME@"]
from .@PYTHON_MODULE_NAME@ import *
from .@PYTHON_MODULE_NAME@ import *

# GLM delayed so we can check whether it was enabled
if GLM:
if not "FLAMEGPU_GLM_INC_DIR" in os.environ or not "FLAMEGPU_GLM_INC_DIR" in os.environ:
os.environ["FLAMEGPU_GLM_INC_DIR"] = str(pathlib.Path(__file__).resolve().parent / "glm")
else:
print("@PYTHON_MODULE_NAME@ warning: env var 'FLAMEGPU_GLM_INC_DIR' is present, GLM include path may be incorrect.", file=sys.stderr)

del os, sys, pathlib, subprocess
7 changes: 7 additions & 0 deletions swig/python/flamegpu.i
@@ -1175,4 +1175,11 @@ TEMPLATE_VARIABLE_INSTANTIATE_INTS(poisson, flamegpu::HostRandom::poisson)
#define SEATBELTS false
#else
#define SEATBELTS false
#endif

#ifdef FLAMEGPU_USE_GLM
#undef FLAMEGPU_USE_GLM
#define GLM true
#else
#define GLM false
#endif
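The `GLM` constant defined here is what the patched `__init__.py` above checks before pointing `FLAMEGPU_GLM_INC_DIR` at the bundled headers; user code can query the same flag. A small usage sketch, again assuming the module is named `pyflamegpu`:

```python
import pyflamegpu  # assumed module name

if pyflamegpu.GLM:
    print("Built with FLAMEGPU_ENABLE_GLM=ON; GLM types are available in agent functions.")
else:
    print("GLM support is disabled in this build.")
```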
2 changes: 1 addition & 1 deletion swig/python/setup.py.in
@@ -36,7 +36,7 @@ setup(
'Topic :: Scientific/Engineering',
],
package_data={
'@PYTHON_MODULE_NAME@':['$<TARGET_FILE_NAME:@PYTHON_SWIG_TARGET_NAME@>', @FLAMEGPU_CODEGEN_INCLUDE_CLEAN@@FLAMEGPU_INCLUDE_CLEAN@@FLAMEGPU_PYTHON_PACKAGE_DATA_OS_SPECIFIC@],
'@PYTHON_MODULE_NAME@':['$<TARGET_FILE_NAME:@PYTHON_SWIG_TARGET_NAME@>', @FLAMEGPU_CODEGEN_INCLUDE_CLEAN@@FLAMEGPU_INCLUDE_CLEAN@@FLAMEGPU_PYTHON_PACKAGE_DATA_OS_SPECIFIC@@GLM_INCLUDE_CLEAN@],
},
install_requires=[
'astpretty',
1 change: 0 additions & 1 deletion tests/test_cases/simulation/test_agent_vector.cu
@@ -389,7 +389,6 @@ TEST(AgentVectorTest, iterator_GLM) {
// Iterate vector
unsigned int i = 0;
for (AgentVector::Agent instance : pop) {
auto a = instance.getVariable<glm::uvec3>("uvec3");
ASSERT_EQ(instance.getVariable<glm::uvec3>("uvec3"), glm::uvec3(i + 3, i + 6, i));
++i;
}