Vision Simulation Test Framework

This is the framework I implemented during my PhD studies for developing and testing our algorithms relating to the real-time simulation of human vision.

Main features of the framework

  • 3D scene and camera management.
    • Utilizes a custom entity-component system.
    • Automatic library and source code discovery.
    • On-the-fly update and render graph building.
  • Custom command-line configuration implementation.
  • Logging with multiple output devices.
    • Console, files, in-memory.
    • Automatic scoped logging regions.
  • A job system for easy-to-use, thread-based distribution of work.
  • Rich editor interface with extensive debugging capabilities.
    • Highly customizable and themeable.
    • Dockable interface elements.
    • GPU object (shader, buffer, texture) inspection.
    • On-the-fly entity and material editing.
    • Extensive in-memory log inspector.
  • Key-frame animations.
    • Automatic recording of key-frames based on user interaction.
    • Real-time and lock-step playback with optional video output.
  • CPU and GPU profiling.
    • Automatic, scoped, nested regions (see the sketch after this list).
    • Configurable tabular logging of the results.
    • Observable on-the-fly in a graphical and tree form.
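To illustrate the scoped regions mentioned above, the following self-contained C++ sketch shows the RAII pattern that such scoped, nested profiling regions typically build on. The ScopedRegion type and its output format are invented for this example and do not mirror the framework's actual API.

#include <chrono>
#include <cstdio>

// Hypothetical RAII profiling region; the real framework's API may differ.
struct ScopedRegion {
    const char* m_name;
    std::chrono::steady_clock::time_point m_start;
    explicit ScopedRegion(const char* name)
        : m_name(name), m_start(std::chrono::steady_clock::now()) {}
    ~ScopedRegion() {
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - m_start).count();
        std::printf("[%s] %lld us\n", m_name, static_cast<long long>(us));
    }
};

void renderFrame() {
    ScopedRegion frame("Frame");             // ends automatically at scope exit
    {
        ScopedRegion shadows("Shadow maps"); // nested region
        // ... render shadow maps ...
    }
    // ... rest of the frame ...
}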

Rendering

  • Implemented using OpenGL 4.
  • Custom shader compilation supporting #include statements and detailed error reports.
  • Phong and Blinn-Phong shading models.
  • Physically-based shading using the Cook-Torrance model.
  • Deferred shading with support for HDR rendering.
  • Normal mapping.
  • Layered rendering.
  • Content and time-adaptive local and global tone mapping.
    • Multiple tone mapping operators are supported.
  • Multisample anti-aliasing (MSAA).
  • Direct and indirect lighting.
    • Multiple source types (directional, point, spot).
    • Shadow mapping with several different filtering approaches (variance, exponential, moments).
    • Voxel global illumination.
  • CPU occlusion culling.
  • Cubemap-based skyboxes and volumetric clouds.
  • Post-process filters:
    • Motion blur.
    • Fast approximate anti-aliasing (FXAA).
    • Debug visualizers for GBuffer and voxel grid contents.
    • Color look-up tables.
    • Simulation of aberrated vision.

Requirements

Hardware

The framework makes heavy use of compute shaders; therefore, an OpenGL 4.3 compatible video card is required.

For reference, all tests and performance measurements published in our papers were performed on the following system configuration:

  • CPU: AMD Ryzen 7 1700X
  • GPU: NVIDIA TITAN Xp
  • Memory: 32 GB

Software

The framework requires the following external software:

  • Microsoft Visual Studio
  • MATLAB
  • Python
    • Tested with version 3.8.6.
    • List of main dependencies, with the version used during our tests in parentheses:
      • numpy (1.18.5)
      • tensorflow (2.5.0)
      • tensorflow_addons (0.13.0)
      • keras_tuner (1.0.3)
      • humanize (3.1.0)
      • matplotlib (3.3.4)
      • pandas (1.1.3)
      • psutil (5.9.0)
      • seaborn (0.11.2)
      • tabulate (0.8.9)
    • Note that this list is incomplete and only includes the most relevant third-party packages.
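Assuming a standard pip setup, the pinned packages listed above can be installed in a single step (as noted, further packages may still be required):

python -m pip install numpy==1.18.5 tensorflow==2.5.0 tensorflow_addons==0.13.0 keras_tuner==1.0.3 humanize==3.1.0 matplotlib==3.3.4 pandas==1.1.3 psutil==5.9.0 seaborn==0.11.2 tabulate==0.8.9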

Third-party libraries

All third-party libraries are omitted due to file size limitations. The necessary binaries for building with VS 2019 can be downloaded from here. All third-party library files should be placed in the Libraries folder.

Third-party assets

While some of the necessary assets are uploaded along with the source code, most of the third-party meshes and textures are omitted due to file size limitations. They can be downloaded from here, and should be placed in the corresponding subfolders of the Assets folder.

Running the framework

Generating training datasets

The datasets can be generated using Python. To this end, open a terminal, navigate to the Assets/Scripts/Python folder, then use the following commands to generate the datasets:

python eye_reconstruction.py generate aberration_estimator
python eye_reconstruction.py generate eye_estimator
python eye_aberrations.py generate aberration_estimator
python eye_refocusing.py generate refocus_estimator dataset

Each command is responsible for generating a single dataset for the corresponding networks. The datasets used to perform the measurements for our papers can be downloaded from here, and should be placed in the Assets/Scripts/Python/Data/Train folder.

Training is then performed using the following set of commands:

python eye_reconstruction.py train aberration_estimator network
python eye_reconstruction.py train eye_estimator network
python eye_aberrations.py train aberration_estimator network
python eye_refocusing.py train refocus_estimator network

Once finished, the exported files will be available in the Assets/Scripts/Python/Networks folder.

Lastly, the trained networks must be manually exported for use with the C++ framework. To this end, the following commands must be used:

python eye_reconstruction.py export aberration_estimator network
python eye_reconstruction.py export eye_estimator network
python eye_aberrations.py export aberration_estimator network
python eye_refocusing.py export refocus_estimator network

Once finished, the exported files will be available in the Assets/Generated/Networks folder.

Generating build files for the C++ backend

The framework relies on Premake5 to generate the necessary project files. Premake5 is included in the archive; to invoke it, use the following command in the project's main folder:

premake5 --matlab_root=$PATH$ vs2019

where $PATH$ is the path to the MATLAB installation's root folder.

The build script assumes a MATLAB R2020b installation by default (c:/Program Files/MATLAB/R2020b/), so the --matlab_root switch can be simply omitted if such a MATLAB version is present, leading to the following:

premake5 vs2019

After Premake is finished, the generated build files can be found in the Build folder.

Building the C++ backend with Visual Studio

The solution can be opened in Visual Studio and simply built by selecting the desired build configuration. No additional steps are required.

Building the C++ backend with MSBuild

Alternatively, the framework can also be built using MSBuild.

  1. Open the VS Developer Command Prompt.
  2. Navigate to the Build folder.
  3. Build the project using msbuild /p:Configuration=Release.

Running the C++ backend

From within Visual Studio, the program can be simply started using the Start Debugging option.

The framework uses sensible defaults for the rendering arguments. Overriding these can be done in the following ways:

  1. If using the SmartCommandLineArguments extension, the set of active arguments can be set via the extension's window (accessible via View/Other Windows).
  2. In the absence of the aforementioned extension, the arguments can be set manually via the project settings window, located under the Debugging category.

Code organization

Parametric eye model and patternsearch-based eye reconstruction

The entirety of the eye-related MATLAB code base can be found in Assets/Scripts/Matlab/EyeReconstruction, and was built on Optometrika, a third-party library for ray tracing optical systems. Note that Optometrika was heavily modified for our specific use case, and several parts of the library were removed for brevity.

The most important classes and functions are the following:

  • EyeParametric: Builds the parametric eye model; stores the eye parameters, constructs the necessary optical elements, and manages the computation of Zernike aberration coefficients.
  • EyeReconstruction: Implements eye reconstruction using patternsearch, with extensive customizability.
  • ZernikeLens: A custom aspherical lens with additional surface perturbations controlled using a Zernike surface.
  • compute_aberrations: Performs the actual computation of the Zernike aberration coefficients for an input eye model and computation parameters.
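For context, the coefficients computed here describe the eye's wavefront error as the standard Zernike expansion over the unit pupil,

W(\rho, \theta) = \sum_{n,m} c_n^m \, Z_n^m(\rho, \theta),

where Z_n^m are the Zernike polynomials and c_n^m are the aberration coefficients.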

The main MATLAB script folder also contains the PSNR computation routine.
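Here, PSNR refers to the standard peak signal-to-noise ratio, \mathrm{PSNR} = 10 \log_{10}(\mathrm{MAX}_I^2 / \mathrm{MSE}), where MAX_I is the maximum possible pixel value and MSE is the mean squared error between the reference and test images.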

The rest of the MATLAB code base implements the necessary optical procedures, facilitates the caching processes, and establishes the interface with the C++ side of the program.

Neural networks (Python)

The dataset generation and network training parts of the framework are implemented using the Python programming language. The relevant code resides in the Assets/Scripts/Python folder.

The main script files are the following:

  • eye_reconstruction.py: Implements the data generation and network training for the discriminator and eye parameter estimator networks.
  • eye_aberrations.py: Implements dataset generation and training for the off-axis aberration estimator network.
  • eye_refocusing.py: Implements the data generation and training procedures for the refocused eye parameter estimator network.

All of these scripts are built on a custom shared framework, which can be found in the framework subfolder. Lastly, the framework heavily utilizes .json files for configuration; these are located in the Data/Config subfolder.

Vision simulation (CPU)

To interface with the trained neural networks, the C++ side of the framework contains a small wrapper around the TensorFlow C API, which can be found in TensorFlowEx. The framework uses these functions to load the trained networks and perform inference on them at run time.
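As an illustration of what such a wrapper involves, the following minimal, stand-alone sketch loads a SavedModel and runs inference through the TensorFlow C API. The model path, tag, operation names, and tensor size are placeholders invented for this example; the networks exported by the framework use their own names.

#include <tensorflow/c/c_api.h>
#include <cstdio>

int main() {
    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_SessionOptions* opts = TF_NewSessionOptions();

    // Hypothetical model path and tag; the real networks live in Assets/Generated/Networks.
    const char* tags[] = { "serve" };
    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, nullptr, "Assets/Generated/Networks/example_network",
        tags, 1, graph, nullptr, status);
    if (TF_GetCode(status) != TF_OK) {
        std::printf("Load failed: %s\n", TF_Message(status));
        return 1;
    }

    // Hypothetical operation names; query the SavedModel signature for the real ones.
    TF_Output input  = { TF_GraphOperationByName(graph, "serving_default_input"), 0 };
    TF_Output output = { TF_GraphOperationByName(graph, "StatefulPartitionedCall"), 0 };

    // Build a 1x4 float input tensor and zero-fill it.
    const int64_t dims[2] = { 1, 4 };
    TF_Tensor* inTensor = TF_AllocateTensor(TF_FLOAT, dims, 2, 4 * sizeof(float));
    float* data = static_cast<float*>(TF_TensorData(inTensor));
    for (int i = 0; i < 4; ++i) data[i] = 0.0f;

    // Run inference and read back the first output value.
    TF_Tensor* outTensor = nullptr;
    TF_SessionRun(session, nullptr, &input, &inTensor, 1,
                  &output, &outTensor, 1, nullptr, 0, nullptr, status);
    if (TF_GetCode(status) == TF_OK)
        std::printf("First output: %f\n", static_cast<float*>(TF_TensorData(outTensor))[0]);

    // Cleanup.
    TF_DeleteTensor(inTensor);
    if (outTensor) TF_DeleteTensor(outTensor);
    TF_DeleteSession(session, status);
    TF_DeleteGraph(graph);
    TF_DeleteSessionOptions(opts);
    TF_DeleteStatus(status);
    return 0;
}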

The C++ classes relating to vision simulation are located in the Source/Scene/Components/Aberration folder. The main classes are the following:

  • WavefrontAberration: Supporting class; holds all the information necessary to describe an optical system and provides the functionality to compute the PSF with arbitrary parameters. The interface with the MATLAB eye reconstruction code base and the exported neural networks is also implemented here.
  • TiledSplatBlurComponent: Implements our proposed tiled PSF splatting algorithm, as described in our papers.
  • GroundTruthAberrationComponent: Responsible for creating reference images by evaluating the dense PSF for every pixel in the input texture. Also produces PSNR maps by sampling the output of the tiled splat algorithm.

Generally speaking, the most important functions for these classes are the following:

  • initObject: Responsible for creating the necessary GPU buffers and loading assets.
  • renderObject: Main entry point for rendering the object.
  • generateGui: Generates the user interface for modifying the exposed parameters. Each class relies on on-demand data recomputation; thus, the relevant recomputation processes are initiated from these functions.
  • demoSetup: Instantiates the object and configures it for the demo scene.

Because the implementation uses on-demand data computation, changing values while the framework is running can trigger a full eye reconstruction in the worst case. This is not an issue with the neural network-based approach; for the GPS-based approach, however, it is recommended to set all parameters in the demoSetup function and then build and run the framework with the updated parameters.

Vision simulation - rendering (shaders)

The important shaders are located in the Assets/Shaders/OpenGL/Aberration/TiledSplatBlur folder.

  • common: Holds the various input buffer and texture definitions, as well as the common set of functions for implementing the algorithm.
  • radius_based: Implements our radius-based texture layout for on-axis simulations.
  • diopter_based_on_axis: Implements our diopter-based on-axis texture layout, using the non-uniform depth sampling strategy.
  • diopter_based_off_axis: Implements our diopter-based off-axis texture layout.
  • psf_cache_command, psf_cache_params, psf_cache_texture: These shaders handle the building of the cached PSF layers for peripheral vision simulation with variable pupil and focus settings. These shaders are only invoked when the PSF grid is updated. The relevant functionality is mostly implemented by the PSF compute shaders referenced above.
  • psf_texture_command_params, psf_texture_command_texture, psf_texture_params, psf_texture_texture: These shaders implement the construction of the per-frame PSF texture, which is performed every frame. The relevant functionality is mostly implemented by the PSF compute shaders referenced above.
  • fragment_buffer_build: Converts the input scene textures to a suitable format and populates the fragment buffer with this data.
  • fragment_buffer_merge: Performs one step of the merge process, merging fragments in a 2x2 block.
  • tile_buffer_build: Takes all the fragments in the tile and populates the tile buffer with them. Also populates a work queue (for splatting) with the indices of all the fragments.
  • tile_buffer_splat_command: Generates dispatch parameters to launch the work queue for splatting.
  • tile_buffer_splat: Each invocation takes one entry from the splat queue and copies it to the tile buffers of neighboring tiles if the fragment overlaps them.
  • tile_buffer_sort_params: Generates dispatch parameters for sorting the tile buffers.
  • tile_buffer_sort_presort, tile_buffer_sort_inner, tile_buffer_sort_outer: These shaders perform the various steps of bitonic sorting on the tile buffers (see the sketch after this list).
  • convolution: Iterates over the fragments in the tile buffer, accumulating the results for every output pixel.
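The presort/inner/outer split above follows the usual GPU bitonic sorting scheme: a presort pass orders workgroup-sized chunks in shared memory, after which larger comparison distances run as global-memory passes, and the remaining small distances collapse into a single shared-memory dispatch. The following stand-alone C++ sketch shows the host-side pass structure this implies; the dispatch helper, workgroup size, and group counts are placeholders, not the framework's real code.

#include <cstdio>

// Stub standing in for a real compute shader dispatch; hypothetical helper.
static void dispatch(const char* shader, int groups) {
    std::printf("%-28s %d groups\n", shader, groups);
}

// Host-side pass structure behind the three sorting shaders, assuming n is a
// power of two and each workgroup of wg threads sorts a 2*wg-element chunk in
// shared memory. Shader names mirror the list above.
void sortTileBuffer(int n, int wg) {
    const int groups = n / (2 * wg);              // one thread per compared pair
    dispatch("tile_buffer_sort_presort", groups); // sorts 2*wg-sized chunks (k <= 2*wg)
    for (int k = 4 * wg; k <= n; k *= 2) {        // bitonic sequence size
        for (int j = k / 2; j > wg; j /= 2)       // distance too large for shared memory
            dispatch("tile_buffer_sort_outer", groups);
        dispatch("tile_buffer_sort_inner", groups); // finishes j <= wg in shared memory
    }
}

int main() {
    sortTileBuffer(1 << 10, 1 << 7); // e.g. 1024 fragments, 128-thread workgroups
    return 0;
}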

Screenshots

The editor interface in action.

Vision simulation for a healthy eye.

Simulation of myopic vision.

Simulating astigmatic vision.

Vision simulation for an eye with keratoconus.

Related publications & citations

If you find the framework or our algorithms useful, we kindly ask you to cite the relevant papers as follows:

@article{csoba2024fast,
  author  = {Csoba, István and Kunkli, Roland},
  title   = {{Fast rendering of central and peripheral human visual aberrations across the entire visual field with interactive personalization}},
  year    = {2024},
  month   = {05},
  journal = {The Visual Computer},
  volume  = {40},
  number  = {5},
  pages   = {3709--3731},
  doi     = {10.1007/s00371-023-03060-0}
}
@article{csoba2021efficient,
  author  = {Csoba, István and Kunkli, Roland},
  title   = {{Efficient Rendering of Ocular Wavefront Aberrations using Tiled Point-Spread Function Splatting}},
  journal = {Computer Graphics Forum},
  volume  = {40},
  number  = {6},
  pages   = {182--199},
  year    = {2021},
  month   = {09},
  doi     = {10.1111/cgf.14267}
}
@inproceedings{csoba2022fast,
  author    = {Csoba, István and Kunkli, Roland},
  booktitle = {2022 IEEE 2nd Conference on Information Technology and Data Science (CITDS)},
  title     = {{Fast, GPU-based Computation of Large Point-Spread Function Sets for the Human Eye using the Extended Nijboer-Zernike Approach}},
  year      = {2022},
  month     = {10},
  location  = {Debrecen},
  pages     = {69--73},
  doi       = {10.1109/CITDS54976.2022.9914232},
  editor    = {Fazekas, István},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, USA}
}
@article{csoba2023rendering,
  author  = {Csoba, István and Kunkli, Roland},
  title   = {{Rendering algorithms for aberrated human vision simulation}},
  journal = {Visual Computing for Industry, Biomedicine, and Art},
  volume  = {6},
  pages   = {5:1--5:25},
  year    = {2023},
  month   = {3},
  doi     = {10.1186/s42492-023-00132-9}
}

License

This project is licensed under the BSD 2-Clause License; see LICENSE for more information.
