This open-ended project is intended as an experiment with real-time graphics techniques and AEC data. It is desktop-first, but with the intention to support VR. Two broad scopes / directions are defined hereafter:
A renderer that can render an architectural scene in real time. The scene should be created automatically from BIM or CAD data, and it should be possible to add assets to it using a simple editor. Use cases:
- Design review
- Presentation / architectural visualisation
- Walk through the environment -> requires collision
A conceptual design tool for creating architectural concepts and rendering them in real time in VR, with simple procedural geometry tools and simple parametric tools. Concepts should be exportable to an external program.
The application is written for Cocoa (macOS) and the Metal API, but for VR it should be ported to Vulkan. Metal is a cleaner API, so we might want to write a Vulkan backend for the Metal API (or for the subset of the Metal API we use). However, porting is easier than implementing the features, so the focus is on implementing features first.
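As a rough illustration of that idea, the sketch below wraps only a tiny subset of the Metal calls the renderer actually uses behind a thin protocol, so a Vulkan implementation of the same protocol could be added later. All protocol and type names here are hypothetical, not existing code in the project.

```swift
import Metal

// Thin, Metal-shaped abstraction over the small set of calls we actually use.
protocol GPUDevice {
    func makeBuffer(length: Int) -> GPUBuffer
    func makeCommandQueue() -> GPUCommandQueue
}

protocol GPUBuffer { var contents: UnsafeMutableRawPointer { get } }
protocol GPUCommandQueue {}

// The Metal backend is a near pass-through; a VulkanDevice would implement
// the same protocol on top of VkDevice / VkQueue / VkBuffer.
final class MetalDevice: GPUDevice {
    private let device: MTLDevice
    init?() {
        guard let device = MTLCreateSystemDefaultDevice() else { return nil }
        self.device = device
    }
    func makeBuffer(length: Int) -> GPUBuffer {
        MetalBuffer(device.makeBuffer(length: length, options: .storageModeShared)!)
    }
    func makeCommandQueue() -> GPUCommandQueue {
        MetalCommandQueue(device.makeCommandQueue()!)
    }
}

final class MetalBuffer: GPUBuffer {
    private let buffer: MTLBuffer
    init(_ buffer: MTLBuffer) { self.buffer = buffer }
    var contents: UnsafeMutableRawPointer { buffer.contents() }
}

final class MetalCommandQueue: GPUCommandQueue {
    private let queue: MTLCommandQueue
    init(_ queue: MTLCommandQueue) { self.queue = queue }
}
```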
As a plethora of IFC viewers and CAD software already exists for desktop platforms such as macOS, it does not make sense to target Metal unless I were to target the Apple Vision Pro. Unfortunately, that platform is outside any reasonable price range.
Therefore, the intended target platform is now Meta Quest 3 and PCVR. Experimenting with Metal was good for understanding what kind of abstractions to build (or not), as it is a higher-level, less verbose API than Vulkan. The last time I wrote a Vulkan renderer, I had no idea what I was doing: I was blindly copying code from vk-tutorial and trying to abstract things in an OOP way, as it was the first time I had ventured outside C#. This time I know how to progress and how to write simple graphics code.
My alternative to writing Vulkan would be a translation layer from Metal to Vulkan, but that would make things more complicated and less well-supported. MoltenVK, which goes the other way (Vulkan on top of Metal), is well-supported and suffices for development purposes (i.e. writing the Vulkan renderer on a macOS desktop computer before building it for the target platform).
List of features to implement or techniques to experiment with. This list is neither exhaustive nor prescriptive.
- CAD or BIM data (e.g. Revit, IFC using ifcOpenShell)
- OpenCascade or another CAD kernel for generating geometry from representations such as B-reps or NURBS
- 3D city data (OpenStreetMap), Cesium 3D Tiles (https://cesium.com/why-cesium/3d-tiles/)
- glTF import
- glTF import with a stride of 16 bytes instead of 12 for vector3 attributes; this is better for alignment, as packed_float3 is not ideal (see the vertex layout sketch after this list)
- Scene file format -> use glTF instead of inventing our own scene model
- Collision (terrain collider, box collider)
- Asynchronous loading and decoding of PNG / JPEG
- Caching of imported textures / other assets
- Directional light shadow mapping
- Blinn-Phong shading
- Fog
- Skybox (panoramic / 360 spherical)
- Compilation of shader variants
- PBR shading (OpenPBR Surface)
  - Conductors
  - Dielectrics
  - Subsurface
  - Transmission
  - Coat
  - Glass
- Batch rendering primitives by material (to avoid constantly switching fragment shader buffers etc.; see the batching sketch after this list)
- Shadow volumes (https://en.wikipedia.org/wiki/Shadow_volume)
- Megatextures (https://en.wikipedia.org/wiki/Clipmap)
- Deferred rendering (gbuffer etc., support many non-image based lights)
- Point lights, area lights, spot lights, directional lights
- Animation / rigging of a mesh, skinning
- Automatic mip-mapping of textures for better filtering at grazing angles
- Raytracing reflections, denoising
- Ambient occlusion baking using path tracing
- Specular reflection probes / environment probes
- Screen space reflections
- Screen-space ambient occlusion
- Lens flare / post-processing effects
- Support cubemaps instead of equirectangular projection
- HDR support for image based lighting
- Terrain system
  - Heightmaps
  - Erosion / simulation
  - Tri-planar mapping
  - Terrain chunks / LOD system
- Particle systems (fire)
- Volumetrics / volumetric fog
- Grass / foliage / tree vertex shader (animated with wind etc.)
- Water / ocean shader
- Hair shader
- Skin shader (optimized, subsurface scattering)
- Frustum culling -> meshes should have bounds (see the culling sketch after this list)
- Occlusion culling -> could be done by specific middleware? / on the GPU?
- LOD system and blending
- Proper text rendering (glyph caching, using a TrueType / OpenType rendering library); use signed distance fields (SDF) for 3D text rendering
- Stereoscopic rendering for VR
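Regarding the glTF vertex-stride item above, here is a minimal sketch in Swift of what the 16-byte layout looks like on the host side. The Vertex struct and repackPositions helper are illustrative, not existing code.

```swift
import simd

// Illustrative vertex layout: SIMD3<Float> has a 16-byte stride (padded),
// matching float3 on the shader side, whereas glTF stores VEC3 float
// attributes tightly packed at 12 bytes per element (the packed_float3 layout).
struct Vertex {
    var position: SIMD3<Float>   // 16 bytes
    var normal:   SIMD3<Float>   // 16 bytes
    var uv:       SIMD2<Float>   // 8 bytes
}

// On import, copy the tightly packed glTF floats into the padded layout
// instead of binding the 12-byte-stride data directly.
func repackPositions(_ packed: [Float]) -> [SIMD3<Float>] {
    precondition(packed.count.isMultiple(of: 3))
    return stride(from: 0, to: packed.count, by: 3).map {
        SIMD3<Float>(packed[$0], packed[$0 + 1], packed[$0 + 2])
    }
}

print(MemoryLayout<SIMD3<Float>>.stride)  // 16
print(MemoryLayout<Vertex>.stride)        // 48 with this layout
```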
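Regarding the batching-by-material item above, a minimal sketch of sorting draws so that material state is bound once per material rather than once per draw. Types and helpers here are hypothetical placeholders.

```swift
// Illustrative batching of draw calls by material: sort once, then rebind the
// pipeline state / material buffers only when the material actually changes.
struct DrawCall {
    var materialID: Int
    var meshID: Int
}

func bindMaterial(_ id: Int) { /* set pipeline state, fragment buffers, textures */ }
func drawMesh(_ id: Int)     { /* issue the draw call */ }

func encode(_ draws: [DrawCall]) {
    var boundMaterial = -1
    for draw in draws.sorted(by: { $0.materialID < $1.materialID }) {
        if draw.materialID != boundMaterial {
            bindMaterial(draw.materialID)   // state change once per material
            boundMaterial = draw.materialID
        }
        drawMesh(draw.meshID)
    }
}
```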
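Regarding the frustum-culling item above, a minimal sketch of an AABB-versus-frustum test, assuming the six frustum planes have already been extracted from the view-projection matrix.

```swift
import simd

// Illustrative frustum test: each mesh carries an axis-aligned bounding box,
// and the box is culled if it lies entirely behind any of the six frustum
// planes. Planes are stored as (nx, ny, nz, d) with n·p + d >= 0 meaning inside.
struct AABB {
    var min: SIMD3<Float>
    var max: SIMD3<Float>
}

func isVisible(_ box: AABB, frustumPlanes: [SIMD4<Float>]) -> Bool {
    for plane in frustumPlanes {
        // Take the box corner furthest along the plane normal; if even that
        // corner is behind the plane, the whole box is outside the frustum.
        let p = SIMD3<Float>(
            plane.x >= 0 ? box.max.x : box.min.x,
            plane.y >= 0 ? box.max.y : box.min.y,
            plane.z >= 0 ? box.max.z : box.min.z
        )
        if simd_dot(SIMD3<Float>(plane.x, plane.y, plane.z), p) + plane.w < 0 {
            return false
        }
    }
    return true
}
```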
Look at frame decompositions of games: