6 changes: 6 additions & 0 deletions .gitignore
@@ -1,3 +1,9 @@
# VS
Debug/
Release/
ipch/
*.sdf

# General
builds/

Binary file added MedianNoise.png
Binary file added MedianRef.png
Binary file added MedianSubjectBetter.png
Binary file added PerformStreamCompact.xls
178 changes: 14 additions & 164 deletions README.md
Expand Up @@ -94,180 +94,30 @@ For each 'extra feature' you must provide the following analysis :
should give a roadmap of how to further optimize and why you believe this is the next
step)

## BASE CODE TOUR
You will be working in three files: raytraceKernel.cu, intersections.h, and
interactions.h. Within these files, areas that you need to complete are marked
with a TODO comment. Areas that are useful for, and serve as hints toward, optional
features are marked with TODO (Optional). Functions that are useful for
reference are marked with the comment LOOK.

* raytraceKernel.cu contains the core raytracing CUDA kernel. You will need to
complete:
* cudaRaytraceCore() handles kernel launches and memory management; this
function already contains example code for launching kernels,
transferring geometry and cameras from the host to the device, and transferring
image buffers from the host to the device and back. You will have to complete
this function to support passing materials and lights to CUDA.
* raycastFromCameraKernel() is a function that you need to implement. This
function once correctly implemented should handle camera raycasting.
* raytraceRay() is the core raytracing CUDA kernel; all of your pathtracing
logic should be implemented in this CUDA kernel. raytraceRay() should
take in a camera, image buffer, geometry, materials, and lights, and should
trace a ray through the scene and write the resultant color to a pixel in the
image buffer.
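The per-pixel indexing a kernel like raytraceRay typically performs can be sketched as plain host code. This is a sketch under assumptions: the names mirror CUDA's built-in `blockIdx`, `blockDim`, and `threadIdx` variables, which inside a real kernel would be implicit, and the actual base-code index math may differ.

```cpp
// Hypothetical sketch of the standard CUDA thread-to-pixel mapping used by
// raytracing kernels, written as a host function so the arithmetic is clear.
int pixelIndex(int blockIdxX, int blockDimX, int threadIdxX,
               int blockIdxY, int blockDimY, int threadIdxY, int width) {
    int x = blockIdxX * blockDimX + threadIdxX;  // pixel column
    int y = blockIdxY * blockDimY + threadIdxY;  // pixel row
    return y * width + x;                        // flattened image-buffer index
}
```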
## Implementation
All required features are implemented.

* intersections.h contains functions for geometry intersection testing and
point generation. You will need to complete:
* boxIntersectionTest(), which takes in a box and a ray and performs an
intersection test. This function should work in the same way as
sphereIntersectionTest().
* getRandomPointOnSphere(), which takes in a sphere and returns a random
point on the surface of the sphere with an even probability distribution.
This function should work in the same way as getRandomPointOnCube(). You can
(although do not necessarily have to) use this to generate points on a sphere
to use as point lights, or can use this for area lighting.
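One standard way to implement such a sampler picks a uniform z in [-1, 1] and a uniform azimuth. This is a hypothetical sketch, not the base code's actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Hypothetical sketch: uniform sampling of the unit sphere surface.
// z = cos(theta) uniform in [-1, 1] plus a uniform azimuth phi in [0, 2*pi)
// together give an even distribution over the surface.
struct Vec3 { float x, y, z; };

Vec3 randomPointOnUnitSphere(std::mt19937& rng) {
    const float kPi = 3.14159265358979f;
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float z   = 1.0f - 2.0f * u01(rng);                   // uniform in [-1, 1]
    float phi = 2.0f * kPi * u01(rng);                    // uniform azimuth
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));  // ring radius at height z
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

Scaling by the sphere's radius and adding its center would extend this to an arbitrary sphere.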
Two extra features, refraction and subsurface scattering, are also provided.

* interactions.h contains functions for ray-object interactions that define how
rays behave upon hitting materials and objects. You will need to complete:
* getRandomDirectionInSphere(), which generates a random direction in a
sphere with a uniform probability. This function works in a fashion
similar to that of calculateRandomDirectionInHemisphere(), which generates a
random cosine-weighted direction in a hemisphere.
* calculateBSDF(), which takes in an incoming ray, normal, material, and
other information, and returns an outgoing ray. You can either implement
this function for ray-surface interactions, or you can replace it with your own
function(s).
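For reference, cosine-weighted hemisphere sampling is commonly done with Malley's method: sample the unit disk uniformly, then project up onto the hemisphere. A sketch assuming the local frame is oriented around +z (the base code's calculateRandomDirectionInHemisphere may differ in details):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Hypothetical sketch of cosine-weighted hemisphere sampling around +z.
struct Vec3 { float x, y, z; };

Vec3 cosineWeightedHemisphere(std::mt19937& rng) {
    const float kPi = 3.14159265358979f;
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float u1 = u01(rng), u2 = u01(rng);
    float r   = std::sqrt(u1);                          // uniform point on the unit disk
    float phi = 2.0f * kPi * u2;
    float z   = std::sqrt(std::max(0.0f, 1.0f - u1));   // lift the disk point up
    return { r * std::cos(phi), r * std::sin(phi), z };
}
```

In a renderer the result would be rotated from this local frame into the frame of the surface normal.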
![Alt text](https://github.com/chiwsy/Project3-Pathtracer/blob/master/test.png)

You will also want to familiarize yourself with:
![Alt text](https://github.com/chiwsy/Project3-Pathtracer/blob/master/test_mark.png)

* sceneStructs.h, which contains definitions for how geometry, materials,
lights, cameras, and animation frames are stored in the renderer.
* utilities.h, which serves as a kitchen-sink of useful functions
![Alt text](https://github.com/chiwsy/Project3-Pathtracer/blob/master/compare.png)

## NOTES ON GLM
This project uses GLM, the GL Math library, for linear algebra. You need to
know two important points on how GLM is used in this project:
These three images show the implemented features and the renderer's performance with and without stream compaction.

* In this project, indices in GLM vectors (such as vec3, vec4), are accessed
via swizzling. So, instead of v[0], v.x is used, and instead of v[1], v.y is
used, and so on and so forth.
* GLM Matrix operations work fine on NVIDIA Fermi cards and later, but
pre-Fermi cards do not play nice with GLM matrices. As such, in this project,
GLM matrices are replaced with a custom matrix struct, called a cudaMat4, found
in cudaMat4.h. A custom function for multiplying glm::vec4s and cudaMat4s is
provided as multiplyMV() in intersections.h.
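A minimal sketch of what such a struct and multiply might look like, assuming row-vector storage and hypothetical field names (the actual cudaMat4.h may differ):

```cpp
// Hypothetical sketch of a cudaMat4-style struct and multiplyMV().
struct Vec4 { float x, y, z, w; };
struct CudaMat4 { Vec4 x, y, z, w; };  // four rows of the matrix

float dot4(Vec4 a, Vec4 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// Each component of the result is one matrix row dotted with the vector.
Vec4 multiplyMV(CudaMat4 m, Vec4 v) {
    return { dot4(m.x, v), dot4(m.y, v), dot4(m.z, v), dot4(m.w, v) };
}
```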
The second figure marks each implemented feature in the rendered image.

## SCENE FORMAT
This project uses a custom scene description format.
Scene files are flat text files that describe all geometry, materials,
lights, cameras, render settings, and animation frames inside of the scene.
Items in the format are delimited by new lines, and comments can be added at
the end of each line preceded with a double-slash.
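Handling that comment convention amounts to cutting each line at the first double-slash; a minimal sketch (not the base code's actual parser):

```cpp
#include <string>

// Drop everything from the first double-slash onward, leaving the
// whitespace-delimited item tokens for parsing.
std::string stripComment(const std::string& line) {
    std::size_t pos = line.find("//");
    return pos == std::string::npos ? line : line.substr(0, pos);
}
```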
For the performance analysis, I tested various trace depths and plotted a curve showing the effect stream compaction has on overall performance. With trace depths greater than 10, stream compaction greatly reduces rendering time, because most rays are terminated before they reach the maximum trace depth.
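The idea behind the compaction step can be sketched with the CPU analogue of a GPU remove-if (on the device this would be thrust::remove_if or a scan-based compaction; the Ray fields here are placeholders):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Placeholder ray record; a real one would carry origin, direction,
// accumulated color, pixel index, and so on.
struct Ray { bool alive; };

// Remove terminated rays from the pool so later bounces launch work
// only for the rays that are still active.
std::size_t compactRays(std::vector<Ray>& pool) {
    pool.erase(std::remove_if(pool.begin(), pool.end(),
                              [](const Ray& r) { return !r.alive; }),
               pool.end());
    return pool.size();  // only these rays need another bounce
}
```

Each bounce then launches over a shrinking pool, which is where the speedup at large trace depths comes from.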

Materials are defined in the following fashion:
As the images are very noisy, especially when the iteration count is low, a simple filter could improve image quality. Even in the final version, after 5000 iterations, irregular points remain in the output image.

* MATERIAL (material ID) //material header
* RGB (float r) (float g) (float b) //diffuse color
* SPECX (float specx) //specular exponent
* SPECRGB (float r) (float g) (float b) //specular color
* REFL (bool refl) //reflectivity flag, 0 for
no, 1 for yes
* REFR (bool refr) //refractivity flag, 0 for
no, 1 for yes
* REFRIOR (float ior) //index of refraction
for Fresnel effects
* SCATTER (float scatter) //scatter flag, 0 for
no, 1 for yes
* ABSCOEFF (float r) (float b) (float g) //absorption
coefficient for scattering
* RSCTCOEFF (float rsctcoeff) //reduced scattering
coefficient
* EMITTANCE (float emittance) //the emittance of the
material. Anything >0 makes the material a light source.
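For illustration, a hypothetical material filling in every field above might look like this (the values are made up):

MATERIAL 0            //hypothetical white diffuse material
RGB .9 .9 .9
SPECX 0
SPECRGB 1 1 1
REFL 0
REFR 0
REFRIOR 0
SCATTER 0
ABSCOEFF 0 0 0
RSCTCOEFF 0
EMITTANCE 0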
![Alt text](https://github.com/chiwsy/Project3-Pathtracer/blob/master/MedianNoise.png)

Cameras are defined in the following fashion:
This image shows the initial condition after only 20 iterations. The original image on the left is very noisy, with a number of yellow dots visible on the reflective surface. A common method for removing such salt-and-pepper noise is a median filter. The filtered image has a better PSNR value, an objective criterion for image quality, although its visual quality is not much better than the original. Using the filtered image instead of the original should give faster convergence toward the reference image, which was iterated 5000 times.
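A 3x3 median filter of the kind described above can be sketched as follows (a host-side grayscale sketch, with border pixels left unfiltered for brevity):

```cpp
#include <algorithm>
#include <vector>

// Replace each interior pixel with the median of its 3x3 neighborhood,
// which removes isolated salt-and-pepper outliers.
std::vector<float> medianFilter3x3(const std::vector<float>& img, int w, int h) {
    std::vector<float> out = img;  // borders keep their original values
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = img[(y + dy) * w + (x + dx)];
            std::nth_element(win, win + 4, win + 9);  // median of the 9 samples
            out[y * w + x] = win[4];
        }
    }
    return out;
}
```

A per-channel version of the same loop would apply to RGB output.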

* CAMERA //camera header
* RES (float x) (float y) //resolution
* FOVY (float fovy) //vertical field of
view half-angle. The horizontal angle is calculated from this and the
resolution
* ITERATIONS (float iterations)                   //how many
iterations to refine the image, only relevant for supersampled antialiasing,
depth of field, area lights, and other distributed raytracing applications
* FILE (string filename) //file to output
render to upon completion
* frame (frame number) //start of a frame
* EYE (float x) (float y) (float z) //camera's position in
worldspace
* VIEW (float x) (float y) (float z) //camera's view
direction
* UP (float x) (float y) (float z) //camera's up vector
![Alt text](https://github.com/chiwsy/Project3-Pathtracer/blob/master/MedianSubjectBetter.png)

Objects are defined in the following fashion:
* OBJECT (object ID) //object header
* (cube OR sphere OR mesh) //type of object, can
be either "cube", "sphere", or "mesh". Note that cubes and spheres are unit
sized and centered at the origin.
* material (material ID) //material to
assign this object
* frame (frame number) //start of a frame
* TRANS (float transx) (float transy) (float transz) //translation
* ROTAT (float rotationx) (float rotationy) (float rotationz) //rotation
* SCALE (float scalex) (float scaley) (float scalez) //scale

An example scene file setting up two frames inside of a Cornell Box can be
found in the scenes/ directory.

For meshes, note that the base code will only read in .obj files. For more
information on the .obj specification see http://en.wikipedia.org/wiki/Wavefront_.obj_file.

An example of a mesh object is as follows:

OBJECT 0
mesh tetra.obj
material 0
frame 0
TRANS 0 5 -5
ROTAT 0 90 0
SCALE .01 10 10

Check the Google group for some sample .obj files of varying complexity.

## THIRD PARTY CODE POLICY
* Use of any third-party code must be approved by asking on our Google Group.
If it is approved, all students are welcome to use it. Generally, we approve
use of third-party code that is not a core part of the project. For example,
for the ray tracer, we would approve using a third-party library for loading
models, but would not approve copying and pasting a CUDA function for doing
refraction.
* Third-party code must be credited in README.md.
* Using third-party code without its approval, including using another
student's code, is an academic integrity violation, and will result in you
receiving an F for the semester.

## SELF-GRADING
* On the submission date, email your grade, on a scale of 0 to 100, to Harmony,
harmoli+cis565@seas.upenn.com, with a one paragraph explanation. Be concise and
realistic. Recall that we reserve 30 points as a sanity check to adjust your
grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We
hope to only use this in extreme cases when your grade does not realistically
reflect your work - it is either too high or too low. In most cases, we plan
to give you the exact grade you suggest.
* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as
the path tracer. We will determine the weighting at the end of the semester
based on the size of each project.

## SUBMISSION
Please change the README to reflect the answers to the questions we have posed
above. Remember:
* this is a renderer, so include images that you've made!
* be sure to back your claims for optimization with numbers and comparisons
* if you reference any other material, please provide a link to it
* you will not be graded on how fast your path tracer runs, but getting close to
real-time is always nice
* if you have a fast GPU renderer, it is good to showcase this with a video to
show interactivity. If you do so, please include a link.

Be sure to open a pull request and to send Harmony your grade and why you
believe this is the grade you should get.
This image shows the median filter applied to the final result. After 5000 iterations the image quality is relatively high, but irregular points still remain. The right image has a somewhat lower PSNR value, but in my view it has the better quality, since it is cleaner than the left one.
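For reference, the PSNR figure quoted above is computed from the mean squared error between two images; a sketch assuming pixel values normalized to [0, 1] (so the peak value MAX is 1):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// PSNR = 10 * log10(MAX^2 / MSE); higher means the images are closer.
double psnr(const std::vector<double>& a, const std::vector<double>& b) {
    double mse = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        mse += d * d;
    }
    mse /= static_cast<double>(a.size());
    return 10.0 * std::log10(1.0 / mse);  // MAX = 1.0 for [0, 1] pixels
}
```

As the text notes, PSNR does not always track perceived quality: a filtered image can look cleaner while scoring lower.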
Binary file added compare.png
Empty file added data/scenes/error.txt