diff --git a/1.png b/1.png new file mode 100644 index 0000000..1d43512 Binary files /dev/null and b/1.png differ diff --git a/README.md b/README.md index ae6088d..6326f83 100644 --- a/README.md +++ b/README.md @@ -5,269 +5,9 @@ Fall 2014 Due Wed, 10/8 (submit without penalty until Sun, 10/12) -## INTRODUCTION -In this project, you will implement a CUDA based pathtracer capable of -generating pathtraced rendered images extremely quickly. Building a pathtracer can be viewed as a generalization of building a raytracer, so for those of you who have taken 460/560, the basic concept should not be very new to you. For those of you that have not taken -CIS460/560, raytracing is a technique for generating images by tracing rays of -light through pixels in an image plane out into a scene and following the way -the rays of light bounce and interact with objects in the scene. More -information can be found here: -http://en.wikipedia.org/wiki/Ray_tracing_(graphics). Pathtracing is a generalization of this technique by considering more than just the contribution of direct lighting to a surface. +![](https://github.com/DiracSea3921/Project3-Pathtracer/blob/master/1.png) -Since in this class we are concerned with working in generating actual images -and less so with mundane tasks like file I/O, this project includes basecode -for loading a scene description file format, described below, and various other -things that generally make up the render "harness" that takes care of -everything up to the rendering itself. The core renderer is left for you to -implement. Finally, note that while this basecode is meant to serve as a -strong starting point for a CUDA pathtracer, you are not required to use this -basecode if you wish, and you may also change any part of the basecode -specification as you please, so long as the final rendered result is correct. +Texture mapping and an interactive camera are implemented. 
-## CONTENTS -The Project3 root directory contains the following subdirectories: - -* src/ contains the source code for the project. Both the Windows Visual Studio - solution and the OSX and Linux makefiles reference this folder for all - source; the base source code compiles on Linux, OSX and Windows without - modification. If you are building on OSX, be sure to uncomment lines 4 & 5 of - the CMakeLists.txt in order to make sure CMake builds against clang. -* data/scenes/ contains an example scene description file. -* renders/ contains an example render of the given example scene file. -* windows/ contains a Windows Visual Studio 2010 project and all dependencies - needed for building and running on Windows 7. If you would like to create a - Visual Studio 2012 or 2013 projects, there are static libraries that you can - use for GLFW that are in external/bin/GLFW (Visual Studio 2012 uses msvc110, - and Visual Studio 2013 uses msvc120) -* external/ contains all the header, static libraries and built binaries for - 3rd party libraries (i.e. glm, GLEW, GLFW) that we use for windowing and OpenGL - extensions +Press the 'up', 'down', 'left', and 'right' arrow keys to move the camera. -## RUNNING THE CODE -The main function requires a scene description file (that is provided in data/scenes). -The main function reads in the scene file by an argument as such : -'scene=[sceneFileName]' - -If you are using Visual Studio, you can set this in the Debugging > Command Arguments section -in the Project properties. 
- -## REQUIREMENTS -In this project, you are given code for: - -* Loading, reading, and storing the scene scene description format -* Example functions that can run on both the CPU and GPU for generating random - numbers, spherical intersection testing, and surface point sampling on cubes -* A class for handling image operations and saving images -* Working code for CUDA-GL interop - -You will need to implement the following features: - -* Raycasting from a camera into a scene through a pixel grid -* Diffuse surfaces -* Perfect specular reflective surfaces -* Cube intersection testing -* Sphere surface point sampling -* Stream compaction optimization - -You are also required to implement at least 2 of the following features: - -* Texture mapping -* Bump mapping -* Depth of field -* Refraction, i.e. glass -* OBJ Mesh loading and rendering -* Interactive camera -* Motion blur -* Subsurface scattering - -The 'extra features' list is not comprehensive. If you have a particular feature -you would like to implement (e.g. acceleration structures, etc.) please contact us -first! - -For each 'extra feature' you must provide the following analysis : -* overview write up of the feature -* performance impact of the feature -* if you did something to accelerate the feature, why did you do what you did -* compare your GPU version to a CPU version of this feature (you do NOT need to - implement a CPU version) -* how can this feature be further optimized (again, not necessary to implement it, but - should give a roadmap of how to further optimize and why you believe this is the next - step) - -## BASE CODE TOUR -You will be working in three files: raytraceKernel.cu, intersections.h, and -interactions.h. Within these files, areas that you need to complete are marked -with a TODO comment. Areas that are useful to and serve as hints for optional -features are marked with TODO (Optional). Functions that are useful for -reference are marked with the comment LOOK. 
- -* raytraceKernel.cu contains the core raytracing CUDA kernel. You will need to - complete: - * cudaRaytraceCore() handles kernel launches and memory management; this - function already contains example code for launching kernels, - transferring geometry and cameras from the host to the device, and transferring - image buffers from the host to the device and back. You will have to complete - this function to support passing materials and lights to CUDA. - * raycastFromCameraKernel() is a function that you need to implement. This - function once correctly implemented should handle camera raycasting. - * raytraceRay() is the core raytracing CUDA kernel; all of your pathtracing - logic should be implemented in this CUDA kernel. raytraceRay() should - take in a camera, image buffer, geometry, materials, and lights, and should - trace a ray through the scene and write the resultant color to a pixel in the - image buffer. - -* intersections.h contains functions for geometry intersection testing and - point generation. You will need to complete: - * boxIntersectionTest(), which takes in a box and a ray and performs an - intersection test. This function should work in the same way as - sphereIntersectionTest(). - * getRandomPointOnSphere(), which takes in a sphere and returns a random - point on the surface of the sphere with an even probability distribution. - This function should work in the same way as getRandomPointOnCube(). You can - (although do not necessarily have to) use this to generate points on a sphere - to use a point lights, or can use this for area lighting. - -* interactions.h contains functions for ray-object interactions that define how - rays behave upon hitting materials and objects. You will need to complete: - * getRandomDirectionInSphere(), which generates a random direction in a - sphere with a uniform probability. 
This function works in a fashion - similar to that of calculateRandomDirectionInHemisphere(), which generates a - random cosine-weighted direction in a hemisphere. - * calculateBSDF(), which takes in an incoming ray, normal, material, and - other information, and returns an outgoing ray. You can either implement - this function for ray-surface interactions, or you can replace it with your own - function(s). - -You will also want to familiarize yourself with: - -* sceneStructs.h, which contains definitions for how geometry, materials, - lights, cameras, and animation frames are stored in the renderer. -* utilities.h, which serves as a kitchen-sink of useful functions - -## NOTES ON GLM -This project uses GLM, the GL Math library, for linear algebra. You need to -know two important points on how GLM is used in this project: - -* In this project, indices in GLM vectors (such as vec3, vec4), are accessed - via swizzling. So, instead of v[0], v.x is used, and instead of v[1], v.y is - used, and so on and so forth. -* GLM Matrix operations work fine on NVIDIA Fermi cards and later, but - pre-Fermi cards do not play nice with GLM matrices. As such, in this project, - GLM matrices are replaced with a custom matrix struct, called a cudaMat4, found - in cudaMat4.h. A custom function for multiplying glm::vec4s and cudaMat4s is - provided as multiplyMV() in intersections.h. - -## SCENE FORMAT -This project uses a custom scene description format. -Scene files are flat text files that describe all geometry, materials, -lights, cameras, render settings, and animation frames inside of the scene. -Items in the format are delimited by new lines, and comments can be added at -the end of each line preceded with a double-slash. 
- -Materials are defined in the following fashion: - -* MATERIAL (material ID) //material header -* RGB (float r) (float g) (float b) //diffuse color -* SPECX (float specx) //specular exponent -* SPECRGB (float r) (float g) (float b) //specular color -* REFL (bool refl) //reflectivity flag, 0 for - no, 1 for yes -* REFR (bool refr) //refractivity flag, 0 for - no, 1 for yes -* REFRIOR (float ior) //index of refraction - for Fresnel effects -* SCATTER (float scatter) //scatter flag, 0 for - no, 1 for yes -* ABSCOEFF (float r) (float b) (float g) //absorption - coefficient for scattering -* RSCTCOEFF (float rsctcoeff) //reduced scattering - coefficient -* EMITTANCE (float emittance) //the emittance of the - material. Anything >0 makes the material a light source. - -Cameras are defined in the following fashion: - -* CAMERA //camera header -* RES (float x) (float y) //resolution -* FOVY (float fovy) //vertical field of - view half-angle. the horizonal angle is calculated from this and the - reslution -* ITERATIONS (float interations) //how many - iterations to refine the image, only relevant for supersampled antialiasing, - depth of field, area lights, and other distributed raytracing applications -* FILE (string filename) //file to output - render to upon completion -* frame (frame number) //start of a frame -* EYE (float x) (float y) (float z) //camera's position in - worldspace -* VIEW (float x) (float y) (float z) //camera's view - direction -* UP (float x) (float y) (float z) //camera's up vector - -Objects are defined in the following fashion: -* OBJECT (object ID) //object header -* (cube OR sphere OR mesh) //type of object, can - be either "cube", "sphere", or "mesh". Note that cubes and spheres are unit - sized and centered at the origin. 
-* material (material ID) //material to - assign this object -* frame (frame number) //start of a frame -* TRANS (float transx) (float transy) (float transz) //translation -* ROTAT (float rotationx) (float rotationy) (float rotationz) //rotation -* SCALE (float scalex) (float scaley) (float scalez) //scale - -An example scene file setting up two frames inside of a Cornell Box can be -found in the scenes/ directory. - -For meshes, note that the base code will only read in .obj files. For more -information on the .obj specification see http://en.wikipedia.org/wiki/Wavefront_.obj_file. - -An example of a mesh object is as follows: - -OBJECT 0 -mesh tetra.obj -material 0 -frame 0 -TRANS 0 5 -5 -ROTAT 0 90 0 -SCALE .01 10 10 - -Check the Google group for some sample .obj files of varying complexity. - -## THIRD PARTY CODE POLICY -* Use of any third-party code must be approved by asking on our Google Group. - If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the ray tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code must be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will result in you - receiving an F for the semester. - -## SELF-GRADING -* On the submission date, email your grade, on a scale of 0 to 100, to Harmony, - harmoli+cis565@seas.upenn.com, with a one paragraph explanation. Be concise and - realistic. Recall that we reserve 30 points as a sanity check to adjust your - grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We - hope to only use this in extreme cases when your grade does not realistically - reflect your work - it is either too high or too low. 
In most cases, we plan - to give you the exact grade you suggest. -* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as - the path tracer. We will determine the weighting at the end of the semester - based on the size of each project. - -## SUBMISSION -Please change the README to reflect the answers to the questions we have posed -above. Remember: -* this is a renderer, so include images that you've made! -* be sure to back your claims for optimization with numbers and comparisons -* if you reference any other material, please provide a link to it -* you wil not e graded on how fast your path tracer runs, but getting close to - real-time is always nice -* if you have a fast GPU renderer, it is good to show case this with a video to - show interactivity. If you do so, please include a link. - -Be sure to open a pull request and to send Harmony your grade and why you -believe this is the grade you should get. diff --git a/src/interactions.h b/src/interactions.h index 7bf6fab..9a0344d 100644 --- a/src/interactions.h +++ b/src/interactions.h @@ -77,7 +77,7 @@ __host__ __device__ glm::vec3 calculateRandomDirectionInHemisphere(glm::vec3 nor } else { directionNotNormal = glm::vec3(0, 0, 1); } - + // Use not-normal direction to generate two perpendicular directions glm::vec3 perpendicularDirection1 = glm::normalize(glm::cross(normal, directionNotNormal)); glm::vec3 perpendicularDirection2 = glm::normalize(glm::cross(normal, perpendicularDirection1)); @@ -91,7 +91,19 @@ __host__ __device__ glm::vec3 calculateRandomDirectionInHemisphere(glm::vec3 nor // non-cosine (uniform) weighted random direction generation. // This should be much easier than if you had to implement calculateRandomDirectionInHemisphere. __host__ __device__ glm::vec3 getRandomDirectionInSphere(float xi1, float xi2) { - return glm::vec3(0,0,0); + // Unlike calculateRandomDirectionInHemisphere above, this distribution is uniform over the sphere, not cosine weighted. 
+ + // z uniform in [-1,1] plus a uniform azimuth gives a uniform direction + float up = xi1 * 2.0f - 1.0f; // cos(theta) + float over = sqrt(1.0f - up * up); // sin(theta) + float around = xi2 * TWO_PI; + + return glm::vec3( cos(around) * over, up, sin(around) * over ); + } // TODO (PARTIALLY OPTIONAL): IMPLEMENT THIS FUNCTION diff --git a/src/intersections.h b/src/intersections.h index c9eafb6..47b7a24 100644 --- a/src/intersections.h +++ b/src/intersections.h @@ -71,9 +71,111 @@ __host__ __device__ glm::vec3 getSignOfRay(ray r){ // TODO: IMPLEMENT THIS FUNCTION // Cube intersection test, return -1 if no intersection, otherwise, distance to intersection -__host__ __device__ float boxIntersectionTest(staticGeom box, ray r, glm::vec3& intersectionPoint, glm::vec3& normal){ +__host__ __device__ float boxIntersectionTest(staticGeom box, ray r, glm::vec3& intersectionPoint, glm::vec3& normal, glm::vec2& texCoord){ - return -1; + glm::vec3 ro = multiplyMV(box.inverseTransform, glm::vec4(r.origin,1.0f)); + glm::vec3 rd = glm::normalize(multiplyMV(box.inverseTransform, glm::vec4(r.direction,0.0f))); + + float tmin; + float minx, miny, minz, maxx, maxy, maxz; + + float a = 1.0/rd.x; + if (a >= 0) { + minx = (-0.5 - ro.x)*a; + maxx = (0.5 - ro.x)*a; + } + else { + minx = (0.5 - ro.x)*a; + maxx = (-0.5 - ro.x)*a; + } + + float b = 1.0 / rd.y; + if (b >= 0) { + miny = (-0.5 - ro.y)*b; + maxy = (0.5 - ro.y)*b; + } + else { + miny = (0.5 - ro.y)*b; + maxy = (-0.5 - ro.y)*b; + } + + float c = 1.0 / rd.z; + if (c >= 0) { + minz = (-0.5 - ro.z) * c; + maxz = (0.5 - ro.z) * c; + } + else { + minz = (0.5 - ro.z) * c; + maxz = (-0.5 - ro.z) * c; + } + + glm::vec3 normal_in,normal_out; + float t0, t1; + if (minx > miny){ + t0 = minx; + normal_in = (a >= 0.0) ? glm::vec3(-1, 0, 0) : glm::vec3(1, 0, 0); + } + else { + t0 = miny; + normal_in = (b >= 0.0) ?
(glm::vec3(0, -1, 0)) : glm::vec3(0, 1, 0); + } + + if (minz > t0) { + t0 = minz; + normal_in = (c >= 0.0) ? glm::vec3(0, 0, -1) : glm::vec3(0, 0, 1); + } + + if (maxx < maxy){ + t1 = maxx; + normal_out = (a >= 0.0) ? glm::vec3(1, 0, 0) : glm::vec3(-1, 0, 0); + } + else { + t1 = maxy; + normal_out = (b >= 0.0) ? glm::vec3(0, 1, 0) : glm::vec3(0, -1, 0); + } + + if (maxz < t1){ + t1 = maxz; + normal_out = (c >= 0.0) ? glm::vec3(0, 0, 1) : glm::vec3(0, 0, -1); + } + + + + if (t0 <= t1 && t1 > 0.0001f) { + if (t0 > 0.0001f){ + tmin = t0; + normal = normal_in; + } + else{ + tmin = t1; + normal = normal_out; + } + + + ray rt; rt.origin = ro; rt.direction = rd; + glm::vec3 point = getPointOnRay(rt, tmin); + if(normal.x == 1||normal.x == -1) + { + texCoord.x = point.y+0.5f; + texCoord.y = point.z+0.5f; + } + else if(normal.y == 1||normal.y == -1) + { + texCoord.x = point.x+0.5f; + texCoord.y = point.z+0.5f; + } + else if(normal.z == 1||normal.z == -1) + { + texCoord.x = point.x+0.5f; + texCoord.y = point.y+0.5f; + } + normal = glm::normalize(multiplyMV(box.transform, glm::vec4(normal,0.0f))); + intersectionPoint = multiplyMV(box.transform, glm::vec4(point, 1.0)); + + return glm::length(r.origin - intersectionPoint); + } + else + return -1; } // LOOK: Here's an intersection test example from a sphere. Now you just need to figure out cube and, optionally, triangle. 
@@ -178,7 +280,21 @@ __host__ __device__ glm::vec3 getRandomPointOnCube(staticGeom cube, float random // Generates a random point on a given sphere __host__ __device__ glm::vec3 getRandomPointOnSphere(staticGeom sphere, float randomSeed){ - return glm::vec3(0,0,0); + thrust::default_random_engine rng(hash(randomSeed)); + thrust::uniform_real_distribution<float> xi1(0,1); + thrust::uniform_real_distribution<float> xi2(-0.5,0.5); + + float up = (float)xi1(rng) * 2.0f - 1.0f; // cos(theta), uniform in [-1,1] + float over = sqrt(1.0f - up * up); // sin(theta) + float around = (float)xi2(rng) * TWO_PI; + + // point on the unit-sized (radius 0.5) sphere, then into world space + glm::vec3 p = 0.5f * glm::vec3( cos(around) * over, up, sin(around) * over ); + + return multiplyMV(sphere.transform, glm::vec4(p,1.0f)); } #endif diff --git a/src/main.cpp b/src/main.cpp index b002500..6e7c0d3 100755 --- a/src/main.cpp +++ b/src/main.cpp @@ -12,6 +12,7 @@ //-------------MAIN-------------- //------------------------------- + int main(int argc, char** argv){ // Set up pathtracer stuff bool loadedScene = false; @@ -59,6 +60,8 @@ int main(int argc, char** argv){ return 0; } +bmp_texture tex; + void mainLoop() { while(!glfwWindowShouldClose(window)){ glfwPollEvents(); @@ -106,7 +109,7 @@ void runCuda(){ } // execute the kernel - cudaRaytraceCore(dptr, renderCam, targetFrame, iterations, materials, renderScene->materials.size(), geoms, renderScene->objects.size() ); + cudaRaytraceCore(dptr, renderCam, targetFrame, iterations, materials, renderScene->materials.size(), geoms, renderScene->objects.size(),&tex ); // unmap buffer object cudaGLUnmapBufferObject(pbo); @@ -157,6 +160,34 @@ void runCuda(){ } } +void readBMP(char* filename,bmp_texture &tex) +{ + int i; + FILE* f = fopen(filename, "rb"); + unsigned char info[54]; + fread(info, sizeof(unsigned char), 54, f); // read the 54-byte header + + // extract image height and width from header + int width = *(int*)&info[18]; + int height = 
*(int*)&info[22]; + + int size = 3 * width * height; // note: assumes rows carry no 4-byte padding + unsigned char* data = new unsigned char[size]; // allocate 3 bytes per pixel + fread(data, sizeof(unsigned char), size, f); // read the rest of the data at once + fclose(f); + glm::vec3 *color_data = new glm::vec3[size/3]; + for(i = 0; i < size; i += 3) + { + // BMP stores channels in BGR order + color_data[i/3].r = (int)data[i+2]/255.0f; + color_data[i/3].g = (int)data[i+1]/255.0f; + color_data[i/3].b = (int)data[i]/255.0f; + } + delete []data; + tex.data = color_data; + tex.height = height; + tex.width = width; +} + //------------------------------- //----------SETUP STUFF---------- //------------------------------- @@ -167,7 +198,7 @@ bool init(int argc, char* argv[]) { if (!glfwInit()) { return false; } - + readBMP("texture.bmp",tex); width = 800; height = 800; window = glfwCreateWindow(width, height, "CIS 565 Pathtracer", NULL, NULL); @@ -315,9 +346,31 @@ void deleteTexture(GLuint* tex){ void errorCallback(int error, const char* description){ fputs(description, stderr); } +void clearScreen(){ + iterations=0; + for(int i=0;i<renderCam->resolution.x*renderCam->resolution.y;i++) + renderCam->image[i]=glm::vec3(0,0,0); +} void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods){ if(key == GLFW_KEY_ESCAPE && action == GLFW_PRESS){ glfwSetWindowShouldClose(window, GL_TRUE); } + + if(key == GLFW_KEY_RIGHT && action == GLFW_PRESS){ + clearScreen(); + renderCam->positions[0]+=glm::vec3(0.1f,0,0); + } + if(key == GLFW_KEY_UP && action == GLFW_PRESS){ + clearScreen(); + renderCam->positions[0]+=glm::vec3(0,0.1f,0); + } + if(key == GLFW_KEY_LEFT && action == GLFW_PRESS){ + clearScreen(); + renderCam->positions[0]+=glm::vec3(-0.1f,0,0); + } + if(key == GLFW_KEY_DOWN && action == GLFW_PRESS){ + clearScreen(); + renderCam->positions[0]+=glm::vec3(0,-0.1f,0); + } } diff --git a/src/raytraceKernel.cu b/src/raytraceKernel.cu index 9c7bc7d..fba18ad 100644 --- a/src/raytraceKernel.cu +++ b/src/raytraceKernel.cu @@ -7,8 +7,12 @@ #include #include 
-#include - +#include +#include +#include +#include +#include +#include #include "sceneStructs.h" #include "glm/glm.hpp" #include "utilities.h" @@ -39,8 +43,9 @@ __host__ __device__ glm::vec3 generateRandomNumberFromThread(glm::vec2 resolutio // Function that does the initial raycast from the camera __host__ __device__ ray raycastFromCameraKernel(glm::vec2 resolution, float time, int x, int y, glm::vec3 eye, glm::vec3 view, glm::vec3 up, glm::vec2 fov){ ray r; - r.origin = glm::vec3(0,0,0); - r.direction = glm::vec3(0,0,-1); + r.origin = eye; + glm::vec3 u = glm::cross(up,view); + r.direction = glm::normalize(up * (tan(fov.y*3.14159f/180.0f) * (-y / resolution.y + 0.5f)*2.0f) + u * (tan(fov.x*3.14159f/180.0f) * (x / resolution.x - 0.5f)*2.0f) + view); return r; } @@ -50,7 +55,16 @@ int y = (blockIdx.y * blockDim.y) + threadIdx.y; int index = x + (y * resolution.x); if(x<=resolution.x && y<=resolution.y){ - image[index] = glm::vec3(0,0,0); + image[index] = glm::vec3(1,1,1); + } +} + +__global__ void clearRay(glm::vec2 resolution, ray* r){ + int x = (blockIdx.x * blockDim.x) + threadIdx.x; + int y = (blockIdx.y * blockDim.y) + threadIdx.y; + int index = x + (y * resolution.x); + if(x<=resolution.x && y<=resolution.y){ + r[index].direction = glm::vec3(0,0,0); } } @@ -88,24 +102,138 @@ } } + __device__ glm::vec3 get_texture_color(bmp_texture* tex, glm::vec2 texcoord,glm::vec3* texData){ + if(texcoord.x<0) + texcoord.x=0; + else if (texcoord.x>1) + texcoord.x=1; + + if(texcoord.y<0) + texcoord.y=0; + else if (texcoord.y>1) + texcoord.y=1; + + int i = (int)(texcoord.y*tex->height); + int j = (int)(texcoord.x*tex->width); + int index = i*tex->width + j; // row-major: row i of a width-wide row, column j + if(index>tex->height*tex->width-1) + index = tex->height*tex->width-1; + else if (index<0) + index = 0; + return texData[index]; + } + + __device__ glm::vec3 pathTrace(ray 
r,float time,int index, int depth, staticGeom* geoms, int numberOfGeoms, material* materials,ray &reflectRay,bool &hitLight,bool &hitAnything,bmp_texture* tex,glm::vec3* texData){ + glm::vec3 intersectionPoint; + glm::vec3 normal; + glm::vec3 intersectionPoint_tmp; + glm::vec3 normal_tmp; + glm::vec2 texCoord; + int minIndex = -1; + float mint = 99999999.0f; + for(int i=0;i<numberOfGeoms;i++){ + float temp = -1.0f; + if(geoms[i].type == SPHERE){ + temp = sphereIntersectionTest(geoms[i], r, intersectionPoint_tmp, normal_tmp); + if(temp>0&&temp<mint){ + mint = temp; minIndex = i; intersectionPoint = intersectionPoint_tmp; normal = normal_tmp; + } + } + else{ + temp = boxIntersectionTest(geoms[i], r, intersectionPoint_tmp, normal_tmp, texCoord); + if(temp>0&&temp<mint){ + mint = temp; minIndex = i; intersectionPoint = intersectionPoint_tmp; normal = normal_tmp; + } + } + } + if(minIndex > -1){ + material m = materials[geoms[minIndex].materialid]; + if(m.emittance>0){ + hitLight = true; + return m.color * m.emittance; + } + + if(m.indexOfRefraction>0.1f){ + float ndotwo = glm::dot(normal , -r.direction); + reflectRay.origin=intersectionPoint; + reflectRay.direction = r.direction + 2.0f * normal * ndotwo; + return m.specularColor*m.color; + } + else{ + + thrust::default_random_engine rng(hash(index*(time+depth+1))); + thrust::uniform_real_distribution<float> xi1(0,1); + thrust::uniform_real_distribution<float> xi2(0,1); + + reflectRay.origin=intersectionPoint; + reflectRay.direction = calculateRandomDirectionInHemisphere(normal,(float)xi1(rng),(float)xi2(rng)); + float diffuse = glm::dot(reflectRay.direction,normal); + if(diffuse<0) + diffuse = 0; + + if(m.texture[0]!='N') + return get_texture_color(tex,texCoord,texData); + return diffuse * m.color; + } + + } + hitAnything = false; + return glm::vec3(0,0,0); +} +#define MaxDepth 5 // TODO: IMPLEMENT THIS FUNCTION // Core raytracer kernel __global__ void raytraceRay(glm::vec2 resolution, float time, cameraData cam, int rayDepth, glm::vec3* colors, - staticGeom* geoms, int numberOfGeoms){ + staticGeom* geoms, int numberOfGeoms, material* materials, ray* rays, glm::vec3* tmpColors,bmp_texture* tex,glm::vec3* texData){ + int x = (blockIdx.x * blockDim.x) + threadIdx.x; int y = (blockIdx.y * blockDim.y) + threadIdx.y; int index = x + (y * resolution.x); + if(glm::length(rays[index].direction)<0.01f && rayDepth>1) + return; if((x<=resolution.x && y<=resolution.y)){ + ray r; + if(rayDepth == 1) + r = raycastFromCameraKernel(resolution,time,x,y,cam.position,cam.view,cam.up,cam.fov); + else + r = rays[index]; - 
colors[index] = generateRandomNumberFromThread(resolution, time, x, y); + bool hitLight = false; + bool hitAnything = true; + ray reflectRay; + reflectRay.direction = glm::vec3(1,1,1); + reflectRay.origin = glm::vec3(0,0,0); + + tmpColors[index] *= pathTrace(r,time,index,rayDepth,geoms,numberOfGeoms,materials,reflectRay,hitLight,hitAnything,tex,texData); + if(hitLight){ + colors[index] = (colors[index] * (time-1)) /(time) + tmpColors[index]/time; + rays[index].direction = glm::vec3(0,0,0); + return; + } + else if(rayDepth==MaxDepth) + colors[index] = (colors[index] * (time-1)) /(time); + if(hitAnything&&rayDepth<MaxDepth) + rays[index] = reflectRay; + else + rays[index].direction = glm::vec3(0,0,0); + } +} void cudaRaytraceCore(uchar4* PBOpos, camera* renderCam, int frame, int iterations, material* materials, int numberOfMaterials, geom* geoms, int numberOfGeoms, bmp_texture* tex){ + // send materials and the texture to the GPU + material* cudamaterials = NULL; + cudaMalloc((void**)&cudamaterials, numberOfMaterials*sizeof(material)); + cudaMemcpy( cudamaterials, materials, numberOfMaterials*sizeof(material), cudaMemcpyHostToDevice); + + bmp_texture* cudatex = NULL; + cudaMalloc((void**)&cudatex, sizeof(bmp_texture)); + cudaMemcpy( cudatex, tex, sizeof(bmp_texture), cudaMemcpyHostToDevice); + + glm::vec3* data = NULL; + cudaMalloc((void**)&data, tex->height*tex->width*sizeof(glm::vec3)); + cudaMemcpy( data, tex->data, tex->height*tex->width*sizeof(glm::vec3), cudaMemcpyHostToDevice); + // package camera cameraData cam; cam.resolution = renderCam->resolution; @@ -145,9 +284,23 @@ cam.up = renderCam->ups[frame]; cam.fov = renderCam->fov; - // kernel launches - raytraceRay<<<fullBlocksPerGrid, threadsPerBlock>>>(renderCam->resolution, (float)iterations, cam, traceDepth, cudaimage, cudageoms, numberOfGeoms); + // send image to GPU + ray* cudaRay = NULL; + cudaMalloc((void**)&cudaRay, (int)renderCam->resolution.x*(int)renderCam->resolution.y*sizeof(ray)); + clearRay<<<fullBlocksPerGrid, threadsPerBlock>>>(renderCam->resolution, cudaRay); + cudaThreadSynchronize(); + + glm::vec3 *tmpColor = NULL; + cudaMalloc((void**)&tmpColor, (int)renderCam->resolution.x*(int)renderCam->resolution.y*sizeof(glm::vec3)); + clearImage<<<fullBlocksPerGrid, threadsPerBlock>>>(renderCam->resolution, tmpColor); + cudaThreadSynchronize(); + + // kernel launches + for(;traceDepth<=MaxDepth;traceDepth++){ + raytraceRay<<<fullBlocksPerGrid, threadsPerBlock>>>(renderCam->resolution, (float)iterations, cam, traceDepth, cudaimage, cudageoms, numberOfGeoms,cudamaterials,cudaRay,tmpColor,cudatex,data); + cudaThreadSynchronize(); + } sendImageToPBO<<<fullBlocksPerGrid, threadsPerBlock>>>(PBOpos, renderCam->resolution, cudaimage); // retrieve image from GPU @@ -156,6 +309,12 @@ void cudaRaytraceCore(uchar4* PBOpos, camera* renderCam, int 
frame, int iteratio // free up stuff, or else we'll leak memory like a madman cudaFree( cudaimage ); cudaFree( cudageoms ); + cudaFree( cudaRay ); + cudaFree( tmpColor ); + cudaFree( cudamaterials ); + cudaFree( cudatex ); + cudaFree( data ); + delete [] geomList; // make certain the kernel has completed diff --git a/src/raytraceKernel.h b/src/raytraceKernel.h index 984e89f..861ba6d 100755 --- a/src/raytraceKernel.h +++ b/src/raytraceKernel.h @@ -14,6 +14,6 @@ #include #include "sceneStructs.h" -void cudaRaytraceCore(uchar4* pos, camera* renderCam, int frame, int iterations, material* materials, int numberOfMaterials, geom* geoms, int numberOfGeoms); +void cudaRaytraceCore(uchar4* pos, camera* renderCam, int frame, int iterations, material* materials, int numberOfMaterials, geom* geoms, int numberOfGeoms,bmp_texture* tex); #endif diff --git a/src/scene.cpp b/src/scene.cpp index 4cbe216..65e4060 100644 --- a/src/scene.cpp +++ b/src/scene.cpp @@ -229,7 +229,7 @@ int scene::loadMaterial(string materialid){ material newMaterial; //load static properties - for(int i=0; i<10; i++){ + for(int i=0; i<11; i++){ string line; utilityCore::safeGetline(fp_in,line); vector<string> tokens = utilityCore::tokenizeString(line); @@ -256,7 +256,10 @@ int scene::loadMaterial(string materialid){ newMaterial.reducedScatterCoefficient = atof(tokens[1].c_str()); }else if(strcmp(tokens[0].c_str(), "EMITTANCE")==0){ newMaterial.emittance = atof(tokens[1].c_str()); - + }else if(strcmp(tokens[0].c_str(), "TEXTURE")==0){ + char* tmp = (char*)tokens[1].c_str(); + int n = 0; + for(; tmp[n]!='\0'; n++) + newMaterial.texture[n] = tmp[n]; + newMaterial.texture[n] = '\0'; // keep the stored name terminated } } materials.push_back(newMaterial); diff --git a/src/sceneStructs.h b/src/sceneStructs.h index 5e0c853..73ec3c2 100644 --- a/src/sceneStructs.h +++ b/src/sceneStructs.h @@ -47,6 +47,12 @@ struct cameraData { glm::vec2 fov; }; +struct bmp_texture { + glm::vec3 *data; + int width; + int height; +}; + struct camera { glm::vec2 resolution; glm::vec3* positions; @@ -71,6 +77,7 @@ struct 
material{ glm::vec3 absorptionCoefficient; float reducedScatterCoefficient; float emittance; + char texture[256]; }; #endif //CUDASTRUCTS_H diff --git a/src/utilities.cpp b/src/utilities.cpp index a8e5d90..6f0ae4d 100755 --- a/src/utilities.cpp +++ b/src/utilities.cpp @@ -4,7 +4,7 @@ // File: utilities.cpp // A collection/kitchen sink of generally useful functions -#define GLM_FORCE_RADIANS +//#define GLM_FORCE_RADIANS #include #include diff --git a/windows/Project3-Pathtracer/Project3-Pathtracer/Project3-Pathtracer.vcxproj b/windows/Project3-Pathtracer/Project3-Pathtracer/Project3-Pathtracer.vcxproj index c45dd79..f05f54d 100644 --- a/windows/Project3-Pathtracer/Project3-Pathtracer/Project3-Pathtracer.vcxproj +++ b/windows/Project3-Pathtracer/Project3-Pathtracer/Project3-Pathtracer.vcxproj @@ -28,7 +28,7 @@ - + @@ -95,6 +95,6 @@ - + \ No newline at end of file diff --git a/windows/Project3-Pathtracer/Project3-Pathtracer/sampleScene.txt b/windows/Project3-Pathtracer/Project3-Pathtracer/sampleScene.txt new file mode 100644 index 0000000..2ca4cd1 --- /dev/null +++ b/windows/Project3-Pathtracer/Project3-Pathtracer/sampleScene.txt @@ -0,0 +1,212 @@ +MATERIAL 0 //white diffuse +RGB 1 1 1 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 1 //red diffuse +RGB .63 .06 .04 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 2 //green diffuse +RGB .15 .48 .09 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 3 //red glossy +RGB .63 .06 .04 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 2 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 4 //white glossy +RGB 1 1 1 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 2 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 5 //glass +RGB 
0 0 0 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 1 +REFRIOR 2.2 +SCATTER 0 +ABSCOEFF .02 5.1 5.7 +RSCTCOEFF 13 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 6 //green glossy +RGB .15 .48 .09 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 2.6 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE NULL + +MATERIAL 7 //light +RGB 1 1 1 +SPECEX 0 +SPECRGB 0 0 0 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 1 +TEXTURE NULL + +MATERIAL 8 //light +RGB 1 1 1 +SPECEX 0 +SPECRGB 0 0 0 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 50 +TEXTURE NULL + +MATERIAL 9 //white diffuse +RGB 1 1 1 +SPECEX 0 +SPECRGB 1 1 1 +REFL 0 +REFR 0 +REFRIOR 0 +SCATTER 0 +ABSCOEFF 0 0 0 +RSCTCOEFF 0 +EMITTANCE 0 +TEXTURE texture.bmp + +CAMERA +RES 800 800 +FOVY 25 +ITERATIONS 5000 +FILE test.bmp +frame 0 +EYE 0 4.5 12 +VIEW 0 0.0001 -0.9999 +UP 0 1 0 + +OBJECT 0 +cube +material 9 +frame 0 +TRANS 0 0 0 +ROTAT 0 0 90 +SCALE .01 10 10 + +OBJECT 1 +cube +material 0 +frame 0 +TRANS 0 5 -5 +ROTAT 90 0 0 +SCALE 10 0.01 10 + +OBJECT 2 +cube +material 0 +frame 0 +TRANS 0 10 0 +ROTAT 0 0 90 +SCALE .01 10 10 + +OBJECT 3 +cube +material 1 +frame 0 +TRANS -5 5 0 +ROTAT 0 0 0 +SCALE .01 10 10 + +OBJECT 4 +cube +material 2 +frame 0 +TRANS 5 5 0 +ROTAT 0 0 0 +SCALE .01 10 10 + +OBJECT 5 +sphere +material 4 +frame 0 +TRANS 0 2 0 +ROTAT 0 180 0 +SCALE 3 3 3 + +OBJECT 6 +sphere +material 3 +frame 0 +TRANS 2 5 2 +ROTAT 0 180 0 +SCALE 2.5 2.5 2.5 + +OBJECT 7 +sphere +material 6 +frame 0 +TRANS -2 5 -2 +ROTAT 0 180 0 +SCALE 3 3 3 + + +OBJECT 8 +cube +material 8 +frame 0 +TRANS 0 10 0 +ROTAT 0 0 90 +SCALE 0.3 3 3 \ No newline at end of file diff --git a/windows/Project3-Pathtracer/Project3-Pathtracer/test.0.bmp b/windows/Project3-Pathtracer/Project3-Pathtracer/test.0.bmp new file mode 100644 index 0000000..068a04d Binary files /dev/null and b/windows/Project3-Pathtracer/Project3-Pathtracer/test.0.bmp differ diff --git 
a/windows/Project3-Pathtracer/Project3-Pathtracer/texture.bmp b/windows/Project3-Pathtracer/Project3-Pathtracer/texture.bmp new file mode 100644 index 0000000..031f3d6 Binary files /dev/null and b/windows/Project3-Pathtracer/Project3-Pathtracer/texture.bmp differ