diff --git a/.ipynb_checkpoints/multiview_structure_from_motion-checkpoint.ipynb b/.ipynb_checkpoints/multiview_structure_from_motion-checkpoint.ipynb
new file mode 100644
index 0000000..42787f0
--- /dev/null
+++ b/.ipynb_checkpoints/multiview_structure_from_motion-checkpoint.ipynb
@@ -0,0 +1,188 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Project 4: Sequential Structure from Motion\n",
+    "\n",
+    "### Due 4/3/2019\n",
+    "\n",
+    "### Graduate Students: Our next reading is [Snavely, 2006](http://195.130.87.21:8080/dspace/bitstream/123456789/636/1/Photo%20tourism%20exploring%20photo%20collections%20in%203D.pdf). We'll have a written report on this one: these methods papers aren't as good for discussions as I'd hoped.\n",
+    "\n",
+    "## Problem Statement\n",
+    "\n",
+    "You have now developed code that takes two photographs, finds matching locations in both, determines the relative motion between the cameras that took the photos, and solves for the 3D positions of those points using epipolar geometry. **The next (and, for our purposes, final) stage is to extend this analysis to more than two images**, so that we can build 3D models of objects on the ground with just about as much detail as we'd like.\n",
+    "\n",
+    "## Adding the third photo\n",
+    "How do we add these additional photos? To be perfectly honest, at this point it's mostly an exercise in match housekeeping: we've already developed most of the code that we need. First, let's consider what we've got after matching our first two images, $I_1$ and $I_2$: a set of keypoints in each image, associated with a set of matches. These matches have been quality controlled twice: first by the ratio test, then by RANSAC in conjunction with the recovery of the essential matrix. Assuming that we've used our known camera matrix to convert our pixel-wise coordinates to generalized coordinates, let's call these keypoints $\\mathbf{x}_1$ and $\\mathbf{x}_2$. In practice, we can drop all of the keypoints for which there is no associated accepted match. Then, for each of our kept matches, we have the essential matrix $E_{12}$, from which we can extract a pair of projection matrices $\\mathbf{P}_1 = [\\mathbf{I}|\\mathbf{0}]$ and $\\mathbf{P}_2 = [\\mathbf{R}_2|\\mathbf{t}_2]$. Using these projection matrices, we generated the 3D world coordinates of the features that showed up robustly in both images. We'll call these coordinates $\\mathbf{X}_{12}$.\n",
+    "\n",
+    "To add a third image $I_3$ to the mix, consider that the situation outlined above is analogous to the information that we have when we want to do pose estimation with ground control points: we have the 3D world coordinates as well as the image coordinates of a set of points (a bunch of them, usually!), and we want to recover the camera pose. The problem is that the generalized feature coordinates that we've computed are for $I_1$ and $I_2$, but not $I_3$. Is this a big problem? Of course not! We can simply find the keypoints $\\mathbf{x}_3$ in $I_3$ that correspond to $\\mathbf{x}_2$, the keypoints in the second image. Then we identify these keypoints with the 3D points $\\mathbf{X}_{12}$. Thus we have image coordinates of features in the third image and the corresponding world coordinates: we can now perform pose estimation, just as we did in Project 1.\n",
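+    "\n",
+    "As a sketch of that 3D-to-2D step (purely illustrative: the `cv2.solvePnP` call is one concrete option, not the Project 1 routine itself, and the arrays `X12` and `x3` are assumed to hold the world points and generalized image coordinates defined above):\n",
+    "```python\n",
+    "# X12: Nx3 world points; x3: matching generalized coords in I3; K = I in generalized coords\n",
+    "ok, rvec, tvec = cv2.solvePnP(X12, x3[:, :2], np.eye(3), None)\n",
+    "R3, _ = cv2.Rodrigues(rvec)  # camera 3's pose is then [R3 | tvec]\n",
+    "```\n",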
+    "\n",
+    "Of course there are a few minor caveats. First, we need to filter out spurious matches between $\\mathbf{x}_2$ and $\\mathbf{x}_3$. To do this, we can use a tool that we already have: RANSAC estimation of the essential matrix. Because $I_2$ and $I_3$ are related by epipolar geometry just as $I_1$ and $I_2$ are, we can use the same subroutine to compute the essential matrix $\\mathbf{E}_{23}$ and (critically) identify and filter outliers, i.e. we'll discard matches that don't correspond to the consensus view of the essential matrix. This leads to the next caveat, namely that we need an initial guess (call it $P_3^0$) for pose estimation to converge properly. Where should this initial guess come from? $\\mathbf{E}_{23}$ provides a rotation and translation given as if camera 2 were canonical, i.e. $\\mathbf{P}_2'=[\\mathbf{I}|\\mathbf{0}]$, $\\mathbf{P}_3'=[\\mathbf{R}_3'|\\mathbf{t}_3']$. We'll call it $P_3'$. We need to convert this projection matrix to the coordinate system in which $I_1$ (not $I_2$) is canonical. Fortunately, this is easy: writing $\\tilde{P}_2$ for $P_2$ with the row $[0\\,0\\,0\\,1]$ appended (so that the two rigid transforms compose), we have\n",
+    "$$\n",
+    "P_3 \\approx P_3' \\tilde{P}_2.\n",
+    "$$\n",
+    "This $P_3$ is an excellent initial guess for pose estimation (in principle, its rotation matrix should actually be correct). Note that the translation component is only good up to a constant; this isn't too problematic, however, because its direction is close to correct, and the optimization just needs to perform what amounts to a line search (a univariate optimization problem) to find the correct scaling.\n",
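+    "\n",
+    "A minimal sketch of that composition (assuming `P2` and `P3p` hold the 3x4 matrices $P_2$ and $P_3'$ above):\n",
+    "```python\n",
+    "P2h = np.vstack((P2, [0, 0, 0, 1]))  # pad P2 to a 4x4 rigid transform\n",
+    "P3_guess = P3p @ P2h  # 3x4 initial guess: rotation exact, translation up to scale\n",
+    "```\n",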
+    "\n",
+    "Once we have a robust estimate of the third camera's pose, we can use it to do point triangulation on the correspondences between $I_2$ and $I_3$ that are not associated with an already-known world coordinate point, which allows us to augment our 3D model with new points. Additionally, we can perform triangulation with *3 views*, potentially improving our accuracy. Moreover, we can apply the process above iteratively, adding more and more images to generate a highly featured 3D model from (for example) 360 degrees worth of view angles.\n",
+    "\n",
+    "## Application\n",
+    "**Generate code that performs the above process for a third image. Apply it to one of the 3D image datasets that we generated in class. Note that we will be collecting aerial imagery from drones as well; apply this method to a sequence of drone imagery too.** As a challenge, can you implement code that sequentially adds an arbitrary number of images?\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import matplotlib.pyplot as plt\n",
+    "import PIL.Image\n",
+    "import matplotlib.image as mpimg\n",
+    "import scipy.optimize as so\n",
+    "import cv2\n",
+    "import piexif\n",
+    "import sys\n",
+    "from mpl_toolkits.mplot3d import Axes3D\n",
+    "\n",
+    "class Image(object):\n",
+    "    def __init__(self, img):\n",
+    "        self.img = plt.imread(img)\n",
+    "        # Image description (EXIF) and dimensions\n",
+    "        self.image = piexif.load(img)\n",
+    "        self.h, self.w, self.d = self.img.shape\n",
+    "        # 35mm-equivalent focal length converted to pixels (36 mm frame width)\n",
+    "        self.f = self.image['Exif'][piexif.ExifIFD.FocalLengthIn35mmFilm]/36*self.w\n",
+    "        self.sift = cv2.xfeatures2d.SIFT_create()\n",
+    "        self.keypoints, self.descriptor = self.sift.detectAndCompute(self.img, None)\n",
+    "        self.pose = None\n",
+    "        self.realWorldtoCam = []\n",
+    "\n",
+    "\n",
+    "class camClass(object):\n",
+    "    def __init__(self):\n",
+    "        self.images = []\n",
+    "        self.pointCloud = []\n",
+    "\n",
+    "    def add_images(self, image):\n",
+    "        self.images.append(image)\n",
+    "\n",
+    "    def genPointCloud(self):\n",
+    "        def triangulate(P0, P1, x1, x2):\n",
+    "            '''Homogeneous (X,Y,Z,W) of a keypoint found in 2 images.\n",
+    "            P0 and P1 are 3x4 poses; x1 and x2 are generalized keypoint coordinates.'''\n",
+    "            A = np.array([[P0[2,0]*x1[0] - P0[0,0], P0[2,1]*x1[0] - P0[0,1], P0[2,2]*x1[0] - P0[0,2], P0[2,3]*x1[0] - P0[0,3]],\n",
+    "                          [P0[2,0]*x1[1] - P0[1,0], P0[2,1]*x1[1] - P0[1,1], P0[2,2]*x1[1] - P0[1,2], P0[2,3]*x1[1] - P0[1,3]],\n",
+    "                          [P1[2,0]*x2[0] - P1[0,0], P1[2,1]*x2[0] - P1[0,1], P1[2,2]*x2[0] - P1[0,2], P1[2,3]*x2[0] - P1[0,3]],\n",
+    "                          [P1[2,0]*x2[1] - P1[1,0], P1[2,1]*x2[1] - P1[1,1], P1[2,2]*x2[1] - P1[1,2], P1[2,3]*x2[1] - P1[1,3]]])\n",
+    "            u, s, vt = np.linalg.svd(A)\n",
+    "            return vt[-1]\n",
+    "\n",
+    "        for i in range(len(self.images)-1):\n",
+    "            I1 = self.images[i]\n",
+    "            I2 = self.images[i+1]\n",
+    "            bf = cv2.BFMatcher()\n",
+    "            matches = bf.knnMatch(I1.descriptor, I2.descriptor, k=2)\n",
+    "\n",
+    "            # Apply ratio test (note: don't reuse the outer loop variable i here)\n",
+    "            good = []\n",
+    "            for m, n in matches:\n",
+    "                if m.distance < 0.7*n.distance:\n",
+    "                    good.append(m)\n",
+    "            u1 = []\n",
+    "            u2 = []\n",
+    "            for m in good:\n",
+    "                u1.append(I1.keypoints[m.queryIdx].pt)\n",
+    "                u2.append(I2.keypoints[m.trainIdx].pt)\n",
+    "\n",
+    "            # Pixel coordinates\n",
+    "            u1g = np.array(u1)\n",
+    "            u2g = np.array(u2)\n",
+    "\n",
+    "            # Make homogeneous\n",
+    "            u1h = np.c_[u1g, np.ones(u1g.shape[0])]  # image coords of keypoints\n",
+    "            u2h = np.c_[u2g, np.ones(u2g.shape[0])]\n",
+    "\n",
+    "            # Image center\n",
+    "            cv = I1.h/2\n",
+    "            cu = I1.w/2\n",
+    "\n",
+    "            # Get generalized camera coordinates\n",
+    "            K_cam = np.array([[I1.f, 0, cu], [0, I1.f, cv], [0, 0, 1]])\n",
+    "            K_inv = np.linalg.inv(K_cam)\n",
+    "            x1 = u1h @ K_inv.T\n",
+    "            x2 = u2h @ K_inv.T\n",
+    "\n",
+    "            # Generate the essential matrix, filtering outliers with RANSAC\n",
+    "            E, inliers = cv2.findEssentialMat(x1[:,:2], x2[:,:2], np.eye(3), method=cv2.RANSAC, threshold=1e-3)\n",
+    "            inliers = inliers.ravel().astype(bool)\n",
+    "            n_in, R, t, _ = cv2.recoverPose(E, x1[inliers,:2], x2[inliers,:2])\n",
+    "\n",
+    "            x1 = x1[inliers]\n",
+    "            x2 = x2[inliers]\n",
+    "\n",
+    "            # Relative pose between the two cameras\n",
+    "            if i == 0:\n",
+    "                I1.pose = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0]])  # first camera is canonical\n",
+    "            I2.pose = np.hstack((R, t))  # set the pose attribute, don't overwrite the image object\n",
+    "\n",
+    "            # Find X,Y,Z for all inlier SIFT keypoints\n",
+    "            for j in range(len(x1)):\n",
+    "                rwc = triangulate(I1.pose, I2.pose, x1[j], x2[j])\n",
+    "                self.pointCloud.append(rwc)  # homogeneous world coordinates\n",
+    "                I2.realWorldtoCam.append(x2[j])  # image coords paired with this world point\n",
+    "\n",
+    "        self.pointCloud = np.array(self.pointCloud)\n",
+    "        self.pointCloud = self.pointCloud.T / self.pointCloud[:,3]  # divide out W\n",
+    "        self.pointCloud = self.pointCloud[:-1,:]  # drop the homogeneous row\n",
+    "\n",
+    "    def plotPointCloud(self):\n",
+    "        #%matplotlib notebook\n",
+    "        fig = plt.figure()\n",
+    "        ax = fig.add_subplot(111, projection='3d')\n",
+    "        ax.plot(*self.pointCloud, 'k.')\n",
+    "        plt.show()\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/multiview_structure_from_motion.ipynb b/multiview_structure_from_motion.ipynb
index 8f5d649..9c2d7cf 100644
--- a/multiview_structure_from_motion.ipynb
+++ b/multiview_structure_from_motion.ipynb
@@ -36,7 +36,124 @@
   "execution_count": null,
   "metadata": {},
   "outputs": [],
-   "source": []
+   "source": [
+    "import numpy as np\n",
+    "import matplotlib.pyplot as plt\n",
+    "import PIL.Image\n",
+    "import matplotlib.image as mpimg\n",
+    "import scipy.optimize as so\n",
+    "import cv2\n",
+    "import piexif\n",
+    "import sys\n",
+    "from mpl_toolkits.mplot3d import Axes3D\n",
+    "\n",
+    "class Image(object):\n",
+    "    def __init__(self, img):\n",
+    "        self.img = plt.imread(img)\n",
+    "        # Image description (EXIF) and dimensions\n",
+    "        self.image = piexif.load(img)\n",
+    "        self.h, self.w, self.d = self.img.shape\n",
+    "        # 35mm-equivalent focal length converted to pixels (36 mm frame width)\n",
+    "        self.f = self.image['Exif'][piexif.ExifIFD.FocalLengthIn35mmFilm]/36*self.w\n",
+    "        self.sift = cv2.xfeatures2d.SIFT_create()\n",
+    "        self.keypoints, self.descriptor = self.sift.detectAndCompute(self.img, None)\n",
+    "        self.pose = None\n",
+    "        self.realWorldtoCam = []\n",
+    "\n",
+    "\n",
+    "class camClass(object):\n",
+    "    def __init__(self):\n",
+    "        self.images = []\n",
+    "        self.pointCloud = []\n",
+    "\n",
+    "    def add_images(self, image):\n",
+    "        self.images.append(image)\n",
+    "\n",
+    "    def genPointCloud(self):\n",
+    "        def triangulate(P0, P1, x1, x2):\n",
+    "            '''Homogeneous (X,Y,Z,W) of a keypoint found in 2 images.\n",
+    "            P0 and P1 are 3x4 poses; x1 and x2 are generalized keypoint coordinates.'''\n",
+    "            A = np.array([[P0[2,0]*x1[0] - P0[0,0], P0[2,1]*x1[0] - P0[0,1], P0[2,2]*x1[0] - P0[0,2], P0[2,3]*x1[0] - P0[0,3]],\n",
+    "                          [P0[2,0]*x1[1] - P0[1,0], P0[2,1]*x1[1] - P0[1,1], P0[2,2]*x1[1] - P0[1,2], P0[2,3]*x1[1] - P0[1,3]],\n",
+    "                          [P1[2,0]*x2[0] - P1[0,0], P1[2,1]*x2[0] - P1[0,1], P1[2,2]*x2[0] - P1[0,2], P1[2,3]*x2[0] - P1[0,3]],\n",
+    "                          [P1[2,0]*x2[1] - P1[1,0], P1[2,1]*x2[1] - P1[1,1], P1[2,2]*x2[1] - P1[1,2], P1[2,3]*x2[1] - P1[1,3]]])\n",
+    "            u, s, vt = np.linalg.svd(A)\n",
+    "            return vt[-1]\n",
+    "\n",
+    "        for i in range(len(self.images)-1):\n",
+    "            I1 = self.images[i]\n",
+    "            I2 = self.images[i+1]\n",
+    "            bf = cv2.BFMatcher()\n",
+    "            matches = bf.knnMatch(I1.descriptor, I2.descriptor, k=2)\n",
+    "\n",
+    "            # Apply ratio test (note: don't reuse the outer loop variable i here)\n",
+    "            good = []\n",
+    "            for m, n in matches:\n",
+    "                if m.distance < 0.7*n.distance:\n",
+    "                    good.append(m)\n",
+    "            u1 = []\n",
+    "            u2 = []\n",
+    "            for m in good:\n",
+    "                u1.append(I1.keypoints[m.queryIdx].pt)\n",
+    "                u2.append(I2.keypoints[m.trainIdx].pt)\n",
+    "\n",
+    "            # Pixel coordinates\n",
+    "            u1g = np.array(u1)\n",
+    "            u2g = np.array(u2)\n",
+    "\n",
+    "            # Make homogeneous\n",
+    "            u1h = np.c_[u1g, np.ones(u1g.shape[0])]  # image coords of keypoints\n",
+    "            u2h = np.c_[u2g, np.ones(u2g.shape[0])]\n",
+    "\n",
+    "            # Image center\n",
+    "            cv = I1.h/2\n",
+    "            cu = I1.w/2\n",
+    "\n",
+    "            # Get generalized camera coordinates\n",
+    "            K_cam = np.array([[I1.f, 0, cu], [0, I1.f, cv], [0, 0, 1]])\n",
+    "            K_inv = np.linalg.inv(K_cam)\n",
+    "            x1 = u1h @ K_inv.T\n",
+    "            x2 = u2h @ K_inv.T\n",
+    "\n",
+    "            # Generate the essential matrix, filtering outliers with RANSAC\n",
+    "            E, inliers = cv2.findEssentialMat(x1[:,:2], x2[:,:2], np.eye(3), method=cv2.RANSAC, threshold=1e-3)\n",
+    "            inliers = inliers.ravel().astype(bool)\n",
+    "            n_in, R, t, _ = cv2.recoverPose(E, x1[inliers,:2], x2[inliers,:2])\n",
+    "\n",
+    "            x1 = x1[inliers]\n",
+    "            x2 = x2[inliers]\n",
+    "\n",
+    "            # Relative pose between the two cameras\n",
+    "            if i == 0:\n",
+    "                I1.pose = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0]])  # first camera is canonical\n",
+    "            I2.pose = np.hstack((R, t))  # set the pose attribute, don't overwrite the image object\n",
+    "\n",
+    "            # Find X,Y,Z for all inlier SIFT keypoints\n",
+    "            for j in range(len(x1)):\n",
+    "                rwc = triangulate(I1.pose, I2.pose, x1[j], x2[j])\n",
+    "                self.pointCloud.append(rwc)  # homogeneous world coordinates\n",
+    "                I2.realWorldtoCam.append(x2[j])  # image coords paired with this world point\n",
+    "\n",
+    "        self.pointCloud = np.array(self.pointCloud)\n",
+    "        self.pointCloud = self.pointCloud.T / self.pointCloud[:,3]  # divide out W\n",
+    "        self.pointCloud = self.pointCloud[:-1,:]  # drop the homogeneous row\n",
+    "\n",
+    "    def plotPointCloud(self):\n",
+    "        #%matplotlib notebook\n",
+    "        fig = plt.figure()\n",
+    "        ax = fig.add_subplot(111, projection='3d')\n",
+    "        ax.plot(*self.pointCloud, 'k.')\n",
+    "        plt.show()\n"
+   ]
  }
 ],
 "metadata": {
@@ -55,7 +172,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.6.5"
+   "version": "3.6.7"
  }
 },
 "nbformat": 4,
diff --git a/project4.ipynb b/project4.ipynb
new file mode 100644
index 0000000..3138ca2
--- /dev/null
+++ b/project4.ipynb
@@ -0,0 +1,3591 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "import matplotlib.pyplot as plt\n",
+    "import PIL.Image\n",
+    "import matplotlib.image as mpimg\n",
+    "import scipy.optimize as so\n",
+    "import cv2\n",
+    "import piexif\n",
+    "import sys\n",
+    "from mpl_toolkits.mplot3d import Axes3D"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class Image(object):\n",
+    "    def __init__(self, img):\n",
+    "        # Store the image\n",
+    "        self.img = plt.imread(img)\n",
+    "        # Image description (EXIF)\n",
+    "        self.image = piexif.load(img)\n",
+    "        self.h = self.img.shape[0]  # image height\n",
+    "        self.w = self.img.shape[1]  # image width\n",
+    "        self.d = self.img.shape[2]  # color depth\n",
+    "        # 35mm-equivalent focal length converted to pixels (36 mm frame width)\n",
+    "        self.f = self.image['Exif'][piexif.ExifIFD.FocalLengthIn35mmFilm]/36*self.w\n",
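+    "\n",
+    "# Worked example of the conversion above (illustrative numbers): a photo that is\n",
+    "# 4000 px wide with a 28 mm 35mm-equivalent focal length gives\n",
+    "# f = 28/36*4000 = 3111.1 px, since a full-frame sensor is 36 mm wide.\n",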
+    "\n",
+    "class camClass(object):\n",
+    "    def __init__(self):\n",
+    "        self.images = []\n",
+    "        self.pointCloud = []  # real-world coords\n",
+    "\n",
+    "        self.x1 = []  # generalized coords\n",
+    "        self.x2 = []\n",
+    "\n",
+    "        self.u1 = []  # image pixel coords\n",
+    "        self.u2 = []\n",
+    "\n",
+    "        self.R = None  # rotation matrix\n",
+    "        self.t = None  # translation vector\n",
+    "\n",
+    "    def add_images(self, image):\n",
+    "        self.images.append(image)\n",
+    "\n",
+    "    def SIFT(self):\n",
+    "        # Read in images\n",
+    "        I1 = self.images[0]\n",
+    "        I2 = self.images[1]\n",
+    "        h, w, d, f = I1.h, I1.w, I1.d, I1.f\n",
+    "\n",
+    "        # Generate SIFT keypoints\n",
+    "        sift = cv2.xfeatures2d.SIFT_create()\n",
+    "        kp1, des1 = sift.detectAndCompute(I1.img, None)\n",
+    "        kp2, des2 = sift.detectAndCompute(I2.img, None)\n",
+    "        bf = cv2.BFMatcher()\n",
+    "        matches = bf.knnMatch(des1, des2, k=2)\n",
+    "\n",
+    "        # Apply ratio test\n",
+    "        good = []\n",
+    "        for m, n in matches:\n",
+    "            if m.distance < 0.7*n.distance:\n",
+    "                good.append(m)\n",
+    "        u1 = []\n",
+    "        u2 = []\n",
+    "        for m in good:\n",
+    "            u1.append(kp1[m.queryIdx].pt)\n",
+    "            u2.append(kp2[m.trainIdx].pt)\n",
+    "\n",
+    "        # Pixel-wise camera coordinates\n",
+    "        u1g = np.array(u1)\n",
+    "        u2g = np.array(u2)\n",
+    "\n",
+    "        # Make homogeneous\n",
+    "        u1h = np.c_[u1g, np.ones(u1g.shape[0])]\n",
+    "        u2h = np.c_[u2g, np.ones(u2g.shape[0])]\n",
+    "\n",
+    "        # Image center\n",
+    "        cv = h/2\n",
+    "        cu = w/2\n",
+    "\n",
+    "        # Generalized camera coordinates\n",
+    "        K_cam = np.array([[f, 0, cu], [0, f, cv], [0, 0, 1]])\n",
+    "        K_inv = np.linalg.inv(K_cam)\n",
+    "        x1 = u1h @ K_inv.T\n",
+    "        x2 = u2h @ K_inv.T\n",
+    "\n",
+    "        # Find the essential matrix, filtering outliers with RANSAC\n",
+    "        E, inliers = cv2.findEssentialMat(x1[:,:2], x2[:,:2], np.eye(3), method=cv2.RANSAC, threshold=1e-3)\n",
+    "        inliers = inliers.ravel().astype(bool)\n",
+    "        n_in, self.R, self.t, _ = cv2.recoverPose(E, x1[inliers,:2], x2[inliers,:2])\n",
+    "\n",
+    "        # Filtered generalized coordinates\n",
+    "        self.x1 = x1[inliers]\n",
+    "        self.x2 = x2[inliers]\n",
+    "\n",
+    "        # Convert back to camera pixel coordinates\n",
+    "        self.h1 = K_cam @ self.x1.T\n",
+    "        self.h2 = K_cam @ self.x2.T\n",
+    "\n",
+    "def triangulate(P0, P1, x1, x2):\n",
+    "    '''Return the homogeneous real-world coordinates (X,Y,Z,W) of a keypoint found in 2 images.\n",
+    "    P0 and P1 are 3x4 poses; x1 and x2 are the generalized coordinates of matched SIFT keypoints.'''\n",
+    "    A = np.array([[P0[2,0]*x1[0] - P0[0,0], P0[2,1]*x1[0] - P0[0,1], P0[2,2]*x1[0] - P0[0,2], P0[2,3]*x1[0] - P0[0,3]],\n",
+    "                  [P0[2,0]*x1[1] - P0[1,0], P0[2,1]*x1[1] - P0[1,1], P0[2,2]*x1[1] - P0[1,2], P0[2,3]*x1[1] - P0[1,3]],\n",
+    "                  [P1[2,0]*x2[0] - P1[0,0], P1[2,1]*x2[0] - P1[0,1], P1[2,2]*x2[0] - P1[0,2], P1[2,3]*x2[0] - P1[0,3]],\n",
+    "                  [P1[2,0]*x2[1] - P1[1,0], P1[2,1]*x2[1] - P1[1,1], P1[2,2]*x2[1] - P1[1,2], P1[2,3]*x2[1] - P1[1,3]]])\n",
+    "    u, s, vt = np.linalg.svd(A)\n",
+    "    return vt[-1]\n",
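+    "\n",
+    "# How triangulate() works: each view contributes two rows of A from the projection\n",
+    "# constraints u*(P[2].X) = P[0].X and v*(P[2].X) = P[1].X (a direct linear\n",
+    "# transform). The homogeneous system A X = 0 is solved in least squares by the\n",
+    "# right singular vector with the smallest singular value, i.e. vt[-1].\n",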
+    "\n",
+    "def genPointCloud(R, t, x1, x2):\n",
+    "    # Relative pose between two cameras\n",
+    "    P0 = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0]])  # first camera is canonical (R=I, t=0)\n",
+    "    P1 = np.hstack((R, t))\n",
+    "\n",
+    "    pointCloud = []\n",
+    "\n",
+    "    # Find X,Y,Z,W for all SIFT keypoints\n",
+    "    for i in range(len(x1)):\n",
+    "        pointCloud.append(triangulate(P0, P1, x1[i], x2[i]))  # list of homogeneous points\n",
+    "\n",
+    "    pointCloud = np.array(pointCloud)\n",
+    "    pointCloud = pointCloud.T / pointCloud[:,3]  # divide everything by W\n",
+    "    pointCloud = pointCloud[:-1,:]  # drop the homogeneous row (W=1)\n",
+    "    return pointCloud\n",
+    "\n",
+    "def plotPointCloud(rw_coords):\n",
+    "    #%matplotlib notebook\n",
+    "    fig = plt.figure()\n",
+    "    ax = fig.add_subplot(111, projection='3d')\n",
+    "    ax.plot(*rw_coords, 'k.')\n",
+    "    plt.show()\n",
+    "\n",
+    "class pose_estimation(object):\n",
+    "    def __init__(self):\n",
+    "        self.pose = None\n",
+    "\n",
+    "    def residual(self, pose, x2, x3, rwc_gcp):\n",
+    "        # pose = [R11,R12,R13,R21,R22,R23,R31,R32,R33,x,y,z]\n",
+    "        R = pose[:9].reshape(3,3)\n",
+    "        t = pose[9:].reshape(3,1)\n",
+    "\n",
+    "        # Pose 3 is the unknown we are optimizing for; we know image coords x2 and x3\n",
+    "        rwc_est = genPointCloud(R, t, x2, x3)\n",
+    "        residual = rwc_est - rwc_gcp.T\n",
+    "        return residual.ravel()\n",
+    "\n",
+    "    def estimate_pose(self, x2, x3, rwc_gcp):\n",
+    "        \"\"\"\n",
+    "        Adjust R and t so that the difference between the real-world coords computed\n",
+    "        from images 2 and 3 and those computed from images 1 and 2 is minimized.\n",
+    "        \"\"\"\n",
+    "        p_opt = so.least_squares(self.residual, self.pose, method='lm', args=(x2, x3, rwc_gcp))\n",
+    "        #self.pose = p_opt.x\n",
+    "        return p_opt\n",
+    "\n",
+    "def p_trans(pose):\n",
+    "    '''Split a 3x4 pose matrix into R and t'''\n",
+    "    R = pose[:,:3]\n",
+    "    t = pose[:,3:]  # already a (3,1) column\n",
+    "    return R, t"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "Image1 = Image('sword/sword_0.jpg')\n",
+    "Image2 = Image('sword/sword_1.jpg')\n",
+    "Image3 = Image('sword/sword_2.jpg')\n",
+    "\n",
+    "#############################\n",
+    "# Point cloud: image 1, image 2\n",
+    "#############################\n",
+    "keypts1 = camClass()\n",
+    "keypts1.add_images(Image1)\n",
+    "keypts1.add_images(Image2)\n",
+    "keypts1.SIFT()\n",
+    "\n",
+    "# Pixel-coordinate keypoints\n",
+    "h11 = keypts1.h1.T[:,:-1]  # keypoint coords (u,v) of image 1 from matching images 1 and 2\n",
+    "h12 = keypts1.h2.T[:,:-1]  # keypoint coords (u,v) of image 2 from matching images 1 and 2\n",
+    "\n",
+    "# Keypoints in generalized camera coordinates\n",
+    "x11 = keypts1.x1\n",
+    "x12 = keypts1.x2\n",
+    "\n",
+    "R12 = keypts1.R  # rotation matrix, image 1 to image 2\n",
+    "t12 = keypts1.t  # translation vector\n",
+    "\n",
+    "# Point cloud (images 1 and 2)\n",
+    "rwc_1 = genPointCloud(R12, t12, x11, x12)\n",
+    "\n",
+    "\n",
+    "#############################\n",
+    "# Point cloud: image 2, image 3\n",
+    "#############################\n",
+    "keypts2 = camClass()\n",
+    "keypts2.add_images(Image2)\n",
+    "keypts2.add_images(Image3)\n",
+    "keypts2.SIFT()\n",
+    "\n",
+    "# Pixel-coordinate keypoints\n",
+    "h22 = keypts2.h1.T[:,:-1]  # keypoint coords (u,v) of image 2 from matching images 2 and 3\n",
+    "h23 = keypts2.h2.T[:,:-1]  # keypoint coords (u,v) of image 3 from matching images 2 and 3\n",
+    "\n",
+    "# Keypoints in generalized camera coordinates\n",
+    "x22 = keypts2.x1\n",
+    "x23 = keypts2.x2\n",
+    "\n",
+    "R23 = keypts2.R  # rotation matrix, image 2 to image 3\n",
+    "t23 = keypts2.t  # translation vector"
+   ]
+  },
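+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The two pairwise reconstructions are linked through image 2: a keypoint of image 2 that appears in both matching rounds ties a world point from $\\mathbf{X}_{12}$ to an observation in image 3. The cell below finds these shared keypoints by scanning for equal $(u,v)$ coordinates; as a sketch, an equivalent O(N) dictionary lookup (assuming the SIFT coordinates are bitwise repeatable, so rounded tuples are safe keys) would be:\n",
+    "```python\n",
+    "lookup = {tuple(np.round(p, 3)): j for j, p in enumerate(h22)}\n",
+    "pairs = [(i, lookup[tuple(np.round(p, 3))]) for i, p in enumerate(h12)\n",
+    "         if tuple(np.round(p, 3)) in lookup]\n",
+    "```\n"
+   ]
+  },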
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Find image 3 keypoints that have a correspondence in image 1\n",
+    "keypts_idx = []\n",
+    "rwc_idx = []\n",
+    "\n",
+    "for i in range(h12.shape[0]):\n",
+    "    for j in range(h22.shape[0]):\n",
+    "        if np.allclose(h12[i], h22[j]):  # image 2 (u,v) coords equal between the two matching rounds\n",
+    "            keypts_idx.append(j)  # store that index\n",
+    "            rwc_idx.append(i)\n",
+    "            break\n",
+    "\n",
+    "# Filter out non-correspondences\n",
+    "x1_matched = x11[rwc_idx]     # image 1 coordinates for pose estimation\n",
+    "x2_matched = x22[keypts_idx]  # image 2 coordinates for pose estimation\n",
+    "x3_matched = x23[keypts_idx]  # image 3 coordinates for pose estimation\n",
+    "\n",
+    "rwc_matched = rwc_1.T[rwc_idx,:]  # real-world coordinates for pose estimation\n",
+    "\n",
+    "#h3_matched = h23[keypts_idx]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Estimate the pose of image 3\n",
+    "get_pose3 = pose_estimation()\n",
+    "\n",
+    "# Pose guess: compose P3' with P2 as rigid transforms, so that image 1 is canonical\n",
+    "P2 = np.hstack((R12, t12))\n",
+    "P3_prime = np.hstack((R23, t23))\n",
+    "P2h = np.vstack((P2, np.array([0, 0, 0, 1])))  # pad P2 to a 4x4 transform\n",
+    "pose3_guess = P3_prime @ P2h\n",
+    "R3_guess, t3_guess = p_trans(pose3_guess)\n",
+    "pose3_guess = np.append(R3_guess, t3_guess)\n",
+    "get_pose3.pose = pose3_guess\n",
+    "\n",
+    "# Optimize for pose\n",
+    "pose3 = get_pose3.estimate_pose(x2_matched, x3_matched, rwc_matched).x\n",
+    "\n",
+    "# Get the real-world coordinates with the optimized pose\n",
+    "R3 = pose3[:9].reshape(3,3)\n",
+    "t3 = pose3[9:].reshape(3,1)\n",
+    "rwc_23 = genPointCloud(R3, t3, x2_matched, x3_matched)\n",
+    "\n",
+    "%matplotlib notebook\n",
+    "fig = plt.figure()\n",
+    "ax = fig.add_subplot(111, projection='3d')\n",
+    "ax.plot(*rwc_matched.T, 'k.')\n",
+    "ax.plot(*rwc_23, 'r.')\n",
+    "plt.show()"
+   ]
+  },
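+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One caveat on the optimization above: `least_squares` treats the nine entries of $R$ as free parameters, so the optimized matrix is not guaranteed to remain orthogonal. A common fix (a sketch, not part of the assignment code, assuming `R3` from the cell above) is to project back onto the nearest rotation with an SVD:\n",
+    "```python\n",
+    "U, _, Vt = np.linalg.svd(R3)\n",
+    "R3 = U @ Vt  # nearest orthogonal matrix; if det(R3) < 0, flip the sign of U's last column\n",
+    "```\n"
+   ]
+  },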
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Test point correspondences\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "img1 = plt.imread('sword/sword_0.jpg')\n",
+    "ax.imshow(img1)\n",
+    "ax.plot(h11[rwc_idx][:,0], h11[rwc_idx][:,1], 'x')\n",
+    "plt.show()\n",
+    "\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "img2 = plt.imread('sword/sword_1.jpg')\n",
+    "ax.imshow(img2)\n",
+    "ax.plot(h12[rwc_idx][:,0], h12[rwc_idx][:,1], 'x')\n",
+    "ax.plot(h22[keypts_idx][:,0], h22[keypts_idx][:,1], 'r.')\n",
+    "plt.show()\n",
+    "\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "img3 = plt.imread('sword/sword_2.jpg')\n",
+    "ax.imshow(img3)\n",
+    "ax.plot(h23[keypts_idx][:,0], h23[keypts_idx][:,1], 'r.')\n",
+    "plt.show()\n",
+    "\n",
+    "\n",
+    "'''\n",
+    "class t_estimation(object):\n",
+    "    def __init__(self):\n",
+    "        self.t = None\n",
+    "\n",
+    "    def residual(self, t, R, x2, x3, rwc_12):\n",
+    "        t = t.reshape(3,1)\n",
+    "        # t is the unknown we are optimizing for; we know image coords x2 and x3\n",
+    "        rwc23 = genPointCloud(R, t, x2, x3)\n",
+    "        residual = rwc23 - rwc_12.T\n",
+    "        return residual.ravel()\n",
+    "\n",
+    "    def estimate_t(self, R, x2, x3, rwc_12):\n",
+    "        \"\"\"\n",
+    "        Adjust t (holding R fixed) so that the difference between the real-world coords\n",
+    "        computed from images 2 and 3 and those computed from images 1 and 2 is minimized.\n",
+    "        \"\"\"\n",
+    "        t_opt = so.least_squares(self.residual, self.t, method='lm', args=(R, x2, x3, rwc_12))\n",
+    "        return t_opt\n",
+    "\n",
+    "\n",
+    "# Estimate only the translation of image 3, keeping R from the composed guess\n",
+    "get_t3 = t_estimation()\n",
+    "\n",
+    "# Pose guess\n",
+    "P2 = np.hstack((R12, t12))\n",
+    "P3_prime = np.hstack((R23, t23))\n",
+    "\n",
+    "P2h = np.vstack((P2, np.array([0, 0, 0, 1])))\n",
+    "pose_guess = P3_prime @ P2h\n",
+    "R = pose_guess[:,:3]\n",
+    "t_guess = pose_guess[:,3:].flatten()\n",
+    "\n",
+    "# Optimize for t\n",
+    "get_t3.t = t_guess\n",
+    "\n",
+    "t3 = get_t3.estimate_t(R, x2_matched, x3_matched, rwc_matched).x\n",
+    "t3 = t3[:,np.newaxis]\n",
+    "\n",
+    "# Real-world coordinates\n",
+    "RWC_23 = genPointCloud(R, t3, x2_matched, x3_matched)\n",
+    "\n",
+    "%matplotlib notebook\n",
+    "fig = plt.figure()\n",
+    "ax = fig.add_subplot(111, projection='3d')\n",
+    "ax.plot(*rwc_matched.T, 'k.')\n",
+    "ax.plot(*RWC_23, 'r.')\n",
+    "plt.show()\n",
+    "'''"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/project4_final.ipynb b/project4_final.ipynb
new file mode 100644
index 0000000..3138ca2
--- /dev/null
+++ b/project4_final.ipynb
@@ -0,0 +1,3591 @@
self.images[1]\n", + " h,w,d,f = I1.h, I1.w, I1.d, I1.f\n", + " \n", + " # Generate SIFT key-points\n", + " sift = cv2.xfeatures2d.SIFT_create()\n", + " kp1,des1 = sift.detectAndCompute(I1.img, None)\n", + " kp2,des2 = sift.detectAndCompute(I2.img, None)\n", + " bf = cv2.BFMatcher()\n", + " matches = bf.knnMatch(des1,des2,k=2)\n", + "\n", + " # Apply ratio test\n", + " good = []\n", + " for i,(m,n) in enumerate(matches):\n", + " if m.distance < 0.7*n.distance:\n", + " good.append(m)\n", + " u1 = []\n", + " u2 = []\n", + " for m in good:\n", + " u1.append(kp1[m.queryIdx].pt)\n", + " u2.append(kp2[m.trainIdx].pt)\n", + "\n", + " # Pixel-wise Camera Coordinates\n", + " u1g = np.array(u1)\n", + " u2g = np.array(u2)\n", + "\n", + " # Make Homogeneous \n", + " u1h = np.c_[u1g,np.ones(u1g.shape[0])]\n", + " u2h = np.c_[u2g,np.ones(u2g.shape[0])]\n", + " \n", + " # Image Center\n", + " cv = h/2\n", + " cu = w/2\n", + "\n", + " # Generalized Camera Coordinates\n", + " K_cam = np.array([[f,0,cu],[0,f,cv],[0,0,1]])\n", + " K_inv = np.linalg.inv(K_cam)\n", + " x1 = u1h @ K_inv.T\n", + " x2 = u2h @ K_inv.T\n", + " \n", + " # Find Essential Matrix\n", + " E, inliers = cv2.findEssentialMat(x1[:,:2],x2[:,:2],np.eye(3),method=cv2.RANSAC,threshold=1e-3)\n", + " inliers = inliers.ravel().astype(bool)\n", + " n_in, self.R, self.t,_ = cv2.recoverPose(E,x1[inliers,:2],x2[inliers,:2])\n", + " \n", + " # Filtered Generalized Coordinates\n", + " self.x1 = x1[inliers==True]\n", + " self.x2 = x2[inliers==True]\n", + " \n", + " # Convert Back to Camera Pixel Coordinats\n", + " self.h1 = K_cam@self.x1.T\n", + " self.h2 = K_cam@self.x2.T\n", + "\n", + "def triangulate(P0,P1,x1,x2):\n", + " '''This function returns the real-world coordinates (X,Y,X,W) of key points found in 2 images\n", + " P0 and P1 are poses, x1 and x2 are SIFT key-points'''\n", + " A = np.array([[P0[2,0]*x1[0] - P0[0,0], P0[2,1]*x1[0] - P0[0,1], P0[2,2]*x1[0] - P0[0,2], P0[2,3]*x1[0] - P0[0,3]], [P0[2,0]*x1[1] - P0[1,0], P0[2,1]*x1[1] - P0[1,1], P0[2,2]*x1[1] - P0[1,2], P0[2,3]*x1[1] - P0[1,3]],\n", + " [P1[2,0]*x2[0] - P1[0,0], P1[2,1]*x2[0] - P1[0,1], P1[2,2]*x2[0] - P1[0,2], P1[2,3]*x2[0] - P1[0,3]],\n", + " [P1[2,0]*x2[1] - P1[1,0], P1[2,1]*x2[1] - P1[1,1], P1[2,2]*x2[1] - P1[1,2], P1[2,3]*x2[1] - P1[1,3]]])\n", + " u,s,vt = np.linalg.svd(A)\n", + " return vt[-1]\n", + " \n", + "def genPointCloud(R, t, x1, x2):\n", + " # Relative pose between two cameras\n", + " P0 = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0]]) # First camera has canonical rotation and t=0 \n", + " P1 = np.hstack((R, t))\n", + " \n", + " pointCloud = []\n", + " \n", + " # Find X,Y,Z,W for all SIFT Keypoints\n", + " for i in range(len(x1)):\n", + " pointCloud.append(triangulate(P0, P1, x1[i], x2[i])) # Appends to list of points in xyz coordinates\n", + "\n", + " pointCloud = np.array(pointCloud)\n", + " pointCloud = pointCloud.T / pointCloud[:,3] # Divide everything by W \n", + " pointCloud = pointCloud[:-1,:] # Drop the last column (W=1)\n", + " return pointCloud \n", + " \n", + "def plotPointCloud(rw_coords):\n", + " #%matplotlib notebook\n", + " fig = plt.figure()\n", + " ax = fig.add_subplot(111,projection='3d')\n", + " ax.plot(*rw_coords,'k.') \n", + " plt.show()\n", + " \n", + "class pose_esimation(object):\n", + " def __init__(self):\n", + " self.pose = None \n", + " \n", + " def residual(self, pose, x2, x3, rwc_gcp):\n", + " #pose = [R11,R12,R13,R21,R22,R23,R31,R32,R33,x,y,z]\n", + " R = pose[:9].reshape(3,3)\n", + " t = pose[9:].reshape(3,1)\n", + " \n", + " rwc_est = 
+    "class pose_estimation(object):\n",
+    "    def __init__(self):\n",
+    "        self.pose = None    \n",
+    "    \n",
+    "    def residual(self, pose, x2, x3, rwc_gcp):\n",
+    "        # pose = [R11,R12,R13,R21,R22,R23,R31,R32,R33,x,y,z]\n",
+    "        R = pose[:9].reshape(3,3)\n",
+    "        t = pose[9:].reshape(3,1)\n",
+    "        \n",
+    "        # Pose 3 is the unknown we are optimizing for; the image coords x2 and x3 are known\n",
+    "        rwc_est = genPointCloud(R, t, x2, x3)\n",
+    "        residual = rwc_est - rwc_gcp.T\n",
+    "        return residual.ravel()\n",
+    "    \n",
+    "    def estimate_pose(self, x2, x3, rwc_gcp):\n",
+    "        \"\"\"\n",
+    "        Adjust R and t so that the difference between the world coordinates triangulated\n",
+    "        from images 2 and 3 and those already triangulated from images 1 and 2 is minimized.\n",
+    "        \"\"\"\n",
+    "        p_opt = so.least_squares(self.residual, self.pose, method='lm', args=(x2,x3,rwc_gcp))\n",
+    "        return p_opt\n",
+    "\n",
+    "def p_trans(pose):\n",
+    "    '''Split a 3x4 pose matrix into R (3x3) and t (3x1)'''\n",
+    "    R = pose[:,:3]\n",
+    "    t = pose[:,3:]\n",
+    "    return R, t"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "Image1 = Image('sword/sword_0.jpg')\n",
+    "Image2 = Image('sword/sword_1.jpg')\n",
+    "Image3 = Image('sword/sword_2.jpg')\n",
+    "\n",
+    "#############################\n",
+    "# Point Cloud image 1 image 2\n",
+    "#############################\n",
+    "keypts1 = camClass()\n",
+    "keypts1.add_images(Image1)\n",
+    "keypts1.add_images(Image2)\n",
+    "keypts1.SIFT()\n",
+    "\n",
+    "# Pixel Coord Keypoints\n",
+    "h11 = keypts1.h1.T[:,:-1]  # Keypoint coords (u,v) of Image 1 based on matching between image 1 and 2\n",
+    "h12 = keypts1.h2.T[:,:-1]  # Keypoint coords (u,v) of Image 2 based on matching between image 1 and 2\n",
+    "\n",
+    "# Keypoints generalized cam coordinates\n",
+    "x11 = keypts1.x1 \n",
+    "x12 = keypts1.x2\n",
+    "\n",
+    "R12 = keypts1.R  # Rotation of camera 2 relative to camera 1\n",
+    "t12 = keypts1.t  # Translation of camera 2 relative to camera 1\n",
+    "\n",
+    "# PointCloud (Image 1 and 2)\n",
+    "rwc_1 = genPointCloud(R12, t12, x11, x12)\n",
+    "\n",
+    "\n",
+    "#############################\n",
+    "# Point Cloud image 2 image 3\n",
+    "#############################\n",
+    "keypts2 = camClass()\n",
+    "keypts2.add_images(Image2)\n",
+    "keypts2.add_images(Image3)\n",
+    "keypts2.SIFT()\n",
+    "\n",
+    "# Pixel Coord Keypoints\n",
+    "h22 = keypts2.h1.T[:,:-1]  # Keypoint coords (u,v) of Image 2 based on matching between image 2 and 3\n",
+    "h23 = keypts2.h2.T[:,:-1]  # Keypoint coords (u,v) of Image 3 based on matching between image 2 and 3\n",
+    "\n",
+    "# Keypoints generalized cam coordinates\n",
+    "x22 = keypts2.x1 \n",
+    "x23 = keypts2.x2\n",
+    "\n",
+    "R23 = keypts2.R  # Rotation of camera 3 relative to camera 2\n",
+    "t23 = keypts2.t  # Translation of camera 3 relative to camera 2"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Find image 3 key points that have a correspondence in image 1, via keypoints shared in image 2\n",
+    "keypts_idx = []\n",
+    "rwc_idx = []\n",
+    "\n",
+    "for i in range(h12.shape[0]):\n",
+    "    for j in range(h22.shape[0]):\n",
+    "        if (np.allclose(h12[i], h22[j])):  # If image 2 (u,v) coordinates agree between the two matching iterations\n",
+    "            keypts_idx.append(j)  # Store that index\n",
+    "            rwc_idx.append(i)\n",
+    "            break\n",
+    "\n",
+    "# Filter out non-correspondences\n",
+    "x1_matched = x11[rwc_idx]      # Image 1 coordinates for pose estimation\n",
+    "x2_matched = x22[keypts_idx]   # Image 2 coordinates for pose estimation\n",
+    "x3_matched = x23[keypts_idx]   # Image 3 coordinates for pose estimation\n",
+    "\n",
+    "rwc_matched = rwc_1.T[rwc_idx,:]  # Real world coordinates for pose estimation\n",
+    "\n",
+    "#h3_matched = h23[keypts_idx] "
+   ]
+  },
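+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The next cell needs an initial guess for the pose of camera 3. Chaining the pose of camera 2 (relative to camera 1) with the pose of camera 3 (relative to camera 2) is a *matrix product* of the homogeneous transforms, not an element-wise product. Below is a minimal sketch of that composition; the helper name `compose_poses` is ours, introduced only for illustration:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: compose two relative poses via 4x4 homogeneous matrices.\n",
+    "# x_cam2 = R12 @ x_world + t12 and x_cam3 = R23 @ x_cam2 + t23 chain to\n",
+    "# x_cam3 = (R23 @ R12) @ x_world + (R23 @ t12 + t23).\n",
+    "def compose_poses(R12, t12, R23, t23):\n",
+    "    T12 = np.vstack((np.hstack((R12, t12)), [0, 0, 0, 1]))  # world -> camera 2\n",
+    "    T23 = np.vstack((np.hstack((R23, t23)), [0, 0, 0, 1]))  # camera 2 -> camera 3\n",
+    "    T13 = T23 @ T12                                         # world -> camera 3\n",
+    "    return T13[:3, :3], T13[:3, 3:]                         # R (3x3), t (3x1)"
+   ]
+  },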
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Estimate the pose of image 3\n",
+    "get_pose3 = pose_estimation()\n",
+    "\n",
+    "# Pose Guess: compose the two relative poses (a matrix product of the homogeneous\n",
+    "# transforms; an element-wise product would be meaningless here)\n",
+    "P2 = np.hstack((R12, t12)) \n",
+    "P3_prime = np.hstack((R23, t23))\n",
+    "pose3_guess = P3_prime @ np.vstack((P2, [0,0,0,1]))  # 3x4 matrix [R23@R12 | R23@t12 + t23]\n",
+    "R3_guess, t3_guess = p_trans(pose3_guess)\n",
+    "pose3_guess = np.append(R3_guess, t3_guess)  # flatten to [R11,...,R33,x,y,z]\n",
+    "get_pose3.pose = pose3_guess\n",
+    "\n",
+    "# Optimize for pose\n",
+    "pose3 = get_pose3.estimate_pose(x2_matched, x3_matched, rwc_matched).x\n",
+    "\n",
+    "# Get the Real World Coordinates with optimized pose\n",
+    "R3 = pose3[:9].reshape(3,3)\n",
+    "t3 = pose3[9:].reshape(3,1)\n",
+    "rwc_23 = genPointCloud(R3, t3, x2_matched, x3_matched)\n",
+    "\n",
+    "%matplotlib notebook\n",
+    "fig = plt.figure()\n",
+    "ax = fig.add_subplot(111,projection='3d')\n",
+    "ax.plot(*rwc_matched.T,'k.') \n",
+    "ax.plot(*rwc_23, 'r.')\n",
+    "plt.show()"
+   ]
+  },
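+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "With a pose for camera 3 in hand, the two-view DLT in `triangulate` above generalizes directly: each additional view of a point just stacks two more rows onto the homogeneous system. Below is a sketch of an N-view version, assuming the point is visible in every supplied view (`triangulate_nview` is our name, not part of the assignment code):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: triangulation from N >= 2 views by stacking two DLT rows per camera.\n",
+    "# Ps is a list of 3x4 projection matrices; xs is the list of corresponding\n",
+    "# generalized image coordinates of one point, in the same order.\n",
+    "def triangulate_nview(Ps, xs):\n",
+    "    rows = []\n",
+    "    for P, x in zip(Ps, xs):\n",
+    "        rows.append(x[0]*P[2] - P[0])  # u * (3rd row of P) - (1st row of P)\n",
+    "        rows.append(x[1]*P[2] - P[1])  # v * (3rd row of P) - (2nd row of P)\n",
+    "    u, s, vt = np.linalg.svd(np.array(rows))\n",
+    "    X = vt[-1]           # null-space direction of the stacked system\n",
+    "    return X[:3] / X[3]  # dehomogenize to (X, Y, Z)\n",
+    "\n",
+    "# e.g. for the i-th shared point:\n",
+    "# triangulate_nview([np.hstack((np.eye(3), np.zeros((3,1)))), P2, np.hstack((R3, t3))],\n",
+    "#                   [x1_matched[i], x2_matched[i], x3_matched[i]])"
+   ]
+  },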
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Test point Correspondences: overlay the shared keypoints on each image\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "ax.imshow(Image1.img)  # reuse the already-loaded images\n",
+    "ax.plot(h11[rwc_idx][:,0], h11[rwc_idx][:,1], 'x')\n",
+    "plt.show()\n",
+    "\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "ax.imshow(Image2.img)\n",
+    "ax.plot(h12[rwc_idx][:,0], h12[rwc_idx][:,1], 'x')\n",
+    "ax.plot(h22[keypts_idx][:,0], h22[keypts_idx][:,1], 'r.')\n",
+    "plt.show()\n",
+    "\n",
+    "fig, ax = plt.subplots(ncols=1, nrows=1)\n",
+    "ax.imshow(Image3.img)\n",
+    "ax.plot(h23[keypts_idx][:,0], h23[keypts_idx][:,1], 'r.')\n",
+    "plt.show()"
+   ]
+  },
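+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a numerical complement to the visual check above (not part of the original pipeline), one could project the triangulated points back through an estimated camera and measure how far the projections land from the matched keypoints. A minimal sketch, assuming the world points are a 3xN array in camera-1 coordinates and the image points are Nx3 generalized coordinates:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Sketch: mean reprojection error of world points X (3xN) through a 3x4\n",
+    "# projection P = [R|t], compared against generalized coordinates x (Nx3).\n",
+    "def reprojection_error(P, X, x):\n",
+    "    X_h = np.vstack((X, np.ones(X.shape[1])))  # homogenize: 4xN\n",
+    "    proj = P @ X_h                             # project: 3xN\n",
+    "    proj = proj[:2] / proj[2]                  # perspective divide\n",
+    "    return np.mean(np.linalg.norm(proj - x.T[:2], axis=0))\n",
+    "\n",
+    "# e.g. the image-3 pose against the shared points:\n",
+    "# print(reprojection_error(np.hstack((R3, t3)), rwc_matched.T, x3_matched))"
+   ]
+  },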
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.6.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/sword/sword_0.jpg b/sword/sword_0.jpg
new file mode 100644
index 0000000..965ecff
Binary files /dev/null and b/sword/sword_0.jpg differ
diff --git a/sword/sword_1.jpg b/sword/sword_1.jpg
new file mode 100644
index 0000000..e3536a8
Binary files /dev/null and b/sword/sword_1.jpg differ
diff --git a/sword/sword_2.jpg b/sword/sword_2.jpg
new file mode 100644
index 0000000..b5a814f
Binary files /dev/null and b/sword/sword_2.jpg differ