Image distortion occurs when a camera maps 3D objects in the real world onto a 2D image; this transformation isn't perfect. Distortion changes the apparent shape and size of those 3D objects. So the first step in analyzing camera images is to undo this distortion, so that you can get correct and useful information out of them.
I used a chessboard pattern and took 36 pictures of it.
Count the number of corners in any given row, and similarly in any given column. Keep in mind that "corners" are only points where two black and two white squares intersect; in other words, count only inside corners, not outside corners.
Used the OpenCV function findChessboardCorners() to find the corners; it takes a grayscale image (convert with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)).
objpoints = [] # 3D points in real-world space
imgpoints = [] # 2D points in the image plane

objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8,0:6].T.reshape(-1,2) # x, y coordinates

objpoints.append(objp)
imgpoints.append(corners) # corners returned by findChessboardCorners()
Used the OpenCV function drawChessboardCorners() to draw the detected corners onto an image of the chessboard pattern.
To learn more about both of these functions, see the OpenCV documentation: [cv2.findChessboardCorners()](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#cv2.findChessboardCorners) and [cv2.drawChessboardCorners()](https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#cv2.drawChessboardCorners).
cv2.calibrateCamera() takes objpoints and imgpoints as arguments and returns the camera matrix (mtx) and distortion coefficients (dist) that we need to transform 3D object points to 2D image points. It also returns the position of the camera in the world, as rotation (rvecs) and translation (tvecs) vectors.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
dst = cv2.undistort(img, mtx, dist, None, mtx)