Depth Estimation and 3D Reconstruction
Depth Estimation¶
Stereo Matching¶
For a given stereo camera, suppose the relative pose of the two lenses is known. Then the procedure to compute depth is:

1. Find 2D-2D correspondences.
2. Triangulate.
Stereo matching addresses the first step: it finds the correspondences of all pixels efficiently.
Stereo Image Rectification¶
For every 3D point, denote its projections onto the left and right image planes by \(X_L\) and \(X_R\).
Given \(X_L\), the epipolar line on the right image plane is \(l_R = F X_L\), where \(F\) is the fundamental matrix; the corresponding point satisfies \(X_R^\top F X_L = 0\).
When the following conditions are satisfied:

- the image planes of the two cameras are parallel to each other and to the baseline,
- the camera centers are at the same height,
- the focal lengths are the same,

then corresponding points have the same vertical coordinate, i.e. \(y_1 = y_2\). Therefore, the epipolar lines fall along the horizontal scan lines of the images.
Let \(D = x_2 - x_1\) be the disparity of pixel \((x_1, y_1)\), \(B\) be the length of the baseline, and \(f\) be the focal length. Then the depth is

$$z = \frac{fB}{D}$$
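As a quick illustration of this relation, here is a minimal Python/NumPy sketch (the function name and the handling of zero disparity are my own choices) that converts a disparity map into a depth map, assuming \(f\) is given in pixels and \(B\) in metric units:

```python
import numpy as np

def disparity_to_depth(disparity, f, B, eps=1e-6):
    """Convert a disparity map (in pixels) to a depth map via z = f * B / D.

    f: focal length in pixels, B: baseline length (e.g. in meters).
    Pixels with (near-)zero disparity are assigned infinite depth.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)
    valid = np.abs(disparity) > eps
    depth[valid] = f * B / np.abs(disparity[valid])
    return depth
```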
When these conditions are not satisfied, we can simply use two homographies to reproject the image planes onto a common plane parallel to the line between camera centers.
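In practice this rectification is usually done with library routines. A minimal sketch using OpenCV, assuming the intrinsics `K1`, `K2`, distortion coefficients `d1`, `d2`, and the relative pose `R`, `T` of the second camera are already known from calibration:

```python
import cv2

def rectify_pair(img_left, img_right, K1, d1, K2, d2, R, T):
    """Warp a stereo pair so that epipolar lines become horizontal scan lines."""
    size = (img_left.shape[1], img_left.shape[0])          # (width, height)
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    left_rect = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return left_rect, right_rect, Q       # Q reprojects disparities back to 3D
```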
Stereo Matching Algorithms¶
Window-based Stereo Matching¶
We can find the match of a point along the scanline by locally searching for the point that minimizes the dissimilarity.
To measure the dissimilarity between two points, we compute a matching score over all the pixels in the windows centered at the two points.
There are many kinds of matching scores, such as:
- SSD (Sum of Squared Differences):

$$\sum_{x,y}\left(W_1(x,y) - W_2(x,y)\right)^2$$

- SAD (Sum of Absolute Differences):

$$\sum_{x,y}\left|W_1(x,y) - W_2(x,y)\right|$$

- ZNCC (Zero-mean Normalized Cross Correlation):

$$\frac{1}{n}\sum_{x,y}\frac{\left(W_1(x,y) - \overline{W}_1\right)\left(W_2(x,y) - \overline{W}_2\right)}{\sigma_{W_1}\sigma_{W_2}}$$

where \(W_1, W_2\) are the two windows, \(\displaystyle \overline{W}_i = \frac{1}{n}\sum_{x,y}W_i\), \(\displaystyle \sigma_{W_i} = \sqrt{\frac{1}{n} \sum_{x,y}(W_i - \overline{W}_i)^2}\), and \(n\) is the number of pixels in a window. Subtracting the mean and normalizing by the standard deviation reduces the influence of brightness changes.
A smaller window preserves more detail but is more sensitive to noise; a larger window suppresses noise but loses detail.
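For concreteness, here is a minimal sketch of window-based matching on a rectified pair, using SSD as the matching score; the search range `max_disp`, the window half-size, and the assumption that the match lies to the left in the right image are illustrative choices:

```python
import numpy as np

def match_pixel_ssd(left, right, y, x, max_disp, half=3):
    """Disparity of pixel (x, y) by window-based SSD search along scanline y.

    Assumes a rectified pair and that (x, y) is far enough from the image
    border for the window to fit; SAD or ZNCC could replace SSD as the score.
    """
    wl = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):
        xr = x - d                     # candidate match position in the right image
        if xr - half < 0:
            break
        wr = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(float)
        cost = np.sum((wl - wr) ** 2)  # SSD over the two windows
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```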
MRF-based Stereo Matching¶
Define the energy function as

$$E(d) = E_d(d) + \lambda E_s(d)$$

where \(d\) is the disparity map assigning a disparity to each pixel, and \(\lambda\) weights the smoothness term.
The match cost is \(E_d(d) = \sum_{(x,y)\in I} C(x, y, d(x, y))\), where \(C\) can be SSD, SAD, ZNCC, etc. The smoothness cost is \(E_s(d) = \sum_{(p,q)\in \epsilon} V(d_p, d_q)\), where \(\epsilon\) is the set of pairs of neighboring pixels and \(V\) can be the \(L_1\) distance

$$V(d_p, d_q) = |d_p - d_q|$$
or the "Potts model"

$$V(d_p, d_q) = \begin{cases} 0, & d_p = d_q \\ 1, & d_p \neq d_q \end{cases}$$
Then we can get a better match by minimizing the global cost, i.e., performing discrete optimization on this energy function.
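As an illustration, here is a small sketch that evaluates this energy for a given integer disparity map, assuming the matching costs have been pre-computed into a `cost_volume[y, x, d]` array and using either the Potts model or the \(L_1\) distance as \(V\):

```python
import numpy as np

def mrf_energy(cost_volume, disparity, lam=1.0, potts=True):
    """Energy E(d) = E_d(d) + lambda * E_s(d) of an integer disparity map.

    cost_volume[y, x, d] holds the matching cost C(x, y, d) (e.g. SSD);
    the smoothness term V is evaluated on 4-connected neighboring pixels.
    """
    disparity = np.asarray(disparity, dtype=int)
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    data = cost_volume[ys, xs, disparity].sum()             # E_d(d)

    dh = disparity[:, 1:] - disparity[:, :-1]                # horizontal neighbor pairs
    dv = disparity[1:, :] - disparity[:-1, :]                # vertical neighbor pairs
    if potts:
        smooth = np.count_nonzero(dh) + np.count_nonzero(dv)  # Potts: penalize any change
    else:
        smooth = np.abs(dh).sum() + np.abs(dv).sum()           # L1 distance
    return data + lam * smooth
```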
Multi-View Stereo (MVS)¶
To get a proper depth value for a point in the reference image using multiple views, the basic procedure is
- Assume a depth value and get the 3D coordinates.
- Project the coordinates to other cameras and compute the total error.
- For each point in the reference image, compute the error for each depth value, and find the depth value that gives the smallest error.
Plane-Sweep¶
This method computes the errors of all pixels at all depths efficiently. Its steps are
- Assume the same depth for all pixels in the reference image; sweeping this depth gives a family of planes parallel to the reference camera's image plane.
- Project each plane onto the neighboring views via a homography and compute the errors; this gives a cost volume, a 3D array that stores the errors of all pixels at all depths.
- Get the depth map from the cost volume, e.g. by picking, for each pixel, the depth with the smallest error (see the sketch below).
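A sketch of building such a cost volume with fronto-parallel planes and OpenCV homography warps; the shared intrinsics `K`, the 4x4 world-to-camera poses, and the plain absolute-difference photometric error are simplifying assumptions:

```python
import numpy as np
import cv2

def plane_sweep_cost_volume(ref, neighbors, K, ref_pose, nbr_poses, depths):
    """Cost volume cost[i, y, x]: error of assigning depths[i] to pixel (x, y).

    ref, neighbors: grayscale images; K: shared 3x3 intrinsics;
    poses: 4x4 world-to-camera matrices; depths: candidate depth values.
    """
    h, w = ref.shape
    ref_f = ref.astype(np.float32)
    cost = np.zeros((len(depths), h, w), dtype=np.float32)
    n = np.array([[0.0, 0.0, 1.0]])                  # fronto-parallel plane normal (ref frame)
    for img, pose in zip(neighbors, nbr_poses):
        rel = pose @ np.linalg.inv(ref_pose)          # reference camera -> neighbor camera
        R, t = rel[:3, :3], rel[:3, 3:4]
        for i, d in enumerate(depths):
            # Homography induced by the plane z = d in the reference frame:
            # x_nbr ~ K (R + t n^T / d) K^{-1} x_ref
            H = K @ (R + t @ n / d) @ np.linalg.inv(K)
            # Sample the neighbor image at the positions predicted for each ref pixel.
            warped = cv2.warpPerspective(img.astype(np.float32), H, (w, h),
                                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            cost[i] += np.abs(ref_f - warped)         # accumulate photometric error
    return cost / len(neighbors)

# Usage: volume = plane_sweep_cost_volume(...)
#        depth_map = np.array(depths)[np.argmin(volume, axis=0)]
```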
PatchMatch¶
It's an efficient algorithm for solving correspondence problems. Its steps are
- Random Initialization: Each pixel is given a random patch offset as initialization.
- Propagation: Each pixel checks whether the offsets of neighboring patches give a better matching patch. If so, it adopts the neighbor's patch offset.
- Random Search: Each pixel searches for better patch offsets within concentric radii around its current offset. The search radius starts at the size of the image and is halved each time until it reaches 1.
- Go back to Step 2 and repeat until convergence (a compact sketch of the whole loop follows).
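A compact sketch of these steps for a dense nearest-neighbour field between two grayscale images; the SSD patch cost, patch size, and iteration count are illustrative choices, and a practical implementation would be far more optimized:

```python
import numpy as np

def patch_cost(a, b, ay, ax, by, bx, p):
    """SSD between the p x p patches of a at (ay, ax) and b at (by, bx)."""
    return np.sum((a[ay:ay+p, ax:ax+p] - b[by:by+p, bx:bx+p]) ** 2)

def patchmatch(a, b, p=7, iters=5, seed=0):
    """Per-pixel offsets (dy, dx): patch at (y, x) in a matches (y+dy, x+dx) in b."""
    rng = np.random.default_rng(seed)
    h, w = a.shape[0] - p + 1, a.shape[1] - p + 1        # valid patch positions in a
    hb, wb = b.shape[0] - p + 1, b.shape[1] - p + 1      # valid patch positions in b
    off = np.zeros((h, w, 2), dtype=int)
    cost = np.full((h, w), np.inf)

    def try_offset(y, x, dy, dx):
        """Keep the candidate offset if it lowers the matching cost."""
        by, bx = np.clip(y + dy, 0, hb - 1), np.clip(x + dx, 0, wb - 1)
        c = patch_cost(a, b, y, x, by, bx, p)
        if c < cost[y, x]:
            off[y, x] = (by - y, bx - x)
            cost[y, x] = c

    # Step 1: random initialization of the offset field.
    for y in range(h):
        for x in range(w):
            try_offset(y, x, rng.integers(0, hb) - y, rng.integers(0, wb) - x)

    for it in range(iters):
        # Alternate the scan order so information propagates in both directions.
        order = range(h * w) if it % 2 == 0 else range(h * w - 1, -1, -1)
        step = 1 if it % 2 == 0 else -1
        for i in order:
            y, x = divmod(i, w)
            # Step 2: propagation -- adopt a neighbor's offset if it is better.
            for py, px in ((y - step, x), (y, x - step)):
                if 0 <= py < h and 0 <= px < w:
                    try_offset(y, x, *off[py, px])
            # Step 3: random search -- radius starts at the image size, halves until 1.
            r = max(hb, wb)
            while r >= 1:
                try_offset(y, x, off[y, x, 0] + rng.integers(-r, r + 1),
                           off[y, x, 1] + rng.integers(-r, r + 1))
                r //= 2
    return off
```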
3D Reconstruction¶
3D Representations¶
The methods for 3D representation include:

- Point Cloud
- Volume
    - Occupancy
    - Signed Distance
- Mesh
Occupancy

- Each voxel stores a binary value indicating whether it is inside the object.
Signed Distance
- Signed Distance Function (SDF): The distance of a point to the shape boundary, with a sign indicating whether the point is inside or outside.
- Truncated Signed Distance Function (TSDF): Truncate the SDF value to \([-1, 1]\) (a small sketch follows).
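A one-function sketch of this truncation, assuming the SDF values are divided by a truncation distance `trunc` before clamping so that the result lies in \([-1, 1]\):

```python
import numpy as np

def tsdf(sdf, trunc):
    """Truncated signed distance: scale by the truncation distance and clamp to [-1, 1]."""
    return np.clip(np.asarray(sdf, dtype=float) / trunc, -1.0, 1.0)
```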
3D Surface Reconstruction¶
The basic procedure is
- Use depth maps to get an occupancy or TSDF volume through Poisson reconstruction, KinectFusion, etc.
- Use the occupancy or TSDF volume to get a mesh through Marching Cubes, etc.
Poisson Reconstruction¶
This method can convert depth maps into a 3D volume. Its steps are
- Convert depth map to point cloud.
- Compute a normal vector for each point, and represent the oriented points by a vector field \(\vec{V}\).
- Represent the surface by the indicator (occupancy) function \(\mathcal{X}\), which is 1 inside the shape and 0 outside.
Then find the function \(\mathcal{X}\) whose gradient best approximates \(\vec{V}\) by minimizing

$$\min_{\mathcal{X}} \left\|\nabla \mathcal{X} - \vec{V}\right\|^2$$
- Solve this by the Poisson equation \(\Delta \mathcal{X} = \nabla \cdot \vec{V}\), obtained by setting the derivative of the objective to zero (a minimal grid-based sketch follows).
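A minimal grid-based sketch of this last step, assuming \(\vec{V}\) has already been sampled onto a voxel grid with unit spacing, zero boundary values, and a plain Jacobi iteration in place of the multigrid solvers used in practice:

```python
import numpy as np

def poisson_indicator(vx, vy, vz, iters=500):
    """Solve laplacian(chi) = div(V) for the indicator-like function chi.

    vx, vy, vz: components of the oriented-point vector field V, each of shape
    (nx, ny, nz). The surface can then be extracted as an iso-surface of chi
    (e.g. with Marching Cubes).
    """
    # Divergence of V (unit grid spacing).
    div = np.gradient(vx, axis=0) + np.gradient(vy, axis=1) + np.gradient(vz, axis=2)
    chi = np.zeros_like(div)
    for _ in range(iters):
        # Jacobi update of the discrete Poisson equation on interior voxels.
        chi[1:-1, 1:-1, 1:-1] = (
            chi[:-2, 1:-1, 1:-1] + chi[2:, 1:-1, 1:-1] +
            chi[1:-1, :-2, 1:-1] + chi[1:-1, 2:, 1:-1] +
            chi[1:-1, 1:-1, :-2] + chi[1:-1, 1:-1, 2:] -
            div[1:-1, 1:-1, 1:-1]
        ) / 6.0
    return chi
```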
Marching Cubes¶
This method can extract 3D surface (mesh) from volumetric representation. Its steps are
- For each grid cell with a sign change, create one vertex on each grid edge that has a sign change, and record which corners are inside as the sign configuration (index) of the grid cell (see the sketch after this list).
- Connect vertices by triangles using the pre-computed look-up table and make sure that triangles do not intersect.
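A small sketch of the bookkeeping in the first step: computing the 8-bit sign-configuration index of every grid cell (the corner ordering is an arbitrary choice here, and the triangle look-up table itself is omitted):

```python
import numpy as np

def cell_configurations(volume, iso=0.0):
    """Compute the 8-bit sign configuration of every grid cell of a scalar volume.

    Bit k is set when corner k of the cell lies inside the surface (value < iso);
    the resulting index 0..255 selects the triangle pattern in the marching-cubes
    look-up table.
    """
    inside = (volume < iso).astype(np.uint8)
    # The 8 corners of each cell, in a fixed order (bits 0..7).
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    config = np.zeros(tuple(s - 1 for s in volume.shape), dtype=np.uint8)
    for bit, (dx, dy, dz) in enumerate(corners):
        config |= inside[dx:dx + config.shape[0],
                         dy:dy + config.shape[1],
                         dz:dz + config.shape[2]] << bit
    return config
```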