
Computing Point Cloud Normals with Least Squares in C++: Principle and Complete Code


Principle: least-squares estimation of point cloud normals

For an arbitrary point $p(x, y, z)$, collect the points $\{p_i\}$ in a neighborhood of $p$ (here, its $k$ nearest neighbors).

Then find a plane $\Pi: a_0 x + a_1 y + a_2 z + 1 = 0$ that minimizes the sum of squared residuals of the neighbors. (Strictly, this is the algebraic residual rather than the true Euclidean point-to-plane distance, which would carry an extra factor $1/\sqrt{a_0^2 + a_1^2 + a_2^2}$; for normal estimation the minimizer's direction is what matters.) That is:

$$\min \sum_{i=1}^{n} \operatorname{dist}(p_i, \Pi)^2 = \min \sum_{i=1}^{n} \left( a_0 x_i + a_1 y_i + a_2 z_i + 1 \right)^2$$

Let $S = \sum_{i=1}^{n} \left( a_0 x_i + a_1 y_i + a_2 z_i + 1 \right)^2$.
For $S$ to be minimal we need $\dfrac{\partial S}{\partial a_0} = 0$, $\dfrac{\partial S}{\partial a_1} = 0$, $\dfrac{\partial S}{\partial a_2} = 0$,
that is:
$$\frac{\partial S}{\partial a_0} = \sum_{i=1}^{n} 2 \left( a_0 x_i + a_1 y_i + a_2 z_i + 1 \right) x_i = 0$$
$$\frac{\partial S}{\partial a_1} = \sum_{i=1}^{n} 2 \left( a_0 x_i + a_1 y_i + a_2 z_i + 1 \right) y_i = 0$$
$$\frac{\partial S}{\partial a_2} = \sum_{i=1}^{n} 2 \left( a_0 x_i + a_1 y_i + a_2 z_i + 1 \right) z_i = 0$$
Expanding the first equation gives $a_0 \sum x_i^2 + a_1 \sum x_i y_i + a_2 \sum x_i z_i + \sum x_i = 0$, and the other two expand analogously, so in matrix form:
$$\begin{bmatrix} \sum x_i^2 & \sum x_i y_i & \sum x_i z_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i z_i \\ \sum x_i z_i & \sum y_i z_i & \sum z_i^2 \end{bmatrix} \cdot \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} \sum x_i \\ \sum y_i \\ \sum z_i \end{bmatrix} = 0$$
Let
$$A = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots \\ x_n & y_n & z_n \end{bmatrix}; \qquad -b = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
Then:
$$A^{T} A \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} - A^{T} b = 0$$
Provided the neighbors are in general position (not all collinear), $A^{T} A$ is invertible, so
$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \left( A^{T} A \right)^{-1} A^{T} b$$
By analytic geometry, the coefficient vector $(a_0, a_1, a_2)$ is perpendicular to the plane $\Pi$; normalizing it yields the unit normal at $p$.

Code

The implementation uses PCL for the k-nearest-neighbor search and Eigen for the linear algebra:

#include <vector>
#include <Eigen/Dense>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

// Estimate a unit normal for every point in `cloud` by least-squares
// plane fitting over its k nearest neighbors.
void lsNormal(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud, int k_neighbors,
              pcl::PointCloud<pcl::Normal>::Ptr& normals)
{
	// 1. Build a k-d tree index over the input cloud.
	pcl::KdTreeFLANN<pcl::PointXYZ>::Ptr kdtree(new pcl::KdTreeFLANN<pcl::PointXYZ>);
	kdtree->setInputCloud(cloud);

	if (normals == nullptr)
		normals.reset(new pcl::PointCloud<pcl::Normal>);
	normals->clear();

	for (const auto& point : *cloud)
	{
		// 2. Find the k nearest neighbors of the current point.
		std::vector<int> KNNIndices(k_neighbors);          // indices of the k nearest neighbors
		std::vector<float> KNNSquareDistance(k_neighbors); // squared distances to those neighbors

		// 3. Least-squares fit per point; at least 3 neighbors are needed
		// for A^T A to be invertible.
		if (kdtree->nearestKSearch(point, k_neighbors, KNNIndices, KNNSquareDistance) >= 3)
		{
			// 3.1 Assemble A * X = b: each row of A is one neighbor (x_i, y_i, z_i),
			// and b is filled with -1, from a0*x + a1*y + a2*z + 1 = 0.
			const int n = static_cast<int>(KNNIndices.size());
			Eigen::MatrixXd A(n, 3);
			Eigen::VectorXd b(n);
			for (int idx = 0; idx < n; idx++)
			{
				const pcl::PointXYZ& q = cloud->points[KNNIndices[idx]];
				A(idx, 0) = q.x;
				A(idx, 1) = q.y;
				A(idx, 2) = q.z;
				b(idx) = -1.0;
			}

			// 3.2 Solve the normal equations: X = (A^T A)^{-1} A^T b.
			Eigen::Vector3d X = (A.transpose() * A).inverse() * (A.transpose() * b);

			// 3.3 Normalize to a unit normal and store it.
			double norm_xyz = X.norm();
			normals->push_back(pcl::Normal(X(0) / norm_xyz, X(1) / norm_xyz, X(2) / norm_xyz));
		}
		else
		{
			// The search failed or found too few neighbors: store a default normal.
			normals->push_back(pcl::Normal());
		}
	}
}
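One caveat worth noting: the parametrization $a_0 x + a_1 y + a_2 z + 1 = 0$ cannot represent a plane through the origin (its constant term is fixed at 1), and $A^{T}A$ becomes ill-conditioned as the neighborhood's best-fit plane approaches the origin. A common alternative, used for example by PCL's own NormalEstimation, centers the neighborhood and takes the eigenvector of the covariance matrix with the smallest eigenvalue:

```latex
\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i, \qquad
C = \frac{1}{n}\sum_{i=1}^{n} \left( p_i - \bar{p} \right)\left( p_i - \bar{p} \right)^{\mathsf{T}}, \qquad
\mathbf{n} = \underset{\lVert v \rVert = 1}{\arg\min}\; v^{\mathsf{T}} C v
```

The minimizer is the unit eigenvector of $C$ for its smallest eigenvalue. This formulation minimizes true squared point-to-plane distances for a plane through the centroid $\bar{p}$ and works for any plane orientation and position.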
Source: https://www.mshxw.com/it/870261.html (www.mshxw.com)