
Comparing openCv PnP with openGv PnP


Posted: 2018-09-09 19:58:27

Question:

I am trying to build a test project to compare the openCv solvePnP implementation with the openGv one.

The opencv method is detailed here:

https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp

and openGv here:

https://laurentkneip.github.io/opengv/page_how_to_use.html

Using the opencv sample code, I find a chessboard in an image and construct the matching 3d points. I run cv pnp, then set up the Gv solver. The cv pnp runs fine and prints these values:

```
//rotation [-0.003040771263293328, 0.9797142824436152, -0.2003763421317906;
//          0.0623096853748876, 0.2001735322445355, 0.977777101438374]
//translation [-12.06549797067309; -9.533070368412945; 37.6825295047483]
```

I tested by re-projecting the 3d points, and it looks good.

However, the Gv Pnp prints nan for all values. I have tried to follow the example code, but I must be making a mistake somewhere. The code is:

```cpp
#include <opencv2/opencv.hpp>
#include <opengv/absolute_pose/methods.hpp>
#include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>

using namespace opengv;

int main(int argc, char **argv)
{
    cv::Mat matImg = cv::imread("chess.jpg");
    cv::Size boardSize(8, 6);

    // Construct the chessboard model
    double squareSize = 2.80;
    std::vector<cv::Point3f> objectPoints;
    for (int i = 0; i < boardSize.height; i++)
        for (int j = 0; j < boardSize.width; j++)
            objectPoints.push_back(
                cv::Point3f(float(j * squareSize), float(i * squareSize), 0));

    cv::Mat rvec, tvec;
    cv::Mat cameraMatrix, distCoeffs;
    cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
    fs["cameraMatrix"] >> cameraMatrix;
    fs["dist_coeffs"] >> distCoeffs;

    // Find chessboard corners
    std::vector<cv::Point2f> imagePoints;
    bool found = cv::findChessboardCorners(matImg, boardSize, imagePoints,
                                           cv::CALIB_CB_FAST_CHECK);
    if (found)
    {
        cv::drawChessboardCorners(matImg, boardSize, cv::Mat(imagePoints), found);

        // SolvePnP
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
        drawAxis(matImg, cameraMatrix, distCoeffs, rvec, tvec, squareSize);
    }

    // cv to matrix
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    std::cout << "results from cv:" << R << tvec << std::endl;

    // START OPEN GV
    bearingVectors_t bearingVectors;
    points_t points;
    rotation_t rotation;

    // add points to the gv type
    for (size_t i = 0; i < objectPoints.size(); ++i)
    {
        point_t pnt;
        pnt.x() = objectPoints[i].x;
        pnt.y() = objectPoints[i].y;
        pnt.z() = objectPoints[i].z;
        points.push_back(pnt);
    }

    /*
       K is the common 3x3 camera matrix that you can compose with cx, cy, fx, and fy.
       You put the image point into homogeneous form (append a 1),
       multiply it with the inverse of K from the left, which gives you a
       normalized image point (a spatial direction vector).
       You normalize that to norm 1.
    */

    // to homogeneous
    std::vector<cv::Point3f> imagePointsH;
    cv::convertPointsToHomogeneous(imagePoints, imagePointsH);

    // multiply by K.inv()
    for (size_t i = 0; i < imagePointsH.size(); i++)
    {
        cv::Point3f pt = imagePointsH[i];
        cv::Mat ptMat(3, 1, cameraMatrix.type());
        ptMat.at<double>(0, 0) = pt.x;
        ptMat.at<double>(1, 0) = pt.y;
        ptMat.at<double>(2, 0) = pt.z;
        cv::Mat dstMat = cameraMatrix.inv() * ptMat;

        // store as bearing vector
        bearingVector_t bvec;
        bvec.x() = dstMat.at<double>(0, 0);
        bvec.y() = dstMat.at<double>(1, 0);
        bvec.z() = dstMat.at<double>(2, 0);
        bvec.normalize();
        bearingVectors.push_back(bvec);
    }

    // create a central absolute adapter
    absolute_pose::CentralAbsoluteAdapter adapter(bearingVectors, points, rotation);

    size_t iterations = 50;
    std::cout << "running epnp (all correspondences)" << std::endl;
    transformation_t epnp_transformation;
    for (size_t i = 0; i < iterations; i++)
        epnp_transformation = absolute_pose::epnp(adapter);

    std::cout << "results from epnp algorithm:" << std::endl;
    std::cout << epnp_transformation << std::endl << std::endl;

    return 0;
}
```

Where am I going wrong in setting up the openGv Pnp solver?


Answer 1:

Many years later, I ran into the same problem and solved it. To convert openCv points to openGV bearing vectors, you can do this:

```cpp
bearingVectors_t bearingVectors;
std::vector<cv::Point2f> dd2;
const int N1 = static_cast<int>(dd2.size());
cv::Mat points1_mat = cv::Mat(dd2).reshape(1);

// construct homogeneous points (append a column of ones)
cv::Mat ones_col1 = cv::Mat::ones(N1, 1, CV_32F);
cv::hconcat(points1_mat, ones_col1, points1_mat);

// undistort / rectify the points
cv::Mat points1_rect = points1_mat * cameraMatrix.inv();

// compute bearings
points2bearings3(points1_rect, &bearingVectors);
```

And use this function for the final conversion:

```cpp
// Convert a set of points to bearings.
// points: Matrix of size Nx3 with the set of points.
// bearings: Vector of bearings.
void points2bearings3(const cv::Mat& points,
                      opengv::bearingVectors_t* bearings)
{
    double l;
    cv::Vec3f p;
    opengv::bearingVector_t bearing;
    for (int i = 0; i < points.rows; ++i)
    {
        p = cv::Vec3f(points.row(i));
        l = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        for (int j = 0; j < 3; ++j) bearing[j] = p[j] / l;
        bearings->push_back(bearing);
    }
}
```

