for Robot Artificial Intelligence

Meaning of Gradient and Gradient Methods (Convex Optimization)


In hiking terms, the gradient is checking whether the slope at my current position goes downhill or uphill.

Because a hiking trail winds around, unless we have a map the only way to check whether we are heading toward the desired destination is the slope (the current value does not directly correspond to the goal value).

That is why, as we walk, we repeatedly check whether the path is going downhill or uphill (iteration); this is the task of finding the global minimum (finding the destination).

Similarly, the real world presents a variety of such problems.

As in ICP or Bundle Adjustment, when estimating T and R from matched points, we find the unknowns by minimizing a cost function over the error (measured value minus predicted value) as far as possible.

Gradient descent is widely used to find the optimal values of such unknowns.

Gradient descent is needed because, in the real world, the problems we solve are nonlinear.

However, finding the minimum using all of the data is a very costly operation. There are various methods for this; commonly used ones include:

stochastic gradient descent (SGD) : instead of computing the gradient over the whole dataset, it picks a random sample (or small subset) of the data at each step and uses its gradient to search for the global minimum.

In other words, it is a method that works on (mini-)batches of the data.

Vanilla Gradient Descent : vanilla gradient descent means the basic gradient descent algorithm without any bells or whistles. There are many variants of gradient descent. In the usual gradient descent (also known as batch gradient descent or vanilla gradient descent), the gradient is computed as the average of the gradients of the individual datapoints.

Vanilla algorithm meaning : in computer science, vanilla is the term used when computer software (and sometimes other computing-related systems such as hardware or algorithms) is not customized from its original form, i.e., it is used without any customizations or updates applied.
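To make the distinction concrete, here is a minimal toy sketch in C++ with Eigen (the data, step size, and iteration count are made up for illustration): vanilla (batch) gradient descent averages the gradient over all datapoints, while SGD uses one randomly sampled datapoint per step.

#include <Eigen/Dense>
#include <cstdlib>
#include <iostream>

// Toy problem: fit y ~ w * x by minimizing f(w) = 1/(2N) * sum_i (w*x_i - y_i)^2.
int main() {
  Eigen::VectorXd x(4), y(4);
  x << 1, 2, 3, 4;
  y << 2.1, 3.9, 6.2, 7.8;  // made-up data, roughly y = 2x

  double wBatch = 0.0, wSgd = 0.0;
  const double step = 0.01;

  for (int k = 0; k < 2000; ++k) {
    // Vanilla (batch) GD: gradient is the average over all datapoints.
    const double gradBatch = ((wBatch * x - y).cwiseProduct(x)).mean();
    wBatch -= step * gradBatch;

    // SGD: gradient of a single randomly chosen datapoint.
    const int i = std::rand() % x.size();
    const double gradOne = (wSgd * x(i) - y(i)) * x(i);
    wSgd -= step * gradOne;
  }
  std::cout << "batch w: " << wBatch << ", sgd w: " << wSgd << std::endl;  // both near 2
  return 0;
}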

The main optimization approach is to differentiate the error function (Jacobian and Hessian matrices), convert the problem into Gauss-Newton (descent) form, and update x with delta x (over pose and landmarks, or pose only) for K iterations, driving the cost toward a minimum with each applied delta x.
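To make that loop concrete, here is a hedged, minimal Gauss-Newton sketch (C++ with Eigen, on a toy one-parameter residual rather than a full ICP/BA problem): each iteration solves the normal equations J^T J * delta = -J^T e and applies delta to the unknown.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Toy Gauss-Newton: minimize sum_i (exp(a*x_i) - y_i)^2 over the scalar unknown a.
int main() {
  Eigen::VectorXd x(5), y(5);
  x << 0.0, 0.5, 1.0, 1.5, 2.0;
  const double aTrue = 0.8;
  for (int i = 0; i < 5; ++i) y(i) = std::exp(aTrue * x(i));  // synthetic measurements

  double a = 0.0;  // initial guess
  for (int k = 0; k < 10; ++k) {  // iteration K
    Eigen::VectorXd e(5), J(5);
    for (int i = 0; i < 5; ++i) {
      e(i) = std::exp(a * x(i)) - y(i);  // residual (prediction - measurement)
      J(i) = x(i) * std::exp(a * x(i));  // d e_i / d a
    }
    const double delta = -J.dot(e) / J.dot(J);  // (J^T J)^{-1} * (-J^T e) for a scalar unknown
    a += delta;  // apply delta x to x
    if (std::abs(delta) < 1e-12) break;
  }
  std::cout << "estimated a: " << a << std::endl;  // converges to 0.8
  return 0;
}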

Various methods

https://darkpgmr.tistory.com/142?category=761008

https://darkpgmr.tistory.com/133

https://darkpgmr.tistory.com/56

https://darkpgmr.tistory.com/58


Depth Image to PCL


Reference

https://guyuehome.com/35198


Covariance Matrix Propagation and Information Matrix Propagation: relationship with the Gaussian (Normal) Distribution, Optimization, and Probabilistic Notions


The optimization approach is to differentiate the error function (Jacobian and Hessian matrices), convert the problem into Gauss-Newton (descent) form, and update x with delta x (over pose and landmarks, or pose only) for K iterations, driving the cost toward a minimum with each applied delta x.
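A minimal sketch of the relationship named in the title, under a first-order (linearized) assumption: a Gaussian's covariance propagates through a Jacobian J as Sigma' = J * Sigma * J^T, and the information matrix is its inverse, Lambda = Sigma'^{-1}. The numbers below are made up for illustration.

#include <Eigen/Dense>
#include <iostream>

int main() {
  Eigen::Matrix2d Sigma;
  Sigma << 0.04, 0.01,
           0.01, 0.09;  // input covariance (illustrative values)

  Eigen::Matrix2d J;
  J << 1.0, 0.5,
       0.0, 1.0;  // Jacobian of the map y = f(x) at the linearization point

  const Eigen::Matrix2d SigmaOut = J * Sigma * J.transpose();  // propagated covariance
  const Eigen::Matrix2d Lambda = SigmaOut.inverse();           // information matrix

  std::cout << "Sigma':\n" << SigmaOut << "\nLambda:\n" << Lambda << std::endl;
  return 0;
}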

Reference

https://present5.com/siddharth-choudhary-bundle-adjustment-a-tutorial/

http://www.cv-learn.com/20210313-ba/


Robot-Centric Elevation Mapping with Uncertainty Estimates (using sensor variance and motion variance)


  • Goal : find a representation of the terrain that takes the uncertainties of the range measurements and the state estimation into account

  • Approach : the mapping approach fully incorporates the distance sensor measurement uncertainties and the six-dimensional pose covariance of the robot.

  • Contribution : the uncertainty of the robot’s position and orientation is reflected in the map by linearly growing the variance of the height estimate based on the accumulated distance and angle.

  • Method : distance measurements are taken relative to the robot, and when the robot moves, the entire elevation map is updated with information about the robot's motion. At any time, the robot-centric elevation map is the system's estimate of the shape of the terrain from a local perspective. The region ahead of the robot typically has the highest precision, as it is constantly updated with new measurements from the forward-looking distance sensor. On the other hand, regions that are out of the sensor's field of view (below or behind the robot) have decreased certainty due to drift in the robot's relative pose estimation.

  • Specific : in this project, we can decide whether or not to apply the motion prediction (and the measurement variance update can be enabled as well).

Measurement Update

To obtain the variance of the height measurement, we derive the Jacobians for the sensor measurement (J_s) and the sensor frame rotation (J_q).
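Concretely, this is the error propagation law implemented at the end of computeVariances below: for every point, the height variance is

sigma_p^2 = J_q * Sigma_q * J_q^T + J_s * Sigma_S * J_s^T

where Sigma_S is the sensor noise covariance from the sensor model and Sigma_q is the 3x3 rotation block of the robot pose covariance.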

  • Note
    • Map : odom
    • base : base_link
    • sensor : camera_depth_optical_frame

1. Find the transformation matrices

  • Transformation Matrix : [sensor to map], [base to sensor], [map to base]
bool SensorProcessorBase::updateTransformations(const ros::Time& timeStamp) {
  try {
    transformListener_.waitForTransform(sensorFrameId_, generalParameters_.mapFrameId_, timeStamp, ros::Duration(1.0));

    tf::StampedTransform transformTf;
    transformListener_.lookupTransform(generalParameters_.mapFrameId_, sensorFrameId_, timeStamp, transformTf);
    poseTFToEigen(transformTf, transformationSensorToMap_);

    transformListener_.lookupTransform(generalParameters_.robotBaseFrameId_, sensorFrameId_, timeStamp,
                                       transformTf);  // TODO(max): Why wrong direction?
    Eigen::Affine3d transform;
    poseTFToEigen(transformTf, transform);
    rotationBaseToSensor_.setMatrix(transform.rotation().matrix());
    translationBaseToSensorInBaseFrame_.toImplementation() = transform.translation();

    transformListener_.lookupTransform(generalParameters_.mapFrameId_, generalParameters_.robotBaseFrameId_, timeStamp,
                                       transformTf);  // TODO(max): Why wrong direction?
    poseTFToEigen(transformTf, transform);
    rotationMapToBase_.setMatrix(transform.rotation().matrix());
    translationMapToBaseInMapFrame_.toImplementation() = transform.translation();

    if (!firstTfAvailable_) {
      firstTfAvailable_ = true;
    }

    return true;
  } catch (tf::TransformException& ex) {
    if (!firstTfAvailable_) {
      return false;
    }
    ROS_ERROR("%s", ex.what());
    return false;
  }
}

  • Transform the point cloud into the sensor frame and the map frame

  • Remove points outside the limits (in the map frame)

bool SensorProcessorBase::process(const PointCloudType::ConstPtr pointCloudInput, const Eigen::Matrix<double, 6, 6>& robotPoseCovariance,
                                  const PointCloudType::Ptr pointCloudMapFrame, Eigen::VectorXf& variances, std::string sensorFrame) {
  sensorFrameId_ = sensorFrame;
  ROS_DEBUG("Sensor Processor processing for frame %s", sensorFrameId_.c_str());

  // Update transformation at timestamp of pointcloud
  ros::Time timeStamp;
  timeStamp.fromNSec(1000 * pointCloudInput->header.stamp);
  if (!updateTransformations(timeStamp)) {
    return false;
  }

  // Transform into sensor frame.
  PointCloudType::Ptr pointCloudSensorFrame(new PointCloudType);

  // pointcloudinput frame id can be same as sensorframeid_
  transformPointCloud(pointCloudInput, pointCloudSensorFrame, sensorFrameId_);

  // Remove Nans (optional voxel grid filter)
  filterPointCloud(pointCloudSensorFrame);

  // Specific filtering per sensor type
  filterPointCloudSensorType(pointCloudSensorFrame);

  // Remove outside limits in map frame
  if (!transformPointCloud(pointCloudSensorFrame, pointCloudMapFrame, generalParameters_.mapFrameId_)) {
    return false;
  }
  std::vector<PointCloudType::Ptr> pointClouds({pointCloudMapFrame, pointCloudSensorFrame});
  removePointsOutsideLimits(pointCloudMapFrame, pointClouds);

  // Compute variances
  return computeVariances(pointCloudSensorFrame, robotPoseCovariance, variances);
}

2. Measurement Update (Variance Update)

  • Projection vector (P) : selects the height (z) component of a point, i.e., projects a point in xyz space onto the z axis

  • Sensor Jacobian (J_s = projectionVector * rotationMapToBase^T * rotationBaseToSensor^T) : maps the sensor measurement uncertainty into the map-frame height

  • Rotation variance (Sigma_q) : the bottom-right 3x3 block of the robot pose covariance

  • Sensor frame rotation Jacobian (J_q) : the skew matrix turns a vector into its cross-product matrix, which linearizes the effect of a small rotation on a point

bool StereoSensorProcessor::computeVariances(const PointCloudType::ConstPtr pointCloud,
                                             const Eigen::Matrix<double, 6, 6>& robotPoseCovariance, Eigen::VectorXf& variances) {
  variances.resize(pointCloud->size());

  // Projection vector (P).
  const Eigen::RowVector3f projectionVector = Eigen::RowVector3f::UnitZ();

  // Sensor Jacobian (J_s).
  const Eigen::RowVector3f sensorJacobian =
      projectionVector * (rotationMapToBase_.transposed() * rotationBaseToSensor_.transposed()).toImplementation().cast<float>();

  // Robot rotation covariance matrix (Sigma_q).
  Eigen::Matrix3f rotationVariance = robotPoseCovariance.bottomRightCorner(3, 3).cast<float>();

  // Preparations for robot rotation Jacobian (J_q) to minimize computation for every point in
  // point cloud.
  const Eigen::Matrix3f C_BM_transpose = rotationMapToBase_.transposed().toImplementation().cast<float>();
  const Eigen::RowVector3f P_mul_C_BM_transpose = projectionVector * C_BM_transpose;
  const Eigen::Matrix3f C_SB_transpose = rotationBaseToSensor_.transposed().toImplementation().cast<float>();
  const Eigen::Matrix3f B_r_BS_skew =
      kindr::getSkewMatrixFromVector(Eigen::Vector3f(translationBaseToSensorInBaseFrame_.toImplementation().cast<float>()));

  for (unsigned int i = 0; i < pointCloud->size(); ++i) {
    // For every point in point cloud.

    // Preparation.
    pcl::PointXYZRGBConfidenceRatio point = pointCloud->points[i];
    double disparity = sensorParameters_.at("depth_to_disparity_factor") / point.z;  // NOLINT(cppcoreguidelines-pro-type-union-access)
    Eigen::Vector3f pointVector(point.x, point.y, point.z);  // S_r_SP // NOLINT(cppcoreguidelines-pro-type-union-access)
    float heightVariance = 0.0;                              // sigma_p

    // Measurement distance.
    float measurementDistance = pointVector.norm();

    // Compute sensor covariance matrix (Sigma_S) with sensor model.
    float varianceNormal =
        pow(sensorParameters_.at("depth_to_disparity_factor") / pow(disparity, 2), 2) *
        ((sensorParameters_.at("p_5") * disparity + sensorParameters_.at("p_2")) *
             sqrt(pow(sensorParameters_.at("p_3") * disparity + sensorParameters_.at("p_4") - getJ(i), 2) + pow(240 - getI(i), 2)) +
         sensorParameters_.at("p_1"));
    float varianceLateral = pow(sensorParameters_.at("lateral_factor") * measurementDistance, 2);
    Eigen::Matrix3f sensorVariance = Eigen::Matrix3f::Zero();
    sensorVariance.diagonal() << varianceLateral, varianceLateral, varianceNormal;

    // Robot rotation Jacobian (J_q).
    const Eigen::Matrix3f C_SB_transpose_times_S_r_SP_skew = kindr::getSkewMatrixFromVector(Eigen::Vector3f(C_SB_transpose * pointVector));
    Eigen::RowVector3f rotationJacobian = P_mul_C_BM_transpose * (C_SB_transpose_times_S_r_SP_skew + B_r_BS_skew);

    // Measurement variance for map (error propagation law).
    heightVariance = rotationJacobian * rotationVariance * rotationJacobian.transpose();
    heightVariance += sensorJacobian * sensorVariance * sensorJacobian.transpose();

    // Copy to list.
    variances(i) = heightVariance;
  }

  return true;
}

Update the variance of the existing height map.
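That per-cell fusion can be pictured as a one-dimensional Kalman update; here is a hedged sketch of the idea (the function name is illustrative, and this is not the package's exact code):

// Sketch: fuse a new height measurement (heightMeas, varMeas) into an existing
// cell estimate (height, variance) using a 1D Kalman update. The fused height is
// the variance-weighted mean; the fused variance is smaller than either input.
void fuseCell(float& height, float& variance, float heightMeas, float varMeas) {
  height = (varMeas * height + variance * heightMeas) / (variance + varMeas);
  variance = (variance * varMeas) / (variance + varMeas);
}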

3. Motion Model Update

  • Apply a motion update, based on the robot's pose, to the elevation map.

  • Data reading

// Get robot pose at requested time.
boost::shared_ptr<geometry_msgs::PoseWithCovarianceStamped const> poseMessage = robotPoseCache_.getElemBeforeTime(time);
if (!poseMessage) {
  // Tell the user that either for the timestamp no pose is available or that the buffer is possibly empty
  if (robotPoseCache_.getOldestTime().toSec() > lastPointCloudUpdateTime_.toSec()) {
    ROS_ERROR("The oldest pose available is at %f, requested pose at %f", robotPoseCache_.getOldestTime().toSec(),
              lastPointCloudUpdateTime_.toSec());
  } else {
    ROS_ERROR("Could not get pose information from robot for time %f. Buffer empty?", lastPointCloudUpdateTime_.toSec());
  }
  return false;
}

kindr::HomTransformQuatD robotPose;
kindr_ros::convertFromRosGeometryMsg(poseMessage->pose.pose, robotPose);
// Covariance is stored in row-major in ROS: http://docs.ros.org/api/geometry_msgs/html/msg/PoseWithCovariance.html
Eigen::Matrix<double, 6, 6> robotPoseCovariance =
    Eigen::Map<const Eigen::Matrix<double, 6, 6, Eigen::RowMajor>>(poseMessage->pose.covariance.data(), 6, 6);

// Compute map variance update from motion prediction.
robotMotionMapUpdater_.update(map_, robotPose, robotPoseCovariance, time);

return true;
  • Reduce the covariance matrix between two robot poses

  • Meaning of the pitch tangent : it enters the yaw Jacobian update (see the formula after this list)

    • it is very useful for finding the covariance matrix of the vehicle's rotational motion
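The yaw Jacobian that this tangent term enters (equation (A.5) in computeReducedCovariance below) is

J_yaw = [ cos(yaw) * tan(pitch), sin(yaw) * tan(pitch), 1 ]

which maps the full rotational uncertainty onto the single yaw degree of freedom kept in the reduced covariance.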


bool RobotMotionMapUpdater::update(ElevationMap& map, const Pose& robotPose, const PoseCovariance& robotPoseCovariance,
                                   const ros::Time& time) {
  const PoseCovariance robotPoseCovarianceScaled = covarianceScale_ * robotPoseCovariance;

  // Check if update necessary.
  if (previousUpdateTime_ == time) {
    return false;
  }

  // Initialize update data.
  grid_map::Size size = map.getRawGridMap().getSize();
  grid_map::Matrix varianceUpdate(size(0), size(1));  // TODO(max): Make as grid map?
  grid_map::Matrix horizontalVarianceUpdateX(size(0), size(1));
  grid_map::Matrix horizontalVarianceUpdateY(size(0), size(1));
  grid_map::Matrix horizontalVarianceUpdateXY(size(0), size(1));

  // Relative covariance matrix between two robot poses.
  ReducedCovariance reducedCovariance;
  computeReducedCovariance(robotPose, robotPoseCovarianceScaled, reducedCovariance);
  ReducedCovariance relativeCovariance;
  computeRelativeCovariance(robotPose, reducedCovariance, relativeCovariance);

  // Retrieve covariances for (24).
  Covariance positionCovariance = relativeCovariance.topLeftCorner<3, 3>();
  Covariance rotationCovariance(Covariance::Zero());
  rotationCovariance(2, 2) = relativeCovariance(3, 3);

  // Map to robot pose rotation (R_B_M = R_I_B^T * R_I_M).
  kindr::RotationMatrixPD mapToRobotRotation = kindr::RotationMatrixPD(robotPose.getRotation().inverted() * map.getPose().getRotation());
  kindr::RotationMatrixPD mapToPreviousRobotRotationInverted =
      kindr::RotationMatrixPD(previousRobotPose_.getRotation().inverted() * map.getPose().getRotation()).inverted();

  // Translation Jacobian (J_r) (25).
  Eigen::Matrix3d translationJacobian = -mapToRobotRotation.matrix().transpose();

  // Translation variance update (for all points the same).
  Eigen::Vector3f translationVarianceUpdate =
      (translationJacobian * positionCovariance * translationJacobian.transpose()).diagonal().cast<float>();

  // Map-robot relative position (M_r_Bk_M, for all points the same).
  // Preparation for (25): M_r_BP = R_I_M^T (I_r_I_M - I_r_I_B) + M_r_M_P
  // R_I_M^T (I_r_I_M - I_r_I_B):
  const kindr::Position3D positionRobotToMap =
      map.getPose().getRotation().inverseRotate(map.getPose().getPosition() - previousRobotPose_.getPosition());

  auto& heightLayer = map.getRawGridMap()["elevation"];

  // For each cell in map. // TODO(max): Change to new iterator.
  for (unsigned int i = 0; i < static_cast<unsigned int>(size(0)); ++i) {
    for (unsigned int j = 0; j < static_cast<unsigned int>(size(1)); ++j) {
      kindr::Position3D cellPosition;  // M_r_MP

      const auto height = heightLayer(i, j);
      if (std::isfinite(height)) {
        grid_map::Position position;
        map.getRawGridMap().getPosition({i, j}, position);
        cellPosition = {position.x(), position.y(), height};

        // Rotation Jacobian J_R (25)
        const Eigen::Matrix3d rotationJacobian =
            -kindr::getSkewMatrixFromVector((positionRobotToMap + cellPosition).vector()) * mapToPreviousRobotRotationInverted.matrix();

        // Rotation variance update.
        const Eigen::Matrix2f rotationVarianceUpdate =
            (rotationJacobian * rotationCovariance * rotationJacobian.transpose()).topLeftCorner<2, 2>().cast<float>();

        // Variance update.
        varianceUpdate(i, j) = translationVarianceUpdate.z();
        horizontalVarianceUpdateX(i, j) = translationVarianceUpdate.x() + rotationVarianceUpdate(0, 0);
        horizontalVarianceUpdateY(i, j) = translationVarianceUpdate.y() + rotationVarianceUpdate(1, 1);
        horizontalVarianceUpdateXY(i, j) = rotationVarianceUpdate(0, 1);
      } else {
        // Cell invalid. // TODO(max): Change to new functions
        varianceUpdate(i, j) = std::numeric_limits<float>::infinity();
        horizontalVarianceUpdateX(i, j) = std::numeric_limits<float>::infinity();
        horizontalVarianceUpdateY(i, j) = std::numeric_limits<float>::infinity();
        horizontalVarianceUpdateXY(i, j) = std::numeric_limits<float>::infinity();
      }
    }
  }

  map.update(varianceUpdate, horizontalVarianceUpdateX, horizontalVarianceUpdateY, horizontalVarianceUpdateXY, time);
  previousReducedCovariance_ = reducedCovariance;
  previousRobotPose_ = robotPose;
  return true;
}

bool RobotMotionMapUpdater::computeReducedCovariance(const Pose& robotPose, const PoseCovariance& robotPoseCovariance,
                                                     ReducedCovariance& reducedCovariance) {
  // Get augmented Jacobian (A.4).
  // Euler angles are the three angles introduced by Leonhard Euler to describe the orientation of
  // a rigid body in 3D space, i.e., a coordinate chart of the 3D rotation group SO(3). Any
  // orientation of a rigid body can be reached through three successive rotations by these angles.
  kindr::EulerAnglesZyxPD eulerAngles(robotPose.getRotation());  // get Euler angles
  double tanOfPitch = tan(eulerAngles.pitch());
  // (A.5)
  Eigen::Matrix<double, 1, 3> yawJacobian(cos(eulerAngles.yaw()) * tanOfPitch, sin(eulerAngles.yaw()) * tanOfPitch, 1.0);
  Eigen::Matrix<double, 4, 6> jacobian;
  jacobian.setZero();
  jacobian.topLeftCorner(3, 3).setIdentity();
  jacobian.bottomRightCorner(1, 3) = yawJacobian;

  // (A.3)
  reducedCovariance = jacobian * robotPoseCovariance * jacobian.transpose();
  return true;
}

bool RobotMotionMapUpdater::computeRelativeCovariance(const Pose& robotPose, const ReducedCovariance& reducedCovariance,
                                                      ReducedCovariance& relativeCovariance) {
  // Rotation matrix of z-align frame R_I_tilde_B.
  const kindr::RotationVectorPD rotationVector_I_B(robotPose.getRotation());
  const kindr::RotationVectorPD rotationVector_I_tilde_B(0.0, 0.0, rotationVector_I_B.vector().z());
  const kindr::RotationMatrixPD R_I_tilde_B(rotationVector_I_tilde_B);

  // Compute translational velocity from finite differences.
  kindr::Position3D positionInRobotFrame =
      previousRobotPose_.getRotation().inverseRotate(robotPose.getPosition() - previousRobotPose_.getPosition());
  kindr::Velocity3D v_Delta_t(positionInRobotFrame);  // (A.8)

  // Jacobian F (A.8).
  Jacobian F;
  F.setIdentity();
  // TODO(max): Why does Eigen::Vector3d::UnitZ() not work?
  F.topRightCorner(3, 1) = kindr::getSkewMatrixFromVector(Eigen::Vector3d(0.0, 0.0, 1.0)) * R_I_tilde_B.matrix() * v_Delta_t.vector();

  // Jacobian inv(G) * Delta t (A.14).
  Jacobian inv_G_Delta_t;
  inv_G_Delta_t.setZero();
  inv_G_Delta_t(3, 3) = 1.0;
  Jacobian inv_G_transpose_Delta_t(inv_G_Delta_t);
  inv_G_Delta_t.topLeftCorner(3, 3) = R_I_tilde_B.matrix().transpose();
  inv_G_transpose_Delta_t.topLeftCorner(3, 3) = R_I_tilde_B.matrix();

  // Relative (reduced) robot covariance (A.13).
  relativeCovariance = inv_G_Delta_t * (reducedCovariance - F * previousReducedCovariance_ * F.transpose()) * inv_G_transpose_Delta_t;

  return true;
}
  • In short: the map is updated with the sensor measurements, and the variance arising between the previous and the current robot pose, inv(G) * (current reduced covariance - F * previous reduced covariance * F^T) * inv(G)^T, is also written into the map, so the map is updated locally by fusing sensor and motion information.


Covariance Propagation meaning


In statistics, propagation of uncertainty (also called propagation of error) is the effect that the uncertainties, or errors, of variables have on the uncertainty of a function of those variables. If a variable is a laboratory measurement, it carries uncertainty due to measurement limits such as instrument precision, and that uncertainty propagates into every other quantity computed from it.

Errors inevitably arise whenever we run an experiment, and we quite often need to add or multiply values that carry such errors.

In many cases the following question comes up. Suppose we measure a quantity and find that it has mean x and error δx; with concrete numbers, say x is 10 and δx is 1, so x = x ± δx = 10 ± 1.

The question is then: how do we compute the error of x² or of x⁻¹, and, for another value with its own error, y = y ± δy, how do we evaluate the total error of an operation such as x + y?

Computing this is called error propagation, in the sense that the errors are carried along through the operations.
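For independent errors, the standard first-order rules answer exactly these questions: δ(x²) = |2x|·δx and δ(x+y) = sqrt(δx² + δy²). With the numbers above, x² = 100 ± 20; and if, for illustration, y = 5 ± 1, then x + y = 15 ± √2 ≈ 15 ± 1.4.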

So what, then, is covariance propagation?

Past covariance values (under the assumption that error exists) are propagated to later covariances, so the variance encoded in the covariance keeps growing. This growth needs to be corrected (using a Kalman filter or similar).
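As a minimal sketch of this grow-then-correct cycle, assuming a plain linear Kalman filter (F, Q, H, R and the function names are illustrative, not from any particular library):

#include <Eigen/Dense>

// Prediction: the covariance grows as it is propagated through the motion model F
// and inflated by the process noise Q (this is covariance propagation).
void predictCovariance(Eigen::Matrix3d& P, const Eigen::Matrix3d& F, const Eigen::Matrix3d& Q) {
  P = F * P * F.transpose() + Q;
}

// Correction: a Kalman measurement update (measurement Jacobian H, noise R)
// shrinks the covariance again, compensating for the growth above.
void correctCovariance(Eigen::Matrix3d& P, const Eigen::Matrix<double, 1, 3>& H, double R) {
  const Eigen::Vector3d K = P * H.transpose() / ((H * P * H.transpose())(0, 0) + R);  // Kalman gain
  P = (Eigen::Matrix3d::Identity() - K * H) * P;
}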
