for Robot Artificial Intelligence

Meaning of pointer const declarations


const int* p : the pointer variable p can be reassigned to a different address, but the value it points to cannot be modified through it.

int* const p : the pointer variable p keeps only the address it was initialized with and cannot be reassigned, but the value it points to can be modified.

const int* const p : the pointer variable p cannot be reassigned to another address, and the value it points to cannot be modified either.


IMU Pre-integration


http://rpg.ifi.uzh.ch/docs/teaching/2016/13_visual_inertial_fusion.pdf

IMU (Inertial Measurement Unit)

  • Angular Velocity
  • Linear Accelerations

Why IMU?

  • Monocular Vision is scale ambiguous
  • Pure vision is not robust enough
    • Low texture (few features)
    • High dynamic range
    • High-speed motion
  • Robustness is a critical issue: Tesla accident, "the autopilot sensors on the Model S failed to distinguish a white tractor-trailer crossing the highway against a bright sky"

Pure IMU integration will lead to large drift (especially cheap IMUs)

  • Intuition
    • Integration of angular velocity to get orientation: error proportional to t
    • Double integration of acceleration to get position: if there is a bias in acceleration, the position error is proportional to t²
    • Worse, the actual position error also depends on the error of orientation.

In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and then incorporating estimates of speed, heading direction, and course over the elapsed time.

IMU pre-integration, in short, means using the IMU as odometry (for the camera pose), while the camera is used for things like map construction, keyframe extraction, and loop closure. This reduces the CPU computation spent obtaining the camera pose through the PnP algorithm.

NOTATION

IMU Preintegration: this is used to obtain the frame pose from the IMU.

Many IMU preintegration implementations use midpoint integration (interpolation).

Output

PVQ: position (pose), velocity, quaternion (orientation).

Midpoint integration is something I still need to study.


PSO vs Particle Filter


Particle Swarm Optimization (PSO) is a versatile population-based optimization technique, in many respects similar to evolutionary algorithms. PSO has its origins in swarm-intelligence algorithms, which are concerned with the design of intelligent multi-agent systems by taking inspiration from the collective behavior of social insects such as ants, termites, bees, and wasps, as well as from other animal societies such as flocks of birds or schools of fish. In the PSO method, the particles that represent potential solutions move around in the phase space with a velocity updated by the particle’s own experience and the experience of the particle’s neighbors or of the whole swarm.

Particle Filter (PF) performs sequential Monte Carlo estimation (PSO is not a Monte Carlo method) based on a particle representation of probability densities: the posterior density function is represented by a set of random samples with associated weights, and estimates are computed from these samples and weights. In this method, particles with high weights propagate and particles with the smallest weights are eliminated by a re-sampling procedure.

In short, PSO and PF share only the term “particle”, but it means something different in each algorithm: PSO is a swarm-intelligence algorithm, while PF is an algorithm from probability theory.

Reference: https://blog.csdn.net/daaikuaichuan/article/details/81382794

https://www.youtube.com/watch?v=JhgDMAm-imI

A simple way to apply a model: feed the observed data x, y into the PSO model and iterate the particles according to their conditions (particle velocity, global goal, local goal) to find the accurate X, Y of the observed data: within the camera image if the data is an image, or, for a map, the location on the map that gives the global minimum.

Existing 2D grid or 3D voxel map

Place the model whose pose we want to find

Put the pose in the global frame into the formula, and update it iteratively using the three particle-velocity terms to obtain the position.

https://blog.csdn.net/TOMJJY/article/details/104403381


Factor Graph


https://gisbi-kim.github.io/blog/2021/03/04/slambackend-1.html

http://www.cs.cmu.edu/~kaess/pub/Dellaert17fnt.pdf

https://opensam.org/opensam.org/2020/05/26/factor-graphs.html

http://www.cv-learn.com/20210405-visloc-my-thoughts/

https://gtsam.org/tutorials/intro.html

Factor graphs are a family of probabilistic graphical models; other examples are Bayesian networks and Markov random fields, which are well known from the statistical modeling and machine learning literature.

In SLAM we want to characterize our knowledge about the unknowns X, in this case robot poses and the unknown landmark positions, when given a set of observed measurements Z. Using the language of Bayesian probability, this is simply the conditional density

p(X | Z), (1.2)

and obtaining a description like this is called probabilistic inference

Generative Modeling: models that predict the next word in a sequence are typically generative models.

A Bayesian network [163], or Bayes net, is a directed graphical model where the nodes represent variables θj.

Per the general definition of Bayes nets, the joint density p(X,Z) = p(x1,x2,x3,l1,l2,z1,z2,z3,z4) is obtained as a product of conditional densities:

Note that the graph structure makes an explicit statement about data association, i.e., for every measurement zk we know which landmark it is a measurement of. While it is possible to model unknown data association in a graphical model context, in this text we assume that data association is given to us as the result of a pre-processing step.

This could be derived from odometry measurements, in which case we would proceed exactly as described above. Alternatively, such a motion model could arise from known control inputs ut. In practice, we often use a conditional Gaussian assumption,

p(x_{t+1} | x_t, u_t) = (1 / √|2πQ|) exp{ −(1/2) ‖g(x_t, u_t) − x_{t+1}‖²_Q }

factorization(因素分解)

For example, steps 1 and 2 above can be switched without consequence. Also, we can generate the pose measurement z1 at any time after x1 is generated, etc

In robotics we are typically interested in the unknown state variables X, such as poses and/or landmarks, given the measurements Z. The most often used estimator for these unknown state variables X is the maximum a posteriori or MAP estimate, so named because it maximizes the posterior density p(X | Z) of the states X given the measurements Z.

We see that the odometry factors form a prominent, chain-like backbone, whereas off to the sides binary likelihood factors are connected to the 20 or so landmarks.

http://computing.or.kr/14693/joint-probability-distribution%EA%B2%B0%ED%95%A9-%ED%99%95%EB%A5%A0-%EB%B6%84%ED%8F%AC/

https://www.youtube.com/playlist?list=PLOJ3GF0x2_eWtGXfZ5Ne1Jul5L-6Q76Sz

Joint probability is the probability of events occurring at the same time.

My rough understanding: nodes (robot poses, plus landmarks obtained from camera or LiDAR sensors) are connected by factors; in this case the factors between nodes are odometry, and the ones between nodes and landmarks are measurements, and the method comes down to handling these well. Since it is probabilistic, with belief propagation as its basic framework, I understand it as a technique that probabilistically estimates the current pose from the previous state, or estimates the measurement values; but I still need to study it more thoroughly.


How to Debug Easily [C++, ROS]


C++ DEBUG

  1. First, build the source code into an executable with debug symbols (using CMake or a compiler).

  2. Go to Run & Debug
    • create launch.json
      • choose the debug type (in this case C++; Launch is recommended)
  3. Set the executable path

  4. Run with launch.json.
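For step 2, a minimal launch.json entry for a plain C++ binary with the gdb-based cppdbg debugger might look like this (the program path is a placeholder for your own executable):

```json
{
    "name": "(gdb) Launch",
    "type": "cppdbg",
    "request": "launch",
    "program": "${workspaceFolder}/build/my_app",
    "args": [],
    "stopAtEntry": false,
    "cwd": "${workspaceFolder}",
    "MIMode": "gdb"
}
```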

ROS DEBUG

  1. debug mode compile
    • catkin_make -DCMAKE_BUILD_TYPE=Debug
  2. Go to Run & Debug
    • create launch.json
      • choose the debug type (in this case: ROS Launch for roslaunch, ROS Attach for rosrun)
{
    "name": "ROS: Launch",
    "type": "ros",
    "request": "launch",
    "target": "absolute path to launch file"
},
{
    "name": "ROS: Attach",
    "type": "ros",
    "request": "attach"
},
  1. Set the target path.

  2. run roscore

  3. Run the debugger

    • In attach mode, start the node with rosrun, then attach the debugger.

Reference: https://github.com/ms-iot/vscode-ros/blob/master/doc/debug-support.md
