for Robot Artificial Intelligence

Lidar on an autonomous car in C++


You would mount a lidar on the roof for the widest field of view, but a roof-mounted lidar cannot see anything happening close to the vehicle, low down near the body. So to fill out the overall sensor coverage of the vehicle, you mount additional lidars at the front, at the back, or on the sides, depending on where the gaps in coverage are. We have a bunch on the roof to maximize the field of view, and some at the front and back to cover the areas the rooftop lidars cannot see.

On the relationship between the vertical field of view and the granularity: take a Velodyne lidar, where you have a finite, discrete number of individual scan layers, such as 16, 32, or 64. Usually all of those lidars have roughly the same vertical field of view, expressed in degrees, of about 30 degrees. If you have 30 degrees and 16 layers, you have about two degrees of spacing between layers. At a range of about 60 meters, a two-degree gap is large enough to hide a pedestrian between two layers. That limits your resolution, and in a sense it also limits how far you can see, because if no scan layer lands on an object, that object is invisible to the lidar. So the more layers you have within the vertical field of view, the finer the granularity, the more objects you can see, and the farther you can effectively see within the overall power range of the lidar.
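
A quick back-of-the-envelope check of those numbers, as a standalone sketch rather than course code: with a 30-degree vertical field of view spread over 16 layers the spacing is roughly two degrees, and at 60 meters that angular gap corresponds to about two meters of height, enough to hide a pedestrian.

#include <cmath>
#include <iostream>

int main()
{
  const double pi = 3.14159265358979323846;
  const double verticalFovDeg = 30.0;  // typical vertical field of view
  const int numLayers = 16;            // e.g. a 16-layer Velodyne

  double spacingDeg = verticalFovDeg / numLayers;                // ~1.9 degrees between layers
  double range = 60.0;                                           // meters
  double gapMeters = range * std::tan(spacingDeg * pi / 180.0);  // vertical gap at that range

  // Prints a spacing of about 1.9 degrees and a gap of about 2 meters at 60 m.
  std::cout << "spacing: " << spacingDeg << " deg, gap at "
            << range << " m: " << gapMeters << " m\n";
  return 0;
}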

  • Creating the lidar rays
Lidar(std::vector<Car> setCars, double setGroundSlope)
  : cloud(new pcl::PointCloud<pcl::PointXYZ>()), position(0,0,2.6)
{
  // TODO:: set minDistance to 5 to remove points from roof of ego car
  minDistance = 5;
  maxDistance = 50;
  resoultion = 0.2;
  // TODO:: set sderr to 0.2 to get more interesting pcd files
  sderr = 0.2;
  cars = setCars;
  groundSlope = setGroundSlope;

  // TODO:: increase number of layers to 8 to get higher resolution pcd
  int numLayers = 8;
  // the steepest vertical angle (-30 degrees, i.e. pointing down)
  double steepestAngle =  30.0*(-pi/180);
  double angleRange = 26.0*(pi/180);
  // TODO:: set to pi/64 to get higher resolution pcd
  double horizontalAngleInc = pi/64;

  double angleIncrement = angleRange/numLayers;

  for(double angleVertical = steepestAngle; angleVertical < steepestAngle+angleRange; angleVertical+=angleIncrement)
  {
    for(double angle = 0; angle <= 2*pi; angle+=horizontalAngleInc)
    {
      Ray ray(position,angle,angleVertical,resoultion);
      rays.push_back(ray);
    }
  }
}

  • Note:

The syntax of PointCloud with the template is similar to the syntax of vectors or other std container libraries: ContainerName<ObjectName>. The Ptr type from PointCloud indicates that the object is actually a pointer (a shared pointer in PCL) that holds the memory address of your point cloud object. Many functions in pcl use point cloud pointers as arguments, so it's convenient to return the inputCloud in this form.

The renderRays function is defined in src/render. It contains functions that allow us to render points and shapes to the pcl viewer, and you will be using it to render your lidar rays as line segments in the viewer. The arguments for the renderRays function are the viewer, which gets passed in by reference (so any changes to the viewer in the body of renderRays directly affect the viewer outside the function scope), the lidar position, and the point cloud that your scan function generated. The type of point for the PointCloud will be pcl::PointXYZ. We will talk about some other point cloud types in a bit.
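
Putting that together, a minimal usage sketch might look like the following. It assumes the course's Lidar class from the constructor above and the renderRays helper from src/render; the scan() call and the wrapper function name are assumptions, not the course's exact code.

// Assumes the course headers defining Car, Lidar and renderRays are already included.
#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

void renderLidarRays(pcl::visualization::PCLVisualizer::Ptr& viewer, std::vector<Car>& cars)
{
  Lidar* lidar = new Lidar(cars, 0.0);                             // cars in the scene, flat ground slope
  pcl::PointCloud<pcl::PointXYZ>::Ptr inputCloud = lidar->scan();  // simulate the scan
  renderRays(viewer, lidar->position, inputCloud);                 // draw the rays as line segments
  delete lidar;
}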

Templates and typename

Templates and different point cloud data: The lidar scan function used previously produced a pcl PointCloud object with pcl::PointXYZ points. The object uses a template because there are many different types of point clouds: some are 3D, some are 2D, some include color and intensity. Here you are working with plain 3D point clouds, so PointXYZ is used. Later in the course, however, you will have points with an intensity component as well. Instead of defining two separate functions, one taking PointXYZ and the other taking PointXYZI, templates let you write the function once and pass the point type like an argument.
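
As an illustration, here is a minimal sketch (not the course's FilterCloud) of one templated function that works for both point types; countPoints is a made-up name:

#include <cstddef>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// One function body covers both pcl::PointXYZ and pcl::PointXYZI clouds.
template<typename PointT>
std::size_t countPoints(typename pcl::PointCloud<PointT>::Ptr cloud)
{
  return cloud->points.size();
}

// Usage: the template argument is spelled out explicitly, because PointT cannot be
// deduced from a dependent type such as pcl::PointCloud<PointT>::Ptr.
// countPoints<pcl::PointXYZ>(cloudXYZ);
// countPoints<pcl::PointXYZI>(cloudXYZI);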

If you haven’t used templates with pointers before, you may have noticed in the code that typename is used whenever a pointer depends on a template parameter. (In other words, when a name depends on a template parameter, the compiler normally assumes it refers to a value; writing typename declares that the name is a type. This comes up a lot when defining templates.) For example, in the function signature here:

typename pcl::PointCloud<PointT>::Ptr ProcessPointClouds<PointT>::FilterCloud(typename pcl::PointCloud<PointT>::Ptr cloud, float filterRes, Eigen::Vector4f minPoint, Eigen::Vector4f maxPoint)

The reason for this is the following: given a piece of code with a type name parameter, like pcl::PointCloud<PointT>::Ptr, the compiler is unable to determine whether the name refers to a value or a type without knowing the value of the type name parameter, so it assumes the code represents a value. If the code actually represents a type, you need to say so explicitly with typename.
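
A small generic illustration (plain C++, not PCL) of the same rule, showing why the keyword is needed for names that depend on a template parameter:

// Ptr is a nested name that depends on the template parameter T.
template<typename T>
struct Container
{
  using Ptr = T*;
};

// Without the leading 'typename', the compiler would assume Container<T>::Ptr names a
// value (for example a static member) and reject the declaration.
template<typename T>
typename Container<T>::Ptr makeNull()
{
  return nullptr;
}

// Usage: int* p = makeNull<int>();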


PointCloud Segmentation in C++


Segmentation

We want to be able to locate obstacles in the scene. However, some objects in our scene are not obstacles. What would be objects that appear in the pcd but are not obstacles? For the most part, any free space on the road is not an obstacle, and if the road is flat it’s fairly straightforward to separate road points from non-road points. To do this we will use a method called planar segmentation, which uses the RANSAC (random sample consensus) algorithm.
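
As a rough sketch of what this looks like with PCL's built-in SACSegmentation (one possible approach, not necessarily the project's final implementation):

#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit a plane to the cloud with RANSAC and return the indices of the inliers
// (the road points); everything outside the plane is treated as an obstacle.
pcl::PointIndices::Ptr segmentRoadPlane(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud,
                                        int maxIterations, float distanceThreshold)
{
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients());
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices());

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);    // plane model for the flat road
  seg.setMethodType(pcl::SAC_RANSAC);       // random sample consensus
  seg.setMaxIterations(maxIterations);
  seg.setDistanceThreshold(distanceThreshold);
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);     // inliers = points on the fitted plane

  return inliers;
}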

Why Segmentation

Engineers use LiDAR much as we described before: they drive around recording the LiDAR signal and keep all the data that comes back from the sensor. Depending on the use case, we either create objects out of that data or work with the point cloud directly. We can then use the objects to track them and make predictions about what they will do, and we can use them for classification: there are certain classes that a neural network running on the LiDAR data can detect and distinguish. Segmentation is saying, okay, this particular set of points belongs to one object, this set belongs to the ground plane, this set belongs to a tree, and so on. You associate certain points with objects and then say: this set of points is one object, and that set of points is another object. That is basically how it works.

Point Processing

The first thing you are going to do is create a processPointClouds object. This is defined by the src/processPointClouds.cpp and src/processPointClouds.h files. This object is going to contain all the methods that you will be using in this module for processing lidar data. The process object also has helper methods for loading and saving PCD files. By the time you complete this exercise, the code should compile, but you will still need to complete the next few concepts before getting results.
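
As a small sketch of the loading step (the exact helper names inside ProcessPointClouds may differ; this calls the underlying PCL function directly):

#include <iostream>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

// Read a .pcd file from disk into a PointXYZ cloud pointer.
pcl::PointCloud<pcl::PointXYZ>::Ptr loadCloud(const std::string& file)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>());
  if (pcl::io::loadPCDFile<pcl::PointXYZ>(file, *cloud) == -1)
    std::cerr << "Could not read file " << file << std::endl;
  return cloud;
}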


Fusion with a Kalman filter in MATLAB, STEP 2


MATLAB Sensor Fusion Guided Walkthrough

The following steps will take you on a guided walkthrough of performing Kalman Filtering in a simulated environment using MATLAB. You can download the starter code file Sensor_Fusion_with_Radar.m for this walkthrough in the Resources section for this lesson.

Sensor fusion and control algorithms for automated driving systems require rigorous testing. Vehicle-based testing is not only time consuming to set up, but also difficult to reproduce. Automated Driving System Toolbox provides functionality to define road networks, actors, vehicles, and traffic scenarios, as well as statistical models for simulating synthetic radar and camera sensor detection. This example shows how to generate a scenario, simulate sensor detections, and use sensor fusion to track simulated vehicles. The main benefit of using scenario generation and sensor simulation over sensor recording is the ability to create rare and potentially dangerous events and test the vehicle algorithms with them. This example covers the entire synthetic data workflow.

Generate scenario

Scenario generation comprises generating a road network, defining vehicles that move on the roads, and moving the vehicles. In this example, you test the ability of the sensor fusion to track a vehicle that is passing on the left of the ego vehicle. The scenario simulates a highway setting, and additional vehicles are in front of and behind the ego vehicle. Find more on how to generate these scenarios here: Automated Driving Toolbox

%% Sensor Fusion Using Synthetic Radar
%% Generate the Scenario
% Scenario generation comprises generating a road network, defining
% vehicles that move on the roads, and moving the vehicles.
%
% Test the ability of the sensor fusion to track a
% vehicle that is passing on the left of the ego vehicle. The scenario
% simulates a highway setting, and additional vehicles are in front of and
% behind the ego vehicle.

% Define an empty scenario.
scenario = drivingScenario;
scenario.SampleTime = 0.01;

%%
% Add a stretch of 500 meters of typical highway road with two lanes. The
% road is defined using a set of points, where each point defines the center of
% the road in 3-D space.
roadCenters = [0 0; 50 0; 100 0; 250 20; 500 40];
road(scenario, roadCenters, 'lanes',lanespec(2));

%%
% Create the ego vehicle and three cars around it: one that overtakes the
% ego vehicle and passes it on the left, one that drives right in front of
% the ego vehicle and one that drives right behind the ego vehicle. All the
% cars follow the trajectory defined by the road waypoints by using the
% |trajectory| driving policy. The passing car will start on the right
% lane, move to the left lane to pass, and return to the right lane.

% Create the ego vehicle that travels at 25 m/s along the road.  Place the
% vehicle on the right lane by subtracting off half a lane width (1.8 m)
% from the centerline of the road.
egoCar = vehicle(scenario, 'ClassID', 1);
trajectory(egoCar, roadCenters(2:end,:) - [0 1.8], 25); % On right lane

% Add a car in front of the ego vehicle
leadCar = vehicle(scenario, 'ClassID', 1);
trajectory(leadCar, [70 0; roadCenters(3:end,:)] - [0 1.8], 25); % On right lane

% Add a car that travels at 35 m/s along the road and passes the ego vehicle
passingCar = vehicle(scenario, 'ClassID', 1);
waypoints = [0 -1.8; 50 1.8; 100 1.8; 250 21.8; 400 32.2; 500 38.2];
trajectory(passingCar, waypoints, 35);

% Add a car behind the ego vehicle
chaseCar = vehicle(scenario, 'ClassID', 1);
trajectory(chaseCar, [25 0; roadCenters(2:end,:)] - [0 1.8], 25); % On right lane

Define Radar

In this example, you simulate an ego vehicle that has 6 radar sensors covering a 360-degree field of view. The sensors have some overlap and some coverage gaps. The ego vehicle is equipped with a long-range radar sensor on both the front and the back of the vehicle. Each side of the vehicle has two short-range radar sensors, each covering 90 degrees. One sensor on each side covers from the middle of the vehicle to the back. The other sensor on each side covers from the middle of the vehicle forward. The figure in the next section shows the coverage.

sensors = cell(6,1);
% Front-facing long-range radar sensor at the center of the front bumper of the car.
sensors{1} = radarDetectionGenerator('SensorIndex', 1, 'Height', 0.2, 'MaxRange', 174, ...
    'SensorLocation', [egoCar.Wheelbase + egoCar.FrontOverhang, 0], 'FieldOfView', [20, 5]);

% Rear-facing long-range radar sensor at the center of the rear bumper of the car.
sensors{2} = radarDetectionGenerator('SensorIndex', 2, 'Height', 0.2, 'Yaw', 180, ...
    'SensorLocation', [-egoCar.RearOverhang, 0], 'MaxRange', 174, 'FieldOfView', [20, 5]);

% Rear-left-facing short-range radar sensor at the left rear wheel well of the car.
sensors{3} = radarDetectionGenerator('SensorIndex', 3, 'Height', 0.2, 'Yaw', 120, ...
    'SensorLocation', [0, egoCar.Width/2], 'MaxRange', 30, 'ReferenceRange', 50, ...
    'FieldOfView', [90, 5], 'AzimuthResolution', 10, 'RangeResolution', 1.25);

% Rear-right-facing short-range radar sensor at the right rear wheel well of the car.
sensors{4} = radarDetectionGenerator('SensorIndex', 4, 'Height', 0.2, 'Yaw', -120, ...
    'SensorLocation', [0, -egoCar.Width/2], 'MaxRange', 30, 'ReferenceRange', 50, ...
    'FieldOfView', [90, 5], 'AzimuthResolution', 10, 'RangeResolution', 1.25);

% Front-left-facing short-range radar sensor at the left front wheel well of the car.
sensors{5} = radarDetectionGenerator('SensorIndex', 5, 'Height', 0.2, 'Yaw', 60, ...
    'SensorLocation', [egoCar.Wheelbase, egoCar.Width/2], 'MaxRange', 30, ...
    'ReferenceRange', 50, 'FieldOfView', [90, 5], 'AzimuthResolution', 10, ...
    'RangeResolution', 1.25);

% Front-right-facing short-range radar sensor at the right front wheel well of the car.
sensors{6} = radarDetectionGenerator('SensorIndex', 6, 'Height', 0.2, 'Yaw', -60, ...
    'SensorLocation', [egoCar.Wheelbase, -egoCar.Width/2], 'MaxRange', 30, ...
    'ReferenceRange', 50, 'FieldOfView', [90, 5], 'AzimuthResolution', 10, ...
    'RangeResolution', 1.25);

Create a multiObjectTracker

Create a multiObjectTracker to track the vehicles that are close to the ego vehicle. The tracker uses the initSimDemoFilter supporting function to initialize a constant velocity linear Kalman filter that works with position and velocity. Tracking is done in 2-D. Although the sensors return measurements in 3-D, the motion itself is confined to the horizontal plane, so there is no need to track the height.

%% Create a Tracker
% Create a |<matlab:doc('multiObjectTracker') multiObjectTracker>| to track
% the vehicles that are close to the ego vehicle. The tracker uses the
% |initSimDemoFilter| supporting function to initialize a constant velocity
% linear Kalman filter that works with position and velocity.
%
% Tracking is done in 2-D. Although the sensors return measurements in 3-D,
% the motion itself is confined to the horizontal plane, so there is no
% need to track the height.



%% TODO*
%Change the Tracker Parameters and explain the reasoning behind selecting
%the final values. You can find more about parameters here: https://www.mathworks.com/help/driving/ref/multiobjecttracker-system-object.html

tracker = multiObjectTracker('FilterInitializationFcn', @initSimDemoFilter, ...
    'AssignmentThreshold', 30, 'ConfirmationParameters', [4 5], 'NumCoastingUpdates', 5);
positionSelector = [1 0 0 0; 0 0 1 0]; % Position selector
velocitySelector = [0 1 0 0; 0 0 0 1]; % Velocity selector

% Create the display and return a handle to the bird's-eye plot
BEP = createDemoDisplay(egoCar, sensors);

The multiObjectTracker function has several parameters that can be tuned for different driving scenarios. They control track creation and deletion. You can learn more about these parameters in the documentation linked in the TODO above.

Simulate the Scenario

The following loop moves the vehicles, calls the sensor simulation, and performs the tracking. Note that the scenario generation and sensor simulation can have different time steps. Specifying different time steps for the scenario and the sensors enables you to decouple the scenario simulation from the sensor simulation. This is useful for modeling actor motion with high accuracy independently from the sensor’s measurement rate.

Another example is when the sensors have different update rates. Suppose one sensor provides updates every 20 milliseconds and another sensor provides updates every 50 milliseconds. You can specify the scenario with an update rate of 10 milliseconds and the sensors will provide their updates at the correct time. In this example, the scenario generation has a time step of 0.01 second, while the sensors detect every 0.1 second.

The sensors return a logical flag, isValidTime, that is true if the sensors generated detections. This flag is used to call the tracker only when there are detections. Another important note is that the sensors can simulate multiple detections per target, in particular when the targets are very close to the radar sensors. Because the tracker assumes a single detection per target from each sensor, you must cluster the detections before the tracker processes them. This is done by implementing a clustering algorithm, as discussed above.

%% Simulate the Scenario


% In this example, the scenario generation has a time step of 0.01 second,
% while the sensors detect every 0.1 second. The sensors return a logical
% flag, |isValidTime|, that is true if the sensors generated detections.
% This flag is used to call the tracker only when there are detections.
%
% Another important note is that the sensors can simulate multiple
% detections per target, in particular when the targets are very close to
% the radar sensors. Because the tracker assumes a single detection per
% target from each sensor, you must cluster the detections before the
% tracker processes them. This is done by the function |clusterDetections|.
% See the 'Supporting Functions' section.

toSnap = true;
while advance(scenario) && ishghandle(BEP.Parent)    
    % Get the scenario time
    time = scenario.SimulationTime;

    % Get the position of the other vehicle in ego vehicle coordinates
    ta = targetPoses(egoCar);

    % Simulate the sensors
    detections = {};
    isValidTime = false(1,6);
    for i = 1:6
        [sensorDets,numValidDets,isValidTime(i)] = sensors{i}(ta, time);
        if numValidDets
            detections = [detections; sensorDets]; %#ok<AGROW>
        end
    end

    % Update the tracker if there are new detections
    if any(isValidTime)
        vehicleLength = sensors{1}.ActorProfiles.Length;
        detectionClusters = clusterDetections(detections, vehicleLength);
        confirmedTracks = updateTracks(tracker, detectionClusters, time);

        % Update bird's-eye plot
        updateBEP(BEP, egoCar, detections, confirmedTracks, positionSelector, velocitySelector);
    end

    % Snap a figure for the document when the car passes the ego vehicle
    if ta(1).Position(1) > 0 && toSnap
        toSnap = false;
        snapnow
    end
end


Define the Kalman Filter

Define the Kalman filter here to be used with the multiObjectTracker. In MATLAB, the trackingKF function can be used to initialize a Kalman filter for any of several motion models, including 1D, 2D, or 3D constant velocity and even constant acceleration. You can read more about it in the trackingKF documentation. The initSimDemoFilter supporting function below initializes a constant velocity filter based on a detection.

%% Supporting Functions
%
%
% This function initializes a constant velocity filter based on a detection.
function filter = initSimDemoFilter(detection)
% Use a 2-D constant velocity model to initialize a trackingKF filter.
% The state vector is [x;vx;y;vy]
% The detection measurement vector is [x;y;vx;vy]
% As a result, the measurement model is H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1]

%TODO: Implement the Kalman filter using trackingKF function. If stuck
%review the implementation discussed in the project walkthrough

H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1];
filter = trackingKF('MotionModel', '2D Constant Velocity', ...
                    'State', H' * detection.Measurement, ...
                    'MeasurementModel', H, ...
                    'StateCovariance', H' * detection.MeasurementNoise * H, ...
                    'MeasurementNoise', detection.MeasurementNoise);
end

Cluster Detections

This function merges multiple detections suspected to be of the same vehicle to a single detection. The function looks for detections that are closer than the size of a vehicle. Detections that fit this criterion are considered a cluster and are merged to a single detection at the centroid of the cluster. The measurement noises are modified to represent the possibility that each detection can be anywhere on the vehicle. Therefore, the noise should have the same size as the vehicle size. In addition, this function removes the third dimension of the measurement (the height) and reduces the measurement vector to [x;y;vx;vy].
%%%
% *|clusterDetections|*
%
% This function merges multiple detections suspected to be of the same
% vehicle to a single detection. The function looks for detections that are
% closer than the size of a vehicle. Detections that fit this criterion are
% considered a cluster and are merged to a single detection at the centroid
% of the cluster. The measurement noises are modified to represent the
% possibility that each detection can be anywhere on the vehicle.
% Therefore, the noise should have the same size as the vehicle size.
%
% In addition, this function removes the third dimension of the measurement
% (the height) and reduces the measurement vector to [x;y;vx;vy].
function detectionClusters = clusterDetections(detections, vehicleSize)
N = numel(detections);
distances = zeros(N);
for i = 1:N
    for j = i+1:N
        if detections{i}.SensorIndex == detections{j}.SensorIndex
            distances(i,j) = norm(detections{i}.Measurement(1:2) - detections{j}.Measurement(1:2));
        else
            distances(i,j) = inf;
        end
    end
end
leftToCheck = 1:N;
i = 0;
detectionClusters = cell(N,1);
while ~isempty(leftToCheck)    
    % Remove the detections that are in the same cluster as the one under
    % consideration

    %TODO : Complete the clustering loop based on the implementation
    %discussed in the lesson
    underConsideration = leftToCheck(1);
    clusterInds = (distances(underConsideration, leftToCheck) < vehicleSize);
    detInds = leftToCheck(clusterInds);
    clusterDets = [detections{detInds}];
    clusterMeas = [clusterDets.Measurement];
    meas = mean(clusterMeas,2);
    meas2D = [meas(1:2); meas(4:5)];
    i = i+1;
    detectionClusters{i} = detections{detInds(1)};
    detectionClusters{i}.Measurement  = meas2D;
    leftToCheck(clusterInds) = [];
end
detectionClusters(i+1:end) = [];

% Since the detections are now for clusters, modify the noise to represent
% that they are of the whole car
for i = 1:numel(detectionClusters)
    measNoise(1:2,1:2) = vehicleSize^2 * eye(2);
    measNoise(3:4,3:4) = eye(2) * 100 * vehicleSize^2;
    detectionClusters{i}.MeasurementNoise = measNoise;
end
end


Fusion with a Kalman filter in MATLAB, STEP 1


Kalman Filter Framework

  • x_k, the state of the vehicle at the k-th step
  • A, the state-transition model
  • P_k, the state covariance matrix (state estimation error covariance)
  • B, the control matrix (external influence)
  • C, the observation/measurement model
  • Q, the covariance of the process noise
  • R, the covariance of the observation noise

The purpose of the Kalman filter is to estimate the state of a tracked vehicle. Here, “state” could include the position, velocity, acceleration or other properties of the vehicle being tracked. The Kalman filter uses measurements that are observed over time that contain noise, random variations and other inaccuracies, and produces values that tend to be closer to the true values of the measurements and their associated calculated values. It is the central algorithm in the majority of modern radar tracking systems.

The Kalman filter process has two steps: prediction and update.

  1. Prediction step: Using the target vehicle’s motion model, the next state of the target vehicle is predicted from the current state. Since we know the current position and velocity of the target from the previous timestamp, we can predict its position at the next timestamp. For example, using a constant velocity model, the new position of the target vehicle can be computed as x_new = x_prev + v * t.

  2. Update step: Here, the Kalman filter uses noisy measurement data from the sensors and combines it with the prediction from the previous step to produce a best-possible estimate of the state; the full prediction and update equations are written out below.
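
Written out with the symbols listed above (u_k is the control input and K_k the Kalman gain, both standard notation not in the list; an apostrophe denotes the transpose, MATLAB-style), the standard linear Kalman filter cycle is:

Prediction:
    x_k_pred = A*x_(k-1) + B*u_k
    P_k_pred = A*P_(k-1)*A' + Q

Update:
    K_k = P_k_pred*C' * inv(C*P_k_pred*C' + R)
    x_k = x_k_pred + K_k*(z_k - C*x_k_pred)
    P_k = (I - K_k*C)*P_k_pred

Here z_k is the measurement at step k, and the "pred" quantities are the predicted (prior) state and covariance from the prediction step.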

Implementation in MATLAB: The following guidelines can be used to implement a basic Kalman filter for the next project.

  • You will define the Kalman filter using the trackingKF function. The function signature is as follows:
filter = trackingKF('MotionModel', model, 'State', state, 'MeasurementModel', measurementModel, 'StateCovariance', stateCovariance, 'MeasurementNoise', measurementNoise)

In this function signature, each property (e.g. 'MotionModel') is followed by the value for that property (e.g. model).

  • For the model variable, you can pass the string '2D Constant Velocity', which provides the 2D constant velocity motion model.
  • For the 2D constant velocity model the state vector (x) can be defined as: [x;vx;y;vy]

Here, x and y are the 2D position coordinates. The variables vx and vy are the corresponding velocities in 2D.

  • The radarDetectionGenerator function is used to generate detection points based on the returns after reflection. Every radar detection generates a detection measurement and a measurement noise matrix: detection.Measurement and detection.MeasurementNoise. The detection measurement vector (z) has the format [x;y;vx;vy].

  • Measurement model: Measurements are what you observe about your system. Measurements depend on the state vector but are not always the same as the state vector. The measurement model assumes that the actual measurement at any time is related to the current state by

z  = H*x

As a result, for the case above the measurement model is

H = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1]

Using this measurement model, the state can be derived from the measurements.

x = H'*z

state = H'*detection.Measurement

  • Further, using the generated measurement noise and the measurement model, define the state covariance matrix:
stateCovariance = H'*detection.MeasurementNoise*H


Why we need sensor fusion | sensor pros & cons | Kalman Filter


Introduction to sensor fusion

1. Sensor fusion

Every sensor carries uncertainty and has its own pros and cons, so we need sensor fusion. Even with sensor fusion, it is still difficult to fully figure out the world.

2. Sensors

In the world there are many kinds of sensors: camera, lidar, radar, GPS, IMU, and even biological sensors.

3. Lidar

  • Active sensor (it emits its own laser light)
  • 360-degree horizontal FOV, plus a vertical field of view (how high and how low it can see)
  • Produces a point cloud; the intensity values are used by ML/DL methods for detecting objects

4. Camera

  • Passive sensor, so it depends on external light (limited night vision)
  • High resolution; image content such as signs and text can only be read by a camera
  • A camera can behave like an active sensor when paired with preprocessing algorithms (intensity-based or feature-based)

5. Radar

  • Active sensor, based on signal processing
  • Measures position and velocity directly (it propagates a signal out and reads the frequency of the return from the object)
  • Low resolution: it can only roughly tell where an object is
  • Cost effective

6. Fusion

  • Merge the sensors and figure out the environment using a mathematical prediction-and-update cycle (the prediction is corrected with the actual observation data)
  • Sensor signals often arrive in continuous form, but we can convert them to discrete form
  • Choose the correct sensor for each part of the task

7. Kalman Filters

The basic Kalman filter assumes a linear model.

But the real world is non-linear (curves, unexpected behavior).

So we need the extended Kalman filter (a first-order Taylor linearization; not great over long drives because the linearization error accumulates) or the unscented Kalman filter (sampling-based correction, similar in spirit to a particle filter, and better suited to long drives).
