
Summary of "Fast range image-based segmentation of sparse 3D laser scans for online operation"



Authors: Igor Bogoslavskyi and Cyrill Stachniss

Presented at IROS, Daejeon, South Korea


All images are from the paper itself.


Abstract:

  • This paper presents a fast method that segments 3D laser scans into object clusters, runs online, and has small computational requirements.

  • It is fast because it does not run on the full 3D point cloud; instead it performs the computation on a 2D range image, which allows fast segmentation of each scan.

  • It performs well even on sparse 3D data, allowing the use of lower-cost LIDARs, and the authors make the source code of this approach available, so we can use it for our own clustering needs.



Introduction:

  • Mobile robotics is a booming field that needs low-computational-cost algorithms able to run online, on the robot itself.

  • A robot must be able to navigate through an unknown environment, and the biggest challenge it faces while planning its own path is to accurately detect the objects around it.

  • In a known environment (prebuilt maps), it is crucial that the algorithm segments dynamic or moving objects into separate clusters so that they can be tracked.

  • The paper proposes performing the computations on a cylindrical range image of the LIDAR scan.

  • This is advantageous because the range image is small and dense, and it maintains neighborhood information similar to the 3D structure.

  • It also keeps the computational cost of clustering low by working on 2D images instead of 3D data.


Related Work:


Segmentation of scans can be divided into three different groups:

  • The first group defines hand-crafted features by which the data is described directly in 3D [8], or first runs a ground segmentation algorithm to remove the ground points and then uses a variant of the nearest-neighbors approach to segment the data into clusters [5].

  • The second group projects the 3D points onto a 2D grid on the ground. These algorithms are fast, but they suffer from under-segmentation along the Z-direction: multiple point cloud clusters stacked along the Z-axis can be grouped into a single object. This effect depends on the grid parameters and can be tuned for different environments. [22]

  • The third group performs the segmentation on the range image, and this paper falls into this group. Previous work, e.g. Moosmann et al. [18], uses the range image to compute local convexities of points in the cloud, i.e., to find local clusters. The approach in this paper is easier to implement and relies on only a single parameter.


There are several other works in LIDAR segmentation that operate on RGBD data, register multiple cameras for visual information, or use temporal information and tracking to enhance segmentation. Here, however, the paper focuses only on individual laser scans without temporal information, while acknowledging the usefulness of tracking.


The Algorithm for Fast and Effective Segmentation Using Laser Range Images:


  • The primary problem when working with sparse point clouds is that the vertical resolution of the image is very low, which impacts the segmentation algorithm because for every pair of points one has to decide whether the reflected laser beams come from the same object or not. Since this paper does not work with the 3D data directly but with a cylindrical range image, it has the benefit of exploiting the neighborhood relationships, which makes the segmentation problem easier and faster to compute.

  • LIDAR data is provided as raw data: individual range readings per laser beam, each with a timestamp and the orientation of the beam. This allows us to turn the data directly into a range image. The number of rows in the image is defined by the number of beams in the vertical direction, i.e., 16, 32, or 64 for the Velodyne scanners. The number of columns is given by the number of range readings per 360° revolution of the scanner. Each pixel of such a virtual image stores the measured distance from the sensor to the object. For example, a 16-channel Velodyne LIDAR with a horizontal angular resolution of 1° yields a range image with 16 rows and 360 columns, each cell holding the distance of the object from the LIDAR.
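The projection above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the vertical field of view of ±15° is an assumption matching a VLP-16-style scanner, and the row/column mapping is a simple nearest-bin discretization.

```python
import numpy as np

def build_range_image(points, n_beams=16, n_cols=360, v_fov=(-15.0, 15.0)):
    """Project 3D points (N x 3, sensor frame) into a cylindrical
    range image of shape (n_beams, n_cols).

    v_fov: vertical field of view in degrees (assumed VLP-16-like).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)        # range per point
    yaw = np.degrees(np.arctan2(y, x))     # horizontal angle in [-180, 180)
    pitch = np.degrees(np.arcsin(z / r))   # vertical angle

    # map angles to pixel coordinates (nearest bin)
    col = ((yaw + 180.0) / 360.0 * n_cols).astype(int) % n_cols
    row = (pitch - v_fov[0]) / (v_fov[1] - v_fov[0]) * (n_beams - 1)
    row = np.clip(np.round(row).astype(int), 0, n_beams - 1)

    img = np.zeros((n_beams, n_cols))
    img[row, col] = r                      # each pixel stores the range
    return img
```

A point straight ahead at (1, 0, 0) lands in the middle row and column of the image with a range of 1.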

  • This work uses the ground estimation from [13]; once the ground is estimated, it is removed from the range image. This preprocessing removes ground points and allows better clustering of the remaining points.

  • The key feature of such an algorithm is its ability to decide whether two laser returns come from the same object or from different objects. This paper presents an approach that is easy to implement and cheap to compute.

  • The bottom image of Fig. 4 shows an example scene with two people walking close to each other in front of a bicyclist, who passes between them and a parked car. This scene was recorded with the authors' Velodyne VLP-16 scanner. The bottom-left image illustrates two arbitrary points A and B measured from the scanner located at O, with the laser beams OA and OB; the scanner is taken to sit at the origin O of the coordinate system. The angle β is defined as the angle between the laser beam and the line connecting A and B, measured at the point that is farther away from the scanner (in this example, A). Intuitively, the angle β relates the depth difference between the two captured points, and its value tells us whether A and B lie on the same object.



  • We know the distance ‖OA‖ from the first laser measurement, as well as ‖OB‖ from the second. Calling these range measurements d1 and d2 (with d1 the larger one), we can calculate β with simple trigonometry: β = arctan( d2 sin α / (d1 − d2 cos α) ), where α is the known angle between the two beams, usually provided in the documentation of the scanner. The bottom-right image in Fig. 4 illustrates the computation in the x−y plane from a top-down view of the scene.



  • The intuition behind the angle β is that it stays relatively large for most objects and only takes small values when the depth difference between neighboring points in the range image is substantially larger than their displacement in the image plane, which is defined by the angular resolution of the scanner. This insight allows us to define a parameter θ that acts as a threshold on the angle β, enabling a decision on whether to separate any two points in the range image into different clusters or merge them into one. If β is smaller than the user-defined value θ, we argue that the change in depth is too large and separate the points into different segments. Otherwise, the points are considered to lie on the same object.
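The β criterion can be sketched directly from the geometry above. This is a reconstruction from the description, not the paper's released code; the 10° default threshold is an illustrative assumption.

```python
import math

def beta(d1, d2, alpha):
    """Angle between the beam to the farther point and the line
    connecting the two measured points; d1, d2 are the two range
    readings, alpha the angular step between the beams (radians)."""
    far, near = max(d1, d2), min(d1, d2)
    return math.atan2(near * math.sin(alpha),
                      far - near * math.cos(alpha))

def same_segment(d1, d2, alpha, theta=math.radians(10.0)):
    """Merge the two points only if beta exceeds the threshold theta."""
    return beta(d1, d2, alpha) > theta
```

Two neighboring returns at nearly equal range (e.g. 5 m and 5 m, 1° apart) give a β close to 90°, so they merge; a large depth jump (e.g. 10 m to 2 m) gives a β well under a degree, so the points are split.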

  • A threshold-based criterion on β is clearly a heuristic, but it works well in practice, as the authors illustrate in the experimental evaluation. A failure case is a situation in which the scanner sees a wall at a shallow, grazing angle, where β will obviously be very small and the wall may be split apart.

  • Thus, we can view the segmentation problem as the problem of finding connected components in the 2D depth image under the constraint on β.







The Algorithm :


  • Line 2: Label = 1; R = input range image.

  • Line 3: L = a matrix of zeros with the same dimensions as R.

  • Lines 4-5: Loop over the image from top to bottom and left to right.

  • Line 6: Whenever we encounter a pixel whose label is 0 (initially every pixel, since L is filled with zeros)...

  • Line 7: ...we start a BFS (breadth-first search) from it.

  • Line 8: After returning from the BFS, we increase the label count by 1. The loop then continues until the end of the image.

  • Line 9: Declaration of the BFS.

  • Line 10: Push the point from line 7 into a queue.

  • Line 11: Repeat until the queue is empty.

  • Lines 12-13: Assign the value of Label to L(r, c).

  • Line 14: Do an N4 neighborhood search around (r, c), i.e., look up, down, left, and right.

  • Lines 15-18: Check if the angle β is greater than the threshold; if it is, push the neighbor into the queue.

  • Line 19: Pop the point from the queue.


In brief, the algorithm scans the image from the top left toward the bottom right, looking for pixels whose label is still zero. When it finds one, it starts a breadth-first search from that position and keeps expanding to neighboring pixels as long as the angle β between them exceeds the threshold, i.e., as long as they appear to belong to the same object. When a neighboring beam is found not to be part of the object, it is not added to the queue; all the points visited by the search receive the current label as they are popped from the queue. The algorithm then increments the label, returns to the raster scan, and continues looking for zero labels until it reaches the end of the image.
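The labeling pass described above can be sketched as follows. This is a reconstruction from the paper's pseudocode, not the authors' released implementation; the handling of zero pixels (no return / removed ground) and the column wrap-around for the 360° scan are assumptions of this sketch.

```python
import math
from collections import deque
import numpy as np

def label_range_image(R, alpha_h, alpha_v, theta=math.radians(10.0)):
    """Connected-component labeling of a range image R (rows x cols).

    alpha_h / alpha_v: horizontal / vertical angular resolution (radians);
    theta: segmentation threshold on beta. Zero pixels stay unlabeled.
    """
    rows, cols = R.shape
    L = np.zeros((rows, cols), dtype=int)
    label = 1

    def beta(d1, d2, alpha):
        far, near = max(d1, d2), min(d1, d2)
        return math.atan2(near * math.sin(alpha),
                          far - near * math.cos(alpha))

    for r0 in range(rows):
        for c0 in range(cols):
            if L[r0, c0] == 0 and R[r0, c0] > 0:
                # BFS from the first unlabeled valid pixel
                queue = deque([(r0, c0)])
                while queue:
                    r, c = queue.popleft()
                    if L[r, c] != 0:
                        continue
                    L[r, c] = label
                    # N4 neighborhood; columns wrap around the 360 deg scan
                    for dr, dc, a in ((-1, 0, alpha_v), (1, 0, alpha_v),
                                      (0, -1, alpha_h), (0, 1, alpha_h)):
                        rn, cn = r + dr, (c + dc) % cols
                        if 0 <= rn < rows and L[rn, cn] == 0 \
                                and R[rn, cn] > 0 \
                                and beta(R[r, c], R[rn, cn], a) > theta:
                            queue.append((rn, cn))
                label += 1
    return L
```

On a toy one-row image with a depth jump, e.g. ranges [5.0, 5.0, 5.1, 20.0, 20.0] at 1° horizontal resolution, the first three pixels form one segment and the last two another.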


Results :

The image below shows that this algorithm performs comparably to a grid-based segmentation algorithm while also running faster.




Conclusion :

This paper provides a fast and easy 3D LIDAR segmentation algorithm that is comparable to grid-based algorithms while running faster. The algorithm also works better on sparse data, such as that produced by a 16-beam LIDAR.


By

Gaurav Bhosale


References for important works :

[8] B. Douillard, J. Underwood, V. Vlaskine, A. Quadros, and S. Singh. A pipeline for the segmentation and classification of 3D point clouds. In Proc. of the Int. Symposium on Experimental Robotics (ISER), 2014.


[5] Y. Choe, S. Ahn, and M.J. Chung. Fast point cloud segmentation for an intelligent vehicle using sweeping 2D laser scanners. In Proc. of the Intl. Conf. on Ubiquitous Robots and Ambient Intelligence (URAI), pages 38–43, 2012.


[22] D. Steinhauser, O. Ruepp, and D. Burschka. Motion segmentation and scene classification from 3D lidar data. In Proc. of the Intelligent Vehicles Symposium, pages 398–403, 2008.


[18] F. Moosmann, O. Pink, and Ch. Stiller. Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion. In Proc. of the Intelligent Vehicles Symposium, pages 215–220, 2009.


[13] M. Himmelsbach, F. v. Hundelshausen, and H. Wuensche. Fast segmentation of 3D point clouds for ground vehicles. In IEEE Intelligent Vehicles Symposium, pages 560–565, 2010.
































