- Open Access
Enhanced ground segmentation method for Lidar point clouds in human-centric autonomous robot systems
© The Author(s) 2019
- Received: 21 November 2018
- Accepted: 25 April 2019
- Published: 9 May 2019
Ground segmentation is an important step for any autonomous or remote-controlled system. After the ground and nonground parts have been separated, tasks such as object tracking and 3D reconstruction can be performed. In this paper, we propose an efficient method for segmenting the ground data of point clouds acquired from multi-channel Lidar sensors. The goal of this study is to completely separate ground points and nonground points in real time. The proposed method segments ground data efficiently and accurately in various environments such as flat terrain, undulating/rugged terrain, and mountainous terrain. First, the point cloud in each obtained frame is divided into small groups. We then focus on the vertical and horizontal directions separately, before processing both directions concurrently. Experiments were conducted, and the results showed the effectiveness of the proposed ground segmentation method. For flat and sloping terrains, the accuracy is over 90%; for bumpy terrains, it is over 80%. Moreover, the method runs at 145 frames per second. Therefore, we achieved good results and real-time performance on both simple and complex terrains.
- Internet of things
- Autonomous robot
- Point cloud
- Ground segmentation
The Internet of things (IoT) is growing fast all over the world [1–7]. In an IoT-based system for autonomous vehicles, light detection and ranging (Lidar) sensors are often used to collect data about the surrounding environment. Furthermore, in human-centric autonomous systems, robots also have several attached cameras and an inertial measurement unit-global positioning system (IMU-GPS) sensor. In each frame, the Lidar sensor returns a point cloud that describes the terrain around the robot. The data from the Lidar sensor are transferred to a computer and split into two groups: ground and nonground. The first group includes ground points of terrain that a robot can traverse. The second group consists of nonground points that the robot cannot traverse, such as cars, trees, and walls. If the terrain is so steeply sloped that the autonomous robot cannot traverse it, the corresponding points are clustered into the nonground group. The segmentation of three-dimensional (3D) point cloud ground data is a fundamental step that is extremely important for robot operation. In particular, ground cloud segmentation is a pre-processing step for many terrain reconstruction applications [8–10]. The ground segmentation result is used for object recognition, classification, and feature extraction. Dealing with large datasets in real time and in urgent situations, such as rescuing people in distress, is a challenging task. This is why a fast and accurate ground segmentation method is necessary for real-time autonomous systems.
In recent years, ground segmentation has become an important and challenging task, and is now the focus of considerable research. However, ground segmentation remains an open problem because of the complexity of the input data and the real-time requirements. Therefore, this paper proposes a fast and highly accurate ground segmentation method for 3D Lidar point clouds. The main contribution of this paper is a ground segmentation approach that is considerably faster than previous ones, based on geometric features and the distribution of points in each scanline. In addition, the proposed method achieves high accuracy over various terrains.
The remainder of this paper is organized as follows. The next section presents several related works. “Ground segmentation algorithm” section proposes the novel ground segmentation approach. “Experiments and analysis” section summarizes the results from experiments. The discussion and conclusion are presented in “Discussion” and “Conclusion” sections, respectively.
Numerous approaches have been used to segment 3D point cloud data. However, the discovery of a fast and accurate ground segmentation method is still a challenging task for real-time autonomous systems. We roughly categorize these techniques in relation to our present research as outlined below.
Typically, current object tracking approaches [11–13] segment objects on the ground from the background in a frame-by-frame manner using two-dimensional (2D) images. These approaches can be extended to enable segmentation with 3D images using large datasets such as 3D point cloud data. For example, a fully automatic approach for 3D point cloud segmentation [14] uses the ground segmentation results for detection and geometric modeling.
Wallenberg et al. [15] proposed a color-based leaf segmentation method using data gathered from a Kinect sensor. Their algorithm segments an RGB image from a color camera (typically, the leaves of a plant from the background) based on color and depth information. In [16], the authors proposed an active segmentation technique based on the depth and color information of RGBD images, but with the goal of tracking objects and keeping them in the center of the image.
Hernández and Marcotegui [17] focused on the automatic detection and classification of artifacts located in the ground region. Although this method achieves good results on flat terrain, it cannot be applied to sloping or rugged terrain. In [18], the authors described the segmentation of ground in flat and non-flat urban environments using a local convexity criterion. Their results indicate good performance over a variety of terrains, but the computational cost of the algorithm means that it cannot process data in real time. Cho et al. [19] and Lin and Zhang [20] proposed different ground segmentation approaches that divide a point cloud dataset into smaller parts such as voxels or blocks. Their results are good in certain cases, and their algorithms work well on all kinds of terrain but can be time-consuming. Douillard et al. [21] proposed a set of segmentation methods designed for various 3D point cloud densities. Although these methods achieve good results within a reasonable computation time, the dependency on sets of adjacent points (four neighboring points) for each calculation makes it time-consuming to build a terrain mesh and implement further computations. In [22], the authors introduced a new segmentation method based on scanline segmentation. This approach enhances the processing efficiency for massive amounts of data using GPU acceleration. However, this method only works well in urban areas.
Wellington et al. [23] presented a generative terrain model that exploits the natural structure observed by the sensors. Their model exploits 3D spatial structure in outdoor domains and uses a set of noisy data to classify obstacles and estimate the ground height and vegetation height. To detect obstacles while supporting the ground estimation process, their model includes two Markov random fields, a hidden semi-Markov model (HSMM), and voxel models. Their approach models 3D structures efficiently in vegetation and smooth terrain using a 150 × 150 grid of 15-cm square voxel columns; as a result, the method is computationally demanding. In [24], the authors proposed a region-growing algorithm based on an octree-based voxelized representation to extract segments, with a separate process to refine the segmentation. According to their results, this approach performs particularly well in urban environments and is computationally efficient. Zhang et al. [25] proposed a ground segmentation method that combines a Markov random field with loopy belief propagation to update the ground-height probabilities and segmentation. Their algorithm segments rough and steeply sloped regions with good results. However, it cannot operate in real time because its average processing time is greater than 1 s, so it cannot meet the twin requirements of real-time processing and good quality.
The fast ground segmentation method of [26] can segment normal terrain accurately and efficiently. However, for complex terrain, this method becomes inefficient. To overcome the issues discussed above and improve on the previous approach, we propose a novel segmentation algorithm that deals with a wide variety of terrains and is sufficiently fast for real-time operation.
Processing in the vertical direction
Processing in the horizontal direction
Dividing a scanline into level-2 lines
The method of dividing scanlines into level-2 lines is inspired by real-life observations. In the scanline, the distance between two consecutive points on a single object is less than the distance between two consecutive points on two different objects. Based on this observation, each scanline is divided into smaller lines.
The distance between two consecutive points in each scanline is calculated. If the distance is less than some constant minimum value dmin, we place both points on one level-2 line. Otherwise, the previous point is placed on the current level-2 line and the next point is assigned to a new level-2 line. The value of dmin depends on the type of Lidar sensor and number of channels of the scanline.
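The splitting rule above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name `split_scanline` and the tuple-based point representation are our own assumptions, and dmin is given in metres.

```python
import math

def split_scanline(points, d_min=0.20):
    """Split one scanline into level-2 lines (illustrative sketch).

    points: list of (x, y, z) tuples ordered along the scanline.
    d_min:  distance threshold; two consecutive points closer than
            d_min are assumed to belong to the same object, and hence
            to the same level-2 line.
    Returns a list of level-2 lines, each a list of points.
    """
    if not points:
        return []
    lines = [[points[0]]]                 # first point opens the first line
    for prev, curr in zip(points, points[1:]):
        if math.dist(prev, curr) < d_min: # same object: extend current line
            lines[-1].append(curr)
        else:                             # gap: start a new level-2 line
            lines.append([curr])
    return lines
```

For example, a scanline whose points jump by 0.8 m in the middle is split into two level-2 lines at that gap.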
Classification and labeling of level-2 lines
Each level-2 line is a set of points. There are four types of level-2 line. The first type contains only ground points, whereas the second type contains only nonground points. The third and fourth types contain both ground and nonground points. In the third type, the lines include separate and distinct ground and nonground parts. In the fourth type, the ground and nonground points alternate and are mixed together.
In the next step, the number of types is reduced from four to two. All lines of the first and second types are kept unchanged. For the third type, we calculate the average height of all points in the ground and nonground parts of each level-2 line. Depending on the average heights, the third line type is split into two cases: (i) If the difference between the average height of the ground points and that of the nonground points is less than hmin, all points in this line are assigned the same label. The numbers of ground and nonground points are compared: if the ground points constitute the majority, all nonground labels are changed to ground labels; otherwise, the nonground labels are kept and the ground labels are changed. This produces a line of the first or second type. (ii) If the difference between the average heights of the ground points and nonground points is greater than hmin, the line is split into two smaller lines: the first contains only ground points and the second only nonground points. For lines of the fourth type, the same adjustment method is used as in case (i) above. In general, Lidar sensor data contain errors of a few centimeters; therefore, we define hmin so as to ignore the errors of the 3D range sensor.
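Cases (i) and (ii) can be sketched as follows. This is a simplified illustration under our own assumptions (the function name `reconcile_line`, the `(x, y, z)` point tuples with z as height, and the string labels are not from the original paper):

```python
def reconcile_line(points, labels, h_min=0.10):
    """Reduce a mixed (type-3) level-2 line to pure lines (sketch).

    points: list of (x, y, z) tuples; z is the point height.
    labels: parallel list of 'ground' / 'nonground' labels.
    h_min:  height tolerance covering sensor noise (a few cm).
    Returns a list of (points, label) pairs, each a pure line.
    """
    ground_h = [p[2] for p, l in zip(points, labels) if l == 'ground']
    nonground_h = [p[2] for p, l in zip(points, labels) if l == 'nonground']
    if not ground_h or not nonground_h:        # already pure (type 1 or 2)
        return [(points, labels[0])]
    avg_g = sum(ground_h) / len(ground_h)
    avg_n = sum(nonground_h) / len(nonground_h)
    if abs(avg_g - avg_n) < h_min:
        # case (i): height gap within sensor noise -> relabel to majority
        majority = 'ground' if len(ground_h) >= len(nonground_h) else 'nonground'
        return [(points, majority)]
    # case (ii): clear height gap -> split into two pure lines
    g_pts = [p for p, l in zip(points, labels) if l == 'ground']
    n_pts = [p for p, l in zip(points, labels) if l == 'nonground']
    return [(g_pts, 'ground'), (n_pts, 'nonground')]
```

A type-4 (alternating) line would be handled by the case (i) branch of the same function, since only the majority vote matters there.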
Label updating based on point-type distribution in horizontal direction
Processing in both directions
We conducted experiments to verify the proposed ground segmentation method in both qualitative and quantitative terms. Moreover, several state-of-the-art methods were used for comparison to demonstrate the effectiveness and high quality of the proposed approach.
For the experiments and analysis, datasets captured from a Lidar sensor (Velodyne HDL-32E, Velodyne Inc., Morgan Hill, CA, USA) were employed. The first dataset was obtained from a simple, flat terrain. The second dataset was obtained from sloping terrain, and the third dataset was taken from more complex regions. To evaluate the quantitative results, a set of ground truth data was constructed. For each dataset, we ran the proposed method and the fast ground segmentation method [26] and compared the results. We also compared the performance with other published methods that use the Velodyne HDL-32E sensor. The experiments were conducted using a PC equipped with an Intel Core i7-6700 3.4 GHz CPU and 16 GB RAM. We set dmin = 20 cm, gmax = 30°, hmin = 10 cm, and rmax = 0.7 according to the capabilities of the robot and sensor. For the first stage, the same parameters as in the experiments reported in [26] were used. These values do not depend on the features of the terrain. For other robots or Lidar sensors, different values can be chosen.
Quantitative results using ground truth data
[Table: quantitative results on the ground truth data, listing for each terrain type (flat; sloping and bumpy) the total number of frames, the TPR of fast ground segmentation [26] (%), and the TPR of the proposed method (%). The numeric entries were not recovered.]
The experimental results demonstrated that the proposed method can accurately separate the ground and nonground parts of a 3D point cloud at high speed. The proposed method can be applied to autonomous robots and remote-controlled systems. For example, object detection and tracking applications can operate on the nonground part. Furthermore, the 3D scene can be reconstructed in real time by applying different methods to the ground and nonground parts. In future work, we will modify the proposed ground segmentation method to improve its quantitative quality on various datasets.
In this paper, we proposed a novel ground segmentation method for Lidar point clouds that uses local coordinates to deal with each received data frame. The core ideas of this method are that the point cloud is not only processed along each vertical scanline, as in previous research, but along each horizontal scanline and in both directions simultaneously. The experimental results using this extended method indicate that our approach is fast and effective over both simple and complex terrain. In future work, we will extend the proposed method to other, more complex terrain types, and will further enhance the quality of the algorithm.
PMC, SC, and JP wrote the source code. SF contributed to the discussion and analysis of the results. KC provided full guidance. PMC wrote the paper. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Availability of data and materials
Consent for publication
Ethics approval and consent to participate
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (2018R1A2B2007934).
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Kang WM, Moon SY, Park JH (2017) An enhanced security framework for home appliances in smart home. Hum Centric Comput Inform Sci 7(6):1–12
2. Jo H, Yoon YI (2018) Intelligent smart home energy efficiency model using artificial TensorFlow engine. Hum Centric Comput Inform Sci 8(9):1–18
3. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of Things (IoT): a vision, architectural elements, and future directions. Fut Gen Comput Syst 29(7):1645–1660
4. Lee W, Cho S, Chu P, Vu H, Helal S, Song W, Jeong YS, Cho K (2016) Automatic agent generation for IoT-based smart house simulator. Neurocomputing 209:14–24
5. Kaur J, Kaur K (2017) A fuzzy approach for an IoT-based automated employee performance appraisal. Comput Mater Continua 53(1):23–36
6. Xiao B, Wang Z, Liu Q, Liu X (2018) SMK-means: an improved mini batch k-means algorithm based on mapreduce with big data. Comput Mater Continua 56(3):365–379
7. Zhao X, Wu J, Zhang Y, Shi Y, Wang L (2018) Fault diagnosis of motor in frequency domain signal by stacked de-noising auto-encoder. Comput Mater Continua 57(2):223–242
8. Khatamian A, Arabnia HR (2016) Survey on 3D surface reconstruction. J Inform Process Syst 12(3):338–357
9. Chu PM, Cho S, Fong S, Park YW, Cho K (2017) 3D reconstruction framework for multiple remote robots on cloud system. Symmetry 9(4):1–16
10. Chu PM, Cho S, Sim S, Kwak K, Cho K (2018) Multimedia system for real-time photorealistic nonground modeling of 3D dynamic environment for remote control system. Symmetry 10(4):1–15
11. Ren X, Malik J (2007) Tracking as repeated figure/ground segmentation. In: 2007 IEEE conference on computer vision and pattern recognition, Minneapolis, MN, USA, 17–22 June, pp 1–8
12. Li F, Kim T, Humayun A, Tsai D, Rehg JM (2013) Video segmentation by tracking many figure-ground segments. In: IEEE international conference on computer vision, Sydney, Australia, 1–8 December, pp 2192–2199
13. Kuettel D, Ferrari V (2012) Figure-ground segmentation by transferring window masks. In: 2012 IEEE conference on computer vision and pattern recognition (CVPR), Providence, RI, USA, 16–21 June, pp 558–565
14. Verma V, Kumar R, Hsu S (2006) 3D building detection and modeling from aerial Lidar data. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06), New York, NY, USA, 17–23 June, pp 2213–2220
15. Wallenberg M, Felsberg M, Forssén P (2011) Leaf segmentation using the Kinect. In: Proceedings SSBA'11 symposium on image analysis, Linköping, Sweden, 17–18 March
16. Tomori Z, Gargalik R, Hrmo I (2012) Active segmentation in 3D using Kinect sensor. In: Proc. int'l conference computer graphics visualization and computer vision, Plzen, Czech Republic, 25–28 June, pp 163–167
17. Hernández J, Marcotegui B (2009) Point cloud segmentation towards urban ground modeling. In: 2009 urban remote sensing event, Shanghai, China, 20–22 May, pp 1–5
18. Moosmann F, Pink O, Stiller C (2009) Segmentation of 3D Lidar data in non-flat urban environments using a local convexity criterion. In: IEEE intelligent vehicles symposium, Xi'an, China, 3–5 June, pp 215–220
19. Cho S, Kim J, Ikram W, Cho K, Jeong Y, Um K, Sim S (2014) Sloped terrain segmentation for autonomous drive using sparse 3D point cloud. Sci World J 2014:1–10
20. Lin X, Zhang J (2015) Segmentation-based ground points detection from mobile laser scanning point cloud. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, 2015 international workshop on image and data fusion, Hawaii, USA, 26 June, pp 99–102
21. Douillard B, Underwood J, Kuntz N, Vlaskine V, Quadros A, Morton P, Frenkel A (2011) On the segmentation of 3D LIDAR point clouds. In: IEEE international conference on robotics and automation, Shanghai, China, 9–13 May, pp 2798–2805
22. Hu X, Li X, Zhang Y (2013) Fast filtering of LiDAR point cloud in urban areas based on scan line segmentation and GPU acceleration. IEEE Geosci Rem Sens Lett 10(2):308–312
23. Wellington C, Courville A, Stentz A (2006) A generative model of terrain for autonomous navigation in vegetation. Int J Robot Res 25(12):1287–1304
24. Vo AV, Truong LH, Laefer DF, Bertolotto M (2015) Octree-based region growing for point cloud segmentation. ISPRS J Photogram Rem Sens 104:88–100
25. Zhang M, Morris DD, Fu R (2015) Ground segmentation based on loopy belief propagation for sparse 3D point clouds. In: 2015 international conference on 3D vision, Lyon, France, 19–22 October, pp 615–622
26. Chu PM, Cho S, Sim S, Kwak K, Cho K (2017) A fast ground segmentation method for 3D point cloud. J Inform Process Syst 13(3):491–499