The RPLidar range


Simon Ritchie

Oct 5, 2018, 3:55:48 AM
to lidar mapping
There are now lots of very cheap Lidar scanners designed for robotics and driverless cars.  Can they be re-purposed for mapping?

The RPLidar scanner is a good example of what’s available and affordable now.  It’s a range of devices manufactured by Slamtec, designed for Simultaneous Localisation And Mapping (SLAM) applications.  By mapping, they don’t mean geographical mapping: the original purpose was to sit on top of a robot vacuum cleaner and produce a floor plan of the surrounding room.  It’s a mechanical device, not solid state, using a laser sensor mounted on a spinning platform to scan the nearby objects in a circle.  It returns a list of angle and distance values.  By moving around the room, taking a series of scans from different points of view and merging them, the robot can produce an accurate two-dimensional map of the room that it's working in.

Prices range from $99 to $600, and different models offer different accuracy.  The A3 claims a range of 150mm – 25m, an angular resolution of 0.3 degrees and a distance error of less than 1%.  So, at 5m from an object the measurement can be up to 5cm out; if the object is further away, the error will be bigger.

The scanner connects to its host computer via a USB connection, so you don’t need any electronics skills to use it.  You do need programming skills.  Slamtec provide common driver software, so a program developed for one model can work with all the others.  Their driver and example programs are written in C++.  I’ve written a wrapper for the Go programming language so you can work in that too.  You can find a simple Go program at https://github.com/goblimey/rplidar_sdk_go that runs a scan and draws a floorplan.  The results are a bit rough and ready, but they show the potential of the scanner.
The host computer could be one of the cheap single-board computers such as a Raspberry Pi.

The attachments to this posting show my RPLidar working and the resulting floorplan.  The file setup.jpg shows the room in which I was working and floorplan.png shows the result.  In the photo, the scanner is sitting on a stack of bricks, connected via a USB cable to the laptop on the floor which is running my program.  The scanner is pointing directly at the wall opposite.  There’s a doorway in the wall and beyond that, the garden.  The wall to the left has a bigger doorway.  The wall to the right, which is out of the picture, has some building materials and tools stacked against it.  There is some scaffolding around the outside of the building.

Looking at the floorplan, it’s clear the scanner works best with hard, fairly continuous objects.  The wall in front of the scanner is represented quite well, but the scattered objects to the right and through the doorways are not.  This is as much the fault of my program as a limitation of the scanner.  There are some “artifacts” in the scan - junk caused by a defect in my program.  (More on that later.)

The purpose of this experiment was to test the accuracy of the scanner, so I placed some target objects carefully with a tape measure.  The wall opposite is 3m away from the scanner.  I put a stack of bricks in front of the wall with its front face 2.7m away from the scanner, and a tall, thin piece of wood to the right, 0.5m away.

In the floorplan you can see the basic shape of the room and the objects in it – the stack of bricks opposite, the piece of wood on the floor and the stuff stacked against the wall.  The walls are drawn at a slight angle because the scanner is not perfectly aligned with them.  Through the doorways you can see scattered objects where the scanner picked up readings from the scaffolding and from plants in the garden.  

This is how the system works:  the scanner spins and produces a list of measurements, each giving the angle and the distance to the nearest object at that angle, something like this:

    angle (deg)   distance (mm)
        0.4           2715.0
        1.5           2721.0
        2.6           2992.0
        3.7           3015.0
        4.8           3023.0
        6.0           3032.0
        7.1           3039.0
        and so on ...
 
The measurements follow a horizontal line running around the scene.  In this case the scanner made 315 measurements, each about 1.1 degrees apart.  In the first few the beam strikes the stack of bricks in front of the wall.  In the next few it strikes the wall behind.  The measurements are out by about 20mm, which is within the quoted accuracy of the scanner.

Some lidar scanners have a fairly wide beam in the vertical axis.  For an application like collision detection this is useful, but it means that your scan is not very precise.  If it detects an object, is it at the same height as the scanner, a little above, or a little below?

I tested this by removing bricks under the target and seeing when the scanner lost sight of it.  Not very scientific, but the vertical resolution appears to be about 1%, the same as the horizontal resolution.  For mapping, that’s what you want.  If you run two scans close together you want the scan lines to be tight.

Narrow objects can fall in between the beams.  At 0.5m the scanner detects the thin piece of wood reliably, but not always if I move it 3m away.  At that distance the scanner only reliably detects objects 60mm wide or more.  Moving the scanner and taking a lot of scans will compensate for this to an extent.

The scanner has limited range.  If the nearest object is too far away it doesn’t detect anything.  In that case it returns an invalid measurement of zero degrees and zero distance.  The list of measurements typically contains some valid values, followed by some invalid values, some more valid values, and so on.  Valid values represent an object.  Invalid values represent a gap between two objects.  You can see this effect when the beam goes out through a doorway.  At some angles it finds an object, at others it doesn’t and the result in the floorplan is scattered points.
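Splitting a scan into objects then becomes a matter of grouping consecutive valid readings.  A sketch in Go, assuming (as described above) that invalid readings come back with a distance of zero:

```go
// Split a scan into runs of valid measurements, one run per object.
package main

import "fmt"

// Measurement is a stand-in for the driver's real type.
type Measurement struct {
	AngleDeg   float64
	DistanceMM float64
}

// segments groups consecutive valid measurements.  Each group should
// correspond to one continuous object; the zero-distance gaps between
// groups are angles at which nothing was in range.
func segments(scan []Measurement) [][]Measurement {
	var result [][]Measurement
	var current []Measurement
	for _, m := range scan {
		if m.DistanceMM == 0 {
			// Invalid reading: close the current group, if any.
			if len(current) > 0 {
				result = append(result, current)
				current = nil
			}
			continue
		}
		current = append(current, m)
	}
	if len(current) > 0 {
		result = append(result, current)
	}
	return result
}

func main() {
	scan := []Measurement{
		{0, 2715}, {1.1, 2721}, {2.2, 0}, {3.3, 0}, {4.4, 2992},
	}
	for i, seg := range segments(scan) {
		fmt.Printf("object %d: %d measurements\n", i+1, len(seg))
	}
}
```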

The artifacts I mentioned earlier are caused by a flaw in the way I handle these data.   My program assumes that there will always be some invalid values in between two objects.  If one object is partly obscuring another, the scans are continuous, so the program draws a line connecting them.  I could fix this, but actually a better approach would be to produce a point cloud file and use one of the many ready-made tools to draw it.

The RPLidar A3 claims to work with objects as close as 150mm.  I’m pretty confident that you could build a 3D scanner by mounting the scanner on a frame, moving it over an object, taking scans at regular intervals and then merging them together.

On paper, you could use this scanner as a mapping instrument.  I’ll discuss that in more detail in my next posting.  (Spoiler:  drones are prone to crashes and the RPLidar looks a bit too delicate to survive many of those.  I don’t think it’s robust enough to work in a drone; you really want a solid-state device.  However, it’s a good proof of concept.)
setup.jpg
floorplan.png

Simon Ritchie

Oct 5, 2018, 4:13:43 AM
to lidar mapping

In an earlier posting I described the RPLidar scanner and claimed that, on paper at least, you could use it for mapping.

To survey a piece of ground, turn the scanner on its side and mount it under a drone.  Divide the ground into strips, say two metres wide each.  Run the drone backwards and forwards along the strips, making a series of scans as it moves.  Each scan will include part of the ground below it and the sky above it, plus maybe bits of the drone too.  You are only interested in the part which runs along the ground within the strip:


                                drone
                                 /\
                                /  \
                               /    \
                              /      \
                             /        \
                            /      ----------------- ground
                           /      |
                 -----------------
                           <---- 2m --->

Each scan produces a line of points across the ground.  If you know the scanner’s map reference and height above sea level, you can calculate the map reference and height of each point.  If the scanner is pointing vertically downwards, as in the left-hand picture below, it’s very simple: the point is directly below the scanner, and its height is the scanner’s height minus the measured distance.


        sensor               sensor
       vertical              tilted
          |                     \
     ^    |                      \
     |    |                       \
     |    |                        \
    10m   |                         \
     |    |                          \
     |    |                           \
     v    |                            \
          |                             \
  ------- | -------      --------|--------|-------------- ground
        target                 sensor   target
       position               position position


The right-hand picture above shows the more likely situation.   As the drone moves, it’s buffeted by the wind and it wobbles, so the scanner points in a different direction on each scan.   To calculate where those points are, you need to know which direction the scanner was pointing (its heading) and correct for that.  The accuracy of the result depends on the accuracy of all of the inputs to that calculation.


Lidar systems intended for mapping take care of all this.  They either incorporate devices to find very accurate position and heading values, or they hook up to the aircraft’s positioning system.   If you use a scanner meant for some other purpose, you will have to get hold of those data yourself and massage the results coming out of the scanner.  You can buy a pretty powerful single-board computer for a few dollars that is quite capable of doing this.  Assuming that the measurements are accurate enough, the problem just becomes a matter of writing some software.


You only get a single return from this device, so you can only create a Digital Surface Model (DSM).  To create a Digital Terrain Model (DTM) you need more than one return (at least the first and the last), but if those data are collected at all, they are lost inside the electronics.

With cheap equipment, accuracy of measurements could be a problem.  If you do a complicated calculation, any errors in the input data accumulate, so you want those data to be as accurate as possible.  This is one reason why the specialist mapping scanners are so expensive.  However, a drone can fly much closer to the ground than a plane, so you may be able to get away with less accuracy: 1% of 5m is a lot less than 0.1% of 500m.


A typical drone flight controller figures out its position and heading using all sorts of methods, including GNSS receivers (GPS and the other satellite constellations) plus devices like an accurate barometer that can detect small changes in height as the drone moves.  Position sensors these days can fix their position to within a few centimetres IF they stay in the same place for several seconds.  Something which is fixed to a drone can’t do that.  One way to deal with that problem is to use a base station on the ground in radio contact with a receiver in the drone.  The base station figures out exactly where it is and feeds corrections to its friend in the air using a Real Time Kinematic (RTK) protocol.


Base stations vary in accuracy and price.  You can easily pay $5,000 or more.  A lot of Lidar scanners assume that you have a base station, which is something else to consider when you’re looking at prices.


Emlid sell their Reach M+ for $265 (https://emlid.com/reach/).   It’s two bits of electronics each powered through a USB connection, providing a base station and a receiver for the drone.  The system claims to provide position data accurate to within 2cm in the horizontal and 5cm in the vertical.  You can power the base station part using a backup phone charger.  Put the whole thing in a plastic sandwich box to keep the rain out and you have a working base station.


Emlid also sell a ready-made base station, the Reach RS+ for $800 (https://emlid.com/reach/).  It’s basically a sturdy case with a ground spike, a battery and some electronics.  It may be the same electronics as the base station from the M+, but it looks more professional than a sandwich box.


Emlid’s position sensors work well with their Navio (https://emlid.com/navio/), an adjunct board that turns a Raspberry Pi into a flight controller, the central control unit of a drone.


My concept is to mount an RPLidar scanner in a drone, run a series of scans and write some software to use the position and heading data from the Navio to merge the scans and produce a point cloud.  You would need a computer to run the software and store the results.  You could use a second Raspberry Pi for that.  Assuming this setup could be made to work, it would be much cheaper than any commercial Lidar system I’ve seen so far.  It would also be fairly light so you could carry it on a cheaper drone.


The accuracy of the result would depend on the accuracy of the scanner and the positioning system.  Most of the UK Environment Agency’s data is only accurate to 1m, and to be useful this solution has to do better than that.

The RPLidar claims accuracy in the distance measurement of 1%.  Assuming it runs 5m above ground, that’s 5cm.  If the drone uses the Emlid M+ as its position sensor, it will know its height to within 5cm, so your readings could be out by 10cm from those two sources alone.  In truth, it will be worse than that: the height, position and heading data will be fed into a calculation, and that will amplify the errors.  I think it will be more accurate than 1m, but we will have to see.
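A rough worst-case budget just adds the error sources together, which is pessimistic but safe.  In Go, using the figures above (the 5m flying height is an assumption):

```go
// Back-of-the-envelope worst-case error budget for the drone setup.
package main

import "fmt"

// worstCaseError sums the error sources linearly: the scanner's
// distance error (a percentage of the flying height) plus the
// positioning system's error, both in metres.
func worstCaseError(flyingHeight, scannerAccuracy, positionError float64) float64 {
	return flyingHeight*scannerAccuracy + positionError
}

func main() {
	// 5m flying height is an assumption; 1% and 5cm are the RPLidar
	// spec and the Emlid M+ vertical claim quoted above.
	worst := worstCaseError(5.0, 0.01, 0.05)
	fmt.Printf("worst case error: %.0fcm\n", worst*100)
}
```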


This article discusses the same issue:  https://lidarnews.com/articles/drone-lidar-present-challenges/.  As the author says, you can’t just stick a scanner on a drone and hope it will be accurate enough.  You have to do the sums.


Actually, my main criticism of the RPLidar for this application is that I don’t think it’s robust enough.  Its high-precision spinning platform looks too delicate to respond well to being shaken up in a drone, or crashing into the ground.  However, it’s only one of many small cheap Lidar devices and the ideas described here apply to all of them.  It’s an interesting proof of concept and I will continue to experiment with it.

David Fairbairn

Nov 9, 2018, 4:27:32 PM
to lidar mapping
I've played around a bit with the RPLidar A1, and I agree that it certainly wouldn't be good enough to use with drones for anything particularly demanding (and it's too bulky to be used as a peripheral sensor for nearby obstacle detection). It seems mostly well-suited to hobby projects (I'm aiming to incorporate it into a robot at some point...), but could perhaps also be used for other smaller-scale scanning in a fixed setting.