LiDAR and orthophoto fusion

a simple fusion of 3D point clouds and 2D raster data

October 26, 2018

Airborne remote sensing systems can carry multiple sensors on board. For example, LiDAR can be acquired concurrently with aerial oblique and nadir imagery. In this post, I am sharing a simple approach for integrating LiDAR point clouds and ortho-rectified images (aka orthophotos) to produce colored point clouds.

Basics

The basic concept is simple. We have a 3D point cloud and a 2D ortho-rectified raster image of the same geographical area. Both the point cloud and the orthophoto are georeferenced. That means every point in the point cloud ties to a real-world location (think latitude, longitude, elevation), and each pixel of the photo corresponds to a 2D square on the Earth's surface (aka a spatial extent). As the two datasets are both referenced to the Earth's surface, they are already linked, but in a somewhat implicit way. Our job is to materialize that linkage.
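To make that implicit linkage concrete: a georeferenced raster carries an affine transform that maps pixel indices to world coordinates. A quick inspection with the rasterio library shows the pieces involved (the file name here is a placeholder, not the actual repo file):

```python
import rasterio

# Hypothetical file name; substitute the orthophoto from the repo.
with rasterio.open("orthophoto.tif") as src:
    print(src.crs)       # coordinate reference system, e.g. EPSG:29903
    print(src.bounds)    # spatial extent of the whole image
    # The affine transform maps pixel (col, row) to world (x, y);
    # (0, 0) is the upper-left corner of the upper-left pixel.
    x, y = src.transform * (0, 0)
    print(x, y)
```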

Figure: the input point cloud and raster.

Once we link the datasets, we will assign the color of each pixel to the points sharing that pixel's spatial extent to produce a colored point cloud. Establishing the linkage is easy: a point and a pixel are considered a corresponding pair if the point is contained inside the pixel's extent. Notably, because of the dimensionality mismatch (2D vs. 3D), this "contain" operation ignores the third dimension, so the approach is imperfect. An example of this imperfection appears in the next section, where the color of a tree crown is incorrectly mapped to the ground under the tree because both share the same horizontal extent.
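In code, the "contain" test amounts to inverting the raster's affine transform to turn each point's (x, y) into a (row, col) pixel index, then reading that pixel's values. A minimal sketch with the laspy and rasterio libraries (the file paths are placeholders) could look like this:

```python
import numpy as np
import laspy
import rasterio

def sample_colors(las_path, raster_path):
    """Return, for each point inside the raster, the values of the containing pixel."""
    las = laspy.read(las_path)
    xs, ys = np.asarray(las.x), np.asarray(las.y)

    with rasterio.open(raster_path) as src:
        # Invert the affine transform: world (x, y) -> fractional (col, row),
        # then floor to get the index of the containing pixel.
        cols, rows = ~src.transform * (xs, ys)
        cols = np.floor(cols).astype(int)
        rows = np.floor(rows).astype(int)

        # Keep only points that fall inside the raster extent.
        inside = (rows >= 0) & (rows < src.height) & (cols >= 0) & (cols < src.width)

        bands = src.read()                                 # (band_count, height, width)
        colors = bands[:, rows[inside], cols[inside]].T    # (n_points_inside, band_count)
    return colors, inside
```

Note that z never enters the test, which is exactly why the tree crown's color bleeds onto the ground beneath it.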

Hands-on

We can do such a data fusion using FME, a commercial software package for data translation and integration. You may be eligible for a free FME license if you are a student or a researcher. You can download the input data and the FME workspace from this GitHub repo to follow along. The files were extracted from tile T_316000_234000 of the 2015 Dublin aerial data collection using CloudCompare and QGIS.


Input point cloud

Simply put, a LiDAR point cloud is a collection of tuples of 3D coordinates (x, y, z) and some optional attributes (e.g. laser intensity, timestamp). Notably, LiDAR data do not natively contain colors. Unlike photogrammetric cameras, which often measure across a broad spectral range, a LiDAR sensor typically operates on one narrow spectral band (e.g. infrared or green). If you see a point cloud in natural colors, the data were either produced by a photogrammetry-based technique (e.g. SfM) or the colors were integrated from another source.
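You can verify this for the input file with laspy: listing the point dimensions shows coordinates and attributes, but no color channels. The file name below is a placeholder for the extracted Dublin tile:

```python
import laspy

las = laspy.read("T_316000_234000.las")        # hypothetical file name
print(las.header.point_count)                  # number of points in the file
print(list(las.point_format.dimension_names))
# -> ['X', 'Y', 'Z', 'intensity', ...] but no 'red'/'green'/'blue'
print(las.x[:5], las.y[:5], las.z[:5])         # scaled, real-world coordinates
```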


Input ortho-rectified photo

This raster photo is essentially an array of pixels. The photo contains 4 spectral bands, so each pixel has 4 numeric values: red (R), green (G), blue (B), and near-infrared (I). The visualization below shows a false color (I, R, G) version of the photo on the left and a true color (R, G, B) version on the right to expose all 4 bands. The photo is ortho-rectified, meaning it was processed to remove the distortions caused by terrain relief and the camera perspective so that every pixel presents a nadir view (looking straight down). The orthorectification applied to this photo is imperfect, as oblique, non-nadir views still exist, particularly towards the right side of the photo.


Overlaying the CIR image (left) on the RGB image (right).
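If you want to reproduce the two composites outside FME, they are just different orderings of the same four bands. Here is a small sketch with rasterio and numpy; the file name and the R, G, B, NIR band order are assumptions:

```python
import numpy as np
import rasterio

with rasterio.open("orthophoto.tif") as src:   # hypothetical file name
    r, g, b, i = src.read()                    # assumes band order R, G, B, NIR

true_color  = np.dstack([r, g, b])   # natural colors
false_color = np.dstack([i, r, g])   # CIR composite: vegetation shows up red
```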

Data fusion workflow

The image below shows the FME workspace that fuses the point cloud and the RGBI orthophoto to produce colored point clouds. Readers, transformers, and writers are the 3 types of components making up any FME workspace. The workspace below starts with 2 readers that bring the point cloud (1) and the orthophoto (4) from the LAS and GeoTIFF files into the workspace. The data (aka features in FME language) are passed through a series of transformers and then written to files by the two writers [i.e. (7a) and (7b)]. The role of each transformer is explained below; a rough Python equivalent of the whole workflow is sketched after the list:

Figure: the FME workspace.

  • (2) Define a coordinate system (EPSG:29903) for the point cloud using a CoordinateSystemSetter. This is needed for the point cloud because the FME LAS reader does not read the coordinate system information from the file header. The setter is not needed for the orthophoto since FME can read the coordinate system from the GeoTIFF file.
  • (3) Extract the LAS file name with a FilenamePartExtractor, as we want to name the output files after this input LAS file.
  • (5a&b) Extract and order the raster bands from the orthophoto. For more information about multispectral data manipulation, please check out this post.
  • (6a&b) Use a PointCloudOnRasterComponentSetter to integrate the color data from the photo with the point data. This uses exactly the approach described earlier.
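For readers without an FME license, here is a rough Python equivalent of the workspace, sketched with laspy and rasterio. The file names and the R, G, B, NIR band order are assumptions, and step (2) is glossed over: the script simply assumes both inputs already share the EPSG:29903 coordinate system. The numbered comments mirror the workspace components:

```python
import numpy as np
import laspy
import rasterio

IN_LAS, IN_TIF = "T_316000_234000.las", "orthophoto.tif"   # hypothetical names

las = laspy.read(IN_LAS)                        # (1) read the point cloud
xs, ys = np.asarray(las.x), np.asarray(las.y)   # (2) CRSs assumed to match (EPSG:29903)

with rasterio.open(IN_TIF) as src:              # (4) read the orthophoto
    cols, rows = ~src.transform * (xs, ys)      # world (x, y) -> pixel (col, row)
    rows, cols = np.floor(rows).astype(int), np.floor(cols).astype(int)
    ok = (rows >= 0) & (rows < src.height) & (cols >= 0) & (cols < src.width)
    r, g, b, i = src.read()                     # (5) split the bands; R, G, B, NIR assumed

def write_colored(path, b1, b2, b3):
    out = laspy.convert(las, point_format_id=3)     # point format 3 stores 16-bit RGB
    for name, band in zip(("red", "green", "blue"), (b1, b2, b3)):
        channel = np.zeros(len(xs), dtype=np.uint16)
        # (6) take the containing pixel's value; scale 8-bit values to LAS's 16-bit range
        channel[ok] = band[rows[ok], cols[ok]].astype(np.uint16) * 256
        setattr(out, name, channel)
    out.write(path)                             # (7) write the colored cloud

write_colored("colored_rgb.las", r, g, b)       # (7a) RGB output
write_colored("colored_cir.las", i, r, g)       # (7b) CIR output
```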

Resulting colored point cloud

Here is what comes out of the workspace. The first point cloud carries the first 3 bands of the orthophoto (RGB). The second point cloud uses I, R, G as its colors; in this false color rendering, the vegetation and some human-made objects appear in red.

Figure: the RGB-colored point cloud.

Figure: the CIR-colored point cloud.

Thoughts

  • The orthorectification of this dataset needs improvement.
  • Orthophotos are an imperfect source for this 3D-2D data fusion, as vertical structures (e.g. building façades) are by design absent from the data. Oblique images could be a better source of color for such data integration.