Airborne remote sensing systems can carry multiple sensors on board. For example, LiDAR can be acquired concurrently with aerial oblique and nadir imagery. In this post, I am sharing a simple approach for integrating LiDAR point clouds and ortho-rectified images (aka orthophotos) to produce colored point clouds.
The basic concept is simple. We have a 3D point cloud and a 2D ortho-rectified raster image of the same geographical area. Both the point cloud and the orthophoto are georeferenced. That means every point in the point cloud ties to a real-world location (think latitude, longitude, elevation), and each pixel of the photo corresponds to a 2D square on the Earth's surface (aka a spatial extent). As the two datasets are both referenced to the Earth's surface, they are already linked, but in a somewhat implicit way. Our job is to materialize that linkage.
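To make the "each pixel corresponds to a square on the ground" idea concrete, here is a minimal sketch of how a georeferenced, north-up raster maps a pixel index to a ground extent. The function name and the 0.25 m pixel size are hypothetical; the corner coordinates echo the Dublin tile name used later.

```python
# Hypothetical sketch: map a pixel (row, col) to its ground extent,
# assuming a north-up raster (x grows with columns, y shrinks with rows).

def pixel_extent(row, col, origin_x, origin_y, pixel_size):
    """Return the (min_x, min_y, max_x, max_y) ground extent of a pixel."""
    min_x = origin_x + col * pixel_size
    max_y = origin_y - row * pixel_size
    return (min_x, max_y - pixel_size, min_x + pixel_size, max_y)

# Pixel (0, 0) of a raster whose top-left corner sits at (316000, 234000):
print(pixel_extent(0, 0, 316000.0, 234000.0, 0.25))
# -> (316000.0, 233999.75, 316000.25, 234000.0)
```

Real rasters carry this information in their geotransform metadata (e.g. a GeoTIFF's affine transform), which can also encode rotation; the no-rotation case above is the common one for orthophotos.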
Once the datasets are linked, we assign the color of each pixel to the points sharing that pixel's spatial extent to produce a colored point cloud. Establishing the linkage is easy: a point and a pixel are considered a corresponding pair if the point falls inside the pixel's extent. Notably, because of the dimensionality mismatch (2D vs. 3D), this "contains" test ignores the third dimension, so the approach is imperfect. An example of such imperfection will be seen in the next section, where the color of a tree crown is incorrectly mapped to the ground under the tree because both share the same horizontal extent.
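The point-to-pixel linkage can be sketched in a few lines of NumPy. This is an illustrative toy, not the FME implementation used below: it assumes a north-up raster with no rotation, and the function name and data are made up. Note that only x and y participate, which is exactly the dimensionality mismatch described above.

```python
import numpy as np

def colorize(points_xy, image, origin_x, origin_y, pixel_size):
    """Return per-point RGB colors taken from the pixel containing each point.

    points_xy: (N, 2) array of easting/northing coordinates.
    image:     (H, W, 3) RGB raster covering the same area.
    """
    # Invert the geotransform: world coordinates -> pixel indices.
    cols = ((points_xy[:, 0] - origin_x) // pixel_size).astype(int)
    rows = ((origin_y - points_xy[:, 1]) // pixel_size).astype(int)
    # Points outside the raster should really get no color; clipped for brevity.
    h, w = image.shape[:2]
    rows = np.clip(rows, 0, h - 1)
    cols = np.clip(cols, 0, w - 1)
    return image[rows, cols]

# Toy 2x2 image, 1 m pixels, top-left corner at (0, 2):
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
pts = np.array([[0.5, 1.5],   # falls in the top-left pixel
                [1.5, 0.5]])  # falls in the bottom-right pixel
print(colorize(pts, img, 0.0, 2.0, 1.0))
# -> [[255   0   0]
#     [255 255   0]]
```

A point at any elevation above the bottom-right pixel would receive that same yellow, which is how a ground point under a tree crown ends up with the crown's color.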
We can do such a data fusion with FME, a commercial software package for data translation and integration. You may be eligible for a free FME license if you are a student or a researcher. The input data and the FME workspace can be downloaded from this Github repo to follow along. The files were extracted from tile T_316000_234000 of the 2015 Dublin aerial data collection using CloudCompare and QGIS.
Simply put, a LiDAR point cloud is a collection of tuples of 3D coordinates (x, y, z) and some optional attributes (e.g. laser intensity, timestamp). Notably, LiDAR data do not natively contain colors. Unlike photogrammetric sensors, which often measure across a broad spectral range, a LiDAR sensor typically operates in one narrow spectral band (e.g. infrared or green). If you see a point cloud in natural colors, the data was likely produced by a photogrammetry-based technique (e.g. SfM), or the colors were integrated from another source.
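As a rough illustration of this "tuples plus optional attributes" structure, a tiny point cloud can be modeled as a NumPy structured array. The field names and values here are made up for illustration; they loosely mirror what a LAS file stores, and there is deliberately no color field.

```python
import numpy as np

# Illustrative only: (x, y, z) coordinates plus optional attributes,
# similar in spirit to LAS point records. Note the absence of RGB fields.
point_dtype = np.dtype([
    ("x", "f8"), ("y", "f8"), ("z", "f8"),      # 3D coordinates
    ("intensity", "u2"), ("gps_time", "f8"),    # optional attributes
])

cloud = np.array([
    (316000.12, 234000.34, 12.5, 1820, 150000.001),
    (316000.15, 234000.31, 12.4, 1795, 150000.002),
], dtype=point_dtype)

print(cloud["z"].mean())  # -> 12.45
```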
This typical raster photo is essentially an array of pixels. The photo contains 4 spectral bands, so each pixel has 4 numeric values: red (R), green (G), blue (B), and near-infrared (I). The visualization below shows a false color (I, R, G) version of the photo alongside a true color (R, G, B) version to expose all 4 bands. The photo is ortho-rectified, meaning it was processed to remove the distortion caused by terrain relief and camera tilt to produce a nadir view (looking straight down) for every pixel. The orthorectification applied to this photo is imperfect, as oblique, non-nadir views still remain, particularly toward the right side of the photo.
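The two renderings above come down to band shuffling. The sketch below uses a synthetic 4-band array in place of the real orthophoto (which you would normally load with a raster library such as rasterio); the band order (R, G, B, I) matches the description above.

```python
import numpy as np

# Synthetic stand-in for a 4-band (R, G, B, I) orthophoto, bands-first layout.
rgbi = np.random.randint(0, 256, size=(4, 100, 100), dtype=np.uint8)
R, G, B, I = rgbi  # unpack the four spectral bands

true_color = np.dstack([R, G, B])   # natural-looking rendering
false_color = np.dstack([I, R, G])  # NIR mapped to red, so vegetation pops

print(true_color.shape, false_color.shape)  # (100, 100, 3) twice
```

Mapping the near-infrared band to the red channel is the classic "color infrared" composite: healthy vegetation reflects strongly in NIR, so it shows up bright red.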
The image below shows the FME workspace that fuses the point cloud and the RGBI orthophoto to produce colored point clouds. Readers, transformers, and writers are the three types of components making up any FME workspace. The workspace below starts with two readers that load the point cloud (1) and the orthophoto (4) from the LAS and GeoTIFF files. The data (aka features in FME parlance) are passed through a series of transformers and then written to files by the two writers [i.e. (7a) and (7b)]. The role of each transformer is explained in the following:
Here is what comes out of the workspace. The first point cloud carries the first 3 bands of the orthophoto (RGB). The second point cloud has I, R, G as its colors; vegetation and some human-made objects appear in red.