Project outline
Between 30 August 2019 and 4 November 2019, uncrewed aerial
vehicle (UAV) missions were flown over the 15 NPP study sites
located in the Jornada Basin, southern New Mexico, USA. These
sites are associated with the Jornada Basin LTER program. During
each mission, which consisted of one or two separate flights,
approximately 450 to 1300 12.4-megapixel RGB images
were captured. Orthomosaic photos, digital elevation models (DEM),
digital terrain models (DTM), and other products have been derived
from these data. This data package contains sparse point clouds in
.las file format.
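For example, a sparse point cloud from this package can be inspected in
Python with the laspy library; the file name below is a placeholder for
any .las file in the package:

    import numpy as np
    import laspy

    # Read one of the sparse point clouds (placeholder file name)
    las = laspy.read("sparse_point_cloud.las")

    # Scaled, georeferenced coordinates of each tie point
    x = np.asarray(las.x)
    y = np.asarray(las.y)
    z = np.asarray(las.z)
    print(len(x), "points")
    print("elevation range (m):", z.min(), "-", z.max())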
Flight and payload information
A DJI Phantom 4 UAV with a 12.4 megapixel camera was used for all
missions. Metadata describing each mission associated with the included
data are provided in the "Flight_metadata.csv" file.
Important fields include flight start and end times, UAV pass
overlaps, UAV height, camera angle, camera sensor specification,
camera lens specifications, and number of images collected.
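As an illustration, the table can be loaded in Python with pandas; the
column names in the commented example are hypothetical, so inspect the
file's actual header first:

    import pandas as pd

    # Load the per-mission flight metadata
    flights = pd.read_csv("Flight_metadata.csv")
    print(flights.columns.tolist())  # the actual field names

    # Hypothetical summary once the real column names are known, e.g.:
    # print(flights.groupby("site")["n_images"].sum())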
Image post-processing
A subset of the individual images (.jpg files) collected during
missions at each site was loaded into Agisoft Metashape software,
which uses structure from motion (SFM) technology to generate the
sparse point clouds in this data package. These point clouds give
horizontal and vertical point coordinates of all ground surface
features derived from SFM image processing. Images were aligned
and dense clouds were generated at "high" quality unless
otherwise noted in the "SparsePointCloud_inventory.csv"
file included in this data package.
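For reference, a broadly equivalent workflow can be scripted with the
Metashape Python API; the sketch below assumes the 1.6-1.8 era API
(where quality settings are expressed as downscale factors) and is an
illustration, not the exact script used for this package:

    import Metashape

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # mission .jpg subset

    # "High" alignment quality corresponds to downscale=1
    chunk.matchPhotos(downscale=1, generic_preselection=True)
    chunk.alignCameras()             # produces the sparse (tie point) cloud

    # "High" dense cloud quality corresponds to depth map downscale=2
    chunk.buildDepthMaps(downscale=2)
    chunk.buildDenseCloud()

    # Export the sparse (tie point) cloud to .las
    chunk.exportPoints("sparse.las", source_data=Metashape.PointCloudData)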
Agisoft PhotoScan (now Metashape) uses a fairly common technique called
structure from motion (SFM), which relies on mathematical techniques to
recover the three-dimensional shape and appearance of objects in
imagery (Verhoeven, 2011). The exact algorithms used in the software
are proprietary and not publicly documented; however, a brief, general
description of the processing stages relevant to creating orthomosaics,
adapted from Semyonov (2011), is given below.
Feature matching across the photos
In the first stage, PhotoScan detects points in the source photos
which are stable under viewpoint and lighting variations and
generates a descriptor for each point based on its local
neighborhood. These descriptors are used later to detect
correspondences across the photos. This is similar to the well-known
SIFT approach, but uses different algorithms to achieve slightly higher
alignment quality.
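For illustration, the analogous SIFT pipeline can be written with
OpenCV; this shows the well-known approach that PhotoScan's proprietary
detector resembles, not Agisoft's own code:

    import cv2

    img1 = cv2.imread("IMG_0001.JPG", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("IMG_0002.JPG", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute a descriptor for each one
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors across the photos; Lowe's ratio test keeps only
    # matches clearly better than the second-best candidate
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(len(good), "putative correspondences")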
Solving for camera intrinsic and extrinsic orientation
parameters
PhotoScan uses a greedy algorithm to find approximate camera locations
and then refines them with a bundle-adjustment algorithm. According to
the developers, this approach has much in common with the Bundler
software (https://www.cs.cornell.edu/~snavely/bundler/), although the
two were not compared thoroughly (Semyonov, 2011).
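A toy version of the refinement step can be sketched with SciPy's
least-squares solver; the simple pinhole model and parameterization
below are illustrative assumptions, not Agisoft's or Bundler's
implementation:

    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx,
                               obs, f):
        # Unpack camera poses (rotation vector + translation) and points
        poses = params[:n_cams * 6].reshape(n_cams, 6)
        points = params[n_cams * 6:].reshape(n_pts, 3)
        # Transform each observed point into its camera's frame
        rot = Rotation.from_rotvec(poses[cam_idx, :3])
        p_cam = rot.apply(points[pt_idx]) + poses[cam_idx, 3:]
        # Pinhole projection with focal length f vs. observed pixels
        proj = f * p_cam[:, :2] / p_cam[:, 2:3]
        return (proj - obs).ravel()

    # cam_idx, pt_idx, and obs come from feature matching; x0 packs the
    # greedy initial estimates. Bundle adjustment then refines all
    # cameras and points jointly:
    # result = least_squares(reprojection_residuals, x0,
    #                        args=(n_cams, n_pts, cam_idx, pt_idx, obs, f))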
Dense surface reconstruction
The reconstruction process involves pair-wise depth map
computation, whereby the program recognizes geometries common
among different photographs and creates a surface reconstruction
based on how those geometries appear to change in photographs from
different positions.
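A rough analogue of the pair-wise depth step can be run with OpenCV's
semi-global matcher on a rectified image pair; Metashape's
dense-reconstruction algorithm is proprietary and differs in detail:

    import cv2

    left = cv2.imread("rectified_left.JPG", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("rectified_right.JPG", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching on a rectified pair
    sgbm = cv2.StereoSGBM_create(minDisparity=0,
                                 numDisparities=128,  # divisible by 16
                                 blockSize=5)
    disparity = sgbm.compute(left, right).astype("float32") / 16.0

    # With calibrated cameras, depth = focal_length * baseline /
    # disparity; fusing many such pair-wise maps yields the dense surface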
Texture mapping
The texture mapping process parameterizes a surface by dividing
the surface into smaller pieces, and then blends source photos to
form a texture atlas.
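The blending idea can be illustrated with a per-pixel weighted average
of overlapping source photos; real texture-atlas generation also
parameterizes the mesh into charts, which this sketch omits:

    import numpy as np

    def blend(patches, weights):
        # patches: list of HxWx3 image arrays; weights: list of HxW
        # arrays, e.g. favoring pixels viewed most directly
        num = sum(w[..., None] * p for p, w in zip(patches, weights))
        den = sum(weights)[..., None] + 1e-9  # avoid division by zero
        return num / den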
Post-processing and file metadata
Parameters used in processing software, as well as metadata
describing the derived sparse point clouds, are included in the
"SparsePointCloud_inventory.csv" file. Important
parameters used in post-processing with Agisoft software include
assumed camera accuracy, the marker accuracy for image
coordinates, photo alignment quality settings, and dense cloud
quality settings. In addition, file format, image resolution,
pixel size in ground area units, coordinate reference system, and
other metadata useful for re-using the point cloud data are
included in this file.
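For example, a point cloud file can be cross-checked against its
inventory metadata in Python (laspy 2.x with pyproj installed; file and
column names are placeholders):

    import laspy
    import pandas as pd

    inventory = pd.read_csv("SparsePointCloud_inventory.csv")
    print(inventory.columns.tolist())  # the actual metadata fields

    las = laspy.read("sparse_point_cloud.las")  # placeholder file name
    print(las.header.parse_crs())  # coordinate reference system, if stored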
Related data packages
The raw imagery collected during UAV missions is not yet publicly
available on EDI; it can be requested from the PIs. Other products
derived from these UAV missions are available at EDI, including:
Orthomosaic photos of the sites - knb-lter-jrn.210543001
Digital elevation models (terrain features including vegetation) -
knb-lter-jrn.210543003
Digital terrain models (terrain features excluding vegetation) -
knb-lter-jrn.210543003
References
Verhoeven, G., 2011. Taking computer vision aloft - archaeological
three-dimensional reconstructions from aerial photographs with
PhotoScan. Archaeological Prospection, 18, pp. 67-73.
Semyonov, D., 2011. Algorithms used in PhotoScan [Msg 2]. Agisoft
forum, www.agisoft.ru/forum/index.php?topic=89.0. Retrieved May 3,
2020.