Extract comprehensive raw data from your LiDAR captures to enable advanced 3D reconstruction, precise measurements, and custom processing workflows. Access depth maps, camera parameters, mesh information, and more for professional analysis and development.
Prerequisites
Before you can extract raw data from your LiDAR captures, you'll need to enable Developer Mode on your device. For detailed instructions, see How to Access Developer Mode.
How to Export Raw Data
Follow these steps to export raw data from your LiDAR captures:
1. Enable Developer Mode
Ensure that Developer Mode is enabled on your device. This must be done before capturing to enable raw data export functionality.
2. Open Your Capture
Navigate to your capture library and select the LiDAR capture you want to export.
3. Access Export Options
Tap the export option (arrow icon) located on the right-hand side of the capture editor to open the export window.
4. Select Raw Data Export
Scroll to the bottom of the export options and select Raw Data export.
5. Download the File
Tap Export to download the file. The export downloads as a zip file; once downloaded, unzip it to access the raw data contents.
What's Included in the Raw Data Export?
The raw data export includes several folders and files containing detailed information about your LiDAR capture. Below is a comprehensive guide to understanding each component.
Cameras (JSON File)
This JSON file provides a compact way to store both camera information and environmental data for each frame or scan. It enables 3D reconstruction, accurate measurements, and proper alignment across multiple captures.
Image Dimensions
`width` and `height` - The resolution of the captured image in pixels.

```json
{
  "width": 1024,
  "height": 768
}
```
Timestamp
`timestamp` - A numerical identifier marking when the frame or scan was captured. Used to order frames or sync data, but it may not represent real-world time unless tied to a specific internal clock.

```json
{
  "timestamp": 638443502
}
```
Image Sharpness
`blur_score` - Reflects how blurry or sharp the image is. Higher values indicate more blur, while lower values suggest a sharper image.

```json
{
  "blur_score": 115.28
}
```
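As a practical use of this field, frames can be filtered by sharpness before feeding them into a reconstruction pipeline. A minimal sketch in Python; the threshold of `100.0` is an illustrative value, not an official cutoff:

```python
def sharp_frames(frames, max_blur=100.0):
    """Return only frames whose blur_score is at or below max_blur."""
    return [f for f in frames if f.get("blur_score", 0.0) <= max_blur]

# Two example frames: the first is too blurry at the default threshold.
frames = [
    {"timestamp": 638443502, "blur_score": 115.28},
    {"timestamp": 638443503, "blur_score": 42.10},
]
print(sharp_frames(frames))  # keeps only the second, sharper frame
```

Raising `max_blur` trades reconstruction quality for more usable frames.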
Camera Intrinsic Parameters
These values describe the internal characteristics of the camera lens:
- `fx` and `fy` - The focal length of the camera in pixels along the x and y axes
- `cx` and `cy` - The principal point (optical center) of the camera image in pixel coordinates, typically near the center of the image
Together, these values form the camera intrinsic matrix:
```
[fx  0 cx]
[ 0 fy cy]
[ 0  0  1]
```
Example:

```json
{
  "fx": 712.3738,
  "fy": 712.3738,
  "cx": 516.26404,
  "cy": 384.30533
}
```
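To see how these fields are used, here is a minimal sketch of the standard pinhole projection, mapping a camera-space 3D point to pixel coordinates with the intrinsic values from the example above:

```python
# Intrinsics taken from the example JSON above.
intr = {"fx": 712.3738, "fy": 712.3738, "cx": 516.26404, "cy": 384.30533}

# The intrinsic matrix K assembled from those fields.
K = [
    [intr["fx"], 0.0,        intr["cx"]],
    [0.0,        intr["fy"], intr["cy"]],
    [0.0,        0.0,        1.0],
]

def project(point, intr):
    """Pinhole projection: camera-space (x, y, z) -> pixel (u, v)."""
    x, y, z = point
    u = intr["fx"] * x / z + intr["cx"]
    v = intr["fy"] * y / z + intr["cy"]
    return u, v

# A point 2 m straight ahead of the camera lands on the principal point.
u, v = project((0.0, 0.0, 2.0), intr)
```

Inverting this mapping (pixel plus depth back to a 3D point) is the basis for turning the depth images below into point clouds.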
Camera Pose (Extrinsics)
`t_00` to `t_23` - These twelve values describe the camera's position and orientation in 3D space. They form the top three rows of a 4×4 transformation matrix that converts points between world space and camera space; the bottom row `[0 0 0 1]` is implied.

The numbers are labeled row-wise:

```
[t_00 t_01 t_02 t_03]
[t_10 t_11 t_12 t_13]
[t_20 t_21 t_22 t_23]
```

This structure is essential for understanding how 3D data is aligned and projected.
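A short sketch of how these fields might be assembled and applied in Python, assuming the frame's JSON object exposes the twelve values under keys named exactly `t_00` through `t_23`:

```python
def pose_matrix(frame):
    """Assemble the 4x4 pose from the twelve t_ij fields (row-wise)."""
    rows = [[frame[f"t_{i}{j}"] for j in range(4)] for i in range(3)]
    rows.append([0.0, 0.0, 0.0, 1.0])  # implied bottom row of a rigid transform
    return rows

def apply(matrix, point):
    """Apply a 4x4 transform to a 3D point via homogeneous coordinates."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    return tuple(sum(row[k] * vec[k] for k in range(4)) for row in matrix[:3])

# Hypothetical pose: identity rotation, translated 1.5 m along z.
frame = {f"t_{i}{j}": (1.0 if i == j else 0.0) for i in range(3) for j in range(4)}
frame["t_23"] = 1.5
print(apply(pose_matrix(frame), (0.0, 0.0, 0.0)))  # -> (0.0, 0.0, 1.5)
```

Combining this pose with the intrinsic matrix above gives the full world-to-pixel projection for a frame.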
Additional Fields
`manual_keyframe` - A boolean indicating whether this frame was manually selected as a keyframe (used in keyframe-based scanning or tracking).

```json
{
  "manual_keyframe": false
}
```
`angular_velocity` - A measurement of how fast the camera was rotating when the frame was captured, often used for motion tracking. May contain a placeholder value (`3.4028235e+38`) if unused or not captured.

`center_depth` - The depth (distance) from the camera to the center of the image, typically measured in meters.

```json
{
  "center_depth": 2.93
}
```

This means the object in the center of the image is 2.93 meters away from the camera.
Confidence Images
A confidence map or confidence image is commonly used in LiDAR scanning and depth sensing to represent how reliable or accurate each pixel's depth value is.
Interpreting Confidence Values
- White = High confidence. These areas have depth values that the LiDAR sensor is very sure about.
- Black = Low confidence. These pixels have depth values that are likely unreliable or noisy (e.g., due to distance, reflectivity, or occlusion).
- Gray values = Medium confidence. These might still be usable but should be treated with caution for critical measurements.
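In practice, the confidence map is used to discard unreliable depth readings before taking measurements. A minimal sketch, assuming both images are already decoded into nested lists of per-pixel values and that confidence uses an 8-bit grayscale (255 = white = high); the threshold of 128 is illustrative, not an official cutoff:

```python
def mask_depth(depth, confidence, min_conf=128):
    """Replace depth values whose confidence falls below min_conf with None."""
    return [
        [d if c >= min_conf else None for d, c in zip(d_row, c_row)]
        for d_row, c_row in zip(depth, confidence)
    ]

# Tiny 2x2 example: depths in meters, confidence as 8-bit grayscale.
depth      = [[2.93, 3.10], [4.05, 0.87]]
confidence = [[255,  64],   [200,  10]]
print(mask_depth(depth, confidence))  # -> [[2.93, None], [4.05, None]]
```

For critical measurements you might raise `min_conf` so only white (fully confident) pixels survive.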
Depth Images
A depth image encodes how far each pixel in the image is from the camera or LiDAR sensor. Each pixel's brightness (grayscale value) corresponds to a depth value.
Understanding Depth Encoding
- White pixels = far away
- Black pixels = very close
- Grayscale = in between
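Recovering metric distances from these grayscale values requires knowing how the export encodes them. A minimal sketch assuming a simple linear mapping between a near and a far plane; the bit depth, units, and `near`/`far` values here are placeholders, since the actual encoding may differ:

```python
def pixel_to_meters(value, near=0.0, far=5.0, max_value=255):
    """Linearly map a grayscale value (0 = black = near, max = white = far)
    to a distance in meters between the assumed near and far planes."""
    return near + (value / max_value) * (far - near)

print(pixel_to_meters(0))    # -> 0.0  (black = very close)
print(pixel_to_meters(255))  # -> 5.0  (white = far plane, assumed at 5 m)
```

If the export instead stores raw 16-bit depth (e.g., millimeters), the conversion is a straight unit scale rather than a near/far remap.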
Mesh Info (JSON File)
This file contains metadata about a 3D mesh generated from your LiDAR capture, including its geometry, location, and alignment.
Mesh Information Fields
| Key | Meaning |
|---|---|
| `vertexCount` | Number of vertices (points) in the mesh. |
| `faceCount` | Number of triangle faces that make up the mesh surface. |
| `totalArea` | Total surface area of the mesh in square meters. |
| `bboxSize` | The width, height, and depth of the bounding box containing the mesh. Format: `[x, y, z]` in meters. |
| `bboxCenter` | Center point of the bounding box. Often `[0, 0, 0]` when unshifted. |
| `alignmentTransform` | A 4×4 matrix (flattened) describing how the mesh is aligned in space. This matrix includes rotation, translation, and possibly scaling. Useful for reconstructing global orientation. |
| `xPlusArea`, `xMinusArea`, `yMinusArea`, `zPlusArea`, `zMinusArea`, `horizontalUpArea` | Surface area (in m²) of the mesh portions facing a certain direction. Helps analyze wall, floor, ceiling, or terrain coverage. |
| `georeferenceLatitude` / `Longitude` / `Altitude` | The real-world location (in degrees and meters) where the mesh was captured or aligned. |
| `georeferenceRotation` | The rotation (in radians) applied to align the model with the real-world compass. |
| `yAlignmentRotation` | Indicates how the Y-axis of the mesh was rotated during alignment (likely relative to gravity). |
Example georeference values:
- Latitude: 41.721196
- Longitude: -81.353154
- Altitude: 183.56 m
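Putting the mesh info fields to work, here is a minimal sketch that parses the JSON and derives a couple of simple metrics. The field names follow the table above, but the sample values are illustrative, not from a real export:

```python
import json

# Illustrative mesh info payload using the documented field names.
mesh_info_text = """
{
  "vertexCount": 52100,
  "faceCount": 98644,
  "totalArea": 36.4,
  "bboxSize": [4.2, 2.6, 5.1],
  "bboxCenter": [0, 0, 0]
}
"""

info = json.loads(mesh_info_text)
x, y, z = info["bboxSize"]
bbox_volume = x * y * z                              # m^3 of the bounding box
triangles_per_m2 = info["faceCount"] / info["totalArea"]  # mesh density
```

Metrics like triangle density are a quick way to compare the level of detail across captures before opening them in a 3D tool.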