Before you can extract raw data from your LiDAR captures, you'll need to enable developer mode on your device. For detailed instructions on how to access Developer Mode, see 'How to Access Developer Mode'.
You must be using the same device that originally captured the LiDAR data to export raw data.
Enable Developer Mode
1. Ensure Developer Mode is enabled. Once enabled, new LiDAR captures will include a Raw Export option.
2. Open your capture library and select a LiDAR capture you want to export.
3. Tap the export option (arrow icon) located on the right-hand side of the capture editor to open the export window.
4. Scroll to the bottom of the export options and select the Raw Data export.
5. Tap Export to download the file.
The export will download as a zip file. Once downloaded, you'll need to unzip the file to access the raw data contents.
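If you prefer to script this step, the extraction can be done with Python's standard library. This is a minimal sketch: the archive name `raw_export.zip` is a placeholder for your actual download, and the sample archive is created inline so the example runs end to end.

```python
# Sketch: unzip a raw LiDAR export using Python's standard library.
# "raw_export.zip" stands in for the file your device downloads; here we
# create a tiny placeholder archive so the example is self-contained.
import zipfile
from pathlib import Path

archive = Path("raw_export.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("mesh_info.json", "{}")  # placeholder member

out_dir = Path("raw_export")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)  # unpack everything into ./raw_export

print(sorted(p.name for p in out_dir.iterdir()))
```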
What’s Included in the File?
The raw data folder can include the following:
Keyframes folder
Cameras
This JSON file is a compact way to store both camera information and environmental data for each frame or scan. It enables 3D reconstruction, accurate measurements, and proper alignment across multiple captures.
Image Dimensions
width and height - The resolution of the captured image in pixels.
"width": 1024,
"height": 768
Timestamp
timestamp
- A numerical identifier marking when the frame or scan was captured. Used to order frames or sync data, but may not represent real-world time unless tied to a specific internal clock.
"timestamp": 638443502
Image Sharpness
blur_score
- Reflects how blurry or sharp the image is. Higher values indicate more blur, while lower values suggest a sharper image.
"blur_score": 115.28
Camera Intrinsic Parameters
These values describe the internal characteristics of the camera lens:
- fx and fy - The focal length of the camera in pixels along the x and y axes
- cx and cy - The principal point (optical center) of the camera image in pixel coordinates, typically near the center of the image
Together, these values form the camera intrinsic matrix:
[fx 0 cx]
[0 fy cy]
[0 0 1]
Example:
"fx": 712.3738,
"fy": 712.3738,
"cx": 516.26404,
"cy": 384.30533
Camera Pose (Extrinsics)
t_00
to t_23
- These values describe the camera's position and orientation in 3D space. They form a 3×4 transformation matrix - the top three rows of a 4×4 homogeneous transform, with an implied bottom row of [0 0 0 1] - that defines how to convert points between world space and camera space.
The numbers are labeled row-wise:
[t_00 t_01 t_02 t_03]
[t_10 t_11 t_12 t_13]
[t_20 t_21 t_22 t_23]
This structure is critical for understanding how 3D data is aligned and projected.
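The row-wise layout above maps directly onto nested lists. A minimal sketch of applying the pose to a point; the matrix values here are an illustrative identity rotation with a 1-meter translation along z, not values from a real export.

```python
# Sketch: apply a 3x4 pose matrix (row-major, as labeled above) to a point.
# Illustrative values: identity rotation plus a 1 m translation along z.
pose = [
    [1.0, 0.0, 0.0, 0.0],  # [t_00 t_01 t_02 t_03]
    [0.0, 1.0, 0.0, 0.0],  # [t_10 t_11 t_12 t_13]
    [0.0, 0.0, 1.0, 1.0],  # [t_20 t_21 t_22 t_23]
]

def transform(point, m):
    """Apply a 3x4 row-major transform to an (x, y, z) point."""
    x, y, z = point
    return tuple(row[0] * x + row[1] * y + row[2] * z + row[3] for row in m)

print(transform((0.0, 0.0, 2.0), pose))  # -> (0.0, 0.0, 3.0)
```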
Additional Fields
manual_keyframe
- A boolean indicating whether this frame was manually selected as a keyframe (used in keyframe-based scanning or tracking).
"manual_keyframe": false
angular_velocity
- A measurement of how fast the camera was rotating when the frame was captured. Often used for motion tracking. May contain a placeholder value (3.4028235e+38, the maximum 32-bit float) if unused or not captured.
center_depth
- The depth (distance) from the camera to the center of the image, typically measured in meters.
"center_depth": 2.93
This means the object in the center of the image is 2.93 meters away from the camera.
Confidence Images
A confidence map (or confidence image) is commonly used in LiDAR scanning and depth sensing to represent how reliable or accurate each pixel's depth value is.
- The image is grayscale (or pseudocolored), and each pixel's brightness indicates the confidence of the depth measurement at that location.
- White = High confidence - These areas have depth values that the LiDAR sensor is very sure about.
- Black = Low confidence - These pixels have depth values that are likely unreliable or noisy (e.g., due to distance, reflectivity, or occlusion).
- Gray values = Medium confidence - These might still be usable but should be treated with caution for critical measurements.
Depth Images
A depth image encodes how far each pixel in the image is from the camera or LiDAR sensor. Each pixel's brightness (grayscale value) corresponds to a depth value:
- White pixels = far away
- Black pixels = very close
- Grayscale = in between
Mesh Info
This file contains metadata about a 3D mesh generated in Polycam (or a similar platform), including its geometry, location, and alignment.
What's Inside mesh_info.json
| Key | Meaning |
|---|---|
| vertexCount | Number of vertices (points) in the mesh. |
| faceCount | Number of triangle faces that make up the mesh surface. |
| totalArea | Total surface area of the mesh in square meters. |
| bboxSize | The width, height, and depth of the bounding box containing the mesh. Format: [x, y, z] in meters. |
| bboxCenter | Center point of the bounding box. Often [0, 0, 0] when unshifted. |
| alignmentTransform | A 4×4 matrix (flattened) describing how the mesh is aligned in space. This matrix includes rotation, translation, and possibly scaling. Useful for reconstructing global orientation. |
| xPlusArea, xMinusArea, yMinusArea, zPlusArea, zMinusArea, horizontalUpArea | Surface area (in m²) of the mesh portions facing a certain direction. Helps analyze wall, floor, ceiling, or terrain coverage. |
| georeferenceLatitude / Longitude / Altitude | The real-world location (in degrees and meters) where the mesh was captured or aligned. In your file: Latitude 41.721196, Longitude -81.353154, Altitude 183.56 m. |
| georeferenceRotation | The rotation (in radians) applied to align the model with the real-world compass. |
| yAlignmentRotation | Indicates how the Y-axis of the mesh was rotated during alignment (likely relative to gravity). |
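The fields above are plain JSON, so a few summary figures fall out directly. A minimal sketch: the numeric values are invented for illustration, and in practice you would json.load() the mesh_info.json file from the unzipped export instead of parsing an inline string.

```python
# Sketch: read mesh_info.json fields and derive summary figures.
# The values are written inline so the example is self-contained; in
# practice, json.load() the file from the unzipped export.
import json

mesh_info = json.loads("""{
    "vertexCount": 25000,
    "faceCount": 48000,
    "totalArea": 36.4,
    "bboxSize": [4.1, 2.6, 3.8]
}""")

# Average triangle area in square meters.
avg_face_area = mesh_info["totalArea"] / mesh_info["faceCount"]
# Bounding-box volume in cubic meters.
x, y, z = mesh_info["bboxSize"]
bbox_volume = x * y * z

print(round(avg_face_area, 6), round(bbox_volume, 3))
```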