
As a geospatial developer, a lot of what I do revolves around describing the physical world in a digital way. There are many ways to do this. Consider a city: a country-scale map may distill a city into a single point. We might represent the same city as a polygon delineating the city limits. Further refinement may show the road network, building footprints, points of interest, and so on. We are also able to add a vertical dimension by extruding gridded elevation values across a surface, representing ground elevation values or the top of real features occurring above the ground (e.g. buildings and trees), so the city representation may roll across the landscape and obscure the horizon in a more realistic way. Such elevation rasters are generally constrained to one value per pixel, but we can achieve even more detail with point clouds.

Point clouds contain XYZ coordinates (and optional additional attributes) indicating locations where the sensor determines that there is something approximately solid. In aggregate, point clouds are able to show oblique details that elevation rasters cannot. In most cases, the sensor producing a point cloud is some form of LiDAR[1]: the sensor emits a laser pulse and determines the distance(s) encountered by the pulse from the time(s) it takes to reflect and return to the sensor (multiple partial returns are possible). These depth sensors have existed for decades, but have historically been expensive, single-purpose pieces of equipment. In recent years, however, LiDAR sensors have become more accessible to the general public; notable advances include the releases of the Kinect controller in 2010 and the iPhone 12 Pro/iPad Pro in 2020 (the iPhone 13 followed in 2021).
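The time-of-flight principle described above boils down to one formula: the pulse travels to the surface and back, so the one-way range is the speed of light times the return time, divided by two. A minimal sketch (the 33 ns figure is my own illustrative example, chosen to land near the iPhone sensor's approximate range):

```python
# Sketch of the LiDAR time-of-flight principle: the sensor measures the
# round-trip time of a laser pulse, so the one-way range is c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_return_time(seconds: float) -> float:
    """One-way distance to the reflecting surface, in meters."""
    return C * seconds / 2.0

# A return after ~33 nanoseconds corresponds to roughly 5 m -- about the
# stated maximum range of the iPhone 12 Pro's LiDAR sensor.
print(round(range_from_return_time(33e-9), 2))  # -> 4.95
```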

How to use a mobile phone’s LiDAR to create a 3D model and display it on a map

Naturally, I was interested in the prospect of collecting 3D data using mobile LiDAR. When the opportunity presented itself, I obtained an iPhone 12 Pro and set out to explore the 3D mobile mapping space. I learn best when I have a goal in mind – for this post my goal was: create a workflow that results in a 3D model on a map, as easily as possible, as cheaply as possible.

My initial workflow looked like the diagram above, and this blog post aims to fill in the blanks for each step:

Collecting and scanning 3D data using the iPhone 12 Pro

To my knowledge, the iPhone does not create 3D models out of the box. Luckily, there is an app for that. Actually, there are now many apps for that. I did not perform an exhaustive test of all scanner apps, so I will not provide a critical app review here. For this project I tested Polycam and Metascan amongst others, and they are both capable of producing high quality scans in a variety of output formats (several point cloud and mesh options) at a reasonable cost (free tier, plus optional monthly pro plans for additional output options/processing), so they tick all my boxes.

The process of collecting LiDAR is fairly consistent between scanner apps: start a scan, scan the object/scene through a camera viewer, and stop the scan. When the scan is complete, most scanner apps provide a low resolution representation of the scene. If it looks good, you’re done until the next step. If not, delete and try again.

Example scan. (scanner app: Metascan)


A few tips for scanning:

  • Avoid any sudden movements while scanning.
  • Collect multiple scans, across multiple scanner apps, at least until you’ve arrived at a suitable workflow. Some scans/apps happen to work better in some cases than others.
  • Be aware of your surroundings. During the scan, you will be focused on the camera viewer.
  • As always, be conscious of the privacy of others.

Processing and storing the 3D data in the cloud

When you are satisfied with one of your scans, the next step in most scanner apps is to upload your scan to the app’s cloud infrastructure for final processing (it is possible that some apps may process the scan directly on your device). When complete, you should be able to view the high resolution scan on your device. Several scanner apps also have a web platform, which should now contain your final scan.

You will likely want to store the output somewhere you control, like your own cloud infrastructure. Some scanner apps may integrate with hosting services like Sketchfab, but otherwise, unfortunately, this may be a manual endeavour. Most scanner apps have download and share capabilities.

Output formats vary between apps, but generally fall into two categories: mesh and point cloud. We’ll explore these two categories using this Tyrannosaurus rex toy:

Tyrannosaurus rex toy, original


Mesh

Common mesh formats include: GLTF/GLB, OBJ, FBX, STL, and USDZ. Mesh objects are 3D graphical models consisting of faces, edges, and vertices. Although not often used in geospatial workflows, meshes are widely supported in modern browsers and 3D desktop software. Be aware that each mesh format is different, and support varies between environments and software.

Tyrannosaurus rex toy mesh. (scanner app: Polycam, viewer: Blender)
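To make the "faces, edges, and vertices" idea concrete, here is a toy look inside the text-based OBJ format: `v` lines are vertices, and `f` lines are faces that index into them. The tetrahedron below is my own made-up example, and the parser deliberately ignores normals, texture coordinates, and materials:

```python
# A minimal OBJ snippet: 4 vertices and 4 triangular faces (a tetrahedron).
obj_text = """\
v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""

vertices, faces = [], []
for line in obj_text.splitlines():
    parts = line.split()
    if not parts:
        continue
    if parts[0] == "v":
        vertices.append(tuple(float(x) for x in parts[1:4]))
    elif parts[0] == "f":
        # OBJ face indices are 1-based; keep only the vertex index
        faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))

print(len(vertices), len(faces))  # -> 4 4
```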

Point Cloud

Common point cloud formats include: LAS/LAZ, XYZ, and PLY. As described above, point clouds are collections of 3D points with optional additional attributes. Point clouds, especially in LAS/LAZ format, are more widely used in traditional geospatial workflows, and are supported in current versions of most GIS software.

Tyrannosaurus rex toy point cloud. (scanner app: Polycam, viewer: napari)
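The ASCII XYZ format is about as simple as point clouds get: one point per line, `x y z`, optionally followed by extra attributes such as intensity or RGB. A quick sketch using a few made-up points (a bounding box is a handy sanity check on any point cloud):

```python
# Three fake points in ASCII XYZ format; the fourth column is a
# hypothetical intensity attribute.
xyz_text = """\
0.00 0.00 0.00 120
0.50 0.10 0.02 95
1.00 0.20 0.05 101
"""

points = []
for line in xyz_text.splitlines():
    x, y, z, *extra = line.split()   # extra attributes are optional
    points.append((float(x), float(y), float(z)))

# Bounding box: (min x, min y, min z, max x, max y, max z)
xs, ys, zs = zip(*points)
bbox = (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs))
print(len(points), bbox)
```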

My suspicion about point clouds: as we are using third party, proprietary scanner apps, it is difficult to know how outputs are derived. Because this post is mostly about mobile LiDAR capabilities, I have so far neglected to mention that, in addition to LiDAR capture, most scanner apps can create 3D models directly from multi-angle photos. Point cloud outputs are available for models created from photos, which makes me suspect that point cloud outputs are derived from the output mesh surface, not directly from the LiDAR sensor. But this may vary between apps, and I should do some due diligence before disparaging app-derived point clouds further.
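For illustration of that suspicion: deriving a point cloud from a mesh is straightforward, since you can sample random points on each triangle using barycentric coordinates. The sketch below is my own toy example of the idea, not any app's actual pipeline:

```python
import random

def sample_triangle(a, b, c):
    """Uniformly sample one point on triangle (a, b, c)."""
    r1, r2 = random.random(), random.random()
    if r1 + r2 > 1.0:          # reflect back into the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return tuple(a[i] + r1 * (b[i] - a[i]) + r2 * (c[i] - a[i]) for i in range(3))

# A single flat triangle in the z=0 plane stands in for a mesh surface
triangle = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
cloud = [sample_triangle(*triangle) for _ in range(1000)]

# Every sampled point lies on the source triangle's plane
print(all(abs(p[2]) < 1e-9 for p in cloud))  # -> True
```

A point cloud made this way would look plausible, but it can only ever be as detailed as the mesh it was sampled from.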

Displaying the 3D model on a map

The final step in the workflow is to display the model on a map. There are many ways to view the 3D models, as either a mesh or a point cloud, whether you are acting as a consumer or a producer.



Mesh

  • Blender: 3D desktop editing software, supporting a wide range of mesh formats
  • Apple Reality Composer: view USDZ meshes and build iOS Augmented Reality experiences

Point Cloud

  • an online viewer that is capable of loading local point cloud files
  • QGIS: since version 3.18, QGIS is able to read/view point clouds
  • Python: I have had success reading las/laz files with PDAL, and viewing the point cloud with napari
  • If you convert your point cloud to a Cloud-optimized Point Cloud, you can drop the url into the viewer here
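The COPC conversion mentioned above can be done with a small PDAL pipeline (run as `pdal pipeline pipeline.json`). The filenames below are placeholders, and `writers.copc` requires a recent PDAL (2.4+); here is the pipeline built with plain Python for clarity:

```python
import json

# PDAL pipeline: read a LAZ file, write a Cloud-Optimized Point Cloud.
# The reader stage is inferred from the .laz extension.
pipeline = {
    "pipeline": [
        "scan.laz",
        {"type": "writers.copc", "filename": "scan.copc.laz"},
    ]
}

print(json.dumps(pipeline, indent=2))
```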



Mesh

  • <model-viewer>: embed GLTF/GLB models in HTML
  • ThreeJS: use the GLTFLoader or OBJLoader to add meshes
  • Mapbox GL JS/MapLibre GL JS: using ThreeJS, embed GLTF/GLB models within the map (examples: Mapbox, MapLibre)
  • Apple ARKit: I hesitate to start down this path because this includes all of mobile development, but this is how you can start building a native iOS Augmented Reality app that contains your mesh
  • Unity: add your mesh to a game environment, and build for several mobile and web environments
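To anchor a mesh on a Mapbox GL/MapLibre GL map, the model's longitude/latitude must be converted into the map's Web Mercator coordinate space. In JavaScript this is `MercatorCoordinate.fromLngLat()`; the underlying projection math, sketched in Python:

```python
import math

def lnglat_to_mercator(lng: float, lat: float):
    """Normalized Web Mercator coordinates in [0, 1] across the world."""
    x = (lng + 180.0) / 360.0
    siny = math.sin(math.radians(lat))
    y = 0.5 - math.log((1.0 + siny) / (1.0 - siny)) / (4.0 * math.pi)
    return x, y

# Null Island (0, 0) lands at the center of the normalized world
print(lnglat_to_mercator(0.0, 0.0))  # -> (0.5, 0.5)
```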

Point Cloud

  • Potree: WebGL based point cloud renderer

Yes, it’s a bathroom building, but it is also a mesh on a map! (scanner app: Metascan, viewer: Mapbox)


Some considerations when choosing a scanner app:

  • Output format: most scanner apps output several formats, but specific format availability and cost may vary between apps
  • Geographic positioning: some scanner apps do not support geographic positioning whatsoever, and the sophistication of positioning may vary between apps. All scanner apps that I tried required at least some manual positioning or rotation.
  • Support: many scanner apps maintain public Discord channels for support. Check them out and see if feature requests are actually being fulfilled.
  • LiDAR vs. photo mode: many scanner apps allow model capture in either LiDAR or photo mode. LiDAR mode uses the LiDAR sensor to collect depth information, while photo mode relies on multi-angle photogrammetry to derive the model. The LiDAR sensor on iPhone 12 Pro has a range of about 5 meters, and does not seem to have sufficient resolution to capture small details. Photo mode, on the other hand, often performs well at longer distances and is able to capture minute details, provided the user is free to capture the scene from a variety of angles. Photo mode is more time consuming and may provide physical challenges to the user – several dozen photos from all angles are often necessary to create a high quality model.
  • Cost: there are many reasonably priced scanner apps available, most with a free tier, upgradable to monthly subscriptions. There are also reputable, higher end scanner apps that you can test on a free trial. I will say that I did indulge in one such trial, and it resulted in two scans, one of which ended up located in the wrong hemisphere after processing, so perhaps you don’t always get what you pay for in this case. You would likely have more leverage for support, though. Anyway, food for thought.


In the end, my full workflow looks like this:

Final end-to-end mobile LiDAR workflow.

While the selection of scanner apps and the iPhone LiDAR sensor itself are impressive and fun to experiment with, I suspect both are still near the beginning of a journey toward greater things. The technology is new and the space is understandably evolving quickly, with no consensus on features, formats, or purpose (e.g. is the app for scanning retail products or real estate tours?).

With that said, I’m pleased to see that the mobile LiDAR app space is quickly evolving as developers embrace the new generation of depth sensors. Many features and new apps emerged even between the time I started and published this post, and I am excited to see what comes next!


1: The term LiDAR is fraught with mild controversy that I want to avoid in this post. LiDAR may variously stand for “light detection and ranging”, “laser imaging, detection, and ranging”, or it may be a portmanteau of “light” and “radar”. It may also be written “LiDAR” or “lidar”. I do not care about any of this pedantry, and if I never hear any more about it, I will be happy.
