Point Cloud Processor Tips and Notes

When running the new Point Cloud Processor command, the following notes may help you:

  1. If you have never set the Maximum Number of Points in Surface setting, found under Support - Options - Point Clouds, the command may not execute. The default is 500,000 points; however, that value is not written into your Options settings until you change it. If the setting is absent from the Options file, the command will open but may fail to execute when asked to do so.

  2. If you load very large point cloud files (e.g., more than 100,000,000 points), you may want to run the Sample Region command from the Point Cloud menu to reduce the cloud to 100,000,000 points or fewer before running the Point Cloud Processor. I have found that with clouds of 200,000,000 points the Point Cloud Processor may process them, but it takes a long time; reduced to 100,000,000 points or fewer, it runs much faster. We are looking into speeding up the processing of larger datasets, but for most projects 100,000,000 points is a lot of data. Sample Region is extremely fast, reducing the point cloud density in about a second prior to processing with our command.
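Conceptually, this kind of thinning is just taking a random subset of the points. A minimal Python sketch of the idea, using numpy (this is an illustration only, not how Sample Region is actually implemented):

```python
import numpy as np

def downsample(points: np.ndarray, max_points: int, seed: int = 0) -> np.ndarray:
    """Randomly thin a point cloud (N x 3 array) to at most max_points points."""
    if len(points) <= max_points:
        return points  # already under the budget; nothing to do
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=max_points, replace=False)
    return points[idx]

# Example: thin a synthetic 1,000,000-point cloud to 250,000 points.
cloud = np.random.rand(1_000_000, 3)
thinned = downsample(cloud, 250_000)
print(thinned.shape)  # (250000, 3)
```

Random subsampling preserves the overall density distribution of the cloud, which is why a thinned cloud still produces a usable surface.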

Trimble Stratus / Propeller typically offers two point clouds for download. In one project I was looking at over the weekend, one file was 542 million points and the other 54 million points. I processed the 54 million point file in ~3 minutes; the 542 million point file failed at 38%. Reduced to 100 million points it took ~10 minutes to process, and at 200 million points it took a lot longer.

For large areas with large point clouds you can:

  1. Create Point Cloud Regions for sections of the project with fewer than 100 million points each, process each area separately, then create a surface from the combined results of all the areas.
  2. Reduce the point cloud to 100 million points or fewer first using Sample Region, then run the Point Cloud Processor on the reduced cloud.
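The tiling idea in option 1 can be sketched in Python with numpy: split the cloud's XY extent into a grid and refine the grid until every occupied tile is under the point budget. The `tile_regions` helper is hypothetical; in the software you draw Point Cloud Regions interactively, but the budget logic is the same:

```python
import numpy as np

def tile_regions(points: np.ndarray, max_points: int) -> list[np.ndarray]:
    """Split a cloud (N x 3) into XY grid tiles of at most max_points each.

    Doubles the grid resolution until every occupied tile fits the budget.
    """
    n = 1
    mins = points[:, :2].min(axis=0)
    maxs = points[:, :2].max(axis=0)
    span = np.maximum(maxs - mins, 1e-9)  # avoid division by zero on flat extents
    while True:
        # Assign each point to an n x n grid cell over the XY bounding box.
        cells = np.floor((points[:, :2] - mins) / span * n).clip(0, n - 1).astype(int)
        keys = cells[:, 0] * n + cells[:, 1]
        tiles = [points[keys == k] for k in np.unique(keys)]
        if all(len(t) <= max_points for t in tiles):
            return tiles
        n *= 2  # too many points in some tile: refine the grid and retry

# Example: split a synthetic 500,000-point cloud into tiles of at most 100,000 points.
cloud = np.random.rand(500_000, 3)
tiles = tile_regions(cloud, 100_000)
print(len(tiles), sum(len(t) for t in tiles))
```

Each tile can then be processed independently, and because every point lands in exactly one tile, the per-tile results cover the whole project when combined.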

Hope this helps with those really large datasets out there.