@@ -173,6 +173,7 @@ All the parameters for `main_pipeline.py` are defined in the file `cli_utils.py`
* `--multiple_models` : If selected, will let `colmap mapper` build multiple models. The biggest one will then be chosen.
* `--more_sift_features` : If selected, will activate the COLMAP options `--SiftExtraction.domain_size_pooling` and `--SiftExtraction.estimate_affine_shape` during feature extraction. Be careful: this does not use the GPU and is thus very slow. More info: https://colmap.github.io/faq.html#increase-number-of-matches-sparse-3d-points
* `--add_new_videos` : If selected, will skip the mapping steps to directly register new videos with respect to an already existing COLMAP model.
* `--filter_models` : If selected, will filter the video localization to smooth the trajectory.
* `--stereo_min_depth` : Minimum depth for PatchMatch Stereo, used during point cloud densification.
* `--stereo_max_depth` : Same as min depth, but for max depth.
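As an illustration, the flags above could be combined in an invocation like the following. This is only a sketch: the required arguments of `main_pipeline.py` are elided, and the depth bounds are placeholder values, not taken from the original document.

```shell
# Sketch only: required arguments of main_pipeline.py are elided here,
# and the depth bounds (assumed to be in meters) are placeholder values.
ARGS=(
  --multiple_models          # keep only the biggest of the mapped models
  --filter_models            # smooth the video trajectories
  --stereo_min_depth 0.1     # PatchMatch Stereo depth range
  --stereo_max_depth 100
)
echo python main_pipeline.py "${ARGS[@]}"
```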
...
...
@@ -297,7 +298,7 @@ This will essentially do the same thing as the script, in order to let you chang
@@ -448,7 +449,8 @@ This will essentially do the same thing as the script, in order to let you chang
For the next videos, replace `input1` with `/path/to/georef_full`, which will incrementally add more and more images to the model.
5. Register the remaining frames of the videos, without mapping. This is done in chunks in order to avoid RAM problems.
Chunks are created during step 5, when calling the script `videos_to_colmap.py`. For each chunk `N`, make a copy of the scan database and perform the same operations as above, minus the mapping, which is replaced with image registration.
At the end of these per-video tasks, you should have a model at `/path/to/georef_full` with all photogrammetry images plus the localization of video frames at 1 fps, and, for each video, a TXT file with positions with respect to the first geo-registered reconstruction.
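The per-chunk registration step can be sketched as follows. This is only an illustration: the paths, file names, and chunk naming convention are placeholders (not taken from the original document), the per-chunk feature extraction and matching steps are elided, and the COLMAP call is echoed rather than executed.

```shell
# Sketch only: paths, file names, and the chunk naming convention are
# placeholders, not taken from the original document.
touch scan.db                               # stand-in for the real scan database
for N in 1 2 3; do
  db="scan_chunk_${N}.db"
  cp scan.db "$db"                          # fresh database copy per chunk
  # Per-chunk feature extraction and matching go here, then registration
  # instead of mapping -- the COLMAP call is echoed rather than executed:
  echo colmap image_registrator \
      --database_path "$db" \
      --input_path /path/to/georef_full \
      --output_path "/path/to/registered_chunk_${N}"
done
```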
...
...
@@ -678,7 +680,22 @@ This will essentially do the same thing as the script, in order to let you chang
--compress_depth_maps 1
```
This will create, for each video, a folder `/path/to/raw_GT/ground_truth_depth/<video name>/` with files containing depth information. The option `--write_occlusion_depth` will make the folder `/path/to/raw_GT/` much heavier but is optional; it is used for inspection purposes. The option `--compress_depth_maps` will try to compress the depth maps with the gzip algorithm. Without compression, the files are named `[frame_name.jpg]` (even though they are not JPEG files); with compression, they are named `[frame_name.jpg].gz`. Note that for non-sparse depth maps (especially occlusion depth maps), the gzip compression is not very effective.
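The naming convention above can be illustrated with a small sketch. The frame name and file contents here are placeholders, and the depth map is treated as an opaque binary blob:

```shell
# Sketch only: the frame name and file contents are placeholders.
depth_file="frame_000001.jpg"               # raw depth map, despite the .jpg name
head -c 64 /dev/zero > "$depth_file"        # dummy binary payload
gzip -k "$depth_file"                       # produces frame_000001.jpg.gz
gunzip -c "${depth_file}.gz" | cmp - "$depth_file" && echo "round-trip OK"
```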
Alternatively, you can do a sanity check before creating the depth maps by running the dataset inspector.
See https://github.com/ETH3D/dataset-pipeline#dataset-inspection
- Note that you don't need the option `--multi_res_point_cloud_directory_path`
- Also note that this will load every image of your video, so for long videos it can demand a lot of RAM