Commit 7281439a authored by Clément Pinard

add pipeline without lidar

parent 022c65fc
@@ -166,7 +166,28 @@ This will essentially do the same thing as the script, in order to let you chang
--number_of_scales 5
```
4. First COLMAP step (divided into two parts): feature extraction for photogrammetry frames
```
python generate_sky_masks.py \
--img_dir /path/to/images \
--colmap_img_root /path/to/images \
--maskroot /path/to/images_mask \
--batch_size 8
```
```
colmap feature_extractor \
--database_path /path/to/scan.db \
--image_path /path/to/images \
--ImageReader.mask_path /path/to/images_mask/ \
--ImageReader.camera_model RADIAL \
--ImageReader.single_camera_per_folder 1
```
We don't need to extract features before having the video frames, but this will populate the `/path/to/scan.db` file with the photogrammetry pictures and the corresponding ids, which will be reserved for future versions of the file. Besides, it automatically sets one camera per folder too.
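If you want a quick sanity check of what this step wrote, here is a minimal sketch using only Python's standard library (the COLMAP database is a plain SQLite file; in COLMAP's camera-model enumeration, RADIAL is model id 3):
```
import sqlite3

# list the cameras created so far: there should be one RADIAL camera per image sub-folder
with sqlite3.connect("/path/to/scan.db") as db:
    for camera_id, model, width, height in db.execute(
            "SELECT camera_id, model, width, height FROM cameras"):
        print(camera_id, model, width, height)
```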
5. Video frame addition to COLMAP db file
```
python video_to_colmap.py \
@@ -179,7 +200,6 @@ This will essentially do the same thing as the script, in order to let you chang
--total_frames 1000 \
--save_space \
--thorough_db /path/to/scan.db
```
The video to colmap step will populate the scan db with new entries with the right camera parameters, and select a spatially optimal subset of frames from the full video for a photogrammetry with 1000 pictures.
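The exact selection logic lives in the script, but the idea can be sketched as greedy farthest-point sampling over the frames' XYZ positions (`select_spread_frames` is a hypothetical illustration, not the script's actual code):
```
import numpy as np

def select_spread_frames(positions, n):
    # positions: [num_frames, 3] array of camera XYZ positions
    # greedily pick n frames that cover the trajectory as evenly as possible
    positions = np.asarray(positions, dtype=float)
    chosen = [0]
    dist = np.linalg.norm(positions - positions[0], axis=1)
    for _ in range(n - 1):
        idx = int(dist.argmax())  # frame farthest from the already chosen set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(positions - positions[idx], axis=1))
    return sorted(chosen)
```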
@@ -187,10 +207,10 @@ This will essentially do the same thing as the script, in order to let you chang
- `video_frames_for_thorough_scan.txt`: all images used in the first thorough photogrammetry
- `georef.txt`: all images with GPS position, and their XYZ equivalent in the chosen coordinate system, minus the centroid of the Lidar file.
And finally, it will divide long videos into chunks with corresponding lists of file paths so that we don't deal with too large sequences (the limit here is 4000 frames).
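The chunking itself is straightforward; a minimal sketch (`chunk_frames` is a hypothetical helper, with 4000 matching the limit above):
```
def chunk_frames(frame_paths, max_len=4000):
    # split a long frame list into chunks of at most max_len frames
    return [frame_paths[i:i + max_len] for i in range(0, len(frame_paths), max_len)]
```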
6. Second part of the first COLMAP step: feature extraction for video frames used for thorough photogrammetry
```
@@ -201,14 +221,14 @@ This will essentially do the same thing as the script, in order to let you chang
--batch_size 8
```
(this is the same command as step 4)
```
colmap feature_extractor \
--database_path /path/to/scan.db \
--image_path /path/to/images \
--image_list_path /path/to/images/video_frames_for_thorough_scan.txt \
--ImageReader.mask_path /path/to/images_mask/ \
--ImageReader.camera_model RADIAL \
--ImageReader.single_camera_per_folder 1
```
We also recommend that you make your own vocab_tree with image indexes; this will make the next matching steps faster.
@@ -220,7 +240,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_index /path/to/indexed_vocab_tree
```
7. Second COLMAP step: matching. For fewer than 1000 images, you can use exhaustive matching (this will take around 2 hours). If there are too many images, you can use either spatial matching or vocab tree matching.
```
colmap exhaustive_matcher \
@@ -241,7 +261,7 @@ This will essentially do the same thing as the script, in order to let you chang
--SiftMatching.guided_matching 1
```
8. Third COLMAP step: thorough mapping.
```
mkdir -p /path/to/thorough/
@@ -266,7 +286,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_path /path/to/thorough/0
```
9. Fourth COLMAP step: [georeferencing](https://colmap.github.io/faq.html#geo-registration)
```
mkdir -p /path/to/geo_registered_model
@@ -280,7 +300,7 @@ This will essentially do the same thing as the script, in order to let you chang
This model will be the reference model; every further model and frame localization will be done with respect to this one.
Even if we could, we don't run point cloud registration right now, as the next steps will help us get a more complete point cloud.
10. Video Localization
All these substeps will populate the db file, which is then used for matching, so you need to make a copy for each video.
1. Extract all the frames of the video to the same directory where the `video_to_colmap.py` script exported the frame subset of this video.
@@ -431,7 +451,7 @@ This will essentially do the same thing as the script, in order to let you chang
```
At the end of these per-video tasks, you should have a model at `/path/to/georef_full` with all photogrammetry images + localization of video frames at 1fps, and for each video a TXT file with positions with respect to the first geo-registered reconstruction.
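If you want to turn those text poses into camera centers, recall that COLMAP stores a world-to-camera rotation as a w-first quaternion per image; a standard conversion (a sketch assuming numpy, not part of the pipeline's scripts) is:
```
import numpy as np

def quat_to_rotation(qw, qx, qy, qz):
    # COLMAP images.txt stores QW QX QY QZ TX TY TZ per image
    w, x, y, z = np.array([qw, qx, qy, qz]) / np.linalg.norm([qw, qx, qy, qz])
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# camera center in world coordinates: C = -R.T @ t
```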
11. Point cloud densification
```
colmap image_undistorter \
@@ -461,7 +481,7 @@ This will essentially do the same thing as the script, in order to let you chang
This will also create a `/path/to/georef_dense.ply.vis` file which describes the frames from which each point is visible.
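If you need to inspect that file, here is a minimal reader (a sketch that assumes the usual COLMAP binary layout for `.vis` files: a uint64 point count, then per point a uint32 count followed by that many uint32 image indices):
```
import struct

def read_vis(path):
    # for each fused point, the indices of the images from which it is visible
    visibility = []
    with open(path, "rb") as f:
        num_points, = struct.unpack("<Q", f.read(8))
        for _ in range(num_points):
            count, = struct.unpack("<I", f.read(4))
            visibility.append(struct.unpack("<{}I".format(count), f.read(4 * count)))
    return visibility
```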
12. Point cloud registration
Convert meshlab project to PLY with normals:
@@ -522,7 +542,7 @@ This will essentially do the same thing as the script, in order to let you chang
--transform /path/to/registration_matrix.txt
```
13. Occlusion Mesh generation
Use COLMAP delaunay mesher to generate a mesh from PLY + VIS.
Normally, COLMAP expects the cloud it generated when running the `stereo_fusion` step, but we use the lidar point cloud instead.
@@ -565,7 +585,7 @@ This will essentially do the same thing as the script, in order to let you chang
The ideal distance threshold is what is considered close range of the occlusion mesh, and the distance from which a splat (a small square surface) will be created.
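For intuition, constructing such a splat from a point and its normal boils down to building a tangent basis (a geometric sketch, not the tool's actual code):
```
import numpy as np

def make_splat(point, normal, radius):
    # return the 4 corners of a square splat centered on point, orthogonal to normal
    normal = normal / np.linalg.norm(normal)
    # pick any axis not parallel to the normal to build the tangent basis
    helper = np.array([0.0, 1.0, 0.0]) if abs(normal[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    return [point + radius * u, point + radius * v, point - radius * u, point - radius * v]
```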
14. Raw Groundtruth generation
For each video:
@@ -586,7 +606,7 @@ This will essentially do the same thing as the script, in order to let you chang
This will create for each video a folder `/path/to/raw_GT/groundtruth_depth/<video name>/` with compressed files with depth information. The option `--write_occlusion_depth` will make the folder `/path/to/raw_GT/` much heavier, but it is optional and used for inspection purposes.
15. Dataset conversion
For each video:
@@ -603,10 +623,4 @@ This will essentially do the same thing as the script, in order to let you chang
--video \
--downscale 4 \
--threads 8
```
@@ -19,7 +19,7 @@ def extrapolate_position(speeds, timestamps, initial_position, final_position):
    return trapz + interp
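# For reference, a plausible reconstruction of what this function computes
# (a sketch under assumptions, not the repo's exact implementation; speeds is
# assumed [n, 3], timestamps [n]): integrate speed between two GPS fixes with
# the trapezoidal rule, then blend in a linear correction so the integrated
# track ends exactly at final_position.
#
#     dt = np.diff(timestamps)
#     trapz = initial_position + np.concatenate(
#         [np.zeros((1, 3)), np.cumsum((speeds[1:] + speeds[:-1]) / 2 * dt[:, None], axis=0)])
#     drift = final_position - trapz[-1]
#     alpha = (timestamps - timestamps[0]) / (timestamps[-1] - timestamps[0])
#     interp = alpha[:, None] * drift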
def preprocess_metadata(metadata, proj):
    def lambda_fun(x):
        return pd.Series(proj(*x), index=["x", "y"])
    position_xy = metadata[["location_longitude", "location_latitude"]].apply(lambda_fun, axis=1)
@@ -57,17 +57,16 @@ def preprocess_metadata(metadata, proj, centroid):
    for start, end in zip(invalidity_start, validity_start):
        positions[start:end] = extrapolate_position(speed[start:end], timestamps[start:end], positions[start], positions[end])
metadata["x"], metadata["y"], metadata["z"] = positions.transpose() metadata["x"], metadata["y"], metadata["z"] = positions.transpose()
return metadata return metadata
def extract_metadata(folder_path, file_path, native_wrapper, proj, w, h, f, save_path=None):
    metadata = native_wrapper.vmeta_extract(file_path)
    metadata = metadata.iloc[:-1]
    metadata = preprocess_metadata(metadata, proj)
    video_quality = h * w / f
    metadata["video_quality"] = video_quality
    metadata["height"] = h
...
@@ -39,8 +39,9 @@ def extract_sky_mask(network, image_paths, mask_folder):
    image_tensor = torch.from_numpy(images).float()/255
    image_tensor = image_tensor.permute(0, 3, 1, 2)  # shape [B, C, H, W]
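    # downscale to a fixed 512-pixel width while preserving the aspect ratio
    # (h and w are the original image height and width)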
    w_r = 512
    h_r = int(512 * h / w)
    reduced = F.interpolate(image_tensor, size=(h_r, w_r), mode='area')
    result = network(reduced.cuda())
    classes = torch.max(result, 1)[1]
...
import las2ply
import rawpy
import imageio
import numpy as np
import pandas as pd
from pyproj import Proj
from edit_exif import get_gps_location
from wrappers import Colmap, FFMpeg, PDraw, ETH3D, PCLUtil
from cli_utils import set_argparser, print_step, print_workflow
from video_localization import localize_video, generate_GT
import meshlab_xml_writer as mxw
import videos_to_colmap as v2c
import generate_sky_masks as gsm
import prepare_images as pi
import prepare_workspace as pw
def prepare_point_clouds(pointclouds, lidar_path, verbose, eth3d, pcl_util, SOR, pointcloud_resolution, **env):
@@ -42,117 +35,6 @@ def prepare_point_clouds(pointclouds, lidar_path, verbose, eth3d, pcl_util, SOR,
    return converted_clouds, output_centroid
def extract_gps_and_path(existing_pictures, image_path, system, centroid, **env):
    proj = Proj(system)
    georef_list = []
    for img in existing_pictures:
        gps = get_gps_location(img)
        if gps is not None:
            lat, lng, alt = gps
            x, y = proj(lng, lat)
            x -= centroid[0]
            y -= centroid[1]
            alt -= centroid[2]
            georef_list.append("{} {} {} {}\n".format(img.relpath(image_path), x, y, alt))
    return georef_list
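# Example of the pyproj call this relies on: Proj("epsg:2154") maps
# (longitude, latitude) degrees to metric (x, y); the EPSG code here is
# only an illustrative value for `system`.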
def extract_pictures_to_workspace(input_folder, image_path, workspace, colmap,
                                  raw_ext, pic_ext, more_sift_features, **env):
    picture_folder = input_folder / "Pictures"
    picture_folder.merge_tree(image_path)
    raw_files = sum((list(image_path.walkfiles('*{}'.format(ext))) for ext in raw_ext), [])
    for raw in raw_files:
        if not any((raw.stripext() + ext).isfile() for ext in pic_ext):
            raw_array = rawpy.imread(raw)
            rgb = raw_array.postprocess()
            imageio.imsave(raw.stripext() + ".jpg", rgb)
        raw.remove()
    gsm.process_folder(folder_to_process=image_path, image_path=image_path, pic_ext=pic_ext, **env)
    colmap.extract_features(per_sub_folder=True, more=more_sift_features)
    return sum((list(image_path.walkfiles('*{}'.format(ext))) for ext in pic_ext), [])
def extract_videos_to_workspace(video_path, **env):
    return v2c.process_video_folder(output_video_folder=video_path, **env)
def check_input_folder(path):
    def print_error_string():
        print("Error, bad input folder structure")
        print("Expected :")
        print(str(path/"Lidar"))
        print(str(path/"Pictures"))
        print(str(path/"Videos"))
        print()
        print("but got :")
        print("\n".join(str(d) for d in path.dirs()))

    if all((path/d).isdir() for d in ["Lidar", "Pictures", "Videos"]):
        return
    else:
        print_error_string()
def prepare_workspace(path, env):
    for dirname, key in zip(["Lidar", "Pictures", "Masks",
                             "Pictures/Videos", "Thorough/0", "Thorough/georef", "Thorough/georef_full",
                             "Videos_reconstructions"],
                            ["lidar_path", "image_path", "mask_path",
                             "video_path", "thorough_recon", "georef_recon", "georef_full_recon",
                             "video_recon"]):
        (path / dirname).makedirs_p()
        env[key] = path / dirname

    env["thorough_db"] = path / "scan_thorough.db"
    env["video_frame_list_thorough"] = env["image_path"] / "video_frames_for_thorough_scan.txt"
    env["georef_frames_list"] = env["image_path"] / "georef.txt"
    env["lidar_mlp"] = env["workspace"] / "lidar.mlp"
    env["with_normals_path"] = env["lidar_path"] / "with_normals.ply"
    env["aligned_mlp"] = env["workspace"] / "aligned_model.mlp"
    env["occlusion_ply"] = env["lidar_path"] / "occlusion_model.ply"
    env["splats_ply"] = env["lidar_path"] / "splats_model.ply"
    env["occlusion_mlp"] = env["lidar_path"] / "occlusions.mlp"
    env["splats_mlp"] = env["lidar_path"] / "splats.mlp"
    env["georefrecon_ply"] = env["georef_recon"] / "georef_reconstruction.ply"
    env["matrix_path"] = env["workspace"] / "matrix_thorough.txt"
    env["indexed_vocab_tree"] = env["workspace"] / "vocab_tree_thorough.bin"
    env["dense_workspace"] = env["thorough_recon"].parent/"dense"
def prepare_video_workspace(video_name, video_frames_folder,
                            raw_output_folder, converted_output_folder,
                            video_recon, video_path, **env):
    video_env = {video_name: video_name,
                 video_frames_folder: video_frames_folder}
    relative_path_folder = video_frames_folder.relpath(video_path)
    video_env["lowfps_db"] = video_frames_folder / "video_low_fps.db"
    video_env["metadata"] = video_frames_folder / "metadata.csv"
    video_env["lowfps_image_list_path"] = video_frames_folder / "lowfps.txt"
    video_env["chunk_image_list_paths"] = sorted(video_frames_folder.files("full_chunk_*.txt"))
    video_env["chunk_dbs"] = [video_frames_folder / fp.namebase + ".db" for fp in video_env["chunk_image_list_paths"]]
    colmap_root = video_recon / relative_path_folder
    video_env["colmap_models_root"] = colmap_root
    video_env["full_model"] = colmap_root
    video_env["lowfps_model"] = colmap_root / "lowfps"
    num_chunks = len(video_env["chunk_image_list_paths"])
    video_env["chunk_models"] = [colmap_root / "chunk_{}".format(index) for index in range(num_chunks)]
    video_env["final_model"] = colmap_root / "final"
    output = {}
    output["images_root_folder"] = raw_output_folder / "images"
    output["video_frames_folder"] = output["images_root_folder"] / relative_path_folder
    output["model_folder"] = raw_output_folder / "models" / relative_path_folder
    output["interpolated_frames_list"] = output["model_folder"] / "interpolated_frames.txt"
    output["final_model"] = output["model_folder"] / "final"
    output["kitti_format_folder"] = converted_output_folder / "KITTI" / relative_path_folder
    output["viz_folder"] = converted_output_folder / "video" / relative_path_folder
    video_env["output_env"] = output
    video_env["already_localized"] = env["resume_work"] and output["model_folder"].isdir()
    video_env["GT_already_done"] = env["resume_work"] and (raw_output_folder / "ground_truth_depth" / video_name.namebase).isdir()
    return video_env
def main():
    args = set_argparser().parse_args()
    env = vars(args)
@@ -163,9 +45,9 @@ def main():
        args.skip_step += [1, 2, 4, 5, 6]
    if args.begin_step is not None:
        args.skip_step += list(range(args.begin_step))
    pw.check_input_folder(args.input_folder)
    args.workspace = args.workspace.abspath()
    pw.prepare_workspace(args.workspace, env)
    colmap = Colmap(db=env["thorough_db"],
                    image_path=env["image_path"],
                    mask_path=env["mask_path"],
@@ -188,45 +70,33 @@ def main():
    ply_files = (args.input_folder/"Lidar").files("*.ply")
    input_pointclouds = las_files + ply_files
    env["videos_list"] = sum((list((args.input_folder/"Videos").walkfiles('*{}'.format(ext))) for ext in args.vid_ext), [])
    no_gt_folder = args.input_folder/"Videos"/"no_groundtruth"
    if no_gt_folder.isdir():
        env["videos_to_localize"] = [v for v in env["videos_list"] if not str(v).startswith(no_gt_folder)]
    i = 1
    if i not in args.skip_step:
        print_step(i, "Point Cloud Preparation")
        env["pointclouds"], env["centroid"] = prepare_point_clouds(input_pointclouds, **env)
if env["centroid"] is not None:
np.savetxt(env["centroid_path"], env["centroid"])
    else:
        if env["centroid_path"].isfile():
            env["centroid"] = np.loadtxt(env["centroid_path"])
env["centroid"] = np.loadtxt(centroid_path)
    i += 1
    if i not in args.skip_step:
        print_step(i, "Pictures preparation")
        env["existing_pictures"] = pi.extract_pictures_to_workspace(**env)
    else:
        env["existing_pictures"] = sum((list(env["image_path"].walkfiles('*{}'.format(ext))) for ext in env["pic_ext"]), [])
    i += 1
    if i not in args.skip_step:
        print_step(i, "Extracting Videos and selecting optimal frames for a thorough scan")
        existing_georef, env["centroid"] = pi.extract_gps_and_path(**env)
        env["videos_frames_folders"] = pi.extract_videos_to_workspace(existing_georef=existing_georef,
                                                                      fps=args.lowfps, **env)
    else:
        env["videos_frames_folders"] = {}
        by_name = {v.namebase: v for v in env["videos_list"]}
@@ -236,17 +106,17 @@ def main():
            env["videos_frames_folders"][by_name[video_name]] = folder
    env["videos_workspaces"] = {}
    for v, frames_folder in env["videos_frames_folders"].items():
        env["videos_workspaces"][v] = pw.prepare_video_workspace(v, frames_folder, **env)
    i += 1
    if i not in args.skip_step:
        print_step(i, "First thorough photogrammetry")
        env["thorough_recon"].makedirs_p()
        colmap.extract_features(image_list=env["video_frame_list_thorough"], more=args.more_sift_features)
        colmap.index_images(vocab_tree_output=env["indexed_vocab_tree"], vocab_tree_input=args.vocab_tree)
        colmap.match(method="vocab_tree", vocab_tree=env["indexed_vocab_tree"])
        colmap.map(output=env["thorough_recon"])
        colmap.adjust_bundle(output=env["thorough_recon"] / "0", input=env["thorough_recon"] / "0",
                             num_iter=100, refine_extra_params=True)
    i += 1
@@ -254,7 +124,7 @@ def main():
print_step(i, "Alignment of photogrammetric reconstruction with GPS") print_step(i, "Alignment of photogrammetric reconstruction with GPS")
colmap.align_model(output=env["georef_recon"], colmap.align_model(output=env["georef_recon"],
input=env["thorough_recon"], input=env["thorough_recon"] / "0",
ref_images=env["georef_frames_list"]) ref_images=env["georef_frames_list"])
env["georef_recon"].merge_tree(env["georef_full_recon"]) env["georef_recon"].merge_tree(env["georef_full_recon"])
if args.inspect_dataset: if args.inspect_dataset:
@@ -272,18 +142,19 @@ def main():
    i += 1
    if i not in args.skip_step:
        print_step(i, "Video localization with respect to reconstruction")
        for j, v in enumerate(env["videos_to_localize"]):
            print("\n\nNow working on video {} [{}/{}]".format(v, j + 1, len(env["videos_to_localize"])))
            video_env = env["videos_workspaces"][v]
            localize_video(video_name=v,
                           video_frames_folder=env["videos_frames_folders"][v],
                           video_index=j+1,
                           num_videos=len(env["videos_to_localize"]),
                           **video_env, **env)
    i += 1
    if i not in args.skip_step:
        print_step(i, "Full reconstruction point cloud densification")
        env["georef_full_recon"].makedirs_p()
        colmap.undistort(input=env["georef_full_recon"])
        colmap.dense_stereo()
        colmap.stereo_fusion(output=env["georefrecon_ply"])
@@ -302,7 +173,7 @@ def main():
print_step(i, "Registration of photogrammetric reconstruction with respect to Lidar Point Cloud") print_step(i, "Registration of photogrammetric reconstruction with respect to Lidar Point Cloud")
if args.registration_method == "eth3d": if args.registration_method == "eth3d":
# Note : ETH3D doesn't register with scale, this might not be suitable for very large areas # Note : ETH3D doesn't register with scale, this might not be suitable for very large areas
mxw.add_mesh_to_project(env["lidar_mlp"], env["aligned_mlp"], env["georefrecon_ply"], index=0) mxw.add_meshes_to_project(