Commit 7281439a authored by Clément Pinard

add pipeline without lidar

parent 022c65fc
......@@ -166,7 +166,28 @@ This will essentially do the same thing as the script, in order to let you chang
--number_of_scales 5
```
4. Video frame addition to COLMAP db file
4. First COLMAP step (divided into two parts): feature extraction for photogrammetry frames
```
python generate_sky_masks.py \
--img_dir /path/to/images \
--colmap_img_root /path/to/images \
--maskroot /path/to/images_mask \
--batch_size 8
```
```
colmap feature_extractor \
--database_path /path/to/scan.db \
--image_path /path/to/images \
--ImageReader.mask_path /path/to/images_mask/ \
--ImageReader.camera_model RADIAL \
--ImageReader.single_camera_per_folder 1
```
We don't need to extract features before adding the video frames, but this populates the `/path/to/scan.db` file with the photogrammetry pictures and their corresponding ids, which are reserved for future versions of the file. It also automatically sets one camera per folder.
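If you want to check what was written, the database is a plain SQLite file; a minimal sketch (the `images` and `cameras` tables are part of COLMAP's documented schema, the path is a placeholder):
```
import sqlite3

# Inspect the COLMAP database populated by feature_extractor.
db = sqlite3.connect("/path/to/scan.db")
for image_id, name, camera_id in db.execute(
        "SELECT image_id, name, camera_id FROM images ORDER BY image_id"):
    print(image_id, name, camera_id)
# single_camera_per_folder should yield one camera per image folder
print(db.execute("SELECT COUNT(*) FROM cameras").fetchone()[0], "cameras")
db.close()
```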
5. Video frame addition to COLMAP db file
```
python video_to_colmap.py \
......@@ -179,7 +200,6 @@ This will essentially do the same thing as the script, in order to let you chang
--total_frames 1000 \
--save_space \
--thorough_db /path/to/scan.db
```
The video-to-colmap step populates the scan db with new entries with the right camera parameters, and selects a spatially optimal subset of frames from the full video for a photogrammetry with 1000 pictures.
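The exact selection logic lives in the script; as a purely illustrative sketch of what "spatially optimal" means here, a greedy farthest-point sampling over frame positions picks a well-spread subset (hypothetical code, not the script's implementation):
```
import numpy as np

def spatial_subset(positions, n_frames=1000):
    # Greedy farthest-point sampling: repeatedly pick the frame farthest
    # from everything already chosen. `positions` is an (N, 3) array of
    # frame positions (e.g. from GPS metadata). Illustrative only.
    chosen = [0]
    dists = np.linalg.norm(positions - positions[0], axis=1)
    while len(chosen) < min(n_frames, len(positions)):
        idx = int(np.argmax(dists))
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(positions - positions[idx], axis=1))
    return sorted(chosen)
```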
......@@ -190,7 +210,7 @@ This will essentially do the same thing as the script, in order to let you chang
Finally, it divides long videos into chunks with corresponding lists of file paths, so that we don't have to deal with overly large sequences (the limit here is 4000 frames).
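The chunking itself is simple list splitting; a minimal sketch (the 4000-frame limit and the `full_chunk_<j>.txt` naming follow the pipeline, the helper itself is hypothetical):
```
def write_chunks(frame_paths, out_dir, max_frames=4000):
    # Split the frame list into chunks of at most max_frames and write
    # one "full_chunk_<j>.txt" file per chunk.
    for j in range(0, len(frame_paths), max_frames):
        chunk = frame_paths[j:j + max_frames]
        with open("{}/full_chunk_{}.txt".format(out_dir, j // max_frames), "w") as f:
            f.write("\n".join(chunk) + "\n")
```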
5. First COLMAP step: feature extraction
6. Second part of the first COLMAP step: feature extraction for the video frames used for the thorough photogrammetry
```
......@@ -201,14 +221,14 @@ This will essentially do the same thing as the script, in order to let you chang
--batch_size 8
```
(this is the same command as step 4)
```
colmap feature_extractor \
--database_path /path/to/scan.db \
--image_path /path/to/images \
--image_list_path /path/to/images/video_frames_for_thorough_scan.txt \
--ImageReader.mask_path /path/to/images_mask/ \
--ImageReader.camera_model RADIAL \
--ImageReader.single_camera_per_folder 1
```
We also recommend that you build your own vocab_tree with image indexes; this will make the subsequent matching steps faster.
......@@ -220,7 +240,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_index /path/to/indexed_vocab_tree
```
6. Second COLMAP step: matching. For fewer than 1000 images, you can use exhaustive matching (this will take around 2 hours). If there are too many images, you can use either spatial matching or vocab tree matching.
7. Second COLMAP step: matching. For fewer than 1000 images, you can use exhaustive matching (this will take around 2 hours). If there are too many images, you can use either spatial matching or vocab tree matching.
```
colmap exhaustive_matcher \
......@@ -241,7 +261,7 @@ This will essentially do the same thing as the script, in order to let you chang
--SiftMatching.guided_matching 1
```
7. Third COLMAP step: thorough mapping.
8. Third COLMAP step: thorough mapping.
```
mkdir -p /path/to/thorough/
......@@ -266,7 +286,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_path /path/to/thorough/0
```
8. Fourth COLMAP step: [georeferencing](https://colmap.github.io/faq.html#geo-registration)
9. Fourth COLMAP step: [georeferencing](https://colmap.github.io/faq.html#geo-registration)
```
mkdir -p /path/to/geo_registered_model
......@@ -280,7 +300,7 @@ This will essentially do the same thing as the script, in order to let you chang
This model will be the reference model; every further model and frame localization will be done with respect to it.
Although we could, we don't run point cloud registration right now, as the next steps will help us obtain a more complete point cloud.
9. Video Localization
10. Video Localization
All these substeps will populate the db file, which is then used for matching, so you need to make a copy of it for each video, as sketched below.
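A minimal sketch of the per-video copy (paths are placeholders; `video_low_fps.db` matches the name used by `prepare_video_workspace`):
```
import shutil
from glob import glob

# One private copy of the thorough db per video, so the matches added
# for one video never pollute another video's database.
for frames_folder in glob("/path/to/images/Videos/*/"):
    shutil.copyfile("/path/to/scan.db", frames_folder + "video_low_fps.db")
```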
1. Extract all the frames of the video to the same directory where the `video_to_colmap.py` script exported the frame subset of this video.
......@@ -431,7 +451,7 @@ This will essentially do the same thing as the script, in order to let you chang
```
At the end of these per-video tasks, you should have a model at `/path/to/georef_full` with all photogrammetry images plus the localization of video frames at 1 fps, and, for each video, a TXT file with positions relative to the first geo-registered reconstruction.
10. Point cloud densification
11. Point cloud densification
```
colmap image_undistorter \
......@@ -461,7 +481,7 @@ This will essentially do the same thing as the script, in order to let you chang
This will also create a `/path/to/georef_dense.ply.vis` file, which describes the frames from which each point is visible.
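If you need to inspect it, the file can be parsed directly; the sketch below assumes COLMAP's binary `.vis` layout (a uint64 point count, then for each point a uint32 number of visible images followed by that many uint32 image indices):
```
import struct

with open("/path/to/georef_dense.ply.vis", "rb") as f:
    num_points, = struct.unpack("<Q", f.read(8))
    visibility = []
    for _ in range(num_points):
        n_images, = struct.unpack("<I", f.read(4))
        visibility.append(struct.unpack("<{}I".format(n_images), f.read(4 * n_images)))
# visibility[i] now lists the image indices that see point i
```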
11. Point cloud registration
12. Point cloud registration
Convert the meshlab project to a PLY with normals:
......@@ -522,7 +542,7 @@ This will essentially do the same thing as the script, in order to let you chang
--transform /path/to/registration_matrix.txt
```
12. Occlusion Mesh generation
13. Occlusion Mesh generation
Use the COLMAP delaunay mesher to generate a mesh from PLY + VIS.
Normally, COLMAP expects the cloud it generated during the `stereo_fusion` step, but we use the lidar point cloud instead.
......@@ -565,7 +585,7 @@ This will essentially do the same thing as the script, in order to let you chang
The ideal distance threshold corresponds to what is considered close range for the occlusion mesh; it is also the distance from which a splat (a small square surface) will be created, as sketched below.
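To make the splat idea concrete, here is a hypothetical sketch of how a square splat can be built from a lidar point and its normal (not the pipeline's actual code):
```
import numpy as np

def make_splat(point, normal, size):
    # Return the 4 corners of a square of side `size`, centered on
    # `point` and orthogonal to `normal`. Illustrative only.
    n = normal / np.linalg.norm(normal)
    helper = np.array([0., 1., 0.]) if abs(n[0]) > 0.9 else np.array([1., 0., 0.])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    h = size / 2
    return [point + h*u + h*v, point - h*u + h*v,
            point - h*u - h*v, point + h*u - h*v]
```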
13. Raw Groundtruth generation
14. Raw Groundtruth generation
For each video:
......@@ -586,7 +606,7 @@ This will essentially do the same thing as the script, in order to let you chang
This will create, for each video, a folder `/path/to/raw_GT/groundtruth_depth/<video name>/` containing compressed files with depth information. The option `--write_occlusion_depth` will make the folder `/path/to/raw_GT/` much heavier, but it is optional and only used for inspection purposes.
14. Dataset conversion
15. Dataset conversion
For each video:
......@@ -604,9 +624,3 @@ This will essentially do the same thing as the script, in order to let you chang
--downscale 4 \
--threads 8
```
\ No newline at end of file
......@@ -19,7 +19,7 @@ def extrapolate_position(speeds, timestamps, initial_position, final_position):
return trapz + interp
def preprocess_metadata(metadata, proj, centroid):
def preprocess_metadata(metadata, proj):
def lambda_fun(x):
return pd.Series(proj(*x), index=["x", "y"])
position_xy = metadata[["location_longitude", "location_latitude"]].apply(lambda_fun, axis=1)
......@@ -57,17 +57,16 @@ def preprocess_metadata(metadata, proj, centroid):
for start, end in zip(invalidity_start, validity_start):
positions[start:end] = extrapolate_position(speed[start:end], timestamps[start:end], positions[start], positions[end])
positions -= centroid
metadata["x"], metadata["y"], metadata["z"] = positions.transpose()
return metadata
def extract_metadata(folder_path, file_path, native_wrapper, proj, w, h, f, centroid, save_path=None):
def extract_metadata(folder_path, file_path, native_wrapper, proj, w, h, f, save_path=None):
metadata = native_wrapper.vmeta_extract(file_path)
metadata = metadata.iloc[:-1]
metadata = preprocess_metadata(metadata, proj, centroid)
metadata = preprocess_metadata(metadata, proj)
video_quality = h * w / f
metadata["video_quality"] = video_quality
metadata["height"] = h
......
......@@ -39,8 +39,9 @@ def extract_sky_mask(network, image_paths, mask_folder):
image_tensor = torch.from_numpy(images).float()/255
image_tensor = image_tensor.permute(0, 3, 1, 2) # shape [B, C, H, W]
scale_factor = 512/image_tensor.shape[3]
reduced = F.interpolate(image_tensor, scale_factor=scale_factor, mode='area')
w_r = 512
h_r = int(512 * h / w)
reduced = F.interpolate(image_tensor, size=(h_r, w_r), mode='area')
result = network(reduced.cuda())
classes = torch.max(result, 1)[1]
......
import las2ply
import rawpy
import imageio
import numpy as np
import pandas as pd
from pyproj import Proj
from edit_exif import get_gps_location
from wrappers import Colmap, FFMpeg, PDraw, ETH3D, PCLUtil
from cli_utils import set_argparser, print_step, print_workflow
from video_localization import localize_video, generate_GT
import meshlab_xml_writer as mxw
import videos_to_colmap as v2c
import generate_sky_masks as gsm
import prepare_images as pi
import prepare_workspace as pw
def prepare_point_clouds(pointclouds, lidar_path, verbose, eth3d, pcl_util, SOR, pointcloud_resolution, **env):
......@@ -42,117 +35,6 @@ def prepare_point_clouds(pointclouds, lidar_path, verbose, eth3d, pcl_util, SOR,
return converted_clouds, output_centroid
def extract_gps_and_path(existing_pictures, image_path, system, centroid, **env):
proj = Proj(system)
georef_list = []
for img in existing_pictures:
gps = get_gps_location(img)
if gps is not None:
lat, lng, alt = gps
x, y = proj(lng, lat)
x -= centroid[0]
y -= centroid[1]
alt -= centroid[2]
georef_list.append("{} {} {} {}\n".format(img.relpath(image_path), x, y, alt))
return georef_list
def extract_pictures_to_workspace(input_folder, image_path, workspace, colmap,
raw_ext, pic_ext, more_sift_features, **env):
picture_folder = input_folder / "Pictures"
picture_folder.merge_tree(image_path)
raw_files = sum((list(image_path.walkfiles('*{}'.format(ext))) for ext in raw_ext), [])
for raw in raw_files:
if not any((raw.stripext() + ext).isfile() for ext in pic_ext):
raw_array = rawpy.imread(raw)
rgb = raw_array.postprocess()
imageio.imsave(raw.stripext() + ".jpg", rgb)
raw.remove()
gsm.process_folder(folder_to_process=image_path, image_path=image_path, pic_ext=pic_ext, **env)
colmap.extract_features(per_sub_folder=True, more=more_sift_features)
return sum((list(image_path.walkfiles('*{}'.format(ext))) for ext in pic_ext), [])
def extract_videos_to_workspace(video_path, **env):
return v2c.process_video_folder(output_video_folder=video_path, **env)
def check_input_folder(path):
def print_error_string():
print("Error, bad input folder structure")
print("Expected :")
print(str(path/"Lidar"))
print(str(path/"Pictures"))
print(str(path/"Videos"))
print()
print("but got :")
print("\n".join(str(d) for d in path.dirs()))
if all((path/d).isdir() for d in ["Lidar", "Pictures", "Videos"]):
return
else:
print_error_string()
def prepare_workspace(path, env):
for dirname, key in zip(["Lidar", "Pictures", "Masks",
"Pictures/Videos", "Thorough/0", "Thorough/georef", "Thorough/georef_full",
"Videos_reconstructions"],
["lidar_path", "image_path", "mask_path",
"video_path", "thorough_recon", "georef_recon", "georef_full_recon",
"video_recon"]):
(path / dirname).makedirs_p()
env[key] = path / dirname
env["thorough_db"] = path / "scan_thorough.db"
env["video_frame_list_thorough"] = env["image_path"] / "video_frames_for_thorough_scan.txt"
env["georef_frames_list"] = env["image_path"] / "georef.txt"
env["lidar_mlp"] = env["workspace"] / "lidar.mlp"
env["with_normals_path"] = env["lidar_path"] / "with_normals.ply"
env["aligned_mlp"] = env["workspace"] / "aligned_model.mlp"
env["occlusion_ply"] = env["lidar_path"] / "occlusion_model.ply"
env["splats_ply"] = env["lidar_path"] / "splats_model.ply"
env["occlusion_mlp"] = env["lidar_path"] / "occlusions.mlp"
env["splats_mlp"] = env["lidar_path"] / "splats.mlp"
env["georefrecon_ply"] = env["georef_recon"] / "georef_reconstruction.ply"
env["matrix_path"] = env["workspace"] / "matrix_thorough.txt"
env["indexed_vocab_tree"] = env["workspace"] / "vocab_tree_thorough.bin"
env["dense_workspace"] = env["thorough_recon"].parent/"dense"
def prepare_video_workspace(video_name, video_frames_folder,
raw_output_folder, converted_output_folder,
video_recon, video_path, **env):
video_env = {"video_name": video_name,
"video_frames_folder": video_frames_folder}
relative_path_folder = video_frames_folder.relpath(video_path)
video_env["lowfps_db"] = video_frames_folder / "video_low_fps.db"
video_env["metadata"] = video_frames_folder / "metadata.csv"
video_env["lowfps_image_list_path"] = video_frames_folder / "lowfps.txt"
video_env["chunk_image_list_paths"] = sorted(video_frames_folder.files("full_chunk_*.txt"))
video_env["chunk_dbs"] = [video_frames_folder / fp.namebase + ".db" for fp in video_env["chunk_image_list_paths"]]
colmap_root = video_recon / relative_path_folder
video_env["colmap_models_root"] = colmap_root
video_env["full_model"] = colmap_root
video_env["lowfps_model"] = colmap_root / "lowfps"
num_chunks = len(video_env["chunk_image_list_paths"])
video_env["chunk_models"] = [colmap_root / "chunk_{}".format(index) for index in range(num_chunks)]
video_env["final_model"] = colmap_root / "final"
output = {}
output["images_root_folder"] = raw_output_folder / "images"
output["video_frames_folder"] = output["images_root_folder"] / relative_path_folder
output["model_folder"] = raw_output_folder / "models" / relative_path_folder
output["interpolated_frames_list"] = output["model_folder"] / "interpolated_frames.txt"
output["final_model"] = output["model_folder"] / "final"
output["kitti_format_folder"] = converted_output_folder / "KITTI" / relative_path_folder
output["viz_folder"] = converted_output_folder / "video" / relative_path_folder
video_env["output_env"] = output
video_env["already_localized"] = env["resume_work"] and output["model_folder"].isdir()
video_env["GT_already_done"] = env["resume_work"] and (raw_output_folder / "ground_truth_depth" / video_name.namebase).isdir()
return video_env
def main():
args = set_argparser().parse_args()
env = vars(args)
......@@ -163,9 +45,9 @@ def main():
args.skip_step += [1, 2, 4, 5, 6]
if args.begin_step is not None:
args.skip_step += list(range(args.begin_step))
check_input_folder(args.input_folder)
pw.check_input_folder(args.input_folder)
args.workspace = args.workspace.abspath()
prepare_workspace(args.workspace, env)
pw.prepare_workspace(args.workspace, env)
colmap = Colmap(db=env["thorough_db"],
image_path=env["image_path"],
mask_path=env["mask_path"],
......@@ -188,45 +70,33 @@ def main():
ply_files = (args.input_folder/"Lidar").files("*.ply")
input_pointclouds = las_files + ply_files
env["videos_list"] = sum((list((args.input_folder/"Videos").walkfiles('*{}'.format(ext))) for ext in args.vid_ext), [])
no_gt_folder = args.input_folder/"Videos"/"no_groundtruth"
if no_gt_folder.isdir():
env["videos_to_localize"] = [v for v in env["videos_list"] if not str(v).startswith(no_gt_folder)]
else:
env["videos_to_localize"] = env["videos_list"]
i = 1
if i not in args.skip_step:
print_step(i, "Point Cloud Preparation")
env["pointclouds"], env["centroid"] = prepare_point_clouds(input_pointclouds, **env)
if env["centroid"] is not None:
np.savetxt(env["centroid_path"], env["centroid"])
else:
env["pointclouds"] = env["lidar_path"].files("*inliers.ply")
centroid_path = sorted(env["lidar_path"].files("*_centroid.txt"))[0]
env["centroid"] = np.loadtxt(centroid_path)
if env["centroid_path"].isfile():
env["centroid"] = np.loadtxt(env["centroid_path"])
i += 1
if i not in args.skip_step:
print_step(i, "Pictures preparation")
env["existing_pictures"] = extract_pictures_to_workspace(**env)
env["existing_pictures"] = pi.extract_pictures_to_workspace(**env)
else:
env["existing_pictures"] = sum((list(env["image_path"].walkfiles('*{}'.format(ext))) for ext in env["pic_ext"]), [])
i += 1
if i not in args.skip_step:
print_step(i, "Extracting Videos and selecting optimal frames for a thorough scan")
existing_georef = extract_gps_and_path(**env)
path_lists, env["videos_frames_folders"] = extract_videos_to_workspace(fps=args.lowfps, **env)
if path_lists is not None:
with open(env["video_frame_list_thorough"], "w") as f:
f.write("\n".join(path_lists["thorough"]["frames"]))
with open(env["georef_frames_list"], "w") as f:
f.write("\n".join(existing_georef) + "\n")
f.write("\n".join(path_lists["thorough"]["georef"]) + "\n")
for v in env["videos_list"]:
video_folder = env["videos_frames_folders"][v]
with open(video_folder / "lowfps.txt", "w") as f:
f.write("\n".join(path_lists[v]["frames_lowfps"]) + "\n")
with open(video_folder / "georef.txt", "w") as f:
f.write("\n".join(existing_georef) + "\n")
f.write("\n".join(path_lists["thorough"]["georef"]) + "\n")
f.write("\n".join(path_lists[v]["georef_lowfps"]) + "\n")
for j, l in enumerate(path_lists[v]["frames_full"]):
with open(video_folder / "full_chunk_{}.txt".format(j), "w") as f:
f.write("\n".join(l) + "\n")
existing_georef, env["centroid"] = pi.extract_gps_and_path(**env)
env["videos_frames_folders"] = pi.extract_videos_to_workspace(existing_georef=existing_georef,
fps=args.lowfps, **env)
else:
env["videos_frames_folders"] = {}
by_name = {v.namebase: v for v in env["videos_list"]}
......@@ -236,17 +106,17 @@ def main():
env["videos_frames_folders"][by_name[video_name]] = folder
env["videos_workspaces"] = {}
for v, frames_folder in env["videos_frames_folders"].items():
env["videos_workspaces"][v] = prepare_video_workspace(v, frames_folder, **env)
env["videos_workspaces"][v] = pw.prepare_video_workspace(v, frames_folder, **env)
i += 1
if i not in args.skip_step:
print_step(i, "First thorough photogrammetry")
gsm.process_folder(folder_to_process=env["video_path"], **env)
env["thorough_recon"].makedirs_p()
colmap.extract_features(image_list=env["video_frame_list_thorough"], more=args.more_sift_features)
colmap.index_images(vocab_tree_output=env["indexed_vocab_tree"], vocab_tree_input=args.vocab_tree)
colmap.match(method="vocab_tree", vocab_tree=env["indexed_vocab_tree"])
colmap.map(output=env["thorough_recon"].parent)
colmap.adjust_bundle(output=env["thorough_recon"], input=env["thorough_recon"],
colmap.map(output=env["thorough_recon"])
colmap.adjust_bundle(output=env["thorough_recon"] / "0", input=env["thorough_recon"] / "0",
num_iter=100, refine_extra_params=True)
i += 1
......@@ -254,7 +124,7 @@ def main():
print_step(i, "Alignment of photogrammetric reconstruction with GPS")
colmap.align_model(output=env["georef_recon"],
input=env["thorough_recon"],
input=env["thorough_recon"] / "0",
ref_images=env["georef_frames_list"])
env["georef_recon"].merge_tree(env["georef_full_recon"])
if args.inspect_dataset:
......@@ -272,18 +142,19 @@ def main():
i += 1
if i not in args.skip_step:
print_step(i, "Video localization with respect to reconstruction")
for j, v in enumerate(env["videos_list"]):
print("\n\nNow working on video {} [{}/{}]".format(v, j + 1, len(env["videos_list"])))
for j, v in enumerate(env["videos_to_localize"]):
print("\n\nNow working on video {} [{}/{}]".format(v, j + 1, len(env["videos_to_localize"])))
video_env = env["videos_workspaces"][v]
localize_video(video_name=v,
video_frames_folder=env["videos_frames_folders"][v],
video_index=j+1,
num_videos=len(env["videos_list"]),
num_videos=len(env["videos_to_localize"]),
**video_env, **env)
i += 1
if i not in args.skip_step:
print_step(i, "Full reconstruction point cloud densificitation")
env["georef_full_recon"].makedirs_p()
colmap.undistort(input=env["georef_full_recon"])
colmap.dense_stereo()
colmap.stereo_fusion(output=env["georefrecon_ply"])
......@@ -302,7 +173,7 @@ def main():
print_step(i, "Registration of photogrammetric reconstruction with respect to Lidar Point Cloud")
if args.registration_method == "eth3d":
# Note : ETH3D doesn't register with scale, this might not be suitable for very large areas
mxw.add_mesh_to_project(env["lidar_mlp"], env["aligned_mlp"], env["georefrecon_ply"], index=0)
mxw.add_meshes_to_project(env["lidar_mlp"], env["aligned_mlp"], [env["georefrecon_ply"]], start_index=0)
eth3d.align_with_ICP(env["aligned_mlp"], env["aligned_mlp"], scales=5)
mxw.remove_mesh_from_project(env["aligned_mlp"], env["aligned_mlp"], 0)
print(mxw.get_mesh(env["aligned_mlp"], index=0)[0])
......@@ -354,12 +225,12 @@ def main():
i += 1
if i not in args.skip_step:
print_step(i, "Groud Truth generation")
for j, v in enumerate(env["videos_list"]):
for j, v in enumerate(env["videos_to_localize"]):
video_env = env["videos_workspaces"][v]
generate_GT(video_name=v, GT_already_done=video_env["GT_already_done"],
video_index=j+1,
num_videos=len(env["videos_list"]),
num_videos=len(env["videos_to_localize"]),
metadata=video_env["metadata"],
**video_env["output_env"], **env)
......
import numpy as np
from wrappers import Colmap, FFMpeg, PDraw, ETH3D, PCLUtil
from cli_utils import set_argparser, print_step, print_workflow
from video_localization import localize_video, generate_GT
import meshlab_xml_writer as mxw
import prepare_images as pi
import prepare_workspace as pw
def main():
args = set_argparser().parse_args()
env = vars(args)
if args.show_steps:
print_workflow()
return
if args.add_new_videos:
args.skip_step += [1, 2, 4, 5, 6]
if args.begin_step is not None:
args.skip_step += list(range(args.begin_step))
pw.check_input_folder(args.input_folder, with_lidar=False)
args.workspace = args.workspace.abspath()
pw.prepare_workspace(args.workspace, env, with_lidar=False)
colmap = Colmap(db=env["thorough_db"],
image_path=env["image_path"],
mask_path=env["mask_path"],
dense_workspace=env["dense_workspace"],
binary=args.colmap,
verbose=args.verbose,
logfile=args.log)
env["colmap"] = colmap
ffmpeg = FFMpeg(args.ffmpeg, verbose=args.verbose, logfile=args.log)
env["ffmpeg"] = ffmpeg
pdraw = PDraw(args.nw, verbose=args.verbose, logfile=args.log)
env["pdraw"] = pdraw
eth3d = ETH3D(args.eth3d, args.raw_output_folder / "Images", args.max_occlusion_depth,
verbose=args.verbose, logfile=args.log)
env["eth3d"] = eth3d
pcl_util = PCLUtil(args.pcl_util, verbose=args.verbose, logfile=args.log)
env["pcl_util"] = pcl_util
env["videos_list"] = sum((list((args.input_folder/"Videos").walkfiles('*{}'.format(ext))) for ext in args.vid_ext), [])
no_gt_folder = args.input_folder/"Videos"/"no_groundtruth"
if no_gt_folder.isdir():
env["videos_to_localize"] = [v for v in env["videos_list"] if not str(v).startswith(no_gt_folder)]
else:
env["videos_to_localize"] = env["videos_list"]
i = 1
if i not in args.skip_step:
print_step(i, "Pictures preparation")
env["existing_pictures"] = pi.extract_pictures_to_workspace(**env)
else:
env["existing_pictures"] = sum((list(env["image_path"].walkfiles('*{}'.format(ext))) for ext in env["pic_ext"]), [])
i += 1
if i not in args.skip_step:
print_step(i, "Extracting Videos and selecting optimal frames for a thorough scan")
env["videos_frames_folders"] = pi.extract_videos_to_workspace(fps=args.lowfps, **env)
else:
env["videos_frames_folders"] = {}
by_name = {v.namebase: v for v in env["videos_list"]}
for folder in env["video_path"].walkdirs():
video_name = folder.basename()
if video_name in by_name.keys():
env["videos_frames_folders"][by_name[video_name]] = folder
env["videos_workspaces"] = {}
for v, frames_folder in env["videos_frames_folders"].items():
env["videos_workspaces"][v] = pw.prepare_video_workspace(v, frames_folder, **env)
i += 1
if i not in args.skip_step:
print_step(i, "First thorough photogrammetry")
env["thorough_recon"].makedirs_p()
colmap.extract_features(image_list=env["video_frame_list_thorough"], more=args.more_sift_features)
colmap.index_images(vocab_tree_output=env["indexed_vocab_tree"], vocab_tree_input=args.vocab_tree)
colmap.match(method="vocab_tree", vocab_tree=env["indexed_vocab_tree"])
colmap.map(output=env["thorough_recon"])
colmap.adjust_bundle(output=env["thorough_recon"] / "0", input=env["thorough_recon"] / "0",
num_iter=100, refine_extra_params=True)
i += 1
if i not in args.skip_step:
print_step(i, "Alignment of photogrammetric reconstruction with GPS")
env["georef_recon"].makedirs_p()
colmap.align_model(output=env["georef_recon"],
input=env["thorough_recon"] / "0",
ref_images=env["georef_frames_list"])
env["georef_recon"].merge_tree(env["georef_full_recon"])
if args.inspect_dataset:
colmap.export_model(output=env["georef_recon"] / "georef_sparse.ply",
input=env["georef_recon"])
georef_mlp = env["georef_recon"]/"georef_recon.mlp"
mxw.create_project(georef_mlp, [env["georefrecon_ply"]])
colmap.export_model(output=env["georef_recon"],
input=env["georef_recon"],
output_type="TXT")
eth3d.inspect_dataset(scan_meshlab=georef_mlp,
colmap_model=env["georef_recon"],
image_path=env["image_path"])
i += 1
if i not in args.skip_step:
print_step(i, "Video localization with respect to reconstruction")
for j, v in enumerate(env["videos_to_localize"]):
print("\n\nNow working on video {} [{}/{}]".format(v, j + 1, len(env["videos_to_localize"])))
video_env = env["videos_workspaces"][v]
localize_video(video_name=v,
video_frames_folder=env["videos_frames_folders"][v],
step_index=i, video_index=j+1,
num_videos=len(env["videos_to_localize"]),
**video_env, **env)
i += 1
if i not in args.skip_step:
print_step(i, "Full reconstruction point cloud densificitation")
env["georef_full_recon"].makedirs_p()
colmap.undistort(input=env["georef_full_recon"])
colmap.dense_stereo()
colmap.stereo_fusion(output=env["georefrecon_ply"])
i += 1
if i not in args.skip_step:
print_step(i, "Reconstruction cleaning")
filtered = env["georefrecon_ply"].stripext() + "_filtered.ply"
pcl_util.filter_cloud(input_file=env["georefrecon_ply"],
output_file=filtered,
knn=args.SOR[0], std=args.SOR[1])
mxw.create_project(env["aligned_mlp"], [filtered])
i += 1
if i not in args.skip_step:
print_step(i, "Occlusion Mesh computing")