Commit 554db4da authored by Clément Pinard's avatar Clément Pinard

README update

parent c164d84d
......@@ -133,14 +133,15 @@ This will essentially do the same thing as the script, in order to let you chang
```
2. Cleaning
2. Point Cloud Cleaning
For each ply file :
```
ETHD3D/build/PointCloudCleaner \
--in /path/to/cloud_lidar.ply \
--filter <5,10>
```
(local outliers removal, doesn't remove isolated points)
(local outliers removal, doesn't necessarily remove isolated points)
or
```
pcl_util/build/CloudSOR \
......@@ -149,7 +150,23 @@ This will essentially do the same thing as the script, in order to let you chang
--knn 5 --std 6
```
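If you prefer doing this step in Python, a roughly equivalent statistical outlier removal can be done with Open3D. This is only a sketch under the assumption that Open3D is available; it is not part of this repository, and the parameters simply mirror the `--knn` / `--std` options above.
```
import open3d as o3d

# Statistical outlier removal: for every point, the mean distance to its
# nb_neighbors nearest neighbours is compared to the global distribution,
# and points further away than std_ratio standard deviations are dropped.
cloud = o3d.io.read_point_cloud("/path/to/cloud_lidar.ply")
filtered, kept_indices = cloud.remove_statistical_outlier(nb_neighbors=5, std_ratio=6.0)
o3d.io.write_point_cloud("/path/to/cloud_lidar_filtered.ply", filtered)
```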
3. Video frame addition to COLMAP db file
3. Meshlab Project creation
```
python meshlab_xml_writer.py create \
--input_models /path/to/cloud1 [../path/to/cloudN] \
--output_meshlab /path/to/lidar.mlp
```
Optionally, if we have multiple lidar scans (which is not the case here), we can run a registration step with ETH3D
```
ETHD3D/build/ICPScanAligner \
-i /path/to/lidar.mlp \
-o /path/to/lidar.mlp \
--number_of_scales 5
```
4. Video frame addition to COLMAP db file
```
python video_to_colmap \
......@@ -174,7 +191,7 @@ This will essentially do the same thing as the script, in order to let you chang
And finally, it will divide long videos into chunks with corresponding lists of file paths, so that we don't deal with too large sequences (the limit here is 4000 frames)
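For reference, the chunking logic amounts to something like the sketch below (illustrative only: the function name and file layout are assumptions, the actual implementation lives in `video_to_colmap.py`).
```
from path import Path

def write_video_chunks(frame_paths, output_dir, max_frames=4000):
    # Split the full frame list into chunks of at most max_frames entries
    # and write one text file listing the frame paths of each chunk.
    output_dir = Path(output_dir)
    output_dir.makedirs_p()
    chunks = [frame_paths[i:i + max_frames]
              for i in range(0, len(frame_paths), max_frames)]
    for n, chunk in enumerate(chunks):
        (output_dir / "chunk_{}.txt".format(n)).write_text("\n".join(chunk) + "\n")
    return chunks
```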
4. First COLMAP step : feature extraction
5. First COLMAP step : feature extraction
```
......@@ -204,7 +221,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_index /path/to/indexed_vocab_tree
```
5. Second COLMAP step : matching. For less than 1000 images, you can use exhaustive matching (this will take around 2hours). If there is too much images, you can use either spatial matching or vocab tree matching
6. Second COLMAP step : matching. For fewer than 1000 images, you can use exhaustive matching (this will take around 2 hours). If there are too many images, you can use either spatial matching or vocab tree matching (a small helper for scripting this choice is sketched after the command below)
```
colmap exhaustive_matcher \
......@@ -225,7 +242,7 @@ This will essentially do the same thing as the script, in order to let you chang
--SiftMatching.guided_matching 1
```
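The helper mentioned above could look like this. It is only a sketch (not part of the repository): it counts the images registered in the COLMAP database and calls the corresponding matcher through the CLI.
```
import sqlite3
import subprocess

def run_matching(database_path, vocab_tree_path, exhaustive_limit=1000):
    # Count the images currently stored in the COLMAP database.
    with sqlite3.connect(database_path) as db:
        num_images = db.execute("SELECT COUNT(*) FROM images").fetchone()[0]
    if num_images < exhaustive_limit:
        cmd = ["colmap", "exhaustive_matcher", "--database_path", database_path]
    else:
        cmd = ["colmap", "vocab_tree_matcher", "--database_path", database_path,
               "--VocabTreeMatching.vocab_tree_path", vocab_tree_path]
    subprocess.run(cmd + ["--SiftMatching.guided_matching", "1"], check=True)
```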
6. Third COLMAP step : thorough mapping.
7. Third COLMAP step : thorough mapping.
```
mkdir -p /path/to/thorough/
......@@ -250,7 +267,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_path /path/to/thorough/0
```
7. Fourth COLMAP step : [georeferencing](https://colmap.github.io/faq.html#geo-registration)
8. Fourth COLMAP step : [georeferencing](https://colmap.github.io/faq.html#geo-registration)
```
mkdir -p /path/to/geo_registered_model
......@@ -264,7 +281,7 @@ This will essentially do the same thing as the script, in order to let you chang
This model will be the reference model; every further model and frame localization will be done with respect to this one.
Even if we could, we don't run Point cloud registration right now, as the next steps will help us to have a more complete point cloud.
8. Video Localization
9. Video Localization
All these substeps will populate the db file, which is then used for matching, so you need to make a copy of it for each video.
1. Extract all the frames of the video to the same directory where the `video_to_colmap.py` script exported the frame subset of this video.
......@@ -408,14 +425,14 @@ This will essentially do the same thing as the script, in order to let you chang
7. Filter the image sequence to exclude frames with absurd acceleration, and interpolate them instead
```
python filter_colmap_model.py \
--input_images_colmap /path/to/full_video_model/images.txt \
--output_images_colmap /path/to/full_video_model/images.txt \
--metdata /path/to/images/video/dir/metadata.csv \
--interpolate
--input_images_colmap /path/to/final_model/images.txt \
--output_images_colmap /path/to/final_model/images.txt \
--metadata /path/to/images/video/dir/metadata.csv \
--interpolated_frames_list /path/to/images/video/dir/interpolated_frames.txt
```
At the end of these per-video tasks, you should have a model at `/path/to/georef_full` with all photogrammetry images plus the localization of video frames at 1 fps, and for each video a TXT file with positions with respect to the first geo-registered reconstruction.
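If you want to inspect these positions, the per-video model is in COLMAP's text format, so the camera centers can be recovered with a few lines of numpy. This is a sketch under the assumption that the TXT file follows the standard `images.txt` layout (one pose line and one 2D-points line per image).
```
import numpy as np

def quat_to_rotmat(qw, qx, qy, qz):
    # Rotation matrix from a unit quaternion (COLMAP stores world-to-camera poses).
    return np.array([
        [1 - 2*qy*qy - 2*qz*qz, 2*qx*qy - 2*qz*qw,     2*qx*qz + 2*qy*qw],
        [2*qx*qy + 2*qz*qw,     1 - 2*qx*qx - 2*qz*qz, 2*qy*qz - 2*qx*qw],
        [2*qx*qz - 2*qy*qw,     2*qy*qz + 2*qx*qw,     1 - 2*qx*qx - 2*qy*qy]])

def camera_centers(images_txt):
    # Return {image name: camera center in model coordinates} from a COLMAP images.txt
    centers = {}
    with open(images_txt) as f:
        lines = [l for l in f if l.strip() and not l.startswith("#")]
    for pose_line in lines[::2]:  # every second line lists the 2D points, skip it
        fields = pose_line.split()
        qw, qx, qy, qz, tx, ty, tz = map(float, fields[1:8])
        rotation = quat_to_rotmat(qw, qx, qy, qz)
        centers[fields[9]] = -rotation.T @ np.array([tx, ty, tz])
    return centers
```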
9. Point cloud densification
10. Point cloud densification
```
colmap image_undistorter \
......@@ -445,7 +462,152 @@ This will essentially do the same thing as the script, in order to let you chang
This will also create a `/path/to/georef_dense.ply.vis` file which describes frames from which each point is visible.
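For reference, the `.vis` file is a small binary index. The sketch below parses it under the assumption that it follows COLMAP's fused point cloud visibility layout (a point count, then for each point the number of images seeing it followed by their image indices); treat this as an assumption rather than a specification.
```
import struct

def read_vis(path):
    # Assumed layout: uint64 point count, then for each point a uint32 image
    # count followed by that many uint32 image indices (little endian).
    visibility = []
    with open(path, "rb") as f:
        num_points, = struct.unpack("<Q", f.read(8))
        for _ in range(num_points):
            num_images, = struct.unpack("<I", f.read(4))
            ids = struct.unpack("<{}I".format(num_images), f.read(4 * num_images))
            visibility.append(list(ids))
    return visibility
```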
10.
11. Point cloud registration
Convert meshlab project to PLY with normals :
Determine the transformation to apply to `/path/to/lidar.mlp` to get to `/path/to/georef_dense.ply` so that we can have the pose of the cameras with respect to the lidar.
Option 1 : construct a meshlab project similar to `/path/to/lidar.mlp` with `/path/to/georef_dense.ply` as first mesh and run ETH3D's registration tool
```
python meshlab_xml_writer.py add \
--input_models /path/to/georef_dense.ply \
--start_index 0 \
--input_meshlab /path/to/lidar.mlp \
--output_meshlab /path/to/registered.mlp
```
```
ETHD3D/build/ICPScanAligner \
-i /path/to/registered.mlp \
-o /path/to/registered.mlp \
--number_of_scales 5
```
The second matrix in `/path/to/registered.mlp` will be the transformation matrix from `/path/to/lidar.mlp` to `/path/to/georef_dense.ply`
Important note : this operation does not adjust scale, which can be a problem with very large models.
Option 2 : construct a PLY file from the lidar scans and register the reconstructed cloud with respect to the lidar, with PCL or CloudCompare. We do it this way (and not from lidar to reconstructed) because it is usually easier to register the cloud with fewer points using classic ICP.
```
ETHD3D/build/NormalEstimator \
-i /path/to/lidar.mlp \
-o /path/to/lidar_with_normals.ply
```
```
pcl_util/build/CloudRegistrator \
--georef /path/to/georef_dense.ply \
--lidar /path/to/lidar_with_normals.ply \
--output_matrix /path/to/registration_matrix.txt
```
Note that `/path/to/registration_matrix.txt` stores the inverse of the matrix we want, so you have to invert it and save the result back.
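For example with numpy (a minimal sketch; the file is assumed to hold the 16 coefficients of a 4x4 matrix separated by whitespace, which is the format the other scripts of this repository read with `np.fromfile(..., sep=" ")`):
```
import numpy as np

# Load the matrix written by CloudRegistrator (reconstruction -> lidar),
# invert it to get the lidar -> reconstruction transform we actually want,
# and save it back in the same whitespace-separated format.
matrix = np.fromfile("/path/to/registration_matrix.txt", sep=" ").reshape(4, 4)
np.savetxt("/path/to/registration_matrix.txt", np.linalg.inv(matrix))
```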
Or use CloudCompare : https://www.cloudcompare.org/doc/wiki/index.php?title=Alignment_and_Registration
Best results were obtained with these consecutive steps :
- Crop the /path/to/georef_dense.ply cloud, otherwise the Octomap will be very inefficient, and the cloud usually has very distant outliers
- Apply noise filtering on the cropped cloud
- Apply fine registration, with a final overlap of 50%, scale adjustment, and farthest point removal enabled
- Save the inverse of the resulting registration
For the fine registration part, as said earlier, the aligned cloud is the reconstruction and the reference cloud is the lidar.
Finally, apply the registration matrix to `/path/to/lidar.mlp` to get `/path/to/registered.mlp`
```
python meshlab_xml_writer.py transform \
--input_meshlab /path/to/lidar.mlp \
--output_meshlab /path/to/registered.mlp \
--transform /path/to/registration_matrix.txt
```
12. Occlusion Mesh generation
Use COLMAP delaunay mesher to generate a mesh from PLY + VIS.
Normally, COLMAP expects the cloud it generated during the `stereo_fusion` step, but we use the lidar point cloud instead.
Get a PLY file for the registered lidar point cloud
```
ETHD3D/build/NormalEstimator \
-i /path/to/registered.mlp \
-o /path/to/lidar_with_normals.ply
```
```
pcl_util/build/CreateVisFile \
--georef_dense /path/to/georef_dense.ply \
--lidar /path/to/lidar_with_normals.ply \
--output_cloud /path/to/dense/fused.ply \
--resolution 0.2
```
It is important to place the resulting point cloud at the root of the COLMAP MVS workspace `/path/to/dense` (the one that was used for generating `/path/to/georef_dense.ply`) and to name it `fused.ply`, because these paths are hardcoded in COLMAP's code.
The file `/path/to/dense/fused.ply.vis` will also be generated. The resolution option is used to reduce the computational load of the next step.
```
colmap delaunay_mesher \
--input_type dense \
--input_path /path/to/dense \
--output_path /path/to/occlusion_mesh.ply
```
Generate splats for lidar points that are outside the close range of the occlusion mesh. See https://github.com/ETH3D/dataset-pipeline#splat-creation
```
ETH3D/build/SplatCreator \
--point_normal_cloud_path /path/to/lidar_with_normals.ply \
--mesh_path /path/to/occlusion_mesh.ply \
--output_path /path/to/splats.ply \
--distance_threshold 0.1
```
The distance threshold defines what is considered the close range of the occlusion mesh: it is the distance from the mesh beyond which a splat (a small square surface) will be created for a lidar point.
13. Raw Groundtruth generation
For each video :
```
ETH3D/build/GroundTruthCreator \
--scan_alignment_path /path/to/registered.mlp \
--image_base_path /path/to/images \
--state_path /path/to/final_model \
--output_folder_path /path/to/raw_GT \
--occlusion_mesh_path /path/to/occlusion_mesh.ply \
--occlusion_splats_path /path/to/splats.ply \
--max_occlusion_depth 200 \
--write_point_cloud 0 \
--write_depth_maps 1 \
--write_occlusion_depth 1 \
--compress_depth_maps 1
```
This will create, for each video, a folder `/path/to/raw_GT/groundtruth_depth/<video name>/` containing compressed files with depth information. The option `--write_occlusion_depth` will make the folder `/path/to/raw_GT/` much heavier, but it is optional and only used for inspection purposes.
14. Dataset conversion
For each video :
```
python convert_dataset.py \
--depth_dir /path/to/raw_GT/groundtruth_depth/<video name>/ \
--images_root_folder /path/to/images/ \
--occ_dir /path/to/raw_GT/occlusion_depth/<video name>/ \
--metadata_path /path/to/images/video/dir/metadata.csv \
--dataset_output_dir /path/to/dataset/ \
--video_output_dir /path/to/vizualisation/ \
--interpolated_frames_list /path/to/images/video/dir/interpolated_frames.txt \
--final_model /path/to/final_model/ \
--video \
--downscale 4 \
--threads 8
```
......
......@@ -143,17 +143,21 @@ parser = ArgumentParser(description='create a vizualisation from ground truth cr
formatter_class=ArgumentDefaultsHelpFormatter)
parser.add_argument('--depth_dir', metavar='DIR', type=Path)
parser.add_argument('--img_dir', metavar='DIR', type=Path)
parser.add_argument('--images_root_folder', metavar='DIR', type=Path)
parser.add_argument('--occ_dir', metavar='DIR', type=Path)
parser.add_argument('--metdata', type=Path)
parser.add_argument('--output_dir', metavar='DIR', default=None, type=Path)
parser.add_argument('--metadata_path', type=Path)
parser.add_argument('--dataset_output_dir', metavar='DIR', default=None, type=Path)
parser.add_argument('--video_output_dir', metavar='DIR', default=None, type=Path)
parser.add_argument('--interpolated_frames_list', metavar='TXT', type=Path)
parser.add_argument('--final_model', metavar='DIR', type=Path)
parser.add_argument('--video', action='store_true')
parser.add_argument('--fps', default='1')
parser.add_argument('--downscale', type=int, default=1)
parser.add_argument('--threads', type=int, default=8)
def convert_dataset(final_model, depth_dir, images_root_folder, occ_dir, dataset_output_dir, video_output_dir, metadata_path, interpolated_frames_path,
fps, downscale, ffmpeg, threads=8, video=False, **env):
def convert_dataset(final_model, depth_dir, images_root_folder, occ_dir,
dataset_output_dir, video_output_dir, metadata_path, interpolated_frames_path,
downscale, ffmpeg, threads=8, video=False, **env):
dataset_output_dir.makedirs_p()
video_output_dir.makedirs_p()
cameras, images, _ = rm.read_model(final_model, '.txt')
......@@ -166,6 +170,7 @@ def convert_dataset(final_model, depth_dir, images_root_folder, occ_dir, dataset
interpolated_frames = [line[:-1] for line in f.readlines()]
metadata = pd.read_csv(metadata_path).set_index("db_id", drop=False).sort_values("time")
framerate = metadata["framerate"].values[0]
image_df = pd.DataFrame.from_dict(images, orient="index").set_index("id")
image_df = image_df.reindex(metadata.index)
depth_maps = []
......@@ -214,7 +219,7 @@ def convert_dataset(final_model, depth_dir, images_root_folder, occ_dir, dataset
if video:
video_path = str(video_output_dir/'{}_groundtruth_viz.mp4'.format(video_output_dir.namebase))
glob_pattern = str(video_output_dir/'*.png')
ffmpeg.create_video(video_path, glob_pattern, fps)
ffmpeg.create_video(video_path, glob_pattern, framerate)
if __name__ == '__main__':
......
......@@ -13,9 +13,9 @@ parser = ArgumentParser(description='Take all the drone videos of a folder and p
parser.add_argument('--input_images_colmap', metavar='FILE', type=Path)
parser.add_argument('--output_images_colmap', metavar='FILE', type=Path)
parser.add_argument('--interpolate', action="store_true")
parser.add_argument('--metadata', metavar='FILE', type=Path)
parser.add_argument('--visualize', action="store_true")
parser.add_argument('--interpolated_frames_list', type=Path)
'''
......@@ -79,7 +79,7 @@ def slerp_quats(quat_df, prefix=""):
def filter_colmap_model(input_images_colmap, output_images_colmap, metadata_path,
filter_degree=3, filter_time=0.1,
threshold_t=0.01, threshold_q=5e-3,
visualize=False, max_interpolate=2, **env):
visualize=False, **env):
if input_images_colmap.ext == ".txt":
images_dict = rm.read_images_text(input_images_colmap)
elif input_images_colmap.ext == ".bin":
......@@ -275,4 +275,6 @@ def filter_colmap_model(input_images_colmap, output_images_colmap, metadata_path
if __name__ == '__main__':
args = parser.parse_args()
env = vars(args)
filter_colmap_model(metadata_path=args.metadata, **env)
interpolated_frames = filter_colmap_model(metadata_path=args.metadata, **env)
with open(args.interpolated_frames_list, "w") as f:
f.write("\n".join(interpolated_frames) + "\n")
......@@ -145,7 +145,6 @@ def prepare_video_workspace(video_name, video_frames_folder,
output["model_folder"] = raw_output_folder / "models" / relative_path_folder
output["interpolated_frames_list"] = output["model_folder"] / "interpolated_frames.txt"
output["final_model"] = output["model_folder"] / "final"
output["video_fps"] = pd.read_csv(video_env["metadata"])["framerate"].values[0]
output["kitti_format_folder"] = converted_output_folder / "KITTI" / relative_path_folder
output["viz_folder"] = converted_output_folder / "video" / relative_path_folder
video_env["output_env"] = output
......@@ -289,49 +288,69 @@ def main():
colmap.dense_stereo()
colmap.stereo_fusion(output=env["georefrecon_ply"])
def get_matrix(path):
if path.isfile():
'''Note : We use the inverse matrix here, because in general, it's easier to register the reconstructed model into the lidar one,
as the reconstructed will have less points, but in the end we need the matrix to apply to the lidar point to be aligned
with the camera positions (ie the inverse)'''
return np.linalg.inv(np.fromfile(path, sep=" ").reshape(4, 4))
else:
print("Error, no registration matrix can be found, identity will be used")
return np.eye(4)
i += 1
if i not in args.skip_step:
print_step(i, "Registration of photogrammetric reconstruction with respect to Lidar Point Cloud")
if args.registration_method == "eth3d":
# Note : ETH3D doesn't register with scale, this might not be suitable for very large areas
mxw.add_mesh_to_project(env["lidar_mlp"], env["aligned_mlp"], env["georefrecon_ply"], index=0)
eth3d.align_with_ICP(env["aligned_mlp"], env["aligned_mlp"], scales=5)
mxw.remove_mesh_from_project(env["aligned_mlp"], env["aligned_mlp"], 0)
print(mxw.get_mesh(env["aligned_mlp"], index=0)[0])
matrix = np.linalg.inv(mxw.get_mesh(env["aligned_mlp"], index=0)[0])
np.savetxt(env["matrix_path"], matrix)
''' The new mlp is supposedly better than the one before because it was an ICP
with N+1 models instead of just N so we replace it with the result on this scan
by reversing the first transformation and getting back a mlp file with identity
as first transform matrix'''
mxw.apply_transform_to_project(env["aligned_mlp"], env["lidar_mlp"], matrix)
env["global_registration_matrix"] = matrix
else:
eth3d.compute_normals(env["with_normals_path"], env["lidar_mlp"], neighbor_radius=args.normal_radius)
if args.registration_method == "simple":
pcl_util.register_reconstruction(georef=env["georefrecon_ply"],
lidar=env["with_normals_path"],
output_matrix=env["matrix_path"],
max_distance=10)
elif args.registration_method == "eth3d":
temp_mlp = env["lidar_mlp"].stripext() + "_registered.mlp"
mxw.add_mesh_to_project(env["lidar_mlp"], temp_mlp, env["georefrecon_ply"], index=0)
eth3d.align_with_ICP(temp_mlp, temp_mlp, scales=5)
mxw.remove_mesh_from_project(temp_mlp, temp_mlp, 0)
print(mxw.get_mesh(temp_mlp, index=0)[0])
matrix = np.linalg.inv(mxw.get_mesh(temp_mlp, index=0)[0])
np.savetxt(env["matrix_path"], matrix)
elif args.registration_method == "interactive":
input("Get transformation matrix between {0} and {1} so that we should apply it to the reconstructed point cloud to have the lidar point cloud, "
"and paste it in the file {2}. When done, press ENTER".format(env["with_normals_path"], env["georefrecon_ply"], env["matrix_path"]))
if env["matrix_path"].isfile():
env["global_registration_matrix"] = np.linalg.inv(np.fromfile(env["matrix_path"], sep=" ").reshape(4, 4))
else:
print("Error, no registration matrix can be found, identity will be used")
env["global_registration_matrix"] = np.eye(4)
if args.inspect_dataset:
input("Get transformation matrix between {0} and {1} so that we should"
" apply it to the reconstructed point cloud to have the lidar point cloud, "
"and paste it in the file {2}. When done, press ENTER".format(env["with_normals_path"],
env["georefrecon_ply"],
env["matrix_path"]))
env["global_registration_matrix"] = get_matrix(env["matrix_path"])
mxw.apply_transform_to_project(env["lidar_mlp"], env["aligned_mlp"], env["global_registration_matrix"])
mxw.create_project(env["occlusion_mlp"], [env["occlusion_ply"]], transforms=[env["global_registration_matrix"]])
mxw.create_project(env["splats_mlp"], [env["splats_ply"]], transforms=[env["global_registration_matrix"]])
eth3d.inspect_dataset(scan_meshlab=env["aligned_mlp"],
colmap_model=env["georef_recon"],
image_path=env["image_path"])
else:
env["global_registration_matrix"] = get_matrix(env["matrix_path"])
i += 1
if i not in args.skip_step:
print_step(i, "Occlusion Mesh computing")
pcl_util.create_vis_file(env["georefrecon_ply"], env["with_normals_path"], env["matrix_path"],
output=env["with_normals_path"], resolution=args.mesh_resolution)
eth3d.compute_normals(env["with_normals_path"], env["aligned_mlp"], neighbor_radius=args.normal_radius)
pcl_util.create_vis_file(env["georefrecon_ply"], env["with_normals_path"], resolution=args.mesh_resolution)
colmap.delaunay_mesh(env["occlusion_ply"], input_ply=env["with_normals_path"])
eth3d.create_splats(env["splats_ply"], env["with_normals_path"], env["occlusion_ply"], threshold=args.splat_threshold)
if args.inspect_dataset:
eth3d.inspect_dataset(scan_meshlab=env["aligned_mlp"],
colmap_model=env["georef_recon"],
image_path=env["image_path"])
eth3d.inspect_dataset(scan_meshlab=env["aligned_mlp"],
colmap_model=env["georef_recon"],
image_path=env["image_path"],
occlusions=env["occlusion_ply"],
splats=env["splats_ply"])
i += 1
if i not in args.skip_step:
print_step(i, "Groud Truth generation")
......
import numpy as np
from lxml import etree
from path import Path
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
def create_project(mlp_path, model_paths, labels=None, transforms=None):
......@@ -49,19 +50,29 @@ def get_mesh(input_mlp, index):
return transform, filepath
def add_mesh_to_project(input_mlp, output_mlp, model_path, index=0, label=None, transform=np.eye(4)):
def add_mesh_to_project(input_mlp, output_mlp, model_paths, labels=None, transforms=None, start_index=-1):
if labels is not None:
assert(len(model_paths) == len(labels))
else:
labels = [m.basename() for m in model_paths]
if transforms is not None:
assert(len(model_paths) == len(transforms))
else:
transforms = [np.eye(4) for _ in model_paths]
with open(input_mlp, "r") as f:
to_modify = etree.parse(f)
if label is None:
label = model_path.basename()
root = to_modify.getroot()
group = root[0]
if start_index < 0:
start_index = len(group)
for i, (m, l, t) in enumerate(zip(model_paths, labels, transforms)):
mesh = etree.Element("MLMesh")
mesh.set("label", label)
mesh.set("filename", model_path)
mesh.set("label", l)
mesh.set("filename", m)
matrix = etree.SubElement(mesh, "MLMatrix44")
matrix.text = "\n" + "\n".join(" ".join(str(element) for element in row) + " " for row in transform) + "\n"
group.insert(index, mesh)
matrix.text = "\n" + "\n".join(" ".join(str(element) for element in row) + " " for row in t) + "\n"
group.insert(start_index + 1, mesh)
to_modify.write(output_mlp, pretty_print=True)
......@@ -76,12 +87,62 @@ def apply_transform_to_project(input_mlp, output_mlp, transform):
to_modify.write(output_mlp, pretty_print=True)
parser = ArgumentParser(description='Create a meshlab project with ply files and transformations',
formatter_class=ArgumentDefaultsHelpFormatter)
subparsers = parser.add_subparsers(dest="operation")
create_parser = subparsers.add_parser('create')
create_parser.add_argument('--input_models', metavar='PLY', type=Path, nargs="+")
create_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
create_parser.add_argument('--transforms', metavar='TXT', type=Path, nargs="+")
create_parser.add_argument('--labels', metavar='LABEL', nargs="*")
remove_parser = subparsers.add_parser('remove')
remove_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
remove_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
remove_parser.add_argument('--index', metavar="N", type=int, default=-1)
add_parser = subparsers.add_parser('add')
add_parser.add_argument('--input_models', metavar='PLY', type=Path, nargs="+")
add_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
add_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
add_parser.add_argument('--transforms', metavar='TXT', type=Path, nargs="+")
add_parser.add_argument('--labels', metavar='LABEL', nargs="*")
add_parser.add_argument('--start_index', metavar='N', default=-1, type=int)
transform_parser = subparsers.add_parser('transform')
transform_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
transform_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
transform_parser.add_argument('--transform', metavar='TXT', type=Path, required=True)
if __name__ == '__main__':
model_paths = [Path("path/to/model.ply"), Path("path/to/other.ply")]
labels = "1", "2"
transforms = [np.random.randn(4, 4), np.random.randn(4, 4)]
create_project("test.mlp", model_paths)
add_mesh_to_project("test.mlp", "test.mlp", model_paths[0], index=0)
matrix, filename = remove_mesh_from_project("test.mlp", "test2.mlp", 0)
print(matrix, filename)
args = parser.parse_args()
if args.operation in ["add", "create"]:
n_models = len(args.input_models)
if args.labels is not None:
assert n_models == len(args.labels)
if args.transforms is None:
transforms = [np.eye(4, 4)] * n_models
elif len(args.transforms) == 1:
transform = np.fromfile(args.transforms[0], sep=" ").reshape(4, 4)
transforms = [transform] * n_models
else:
assert n_models == len(args.transforms)
transforms = [np.fromfile(t, sep=" ").reshape(4, 4) for t in args.transforms]
if args.operation == "create":
create_project(args.output_meshlab, args.input_models, args.labels, transforms)
if args.operation == "add":
add_mesh_to_project(args.input_meshlab,
args.output_meshlab,
args.input_models,
args.labels,
transforms,
args.start_index)
if args.operation == "remove":
matrix, filename = remove_mesh_from_project(args.input_meshlab, args.output_meshlab, args.index)
print("Removed model {} with transform\n {} \nfrom meshlab".format(filename, matrix))
if args.operation == "transform":
transform = np.fromfile(args.transform, sep=" ").reshape(4, 4)
apply_transform_to_project(args.input_meshlab, args.output_meshlab, transform)
......@@ -14,28 +14,6 @@
#include "pointcloud_subsampler.h"
Eigen::Matrix4f readMatrix(std::string filename)
{
int cols = 4, rows = 4;
Eigen::Matrix4f M;
std::ifstream infile;
infile.open(filename);
int row = 0;
while (! infile.eof() && row < 4)
{
std::string line;
std::getline(infile, line);
std::stringstream stream(line);
stream >> M(row, 0) >> M(row, 1) >> M(row, 2) >> M(row, 3);
row ++;
}
infile.close();
return M;
};
int main (int argc, char** argv)
{
FLAGS_logtostderr = 1;
......@@ -56,17 +34,15 @@ int main (int argc, char** argv)
pcl::console::parse_argument(argc, argv, "--georef_dense", georef_dense_path);
std::string lidar_path;
pcl::console::parse_argument(argc, argv, "--lidar", lidar_path);
std::string georef_matrix_path;
pcl::console::parse_argument(argc, argv, "--georef_matrix", georef_matrix_path);
std::string output_cloud_path;
pcl::console::parse_argument(argc, argv, "--output_cloud", output_cloud_path);
float resolution = 0.2; //20cm resolution
pcl::console::parse_argument(argc, argv, "--resolution", resolution);
if (georef_matrix_path.empty() && output_cloud_path.empty()){
if (output_cloud_path.empty()){
LOG(ERROR) << "No output path was given";
LOG(INFO) << "Usage: " << argv[0] << " --georef_dense <file.ply> --lidar <file.ply> "
<< "--georef_matrix <matrix.txt> --output_cloud <output.ply>";
<< "--output_cloud <output.ply>";
return EXIT_FAILURE;
}
......@@ -88,13 +64,6 @@ int main (int argc, char** argv)
LOG(INFO) << "Subsampling to have a mean distance between points of " << resolution << " m";
lidar = filter<pcl::PointNormal>(lidar, resolution);
LOG(INFO) << "Loading transformation matrix ...";
Eigen::Matrix4f M = readMatrix(georef_matrix_path);
LOG(INFO) << "Matrix loaded";
LOG(INFO) << M;
pcl::transformPointCloudWithNormals (*lidar, *lidar, M.inverse());
LOG(INFO) << "Loading georef_dense vis file...";
const std::string input_vis_path = georef_dense_path + ".vis";
std::fstream input_vis_file(input_vis_path, std::ios::in | std<