Commit a37b70fc authored by Clément Pinard's avatar Clément Pinard

README update

parent c164d84d
@@ -133,14 +133,15 @@ This will essentially do the same thing as the script, in order to let you chang
```
2. Point Cloud Cleaning
For each PLY file:
```
ETHD3D/build/PointCloudCleaner \
--in /path/to/cloud_lidar.ply \
--filter <5,10>
```
(local outliers removal, doesn't necessarily remove isolated points)
or
```
pcl_util/build/CloudSOR \
@@ -149,7 +150,23 @@ This will essentially do the same thing as the script, in order to let you chang
--knn 5 --std 6
```
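The statistical filtering that `CloudSOR` applies (per the `--knn`/`--std` flags above) can be sketched in plain NumPy. This is a hypothetical illustration of the statistic, not the actual PCL implementation; the brute-force distance matrix stands in for the k-d tree a real tool would use:

```python
import numpy as np

def sor_filter(points, knn=5, std=6.0):
    """Statistical outlier removal: drop points whose mean distance to their
    knn nearest neighbours exceeds the global mean by more than std sigmas.
    Brute-force O(N^2), fine for small clouds."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:knn + 1].mean(axis=1)  # column 0 is the point itself
    threshold = mean_knn.mean() + std * mean_knn.std()
    return mean_knn <= threshold  # boolean mask of inliers

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(size=(1000, 3)), [[100.0, 100.0, 100.0]]])
mask = sor_filter(cloud, knn=5, std=6.0)
# the isolated point far from the cluster is rejected, the rest survive
```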
3. Meshlab Project creation
```
python meshlab_xml_writer.py create \
--input_models /path/to/cloud1 [... /path/to/cloudN] \
--output_meshlab /path/to/lidar.mlp
```
Optionally, if we have multiple lidar scans (which is not the case here), we can run a registration step with ETH3D
```
ETHD3D/build/ICPScanAligner \
-i /path/to/lidar.mlp \
-o /path/to/lidar.mlp \
--number_of_scales 5
```
4. Video frame addition to COLMAP db file
```
python video_to_colmap \
@@ -174,7 +191,7 @@ This will essentially do the same thing as the script, in order to let you chang
And finally, it will divide long videos into chunks with corresponding lists of file paths, so that we don't have to deal with overly large sequences (the limit here is 4000 frames)
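The chunking described above amounts to the following (a hypothetical helper, not the actual `video_to_colmap` code):

```python
def chunk_frame_list(frame_paths, max_frames=4000):
    """Split an ordered list of frame paths into consecutive chunks of at
    most max_frames, as done for long videos."""
    return [frame_paths[i:i + max_frames]
            for i in range(0, len(frame_paths), max_frames)]

frames = ["frame_{:05d}.jpg".format(i) for i in range(9000)]
chunks = chunk_frame_list(frames)
# 9000 frames -> chunks of 4000, 4000 and 1000, order preserved
```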
5. First COLMAP step: feature extraction
```
@@ -204,7 +221,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_index /path/to/indexed_vocab_tree
```
6. Second COLMAP step: matching. For fewer than 1000 images, you can use exhaustive matching (this will take around 2 hours). If there are too many images, you can use either spatial matching or vocab tree matching
```
colmap exhaustive_matcher \
@@ -225,7 +242,7 @@ This will essentially do the same thing as the script, in order to let you chang
--SiftMatching.guided_matching 1
```
7. Third COLMAP step: thorough mapping.
```
mkdir -p /path/to/thorough/
@@ -250,7 +267,7 @@ This will essentially do the same thing as the script, in order to let you chang
--output_path /path/to/thorough/0
```
8. Fourth COLMAP step: [georeferencing](https://colmap.github.io/faq.html#geo-registration)
```
mkdir -p /path/to/geo_registered_model
@@ -264,7 +281,7 @@ This will essentially do the same thing as the script, in order to let you chang
This model will be the reference model; all further models and frame localizations will be done with respect to it.
Although we could, we don't run point cloud registration right now, as the next steps will help us get a more complete point cloud.
9. Video Localization
All these substeps will populate the db file, which is then used for matching, so you need to make a copy of it for each video.
1. Extract all the frames of the video to the same directory where the `video_to_colmap.py` script exported the frame subset of this video.
@@ -415,7 +432,7 @@ This will essentially do the same thing as the script, in order to let you chang
```
At the end of these per-video tasks, you should have a model at `/path/to/georef_full` with all photogrammetry images plus the localization of video frames at 1 fps, and for each video a TXT file with positions relative to the first geo-registered reconstruction.
10. Point cloud densification
```
colmap image_undistorter \
@@ -445,8 +462,31 @@ This will essentially do the same thing as the script, in order to let you chang
This will also create a `/path/to/georef_dense.ply.vis` file which describes frames from which each point is visible.
11. Point cloud registration
Convert the meshlab project to a PLY file with normals:
```
ETHD3D/build/NormalEstimator \
-i /path/to/scan_meshlab \
-o /path/to/with_normals.ply
```
Determine the transformation to apply to `/path/to/georef_dense.ply` to align it with `/path/to/with_normals.ply`, and save the corresponding transformation matrix to `/path/to/matrix.txt`
Option 1: construct a meshlab project similar to `/path/to/lidar.mlp` with `/path/to/georef_dense.ply` as the first mesh, and run ETH3D's registration tool
```
python meshlab_xml_writer.py add \
--input_models /path/to/georef_dense.ply \
--input_meshlab /path/to/lidar.mlp \
--output_meshlab /path/to/register.mlp
```
```
ETHD3D/build/ICPScanAligner \
-i /path/to/register.mlp \
-o /path/to/register.mlp \
--number_of_scales 5
```
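For reference, `/path/to/matrix.txt` holds a single 4x4 homogeneous matrix as 16 space-separated values. A minimal sketch of writing such a file, reading it back the way the pipeline does (`np.fromfile(..., sep=" ")`), and applying it to points (illustrative only; the file name here is a stand-in):

```python
import os
import tempfile
import numpy as np

def transform_points(points, matrix):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ matrix.T)[:, :3]

matrix = np.eye(4)
matrix[:3, 3] = [1.0, 2.0, 3.0]  # a pure translation for the example
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "matrix.txt")
    np.savetxt(path, matrix)                          # write 16 values
    loaded = np.fromfile(path, sep=" ").reshape(4, 4)  # read them back
points = np.zeros((5, 3))
moved = transform_points(points, loaded)
# every origin point is translated to (1, 2, 3)
```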
......
@@ -289,49 +289,69 @@ def main():
colmap.dense_stereo()
colmap.stereo_fusion(output=env["georefrecon_ply"])
    def get_matrix(path):
        if path.isfile():
            '''Note : we use the inverse matrix here because it is generally easier to register
            the reconstructed model into the lidar one (the reconstruction has fewer points),
            but in the end we need the matrix to apply to the lidar points to align them
            with the camera positions, i.e. the inverse'''
            return np.linalg.inv(np.fromfile(path, sep=" ").reshape(4, 4))
        else:
            print("Error, no registration matrix can be found, identity will be used")
            return np.eye(4)
    i += 1
    if i not in args.skip_step:
        print_step(i, "Registration of photogrammetric reconstruction with respect to Lidar Point Cloud")
        if args.registration_method == "eth3d":
            # Note : ETH3D doesn't register with scale, this might not be suitable for very large areas
            mxw.add_mesh_to_project(env["lidar_mlp"], env["aligned_mlp"], [env["georefrecon_ply"]], start_index=0)
            eth3d.align_with_ICP(env["aligned_mlp"], env["aligned_mlp"], scales=5)
            mxw.remove_mesh_from_project(env["aligned_mlp"], env["aligned_mlp"], 0)
            matrix = np.linalg.inv(mxw.get_mesh(env["aligned_mlp"], index=0)[0])
            np.savetxt(env["matrix_path"], matrix)
            '''The new mlp is supposedly better than the one before, because the ICP
            was run with N+1 models instead of just N, so we replace the original project
            with this result by reversing the first transformation, getting back a mlp
            file with identity as the first transform matrix'''
            mxw.apply_transform_to_project(env["aligned_mlp"], env["lidar_mlp"], matrix)
            env["global_registration_matrix"] = matrix
        else:
            eth3d.compute_normals(env["with_normals_path"], env["lidar_mlp"], neighbor_radius=args.normal_radius)
            if args.registration_method == "simple":
                pcl_util.register_reconstruction(georef=env["georefrecon_ply"],
                                                 lidar=env["with_normals_path"],
                                                 output_matrix=env["matrix_path"],
                                                 max_distance=10)
            elif args.registration_method == "interactive":
                input("Get transformation matrix between {0} and {1} so that we should"
                      " apply it to the reconstructed point cloud to have the lidar point cloud, "
                      "and paste it in the file {2}. When done, press ENTER".format(env["with_normals_path"],
                                                                                   env["georefrecon_ply"],
                                                                                   env["matrix_path"]))
            env["global_registration_matrix"] = get_matrix(env["matrix_path"])
            mxw.apply_transform_to_project(env["lidar_mlp"], env["aligned_mlp"], env["global_registration_matrix"])
    else:
        env["global_registration_matrix"] = get_matrix(env["matrix_path"])
    i += 1
    if i not in args.skip_step:
        print_step(i, "Occlusion Mesh computing")
        eth3d.compute_normals(env["with_normals_path"], env["aligned_mlp"], neighbor_radius=args.normal_radius)
        pcl_util.create_vis_file(env["georefrecon_ply"], env["with_normals_path"], resolution=args.mesh_resolution)
        colmap.delaunay_mesh(env["occlusion_ply"], input_ply=env["with_normals_path"])
        eth3d.create_splats(env["splats_ply"], env["with_normals_path"], env["occlusion_ply"], threshold=args.splat_threshold)
        if args.inspect_dataset:
            eth3d.inspect_dataset(scan_meshlab=env["aligned_mlp"],
                                  colmap_model=env["georef_recon"],
                                  image_path=env["image_path"],
                                  occlusions=env["occlusion_ply"],
                                  splats=env["splats_ply"])

    i += 1
    if i not in args.skip_step:
        print_step(i, "Ground Truth generation")
......
import numpy as np
from lxml import etree
from path import Path
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
def create_project(mlp_path, model_paths, labels=None, transforms=None):
@@ -49,19 +50,29 @@ def get_mesh(input_mlp, index):
return transform, filepath
def add_mesh_to_project(input_mlp, output_mlp, model_paths, labels=None, transforms=None, start_index=-1):
    if labels is not None:
        assert(len(model_paths) == len(labels))
    else:
        labels = [m.basename() for m in model_paths]
    if transforms is not None:
        assert(len(model_paths) == len(transforms))
    else:
        transforms = [np.eye(4) for _ in model_paths]
    with open(input_mlp, "r") as f:
        to_modify = etree.parse(f)
    root = to_modify.getroot()
    group = root[0]
    if start_index < 0:
        start_index = len(group)
    for i, (m, l, t) in enumerate(zip(model_paths, labels, transforms)):
        mesh = etree.Element("MLMesh")
        mesh.set("label", l)
        mesh.set("filename", m)
        matrix = etree.SubElement(mesh, "MLMatrix44")
        matrix.text = "\n" + "\n".join(" ".join(str(element) for element in row) + " " for row in t) + "\n"
        group.insert(start_index + i, mesh)
    to_modify.write(output_mlp, pretty_print=True)
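For reference, the `.mlp` layout this writer manipulates (a `MeshGroup` of `MLMesh` entries, each carrying an `MLMatrix44` text block) can be produced with the standard library alone. This is an illustration of the structure inferred from the code above, not a complete or spec-accurate MeshLab project writer:

```python
import xml.etree.ElementTree as ET
import numpy as np

root = ET.Element("MeshLabProject")
group = ET.SubElement(root, "MeshGroup")
# one mesh entry with its 4x4 transform serialized as whitespace-separated text
mesh = ET.SubElement(group, "MLMesh", label="lidar", filename="cloud_lidar.ply")
matrix = ET.SubElement(mesh, "MLMatrix44")
matrix.text = "\n" + "\n".join(" ".join(str(e) for e in row) + " " for row in np.eye(4)) + "\n"
print(ET.tostring(root, encoding="unicode"))
```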
@@ -76,12 +87,62 @@ def apply_transform_to_project(input_mlp, output_mlp, transform):
to_modify.write(output_mlp, pretty_print=True)
parser = ArgumentParser(description='Create a meshlab project with ply files and transformations',
formatter_class=ArgumentDefaultsHelpFormatter)
subparsers = parser.add_subparsers(dest="operation")
create_parser = subparsers.add_parser('create')
create_parser.add_argument('--input_models', metavar='PLY', type=Path, nargs="+")
create_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
create_parser.add_argument('--transforms', metavar='TXT', type=Path, nargs="+")
create_parser.add_argument('--labels', metavar='LABEL', nargs="*")
remove_parser = subparsers.add_parser('remove')
remove_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
remove_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
remove_parser.add_argument('--index', metavar="N", type=int, default=-1)
add_parser = subparsers.add_parser('add')
add_parser.add_argument('--input_models', metavar='PLY', type=Path, nargs="+")
add_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
add_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
add_parser.add_argument('--transforms', metavar='TXT', type=Path, nargs="+")
add_parser.add_argument('--labels', metavar='LABEL', nargs="*")
add_parser.add_argument('--start_index', metavar='N', default=-1, type=int)
transform_parser = subparsers.add_parser('transform')
transform_parser.add_argument('--input_meshlab', metavar='MLP', type=Path, required=True)
transform_parser.add_argument('--output_meshlab', metavar='MLP', type=Path, required=True)
transform_parser.add_argument('--transform', metavar='TXT', type=Path, required=True)
if __name__ == '__main__':
    args = parser.parse_args()
    if args.operation in ["add", "create"]:
        n_models = len(args.input_models)
        if args.labels is not None:
            assert n_models == len(args.labels)
        if args.transforms is None:
            transforms = [np.eye(4, 4)] * n_models
        elif len(args.transforms) == 1:
            transform = np.fromfile(args.transforms[0], sep=" ").reshape(4, 4)
            transforms = [transform] * n_models
        else:
            assert n_models == len(args.transforms)
            transforms = [np.fromfile(t, sep=" ").reshape(4, 4) for t in args.transforms]
    if args.operation == "create":
        create_project(args.output_meshlab, args.input_models, args.labels, transforms)
    if args.operation == "add":
        add_mesh_to_project(args.input_meshlab,
                            args.output_meshlab,
                            args.input_models,
                            args.labels,
                            transforms,
                            args.start_index)
    if args.operation == "remove":
        matrix, filename = remove_mesh_from_project(args.input_meshlab, args.output_meshlab, args.index)
        print("Removed model {} with transform\n{}\nfrom meshlab".format(filename, matrix))
    if args.operation == "transform":
        transform = np.fromfile(args.transform, sep=" ").reshape(4, 4)
        apply_transform_to_project(args.input_meshlab, args.output_meshlab, transform)
@@ -14,28 +14,6 @@
#include "pointcloud_subsampler.h"
int main (int argc, char** argv)
{
FLAGS_logtostderr = 1;
@@ -56,17 +34,15 @@ int main (int argc, char** argv)
pcl::console::parse_argument(argc, argv, "--georef_dense", georef_dense_path);
std::string lidar_path;
pcl::console::parse_argument(argc, argv, "--lidar", lidar_path);
std::string output_cloud_path;
pcl::console::parse_argument(argc, argv, "--output_cloud", output_cloud_path);
float resolution = 0.2; //20cm resolution
pcl::console::parse_argument(argc, argv, "--resolution", resolution);
if (output_cloud_path.empty()){
LOG(ERROR) << "No output path was given";
LOG(INFO) << "Usage: " << argv[0] << " --georef_dense <file.ply> --lidar <file.ply> "
          << "--output_cloud <output.ply>";
return EXIT_FAILURE;
}
@@ -88,13 +64,6 @@ int main (int argc, char** argv)
LOG(INFO) << "Subsampling to have a mean distance between points of " << resolution << " m";
lidar = filter<pcl::PointNormal>(lidar, resolution);
LOG(INFO) << "Loading georef_dense vis file...";
const std::string input_vis_path = georef_dense_path + ".vis";
std::fstream input_vis_file(input_vis_path, std::ios::in | std::ios::binary);
......
@@ -216,8 +216,9 @@ def generate_GT(video_name, raw_output_folder, images_root_folder, video_frames_
registration_matrix = global_registration_matrix
mxw.apply_transform_to_project(env["lidar_mlp"], final_lidar, registration_matrix)
    adjustment_matrix = registration_matrix @ np.linalg.inv(global_registration_matrix)
    mxw.create_project(final_occlusions, [env["occlusion_ply"]], transforms=[adjustment_matrix])
    mxw.create_project(final_splats, [env["splats_ply"]], transforms=[adjustment_matrix])
if inspect_dataset:
eth3d.image_path = images_root_folder
@@ -225,6 +226,7 @@ def generate_GT(video_name, raw_output_folder, images_root_folder, video_frames_
# - inspection with reconstructed cloud
# - inspection with lidar cloud without occlusion
# - inspection with lidar cloud and occlusion models
# Careful, very RAM demanding for long sequences !
georef_mlp = env["georef_recon"]/"georef_recon.mlp"
eth3d.inspect_dataset(georef_mlp, final_model)
eth3d.inspect_dataset(final_lidar, final_model)
......