Commit 7d99ebaab510ca1bbd12b9a1b6c4704720b1e779

Authored by Clément Pinard
1 parent ffd564cc

Add evaluation graphs

README.md
... ... @@ -1068,13 +1068,14 @@ This will essentially do the same thing as the script, in order to let you chang
1068 1068 ```
1069 1069 python construct_evaluation_metadata.py \
1070 1070 --dataset_dir Converted_output_directory/dataset/ \
  1071 + --max_num_samples 500 \
1071 1072 --split 0.9 \
1072 1073 --seed 0 \
1073 1074 --min_shift 50 \
1074 1075 --allow_interpolated_frames
1075 1076 ```
1076 1077  
1077   - this will select 500 frames (at most) such that 90% (`--split 0.9`) of folders are kept as training folders, and every frame has at least 50 frames with valid odometry before (`--min_shift 50`). Interpolated frames are allowed for odometry to be considered valid (but not for depth ground truth) (`--allow_interpolated_frames`)
  1078 + This will select at most 500 frames (`--max_num_samples 500`) such that 90% of the folders (`--split 0.9`) are kept as training folders, and every selected frame has at least 50 frames with valid odometry before it (`--min_shift 50`). Interpolated frames are allowed for odometry to be considered valid, but not for depth ground truth (`--allow_interpolated_frames`).
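The selection logic described above can be sketched roughly as follows. This is a simplification under the assumption that each folder is reduced to a frame count; the real script also checks odometry validity and interpolated frames, and the function and parameter names here are illustrative:

```python
import random

def select_evaluation_frames(folders, max_num_samples=500, split=0.9,
                             seed=0, min_shift=50):
    """Sketch of the metadata construction: keep `split` of the folders
    for training, then from the remaining test folders pick up to
    `max_num_samples` frames that have at least `min_shift` frames
    before them. `folders` maps folder name -> frame count (assumption)."""
    rng = random.Random(seed)
    names = sorted(folders)
    rng.shuffle(names)
    n_train = round(split * len(names))
    train_folders, test_folders = names[:n_train], names[n_train:]
    # A frame qualifies if at least `min_shift` frames precede it.
    candidates = [(f, i) for f in test_folders
                  for i in range(min_shift, folders[f])]
    rng.shuffle(candidates)
    return train_folders, candidates[:max_num_samples]
```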
1078 1079  
1079 1080 It will create a txt file with test file paths (`/path/to/dataset/test_files.txt`), a txt file with train folders (`/path/to/dataset/train_folders.txt`) and lastly a txt file with flight path vector coordinates (in pixels) (`/path/to/dataset/fpv.txt`)
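The three files are plain text, one entry per line. A minimal sketch of loading them; the whitespace-separated float format of `fpv.txt` and the helper name are assumptions:

```python
from pathlib import Path

def load_evaluation_metadata(dataset_dir):
    """Read the files written by construct_evaluation_metadata.py."""
    dataset = Path(dataset_dir)
    test_files = (dataset / "test_files.txt").read_text().splitlines()
    train_folders = (dataset / "train_folders.txt").read_text().splitlines()
    # One flight path vector per line, coordinates in pixels (assumption).
    fpv = [tuple(map(float, line.split()))
           for line in (dataset / "fpv.txt").read_text().splitlines()]
    return test_files, train_folders, fpv
```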
1080 1081  
... ... @@ -1109,7 +1110,11 @@ All these steps can be done under the script `picture_localization.py` with the
1109 1110  
1110 1111 ## Evaluation
1111 1112  
1112   -TODO
  1113 +Once you have your dataset, with your depth maps and a list of frames used for evaluation, you can use a dedicated package to compute evaluation metrics for your depth estimation algorithm. See the [dedicated README](evaluation_toolkit/README.md).
  1114 +
  1115 +```
  1116 +pip install -e evaluation_toolkit
  1117 +```
1113 1118  
1114 1119 ## Detailed method with the "Manoir" example
1115 1120  
... ... @@ -1185,6 +1190,24 @@ Thorough photogrammetry was done with 1000 frames. Notice that not all the area
1185 1190  
1186 1191 [![Alt text](https://img.youtube.com/vi/NLIvrzUB9bY/0.jpg)](https://www.youtube.com/watch?v=NLIvrzUB9bY&list=PLMeM2q87QjqjAAbg8RD3F_J5D7RaTMAJj)
1187 1192  
  1193 +### Depth algorithm evaluation
  1194 +
  1195 +Training and evaluation were done with SfMLearner. See the inference script: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/validation_set_constructor/inference.py
  1196 +
  1197 +```
  1198 +Results for usual metrics
  1199 + AbsDiff, StdDiff, AbsRel, StdRel, AbsLog, StdLog, a1, a2, a3
  1200 + 17.1951, 33.2526, 0.3735, 0.9200, 0.3129, 0.4512, 0.5126, 0.7629, 0.8919
  1201 +```
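The columns above are the usual monocular depth metrics (absolute, relative, and log differences, plus the `a1`/`a2`/`a3` threshold accuracies at 1.25, 1.25², 1.25³). A minimal NumPy sketch of how they are typically computed; the exact masking and depth capping done by the toolkit are not shown here:

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard depth metrics over flat arrays of valid gt/pred depths."""
    diff = np.abs(gt - pred)
    rel = diff / gt
    log_diff = np.abs(np.log(gt) - np.log(pred))
    ratio = np.maximum(gt / pred, pred / gt)
    return {
        "AbsDiff": diff.mean(), "StdDiff": diff.std(),
        "AbsRel": rel.mean(), "StdRel": rel.std(),
        "AbsLog": log_diff.mean(), "StdLog": log_diff.std(),
        # Fraction of pixels whose gt/pred ratio is within each threshold.
        "a1": (ratio < 1.25).mean(),
        "a2": (ratio < 1.25 ** 2).mean(),
        "a3": (ratio < 1.25 ** 3).mean(),
    }
```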
  1202 +
  1203 +Graphs:
  1204 +
  1205 +![Figure 1](images/Figure_1.png)
  1206 +![Figure 2](images/Figure_2.png)
  1207 +![Figure 3](images/Figure_3.png)
  1208 +![Figure 4](images/Figure_4.png)
  1209 +![Figure 5](images/Figure_5.png)
  1210 +
1188 1211 # Todo
1189 1212  
1190 1213 ## Better point cloud registration
... ...
evaluation_toolkit/README.md
1 1 # Evaluation Toolkit
2 2  
3   -Set of tools to run a particular algorithm on a dataset constructed with the validation set constructor, and evaluate it, along with advanced statistics regarding depth value, dans pixel position in image with repsect to flight path vector.
  3 +Set of tools to run a particular algorithm on a dataset constructed with the validation set constructor, and evaluate it, along with advanced statistics regarding depth value and pixel position in image with respect to flight path vector.
4 4  
5 5 ## Inference Example
6 6  
... ...
evaluation_toolkit/setup.py
1   -from setuptools import setup, find_packages
  1 +from setuptools import setup
2 2  
3 3 with open("README.md", "r") as fh:
4 4 long_description = fh.read()
... ... @@ -10,7 +10,7 @@ setup(name='inference toolkit',
10 10 description='Inference and evaluation routines to test on a dataset constructed with validation set constructor',
11 11 long_description=long_description,
12 12 long_description_content_type="text/markdown",
13   - packages=find_packages(),
  13 + packages=["evaluation_toolkit"],
14 14 entry_points={
15 15 'console_scripts': [
16 16 'depth_evaluation = evaluation_toolkit.depth_evaluation:main'
... ...
images/Figure_1.png 0 → 100644

31.1 KB

images/Figure_2.png 0 → 100644

63.2 KB

images/Figure_3.png 0 → 100644

36.9 KB

images/Figure_4.png 0 → 100644

53.5 KB

images/Figure_5.png 0 → 100644

47.6 KB

images/Figure_6.png 0 → 100644

141 KB