diff --git a/ML/README.md b/ML/README.md index c51560a..e82004b 100644 --- a/ML/README.md +++ b/ML/README.md @@ -1,15 +1,17 @@ # Segmenting Vegetation from bare-Earth in High-relief and Dense Point Clouds using Machine Learning [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10966854.svg)](https://doi.org/10.5281/zenodo.10966854) -These programs are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other. The programs are meant to reclassify large and dense point clouds, similar to the following: +These programs are modelled after work I originally presented at the AGU Fall Meeting in December 2021 (a recording can be found [HERE](https://youtu.be/k1ors_mKxlo)) and are supplementary to a manuscript in review at *Remote Sensing*. They are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other by colour alone. The programs are meant to reclassify large and dense point clouds very efficiently, similar to the following (green points represent 'vegetation' and brown points represent 'bare-Earth'): -R G B color model of a coastal bluff near Port Angeles, WA +RGB color model of a coastal bluff near Port Angeles, WA -And reclassify the points, similar to the following: +model of a coastal bluff coloured by classification -model of a coastal bluff colored by classification +The out-of-the-box direct transferability of the pre-trained ML models is further demonstrated using a point cloud for Chimney Bluffs, NY (along Lake Ontario) previously published by the USGS (yellow points represent 'vegetation' and blue points represent 'bare-Earth'): -Green points represent 'vegetation' and brown points represent 'bare-Earth'.
+RGB color model of a coastal bluff near Chimney Bluffs, NY + +model of a coastal bluff coloured by classification There are two approaches: @@ -192,16 +194,21 @@ python ML_veg_reclass.py ``` ## ML_veg_train.py + +The `ML_veg_train.py` program will read in the two training point clouds, account for any class imbalance, build an ML model, and train the ML model. + +Running `ML_veg_train.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_veg_train program + ### Inputs: -The following inputs are required for the ML_veg_train program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_veg_train.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1. The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points 3. The output model name -The ML_veg_train program will read in the two training point clouds, account for any class imbalance, build a ML model, and train the ML model. - ### Outputs: All outputs will be saved in a directory with the following scheme: @@ -216,23 +223,24 @@ A plot of the model will also be saved as a PNG file (see example below), and a R G B model with one layer of 8 nodes ## ML_veg_reclass.py -### Inputs: -The following inputs are required for the ML_veg_reclass program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The `ML_veg_reclass.py` program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. -1. The point cloud to be reclassified -2.
The h5 model file (can be generated using the ML_veg_train program) +Running `ML_veg_reclass.py` without any command line argument will automatically enable a simple graphical interface similar to this: -The ML_veg_reclass program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. +screenshot of the graphical interface for the ML_veg_reclass program -### Outputs: -The reclassified LAS/LAZ file will be saved in a directory with the following scheme: +### Inputs: -> results_{date} +The following inputs are required for the `ML_veg_reclass.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. -Where {date} is the date the model was created and is pulled from the computer clock. If this directory does not already exist then it will first be created. +1. The point cloud to be reclassified +2. The h5 model file (can be generated using the `ML_veg_train.py` program) -A new LAZ file will be generated in the results directory: +### Outputs: +The reclassified LAS/LAZ file will be saved in the same directory as the original point cloud. + +A new LAZ file will be generated with the following syntax: > (unknown)_{model_name}_{threshold_value}.laz @@ -243,11 +251,15 @@ Where *(unknown)* is the original point cloud file name, *{model_name}* is the ## ML_vegfilter.py -The ML_vegfilter program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud. The significant advantage of using a single program is eliminating the need to read the model file for reclassification. +The `ML_vegfilter.py` program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud.
The significant advantage of using a single program is eliminating the need to read the model file for reclassification. + +Running `ML_vegfilter.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_vegfilter program ### Inputs: -The following inputs are required for the ML_vegfilter program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_vegfilter.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1. The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points @@ -280,6 +292,15 @@ Wernette, Phillipe A. 2024. Segmenting Vegetation from bare-Earth in High-relief } ``` +# OTHER PUBLICATIONS AND INFORMATION +Click [HERE](https://youtu.be/k1ors_mKxlo) to watch my presentation at the American Geophysical Union Fall Meeting in 2021 (New Orleans, LA). + +My manuscript in *Remote Sensing* is based on this original research and is currently available via Preprints.org: +> Wernette, P. Machine Learning Vegetation Filtering of Coastal Cliff and Bluff Point Clouds. Preprints 2024, 2024041387. https://doi.org/10.20944/preprints202404.1387.v1 + +Point clouds for coastal bluffs near the Elwha River mouth near Port Angeles, WA can be found [HERE](https://doi.org/10.5061/dryad.8pk0p2nww). +> Wernette, Phillipe (2024). Coastal bluff point clouds derived from SfM near Elwha River mouth, Washington from 2016-04-18 to 2020-05-08 [Dataset]. Dryad. https://doi.org/10.5061/dryad.8pk0p2nww + # REFERENCES [^1]: Meyer, G.E.; Neto, J.C. 2008. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 63, 282–293. [^2]: Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. 1995.
Color Indices forWeed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. ASAE, 38, 259–269. diff --git a/ML/src/__pycache__/tk_get_user_input.cpython-39.pyc b/ML/src/__pycache__/tk_get_user_input.cpython-39.pyc index 08ed39b..677924a 100644 Binary files a/ML/src/__pycache__/tk_get_user_input.cpython-39.pyc and b/ML/src/__pycache__/tk_get_user_input.cpython-39.pyc differ diff --git a/ML/src/__pycache__/tk_get_user_input_RECLASS_ONLY.cpython-39.pyc b/ML/src/__pycache__/tk_get_user_input_RECLASS_ONLY.cpython-39.pyc index 1705455..a7e68a5 100644 Binary files a/ML/src/__pycache__/tk_get_user_input_RECLASS_ONLY.cpython-39.pyc and b/ML/src/__pycache__/tk_get_user_input_RECLASS_ONLY.cpython-39.pyc differ diff --git a/ML/src/__pycache__/tk_get_user_input_TRAIN_ONLY.cpython-39.pyc b/ML/src/__pycache__/tk_get_user_input_TRAIN_ONLY.cpython-39.pyc index d414059..1077184 100644 Binary files a/ML/src/__pycache__/tk_get_user_input_TRAIN_ONLY.cpython-39.pyc and b/ML/src/__pycache__/tk_get_user_input_TRAIN_ONLY.cpython-39.pyc differ diff --git a/ML/src/tk_get_user_input.py b/ML/src/tk_get_user_input.py index 64b2d37..5d06666 100644 --- a/ML/src/tk_get_user_input.py +++ b/ML/src/tk_get_user_input.py @@ -125,7 +125,7 @@ def browseFiles(intextbox, desc_text="Select a File"): lab.grid(column=0, row=rowplacement, sticky=W, padx=padxval, pady=padyval) model_output_name = Text(self, height=1, width=50) if default_arguments_obj.model_output_name == 'NA': - model_output_name.insert(tk.END, 'model_'+default_arguments_obj.model_vegetation_indices+'_'+str(default_arguments_obj.model_nodes).replace(',','_').replace(' ','').replace('[','').replace(']','')) + model_output_name.insert(tk.END, 'model_'+str(default_arguments_obj.model_vegetation_indices).replace(' ','').replace('[','').replace(']','').replace("'","")+'_'+str(default_arguments_obj.model_nodes).replace(',','_').replace(' ','').replace('[','').replace(']','')) else: model_output_name.insert(tk.END, 
default_arguments_obj.model_output_name) model_output_name.grid(column=1, row=rowplacement, sticky=E, padx=padxval, pady=padyval) diff --git a/README.md b/README.md index c51560a..e82004b 100644 --- a/README.md +++ b/README.md @@ -1,15 +1,17 @@ # Segmenting Vegetation from bare-Earth in High-relief and Dense Point Clouds using Machine Learning [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10966854.svg)](https://doi.org/10.5281/zenodo.10966854) -These programs are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other. The programs are meant to reclassify large and dense point clouds, similar to the following: +These programs are modelled after work I originally presented at the AGU Fall Meeting in December 2021 (a recording can be found [HERE](https://youtu.be/k1ors_mKxlo)) and are supplementary to a manuscript in review at *Remote Sensing*. They are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other by colour alone.
The programs are meant to reclassify large and dense point clouds very efficiently, similar to the following (green points represent 'vegetation' and brown points represent 'bare-Earth'): -R G B color model of a coastal bluff near Port Angeles, WA +RGB color model of a coastal bluff near Port Angeles, WA -And reclassify the points, similar to the following: +model of a coastal bluff coloured by classification -model of a coastal bluff colored by classification +The out-of-the-box direct transferability of the pre-trained ML models is further demonstrated using a point cloud for Chimney Bluffs, NY (along Lake Ontario) previously published by the USGS (yellow points represent 'vegetation' and blue points represent 'bare-Earth'): -Green points represent 'vegetation' and brown points represent 'bare-Earth'. +RGB color model of a coastal bluff near Chimney Bluffs, NY + +model of a coastal bluff coloured by classification There are two approaches: @@ -192,16 +194,21 @@ python ML_veg_reclass.py ``` ## ML_veg_train.py + +The `ML_veg_train.py` program will read in the two training point clouds, account for any class imbalance, build an ML model, and train the ML model. + +Running `ML_veg_train.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_veg_train program + ### Inputs: -The following inputs are required for the ML_veg_train program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_veg_train.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1. The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points 3.
The output model name -The ML_veg_train program will read in the two training point clouds, account for any class imbalance, build a ML model, and train the ML model. - ### Outputs: All outputs will be saved in a directory with the following scheme: @@ -216,23 +223,24 @@ A plot of the model will also be saved as a PNG file (see example below), and a R G B model with one layer of 8 nodes ## ML_veg_reclass.py -### Inputs: -The following inputs are required for the ML_veg_reclass program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The `ML_veg_reclass.py` program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. -1. The point cloud to be reclassified -2. The h5 model file (can be generated using the ML_veg_train program) +Running `ML_veg_reclass.py` without any command line argument will automatically enable a simple graphical interface similar to this: -The ML_veg_reclass program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. +screenshot of the graphical interface for the ML_veg_reclass program -### Outputs: -The reclassified LAS/LAZ file will be saved in a directory with the following scheme: +### Inputs: -> results_{date} +The following inputs are required for the `ML_veg_reclass.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. -Where {date} is the date the model was created and is pulled from the computer clock. If this directory does not already exist then it will first be created. +1. The point cloud to be reclassified +2. 
The h5 model file (can be generated using the `ML_veg_train.py` program) -A new LAZ file will be generated in the results directory: +### Outputs: +The reclassified LAS/LAZ file will be saved in the same directory as the original point cloud. + +A new LAZ file will be generated with the following syntax: > (unknown)_{model_name}_{threshold_value}.laz @@ -243,11 +251,15 @@ Where *(unknown)* is the original point cloud file name, *{model_name}* is the ## ML_vegfilter.py -The ML_vegfilter program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud. The significant advantage of using a single program is eliminating the need to read the model file for reclassification. +The `ML_vegfilter.py` program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud. The significant advantage of using a single program is eliminating the need to read the model file for reclassification. + +Running `ML_vegfilter.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_vegfilter program ### Inputs: -The following inputs are required for the ML_vegfilter program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_vegfilter.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1. The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points @@ -280,6 +292,15 @@ Wernette, Phillipe A. 2024.
Segmenting Vegetation from bare-Earth in High-relief } ``` +# OTHER PUBLICATIONS AND INFORMATION +Click [HERE](https://youtu.be/k1ors_mKxlo) to watch my presentation at the American Geophysical Union Fall Meeting in 2021 (New Orleans, LA). + +My manuscript in *Remote Sensing* is based on this original research and is currently available via Preprints.org: +> Wernette, P. Machine Learning Vegetation Filtering of Coastal Cliff and Bluff Point Clouds. Preprints 2024, 2024041387. https://doi.org/10.20944/preprints202404.1387.v1 + +Point clouds for coastal bluffs near the Elwha River mouth near Port Angeles, WA can be found [HERE](https://doi.org/10.5061/dryad.8pk0p2nww). +> Wernette, Phillipe (2024). Coastal bluff point clouds derived from SfM near Elwha River mouth, Washington from 2016-04-18 to 2020-05-08 [Dataset]. Dryad. https://doi.org/10.5061/dryad.8pk0p2nww + # REFERENCES [^1]: Meyer, G.E.; Neto, J.C. 2008. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 63, 282–293. [^2]: Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. 1995. Color Indices forWeed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. ASAE, 38, 259–269.
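The added README text says the training programs "account for any class imbalance" between the vegetation and bare-Earth clouds before training, but this diff does not show how. As a hedged illustration only, one common approach is random undersampling of the larger class; the function below is a sketch under that assumption, not the repository's actual implementation:

```python
import random

def balance_classes(veg_points, ground_points, seed=0):
    """Randomly undersample the larger class so both classes contribute
    equally to training. Illustrative only: the balancing strategy
    actually used by ML_veg_train.py is not documented in this diff."""
    rng = random.Random(seed)
    n = min(len(veg_points), len(ground_points))
    # rng.sample draws without replacement, keeping each class at size n
    return rng.sample(veg_points, n), rng.sample(ground_points, n)

veg, ground = balance_classes(list(range(1000)), list(range(400)))
print(len(veg), len(ground))  # 400 400
```

Class weights during model fitting would be an equally plausible alternative; the point is only that both training clouds end up contributing equally.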
diff --git a/misc/images/color_rgb_16_16_16.png b/misc/images/color_rgb_16_16_16.png new file mode 100644 index 0000000..03b6a9d Binary files /dev/null and b/misc/images/color_rgb_16_16_16.png differ diff --git a/misc/images/gui_screenshot_veg_reclass.png b/misc/images/gui_screenshot_veg_reclass.png new file mode 100644 index 0000000..a36275a Binary files /dev/null and b/misc/images/gui_screenshot_veg_reclass.png differ diff --git a/misc/images/gui_screenshot_veg_train.png b/misc/images/gui_screenshot_veg_train.png new file mode 100644 index 0000000..1ad7f50 Binary files /dev/null and b/misc/images/gui_screenshot_veg_train.png differ diff --git a/misc/images/gui_screenshot_vegfilter.png b/misc/images/gui_screenshot_vegfilter.png new file mode 100644 index 0000000..76f99a4 Binary files /dev/null and b/misc/images/gui_screenshot_vegfilter.png differ diff --git a/misc/images/reclassified_rgb_16_16_16.png b/misc/images/reclassified_rgb_16_16_16.png new file mode 100644 index 0000000..9a47901 Binary files /dev/null and b/misc/images/reclassified_rgb_16_16_16.png differ diff --git a/noML/README.md b/noML/README.md index c51560a..e82004b 100644 --- a/noML/README.md +++ b/noML/README.md @@ -1,15 +1,17 @@ # Segmenting Vegetation from bare-Earth in High-relief and Dense Point Clouds using Machine Learning [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10966854.svg)](https://doi.org/10.5281/zenodo.10966854) -These programs are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other. The programs are meant to reclassify large and dense point clouds, similar to the following: +These programs are modelled after work I originally presented at the AGU Fall Meeting in December 2021 (a recording can be found [HERE](https://youtu.be/k1ors_mKxlo)) and are supplementary to a manuscript in review at *Remote Sensing*.
They are designed to segment vegetation from bare-Earth points in a dense point cloud, although they may also be used to segment any two classes that are visually distinguishable from each other by colour alone. The programs are meant to reclassify large and dense point clouds very efficiently, similar to the following (green points represent 'vegetation' and brown points represent 'bare-Earth'): -R G B color model of a coastal bluff near Port Angeles, WA +RGB color model of a coastal bluff near Port Angeles, WA -And reclassify the points, similar to the following: +model of a coastal bluff coloured by classification -model of a coastal bluff colored by classification +The out-of-the-box direct transferability of the pre-trained ML models is further demonstrated using a point cloud for Chimney Bluffs, NY (along Lake Ontario) previously published by the USGS (yellow points represent 'vegetation' and blue points represent 'bare-Earth'): -Green points represent 'vegetation' and brown points represent 'bare-Earth'. +RGB color model of a coastal bluff near Chimney Bluffs, NY + +model of a coastal bluff coloured by classification There are two approaches: @@ -192,16 +194,21 @@ python ML_veg_reclass.py ``` ## ML_veg_train.py + +The `ML_veg_train.py` program will read in the two training point clouds, account for any class imbalance, build an ML model, and train the ML model. + +Running `ML_veg_train.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_veg_train program + ### Inputs: -The following inputs are required for the ML_veg_train program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_veg_train.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1.
The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points 3. The output model name -The ML_veg_train program will read in the two training point clouds, account for any class imbalance, build a ML model, and train the ML model. - ### Outputs: All outputs will be saved in a directory with the following scheme: @@ -216,23 +223,24 @@ A plot of the model will also be saved as a PNG file (see example below), and a R G B model with one layer of 8 nodes ## ML_veg_reclass.py -### Inputs: -The following inputs are required for the ML_veg_reclass program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The `ML_veg_reclass.py` program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. -1. The point cloud to be reclassified -2. The h5 model file (can be generated using the ML_veg_train program) +Running `ML_veg_reclass.py` without any command line argument will automatically enable a simple graphical interface similar to this: -The ML_veg_reclass program will automatically read in the model structure, weights, and required inputs (including vegetation indices and geometry metrics) and will reclassify the input point cloud. +screenshot of the graphical interface for the ML_veg_reclass program -### Outputs: -The reclassified LAS/LAZ file will be saved in a directory with the following scheme: +### Inputs: -> results_{date} +The following inputs are required for the `ML_veg_reclass.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. -Where {date} is the date the model was created and is pulled from the computer clock. If this directory does not already exist then it will first be created. +1. The point cloud to be reclassified +2. 
The h5 model file (can be generated using the `ML_veg_train.py` program) -A new LAZ file will be generated in the results directory: +### Outputs: +The reclassified LAS/LAZ file will be saved in the same directory as the original point cloud. + +A new LAZ file will be generated with the following syntax: > (unknown)_{model_name}_{threshold_value}.laz @@ -243,11 +251,15 @@ Where *(unknown)* is the original point cloud file name, *{model_name}* is the ## ML_vegfilter.py -The ML_vegfilter program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud. The significant advantage of using a single program is eliminating the need to read the model file for reclassification. +The `ML_vegfilter.py` program will use the two training point clouds to generate a machine learning model with the user-specified arguments, and then use this model to reclassify the specified point cloud. The significant advantage of using a single program is eliminating the need to read the model file for reclassification. + +Running `ML_vegfilter.py` without any command line argument will automatically enable a simple graphical interface similar to this: + +screenshot of the graphical interface for the ML_vegfilter program ### Inputs: -The following inputs are required for the ML_vegfilter program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. +The following inputs are required for the `ML_vegfilter.py` program. If any of these options are not specified in the command line arguments, a pop-up window will appear for each. 1. The point cloud containing vegetation points only 2. The point cloud containing only bare-Earth points @@ -280,6 +292,15 @@ Wernette, Phillipe A. 2024.
Segmenting Vegetation from bare-Earth in High-relief } ``` +# OTHER PUBLICATIONS AND INFORMATION +Click [HERE](https://youtu.be/k1ors_mKxlo) to watch my presentation at the American Geophysical Union Fall Meeting in 2021 (New Orleans, LA). + +My manuscript in *Remote Sensing* is based on this original research and is currently available via Preprints.org: +> Wernette, P. Machine Learning Vegetation Filtering of Coastal Cliff and Bluff Point Clouds. Preprints 2024, 2024041387. https://doi.org/10.20944/preprints202404.1387.v1 + +Point clouds for coastal bluffs near the Elwha River mouth near Port Angeles, WA can be found [HERE](https://doi.org/10.5061/dryad.8pk0p2nww). +> Wernette, Phillipe (2024). Coastal bluff point clouds derived from SfM near Elwha River mouth, Washington from 2016-04-18 to 2020-05-08 [Dataset]. Dryad. https://doi.org/10.5061/dryad.8pk0p2nww + # REFERENCES [^1]: Meyer, G.E.; Neto, J.C. 2008. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 63, 282–293. [^2]: Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. 1995. Color Indices forWeed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. ASAE, 38, 259–269.
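The change to `src/tk_get_user_input.py` above fixes the default model name the GUI pre-fills: the list of vegetation indices is now passed through `str()` and stripped of brackets, spaces, and quotes before being joined with the flattened node counts. The standalone sketch below reproduces that string-building logic; the index names (`ExG`, `ExR`) and node counts are illustrative example values, not program defaults:

```python
# Sketch of the patched default-model-name logic from src/tk_get_user_input.py.
# Example index names and node counts below are assumptions for illustration.

def default_model_name(vegetation_indices, nodes):
    """Build the default output model name pre-filled by the GUI."""
    # Flatten the index list, stripping spaces, brackets, and quotes
    # (the quote-stripping is what the patch adds).
    idx = str(vegetation_indices).replace(' ', '').replace('[', '').replace(']', '').replace("'", '')
    # Flatten the per-layer node counts, joining layers with underscores.
    lyr = str(nodes).replace(',', '_').replace(' ', '').replace('[', '').replace(']', '')
    return 'model_' + idx + '_' + lyr

print(default_model_name(['ExG', 'ExR'], [16, 16]))  # model_ExG,ExR_16_16
```

Note that commas between index names survive the stripping, so multiple indices remain comma-separated in the generated name, while node counts are underscore-separated, matching the replacement chains in the patched line.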