This page tracks my research notes during development of an open-source drone mapping and object counting workflow. Newest notes are at the top. The clean workflow document is a work in progress and largely incomplete, but is available at the link below:
Status Summary, November 27, 2023
The overall workflow is set up and tested with a car-counting model. This is the “drone photos to object counts” path:
- Drone photos
- -> WebODM orthophoto
- -> QGIS raster layer
- -> Objects detected and counted
- -> Locations marked in a separate vector layer
If you want to count cars, you can re-use an existing model from the model zoo (quick and easy). If you have an existing object detection model and some QGIS knowledge, this is very easy to set up and run.
For most applications, the greatest effort will be training an object detection model for your topic of choice. The second greatest effort will be assessing and demonstrating the accuracy of the automated counts.
Ran a quick detection pass on an orthophoto of snow-covered vehicles in an impound lot. Roughly 250-275 vehicles are visible in the orthophoto; Deepness found approximately 28 (about 10%).
Tweakable detection parameters in Deepness plugin:
- Confidence threshold (default = 0.50)
- IoU (Intersection over Union) threshold (default = 0.50)
- Remove overlapping detections (default = enabled)
Lowering the confidence threshold to 0.4, 0.3, and 0.2 produced successively more detections (0.4 = 35, 0.3 = 40, 0.2 = 70). Most new detections were correct, but a few were partial vehicles.
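The interplay of these two thresholds can be sketched in plain Python. This is a simplified post-processing pass, not Deepness's actual implementation; the box coordinates and confidence values below are made up to illustrate the effect:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(dets, conf_thresh=0.50, iou_thresh=0.50):
    """Drop low-confidence boxes, then suppress overlapping duplicates (NMS)."""
    dets = sorted((d for d in dets if d["conf"] >= conf_thresh),
                  key=lambda d: d["conf"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept

# Two strong detections of the same car plus one weak detection elsewhere:
dets = [
    {"box": (0, 0, 10, 10), "conf": 0.9},
    {"box": (1, 1, 11, 11), "conf": 0.7},    # overlaps the first -> suppressed
    {"box": (50, 50, 60, 60), "conf": 0.3},  # below default confidence -> dropped
]
print(len(filter_detections(dets)))                   # 1
print(len(filter_detections(dets, conf_thresh=0.2)))  # 2
```

This matches the behavior observed above: lowering `conf_thresh` admits more (weaker) detections, while `iou_thresh` controls how aggressively near-duplicate boxes are merged.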
Working to understand the overall process of developing a custom YOLOv7 model via the Jupyter notebook “car_detection__prepare_and_train.ipynb”:
- Install YOLOv7
- Prepare dataset
- Prepare YOLO configuration
- Run training
- Run testing
- Export model to ONNX
- Add metadata for QGIS Deepness plugin
- Test model using QGIS
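The “add metadata” step exists because Deepness reads configuration from the ONNX file itself. A minimal sketch of that step is below; the key names follow the plugin's documentation, but the values are illustrative assumptions for a single-class car detector, not a tested configuration:

```python
import json

# Key/value pairs for Deepness to read from the exported ONNX file
# (values here are assumptions for a hypothetical single-class detector).
deepness_metadata = {
    "model_type": "Detector",                # detection (vs. segmentation) model
    "class_names": json.dumps({0: "car"}),   # class id -> label mapping
    "resolution": "10",                      # cm per pixel the model expects
    "det_conf": "0.5",                       # default confidence threshold
    "det_iou_thresh": "0.5",                 # default IoU threshold for NMS
}

def attach_metadata(onnx_model, metadata):
    """Write the pairs into the model's metadata_props. With the `onnx`
    package installed, usage would look like:
        model = onnx.load("yolov7_cars.onnx")
        attach_metadata(model, deepness_metadata)
        onnx.save(model, "yolov7_cars.deepness.onnx")
    """
    for key, value in metadata.items():
        prop = onnx_model.metadata_props.add()
        prop.key, prop.value = key, value
```

The tutorial notebook performs this step itself; the sketch is just to show what the “metadata for QGIS Deepness plugin” step actually writes.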
YOLO Overview references
Training YOLOv7 model references
Training dataset requires:
- Dataset directory with images and labels
- Dataset YAML descriptor file, which goes in yolov7/data/MY_DATA.yaml
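A YOLOv7 dataset descriptor is a small YAML file pointing at the image directories and listing the classes. The paths and class name below are illustrative assumptions for a one-class car dataset, not the tutorial's actual files:

```yaml
# yolov7/data/MY_DATA.yaml (illustrative example)
train: ../datasets/cars/images/train
val: ../datasets/cars/images/val
test: ../datasets/cars/images/test

nc: 1            # number of classes
names: ['car']   # class names, indexed from 0
```

Each image gets a matching `.txt` label file in a parallel `labels/` directory, one line per object in the form `class x_center y_center width height`, with coordinates normalized to 0-1.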
NEXT: continue with Section 3.1 of https://learnopencv.com/fine-tuning-yolov7-on-custom-dataset/
Deepness plugin setup: https://qgis-plugin-deepness.readthedocs.io/en/latest/main/main_installation.html
- Requirements setup
- Python3 – test from PowerShell: running “python3” should open a Python session (exit with quit())
- pip – test from PowerShell: “python3 -m pip” should show help info
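The two checks above can be done non-interactively as well (same commands work in PowerShell or any shell with python3 on PATH):

```shell
python3 --version         # should print something like "Python 3.11.x"
python3 -m pip --version  # should print pip's version and install location
```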
- Plugin installation
- Use QGIS plugin manager
- Had to install dependencies again from plugin install dialog, but it worked that time
- See screenshots
- Model installation
- Model zoo: https://qgis-plugin-deepness.readthedocs.io/en/latest/main/main_model_zoo.html
- Find an ONNX model you want to use. (e.g., Aerial Cars Detection). Click link to download it.
- Open QGIS, new project
- Add orthophoto as raster layer
- Enable a QuickMapServices layer (OSM) for context
- Launch Deepness plugin (control panel on right)
Deepness UI videos: https://qgis-plugin-deepness.readthedocs.io/en/latest/main/main_ui_video.html
Car detection example: https://qgis-plugin-deepness.readthedocs.io/en/latest/example/example_detection_cars_yolov7.html
Model training tutorial: the entire training process is gathered in a Jupyter notebook: ./tutorials/detection/cars_yolov7/car_detection__prepare_and_train.ipynb
Jupyter 101: https://www.dataquest.io/blog/jupyter-notebook-tutorial/
Working with Jupyter Notebooks in VSCode: https://python.land/data-science/jupyter-notebook#Jupyter_Notebook_in_VSCode
November 3-4, 2023
Initial planning and documentation setup. Preliminary target for toolset is WebODM, QGIS, Deepness plugin, and DotDotGoose for developing training data.
“Open Source Drone Mapping and Object Detection Workflow” needs a much shorter, punchier working title.