Research Notes Supporting Development of a Drone Mapping and Object Counting Workflow

I am using this page to track research notes during development of an open-source drone mapping and object counting workflow. Newest notes are at the top. The clean workflow document is a work in progress and largely incomplete, but is available at the link below:

Links: Project Overview | Research Notes | Workflow Document

Status Summary November 27

The overall workflow is set up and tested with a car-counting model. This is the “drone photos to object counts” path:

  • Drone photos
  • -> WebODM orthophoto
  • -> QGIS raster layer
  • -> Objects detected and counted
  • -> Locations marked in a separate vector layer

If you want to count cars, you can reuse an existing model from the model zoo (quick and easy). If you already have an object detection model and some QGIS knowledge, the workflow is easy to set up and run.

For most applications, the greatest effort will be training an object detection model for your topic of choice. The second-greatest effort will be assessing and demonstrating the accuracy of the computer counts.
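That accuracy assessment can start simply: manually count objects in a few test areas and compare against the computer counts. A minimal sketch of the comparison (the function and all numbers below are my own placeholders, not real survey data):

```python
def count_accuracy(manual_count, detected_count, false_positives=0):
    """Compare a computer count against a manual ground-truth count.

    True positives are approximated as detected_count minus the
    detections judged to be false positives on manual review.
    """
    true_positives = detected_count - false_positives
    recall = true_positives / manual_count       # share of real objects found
    precision = true_positives / detected_count  # share of detections that are real
    return precision, recall

# Placeholder numbers for illustration only
precision, recall = count_accuracy(manual_count=100, detected_count=40, false_positives=5)
print(f"precision={precision:.2f}, recall={recall:.2f}")
```

Reporting both numbers matters for this project: a low recall (like the impound-lot test below) can look acceptable if only precision is shown.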

November 26

Ran a quick orthophoto of snow-covered vehicles in an impound lot. A total of 250-275 vehicles are visible in the orthophoto; Deepness found approximately 28 (about 10%).

Tweakable detection parameters in Deepness plugin:

  • Confidence threshold (default = 0.50)
  • IoU (Intersection over Union) threshold (default = 0.50)
  • Remove overlapping detections (default = enabled)

Lowering the confidence threshold to 0.4, 0.3, and 0.2 produced successively more detections (0.4 = 35, 0.3 = 40, 0.2 = 70). Most new detections were appropriate, but a few were partial vehicles.
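To make these two thresholds concrete, here is a sketch of what they control (my own illustration of the standard technique, not Deepness source code): the confidence threshold discards low-scoring boxes, and the IoU threshold decides when two overlapping boxes are treated as duplicates of the same object (greedy non-maximum suppression).

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(dets, conf_threshold=0.5, iou_threshold=0.5):
    """dets: list of (box, score). Drop low-confidence boxes, then keep
    only the highest-scoring box among heavily overlapping ones."""
    dets = [d for d in dets if d[1] >= conf_threshold]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
    return kept

# Two near-duplicate "cars" plus one weak detection far away
dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.3)]
print(len(filter_detections(dets, conf_threshold=0.5)))  # duplicates collapse to one box
print(len(filter_detections(dets, conf_threshold=0.2)))  # weak detection now survives too
```

This matches the behavior seen above: lowering the confidence threshold admits more (mostly legitimate) detections, while the IoU threshold keeps the count from inflating with duplicates.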

November 7

Working to understand the overall process of developing a custom YOLOv7 model via the Jupyter notebook “car_detection__prepare_and_train.ipynb”:

  • Install YOLOv7
  • Prepare dataset
  • Prepare YOLO configuration
  • Run training
  • Run testing
  • Export model to ONNX
  • Add metadata for QGIS Deepness plugin
  • Test model using QGIS
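For the “Prepare dataset” step, annotations must be in the YOLO text label format: one line per object, `class x_center y_center width height`, all normalized to the 0-1 range. A converter sketch of that format (my own illustration, not code from the notebook):

```python
def to_yolo_label(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate bounding box to a YOLO label line:
    class x_center y_center width height, normalized to [0, 1]."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x50 px car at pixels (200, 300)-(300, 350) in a 1000x1000 image
print(to_yolo_label(0, 200, 300, 300, 350, 1000, 1000))
# -> 0 0.250000 0.325000 0.100000 0.050000
```

Each image gets a `.txt` file of such lines with the same base name, which is how the labels directory pairs with the images directory mentioned below.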

YOLO Overview references


Training YOLOv7 model references

  • *

The training dataset requires:

  • Dataset directory with images and labels
  • Dataset YAML descriptor file, which goes in yolov7/data/MY_DATA.yaml
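The YAML descriptor itself is short. A minimal sketch for a one-class car dataset, written as plain text (the paths and class name are placeholders for my setup, not values from the notebook):

```python
# Write a minimal YOLO-style dataset descriptor (plain text, no YAML library needed).
# Paths and the class list are placeholders for illustration.
descriptor = """\
train: ./my_dataset/images/train
val: ./my_dataset/images/val
nc: 1
names: ['car']
"""

with open("MY_DATA.yaml", "w") as f:
    f.write(descriptor)

print(open("MY_DATA.yaml").read())
```

`nc` is the number of classes and `names` their labels; the train/val paths point at the image directories, with labels found by the matching `labels/` convention.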

NEXT: continue with Section 3.1 of

November 5

Deepness plugin setup:

  • Requirements setup
    • Python 3 – test from PowerShell: “python3” should open a Python session; exit with quit()
    • pip – test from PowerShell: “python3 -m pip” should show help info
  • Plugin installation
    • Use QGIS plugin manager
    • Had to install dependencies again from the plugin install dialog, but it worked that time
    • See screenshots
  • Model installation
    • Model zoo:
    • Find an ONNX model you want to use (e.g., Aerial Cars Detection) and click the link to download it.
  • Test
    • Open QGIS, new project
    • Add orthophoto as raster layer
    • Enable a QuickMapServices layer (OSM) for context
    • Launch Deepness plugin (control panel on right)

Deepness UI videos:

Car detection example:

Model training tutorial: the entire training process has been gathered in a Jupyter tutorial notebook: ./tutorials/detection/cars_yolov7/car_detection__prepare_and_train.ipynb


Jupyter 101:

Working with Jupyter Notebooks in VSCode:

November 3-4, 2023

Initial planning and documentation setup. Preliminary target for toolset is WebODM, QGIS, Deepness plugin, and DotDotGoose for developing training data.

“Open Source Drone Mapping and Object Detection Workflow” needs a much shorter, punchier working title.