Public Lab Research note


OpenDroneMap, OpenAerialMap, and MapKnitter

by smathermather | June 08, 2015 14:50 | #11955

A great post from Jeff Warren a couple of weeks ago asked what the opportunities are for integration between the great work being done on MapKnitter, OpenAerialMap, and OpenDroneMap.

So, I have a blog post giving my perspective on integration between the three, but the summary of the MapKnitter/OpenDroneMap love is as follows:

Ways MapKnitter may help OpenDroneMap:

  • MapKnitter’s clever use of Leaflet to handle affine transformation of images is really exciting, and may help with improving final georeferencing for OpenDroneMap datasets.

  • Regarding the above, one really useful thing for fliers launching balloons, drones, and kites without GPS would be the ability to quickly and easily perform rough georeferencing. I envision a workflow where a user moves an image to its approximate position and size relative to a background aerial; OpenDroneMap could then take advantage of this approximate georeferencing to optimize matching (see the affine-fitting sketch after this list).
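To make that idea concrete, here is a minimal sketch (not MapKnitter's actual code) of how an approximate placement could be turned into an affine geotransform: the user-dragged corner positions give pixel-to-map point pairs, and a least-squares fit recovers the transform. The coordinates below are made up for illustration.

```python
# Minimal sketch, assuming the user has dragged an image's corners to
# approximate map positions: fit the 2D affine transform mapping pixel
# coordinates to world coordinates by least squares.
import numpy as np

def fit_affine(pixel_pts, world_pts):
    """Least-squares affine fit: world ~= A @ [x, y, 1]."""
    pixel_pts = np.asarray(pixel_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    ones = np.ones((len(pixel_pts), 1))
    design = np.hstack([pixel_pts, ones])      # one [x, y, 1] row per point
    coeffs, *_ = np.linalg.lstsq(design, world_pts, rcond=None)
    return coeffs.T                            # 2x3 affine matrix

# Image corners in pixels, and where the user roughly placed them (lon, lat).
corners_px = [(0, 0), (4000, 0), (4000, 3000), (0, 3000)]
corners_map = [(-81.70, 41.50), (-81.69, 41.50), (-81.69, 41.49), (-81.70, 41.49)]
affine = fit_affine(corners_px, corners_map)
print(affine)  # a rough geotransform ODM could take as an initial guess
```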

Ways OpenDroneMap could benefit MapKnitter:

  • For large image datasets, manually matching images can be very tedious. Automatic feature extraction and matching can help: OpenDroneMap could be adapted to serve match information back to MapKnitter to ease this process (a matching sketch follows this list). This will become increasingly important as MapKnitter raises its current ~60-image processing limit.

  • A near-future version of OpenDroneMap will have image blending / smoothing / radiometric matching. For the server portion of the MapKnitter infrastructure, this feature could be a really useful addition for producing final mosaics (a toy blending sketch also follows this list).
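For the feature extraction and matching item, here is a rough sketch of the kind of point pairings ODM could serve back, using OpenCV's ORB detector and a brute-force matcher. This is an illustration, not ODM's actual pipeline, and the filenames are placeholders.

```python
# Sketch: detect interest points in two overlapping photos and pair them up.
import cv2

# (Assumes the files exist; cv2.imread returns None otherwise.)
img_a = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_a, desc_a = orb.detectAndCompute(img_a, None)
kp_b, desc_b = orb.detectAndCompute(img_b, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)

# The point pairings MapKnitter would receive: pixel coords in each image.
pairs = [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:50]]
```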
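And for the blending/smoothing item, a toy sketch of one simple approach, distance-from-edge feathering, where overlapping pixels are weighted by how far they sit from their source image's border. ODM's eventual implementation may well use something more sophisticated.

```python
# Toy feathered blending sketch: weight each pixel by distance from its
# image's edge, then take the weighted average wherever images overlap.
import numpy as np

def feather_weight(h, w):
    """Weight map that falls off linearly toward the image border."""
    ys = np.minimum(np.arange(h), np.arange(h)[::-1])[:, None]
    xs = np.minimum(np.arange(w), np.arange(w)[::-1])[None, :]
    return np.minimum(ys, xs).astype(float) + 1.0

def blend(tiles):
    """tiles: list of (rgb_image, weight) pairs already placed on the same
    mosaic grid; returns the weighted average where they overlap."""
    num = sum(img * w[..., None] for img, w in tiles)
    den = sum(w for _, w in tiles)
    return num / np.maximum(den[..., None], 1e-9)
```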


4 Comments

Super! Lots to think about here, but one thing that floats to the top of my mind is: what would ODM need in terms of inputs to generate feature extraction and matching, and what would it return? Input might be just a collection of images, I guess? Would it return interest point matches via REST, and perhaps therefore image pairings as well? MapKnitter could in theory submit a list of image URLs and get back a list of interest point pairings.
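Something like this purely hypothetical exchange is what I have in mind (the endpoint, payload shape, and field names are all invented for illustration, not an existing ODM API):

```python
# Hypothetical sketch: MapKnitter posts image URLs, ODM returns match pairs.
import requests

payload = {"images": [
    "https://mapknitter.org/images/1001.jpg",
    "https://mapknitter.org/images/1002.jpg",
]}
resp = requests.post("https://odm.example.org/api/matches", json=payload)
resp.raise_for_status()

# Imagined response shape: which images pair up, plus matched pixel coords:
# [{"pair": [1001, 1002],
#   "points": [[[512.3, 88.1], [47.9, 102.4]], ...]}, ...]
for match in resp.json():
    print(match["pair"], len(match["points"]), "matches")
```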

Is there a standard way to checksum or identify interest points that have been found, so MK could store them itself?

If that could work, we could do it at low resolution and use it to organize images on the client side in MK. For example, we could do a rough placement, and/or we could auto-place new images which matched old interest points.

Brain churning here... great stuff, Stephen! Thanks!




Hi @warren,

Yes, ODM would need just images, ideally with geo info as well. I suppose the most useful thing for ODM to return would be transformation matrices based on the matches, but if MapKnitter can take point pairings, then that's certainly the easiest thing for ODM to return. What do matches look like now in MK?

I'm not sure what you mean by checksums for interest points.

Love the idea of low-res manipulation with MK. That could make for some pretty quick workflows.




We don't currently use matches in MK, but the very initial thing I was going to suggest is to simply visually link the currently selected (or dragging) image's interest points with their matches in nearby images, with very fine yellow lines. This would also potentially reduce the search space for matches -- submit a request just for the current image and any nearby images.

The idea for checksums as unique ids was that I'm not sure how standard interest point descriptors are -- but if there's a way to get and store the interest point description itself from the request, pair finding could be done on the MK end, in the client-side, for efficiency.

That is, we wouldn't have to re-generate interest points, but could store them in the image objects, and do local pair-finding.

It'd be especially nice if, in addition, interest point descriptors were standard across algorithms, and interest points could be stored in image metadata too, making future matches faster... but I think IPs can be calculated with different methods, so that probably won't work.
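For the checksum idea, a sketch of what I mean: hash each descriptor's raw bytes and tag the hash with the algorithm that produced it, since, as above, descriptors aren't standard across methods. (The function name and scheme here are just illustrative, not MK code.)

```python
# Sketch: stable IDs for interest points, so MK could store and dedupe them.
import hashlib

def interest_point_id(descriptor_bytes, algorithm="orb"):
    """Hash one descriptor's bytes; the ID is only stable within a single
    algorithm and parameter set, so tag it with the algorithm name."""
    digest = hashlib.sha1(descriptor_bytes).hexdigest()
    return f"{algorithm}:{digest}"

# e.g., for ORB descriptors from OpenCV (each row is one descriptor):
# ids = [interest_point_id(d.tobytes()) for d in desc_a]
```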



Ah, checksums are a brilliant way to handle that.

If memory serves, matches in ODM are stored as image pairs plus pixel/line coordinates for each match. Rotations and translations are only calculated once matches are back-traced in relative space to 3D camera positions, but the image pairs plus pixel/line matches could be used to calculate 2D transformations.
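As a sketch of that last step (illustrative coordinates, with OpenCV's RANSAC similarity estimator standing in for whatever ODM would actually expose):

```python
# Sketch: estimate a 2D transform from an image pair's pixel/line matches.
import numpy as np
import cv2

# Matched pixel/line coordinates in image A and image B (made-up data).
pts_a = np.array([[100, 200], [850, 240], [400, 900], [700, 650]], dtype=np.float32)
pts_b = np.array([[120, 180], [872, 215], [415, 885], [718, 628]], dtype=np.float32)

# 2x3 matrix (rotation, scale, translation) mapping A's pixels onto B's,
# with RANSAC flagging outlier matches.
matrix, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC)
print(matrix)
```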


