Sunday, May 8, 2016

Navigational Activity

Introduction:

This week our objective was to navigate through dense woods to a series of predetermined points, marked with flags, using the maps we made back in the navigational map design lab. For navigating we had a set of 5 UTM coordinate pairs and a GPS unit set to track both our route and the points (the flags).

Study Area

The study area was The Priory, a dorm hall off of the UWEC campus, set on 120 acres of forest surprisingly close to campus. The wooded area has trails, benches, and a variety of flora and fauna throughout. In addition to the plants and animals, The Priory has a variety of geospatial features including rivers, hills, V-shaped valleys, steep inclines, and quite a few sharp plants.

Figure 1. The Priory and the Priory Woods.
Methods

Figure 2. Dr. Hupy explaining the GPS unit.
The class met at the Priory parking lot; the sun was shining and there was a chance of rain, but we were excited nonetheless. After receiving our maps, which had been printed out for us, we attempted to determine where the points were using the UTM grid on our maps to make for easy navigation. We ran into a problem right away: one of our maps did not print, and inconveniently it was our UTM grid map. So with some quick thinking we pulled up the map on our phones and cross-referenced the points against the map we did have.

After marking the 5 pairs of UTM coordinates on the map and receiving a tutorial on how to mark each point once we reached it (Fig 2), we set off into the woods!

We circled the non-forested area until we found a trail that would, presumably, take us closer to the points; having never been to the Priory before, we were all a little lost. We made it into the interior of the woods rather quickly by following a trail marked with pink flamingos (no clue as to why they were out there). Where the trail stopped, the woods opened up and we started looking for our first point (Fig 3, 4).
Fig 3. The first clearing of the woods. 

Fig 4. Looking for our first point. 

We stopped to figure out the GPS and take another look at the coordinates (Fig 5). It took us a little while to work out which direction of travel caused each set of numbers to increase or decrease, and therefore which way to walk to close in on a point.

Figure 5. The GPS and the Coordinates. 
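A handy way to think about it: UTM easting always increases to the east and northing to the north, so subtracting your current position from the target gives a direction vector right away. A minimal sketch of the idea (the coordinate values below are made up for illustration, not our actual points):

```python
import math

def bearing_to_target(cur_easting, cur_northing, tgt_easting, tgt_northing):
    """Compass bearing (degrees clockwise from grid north) from the
    current UTM position to the target point."""
    d_e = tgt_easting - cur_easting    # positive: target is to the east
    d_n = tgt_northing - cur_northing  # positive: target is to the north
    # atan2 takes (east, north) here so 0 deg = north, 90 deg = east
    return math.degrees(math.atan2(d_e, d_n)) % 360

# A flag 500 m east and 500 m north of us lies to the northeast (~45 deg)
print(bearing_to_target(618000, 4963000, 618500, 4963500))
```

Walking so that both the easting and the northing converge toward the target's values amounts to following this bearing.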


After losing the trail, and beginning to make a new one of our own, we figured out how to get to our first point (Fig 6). The vegetation along the way was quite thick; many small, narrow young trees stood in our path and had to be pushed through, along with many bushes and branches bearing very sharp thorns.



Fig 6. The first point, marked with a flag.
The second point was not close by at all, and after backtracking and stumbling around for a while we began to head in the correct direction, which was down into a valley (Fig 7).



Fig 7. The beginning of the trip into the valley. 
Once into the valley, the going got a little easier: the brush cleared out and only tall, mostly dead trees cluttered the area. Many of the dead trees had fallen over and lay in our path, so we had to climb over (and under) them to get to the second point.

Fig 8. Point 2, under a fallen log off the main branch of the valley.
For our third point we had to continue traveling along the base of the valley toward the interstate to locate the flag. We got close a few times before realizing that in order to reach the point we would have to change our elevation (Fig 9). This seemed like an easy task, but the hillside we had to climb was steep and fragile; every step forced us to rebalance. Additionally, many dead trees stood on the hill, so reaching for support was a bad idea; a few were so poorly rooted that they fell right over as we passed. And of course there were more thorny plants. After a while we reached the top of the hill (Fig 10).

Fig 9. The uphill climb, about halfway up the hillside. The photo does not do the hill's incline justice.

Figure 10. Point 3: dense vegetation at the top of the hill and a lot of thorny plants.

Now we had two points left; we were getting a little tired and hot from all that climbing, but we pressed on. After climbing out of the other side of the valley we backtracked (unknowingly) to point one, and from there headed toward point 4. The backtracking was considerably farther than the distance we should have needed to cover to reach the 4th point, and largely inefficient, as we climbed out of the valley only to descend into a different one. But at least this time we had a stream (Fig 11).

Fig 11. Point 4, beside a stream.
After point 4, things broke down a little. We were tired and hot and wanted a break, but we knew we had just one last point to find before we could go home. So we set off, and promptly went the wrong way for a while (about 35-45 minutes!), which did not help morale. But we did get to see a lot of the Priory grounds, so we had that going for us.

For the 5th point we actually had to get back into the valley we had climbed out of, walk about 100 meters back past point 4 toward the highway, climb another steep slope, and then wander on a hilltop for a little while.

As we became more and more tired and hot, we all went direction-blind and got disoriented. None of us had known where to go to begin with, and now our sense of direction had eroded away as well. After what seemed like an eternity atop that hill, we finally stumbled on the 5th point* (*not pictured; we were hot, tired, and forgetful).

Results/Discussion

After a long trek out of the woods we got to go home. As you can see from our track log in the map below, we went all over the place. One thing to note is that the tracking log got turned off for an unknown reason during the activity, so what is represented here is only part of the track log.
Figure 12. The final Priory map. The green dots represent the tracking log, and the colored dots represent the various flags we had to navigate to. The red square around the outside of the map represents the boundaries of our activity. 


Conclusions:

During this activity we had a great group dynamic, and we worked together well. The points were frustrating to find on more than one occasion, but we never blamed each other; we just kept going. A positive attitude is important when navigating, because many issues come up, and to push on, one has to keep the end goal in mind. That said, the error in this project was entirely human and procedural. The issues we faced could have been avoided, and will be the next time we are in the field navigating, whether that means not having the correct map, making mistakes while reading the GPS, or accidentally turning off the tracking log. This run was a great example of what can be done and what not to do. In the end we did find all 5 points, even though it took us the longest of all the groups. But we did not give up, and we persevered in the end.

Sunday, May 1, 2016

Processing UAS data in Pix4D

This post will be slightly different from the ones you are used to seeing if you follow this blog. Instead of just a technical report, this post is broken into three components: background on Pix4D and some questions to get familiar with the software and its capabilities, followed by a walkthrough of using the software, and finally a few maps that were end products of the Pix4D workflow.

Part 1: Getting familiar with Pix4D, what is required to use the program and how Pix4D processes Data.

  1. A drone flies over an area and takes aerial photos of locations.
  2. The Pix4D software matches overlapping pixels in the photos to key points on the ground.
  3. Photogrammetry extracts geometry to calculate the camera position at the moment each photo was taken, and from that is able to create 3D maps and images.
Via the Pix4D software manual, here are a few questions to help us understand the prerequisite material needed to get started with Pix4D.
  • What is the overlap needed for Pix4D to process imagery?
    1. "In order to automatically get high accuracy results, a high overlap between the images is required. Therefore, the image acquisition plan has to be carefully designed in order to have enough overlap. The image acquisition plan depends on the required GSD [Ground Sampling Distance] by the project specifications and the terrain type / object to be reconstructed" (Index, Step 1). The manual also indicates that "a bad image acquisition plan will lead to inaccurate results or processing failure and will require to acquire images again".
    2. The first question that came to mind was, "what is an image acquisition plan?" The manual answered that question as well.
Fig 1. How To Design an Image Acquisition Plan. 
  • What if the user is flying over sand/snow, or uniform fields?
    • For the general case, the manual recommends an image acquisition plan with at least 75% frontal overlap (with respect to the flight direction) and at least 60% side overlap between flight lines, along with a flight pattern that will work for most cases.
Fig 2.
    •  In the case of snow or sand, which have "little visual content due to large uniform areas...a high overlap [of] at least 85% frontal overlap and at least 70% side overlap [is needed]. Set the exposure settings accordingly to get as much contrast as possible in each image".
  • What is Rapid Check?
    • Rapid Check is an alternative processing option: instead of doing the full processing, Rapid Check determines whether the images obtained provide sufficient coverage to be processed.
  • Can Pix4D process multiple flights? What does the pilot need to maintain if so?
    • In order to process multiple flights, a sufficient number of overlapping points is needed. Fig 3 and Fig 4 below show how many points need to overlap between flights, and what those flight photos may look like when multiple flights are combined for processing.
    Fig 3. The correct conditions under which multiple flights can be processed; only when enough points overlap can the images be combined.

    Fig 4. An example of overlapping photography from multiple flights.
  • Can Pix4D process oblique images? What type of data do you need if so?
    • Yes, Pix4D can process oblique images. To do so, the camera must take multiple pictures of the object: one from above, one at a 45 degree angle to the ground, and one at a 90 degree angle to the object (Fig 5).
    Fig 5. How Oblique images are captured.
  • Are GCPs necessary for Pix4D? When are they highly recommended?
    • While Ground Control Points are not necessary for Pix4D, using them is highly recommended to increase the absolute accuracy of the project. The Pix4D manual notes that GCPs place the model at its exact position on the earth, reducing the shift due to GPS from meters to centimeters. In projects without image geolocation, GCPs are required if georeferenced outputs are needed.
  • What is the quality report?
    • The quality report is generated after the points captured by the aerial drone are processed in Pix4D. It contains the following information:
      1. Quality Check: verifies that all or almost all of the images are calibrated in one block.
      2. Preview: for projects using nadir images or orthomosaics, verifies that the orthomosaic does not contain holes or distortions and that it is in the correct position and orientation with respect to the Ground Control Points.
      3. Initial Image Positions: verifies that, if the images were geolocated, the image positions figure corresponds to the flight plan.
      4. Computed Image/GCPs/Manual Tie Points Positions: verifies that the computed image geolocation is good and that the calculated GCP error is low.
      5. 3D Points from 2D Keypoint Matches: verifies that enough matching points exist between multiple images.
      6. Geolocation Details: if using GCPs, verifies that all GCPs are taken into account and verified.
      7. Processing Options: verifies that, if using GCPs, the coordinate system of the GCPs is correct, and that the image coordinate system is correct.
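The overlap recommendations above translate directly into spacing numbers when designing an image acquisition plan: the distance between triggers along a line, and between adjacent lines, is just the image footprint times one minus the overlap fraction. A rough planning sketch (the 100 m x 75 m footprint is a made-up example, not a figure from the manual):

```python
def flight_spacing(footprint_along_m, footprint_across_m,
                   frontal_overlap, side_overlap):
    """Spacing between photo centers along a flight line, and between
    adjacent flight lines, for a given ground footprint and target
    overlap fractions."""
    base = footprint_along_m * (1 - frontal_overlap)   # trigger spacing
    line = footprint_across_m * (1 - side_overlap)     # line spacing
    return base, line

# General case: 75% frontal / 60% side overlap
print(flight_spacing(100, 75, 0.75, 0.60))
# Snow or sand: tighten to 85% frontal / 70% side, so lines pack closer
print(flight_spacing(100, 75, 0.85, 0.70))
```

The tighter overlaps for uniform surfaces roughly double the number of images for the same area, which is the price of having enough matchable key points.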
Part 2: Using the Pix4D program to generate orthomosaics. A Review

To start a new project in Pix4D, one just needs to go to start new project. It is important to note that in Pix4D, as in every program with the ability to set the workspace, the workspace must be set correctly so that all of the files generated during the analysis are placed properly.

Next we want to "Add Images" (Fig 6, below), and copy all of the images from the drone flight into Pix4D. 
Fig 6.  The "Add Images" function in Pix4D.
After adding all the images, a message appears stating "reading EXIF data", which means the Pix4D program is reading the metadata attached to the image files by the drone. The drone that was used automatically tags the various metadata in the EXIF data as the images are taken, so we do not have to do anything here (Fig 7).

Fig 7. The metadata of the image files in Pix4D.
Pix4D then displays all of the points; each point represents an image that has been geolocated (Fig 8).

Fig 8.
After uploading the images, the kind of project to be created must be decided. In this case we will be creating a new 3D map (Fig 9).

Fig 9. The options of projects that one can select in Pix4D.

Now that the data is uploaded and ready to be processed, we need to run the initial processing, or the Rapid Check, to make sure our data is intact enough to process and turn into a map. First we need to uncheck "Point Cloud and Mesh" and "DSM, Orthomosaic and Index". If we knew the data was good enough to generate the second and third outputs, we could leave those checked right away.

Up until this point the program was a breeze to use, and Pix4D remains easy to understand; however, the initial processing required 1 h and 11 min. After the Initial Processing was completed, the Point Cloud and Mesh and the DSM, Orthomosaic and Index steps also needed to be run, which likewise took a long time to process.

After the Initial Processing, a Quality Report was generated. Below are the Summary, Quality Check, and Preview from the Quality Report. From it we can see that all 80 images were used in the analysis, with none rejected.
Fig 10. Image taken from the Quality report. 
We can also examine how the various images taken by the drone overlapped, and where the overlap was done well (5+ images) versus where poor overlap areas exist (1 image) (Fig 11).

Fig 11. The number of overlapping images in the Quality Report. On a scale of 1 to 5, with 1 being the worst and 5 the best, overlap quality is shown above as a color map.
In order to review the program, Pix4D's functions and features had to be tested. To do that, a few objectives were chosen that would show off the program's capabilities.
  • Calculate the area of a surface within the Ray Cloud editor. Export the feature for use in your maps
    • Calculating the surface area of an object within the rayCloud editor was fairly straightforward: go to rayCloud > New Surfaces, then click to add the vertices of the object whose surface you are measuring. Right-clicking creates an end vertex, which tells the program to calculate the surface area inside the shape you just created. The measurements of the object and its vertices are displayed in the properties area on the right-hand side of the screen. It is important to note that the vertices have to be cross-verified in 2 of the images in the "Images" section below the measurements area; without verifying the vertices, the measurements will not display. Exporting the feature was as easy as right-clicking on the object in Pix4D and choosing "export" (Fig 17).
  • Measure the length of a linear feature in the Ray Cloud. Export the feature for use in your maps.
    • Similar to the surface area calculator, the line length calculator is under rayCloud > New Polyline. This feature is actually easier to use: just click to create vertices and right-click to create an end vertex, which stops the drawing phase and allows Pix4D to measure the length of the line created (Fig 16, baseball diamond; Fig 18, track field).
  • Calculate the volume of a 3D object. Export the feature for use in your maps
    • Calculating the volume of an object actually was challenging. The process is straightforward once figured out, but it took many attempts of trial and error to determine the correct procedure. As with the other rayCloud tools, clicking on the image creates vertices and right-clicking creates an end point, at which time Pix4D stops drawing and attempts to determine the measurements of the shape. What is important to note, and was not found in the directions until after the task was completed, is that the vertices have to be verified just as with the surface area feature; the base of the object can also exist in multiple planes, and the user must determine which plane that is. The other confusing aspect of this tool was that after the vertices are placed, one must click "Update Measurements" for the object to be correctly filled in and for Pix4D to determine a volume (Fig 17).
  • Create an animation that 'flys' you through your project. 
    • In the rayCloud editor, select the new Animation Trajectory feature. This feature is really fun to use and lets you create a fly-through of your project in two different ways. The first is to see the view from the drone that captured the images, by selecting "Computed Camera Positions and Orientation" in the New Video Animation Trajectory window when it comes up. The second is to have the software create a video from points in space that the user selects, via the "User Recorded Views" option. Once selected, the user clicks "record view" at whichever point and camera angle they like, and the Animation Trajectory records that perspective; with enough points selected, it creates a fly-through using those perspectives (see below). This is a fantastic feature of the software, letting the 3D imagery be seen in a way that flying a drone would not necessarily lend itself to without a practiced hand.
Fig 12. Fly through animation of the track field.


Fig 13. Fly Through animation of the baseball diamond. 


Part 3: Maps

The first of the 3D maps we will examine is the baseball field (Fig 13-17), mostly because the baseball diamond has a better 3D component to it, with structures, lights, bleachers, etc., that the track field does not have. Using the polyline tool in Pix4D, a measurement between the bases was taken. Knowing that a regulation baseball field has 90 feet between bases, a measurement of 17.56 meters (57.6 feet) raised the question of what kind of baseball field this actually was. A quick search indicated that standard little league fields have about 60 feet between bases (Fig 14). To check this assumption, a second polyline measurement was taken from the pitcher's mound to home plate (Fig 16). That measurement came back as 13 meters, which is about 42.6 feet. Both measurements are very close to the dimensions of a little league field.
Fig 14. Regulation Size of a standard Little League Field.
After bringing the image of the field into ArcScene and setting the base height of the orthophoto tif to the height of the DSM tif, the 3D components of the field could be modeled. With a relative scale taken from the polyline measurements, a map can now be made in ArcMap with the exported picture of the baseball diamond from ArcScene (Fig 15 and 16).


Fig 15. In order to make the map 3D in ArcScene, you must add the orthomosaic tif file and then set its base height to the DSM tif file that also came out of the project. Failing to set the correct base height in ArcScene will result in the map either not becoming 3D or becoming extremely distorted and unrecognizable.

All of the measurements in Pix4D were accompanied by error estimates, but even so, the measurements were off, though not by a lot. Both of the line measurements for the baseball field were off by about 3 feet, which is not too bad, all things considered, for 3D imaging software measuring distances from photos taken by an aerial drone. As long as measurement error is consistent and known, it can be accounted for and corrected.
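Since the line error looked roughly consistent, one simple correction is a scale factor derived from a feature of known size: distances scale by the factor, areas by its square, and volumes by its cube. A sketch, using the 60 ft (18.29 m) little-league base path as the reference:

```python
def scale_factor(known_m, measured_m):
    """Linear correction factor from a reference feature of known size.
    Multiply distances by k, areas by k**2, and volumes by k**3."""
    return known_m / measured_m

# Base path: known 18.29 m (60 ft), Pix4D polyline measured 17.56 m
k = scale_factor(18.29, 17.56)
print(round(k, 3))               # linear factor, ~1.042 (about 4% short)
print(round(17.56 * k, 2))       # corrected base path, back to 18.29 m
print(round(233.68 * k**2, 1))   # reported infield area rescaled by k^2
```

This only helps, of course, if the error really is a uniform scale error rather than something local to each measurement.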

Fig 16. The baseball diamond with a relative scale. The red line is a known distance of 17.56 meters (57.6 feet) based on a polyline measurement in Pix4D; this corresponds to the 60 feet between bases on a little league field. To confirm the size of the field, a second polyline measurement was taken: the black line, measuring 13 meters (42.6 feet), the distance between the pitcher's mound and home plate.


The surface area and volume measurements are more than likely off by a similar factor, and because area scales with the square of distance, any base-distance error is compounded in them. We know the distance between bases is 18.29 meters (60 feet), so 18.29 m squared, about 334.5 square meters (3,600 square feet), gives a rough upper estimate of the infield's surface area. This is an overestimate, since the infield is not a perfect square, but it still leaves the Pix4D figure short by roughly 1,100 square feet. Working backwards from the reported 233.68 square meters (2,515.31 square feet), the implied side length is about 15.29 meters (50.2 feet). If we assume the polyline could be off by 1 meter (about 3 feet) per side and correct for that, we get about 16.29 meters squared, or roughly 265.3 square meters (2,856 square feet), which is still short by some unknown amount because of the cut corners of the infield. Using these two values as bounds, the infield's actual surface area likely lies between roughly 265 square meters and 334.5 square meters (3,600 square feet): probably more than the lower bound and less than the upper.
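The bounding arithmetic can be laid out explicitly. This sketch recomputes the implied side length directly from the reported area and brackets the true infield area between a corrected estimate and the perfect-square upper bound (the 1 m per-side shortfall is an assumption, not a measured quantity):

```python
import math

reported_area = 233.68                   # infield surface area from Pix4D, m^2
implied_side = math.sqrt(reported_area)  # side length implied by that area, ~15.29 m

# Upper bound: treat the infield as a perfect 60 ft (18.29 m) square
upper = 18.29 ** 2                       # ~334.5 m^2 (3600 sq ft)

# Lower bound: assume each polyline side was measured ~1 m short
lower = (implied_side + 1) ** 2          # ~265.3 m^2

print(round(implied_side, 2), round(lower, 1), round(upper, 1))
```

The true area should fall between the two bounds: above `lower` because the 1 m correction is conservative, and below `upper` because the infield's corners are cut.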

What this actually comes down to is uncertainty in the measurements and data. While we can get rough estimates with the method employed here, we cannot do any detailed analysis unless we determine what is causing the error in our measurements. The cause could be Pix4D itself, the aerial image quality and the drone's ability to capture fine detail, or user error.

The volume measurement also has an error, but this is probably because the fine detail of the dugout in Pix4D was lacking and had to be estimated; this resulted in the vertices being placed in incorrect positions, which threw off the volume. But we really don't have an idea of by how much.
Fig 17. The baseball field again, this time with surface area (green) and volume (blue) measurements. The dugout with the volume measurement may be hard to see; the specified dugout is in the back, at the second of the two fields. The surface area of the infield was measured at 233.68 sq meters (2515.31 sq feet). The volume of the dugout was measured at 34.28 cubic meters (1210.59 cubic feet); both measurements were taken in Pix4D.


The school track did not have as many defining features for 3D mapping, but it underwent the same process as the baseball diamond. Here again the polyline measurement is incorrect when compared to known distances: the distance the green line represents is 100 meters (328.04 feet), but it comes out as 30.12 meters (98.82 feet) in Pix4D.
Fig 18. Track field, a polyline measurement taken in Pix4D, the green line is 30.12 meters (98.82 feet) long.
Final Critique:

Overall, Pix4D is an amazing tool capable of rendering stunning 3D maps and images with great interactive features. The program is very easy to use and understand as long as the manual is handy. However, it requires a significant time investment while the program runs to completion, and without the manual it is not very intuitive for a first-time user; then again, with the time it takes to process projects, you will have plenty of time to read the manual and get familiar with the software. The last problem is with the measurements: the fact that all of the measurements are slightly off is troubling, and results in the program receiving only a passing grade for its ability to answer geospatial analysis questions. The error is probably fixable; it is just not known at this time where it is occurring or how it would be fixed. As with any such workflow, the fault will lie in the software itself, the platform the aerial imagery was captured from, or the user's ability to operate the program.

Overall, Pix4D is great and works well to map and render 3D images, and it has a great many uses and applications that can serve industries including construction, mining, urban planning, and many more, given the right imagination and skill.

Sources:

Little League baseball diamond specifications: http://www.littleleague.org/Assets/forms_pubs/50-70-FieldConversion.pdf
Pix4D website: https://pix4d.com/