Sunday, May 8, 2016

Navigational Activity

Introduction:

This week our objective was to navigate through dense woods to a series of predetermined points, marked with flags, using the maps we made back in the navigational map design lab. For navigating we had a set of 5 pairs of UTM coordinates and a GPS unit set to track our route as well as the points (the flags). 

Study Area

The study area was The Priory, a residence hall just off the UWEC campus, set on 120 acres of forest surprisingly close to campus. The wooded area has trails, benches, and a variety of flora and fauna throughout. In addition to the plants and animals, the Priory has a variety of geospatial features including rivers, hills, V-shaped valleys, steep inclines, and quite a few sharp plants. 

Figure 1. The Priory and the Priory Woods.
Methods

Figure 2. Dr Hupy explaining the GPS Unit.
The class met at the Priory parking lot; the sun was shining and there was a chance of rain, but we were excited nonetheless. After receiving our maps, which had been printed out for us, we attempted to determine where the points were using the UTM grid on our maps to make for easy navigation. We ran into a problem right away, as one of our maps did not print, and inconveniently the map that did not print correctly was our UTM grid map. So with quick thinking we pulled up the map on our phones and cross-referenced the points to the map we did have.  

After marking the points of the 5 pairs of UTM coordinates and receiving a tutorial on how to mark each point once we got to it (Fig 2), we set off into the woods! 

We circled the non-forested area until we found a trail that would, presumably, take us closer to the points; having never been to the Priory before, we were all a little lost. We made it into the interior of the woods rather quickly by following a trail marked with pink flamingos (not a clue as to why they were out there). When the trail stopped, the woods opened up and we started looking for our first point (Fig 3, 4).
Fig 3. The first clearing of the woods. 

Fig 4. Looking for our first point. 

We stopped to figure out the GPS and take another look at the coordinates (Fig 5). It took us a little while to work out which direction of travel caused each set of numbers to increase or decrease, and therefore which way was the correct direction. 
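The direction check we fumbled through can actually be scripted: given a current and a target pair of UTM coordinates, the differences in easting and northing give a compass bearing and distance directly. A rough sketch, with made-up coordinates rather than our actual points:

```python
import math

def utm_heading(current, target):
    """Compass bearing (degrees clockwise from north) and straight-line
    distance (meters) from one UTM position to another.
    UTM easting increases to the east, northing to the north."""
    d_east = target[0] - current[0]
    d_north = target[1] - current[1]
    distance = math.hypot(d_east, d_north)
    # atan2(east, north) gives a bearing measured clockwise from north
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360
    return bearing, distance

# Hypothetical points: the target flag is 300 m east and 400 m north of us
bearing, dist = utm_heading((617000, 4962000), (617300, 4962400))
print(round(bearing, 1), round(dist, 1))  # 36.9 500.0
```

In the field this amounts to the rule we eventually discovered: if the first number (easting) needs to go up, walk east; if the second (northing) needs to go up, walk north.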

Figure 5. The GPS and the Coordinates. 


After losing the trail, and beginning to make a new one of our own, we figured out how to get to our first point, below (Fig 6). The vegetation on the way to this point was quite thick; many small, narrow young trees were in our way and had to be traversed, in addition to many bushes and branches with very sharp thorns.



Fig 6. The first point, marked with a flag.
The second point was not close by at all, and after backtracking and stumbling around for a while we began to head in the correct direction, which was down into a valley (Fig 7).



Fig 7. The beginning of the trip into the valley. 
Once into the valley, the going got a little easier; the brush cleared out and only tall, mostly dead trees cluttered the area. Many of the dead trees had fallen over and lay in our path, so we had to climb over (and under) them to get to the second point. 

Fig 8. Point 2, under a fallen log off the main branch of the valley.
For our third point we had to continue down the base of the valley toward the interstate to locate the flag. We got close a few times and realized that in order to reach the point we would have to change our elevation (Fig 9). This seemed like an easy task, but the hillside we had to go up was steep and fragile; every step we took forced us to rebalance. Additionally, many dead trees were on the hill, so reaching for support was a bad idea, as a few of them were so poorly rooted that they fell right over as we passed. And of course there were more thorny plants. After a while we got to the top of the hill (Fig 10).

Fig 9. The uphill climb, about halfway up the hillside. The hill's incline was not captured with justice here.

Figure 10. Point 3. Dense vegetation at the top of the hill and a lot of thorny plants.

Now we had two points left. We began to get a little tired and hot from all that climbing, but we pressed on. After climbing out of the other side of the valley we backtracked (unknowingly) to point one, and from there we went toward point 4. The backtracking was quite a bit farther than the distance we should have needed to travel to reach the 4th point, and was largely inefficient, as we climbed out of the valley only to descend into a different valley, but at least this time we had a stream (Fig 11).

Fig 11. Point 4 with a stream
After point 4, things broke down a little. We were tired and hot, and we wanted to take a break, but we knew we just had to find one last point in order to go home. So we set off, but we ended up going the wrong way for a while (about 35-45 minutes!), which did not help the morale situation. But we did get to see a lot of the Priory grounds, so we had that going for us. 

For the 5th point we actually had to get back into the valley we had climbed out of, walk about 100 meters back past point 4 toward the highway, climb another steep slope, and then wander on a hilltop for a little bit. 

As we became more and more tired and hot, we all went direction blind and got disoriented. None of us had known where to go to begin with, and now our sense of direction had been eroded away as well. So after what seemed like an eternity atop that hill, we finally stumbled on the 5th point* (*not shown, due to our being hot, tired, and forgetful).

Results/Discussion

After a long trek out of the woods we got to go home. As you can see from our track log in the map below, we went all over the place. One thing to note is that the tracking log got turned off for an unknown reason during the activity, so what is represented here is only a portion of the track log points. 
Figure 12. The final Priory map. The green dots represent the tracking log, and the colored dots represent the various flags we had to navigate to. The red square around the outside of the map represents the boundaries of our activity. 


Conclusions:

During this activity we had a great group dynamic, and we worked together well. The points were frustrating to find on more than one occasion, but we never blamed each other; we just kept going. It is good to have a positive attitude when navigating, because many issues do come up, and in order to continue on, one has to keep the end goal in mind. That being said, the error in this project was all human and procedural. The issues we faced, be it not having the correct map, making errors while reading the GPS, or accidentally turning off the data logging, could have been avoided, and will be next time we are in the field navigating. This run was a great example of what can be done and what not to do. In the end we did find all 5 points, though it took us the longest out of all of the groups. But we did not give up, and we persevered in the end.  

Sunday, May 1, 2016

Processing UAS data in Pix4D

This post will be slightly different from the ones you are used to seeing if you are following this blog. Instead of just a technical report, this post is broken into three components: background on Pix4D with some questions to get familiar with the software and its capabilities, followed by a walkthrough of using the software, and finally a few maps that were the end products of using the Pix4D software.

Part 1: Getting familiar with Pix4D: what is required to use the program and how Pix4D processes data.

  1. A drone flies over an area and takes aerial photos of locations.
  2. The Pix4D software matches overlapping pixels in the photos to key points on the ground.
  3. Photogrammetry extracts geometry to calculate the camera position at the moment each photo was taken, and from that Pix4D is able to create 3D maps and images. 
Via the Pix4D software manual, here are a few questions to help us understand the prerequisite material needed to get started with Pix4D. 
  • What is the overlap needed for Pix4D to process imagery?
    1. "In order to automatically get high accuracy results, a high overlap between the images is required. Therefore, the image acquisition plan has to be carefully designed in order to have enough overlap. The image acquisition plan depends on the required GSD [Ground Sampling Distance] by the project specifications and the terrain type / object to be reconstructed" (Index, Step 1). The manual also indicates that "a bad image acquisition plan will lead to inaccurate results or processing failure and will require to acquire images again".
    2. The first question that came to mind was, "what is an image acquisition plan?" The manual answered that question as well.
Fig 1. How To Design an Image Acquisition Plan. 
  • What if the user is flying over sand/snow, or uniform fields?
    • For the general case, the manual recommends an image acquisition plan with at least 75% frontal overlap with respect to the flight direction and at least 60% side overlap between flight tracks, along with a grid pattern that will work for most cases.
Fig 2.
    •  In the case of snow or sand, which have "little visual content due to large uniform areas...a high overlap [of] at least 85% frontal overlap and at least 70% side overlap [is needed]. Set the exposure settings accordingly to get as much contrast as possible in each image". 
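These overlap percentages translate directly into photo spacing once the image's ground footprint is known. A rough sketch of that arithmetic, using hypothetical footprint dimensions rather than values from the manual:

```python
def flight_spacing(footprint_along_m, footprint_across_m,
                   frontal_overlap, side_overlap):
    """Photo interval and flight-line spacing for a given image ground
    footprint (meters) and overlap fractions."""
    photo_interval = footprint_along_m * (1 - frontal_overlap)
    line_spacing = footprint_across_m * (1 - side_overlap)
    return photo_interval, line_spacing

# Hypothetical 100 m x 150 m footprint, general-case overlaps (75% / 60%)
interval, spacing = flight_spacing(100, 150, 0.75, 0.60)
print(round(interval, 1), round(spacing, 1))  # 25.0 60.0

# Uniform snow/sand terrain, higher overlaps (85% / 70%)
interval, spacing = flight_spacing(100, 150, 0.85, 0.70)
print(round(interval, 1), round(spacing, 1))  # 15.0 45.0
```

The higher overlaps required over uniform terrain thus mean noticeably tighter photo spacing and more flight lines over the same area.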
  • What is Rapid Check?
    • Rapid Check is an alternative processing option: instead of doing the full processing, Rapid Check determines whether the images obtained are capable of producing sufficient coverage to process the images.
  • Can Pix4D process multiple flights? What does the pilot need to maintain if so?
    • In order to process multiple flights, a sufficient number of overlapping points is needed. Fig 3 and Fig 4 below show how many points are needed to overlap flights, and what those flight photos may look like, in order to combine multiple flights for processing. 
    Fig 3. The correct conditions under which multiple flights can be processed; only when enough points overlap can the images be combined. 

    Fig 4. An example of overlapping photography from multiple flights.
  • Can Pix4D process oblique images? What type of data do you need if so?
    • Yes, Pix4D can process oblique images. In order to process oblique images, the camera must take multiple pictures of each subject: one from above, and others at a 45 degree angle to the ground or a 90 degree angle to the object (Fig 5).
    Fig 5. How Oblique images are captured.
  • Are GCPs necessary for Pix4D? When are they highly recommended?
    • While Ground Control Points (GCPs) are not necessary for Pix4D, their use is highly recommended to increase the absolute accuracy of the project. The Pix4D manual notes that GCPs increase the absolute accuracy of a project, placing the model at its exact position on the earth, and reduce the shift due to GPS from meters to centimeters. In projects without image geolocation, GCPs are required if there is a need for georeferenced outputs. 
  • What is the quality report?
    • The Quality Report is generated after the points taken by the aerial drone are processed in Pix4D. The Quality Report contains the following sections:
      1. Quality Check: verifies that all or almost all of the images are calibrated in one block.
      2. Preview: for projects using nadir images, verifies that the orthomosaic does not contain holes or distortions and that it is in the correct position and orientation with respect to the Ground Control Points.
      3. Initial Image Positions: verifies that, if the images were geolocated, the image positions figure corresponds to the flight plan.
      4. Computed Image/GCPs/Manual Tie Points Positions: verifies that the computed image geolocation is good and that the GCP error is low.
      5. 3D Points from 2D Keypoint Matches: verifies that enough matching points exist between multiple images.
      6. Geolocation Details: if using GCPs, verifies that all GCPs are taken into account and verified.
      7. Processing Options: verifies that, if using GCPs, the GCP coordinate system is correct, and that the image coordinate system is correct. 
Part 2: Using the Pix4D program to generate orthomosaics. A Review

In order to start a new project in Pix4D, one just needs to go to start a new project. It is important to note that in Pix4D, as in every program with the ability to set the workspace, the workspace must be set correctly so that all of the files generated by the program during its analysis are placed properly. 

Next we want to "Add Images" (Fig 6, below), and copy all of the images from the drone flight into Pix4D. 
Fig 6.  The "Add Images" function in Pix4D.
After adding all the images, a message appears that states "reading EXIF data," which means that Pix4D is reading the metadata that the drone attached to the image files. The drone that was used automatically tags the metadata in the EXIF data as the images are taken, so we do not have to do anything here (Fig 7). 

Fig 7. The metadata of the image files in Pix4D.
Pix4D then displays all of the points; each point represents an image that has been geolocated (Fig 8).

Fig 8.
After uploading the images, it must be decided what kind of project will be created; in this case we will be creating a new 3D map (Fig 9).

Fig 9. The options of projects that one can select in Pix4D.

Now that the data is uploaded and ready to be processed, we need to run the initial processing, or the Rapid Check, to make sure that our data is intact enough to process and turn into a map. First we need to uncheck "Point Cloud and Mesh" and "DSM, Orthomosaic and Index." If we knew that the data we had was good enough to create the second and third outputs, we could leave those checked right away.

Up until this point the program was a breeze to use, and Pix4D remains easy to understand; however, the initial processing required 1 h and 11 min. After the initial processing was completed, the Point Cloud and Mesh step and the DSM, Orthomosaic and Index step also needed to be run, which likewise took a long time to process. 

After the initial processing, a Quality Report was generated. Below are the Summary, Quality Check, and Preview sections of the Quality Report, from which we can see that all 80 images were used in the analysis, with none rejected:
Fig 10. Image taken from the Quality report. 
We can also examine how the various images that were taken by the drone were overlapped and where the overlap was done well (5+ images) and where poor overlap areas exist (1) (Fig 11).

Fig 11. The number of overlapping images in the Quality Report, shown as a color map on a scale of 1 to 5, with 1 being the worst and 5 being the best. 
In order to review the program, Pix4D's functions and features had to be tested. To do that, a few objectives were chosen that would show off the program's capabilities. 
  • Calculate the area of a surface within the Ray Cloud editor. Export the feature for use in your maps
    • Calculating the surface area of an object within the rayCloud editor was fairly straightforward: by going to rayCloud > New Surfaces, all you have to do is click to add the vertices of the object whose surface you are attempting to find. Right-clicking makes an "end vertex," which tells the program to calculate the surface area inside the shape you just created. The measurements of that object and its vertices are displayed in the Properties area on the right-hand side of the screen. It is important to note that the vertices have to be cross-verified in 2 of the images in the "Images" section below the measurements area; without verifying the vertices, the measurements will not display. Exporting the feature was as easy as right-clicking on the object in Pix4D and choosing "Export" (Fig 17).
  • Measure the length of a linear feature in the Ray Cloud. Export the feature for use in your maps.
    • Similarly to the surface area calculator, the line length tool is under rayCloud > New Polyline. This feature is actually easier to use and just requires clicking to create vertices and right-clicking to create an "end vertex," which stops the drawing phase and allows Pix4D to measure the length of the line created (Fig 16, baseball diamond; Fig 18, track field).
  • Calculate the volume of a 3D object. Export the feature for use in your maps
    • Calculating the volume of an object actually was challenging. The process is straightforward once it is figured out, but it took many attempts of trial and error to determine the correct procedure. As with the other rayCloud tools, clicking on the image creates vertices and right-clicking creates an end point, at which time Pix4D stops drawing and attempts to determine the measurements of the shape. What is important to note, and was not found in the directions until after the task was completed, is that the vertices have to be verified similarly to the surface area feature, but the base of the object can exist in multiple planes, and the user must determine which plane that is. The other confusing aspect of this tool is that after the vertices are placed, one must click "Update Measurements" in order to have the object correctly filled in and for Pix4D to determine a volume (Fig 17).
  • Create an animation that 'flys' you through your project. 
    • In the rayCloud editor, select the New Animation Trajectory feature. This feature is really fun to use and lets you create a fly-through of your project in two different ways. The first way is to see the view from the drone that captured the images, which can be done by selecting "Computed Camera Positions and Orientation" in the New Video Animation Trajectory window when it comes up. The second way is to have the software create a video from points in space that the user selects, via the "User Recorded Views" option. Once that is selected, the user clicks "Record View" at whichever point and camera angle they like, and the Animation Trajectory records that perspective; with enough points selected, the Animation Trajectory creates a fly-through using those perspectives (see below). This is really a fantastic feature of the software, and it lets the 3D imagery be seen in a way that flying a drone would not necessarily lend itself to without a practiced hand. 
Fig 12. Fly through animation of the track field.


Fig 13. Fly Through animation of the baseball diamond. 
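The polyline and surface tools described above boil down to simple coordinate geometry: summing segment lengths for a polyline, and the shoelace formula for a polygon's area. A minimal sketch using hypothetical vertex coordinates, not values exported from Pix4D:

```python
import math

def polyline_length(pts):
    """Total length along a list of (x, y) vertices."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(pts)
    twice = sum(pts[i][0] * pts[(i + 1) % n][1]
                - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(twice) / 2

# Hypothetical square "infield" 17.56 m on a side, matching the
# base-to-base polyline measurement discussed later in this post
square = [(0, 0), (17.56, 0), (17.56, 17.56), (0, 17.56)]
print(round(polyline_length(square), 2))  # 52.68 (three open sides)
print(round(polygon_area(square), 2))     # 308.35 square meters
```

Pix4D works in 3D rather than 2D, of course, but the open polyline versus closed surface distinction is the same one the end-vertex right-click controls.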


Part 3: Maps

The first of the 3D maps we will examine is the baseball field (Fig 13-17), mostly because the baseball diamonds have a better 3D component, with structures, lights, bleachers, etc., that the track field does not have. While using the polyline tool in Pix4D, a measurement between the bases was taken. Knowing that a regulation baseball field has 90 feet between bases, a measurement of 17.56 meters (57.6 feet) raised the question of what kind of baseball field this actually was. A quick search indicated that standard Little League fields have about 60 feet between bases (Fig 14). To check this assumption, a second polyline measurement was taken from the pitcher's mound to home plate (Fig 16). The second measurement came back as 13 meters, which is about 42.6 feet. Both of these measurements are very close to the corresponding dimensions of a Little League field.
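The unit conversions behind that comparison are simple to check in a few lines, using the Pix4D polyline measurements quoted above:

```python
FT_PER_M = 3.28084  # feet per meter

def m_to_ft(meters):
    return meters * FT_PER_M

# Pix4D polyline measurements from this project
print(round(m_to_ft(17.56), 1))  # 57.6 ft between bases (spec: 60 ft)
print(round(m_to_ft(13.0), 1))   # 42.7 ft, mound to home plate
```

Both values land within a few feet of the Little League dimensions and well short of the 90-foot regulation base path, which is what settled the question.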
Fig 14. Regulation Size of a standard Little League Field.
After bringing the image of the field into ArcScene and setting the base height of the orthophoto TIFF to the height of the DSM TIFF, the 3D components of the field could be modeled. With a relative scale taken from the polyline measurements, a map can now be made in ArcMap with the picture of the baseball diamond exported from ArcScene (Fig 15 and 16).


Fig 15. In order to make the map 3D in ArcScene, you must add the orthomosaic TIFF file and then set its base height to the DSM TIFF file that also came out of the project. Failing to set the correct base height in ArcScene will result in the map either not becoming 3D or becoming extremely distorted and unrecognizable. 

All of the measurements in Pix4D were accompanied by error estimates, but even so, the measurements were off, though not by a lot. Both of the line measurements for the baseball field were off by about 3 feet, which is not too bad, all things considered, for 3D image software measuring distances from photos taken by an aerial drone. As long as measurement error is consistent and known, it can be accounted for and corrected.

Fig 16. The baseball diamond with a relative scale. The red line is a known distance of 17.56 meters (57.6 feet) based on a polyline measurement in Pix4D; this distance is a relative measurement of the 60 feet between bases on a Little League field. To confirm the size of the field, a second polyline measurement was taken: the black line, measuring 13 meters (42.6 feet), is the distance between the pitcher's mound and home plate. 


The measurements for the surface area and volume are more than likely off by some factor tied to a base distance measurement. Unfortunately, this error propagates into the surface area and volume measurements and causes both to be off by a factor that cannot be accounted for without doing some math. We know that the distance between bases is 18.29 meters (60 feet), so we can use 60 feet squared, or 334.45 square meters (3600 square feet), as a rough estimate of the surface area of the infield. This will be an overestimate, because the infield is not a perfect square, but the estimate still leaves us off by about 1100 square feet. Doing some quick math with the reported measurements from Pix4D, it would appear that the surface area was calculated with side measurements at or near 15.25 meters (50.06 feet) to get the reported 233.68 square meters (2515.31 square feet). If we suppose the polyline could be off by 1 meter (about 3 feet) and correct for that, we would still come up short: a side of 16.25 meters squared gives 264.06 square meters (2842.34 square feet), and we are still off by some unknown amount because of the missing corners of the infield. Using these two values as error bounds, we can say the surface area of the infield lies between 264.06 square meters (2842.34 square feet) at a minimum and 334.45 square meters (3600 square feet) at a maximum. This means there is quite a variation between the reported surface area and the actual surface area; the actual value lies somewhere between those two bounds, probably above the lower and below the upper. 

What this actually comes out to is uncertainty in the measurements and data; so while we can get rough estimates with the method employed here, we cannot do any detailed analysis unless we determine what is causing the error in our measurements. The cause could be Pix4D, the aerial image quality and the drone's ability to capture fine detail, or error by the user.

The volume measurement also has an error, but this is probably because the fine detail on the dugout in Pix4D was lacking and had to be estimated; this resulted in the vertices being placed in incorrect positions and the volume being off. But we really don't have an idea of by how much.
Fig 17. The baseball field again, this time with surface area (green) and volume (blue) measurements. The dugout with the volume measurement may be hard to see; the specified dugout is in the back, at the second of the two fields. The surface area of the infield was measured at 233.68 square meters (2515.31 square feet), and the volume of the dugout at 34.28 cubic meters (1210.59 cubic feet); both measurements were taken in Pix4D. 


The school track did not have as many defining features for 3D mapping, but it underwent the same process as the baseball diamond. Here again the polyline measurement is incorrect when compared to known distances: the distance the green line represents is 100 meters (328.08 feet), but it comes out as 30.12 meters (98.82 feet) in Pix4D. 
Fig 18. Track field, a polyline measurement taken in Pix4D, the green line is 30.12 meters (98.82 feet) long.
Final Critique:

Overall, the Pix4D program is an amazing tool, capable of rendering stunning 3D maps and images with great interactive features. The program is very easy to use and understand as long as the manual is handy. However, it requires a significant time investment to wait for the program to run to completion, and without the manual the program is not very intuitive to a first-time user; on the other hand, with the time it takes to process projects, you will have plenty of time to read the manual and get familiar with the software. The last problem is with the measurements: the fact that all of the measurements are off slightly is troubling, and it results in the program receiving only a passing grade for its ability to answer questions of geospatial analysis. The error is probably fixable; it is just not known at this time where it occurs or how it would be fixed. As with all such programs, it will lie in the software itself, the platform the aerial imagery was taken on, or an issue with the user's ability to use the program.

Overall, Pix4D is great and works well to map and render 3D images, and it has a great many uses and applications that can serve many industries, including construction, mining, urban planning, and many, many more with the right imagination and skill.

Sources:

Little league Baseball Diamond Specifications: http://www.littleleague.org/Assets/forms_pubs/50-70-FieldConversion.pdf
Pix4D website: https://pix4d.com/

Friday, April 22, 2016

Total Station Topographical Field Survey

Introduction:

Previously we conducted a survey using the Distance/Azimuth Survey method, in which we used a laser measure to find the distance of objects relative to one central point on the UWEC campus mall.


During this survey, we are going to build upon the previous distance/azimuth method by using a Topcon Total Station and a Tesla GPS unit. The differences between the two survey methods are as follows: first, with the Topcon Total Station, due north has to be known in order to calibrate the Total Station; second, the topographic survey generated by a total station also assigns a height measurement to each point collected. The height measurement may not seem like a lot, but it gives us another dimension to work with, as we then have measurements on the X, Y, and Z axes. As you may have seen in previous blog posts, the Z measurement gives us the capability to model the surface of any surveyed area in 3D using ArcScene. 


Fig 1. Dr Hupy explaining the setup of the Total Station. Again this week it was raining, and again we used the campus mall as our study area. It is important to note that the pink flag under the total station designated the occupied point, over which the total station was calibrated. The equipment, listed from left to right, is the Topcon survey-grade GPS, the Total Station, and the prism pole.
Study Area:

Similarly to most of our other surveys, the UWEC campus mall, with its diverse topography, served as the study area for this survey. Specifically, we set the total station on the lawn between the Davies Center and Phillips Hall, before the bridge crossing Little Niagara Creek. 

Fig 2. The UWEC Lower Campus, and the location of the campus mall.
The survey area is a small section of lawn bisected by sidewalks that extend through the campus mall and lead up to the entrances of Phillips Hall. The area is on the south side of Little Niagara Creek and is dotted with young trees that have been planted as the campus mall has been rebuilt. The ground inclines up toward the west side of Phillips Hall and declines as you near Little Niagara Creek (Fig 3).

Fig 3. The location of the Total Station Topographical Field Survey is the area of lawn in the lower right-hand corner of the picture, near where the small tree is planted, stretching to Little Niagara Creek. 


Methods:

The Total Station has a few additional setup and technology requirements in order to function. Rather than having a central point that may be slightly variable (due to human/procedural error), the total station has to be set over a known benchmark, known as an occupied point. In addition to this occupied point, 2 or 3 backsights are required; these are also known points and are used to calibrate the total station. The backsights are calibrated using a secondary unit known as a prism pole, which reflects a laser shot by the total station; using triangulation with the prism pole, the station measures the height of the object being measured. It is VERY important to note that at no time can the Total Station be moved, adjusted, or bumped, and that the prism pole's height must not be changed without recording the adjustment. Failing to note an adjustment, or disturbing the Total Station, will result in inaccuracy and error that will propagate as more points are surveyed.

Here is another resource from the UW-Madison as to how to set up a total station, and what individual components make up the whole unit. 


For written step-by-step directions to set up the Total Station, please see Appendix A at the bottom of this post.  

After the total station is set up, three different measurements are taken every time a point is surveyed. 

  1. Angle measurement: taken in arc-seconds (theta, Fig 3)
  2. Distance measurement: via the reflection off the prism pole (D, Fig 3)
  3. Coordinate measurement: the coordinates of any point are determined using the relative position given by the distance measurement with respect to the known point the total station occupies. Height measurements are then calculated using trigonometry and triangulation from the angle of the total station to the reflection off the prism pole.  


How these three measurements come together is pictured below (Fig 3). 


Fig 3. A representative picture of how total stations capture two of the three measurements: angle (theta) and distance (D). Not shown specifically is the coordinate measurement, which is captured by linking the total station to the survey-grade GPS unit, at which point the coordinates of surveyed points are estimated using the distance measurement relative to the known location of the total station. The level rod shown is also not used in our analysis; instead we used the prism rod, which reflects the total station's beam back to the total station for digital measurement and calculation.
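The trigonometry those measurements imply can be sketched as follows. This is a simplified illustration, not the Topcon unit's actual internal computation, and the shot values are hypothetical:

```python
import math

def shot_to_point(occ, azimuth_deg, zenith_deg, slope_dist,
                  instrument_h, prism_h):
    """Turn one total-station shot into point coordinates.
    occ = (easting, northing, elevation) of the occupied point;
    zenith is measured from straight up, so 90 degrees is level."""
    az = math.radians(azimuth_deg)
    zen = math.radians(zenith_deg)
    horiz = slope_dist * math.sin(zen)   # horizontal distance component
    rise = slope_dist * math.cos(zen)    # vertical component of the shot
    east = occ[0] + horiz * math.sin(az)
    north = occ[1] + horiz * math.cos(az)
    elev = occ[2] + instrument_h + rise - prism_h
    return east, north, elev

# Hypothetical shot: 50 m level shot (90 deg zenith) due north,
# 1.5 m instrument height, 2.0 m prism height
e, n, z = shot_to_point((617000, 4962000, 240.0), 0, 90, 50, 1.5, 2.0)
print(round(e, 1), round(n, 1), round(z, 1))  # 617000.0 4962050.0 239.5
```

This also shows why recording any prism pole height change matters: the prism height enters the elevation of every point directly, so an unrecorded change shifts every subsequent Z value by the same amount.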


With the knowledge of how the total station is set up and how the unit captures data points, one is now ready to proceed with a survey. In this case we will be doing a topographic survey, so data points of the local landscape will be captured to record the numerical attributes of various points in the area. 

Fig 4. The total station set up and working, ready to capture survey points of the landscape. 

Fig 5. The view of the total station from the perspective of the prism rod.

Fig 6. A picture of the total station capturing a point via the prism rod. 
When using the Total Station, all you have to do is look through the eyepiece and line the crosshairs up on the prism; once the crosshairs are set on the prism, you hit the capture button on the GPS unit to record the point. As the prism is moved from location to location, this process is repeated until the desired number of points has been surveyed.

Results/discussion:

For our survey we collected 97 points on the UWEC campus mall; with the known control point and the two backsights included, a total of 100 points were collected (Fig 7).

Fig 7. The attribute table of all 100 survey points opened in ArcMap, with X, Y, and Z data recorded.
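Before a point table like the one in Fig 7 can be mapped, every record needs numeric X, Y, and Z values. A quick parse outside of ArcMap can confirm that; the file contents below are hypothetical, not our actual survey data:

```python
import csv
import io

# Hypothetical export of the point table: comma-separated ID, X, Y, Z rows
raw = """point,x,y,z
OCC,621550.0,4957800.0,240.1
BS1,621560.0,4957810.0,240.3
1,621555.2,4957805.7,239.8
"""

points = []
for row in csv.DictReader(io.StringIO(raw)):
    # float() raises immediately if any coordinate field is non-numeric,
    # so a clean pass means every record can be read by GIS software
    points.append((row["point"], float(row["x"]), float(row["y"]), float(row["z"])))

print(len(points))  # number of records that parsed cleanly
```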
The map below (Fig 8) reiterates the specific survey location, showing both banks of Little Niagara Creek as well as the various points on the campus mall surrounding Phillips Hall. The known control point is shown in yellow and labeled "OCC"; this same point is depicted in Figure 1 as the pink flag under the Total Station. 

Fig 8. The survey points of the topographical survey are depicted by the purple circles, while the known control point and the two backsights are depicted by circles with different symbology. The Total Station was set up over the point OCC, with the survey-grade GPS directly behind the site.
After the data was validated by loading all of the survey points into ArcMap and confirming that the points could be read by GIS software, the data was brought into ArcScene for 3D rendering. To make the height differences between the data points more apparent, a floating surface was created for the map's base height and a vertical exaggeration factor of 4 was applied. Without these manipulations the 3D image would have shown very little variation in height and would have looked remarkably flat, because the data points are so close together in elevation. In Fig 9, below, green represents the lowest portion of the data, captured closest to Little Niagara Creek, while the white portions of the map are the highest points captured; on the left-hand side these points are in a garden near Phillips Hall.
Fig 9.  The 3D rendering of all 100 data points. The lowest portions of the image (green) are areas that were recorded near Little Niagara Creek, while the white areas represent the highest points captured, which would have been near Phillips Hall (left side) and on the far side of the Little Niagara Creek bank (right side).
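The effect of the vertical exaggeration can be sketched numerically. Assuming a hypothetical set of elevations, multiplying each point's height above the base by the factor of 4 stretches a subtle range into a visible one:

```python
# Hypothetical elevations (meters) for a handful of survey points
elevations = [239.8, 240.1, 240.3, 240.6, 241.0]

base = min(elevations)   # the floating surface's base height
factor = 4               # exaggeration factor used in ArcScene

# Scale each point's height above the base by the exaggeration factor
exaggerated = [base + (z - base) * factor for z in elevations]

raw_range = max(elevations) - min(elevations)            # ~1.2 m: nearly flat
exaggerated_range = max(exaggerated) - min(exaggerated)  # ~4.8 m: visible relief
```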



Fig 10.  Another 3D rendering of the data points; the 3 green pins in this image represent the known control point (standing) and the 2 backsights (flat). 

While the data collected with the total station was precise, some of it was inaccurately represented on the map (Fig 8), which brings us back to the idea of accuracy versus precision and what those terms truly mean (Fig 11). The points recorded using the Total Station were high in precision but low in accuracy, at least when overlaid on a base map of campus. In other words, each point was placed correctly relative to the others, yet some points were inaccurate in absolute terms, appearing inside Little Niagara Creek and inside Phillips Hall.

Fig. 11 Accuracy vs. Precision. 
These errors may potentially be attributed to a few things:

  1. Human error, either in the setup, in the import of the data, or in the calibration of the units.
  2. Error in the base map, which would shift all points by some measure across the board.
  3. Actual error in the tools. 
More than likely, the errors are not coming from the GPS, since we are dealing with a survey-grade unit that can be accurate down to the sub-meter level; and because the GPS is connected to the total station, which uses quite a bit of technology to accurately measure distance, the error probably does not lie in the equipment either.

Because the points that fall inside Phillips Hall are just barely inside the edge of where the building is drawn, the cause may be the scale at which the data points are represented, combined with the data being projected or converted and/or an inaccurate base map. It would seem, then, that these errors are procedural or a result of human error rather than error in the equipment. 


Conclusion:

While the Total Station survey technique may seem straightforward, it is really the culmination of multiple survey techniques that we have been building on over the course of the semester. From the Sandbox Survey we have taken the aspect of three-dimensional measurements (X, Y, and Z). From the second survey, Data Interpolation in ArcScene, we included proper survey procedure along with data normalization, data interpolation, dimensional analysis, and ArcScene itself. From the Distance/Azimuth Survey we have the core idea of all of the survey points being taken in reference to one standard base point. And finally we have the accuracy of the survey-grade GPS from the Topographical Survey. While the total station adds a new element of technology and a more systematic way of referencing survey points than we had previously explored in the Distance/Azimuth Survey, the methodology of the topographical survey contains elements we have seen before. What the Total Station Topographical Field Survey adds is the combination with the survey-grade GPS to enhance the accuracy and precision of the survey points in relation to the known control point, and the ability to capture many survey points in three dimensions.

This is why it is not surprising that this survey technique is used in industry where accuracy and precision are required. Another way to increase data normalization, something we have also worked with before, would be to capture attribute data in addition to X, Y, and Z data through the use of domains, gathering more information about each survey point while constraining data entry to relevant information.

Practical applications of adding attribute information via domains would include surveying points for road construction that have objects or obstructions which would need to be removed or built around in order to complete the project.
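A minimal sketch of how a coded-value domain constrains attribute entry, using a hypothetical "obstruction" attribute for the road-construction example (this is illustrative Python, not the geodatabase itself):

```python
# Hypothetical coded-value domain for an "obstruction" attribute
OBSTRUCTION_DOMAIN = {"none", "tree", "boulder", "utility_pole", "structure"}

def record_point(x, y, z, obstruction):
    """Accept a survey point only if its attribute value falls inside the
    domain, mirroring the constraint a geodatabase domain enforces."""
    if obstruction not in OBSTRUCTION_DOMAIN:
        raise ValueError(f"'{obstruction}' is not in the obstruction domain")
    return {"x": x, "y": y, "z": z, "obstruction": obstruction}

# A valid entry passes through; anything outside the domain is rejected
pt = record_point(621555.2, 4957805.7, 239.8, "boulder")
```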


Appendix A:
Steps for collecting data with the TSS
  1. Set up Survey
    1. Outside, set pin flags where you will set up the total station (the occupied point and the BS point).
  2. Set up Magnet Job
    1. Set up a job within Magnet using the RTK option. This is the same procedure as setting up the job for surveying with RTK GPS.
    2. Gather your backsight(s). Name them BS1, BS2, etc.
    3. Gather your OCC1. (This is your occupied point, where the Total Station will be set up.)
  3. Set up the TSS
    1. Tripod Setup
      • Wipe off the tripod head to ensure that the surface is clean and free of dirt
      • Extend all three legs equally prior to spreading the legs; secure the locking mechanism
      • Spread the legs sufficiently to ensure a stable base for the tripod
      • Center the tripod head over the point while maintaining a fairly level tripod head
      • Check centering by dropping a pebble from the center of the tripod head (within 2" of the point)
      • Step down firmly on the footpads to set the legs
    2.  Instrument Setup
      • Secure the instrument to the tripod and center it over the tripod head
      • Bring all leveling screws to a neutral position, just below the line on the leveling screw post
      • While looking through the optical plummet (if needed, adjust the o.p. for parallax and focus on the ground), or with the laser plummet on, position the instrument directly over the point using the leveling screws only
      • Observe which two legs need to be adjusted to bring the bullseye bubble into the middle
      • Be careful not to move the third leg
      • Release the horizontal tangent lock and rotate the instrument until the tubular level vial is parallel (in line) to 2 of the leveling screws (this is position 1)
      • Rotate both leveling screws equal amounts in opposing directions until the tubular level vial is centered
      • Rotate the instrument until the tubular level vial is perpendicular to position 1
      • Rotate the leveling screws equally until the tubular level vial is centered
      • Re-observe the point with the o.p. or laser plummet and adjust the instrument over the point by loosening the center screw, shifting the instrument over the point, and re-tightening the screw
      • Re-check the fine (tubular) bubble vial in positions 1 and 2 and adjust as needed
  4. Set up Blue Tooth
    1. Turn on the total station.
    2. Turn on the station Bluetooth. This is done within the menu area, and within the parameters portion.
    3. Make sure your Bluetooth is on for the Tesla to recognize it.
    4. Disconnect from the Hiper in your job, and now connect to the Total Station.
  5. To begin the OCC/BS setup
    1. On the Home Screen for Magnet, select the Setup icon
    2. Click on the backsight icon
    3. Enter in all needed information for the TS and for the Prism rod.
    4. Place your prism rod over the backsight point and gather the backsight. This is needed to zero out the total station for north.
  6. Collect GPS points with the Tesla in Magnet, using the Total Station and prism*
    1. From the Magnet Home screen, open the Survey icon.
    2. Begin your toposurvey, but now use the prism and total station to do the survey.