- Visual Contrast
- Legibility
- Figure-ground organization
- Hierarchical organization
- Balance
Sources:
The final module of Special Topics in GIS focused on scale / resolution and data aggregation. The first portion of the lab explored two vector-based datasets [consisting of points, lines, and polygons] and how they were affected as the scale of the datasets was changed. As the scale of a dataset becomes coarser [i.e., as the denominator of the scale ratio grows], sample points are eliminated because the space between them is reduced. Lines and polygons therefore retain fewer vertices, resulting in geometries that are over-generalized or eliminated from the map entirely. This is displayed on the map below, with the left-side map showing delineated watersheds and the right-side map highlighting water bodies of the same swath of land located in North Carolina. The darkest shade of blue is the original dataset, the medium shade represents the same dataset at a 1:24,000 scale, and the light blue represents the same dataset at a 1:100,000 scale. It is evident that as the scale becomes coarser, the number of line features decreases, eliminating line and polygon features from the dataset; hence, more line and polygonal features are captured by the original and higher-resolution [1:24,000] datasets. Due to this generalization effect, careful consideration of the appropriate scale must take place to ensure an accurate representation of the data.
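The vertex-elimination effect described above is what line-simplification algorithms such as Douglas-Peucker produce: vertices that fall within a distance tolerance of the chord between retained endpoints are dropped, and the tolerance grows as the target scale becomes coarser. As a minimal sketch [not the specific generalization routine used to derive the lab's datasets], the function and sample coordinates below are illustrative:

```python
import math

def _perp_dist(pt, a, b):
    # Perpendicular distance from pt to the line through a and b.
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, tolerance):
    """Douglas-Peucker: drop vertices closer than `tolerance`
    to the chord between the retained endpoints."""
    if len(points) < 3:
        return points
    # Find the vertex farthest from the chord.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    # If every intermediate vertex is within tolerance, keep only the ends.
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Otherwise recurse on each half, splitting at the farthest vertex.
    left = simplify(points[:idx + 1], tolerance)
    right = simplify(points[idx:], tolerance)
    return left[:-1] + right

# A hypothetical stream centerline: a coarse tolerance removes vertices.
line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6),
        (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(len(line), "->", len(simplify(line, 1.0)))
```

Increasing the tolerance stands in for moving from 1:24,000 toward 1:100,000: the coarser the representation, the fewer vertices survive.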
Module 2.2 of Special Topics in GIS was an exploration of surface interpolation and some of the different methods that can be employed to produce a dataset of estimated values between known [sample or testing] points. Bolstad & Manson [2022] define surface interpolation as a 'prediction of variables at unmeasured locations, based on a sampling of the same variables at known locations' [p. 510]. This was accomplished by using four different techniques and comparing the results to discern which model most accurately portrayed the data.
The first exercise was a comparison of Digital Elevation Models produced by Inverse Distance Weighted [IDW] and Spline interpolation. The IDW model estimates unknown values in inverse proportion to the distance from known values at sample, or testing, points. Essentially, the greater the distance from a sample point, the less influence that point has in determining a cell's estimated value. The Spline model employs mathematical functions, or polynomials, to form a smooth curved surface between the known sample points [Bolstad & Manson, 2022].
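The inverse-distance weighting just described can be written directly: each known sample contributes its value weighted by 1 / distance^power, so nearby samples dominate the estimate. A minimal sketch [the sample coordinates and values are hypothetical, and real IDW tools add search radii and barriers]:

```python
import math

def idw(samples, x, y, power=2):
    """Estimate a value at (x, y) from known (sx, sy, value) samples,
    weighting each sample by 1 / distance**power."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return v  # exactly on a sample point: return its value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical elevation samples (x, y, elevation).
samples = [(0, 0, 100.0), (10, 0, 50.0), (0, 10, 80.0)]
# A location near (0, 0) is pulled strongly toward that sample's value.
print(idw(samples, 1, 1))
```

Raising `power` sharpens the influence of the nearest samples, which is exactly why IDW surfaces tend to show peaks and pits [the 'hot spots' noted later] at the sample points themselves.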
After these models were constructed, the Raster Calculator geoprocessing tool was run to determine the mathematical difference between the two; the results are shown in the map below.
The interpolation method I would choose to best represent BOD concentrations of Tampa Bay would be the Spline technique, specifically the Regularized Spline type. The reasoning behind this decision is the nature of any substance being diluted in water. Regardless of the substance, once introduced into a body of water, it will disperse evenly and continuously throughout the adjacent areas. The IDW method tends to create 'hot spots' with peaks occurring at the testing points, while the Tension Spline type will create a continuous surface, but not a smooth one. Finally, Nearest Neighbor will provide an estimated value of BOD concentrations, but these are generalized over discrete regions of the bay and will not provide a smooth, continuous interpolated model. However, the Regularized Spline type 'creates a smooth, gradually changing surface with values that may lie outside the sample data range' [ESRI, 2024]. This, in my opinion, would provide a much more accurate estimate of BOD concentrations in Tampa Bay than the other three methods discussed in this assignment.
Sources:
Bolstad, Paul & Manson, Steven. (2022). GIS Fundamentals: A First Text on Geographic Information Systems (7th Edition). Eider Press.
Environmental Systems Research Institute. (2024). How Spline Works. https://pro.arcgis.com/en/pro-app/latest/tool-reference/3d-analyst/how-spline-works.htm
Module 2.1 of Special Topics in GIS was based on surfaces, particularly Triangulated Irregular Networks [TINs] and Digital Elevation Models [DEMs]. The first portion of the lab was an opportunity to import elevation data, set the ground source [giving it 3D visualization], and learn how to exaggerate the vertical distances to enhance the visual aesthetics of the landscape. Once these fundamental concepts were practiced, an analytical problem was presented.
The second portion of the lab was to create a Suitability Map for a study area that illustrates the best locations for a ski resort and its associated ski run. The suitability was determined based on slope, elevation, and aspect [directional face] of the landscape. The dark green areas of the map below display the most suitable locations of the resort, and the red areas signify areas that are unsuitable for this tourist destination.
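A suitability analysis like this typically reclassifies each criterion [slope, elevation, aspect] onto a common score scale and sums the scores per cell. The sketch below is illustrative only: the thresholds, weights, and score values are assumptions for demonstration, not the lab's actual reclassification tables.

```python
def suitability(slope_deg, elev_m, aspect_deg):
    """Score one cell by summing three reclassified criteria.
    Thresholds and weights are illustrative, not the lab's values."""
    # Gentler slopes suit the resort base; moderate slopes suit runs.
    if slope_deg < 10:
        s = 3
    elif slope_deg < 25:
        s = 2
    elif slope_deg < 40:
        s = 1
    else:
        s = 0
    # Higher elevations hold snow longer into the season.
    if elev_m > 2500:
        e = 3
    elif elev_m > 2000:
        e = 2
    elif elev_m > 1500:
        e = 1
    else:
        e = 0
    # North-facing aspects [315-45 degrees] keep snow out of direct sun.
    a = 3 if (aspect_deg >= 315 or aspect_deg <= 45) else 1
    return s + e + a

print(suitability(20, 2600, 10))   # moderate slope, high, north-facing
print(suitability(45, 1200, 180))  # steep, low, south-facing
```

Cells scoring near the maximum would map to the dark green [most suitable] class, and cells near zero to the red [unsuitable] class.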
Module 1.3 of Special Topics in GIS was a continuation of data quality; this module focused on the completeness of datasets, particularly roadway networks. Two datasets were provided for the completeness assessment; one was obtained from Jackson County, Oregon, and the other was downloaded from the United States Census Bureau TIGER shapefile repository. While both datasets contained roadway centerlines, their overall distances were significantly different. The spatial analysis performed on these datasets was to ascertain which one was more complete, based on length alone. Initially, before any processing was performed, the TIGER shapefile consisted of 11,382.7 kilometers of roadway centerlines while the Jackson County dataset accounted for 10,873.3 kilometers, making the TIGER dataset more complete by that measure.
The next process of this lab was to analyze completeness according to Haklay [2010]. Essentially, this method consists of overlaying a grid index on top of the datasets and creating a thematic map according to their percentage differences. For this lab, the grid consisted of 5-kilometer squares that were set within the confines of the county border. Next, all roadways that lay outside of the grid index were clipped; this deleted any extra roadways outside the confines of the grid. After this, the roadways had to be split at the boundary of each grid cell, and then the individual roadway sections within each cell had to be dissolved into one multi-part feature. Once these processes were completed for each dataset, a comparison between the two could be made on a cell-by-cell basis [see map below].
[[Jackson County Length - TIGER Length] / Jackson County Length] * 100
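The formula above can be applied to each grid cell's dissolved length totals. A minimal sketch [the cell IDs and kilometer totals below are hypothetical, standing in for the per-cell sums produced by the clip/split/dissolve workflow]:

```python
def pct_difference(county_km, tiger_km):
    """[(Jackson County Length - TIGER Length) / Jackson County Length] * 100.
    Positive -> the county dataset is longer in that cell;
    negative -> the TIGER dataset is longer."""
    return (county_km - tiger_km) / county_km * 100

# Hypothetical per-cell centerline totals, in kilometers.
cells = {"A1": (12.4, 11.9), "A2": (8.0, 9.2), "B1": (15.0, 15.0)}
for cell, (county, tiger) in cells.items():
    print(cell, round(pct_difference(county, tiger), 1))
```

The signed percentage is what drives the thematic map's diverging color classes: cells where one dataset is more complete lean one direction, and cells where the datasets agree sit near zero.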
Haklay, M. (2010). How Good is Volunteered Geographic Information? A Comparative Study of OpenStreetMap and Ordnance Survey Datasets. Environment and Planning B: Planning and Design, 37(4), 682-703.
Lab assignment 1.2 of Special Topics in GIS involved performing an accuracy assessment according to the National Standard for Spatial Data Accuracy. The Positional Accuracy Handbook states, 'the National Standard for Spatial Data Accuracy describes a way to measure and report positional accuracy of features found within a geographic dataset. Approved in 1998, the NSSDA recognizes the growing need for digital spatial data and provides a common language for reporting accuracy' [Planning, 1999]. For this assignment, two datasets were provided for a study area located in the City of Albuquerque, New Mexico. The first dataset was obtained from the City of Albuquerque, and the second was a StreetMap USA dataset, which is a product of TeleAtlas and is distributed by ESRI with the ArcGIS software package. Both datasets consist of roadway networks and can be seen in the map below. The green lines represent the City of Albuquerque [ABQ] dataset, and the red lines represent the StreetMap USA dataset.
ABQ Dataset:
Using the National Standard for Spatial Data Accuracy, the data set tested 14.27ft horizontal accuracy at 95% confidence level.
StreetMap Dataset:
Using the National Standard for Spatial Data Accuracy, the data set tested 379.66ft horizontal accuracy at 95% confidence level.
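The NSSDA statistic behind both numbers is computed the same way: the radial root-mean-square error between tested and reference coordinates, multiplied by 1.7308 to reach the 95% confidence level [assuming equal error in x and y, per the NSSDA]. A minimal sketch with hypothetical coordinates, not the lab's actual test points:

```python
import math

def nssda_horizontal(test_pts, ref_pts):
    """NSSDA horizontal accuracy at 95% confidence:
    1.7308 * RMSE_r, where RMSE_r is the radial root-mean-square
    error between tested and reference coordinate pairs."""
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(test_pts, ref_pts)]
    rmse_r = math.sqrt(sum(sq) / len(sq))
    return 1.7308 * rmse_r

# Hypothetical tested vs. independent reference coordinates (feet).
tested    = [(100.0, 200.0), (305.0, 410.0), (498.0, 602.0)]
reference = [(102.0, 203.0), (300.0, 405.0), (500.0, 600.0)]
print(round(nssda_horizontal(tested, reference), 2))
```

The NSSDA recommends at least 20 well-distributed test points; three are used here only to keep the example short.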
Example of detailed positional accuracy statements as reported in metadata:
Digitized features of the roadway infrastructure located within the study area of Albuquerque, New Mexico were obtained from the City of Albuquerque and from StreetMap USA, a product of TeleAtlas and distributed by ESRI with ArcGIS. Those obtained from the City of Albuquerque tested at 14.27ft horizontal accuracy at the 95% confidence level, and those obtained from StreetMap USA tested at 379.66ft horizontal accuracy at the 95% confidence level using modified NSSDA testing procedures. See Section 5 for entity information of digitized feature groups. See also Lineage portion of Section 2 for additional background. For a complete report of the testing procedures used, contact the University of West Florida GIS Department as noted in Section 6, Distribution Information.
Levels of vertical relief were not considered throughout the entire accuracy assessment of these two datasets.
Source:
Minnesota Planning. (1999). Positional Accuracy Handbook: Using the National Standard for Spatial Data Accuracy to Measure and Report Geographic Data Quality. Minnesota Planning, St. Paul, MN.
Module 1 of Special Topics in GIS dealt with the precision and accuracy of waypoints gathered from a GPS data collection unit. The International Organization for Standardization defines precision as 'the closeness of agreement between independent test results obtained under stipulated conditions' [ISO, 2006]. With regard to the lab assignment, precision meant determining the proximity of fifty waypoints gathered from a single location using a Garmin GPSMAP 76 data collection unit. As shown in the map below, many of the waypoints are in close proximity while others deviate from the majority. For this part of the lab, the mathematical mean was calculated for the x-, y-, and z-location of all fifty waypoints; this 'average' location is denoted on the map as a red 'X'. Once this average location was calculated, an analysis could be performed on the distance between each waypoint and the calculated average location. This precision analysis concludes that 50% of all gathered waypoints fall within 3.1 meters of the average location, 68% fall within 4.5 meters, and 95% fall within 14.8 meters of the calculated average location. Whether these precision results would suffice varies widely between applications. These percentile distances may be acceptable and appropriate for one scenario and widely unacceptable in another; precision, therefore, is relative and must be evaluated in the context of each project.
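The percentile analysis above can be reproduced directly: compute the mean location, measure each waypoint's distance to it, sort, and read off the 50th/68th/95th percentile distances. A minimal horizontal-only sketch [the ten waypoints are hypothetical, and a nearest-rank percentile is assumed rather than the exact method used in the lab]:

```python
import math

def precision_percentiles(waypoints):
    """Distances from each waypoint to the mean location, then the
    50th/68th/95th percentile distances (nearest-rank method)."""
    mx = sum(x for x, y in waypoints) / len(waypoints)
    my = sum(y for x, y in waypoints) / len(waypoints)
    d = sorted(math.hypot(x - mx, y - my) for x, y in waypoints)
    def pct(p):
        # Nearest-rank percentile: smallest distance covering p% of points.
        return d[min(len(d) - 1, math.ceil(p / 100 * len(d)) - 1)]
    return pct(50), pct(68), pct(95)

# Hypothetical waypoint scatter (meters) around a single occupied point.
pts = [(0, 0), (1, 1), (-1, 2), (2, -1), (0.5, 0.5),
       (3, 3), (-2, -2), (1, -1), (-0.5, 1.5), (4, 0)]
p50, p68, p95 = precision_percentiles(pts)
print(round(p50, 2), round(p68, 2), round(p95, 2))
```

The three radii correspond to the 50%, 68%, and 95% circles typically drawn around the mean location on a precision map.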
Module One of GIS 6005 - Communicating GIS revolved around cartographic design principles and typographical principles that should be followed...