Conclusion

Through practical experience with the software and resources associated with remote sensing, I have been able to develop my knowledge and understanding of the topic. I have found that the combination of practical work and reflective blogging has enabled me to learn independently and develop my skills, as well as to find research that applies remote sensing to topics I am interested in.

I will apply these skills to future projects, for example my dissertation. I am interested in tourism, and how tourism changes the environment, and remote sensing could potentially be a useful tool for investigating this. An example of the use of remote sensing can be seen in the images below (NASA, 2014).


Image of Orlando, Florida (1972) (Sourced from NASA, 2014)


Image of Orlando, Florida (2014) – showing the impact of the growth of tourism in the area (Sourced from NASA, 2014)

Remote sensing will continue to improve and develop as time and technology progress. It will be interesting to learn about upcoming and future plans for remote sensing, and to see how these ideas unfold.

I have found the applications of remote sensing to be most interesting, particularly how it can be adapted to so many uses, which indicates its long-term relevance in geography. This is especially evident in the need for management of, and solutions to, future issues such as climate change, which can be researched through remote sensing.

References:

NASA (2014) Orlando, Florida: Four Decades of Development / Landsat Image Gallery. Available at: http://landsat.visibleearth.nasa.gov/view.php?id=86276


Remote Sensing Application: Monitoring Diseases

Satellite images have been used in the study of health by monitoring areas containing habitats and vectors that risk transmitting disease (Beck et al, 2000). Examples include: the use of a Landsat TM image to assess the spatial pattern of Lyme disease transmission in New York (Beck et al, 2000); temporal patterns of cholera in the Bay of Bengal (Beck et al, 2000); malaria (Hay et al, 2002); and the monitoring of known disease-carrying vectors such as mosquitoes (Kalluri et al, 2007).

Spectral Signatures:

One of the simpler approaches to monitoring disease and transmission is to observe the spectral signatures of land cover types and their correlation with species abundance (Kalluri et al, 2007). However, techniques have developed beyond this as remote sensing has progressed (Kalluri et al, 2007) (see blog post 1). The differences in spectral signatures, and the information we can gain from them, were discussed in blog post 3 (spectral signatures). I found it interesting that this simple technique can be used in a variety of applications, such as disease mapping.

Other methods of monitoring:

For remote sensing to be useful in health applications, satellites need to monitor the factors involved in disease transmission. These can include surface waters, atmospheric conditions, species abundance, land use and cover type, and climate. The factor being measured determines the sensor used, with certain characteristics of a sensor being more important than others for each factor. For example, diseases that depend on surface waters and atmospheric moisture for transmission are best recorded by a satellite with high temporal resolution (Kalluri et al, 2007), as these factors are highly dynamic. Another example is using NDVI values to monitor vector ecosystems (Kalluri et al, 2007). NDVI values are derived from the reflectance of vegetation in the red and NIR wavelengths, and can provide information on species and vegetation health.
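
As a concrete illustration, NDVI is simple to compute; below is a minimal sketch in Python, assuming two co-registered reflectance arrays for the red and NIR bands (the values are invented for illustration):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index from red and NIR reflectance."""
    red = red.astype(float)
    nir = nir.astype(float)
    # Avoid division by zero where both bands are zero (e.g. nodata pixels).
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Hypothetical 2x2 reflectance values: healthy vegetation reflects strongly in NIR.
red = np.array([[0.05, 0.04], [0.20, 0.30]])
nir = np.array([[0.50, 0.45], [0.25, 0.30]])
print(ndvi(red, nir))  # values near +0.8 suggest dense, healthy vegetation
```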

The study assessing spatial patterns of Lyme disease in New York also used remote sensing to monitor vegetation as part of the assessment (Beck et al, 2000). This research used a Landsat TM image to compare the ecological characteristics of the vectors' habitat with the patterns of mammals that can spread the disease, such as dogs and deer (Beck et al, 2000). This study emphasises how remote sensing data can be combined with wider data sets to provide useful insights and inferences.

From researching practical uses of remote sensing, I have strengthened my understanding of the topic and broadened my ideas of how remote sensing can be used. I have found it particularly interesting how the data can be combined to find relationships between variables and to provide wider conclusions beyond the image the sensor has captured. Again, it has been interesting to see how the differing characteristics of satellites can be beneficial for certain uses, but not for others.

References:

Beck, L.R., Lobitz, B.M. and Wood, B.L. (2000). Remote sensing and human health: new sensors and new opportunities. Emerging Infectious Diseases, 6, 217–226.

Hay, S.I., Cox, J., Rogers, D.J., Randolph, S.E., Stern, D.I., Shanks, G.D., Myers, M.F., and Snow, R.W. (2002) ‘Climate change and the resurgence of malaria in the East African highlands’, Nature, Vol: 415, pp. 905-909.

Kalluri, S., Gilruth, P., Rogers, D., and Szczur, M. (2007) ‘Surveillance of Arthropod Vector-Borne Infectious Diseases Using Remote Sensing Techniques: A Review’, PLoS Pathogens. Available at: http://journals.plos.org/plospathogens/article?id=10.1371%2Fjournal.ppat.0030116

Applications of Remote Sensing: Natural Disasters

After gaining an understanding of how remote sensing works and can be adapted for its uses, I wanted to learn about how it is applied to real-life situations; for example, the management and prediction of natural disasters. As stated in the first post, satellite monitoring has been continuously improving since it was first established, with technological progress allowing for more specialised adaptations and better imaging. This progress has resulted in an increased use of remote sensing for natural hazards over the last two decades (Tralli et al, 2005).

The most common use of remote sensing in disaster management is weather monitoring, to predict and assess cyclones and storms (Joyce et al, 2009). An example of this can be seen in the feature image of this post (Natural Resources Canada, 2015). However, it can also be used for damage evaluation and prediction of volcanoes, landslides, earthquakes, floods and wildfires (Tralli et al, 2005; Gillespie et al, 2007; Joyce et al, 2009).

The sensor used for monitoring depends on the hazard, and on the features that are most important for that hazard. The ability of a sensor to measure a hazard depends on having appropriate resolutions, or on the ability to improve the image through classification (Joyce et al, 2009).

Spatial Resolution:

For mapping volcanic debris, a higher spatial resolution sensor such as Landsat or SPOT is required, with a low spatial resolution sensor such as AVHRR being inadequate (Joyce et al, 2009).

High spatial resolution is gained by imaging a smaller area in greater detail. As volcanic debris rarely covers an extensive area, the reduced coverage is not a major limitation.

Similarly, to monitor earthquake damage, a high spatial resolution of around 0.6m to 1m is best (Gillespie et al, 2007). LiDAR is suggested to be the most appropriate sensor, as its high resolution can reveal previously undetected faults (Joyce et al, 2009).

High resolution images are also used in flood events, for example after the 2004 Indian Ocean tsunami (Gillespie et al, 2007). Examples of this for Lhoknga, Indonesia and Sri Lanka can be seen in the images below (NASA, 2007).

The main issue with the use of high spatial resolution satellites is the trade-off with temporal resolution.

Temporal Resolution:

High temporal resolution is just as important as spatial resolution, particularly for rapid emergency response; if the image is captured too late, the main damage or impact could be missed (Gillespie et al, 2007; Joyce et al, 2009). Another issue that emphasises the importance of appropriate temporal resolution is atmospheric interference, particularly cloud cover. This can be an especial problem when observing flood events (Joyce et al, 2009). To mitigate the problem of cloud cover over a flood, a high temporal resolution satellite such as AVHRR can be used, despite it producing a lower spatial resolution image (Joyce et al, 2009). For flood events, there is therefore a need to compromise, prioritising temporal resolution over spatial resolution. Another way to work around cloud cover could be to apply isodata classification to Landsat ETM+ imagery (Joyce et al, 2009) to provide a useful image.

References:

Gillespie, T.W., Chu, J., Frankenberg, E., and Thomas, D. (2007) ‘Assessment and prediction of natural hazards from satellite imagery’, Progress in Physical Geography, Vol: 31(5), pp. 459-470.

Joyce, K.E., Belliss, S.E., Samsonov, S.V., McNeill, S.J., and Glassey, P.J. (2009) ‘A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters’, Progress in Physical Geography, Vol: 33(2), pp. 1-25.

NASA (2007) Earthquake Satellite Imagery. Available at: http://www.nasa.gov/vision/earth/lookingatearth/indonesia_quake.html

Tralli, D.M., Blom, R.G., Zlotnicki, V., Donnellan, A., and Evans, D.L. (2005) ‘Satellite remote sensing of earthquake, volcano, flood, landslide and coastal inundation hazards’, ISPRS Journal of Photogrammetry and Remote Sensing, Vol: 59(4), pp. 185-198.

Feature image:

Natural Resources Canada (2015) Weather Satellites / Sensors. Available at: http://www.nrcan.gc.ca/earth-sciences/geomatics/satellite-imagery-air-photos/satellite-imagery-products/educational-resources/9387

Image Classification

Image classification aims to assign pixels to themes or land covers (Lillesand et al, 2004), and is usually achieved by grouping spectrally similar pixels together into classes. There are several ways in which this can be done, two of which are unsupervised and supervised classification.

Unsupervised Classification:

[Flow diagram: the process of unsupervised classification]

The process of unsupervised classification can be seen in the flow diagram above. It can be carried out using either the K-means or the ISODATA algorithm. In practical 5, I completed an unsupervised classification using the ISODATA algorithm to classify six different land cover types in an image of the Hong Kong harbour. Learning about individual spectral signatures (see blog 3) gave me a greater understanding of which spectral responses should be combined, as well as what land covers those spectral responses were representing. This resulted in a more accurate classification and a closer match to the true colour image. A sketch of the equivalent process in code is shown below.
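
To make this concrete, the sketch below shows the same idea in Python using K-means (the practical itself used the ISODATA algorithm in ENVI, which scikit-learn does not provide, so K-means stands in; the image array here is a random placeholder):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multispectral image: 100 x 100 pixels, 6 bands.
image = np.random.rand(100, 100, 6)

# Flatten to (n_pixels, n_bands) so each pixel is one observation.
pixels = image.reshape(-1, image.shape[-1])

# Group pixels into 6 spectrally similar clusters, as in the practical.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)

# Reshape cluster labels back into image space; each label is an
# unlabelled land cover class that the analyst must then interpret.
classified = kmeans.labels_.reshape(image.shape[:2])
```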

Supervised Classification:

Supervised classification uses training sites identified by the user to classify land cover types. The software uses the spectral signatures of the training sites to identify each land cover type (GISGeography, 2016). In practical 6, we used a supervised classification technique to classify the six land cover types in the Hong Kong harbour image. Overall, this method produced the better classification, both in visual comparison to the true colour image and statistically in the accuracy assessment.
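
A comparable sketch for the supervised case, using a Gaussian maximum likelihood classifier (scikit-learn's QuadraticDiscriminantAnalysis behaves equivalently; the training pixels and labels below are invented placeholders for real training sites):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical training data: spectral values of pixels inside user-drawn
# training sites, with one integer label per land cover class (0-5).
train_pixels = np.random.rand(300, 6)          # 300 pixels, 6 bands
train_labels = np.random.randint(0, 6, 300)    # known class of each pixel

# Fit one Gaussian per class - the classic maximum likelihood classifier.
clf = QuadraticDiscriminantAnalysis().fit(train_pixels, train_labels)

# Classify every pixel of the full image (flattened as before).
image = np.random.rand(100, 100, 6)
classified = clf.predict(image.reshape(-1, 6)).reshape(100, 100)
```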

Practical examples of using classification:

I found that performing the different classification processes on the same image helped me to understand the process and to visualise the difference between the two methods. From doing these practicals, I questioned how classification techniques are used in research to improve the accuracy of remote sensing.

One study used this technique to measure the effects of wildfires (Joyce et al, 2009). Normally, soil appears lighter than vegetation; however, burn scars contain organic matter and so can appear darker, depending on the vegetation (Joyce et al, 2009). This can introduce variance and reduce accuracy. To improve this, isodata classification can be used, with band ratios able to distinguish burnt and unburnt areas better than individual spectral bands (Joyce et al, 2009).
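
Joyce et al do not specify a particular ratio here, but a widely used example for burnt areas is the normalised burn ratio (NBR), which contrasts NIR against shortwave infrared reflectance; a brief sketch:

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio: burnt areas score low, healthy vegetation high."""
    nir, swir = nir.astype(float), swir.astype(float)
    denom = nir + swir
    return np.where(denom == 0, 0.0, (nir - swir) / denom)

# Differencing pre- and post-fire NBR (dNBR) highlights burn scars that
# single bands struggle to separate from dark soils:
# dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```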

Supervised classification has also been used to map lowland Britain (Fuller and Parsell, 1990). Training sites were used to map varying land covers in Cambridgeshire, such as crop types, bare ground (e.g. urban areas), water sites and semi-natural vegetation (Fuller and Parsell, 1990). The overall accuracy of the classification was good, which may have been a result of using a large number of training sites. However, the producer’s accuracy and user’s accuracy varied between the land cover types, suggesting that the classification was not uniformly successful. For example, 95% of the land classified as semi-natural was classified correctly (Fuller and Parsell, 1990), indicating a high user’s accuracy for this land type. However, less area was classified as deciduous forest than was present in the field (Fuller and Parsell, 1990), indicating a lower producer’s accuracy for deciduous forest in this study.

The results from that study emphasise the importance of completing an accuracy assessment after each classification, to ensure the image can be used with confidence.
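
As a minimal sketch of such an assessment, the user's and producer's accuracies can be read straight off a confusion matrix (the counts below are invented for illustration):

```python
import numpy as np

# Rows = classified map, columns = ground reference (hypothetical counts).
confusion = np.array([
    [95,  3,  2],   # semi-natural
    [ 4, 80, 16],   # deciduous forest
    [ 1, 17, 82],   # crops
])

users_accuracy = confusion.diagonal() / confusion.sum(axis=1)      # per map class
producers_accuracy = confusion.diagonal() / confusion.sum(axis=0)  # per reference class
overall_accuracy = confusion.diagonal().sum() / confusion.sum()

print(users_accuracy)       # e.g. 0.95 for semi-natural
print(producers_accuracy)   # lower values flag under-mapped classes
print(round(overall_accuracy, 2))
```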

References:

Fuller, R.M., and Parsell, R.J. (1990) ‘Classification of TM imagery in the study of land use in lowland Britain: practical considerations for operational use’, International Journal of Remote Sensing, Vol: 11(10), pp. 1901-1917.

GISGeography (2016) Image Classification Techniques in Remote Sensing. Available at: http://gisgeography.com/image-classification-techniques-remote-sensing/

Joyce, K.E., Belliss, S.E., Samsonov, S.V., McNeill, S.J., and Glassey, P.J. (2009) ‘A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters’, Progress in Physical Geography, Vol: 33(2), pp. 1-25.

Lillesand, T., Kiefer, R.W., Chipman, J.W. (2004) Remote Sensing and Image Interpretation. (5th Edition). New York: Wiley and Sons.

Image Enhancement

Image enhancement is used to enable a greater level of image interpretation and understanding (Natural Resources Canada, 2016). Contrast is improved by using a greater brightness range, thereby producing an image of higher contrast. There are several ways in which this contrast stretching can be achieved, all by re-scaling the original brightness values to the screen’s display range of 0 (minimum) to 255 (maximum) (Natural Resources Canada, 2016).

Linear stretch:

One way in which the contrast of an image can be enhanced is by a linear stretch. This is the simplest method (Natural Resources Canada, 2016): it maps the lowest brightness value in the image to the lowest display value of 0, and the highest brightness value to 255 (Natural Resources Canada, 2016). This allows the whole brightness range of the screen to be used. It makes the dark areas of the image darker and the light areas lighter, which makes visual interpretation easier (Natural Resources Canada, 2016). However, this method may not always be appropriate; for example, if the data are not uniformly distributed, the contrast stretch will not be as effective. Below is an example of a linear stretch used to enhance the contrast of the wetland and estuary areas of the image. The original image, displayed as a false colour composite, is on the right, whilst the linear stretched image is on the left. The wetland can be seen in much greater detail, revealing some small tributary-like features that are difficult to interpret in the original image.


Hong Kong Harbour (right); Linear stretch of the right image (left)
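
In code, a linear stretch is a simple rescale; a minimal sketch in Python, assuming a single-band array (the low-contrast example band is invented):

```python
import numpy as np

def linear_stretch(band):
    """Map the image minimum to display value 0 and the maximum to 255."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(band, dtype=np.uint8)
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

# Hypothetical low-contrast band occupying only values 60-120 of 0-255.
band = np.random.randint(60, 121, size=(100, 100))
stretched = linear_stretch(band)      # now spans the full 0-255 display range
```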

Histogram equalisation:

Rather than using just the minimum and maximum values, as in the linear stretch, this method assigns display values according to the frequency distribution of the image histogram. As a result, values that occur frequently in the original histogram are shown in better detail than values with low frequency (Natural Resources Canada, 2016). Below is an example of the same false colour composite image (on the right) and the image after a histogram equalisation (on the left). This contrast stretch also seems to provide more detail in the wetland and estuary areas of the image; however, the surrounding area of the image is not compromised, and is also presented in greater detail.


Hong Kong harbour (right); Histogram equalisation of right image (left)
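
A minimal sketch of histogram equalisation, assuming an 8-bit single-band array; the cumulative histogram becomes the lookup table that assigns new display values:

```python
import numpy as np

def histogram_equalise(band):
    """Spread display values according to the cumulative pixel frequency."""
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    # Normalise the cumulative distribution to the 0-255 display range.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    # Look up each pixel's new value (band must hold 8-bit integers).
    return cdf[band].astype(np.uint8)

band = np.random.randint(60, 121, size=(100, 100)).astype(np.uint8)
equalised = histogram_equalise(band)  # frequent values get more display levels
```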

Gaussian Stretch:

This method is used when both the light and dark areas of an image need to be enhanced. The mean data value is set to 127, whilst the values three standard deviations below and above the mean are assigned to display values 0 and 255 respectively (ENVI, 2016). Below, the same image used in the previous two examples (shown on the right) is presented after a Gaussian stretch (on the left). Although this does improve the detail of the original image, it does not appear to be quite as effective as the linear stretch or histogram equalisation for this example.


Hong Kong harbour (right); Gaussian stretch of the right image (left)
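
A simplified sketch of this mapping (note that ENVI's actual Gaussian stretch fits the histogram to a Gaussian curve; this only implements the linear mean and standard deviation mapping described above):

```python
import numpy as np

def gaussian_stretch(band):
    """Map the mean to 127 and +/- 3 standard deviations to 0 and 255."""
    band = band.astype(float)
    mean, sd = band.mean(), band.std()
    scaled = (band - mean) / (6 * sd) * 255 + 127  # 3 sd each side spans 0-255
    return np.clip(scaled, 0, 255).astype(np.uint8)
```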

References:

ENVI (2016) Laboratory Exercises in Image Processing: Contrast Stretching / Harrisgeospatial. Available at: http://www.harrisgeospatial.com/Portals/0/EasyDNNNewsDocuments/Repository/ContrastStretching.pdf

Natural Resources Canada (2016) Image Enhancement. Available at: http://www.nrcan.gc.ca/earth-sciences/geomatics/satellite-imagery-air-photos/satellite-imagery-products/educational-resources/9389

Richards, J.A. (2013) Remote Sensing Digital Image Analysis: An Introduction. (5th Edition). Australia: Springer.

Spectral Signatures

All objects have different spectral signatures, due to their varying spectral properties of reflection and absorption. These signatures can therefore be used to differentiate land covers (Campbell and Wynne, 2011) and can provide useful information about each land cover.

Vegetation:

Although we may not always be able to distinguish vegetation types from spectral signatures, remote sensing can certainly be used to distinguish vegetated from non-vegetated areas (Campbell and Wynne, 2011). It can also provide information about the health of that vegetation (Campbell and Wynne, 2011), with differences in the peaks at certain wavelengths acting as a reference for photosynthetic activity, moisture content and more.

A general spectral signature for vegetation can be seen below. Chlorophyll absorbs strongly in the red and blue sections of the visible region, and reflects in the green. This spectral pattern in the visible region is a response to the leaf’s properties, and is the reason we see vegetation as green (Lillesand et al, 2004).

Vegetation has a very high reflectance in the near infrared region, which appears in the spectral signature as the rapid increase at around 700nm known as the ‘red edge’. The high NIR reflectance is due to the internal cell structure of the leaf reflecting a high proportion of the incident energy (around 40 to 50% (Lillesand et al, 2004)). The position and magnitude of the red edge can provide information about vegetation species and canopy density (Lillesand et al, 2004; Filella and Penuelas, 1994), as it is a reference for chlorophyll concentration (Filella and Penuelas, 1994). Near infrared reflectance therefore increases with a greater leaf canopy (Lillesand et al, 2004). The dips in the wavelengths beyond 1300nm are a response to high absorption by water in the leaves of the vegetation (Lillesand et al, 2004).


Spectral signature of vegetation (Sourced from: Peters, 2013)
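
Because the red edge position tracks chlorophyll concentration, it can be estimated numerically as the wavelength of steepest slope between the red and NIR regions; a sketch, assuming a reflectance spectrum sampled at regular wavelength intervals (the spectrum below is synthetic):

```python
import numpy as np

# Hypothetical spectrum sampled every 10 nm from 400 to 1000 nm.
wavelengths = np.arange(400, 1001, 10)
reflectance = np.interp(wavelengths, [400, 680, 750, 1000], [0.05, 0.04, 0.48, 0.50])

# First derivative of reflectance with respect to wavelength.
slope = np.gradient(reflectance, wavelengths)

# Red edge position: wavelength of maximum slope within 680-750 nm.
window = (wavelengths >= 680) & (wavelengths <= 750)
red_edge = wavelengths[window][np.argmax(slope[window])]
print(red_edge)  # shifts towards longer wavelengths as chlorophyll increases
```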

Spectral signatures can change both spatially and temporally, and a change in reflectance in the NIR region can indicate a change in vegetation health. This shows how remote sensing of vegetation can be used to monitor vegetation health after natural disasters, human influences and climate change.

Studies have used this spectral signature to identify the biodiversity and species abundance of an area. For example, Gould (2000) used Landsat TM imagery and NDVI values to assess the species richness of the Hood River region of the Central Canadian Arctic. This analysis of ecological resources can improve future conservation evaluations and the planning of future land use decisions (Gould, 2000).

Vegetation spectral indices have also been used to develop a crop water stress index (CWSI) to detect when crops are water stressed (Moran et al, 1994). This is done by combining information from the spectral indices of the crops with surface-air temperature differences (Moran et al, 1994). This is an important application of spectral signatures, as it can aid in mitigating crop loss through water stress.

References:

Campbell, J.B., and Wynne, R.H. (2011) Introduction to Remote Sensing. (5th Edition). New York: Guilford Press.

Filella, I., and Penuelas, J. (1994) ‘The red edge position and shape as indicators of plant chlorophyll content, biomass and hydric status’, International Journal of Remote Sensing, Vol: 15(7), pp. 1459-1470.

Lillesand, T., Kiefer, R.W., Chipman, J.W. (2004) Remote Sensing and Image Interpretation. (5th Edition). New York: Wiley and Sons.

Moran, M.S., Clarke, T.R., Inoue, Y., and Vidal, A. (1994) ‘Estimating crop water deficit using the relation between surface-air temperature and spectral vegetation index’, Remote Sensing of Environment, Vol: 49(3), pp. 246-263

Peters, J. (2013) ‘Vegetation Analysis: Using Vegetation Indices in ENVI’, Harrisgeospatial, 18 December. Available at: http://www.harrisgeospatial.com/Company/PressRoom/Blogs/TabId/836/PageID/2/PgrID/2928/PID/2928/CategoryID/59/CategoryName/ENVIWhitepaper/Default.aspx

Feature image:

SEOS (2016) Introduction to categorisation of objects from their data / SEOS Classification algorithms and methods. Available at: http://www.seos-project.eu/modules/classification/classification-c00-p05.html

Electromagnetic Spectrum

Electromagnetic Radiation:

The source of remote sensing is electromagnetic radiation. This can come either from a natural source, for example the sun, or from an artificial source, for example a sensor carried on a satellite.

The energy available for remote sensing depends on the wavelength and frequency of this electromagnetic radiation. Short wavelengths carry more energy than long wavelengths, and therefore provide a stronger signal for remote sensing.
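
This follows from the photon energy relation E = hc/λ; a quick check in Python (the example wavelengths are my own choices):

```python
# Photon energy E = h * c / wavelength: shorter wavelengths carry more energy.
h = 6.626e-34   # Planck's constant (J s)
c = 3.0e8       # speed of light (m/s)

for name, wavelength in [("blue light", 450e-9), ("NIR", 850e-9), ("microwave", 0.05)]:
    energy = h * c / wavelength
    print(f"{name}: {energy:.2e} J per photon")
```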

The Electromagnetic Spectrum:

The electromagnetic spectrum is divided into intervals of different wavelengths (Graham, 1999). The best range of wavelengths for remote sensing covers the visible, infrared and microwave regions, with wavelengths ranging from around 100nm to 10cm. The visible region consists of wavelengths from 0.4 to 0.7 µm (Graham, 1999). The infrared region then extends from 0.7 to 100 µm (Graham, 1999). Microwave radiation has longer wavelengths, from 1mm to 1m (Graham, 1999).

Visible:

The visible region can be detected by the eye, and can be split into different colours. The blue region has wavelengths from 400 to 500nm, green from 500 to 600nm, and red from 600 to 700nm. Reflectance and absorption at each of these wavelengths differ between materials, which explains the colouring of objects.

Infrared:

The infrared region can be broken down into near-infrared (NIR) at 700 to 1300nm, shortwave-infrared (SWIR) at 1300 to 3000nm, mid-infrared (MIR) at 3000 to 8000nm, and thermal infrared (TIR) and far-infrared (FIR) from 8000nm to 1mm. Infrared has a longer wavelength than visible light and so cannot be seen without the use of sensors.

Microwave:

Microwave remote sensing can be either passive (at 0.15 to 30cm wavelength) or active (over 30cm wavelength). Passive microwave sensors detect naturally emitted energy, whereas active microwave sensors emit their own energy and detect the reflected signal. Active sensing has more energy available than passive, and produces an image of higher spatial resolution.

I found learning about the different regions of the spectrum interesting, as I did not previously know how wavelength affects energy, or the implications this has for remote sensing. Learning this will help me understand how certain objects vary in their reflectance at these wavelengths, and how this affects monitoring them using remote sensing.

References:

Graham, S. (1999) Remote Sensing/ NASA Earth Observatory. Available at: http://earthobservatory.nasa.gov/Features/RemoteSensing/remote.php

NASA (2016) Climate Science Investigations. Available at: http://www.ces.fau.edu/nasa/module-2/radiation-sun.php

Remote Sensing

What is remote sensing?

Remote sensing can be defined as the practice of observing the Earth’s land and water surfaces through images acquired from a distance, using reflected or emitted electromagnetic energy (Campbell and Wynne, 2011). The process, factors and uses of remote sensing will be explored throughout this blog.

The history of remote sensing

Remote sensing could be argued to have started with the first photograph in 1839, and developed as images and image acquisition progressed. However, the term ‘remote sensing’ was not used in research until the 1960s (Campbell and Wynne, 2011). Remote sensing has since been able to progress as global technology improved and knowledge expanded. The first Earth-orbiting satellite dedicated to land observation was Landsat-1, launched in 1972 (Campbell and Wynne, 2011). Satellites and methods of observing the Earth continued to develop, with global coverage remote sensing being established in the 1990s (Campbell and Wynne, 2011). The first satellite with the specific aim of obtaining global coverage was NASA’s Terra-1 in 1999, designed to monitor changes in Earth’s ecosystems (Campbell and Wynne, 2011). Future developments are focusing on making remotely sensed data more accessible to a greater number of people (Schmidt, 2011). Advances are also being made in hyperspectral remote sensing, and in equipping satellites with ‘active’ sensors (Schmidt, 2011). These developments will improve remotely sensed data by extracting greater amounts of information and allowing areas of interest to be targeted more precisely (Schmidt, 2011).

Examples of Sensors and Satellites:



Image of the Landsat-7 satellite (sourced from: NASA, 2016)

SENSOR – SATELLITE

  • MODIS – Terra

  • Hyperion – EO-1

  • ETM+ – Landsat-7

  • MSS (Multispectral Scanner System) – Landsat 1–5

It will be interesting to discover what each of these sensors, and others, is best used for, and why. I hope to find out how sensors differ, and why some are better at monitoring certain features than others. I aim to do this by first learning how sensors work and differ, and then how they are put to practical use.

Remote sensing can be used to monitor:

  • Climate change and its effects

  • Vegetation cover and productivity

  • Anthropogenic effects on land and land cover

  • Photosynthesis occurrence

  • Effect and extent of natural disasters

  • Extents and shifts in ecosystems and ecosystem boundaries

References:

Campbell, J.B., and Wynne, R.H. (2011) Introduction to Remote Sensing. (5th Edition). New York: Guilford Press.

NASA (2016) Landsat Science. Available at: http://landsat.gsfc.nasa.gov/?page_id=2290 (accessed on 20/05/2016).

Schmidt, C.W. (2011) Beyond the Internet, RAND, Available at: http://smapp.rand.org/ise/ourfuture/Internet/sec4_sensing.html