What is Spatial Resolution?
A presentation for better understanding!
S. A. Quadri
CEDEC, USM, Malaysia
Effect of spatial resolution on visualization
(Satellite image. Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427)
Image resolution
It is an umbrella term that describes the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail.
Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved.
Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the overall size of a picture (lines per picture height, also known simply as lines, TV lines, or TVL), or to angular subtense.
Line pairs are often used instead of lines. A line pair comprises a dark line and an adjacent light line; a line is either a dark line or a light line.
A resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter (5 LP/mm).
Photographic lens and film resolution are most often quoted in line pairs per millimeter.
Resolution of digital images
The resolution of digital images can be described in many different ways. The term resolution is often used for a pixel count in digital imaging, even though American, Japanese, and international standards specify that it should not be so used, at least in the digital camera field.
• An image of N pixels high by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. But when pixel counts are referred to as resolution, the convention is to describe the pixel resolution with a set of two positive integers, where the first number is the number of pixel columns (width) and the second is the number of pixel rows (height), for example 640 by 480.
• Another popular convention is to cite resolution as the total number of pixels in the image, typically given as a number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million.
• Other conventions describe pixels per length unit or pixels per area unit, such as pixels per inch or per square inch.
• According to the same standards, the number of effective pixels that an image sensor or digital camera has is the count of elementary pixel sensors that contribute to the final image, as opposed to the number of total pixels, which includes unused or light-shielded pixels around the edges.
None of these pixel resolutions are true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution.
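The two pixel-count conventions above can be sketched in a few lines of Python (the function names and the 640×480 / 2048×1536 examples are illustrative, not from any standard):

```python
def pixel_resolution(columns: int, rows: int) -> str:
    """Convention 1: 'columns by rows', e.g. '640 by 480'."""
    return f"{columns} by {rows}"

def megapixels(columns: int, rows: int) -> float:
    """Convention 2: total pixel count divided by one million."""
    return columns * rows / 1_000_000

print(pixel_resolution(640, 480))   # "640 by 480"
print(megapixels(2048, 1536))       # 3.145728, i.e. about 3.1 megapixels
```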
Effect of pixel resolutions
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction from the pixels would be preferred, but for illustration of pixels, the sharp squares make the point better).
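A minimal sketch of how such an illustration could be produced: average each block of pixels, then repeat the averaged value as a sharp square so the output has the same size as the input (the 8×8 gradient is a made-up example):

```python
import numpy as np

def blocky_downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Average each factor x factor block, then repeat each averaged
    pixel as a sharp square so both images have the same size."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    averaged = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(averaged, factor, axis=0), factor, axis=1)

# A hypothetical 8x8 grayscale gradient, reduced to 4x4 effective pixels.
img = np.linspace(0, 255, 64).reshape(8, 8)
low_res = blocky_downsample(img, 2)
print(low_res.shape)  # (8, 8): same pixel grid, but only 16 distinct values
```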
Further explanation
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048 × 1536 = 3,145,728 pixels. One could refer to it as 2048 by 1536 or as a 3.1-megapixel image.
Unfortunately, the count of pixels is not a real measure of the resolution of digital camera images, because:
Color image sensors are typically set up to alternate color filter types over the light-sensitive individual pixel sensors.
Digital images ultimately require a red, green, and blue value for each pixel to be displayed or printed, but one individual pixel in the image sensor will only supply one of those three pieces of information.
The image has to be interpolated or demosaiced to produce all three colors for each output pixel.
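To make the one-sample-per-sensor-pixel point concrete, here is a toy demosaic for a hypothetical RGGB Bayer mosaic. It simply reuses the samples inside each 2×2 cell; real cameras use more sophisticated interpolation, so this is only a sketch of the idea:

```python
import numpy as np

def naive_demosaic(raw: np.ndarray) -> np.ndarray:
    """Toy demosaic for an RGGB Bayer mosaic: every 2x2 cell holds one
    R, two G, and one B sample; each output pixel in the cell reuses
    those samples (real cameras interpolate between cells instead)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]                            # top-left: red sample
            g = (raw[y, x + 1] + raw[y + 1, x]) / 2  # two green samples
            b = raw[y + 1, x + 1]                    # bottom-right: blue sample
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 sensor readout
print(naive_demosaic(raw).shape)  # (4, 4, 3): three values per output pixel
```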
Spatial resolution
The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on the properties of the system creating the image, not just the pixel resolution in pixels per inch (ppi).
For practical purposes, the clarity of an image is decided by its spatial resolution, not the number of pixels in the image. In effect, spatial resolution refers to the number of independent pixel values per unit length.
• The spatial resolution of computer monitors is generally 72 to 100 lines per inch, corresponding to pixel resolutions of 72 to 100 ppi.
• With scanners, optical resolution is used to distinguish spatial resolution from the number of pixels per inch.
• In geographic information systems (GISs), spatial resolution is measured by the ground sample distance (GSD) of an image, the pixel spacing on the Earth's surface.
• In astronomy, one often measures spatial resolution in data points per arc second subtended at the point of observation, since the physical distance between objects in the image depends on their distance away, and this varies widely with the object of interest.
• In electron microscopy, line or fringe resolution refers to the minimum separation detectable between adjacent parallel lines (e.g. between planes of atoms), while point resolution refers to the minimum separation between adjacent points that can be both detected and interpreted, e.g. as adjacent columns of atoms.
• In stereoscopic 3D images, spatial resolution could be defined as the spatial information recorded or captured by the two viewpoints of a stereo camera (left and right camera).
It could be argued that such "spatial resolution" adds information to an image, so that the overall resolution of a given photographic image or video frame would not depend solely on pixel count or dots per inch when being classified and interpreted.
Spatial resolution and pixel count
Just make out the difference!
Spatial resolution | Pixel count
Spectral resolution
Color images distinguish light of different spectra. Multi-spectral images resolve even finer differences of spectrum or wavelength than is needed to reproduce color; that is, they have higher spectral resolution (narrower wavelength bands).
Temporal resolution
Movie cameras and high-speed cameras can resolve events at different points in time. The time resolution used for movies is usually 15 to 30 frames per second (frames/s), while high-speed cameras may resolve 100 to 1000 frames/s, or even more.
Radiometric resolution
Radiometric resolution determines how finely a system can represent or distinguish differences of intensity, and is usually expressed as a number of levels or a number of bits, for example the 8 bits or 256 levels typical of computer image files.
The higher the radiometric resolution, the better subtle differences of intensity or reflectivity can be represented, at least in theory.
In practice, the effective radiometric resolution is typically limited by the noise level, rather than by the number of bits of representation.
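The bits-to-levels relationship for radiometric resolution is simply a power of two, as this small sketch shows:

```python
def radiometric_levels(bits: int) -> int:
    """Number of distinguishable intensity levels for a given bit depth."""
    return 2 ** bits

for bits in (1, 8, 11, 16):
    print(f"{bits:2d} bits -> {radiometric_levels(bits)} levels")
# 8 bits gives the 256 levels typical of computer image files.
```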
Resolution in various media
This is a list of resolutions for various media.

Analog and early digital
352×240 : Video CD
300×480 : Umatic, Betamax, VHS, Video8
350×480 : Super Betamax, Betacam
420×480 : LaserDisc, Super VHS, Hi8
640×480 : Analog broadcast (NTSC)
670×480 : Enhanced Definition Betamax
768×576 : Analog broadcast (PAL, SECAM)

Digital
720×480 : D-VHS, DVD, miniDV, Digital8, Digital Betacam
720×480 : Widescreen DVD (anamorphic)
1280×720 : D-VHS, HD DVD, Blu-ray, HDV (miniDV)
1440×1080 : HDV (miniDV)
1920×1080 : HDV (miniDV), AVCHD, HD DVD, Blu-ray, HDCAM SR
2048×1080 : 2K digital cinema
4096×2160 : 4K digital cinema
7680×4320 : UHDTV

Film
35 mm film is scanned for release on DVD at 1080 or 2000 lines as of 2005. However, some photography sources give 5380×3620 as the resolution of 35 mm film, which corresponds to about 19.5 Mpix.
IMAX, including IMAX HD and OMNIMAX: approximately 10,000×7,000 (7,000 lines) resolution, about 70 Mpix, which may be considered the highest resolution among these media.
Spatial resolution and pixel size
The terms image resolution and pixel size are often used interchangeably. In reality, they are not equivalent: an image sampled at a small pixel size does not necessarily have a high resolution.
The following three images illustrate this point. The first image is a SPOT image of 10 m pixel size. It was derived by merging a SPOT panchromatic image of 10 m resolution with a SPOT multispectral image of 20 m resolution. The effective resolution is thus determined by the resolution of the panchromatic image, which is 10 m.
This image is further processed to degrade the resolution while maintaining the same pixel size. The next two images are blurred versions of the image with coarser resolution, but still digitized at the same pixel size of 10 m.
Even though they have the same pixel size as the first image, they do not have the same resolution.
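Degrading resolution while keeping the pixel size fixed can be sketched with a simple averaging (box) blur; the image grid is untouched, only the independence of neighbouring pixel values is lost. The blur radius and the random test image are illustrative choices, not the processing actually applied to the SPOT images:

```python
import numpy as np

def box_blur(image: np.ndarray, radius: int) -> np.ndarray:
    """Degrade effective resolution by averaging each pixel with its
    neighbours; the pixel grid (pixel size) is left unchanged."""
    out = np.zeros_like(image, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return out / (2 * radius + 1) ** 2

img = np.random.default_rng(0).random((64, 64))
blurred = box_blur(img, 2)
print(img.shape == blurred.shape)  # True: same pixel size, lower resolution
```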
Resolution and Sharpness
To determine resolution, a raster pattern is normally used, employing increasingly fine bars and gaps. A common example in real images would be a picket fence displayed in perspective.
In the image of the fence, shown in Fig. 1, it is evident that the gaps between the boards become increasingly difficult to discriminate as the distance becomes greater. This effect is the basic problem of every optical image.
In the foreground of the image, where the boards and gaps have not yet been squeezed together by the perspective, a large difference in brightness is recognized. The more the boards and gaps are squeezed together in the distance, the less difference is seen in the brightness.
To better understand this effect, the brightness values are shown along the yellow arrow in an x/y diagram (Fig. 2). The brightness difference seen on the y-axis is called contrast.
The curve itself looks like a harmonic oscillation; because the brightness does not change over time but spatially from left to right, the x-axis is called spatial frequency.
It can be clearly seen in Fig. 1 that the finer the reproduced structure, the more the contrast will be "slurred" at that point in the image.
The limit of resolution has been reached when one can no longer clearly differentiate between the structures.
This means the resolution limit (red circle indicated in Fig. 2) lies at the spatial frequency where there is just enough contrast left to clearly differentiate between board and gap.
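The contrast read off such a brightness diagram is commonly quantified as Michelson contrast. The brightness values below are made up to mimic the fence example: coarse foreground boards swing fully between dark and light, while fine, distant boards are "slurred" toward a middle grey:

```python
import numpy as np

def michelson_contrast(profile: np.ndarray) -> float:
    """Contrast of a brightness profile: (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical brightness values along the scan line.
coarse = np.array([10, 10, 250, 250, 10, 10, 250, 250], dtype=float)
fine = np.array([110, 150, 110, 150, 110, 150, 110, 150], dtype=float)
print(michelson_contrast(coarse))  # high contrast, easy to resolve
print(michelson_contrast(fine))    # low contrast, near the resolution limit
```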
Resolution = sharpness?
Are resolution and sharpness the same? By looking at the images shown below, one can quickly determine which image is sharper.
Although the image on the left comprises twice as many pixels, the image on the right, whose contrast at coarse details has been increased with a filter, looks at first glance distinctly sharper.
The resolution limit describes how much information makes up each image, but not how a person evaluates this information. The human eye, in fact, is able to resolve extremely fine details, and this ability also holds for objects at a greater distance.
The decisive physiological point, however, is that fine details do not contribute to the subjective perception of sharpness. Therefore, it is important to clearly separate the two terms, resolution and sharpness.
MTF
The modulation transfer function (MTF) describes the relationship between resolution and sharpness, and is the basis for a scientific confirmation of the phenomenon described earlier.
The modulation component in MTF means approximately the same as contrast. If we evaluate the contrast (modulation) not only where the resolution reaches its limit, but over as many spatial frequencies as possible, and connect these points with a curve, we arrive at the so-called MTF.
As shown in the figure, the x-axis represents the already-established spatial frequency, expressed in lp/mm (lp = line pairs), while the y-axis shows modulation instead of brightness.
A modulation of 1 (or 100%) corresponds to the full ratio between the brightness of a completely white image and the brightness of a completely black image.
The higher the spatial frequency, in other words the finer the structures in the image, the lower the transferred modulation.
Conclusions:
• Sharpness does not depend only on resolution.
• The modulation at lower spatial frequencies is essential.
• Contrast in coarse details is significantly more important for the impression of sharpness than contrast at the resolution limit.
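An MTF curve can be traced numerically by blurring sine test patterns of increasing spatial frequency and measuring the surviving modulation. The box blur here is only a stand-in for a real optical system, and the blur width and pattern length are arbitrary choices:

```python
import numpy as np

def modulation(signal: np.ndarray) -> float:
    """Michelson modulation: (max - min) / (max + min)."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf_point(freq_lp_mm: float, blur_mm: float = 0.01, length_mm: float = 2.0) -> float:
    """Modulation left after box-blurring a sine test pattern of the
    given spatial frequency (a stand-in for a real optical system)."""
    n = 4000
    x = np.linspace(0.0, length_mm, n)
    pattern = 0.5 + 0.5 * np.sin(2 * np.pi * freq_lp_mm * x)  # modulation 1
    radius = max(1, int(blur_mm / length_mm * n))
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.convolve(pattern, kernel, mode="valid")
    return modulation(blurred)

# Finer structures (higher lp/mm) transfer less modulation.
for f in (2, 10, 20, 40):
    print(f"{f:3d} lp/mm -> modulation {mtf_point(f):.2f}")
```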
Resolution of the human eye
The fovea of the human eye (the part of the retina that is responsible for sharp central vision) contains about 140,000 sensor cells per square millimeter.
This means that if two objects are projected onto the fovea with a separation of more than about 4 µm, a human with normal visual acuity (20/20) can resolve them.
On the object side, this corresponds to about 0.2 mm at a distance of 1 m (roughly 1 minute of arc).
In practice, of course, this depends on whether the viewer is concentrating only on the center of the viewing field, whether the object is moving very slowly or not at all, and whether the object has good contrast against the background. Allowing for some amount of tolerance, this would be around 0.3 mm at 1 m distance (= 1.03 minutes of arc). In a certain range, one can assume a linear relation between distance and the detail size.
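The linear relation between viewing distance and resolvable detail size follows from the small-angle geometry; this sketch reproduces the 0.3 mm figure using the tolerant acuity of 1.03 arc minutes quoted above:

```python
import math

def resolvable_detail_mm(distance_m: float, acuity_arcmin: float = 1.03) -> float:
    """Smallest resolvable detail (in mm) at a given viewing distance,
    assuming the small-angle relation and ~1.03 arc minutes of acuity."""
    angle_rad = acuity_arcmin / 60 * math.pi / 180
    return distance_m * math.tan(angle_rad) * 1000  # metres -> millimetres

for d in (1, 2, 10):
    print(f"{d:2d} m -> {resolvable_detail_mm(d):.2f} mm")
# At 1 m this reproduces the ~0.3 mm figure quoted above.
```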
This hypothesis can be easily tested!
Pin the test pattern displayed in the figure below on a well-lit wall and walk away 10 m. One should be able to clearly differentiate between the lines and gaps in the figure.
Of course, this requires an ideal visual acuity of 20/20. Nevertheless, if you can't resolve the pattern in the figure, you might consider paying a visit to an ophthalmologist!
How do we interpret optical images?
Let us see the significance of spatial resolution and various other related terms.
Four main types of information contained in an optical image are often utilized for image interpretation:
• Radiometric information (i.e. brightness, intensity, tone),
• Spectral information (i.e. color, hue),
• Textural information,
• Geometric and contextual information.
They are illustrated in the following examples.
There are different types of images:
• Panchromatic images
• Multispectral images
• Color composite images
• True color composite images
• False color composite images
• Natural color composite images
Panchromatic image
A panchromatic image consists of only one band. It is usually displayed as a grey-scale image.
A panchromatic image may be interpreted similarly to a black-and-white aerial photograph of the area. The radiometric information is the main information type utilized in the interpretation.
A panchromatic image extracted from a SPOT panchromatic scene at a ground resolution of 10 m.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
Multispectral images
A multispectral image consists of several bands of data. For visual display, each band of the image may be displayed one band at a time as a grey-scale image, or in combinations of three bands at a time as a color composite image.
Interpretation of a multispectral color composite image requires knowledge of the spectral reflectance signatures of the targets in the scene. In this case, the spectral information content of the image is utilized in the interpretation.
The following three images show the three bands of a multispectral image extracted from a SPOT multispectral scene at a ground resolution of 20 m.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
Color composite images
In displaying a color composite image, three primary colors (red, green, and blue) are used. When these three colors are combined in various proportions, they produce different colors in the visible spectrum.
Associating each spectral band (not necessarily a visible band) with a separate primary color results in a color composite image.
True color composite
If a multispectral image contains the three visual primary color bands (red, green, blue), the three bands may be combined to produce a "true color" image.
For example, bands 3 (red), 2 (green), and 1 (blue) of a Landsat TM image or an IKONOS multispectral image can be assigned respectively to the R, G, and B colors for display. In this way, the colors of the resulting color composite image closely resemble what would be observed by the human eye.
A 1-m resolution true-color IKONOS image.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
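The band-to-primary assignment amounts to stacking three single-band images into one three-channel array. The tiny 2×2 band values below are made up for illustration:

```python
import numpy as np

def color_composite(red_band, green_band, blue_band) -> np.ndarray:
    """Assign one spectral band to each display primary (R, G, B).
    For a Landsat TM true-color composite this would be bands 3, 2, 1."""
    return np.stack([red_band, green_band, blue_band], axis=-1)

# Hypothetical 2x2 single-band images standing in for TM bands 3, 2, 1.
band3 = np.array([[0.9, 0.1], [0.5, 0.3]])  # red band
band2 = np.array([[0.8, 0.2], [0.4, 0.3]])  # green band
band1 = np.array([[0.7, 0.2], [0.4, 0.2]])  # blue band
rgb = color_composite(band3, band2, band1)
print(rgb.shape)  # (2, 2, 3): one displayable color per pixel
```

A false colour composite uses exactly the same stacking; only the choice of which band feeds which primary changes.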
False colour composite
The display colour assignment for any band of a multispectral image can be done in an entirely arbitrary manner. In this case, the colour of a target in the displayed image does not have any resemblance to its actual colour. The resulting product is known as a false colour composite image.
There are many possible schemes for producing false colour composite images. Some schemes are suitable for detecting certain objects in the image.
False colour composite multispectral SPOT image.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
Natural colour composite
For optical images lacking one or more of the three visual primary colour bands (i.e. red, green, and blue), the spectral bands (some of which may not be in the visible region) may be combined in such a way that the appearance of the displayed image resembles a visible colour photograph, i.e. vegetation in green, water in blue, soil in brown or grey, etc.
Some people refer to this composite as a "true colour" composite. However, this term is misleading, since in many instances the colours are only simulated to look similar to the "true" colours of the targets. The term "natural colour" is preferred.
Natural colour composite multispectral SPOT image.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
Vegetation indices
Different bands of a multispectral image may be combined to accentuate the vegetated areas. One such combination is the ratio of the near-infrared band to the red band, known as the Ratio Vegetation Index (RVI):
RVI = NIR / Red
Since vegetation has high NIR reflectance but low red reflectance, vegetated areas will have higher RVI values than non-vegetated areas.
Another commonly used vegetation index is the Normalized Difference Vegetation Index (NDVI), computed by
NDVI = (NIR - Red) / (NIR + Red)
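Both indices are direct per-pixel arithmetic on the NIR and red bands. The reflectance values below are invented to contrast a vegetated pixel (high NIR, low red) with a bare-soil pixel:

```python
import numpy as np

def rvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Ratio Vegetation Index: NIR / Red."""
    return nir / red

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: [vegetated pixel, bare-soil pixel].
nir = np.array([0.50, 0.30])
red = np.array([0.08, 0.25])
print(rvi(nir, red))   # vegetated pixel has the larger ratio
print(ndvi(nir, red))  # NDVI near +1 indicates dense vegetation
```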
Textural information
Texture is an important aid in visual image interpretation, especially for high spatial resolution imagery. It is also possible to characterize textural features numerically, and algorithms for computer-aided automatic discrimination of different textures in an image are available.
IKONOS 1-m resolution pan-sharpened colour image of an oil palm plantation. Even though the general colour is green throughout, three distinct land cover types can be identified from the image texture.
(Reference: http://visibleearth.nasa.gov/view_rec.php?id=1427 and http://earthobservatory.nasa.gov/Contact/index_ve.php?do=s)
Remote sensing satellites
Optical remote sensing makes use of visible, near-infrared, and short-wave infrared sensors to form images of the Earth's surface by detecting the solar radiation reflected from targets on the ground.
Different materials reflect and absorb differently at different wavelengths. Thus, the targets can be differentiated by their spectral reflectance signatures in the remotely sensed images.
Optical remote sensing systems are classified into several types, depending on the number of spectral bands used in the imaging process.
Several remote sensing satellites are currently available, providing imagery suitable for various types of applications. Each of these satellite-sensor platforms is characterized by:
• the wavelength bands employed in image acquisition,
• the spatial resolution of the sensor,
• the coverage area and the temporal coverage, i.e. how frequently a given location on the Earth's surface can be imaged by the imaging system.
In terms of spatial resolution, satellite imaging systems can be classified into:
• Low resolution systems (approx. 1 km or more)
• Medium resolution systems (approx. 100 m to 1 km)
• High resolution systems (approx. 5 m to 100 m)
• Very high resolution systems (approx. 5 m or less)

In terms of the spectral regions used in data acquisition, satellite imaging systems can be classified into:
• Optical imaging systems (including visible, near-infrared, and shortwave infrared systems)
• Thermal imaging systems
• Synthetic aperture radar (SAR) imaging systems

Optical/thermal imaging systems can be classified according to the number of spectral bands used:
• Monospectral or panchromatic (single wavelength band, "black-and-white", grey-scale image) systems
• Multispectral (several spectral bands) systems
• Superspectral (tens of spectral bands) systems
• Hyperspectral (hundreds of spectral bands) systems

Synthetic aperture radar imaging systems can be classified according to the combination of frequency bands and polarization modes used in data acquisition, e.g.:
• Single frequency (L-band, or C-band, or X-band)
• Multiple frequency (combination of two or more frequency bands)
• Single polarization (VV, or HH, or HV)
• Multiple polarization (combination of two or more polarization modes)