Viva Life

Sunday, 9 August 2015

DIGITAL IMAGE PROCESSING

OBJECT-BASED IMAGE ANALYSIS (OBIA)

Digital images are composed of pixels that record the amount of radiation (i.e., light) reflected in a part of the electromagnetic spectrum. Pixels are generally not visible except at extremely close zoom levels, where they usually appear to the human eye as a series of squares. The photographs below show an area of rangeland in the southwestern U.S. The left photo is shown at a very close zoom level where individual pixels are visible. The right photo shows the same area (red box) at a more realistic view, revealing that the pixels are really parts of shrubs and patches of grass.

Image courtesy of USDA/ARS Jornada Experimental Range

Object-based Image Analysis

Object-based image analysis (OBIA), a technique used to analyze digital imagery, was developed relatively recently compared with traditional pixel-based image analysis (Burnett and Blaschke 2003). While pixel-based image analysis works with the information in each individual pixel, object-based image analysis works with information from sets of similar pixels called objects or image objects. More specifically, image objects are groups of pixels that are similar to one another based on a measure of spectral properties (i.e., color), size, shape, and texture, as well as context from a neighborhood surrounding the pixels.
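To make the distinction concrete, here is a minimal sketch in Python, with made-up values: pixel-based analysis works with each pixel value on its own, while object-based analysis summarizes all the pixels that belong to the same object and attaches that summary to the object.

```python
import numpy as np

# Hypothetical 4x4 single-band image and a label map assigning each pixel
# to one of two image objects (values are illustrative, not real data).
band = np.array([[10, 12, 55, 60],
                 [11, 13, 58, 61],
                 [ 9, 14, 57, 59],
                 [10, 12, 54, 62]], dtype=float)
objects = np.array([[1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2],
                    [1, 1, 2, 2]])

# Pixel-based analysis considers each value individually...
print("per-pixel values:", band.ravel())

# ...while object-based analysis summarizes the pixels in each object,
# e.g., by their mean, and works with that per-object information.
for obj_id in np.unique(objects):
    print(f"object {obj_id}: mean = {band[objects == obj_id].mean():.1f}")
```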

Note: The examples below are drawn from Definiens eCognition®, v. 8. However, there are many other programs available that also provide object-based image analysis. See the “Similar Methods” section below.

Steps of OBIA

Segmentation

To obtain useful information from an image, the segmentation process splits the image into unclassified “object primitives” that form the basis for the image objects and for the rest of the image analysis. Segmentations, and the resulting characteristics of the object primitives and eventual image objects, are based on shape, size, color, and pixel topology, controlled through parameters set by the user. The values of these parameters define how much influence the spectral and spatial characteristics of the image layers have on the shape and size of the image objects. The user modifies the settings depending on the objective, as well as on image quality, the bands available, and image resolution.
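eCognition’s multiresolution segmentation is proprietary, but the general idea can be sketched with an open-source analogue. The example below uses scikit-image’s SLIC algorithm (an assumption for illustration, not the method used in the figures) to split an image into object primitives; its parameters loosely parallel the user-set parameters described above.

```python
# A minimal open-source analogue of OBIA segmentation using scikit-image's
# SLIC algorithm (not eCognition's multiresolution segmentation).
import numpy as np
from skimage import data, segmentation

image = data.astronaut()  # stand-in RGB image; replace with your aerial photo

# n_segments controls roughly how many object primitives are produced;
# compactness trades color similarity against spatial regularity, loosely
# analogous to the color/shape weighting described above.
objects = segmentation.slic(image, n_segments=500, compactness=10.0,
                            start_label=1)

print("number of object primitives:", objects.max())
```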

Image courtesy of USDA/ARS Jornada Experimental Range

Pixels (left) are grouped into image objects (right) through a segmentation process. In this “false-color” image (live vegetation shows up as red), the red outline indicates an individual shrub.

As a general rule, ‘good’ image objects should be as large as possible, but small enough to show the contours of interest and to serve as building blocks for objects of interest not yet identified. If the objective is to classify large shrubs, each object should contain only one shrub (or one group of shrubs). If a single shrub is made up of many small objects, the objects are too small.

The “best” settings for segmentation parameters vary widely, and are usually determined through a combination of trial and error, and experience. Settings that work well for one image may not work at all for another, even if the images are similar.
Color/shape parameters
Color and shape parameters affect how objects are created during a segmentation. The higher the value of the color or shape criterion, the more the resulting objects are optimized for spectral or spatial homogeneity, respectively. Within the shape criterion, the user can also adjust the smoothness (of object borders) and compactness of the objects.

The color and shape parameters balance each other, i.e., if color has a high value (high influence on segmentation), shape must have a low value, with less influence. If color and shape parameters are equal, then each will have roughly equal amounts of influence on the segmentation outcome.
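A simplified sketch of how this balancing works is given below, with hypothetical names and weights (the exact form of eCognition’s criterion is not reproduced here): the color and shape weights sum to one, so raising one necessarily lowers the other, and the shape term itself balances compactness against smoothness the same way.

```python
def fusion_cost(d_color, d_smooth, d_compact, w_color=0.7, w_compact=0.5):
    """Simplified sketch of a weighted homogeneity criterion; the names
    and exact form are illustrative, not eCognition's code.

    Because w_shape = 1 - w_color, the color and shape criteria balance
    each other exactly as described above."""
    w_shape = 1.0 - w_color
    d_shape = w_compact * d_compact + (1.0 - w_compact) * d_smooth
    return w_color * d_color + w_shape * d_shape

# With w_color=0.7, spectral change dominates the merge decision:
print(fusion_cost(d_color=0.4, d_smooth=0.2, d_compact=0.6))
```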
Scale Parameter
The value of the scale parameter affects image segmentation by determining the size of image objects. If the scale value is high, the variability allowed within each object is high and image objects are relatively large. Conversely, small scale values allow less variability within each segment, creating relatively smaller segments.
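The effect can be sketched with scikit-image’s felzenszwalb segmentation, whose scale argument behaves analogously (this is an illustrative stand-in, not eCognition’s scale parameter): larger values tolerate more within-object variability and produce fewer, larger objects.

```python
from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()  # stand-in image

# Higher `scale` values allow more variability within each object,
# so the image is split into fewer, larger segments.
for scale in (50, 200, 800):
    labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
    print(f"scale={scale}: {labels.max() + 1} objects")
```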
Example of image segmentation
The aerial photos (3 cm resolution) below were acquired in 2008 and show a shrubland in the southwestern U.S. Most of the dark green vegetation is a common shrub, creosotebush (Larrea tridentata). The pale brown color is soil with some sparse vegetation or litter. A large section of an arroyo shows as bright white; soil in the arroyo has little or no vegetation. On the right is the same image after a segmentation. While the color parameter was given more weight, the shape parameter was also useful because the shrubs are relatively compact. Note that most of the shrubs are individual objects (e.g., green outline). A large section of the arroyo is also a single object (red outline). The segmentation created meaningful objects that carry spectral and spatial information for image analysis.
 
Image courtesy of USDA/ARS Jornada Experimental Range

Image Object Hierarchy

In OBIA, all image objects are part of the image object hierarchy, which may consist of many levels arranged hierarchically. Each image object level is a virtual copy of the image, holding information about particular parts of it. All objects are therefore linked to neighboring objects on the same level, to superobjects on higher (coarser-scale) levels, and to subobjects on lower (finer-scale) levels. Note that while it is possible to have many object levels, it is not necessary; the more image object levels there are, the more complicated the classification becomes.
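One way to picture these links is as a small data structure. The sketch below is a hypothetical Python model of the hierarchy, not eCognition’s internal representation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ImageObject:
    """Hypothetical model of the linked structure described above: each
    object knows its neighbors on the same level, its superobject one
    level up, and its subobjects one level down."""
    object_id: int
    level: int
    neighbors: List["ImageObject"] = field(default_factory=list)
    superobject: Optional["ImageObject"] = None
    subobjects: List["ImageObject"] = field(default_factory=list)

# A coarse "vegetation patch" superobject containing two finer shrub objects:
patch = ImageObject(object_id=1, level=2)
shrub_a = ImageObject(object_id=10, level=1, superobject=patch)
shrub_b = ImageObject(object_id=11, level=1, superobject=patch)
patch.subobjects = [shrub_a, shrub_b]
shrub_a.neighbors.append(shrub_b)
shrub_b.neighbors.append(shrub_a)
```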

The figure below is taken from the Definiens Developer 7 User Guide, p. 26, and shows the links between objects on the same level and on different levels. Thick blue lines show the links between the example “image object” (orange box with the black border) and its neighbors on the same level, and its superobjects and subobjects on other levels.
Image from Definiens Developer 7, User Guide, p. 26
Image Classification

After an image has been segmented into appropriate image objects, the image is classified by assigning each object to a class based on features and criteria set by the user.

Features

The definition of a ‘feature’ varies widely. For our purposes, a feature in OBIA (which is different from a feature in GIS) is an algorithm that measures, in relative or absolute terms, various characteristics (shape, size, color, texture, context) of image objects. The efficacy of different features varies widely, again depending on the objectives; on the size, color, texture, and shape properties of the objects; and on their location within the object hierarchy.

Features usually define the upper and lower limits of a range of measured characteristics of image objects. Image objects whose values fall within the defined limits are assigned to a specific class; image objects outside the range are assigned to a different class (or left unclassified). Features can be applied to image objects, an entire scene, or a class.

The following is a list (not exhaustive) of examples of commonly used features:

Color: mean or standard deviation of each band, mean brightness, band ratios
Size: area, length to width ratio, relative border length
Shape: roundness, asymmetry, rectangular fit
Texture: smoothness, local homogeneity
Class level: relation to neighbors, relation to subobjects and superobjects
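Many of the features above have open-source counterparts. As a sketch, scikit-image’s regionprops can compute size, shape, and color statistics for each object produced by a segmentation (the image and settings here are stand-ins, not the features as eCognition defines them).

```python
from skimage import data, segmentation, measure
from skimage.color import rgb2gray

image = data.astronaut()                      # stand-in RGB image
objects = segmentation.slic(image, n_segments=200, start_label=1)

# regionprops computes per-object analogues of the features listed above:
# size (area), shape (eccentricity), and color (mean intensity).
props = measure.regionprops(objects, intensity_image=rgb2gray(image))
for p in props[:3]:
    print(f"object {p.label}: area={p.area}, "
          f"eccentricity={p.eccentricity:.2f}, "
          f"mean intensity={p.mean_intensity:.2f}")
```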
Classification Methods

Two of the many common classification methods are briefly described below. As with the segmentation process, there is no “best” method or combination of methods. The most appropriate method depends on the objectives, image characteristics, and a priori knowledge, as well as the experience and preference of the user.
Nearest neighbor (NN)

User chooses sample image objects for each class
Samples are usually based on a priori knowledge of the plant community, and should represent the range of characteristics within a single class
Software finds objects similar to the samples, then assigns those objects to the proper class
Classification improves through iterative steps
Appropriate for describing variation in fine-resolution images
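A minimal nearest-neighbor sketch using scikit-learn, with hypothetical per-object features (mean NIR and brightness) and two classes; the sample values are made up for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-object features (mean NIR, brightness) for user-chosen
# samples of two classes: 0 = shrub, 1 = bare ground.
sample_features = np.array([[0.62, 0.21], [0.58, 0.25],   # shrub samples
                            [0.20, 0.71], [0.18, 0.76]])  # bare ground
sample_labels = np.array([0, 0, 1, 1])

# The classifier assigns each remaining object to the class of its most
# similar sample, mirroring the nearest-neighbor workflow above.
nn = KNeighborsClassifier(n_neighbors=1).fit(sample_features, sample_labels)
unlabeled = np.array([[0.60, 0.23], [0.22, 0.69]])
print(nn.predict(unlabeled))  # -> [0 1]
```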
Membership function

User chooses features that have different value thresholds for different classes
The software separates image objects into classes using the feature threshold identified by the user (see example below)
Results are more objective than NN, and easy to edit
Useful if the classes are easily separated using one or a few features
Appropriate when there is little a priori knowledge about the particular vegetation community in the image
Examples of Membership Function Classification

The best way to understand a classification is to work through a simple example:

The disappearance of native grasslands in the American southwest is a focus of a great deal of research. These grasslands are often replaced by a patchy network of shrubs and bare ground. The magnitude of the increase (over time) in bare ground is one (of many possible) clues to the rate of declining grasslands. Image classification is one way of estimating these changes.

Beginning with the segmented aerial photo above, the brightness feature is used to classify the image into ‘parent’ classes, vegetation and bare ground, and their corresponding ‘child’ classes, which inherit the parent class description (see the class hierarchy, which is created by the user).

In a classification using thresholds, the approximate cutoff value of a chosen feature is determined for the class in question. In this example, using the brightness feature, the approximate cutoff between the two parent classes can be defined; note the dark vegetation and the much lighter bare ground. Image objects with brightness values below the threshold are assigned to the ‘vegetation’ class; objects with brightness values above (or equal to) the threshold are assigned to ‘bare ground’.

To separate shrubs from other types of vegetation (i.e., ‘not shrub’), the mean of the near-infrared (NIR) band is used. To separate bare soil from sparse cover, the ratio of the blue band is used. For each feature, a threshold (cutoff) value is found that separates the child classes. See the figure below.
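The logic of this membership-function classification can be sketched as follows; the feature values and cutoffs are illustrative assumptions, not values measured from the actual photo.

```python
# Sketch of the threshold classification described above, applied to
# hypothetical per-object feature values.
objects = [
    {"id": 1, "brightness": 0.30, "mean_nir": 0.65, "blue_ratio": 0.20},
    {"id": 2, "brightness": 0.35, "mean_nir": 0.40, "blue_ratio": 0.22},
    {"id": 3, "brightness": 0.80, "mean_nir": 0.10, "blue_ratio": 0.35},
    {"id": 4, "brightness": 0.75, "mean_nir": 0.15, "blue_ratio": 0.24},
]

BRIGHTNESS_CUTOFF = 0.55  # parent split: vegetation vs. bare ground
NIR_CUTOFF = 0.50         # child split: shrub vs. not shrub
BLUE_CUTOFF = 0.30        # child split: bare soil vs. sparse cover

for obj in objects:
    if obj["brightness"] < BRIGHTNESS_CUTOFF:          # dark -> vegetation
        child = "shrub" if obj["mean_nir"] >= NIR_CUTOFF else "not shrub"
    else:                                              # bright -> bare ground
        child = "bare soil" if obj["blue_ratio"] >= BLUE_CUTOFF else "sparse cover"
    print(obj["id"], child)
```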

Image courtesy of USDA/ARS Jornada Experimental Range

The figure on the left shows the image classified to the top two parent classes, vegetation (green) and bare ground (yellow). The figure on the right is the image classified into all four child classes. Note arroyo (highlighted in red) and shrub (in bright green) for reference.

Similar Methods

There are many other image classification methods, e.g., supervised, unsupervised, or subpixel classification. OBIA is usually considered a type of supervised classification because the user’s knowledge is part of the input for the resulting classification. Also see image analysis software and http://www.ioer.de/segmentation-evaluation/results.html.

Advantages of OBIA

Multiple scales

The spatial relationship information contained in image objects allows for more than one level of analysis. This is critical because image analysis at the landscape scale requires multiple, related levels of segmentation, or scale levels. In pixel-based image analysis, the pixel is assumed to cover an area that is meaningful at the landscape scale, although this is often not the case. The objects in OBIA provide complex information at various scales (through multiple segmentations with different parameter settings), making OBIA better suited to landscape-scale analyses.

Spatial relationships

Objects can be classified using their spatial relationships with adjacent or nearby objects. For example, some prickly pear species of cactus require a 'nurse plant', often a shrub, in order to germinate, grow, and survive, and thus are commonly found together. The presence of cactus objects could be used to help classify the nurse plant species by using “adjacent to” or “distance to” features.
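A sketch of such a “distance to” relationship, using hypothetical object centroids; eCognition exposes relationships like this as built-in features, while the computation below is a plain-Python stand-in.

```python
import numpy as np

# Hypothetical object centroids (x, y in meters): known cactus objects and
# shrub-like objects awaiting classification.
cacti = np.array([[5.0, 5.0], [40.0, 12.0]])
unknown_shrub_like = np.array([[6.0, 4.0], [80.0, 80.0]])

NURSE_DISTANCE = 3.0  # assumed maximum cactus-to-nurse-plant distance

for obj in unknown_shrub_like:
    # An object with a cactus nearby is more likely to be a nurse shrub.
    near_cactus = (np.linalg.norm(cacti - obj, axis=1) <= NURSE_DISTANCE).any()
    print(obj, "candidate nurse plant" if near_cactus else "no cactus nearby")
```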

Information filter

OBIA is able to filter out meaningless information and assimilate other pieces of information into a single object. This is analogous to how the human eye filters information that is then translated by the brain into an image that makes sense. For example, the pixels in an image are filtered and grouped to reveal a pattern, like that of an orchard or tree plantation.

Fuzzy logic

OBIA provides more meaningful information than pixel-based image analysis by allowing for less well-defined edges or borders between different classes. On maps, divisions between different types of vegetation, for example where a shrubland meets a grassland, are generally represented by a single line. In nature, no such abrupt change occurs. Instead the area where the shrubland meets the open grassland is a transition area, called an ecotone, containing characteristic species of each community, and sometimes species unique to the ecotone itself.

OBIA allows for this area of transition by using fuzzy logic. That is, objects that occur within the ecotone belong to, and are thus considered members of, both the shrubland and grassland classes. The membership value of an object in a class varies from 0.0 (no membership) to 1.0 (complete membership, and thus no ambiguity). An object in an ecotone might have 80% membership in the shrubland class and 20% membership in the grassland class. This is a more realistic approach than forcing objects to belong strictly to one class or the other.
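A minimal sketch of such a membership function, using hypothetical shrub-cover thresholds; an object well into the ecotone receives the 80/20 split described above.

```python
def shrubland_membership(shrub_cover, lower=0.1, upper=0.6):
    """Sketch of a fuzzy membership function: cover below `lower` gives
    membership 0.0, above `upper` gives 1.0, with a linear ramp across
    the ecotone in between (thresholds are illustrative)."""
    if shrub_cover <= lower:
        return 0.0
    if shrub_cover >= upper:
        return 1.0
    return (shrub_cover - lower) / (upper - lower)

cover = 0.5  # an object in the ecotone
m_shrub = shrubland_membership(cover)
# Simplifying assumption: grassland membership is the complement.
print(f"shrubland: {m_shrub:.2f}, grassland: {1 - m_shrub:.2f}")  # 0.80 / 0.20
```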

Output

The output of OBIA is usually a classified image, which often then becomes part of a map used, for example, to illustrate the different vegetation types in an area. The segmentation itself can also be an output, and is often imported into a GIS as a raster (e.g., an image file) or a polygon vector layer (e.g., a shapefile) to summarize and statistically analyze data. Another possible output of OBIA is an accuracy assessment, such as an error matrix indicating the classification quality and the amount of uncertainty associated with each class.
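As a sketch of such an error matrix, scikit-learn’s confusion_matrix can compare mapped classes against reference (e.g., field-checked) classes for a sample of objects; the labels below are fabricated for illustration only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical reference vs. mapped classes for a sample of objects;
# 0 = shrub, 1 = not shrub, 2 = bare soil, 3 = sparse cover.
reference = np.array([0, 0, 1, 2, 2, 3, 3, 0, 1, 2])
mapped    = np.array([0, 0, 1, 2, 3, 3, 3, 1, 1, 2])

matrix = confusion_matrix(reference, mapped)
print(matrix)
# Overall accuracy: correctly mapped objects / all objects.
print("overall accuracy:", np.trace(matrix) / matrix.sum())  # 0.8
```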

Source:
http://wiki.landscapetoolbox.org/doku.php/remote_sensing_methods:object-based_classification
