In OBIA, all image objects are part of the image object hierarchy, which may consist of many different levels, but always in a hierarchical manner. Each image object level is a virtual copy of the image, holding information about particular parts of the image. All objects are therefore linked to neighboring objects on the same level, to superobjects on higher (coarser scale) levels, and to subobjects on lower (finer scale) levels. Note that while it is possible to have many object levels, it is not necessary, and the higher the number of image object levels, the more complicated the classification.
The figure below is taken from the Definiens Developer 7, User Guide, p. 26, showing the links between objects on the same and on different levels. Thick blue lines show links between the example “image object” (orange box with the black border) on the same level (neighbors), and at multiple levels (super or subobjects).
Image from Definiens Developer 7, User Guide, p. 26
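The linked structure described above can be sketched as a small data structure. This is a minimal illustration, not how any particular OBIA package stores objects; the object names and level numbers are hypothetical.

```python
# Minimal sketch of the image object hierarchy: each object keeps links to
# its superobject (coarser level), its subobjects (finer level), and its
# neighbors on the same level. Names and levels here are hypothetical.

class ImageObject:
    def __init__(self, name, level):
        self.name = name
        self.level = level            # e.g., 1 = finest segmentation level
        self.superobject = None
        self.subobjects = []
        self.neighbors = []

    def add_subobject(self, sub):
        sub.superobject = self
        self.subobjects.append(sub)

# A level-2 (coarser) object with two level-1 (finer) subobjects
patch = ImageObject("shrub_patch", level=2)
a, b = ImageObject("shrub_a", 1), ImageObject("shrub_b", 1)
patch.add_subobject(a)
patch.add_subobject(b)
a.neighbors.append(b)   # same-level neighbor link
b.neighbors.append(a)

print(a.superobject.name)                  # shrub_patch
print([s.name for s in patch.subobjects])  # ['shrub_a', 'shrub_b']
```

Classification rules can then reach "upward" to a superobject or "sideways" to neighbors through these links, which is what makes the multi-level analysis described later possible.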
Classification
After an image has been segmented into appropriate image objects, the image is classified by assigning each object to a class based on features and criteria set by the user.
Features
The definition of a ‘feature’ varies widely. For these purposes, a feature in OBIA (which is different from a feature in GIS) is an algorithm that measures, in relative or absolute terms, various characteristics (shape, size, color, texture, context) of image objects. The efficacy of different features varies widely, again depending on objectives; on object size, color, texture, and shape properties; and on location within the object hierarchy.
Features usually define the upper and lower limits of a range of measured characteristics of image objects. Image objects within the defined limits are assigned to a specific class; image objects outside of the feature range are assigned to a different class (or left unclassified). Features can be applied to image objects, an entire scene, or a class.
The following is a list (not exhaustive) of examples of commonly used features:
Color: mean or standard deviation of each band, mean brightness, band ratios
Size: area, length to width ratio, relative border length
Shape: roundness, asymmetry, rectangular fit
Texture: smoothness, local homogeneity
Class level: relation to neighbors, relation to subobjects and superobjects
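A few of the color and size features above can be computed directly from an object's pixel values. The sketch below uses hypothetical band values for a single object; real OBIA software (e.g., Definiens/eCognition) computes these features internally, and its exact formulas may differ.

```python
# Sketch of computing a few common object features from an object's pixels.
# The pixel values and bounding box below are hypothetical.

def mean_brightness(bands):
    """Mean of the per-band means (a simple 'brightness' feature)."""
    band_means = [sum(b) / len(b) for b in bands]
    return sum(band_means) / len(band_means)

def band_ratio(band, all_bands):
    """Ratio of one band's mean to the sum of all band means."""
    means = [sum(b) / len(b) for b in all_bands]
    return (sum(band) / len(band)) / sum(means)

def length_to_width(bbox_height, bbox_width):
    """Crude size feature from a bounding box (real software typically
    uses the object's main axes rather than the bounding box)."""
    return max(bbox_height, bbox_width) / min(bbox_height, bbox_width)

# Example: red, green, and blue bands of a hypothetical 4-pixel object
red, green, blue = [90, 100, 110, 100], [60, 70, 80, 70], [30, 40, 50, 40]
print(mean_brightness([red, green, blue]))          # 70.0
print(round(band_ratio(blue, [red, green, blue]), 3))
print(length_to_width(6, 3))                         # 2.0
```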
Classification Methods
Two common classification methods (there are many) are briefly described below. As with the segmentation process, there is no “best” method or combination of methods. The most appropriate method depends on objectives, image characteristics, and a priori knowledge, as well as the experience and preferences of the user.
Nearest neighbor (NN)
User chooses sample image objects for each class
Samples are usually based on a priori knowledge of the plant community, and should represent the range of characteristics within a single class
Software finds objects similar to the samples, then assigns those objects to the proper class
Classification improves through iterative steps
Appropriate for describing variation in fine resolution images
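The nearest-neighbor idea above can be sketched in a few lines: each class is represented by sample objects in a feature space, and an unclassified object receives the class of its nearest sample. The class names, features, and sample values below are hypothetical, and real software uses more elaborate distance measures and iteration.

```python
# Minimal nearest-neighbor (NN) classification sketch. Each class has a few
# user-chosen sample objects, described here by two hypothetical features:
# (mean NIR ratio, mean brightness).
import math

samples = {
    "shrub": [(0.62, 45.0), (0.58, 50.0)],
    "grass": [(0.41, 80.0), (0.44, 75.0)],
    "bare":  [(0.20, 130.0), (0.25, 120.0)],
}

def classify_nn(obj_features):
    """Assign the class of the closest sample in feature space."""
    best_class, best_dist = None, float("inf")
    for cls, sample_list in samples.items():
        for s in sample_list:
            d = math.dist(obj_features, s)  # Euclidean distance
            if d < best_dist:
                best_class, best_dist = cls, d
    return best_class

print(classify_nn((0.60, 47.0)))   # shrub
print(classify_nn((0.22, 125.0)))  # bare
```

In practice the user would inspect the result, add or adjust samples, and re-run, which is the iterative improvement the list above refers to.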
Membership function
User chooses features that have different value thresholds for different classes
The software separates image objects into classes using the feature thresholds identified by the user (see example below)
Results are more objective than NN, and easy to edit
Useful if the classes are easily separated using one or a few features
Appropriate when there is little a priori knowledge about the particular vegetation community in the image
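The membership-function approach above amounts to a range check: the user supplies lower and upper feature limits per class, and each object is assigned to the class whose range contains its feature value. The threshold values below are hypothetical placeholders.

```python
# Sketch of a crisp membership-function rule: objects whose feature value
# falls inside a user-defined range are assigned to that class. The
# brightness cutoffs here are hypothetical.

def classify_by_range(feature_value, class_ranges, default="unclassified"):
    """class_ranges maps class name -> (lower, upper) feature limits."""
    for cls, (lo, hi) in class_ranges.items():
        if lo <= feature_value < hi:
            return cls
    return default

brightness_ranges = {
    "vegetation": (0, 100),     # darker objects
    "bare ground": (100, 256),  # brighter objects
}

print(classify_by_range(72, brightness_ranges))   # vegetation
print(classify_by_range(140, brightness_ranges))  # bare ground
```

Editing the classification is then just editing the range limits, which is why the results are easy to adjust and relatively objective.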
Examples of Membership Function Classification
The best way to understand a classification is to work through a simple example:
The disappearance of native grasslands in the American southwest is a focus of a great deal of research. These grasslands are often replaced by a patchy network of shrubs and bare ground. The magnitude of the increase (over time) in bare ground is one (of many possible) clues to the rate of grassland decline. Image classification is one way of estimating these changes.
Beginning with the segmented aerial photo above, the brightness feature is used to classify the image into ‘parent’ classes, vegetation and bare ground, and their corresponding ‘child’ classes, which inherit the parent class description (see the class hierarchy, which is created by the user).
In a classification using thresholds, the approximate cutoff value for a chosen feature is determined for the class in question. In this example, using the brightness feature, the approximate cutoff between the two parent classes can be defined; note the dark vegetation and much lighter bare ground. Image objects with brightness values below the threshold are assigned to the ‘vegetation’ class. Objects with brightness values above (or equal to) the threshold are assigned to ‘bare ground’.
To separate shrubs from other types of vegetation (i.e., ‘not shrub’), the feature ‘mean of the near infrared (NIR) band’ is used. To separate bare soil from sparse cover, the feature ‘ratio of the blue band’ is used. For each feature, a threshold value, or cutoff value, is found that separates the child classes. See figure below.
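The worked example above can be condensed into a two-level rule: brightness separates the parent classes, then one extra feature separates each pair of child classes. All cutoff values in this sketch are hypothetical placeholders, not the values used in the original study.

```python
# Sketch of the two-level (parent/child) threshold classification described
# above. Brightness splits vegetation from bare ground; the NIR mean then
# splits shrub from not-shrub, and the blue-band ratio splits bare soil
# from sparse cover. All cutoffs are hypothetical.

def classify_object(brightness, nir_mean, blue_ratio,
                    brightness_cut=100, nir_cut=120, blue_cut=0.30):
    if brightness < brightness_cut:   # parent class: vegetation
        return "shrub" if nir_mean >= nir_cut else "not shrub"
    else:                             # parent class: bare ground
        return "bare soil" if blue_ratio < blue_cut else "sparse cover"

print(classify_object(brightness=80, nir_mean=150, blue_ratio=0.2))   # shrub
print(classify_object(brightness=150, nir_mean=90, blue_ratio=0.4))   # sparse cover
```

Because the child rules only run inside their parent's branch, each child class automatically inherits the parent class description, as the text notes.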
Image courtesy of USDA/ARS Jornada Experimental Range
The figure on the left shows the image classified into the top two parent classes, vegetation (green) and bare ground (yellow). The figure on the right is the image classified into all four child classes. Note arroyo (highlighted in red) and shrub (in bright green) for reference.
Similar Methods
There are many different image classification methods, e.g., supervised, unsupervised, or subpixel classification. OBIA is (usually) considered a type of supervised classification because knowledge of the user is part of the input for the resulting classification. Also see image analysis software and http://www.ioer.de/segmentation-evaluation/results.html.
Advantages of OBIA
Multiple scales
The spatial relationship information contained in image objects allows for more than one level of analysis. This is critical because image analysis at the landscape scale requires multiple, related levels of segmentation, or scale levels. In pixel-based image analysis, the pixel is assumed to cover an area meaningful at the landscape scale, although this is often not the case. The objects in OBIA provide complex information at various scales (through multiple segmentations with different parameter settings), and thus OBIA is better suited to landscape scale analyses.
Spatial relationships
Objects can be classified using their spatial relationships with adjacent or nearby objects. For example, some prickly pear species of cactus require a ‘nurse plant’, often a shrub, in order to germinate, grow, and survive, and thus the two are commonly found together. The presence of cactus objects could be used to help classify the nurse plant species by using “adjacent to” or “distance to” features.
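A "distance to" rule like the nurse-plant example can be sketched with object centroids: shrub objects within a set distance of an already-classified cactus object are flagged as candidate nurse plants. The coordinates, object names, and distance threshold below are all hypothetical.

```python
# Sketch of a "distance to" classification rule: relabel shrub objects that
# lie within a set distance of a cactus object as candidate nurse plants.
# Centroids and the threshold are hypothetical.
import math

cactus_centroids = [(10.0, 12.0), (40.0, 8.0)]
shrub_objects = {"s1": (11.0, 13.0), "s2": (25.0, 30.0), "s3": (41.0, 9.0)}

def nurse_candidates(shrubs, cacti, max_dist=3.0):
    out = []
    for name, centroid in shrubs.items():
        if any(math.dist(centroid, c) <= max_dist for c in cacti):
            out.append(name)
    return out

print(nurse_candidates(shrub_objects, cactus_centroids))  # ['s1', 's3']
```

Commercial OBIA software expresses the same idea with built-in class-related features rather than explicit centroid arithmetic.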
Information filter
OBIA is able to filter out meaningless information and assimilate other pieces of information into a single object. This is analogous to how the human eye filters information that is then translated by the brain into an image that makes sense. For example, the pixels in an image are filtered and grouped to reveal a pattern, like that of an orchard or tree plantation.
Fuzzy logic
OBIA provides more meaningful information than pixel-based image analysis by allowing for less well-defined edges or borders between different classes. On maps, divisions between different types of vegetation, for example where a shrubland meets a grassland, are generally represented by a single line. In nature, no such abrupt change occurs. Instead, the area where the shrubland meets the open grassland is a transition area, called an ecotone, containing characteristic species of each community, and sometimes species unique to the ecotone itself.
OBIA allows for this area of transition by using fuzzy logic. That is, the objects that occur within the ecotone belong to, and are thus considered members of, both the shrubland and grassland classes. The membership value of an object in a class varies from 0.0 (no membership) to 1.0 (complete membership, and thus no ambiguity). An object in an ecotone might have 80% membership in the shrubland class and 20% membership in the grassland class. This is a more realistic approach than one in which objects belong strictly to one class or the other, but not both.
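The 80%/20% example above can be produced by a simple linear membership function over a transition range. The feature (shrub cover) and the 40-80 transition range below are hypothetical; fuzzy classification software offers several such curve shapes.

```python
# Sketch of fuzzy class membership across an ecotone: a linear ramp gives
# each object a degree of membership in the shrubland class, and the
# grassland membership is taken as the complement. The 40-80 transition
# range and the shrub-cover feature are hypothetical.

def shrubland_membership(shrub_cover_pct, lo=40.0, hi=80.0):
    """0.0 below lo, 1.0 above hi, linear ramp in between."""
    if shrub_cover_pct <= lo:
        return 0.0
    if shrub_cover_pct >= hi:
        return 1.0
    return (shrub_cover_pct - lo) / (hi - lo)

cover = 72.0  # an object inside the ecotone
m_shrub = shrubland_membership(cover)
m_grass = 1.0 - m_shrub  # complementary membership in grassland
print(m_shrub, round(m_grass, 2))  # 0.8 0.2
```

A crisp threshold is the special case where the transition range shrinks to a single cutoff value, so fuzzy classification generalizes the membership-function approach described earlier.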
Output
Output of OBIA is usually a classified image, which often then becomes part of a map used, for example, to illustrate different vegetation types in an area. The segmentation itself can also be an output, and is often imported into a GIS as a raster (e.g., an image file) or a polygon vector layer (e.g., a shapefile) to summarize and statistically analyze data. Another possible output of OBIA is an accuracy assessment, such as an error matrix indicating the classification quality and the amount of uncertainty associated with each class.
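An error matrix is built by cross-tabulating mapped classes against reference (ground-truth) classes for a validation sample. The labels below are a hypothetical sample; real assessments also report per-class producer's and user's accuracies.

```python
# Sketch of an error (confusion) matrix for an accuracy assessment:
# rows are mapped classes, columns are reference classes. The validation
# labels below are hypothetical.
from collections import Counter

mapped    = ["shrub", "shrub", "grass", "bare", "grass", "bare", "shrub"]
reference = ["shrub", "grass", "grass", "bare", "grass", "grass", "shrub"]

matrix = Counter(zip(mapped, reference))
classes = sorted(set(mapped) | set(reference))

# Print the cross-tabulation (marginal totals omitted for brevity)
print("mapped\\ref " + " ".join(f"{c:>6}" for c in classes))
for m in classes:
    print(f"{m:>10} " + " ".join(f"{matrix[(m, r)]:>6}" for r in classes))

# Diagonal cells are agreements; their share is the overall accuracy
correct = sum(matrix[(c, c)] for c in classes)
overall_accuracy = correct / len(mapped)
print("overall accuracy:", round(overall_accuracy, 3))  # 0.714
```

Off-diagonal cells show which classes are confused with which, which is the per-class uncertainty the text refers to.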
Source:
http://wiki.landscapetoolbox.org/doku.php/remote_sensing_methods:object-based_classification