These slides have been prepared as a general guide for preparing training data for USGS canopy and impervious predictions. These images have all been classified using different methods, but regardless of the method the end product is what is important. This is not meant to cover all circumstances, but just to give a general “feel” for what we expect from a classification. If there are any questions about this presentation, feel free to contact Jon Dewitz at [email protected].

Below is an example of leaf-off training data that we feel is a good representation of what completed impervious training data should look like.

Another example of what we feel is good training data. Paved roads are full and complete, as are driveways, houses, and intersections.

Although most of the impervious training data we use and receive is leaf-off, below is an acceptable example of a leaf-on classification. All roads are complete, along with driveways and intersections.

This is another example of an acceptable canopy-covered leaf-on image. Although canopy covers the majority of the road, the width and general characteristics of the road are still represented.

Another example of acceptable leaf-on training data.

In the unacceptable image below, the tree canopy in combination with the color scheme seems to obscure most of the impervious features in the urban areas. Streets are not connected, and intersections, driveways, and houses are missed.

This is the same image and location at closer magnification.

Some of the same occlusion errors occur in this image.

This is an unacceptable leaf-off image. Much more impervious surface needs to be captured: streets need to be complete and full, as does all viewable roof area, along with driveways, sidewalks, etc.

This is the threshold we try to maintain for small gravel roads. The road running to the house is maintained and had material brought in for its construction. The road circled in yellow is a farm road or trail that exists only because it has been worn into the surrounding land.

Another example of the threshold for distinguishing gravel or dirt roads. The road at the bottom far right was also part of the TIGER roads file, while the scattered roads throughout the image are not maintained roads.

Another example for gravel roads: the road leading to the residence is likely a maintained and constructed road. The roads leading away to the fields are not.

Below is a good example of a canopy classification. Spaces between trees are not classified where they exist, and where canopy is full the classification is nearly continuous.

This is an example of an unacceptable classification. The area circled in red is a different species that was not captured well. There are inclusion errors, shown in black, that are actually open areas, and canopy in the entire upper right corner has been greatly underestimated.

This shows some omission errors where more forest should have been classified.

This is an example of a good classification in an urban area. All trees are captured, and shadows are excluded.

This is an example of a poor classification in an urban area. Trees are not captured well, and much grass and many other areas are misclassified as tree canopy.

Another example of a good canopy classification. All trees are captured well with minimal error.

Another poor canopy classification. Trees are not fully captured.

One last thing that can play a large role in the quality of your dataset is mosaicking your training data. When your finished 30 m percent calculation is reprojected to Albers, there will be blank space surrounding the training data. This needs to be recoded to a background value (255, for example) so that it is not used as zero-percent training data. This edited version is then mosaicked with the other training data chips into a mosaic covering the full extent of the zone (corner to corner of the imagery), again with 255 as the background. The value 255 is then ignored in the sampling.
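
As a concrete illustration of that step, the sketch below shows one way it could be scripted in Python with rasterio and numpy. The slides do not prescribe any particular software, so the tools, file names, and function names here are assumptions, not the USGS workflow: the NoData fringe left by reprojection is recoded to 255, and the chips are then mosaicked over the zone with 255 as the background value.

```python
# Illustrative sketch only -- tools and file names are hypothetical.
import numpy as np
import rasterio
from rasterio.merge import merge

BACKGROUND = 255  # background value to be ignored during sampling


def recode_background(src_path, dst_path):
    """Recode the blank (NoData) fringe left by reprojection to 255
    so it is not treated as zero-percent training data."""
    with rasterio.open(src_path) as src:
        data = src.read(1)
        invalid = src.read_masks(1) == 0   # pixels outside the valid chip
        data[invalid] = BACKGROUND
        profile = src.profile
        profile.update(nodata=BACKGROUND)
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(data, 1)


# Recode each reprojected 30 m percent chip, then mosaic the chips into
# one raster covering the zone, with 255 filling the gaps between them.
chips = ["chip_01_albers.tif", "chip_02_albers.tif"]   # hypothetical names
recoded = []
for i, chip in enumerate(chips):
    out_path = f"recoded_{i:02d}.tif"
    recode_background(chip, out_path)
    recoded.append(rasterio.open(out_path))

mosaic, transform = merge(recoded, nodata=BACKGROUND)

profile = recoded[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               transform=transform, nodata=BACKGROUND)
with rasterio.open("zone_training_mosaic.tif", "w", **profile) as dst:
    dst.write(mosaic)

for ds in recoded:
    ds.close()
```

Because the recoded chips carry 255 as their NoData value, the merge step treats those pixels as empty and lets valid data from neighboring chips show through, which matches the intent of recoding the background before mosaicking.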