GuidedTour1 Ecognition8 GettingStarted Example SimpleBuildingExtraction
Legal Notices Copyright and Trademarks Copyright 2010, Trimble Navigation Limited. All rights reserved. Trimble, the Globe & Triangle logo and eCognition are trademarks of Trimble Navigation Limited, registered in the United States and in other countries. All other product names, company names, and brand names mentioned in this document may be trademark properties of their respective holders. Protected by patents US REG NO. 3,287,767; WO9741529; WO9802845; EP0858051; WO0145033; WO0205198; WO2004036337 ; EP1866849; US 6,229,920; US 7,117,131; US 6,832,002; US 7,437,004; US 7,574,053 B2; US 7,146,380; US 7,467,159 B2; US 20070122017; US 20080008349; US 12/386,380. Further patents pending. The contents of this manual are subject to change without notice. Although every effort has been made to ensure the accuracy of this manual, we can not guarantee the accuracy of its content. If you discover items which are incorrect or unclear, please contact us using this form: http://www.ecognition.com/content/training-inquiries Find more information about the product or other tutorials at http://www.ecognition.com/ http://www.ecognition.com/community Release Notice Release date: August 2010
Table of Contents
Getting started Example: Simple building extraction
Table of Contents
Introduction
Lesson 1 Introduction to Rule Set development
  1.1 Get the big picture
  1.2 Which data to use?
  1.3 Develop strategy
  1.4 Translate the strategy into processes
  1.5 Review your intermediate results
Lesson 2 Get the big picture of the building extraction analysis task
  2.1 How are buildings represented in spectral image data?
  2.2 Do buildings have significant shape characteristics?
  2.3 Can buildings be separated using context information?
  2.4 Is elevation information a stable characteristic for buildings?
  2.5 Conclusion: Which data should be used? Which general ideas?
  2.6 Overview Guided Tour 1: Simple example of building extraction
Lesson 3 Importing data
  3.1 Introduction
    3.1.1 Open eCognition Developer in Rule Set Mode
  3.2 Option 1: Use workspace and import template
    3.2.1 Create a new workspace
    3.2.2 Import data into the workspace with an import template
    3.2.3 How to open a created project
    3.2.4 How to save a project
  3.3 Option 2 (Trial version): Creating an individual project
    3.3.1 Define the data to be loaded
    3.3.2 Overview: Create Project dialog box
    3.3.3 Define layer alias
    3.3.4 Assign No Data values
    3.3.5 Confirm the settings to create the project
Lesson 4 Developing the core strategy by evaluating the data content
  4.1 The image visualization tools
    4.1.1 Zooming into the scene
    4.1.2 Display the DSM
    4.1.3 Open a second viewer window to compare data
  4.2 Evaluate the data content
    4.2.1 Evaluate the elevation data
    4.2.2 Evaluate the RGB data
Lesson 5 Creating image objects
  5.1 Strategy for creating suitable image objects
  5.2 Translate strategy into Rule Set: use algorithm multiresolution segmentation
    5.2.1 Why the multiresolution segmentation algorithm?
    5.2.2 Insert rule for object creation
  5.3 Review the created image objects
Lesson 6 Initial classification: classifying all elevated objects
  6.1 Strategy to classify buildings based on elevation information
  6.2 Translate the strategy into Rule Set: Object mean of DSM, algorithm assign class
    6.2.1 Find the feature and the threshold to represent all elevated objects (mean value of DSM)
    6.2.2 Write rule for classifying all elevated objects
  6.3 Review the classification result: buildings and trees are classified
Lesson 7 Refinement based on DSM: un-classifying trees
  7.1 Strategy to separate buildings from trees: use standard deviation of DSM
  7.2 Translate the strategy into Rule Set: Standard deviation of DSM, algorithm assign class
    7.2.1 Find the feature and the threshold to separate buildings from trees (Std. dev. of DSM)
    7.2.2 Write rule to separate buildings from trees
  7.3 Review the classification result: trees are de-classified; some vegetation and shadow objects are still misclassified
Lesson 8 Refinement based on spectral information
  8.1 Strategy to refine buildings based on spectral information: the spectral layer ratio
  8.2 Translate the strategy into Rule Set: Ratio of green, algorithm assign class
    8.2.1 Find the feature and the threshold for spectral refinement (ratio green)
    8.2.2 Translate strategy into a rule for refinement based on a spectral feature
  8.3 Review the classification result: the vegetated areas are de-classified; small objects still misclassified
Lesson 9 Refinement based on context
  9.1 Strategy to refine buildings based on context information: surrounding neighbor objects
  9.2 Translate strategy into a rule for refinement based on a context feature
    9.2.1 The class-related feature Relative border to
    9.2.2 Prepare the Rule Set structure
    9.2.3 Insert process to classify
  9.3 Review the classification result: only small objects remain misclassified
Lesson 10 Refinement based on shape
  10.1 Strategy to refine buildings based on shape information: generalize objects, separate them by size
  10.2 Translate strategy into a rule for refinement based on a shape feature
    10.2.1 Merge the image objects
    10.2.2 Find the feature and the threshold for refinement based on shape
    10.2.3 Translate strategy into a rule for refinement based on the Area feature
  10.3 Review the classification result
Lesson 11 Exporting the result in a vector layer
  11.1 Insert process to merge unclassified objects
    11.1.1 Prepare the Rule Set structure
    11.1.2 Insert process to merge all unclassified objects
  11.2 Insert process to export shape file with attributes
    11.2.1 Create the feature Class name
    11.2.2 Insert process to export vector file
  11.3 Review the exported result
Where to get additional help and information?
  eCognition Community and Rule Set Exchange platform
  User Guide and Reference Book
  Additional Guided Tours and Tutorials
  eCognition Training
  Consulting
  Buy Software and Services?
Introduction
About this Guided Tour
Welcome to the Guided Tour Getting started Example: Simple building extraction. This tour is written for novice users of the eCognition software. This Guided Tour will focus on the basic steps involved in developing a Rule Set using eCognition Developer. Further information about eCognition products is available on the website http://ecognition.com.
Requirements
To perform this tutorial you will need eCognition Developer installed on a computer.
All steps of this Guided Tour, except the batch processing, can be done using eCognition Developer or its trial version (http://www.ecognition.com/products/trialsoftware). This edition is designed for self-study.
Action!
Settings Check
If this symbol is shown, compare the settings shown in the screenshot with the settings in the corresponding dialog box in Developer.
If this symbol is shown, compare the screenshot of the Process Tree with the content of the Process Tree in Developer.
Result Check
If this symbol is shown, compare the screenshot shown with the result in Developer. It should look similar.
Import data This diagram expresses that the next step to do is to import your data:
Develop strategy This diagram expresses that the next step to do is to develop the next step of your Rule Set strategy:
Translate strategy into Rule Set This diagram expresses that the next step to do is to translate your strategy into a Rule Set:
Review result This diagram expresses that the next step to do is to review your results:
Refine/expand strategy This diagram expresses that the next step to do is to go back and develop your next strategy step, because the result did not meet your requirements:
Ready for export This diagram stands for the stage where you are satisfied with the result and you can export:
Export result
1.1 Get the big picture
The most important tool for creating a Rule Set is your expert knowledge, for example as a remote sensing professional or a geographer, and the ability to translate your recognition process into the eCognition language: the Cognition Technology Language. Behind every image analysis there is a methodology.
1. Get the big picture of the general analysis task
2. Choose the data
3. Develop a strategy
4. Translate the strategy into a Rule Set
5. Review the results
6. Refine the strategy and Rule Set if necessary
7. Export the results
The diagram and the symbols above will guide you through the whole tour. The individual symbols will be visible in the header of the page, with the current stage highlighted. Whenever there is a change, e.g. from strategy development to Rule Set writing, the diagram above will show you the next phase.
1.2 Which data to use?
The first step in Rule Set development is choosing the data to be used for the analysis. In this Guided Tour, buildings are extracted. Data requirements: the data should have a fine resolution to capture correct outlines. Due to the large variability of building types and roof tops, a classification based only on spectral information is quite tricky, but elevation information is a very consistent source of information for classifying buildings.
eCognition allows the integration of data from different sensors and with different resolutions. In this example, high-resolution spectral images and a lower-resolution DSM (Digital Surface Model) will be loaded together without problems. The software works with objects, so the resolution has a direct impact on object creation. The most stable information about buildings is their different elevation compared to their surroundings. LiDAR data contains accurate elevation information at high resolution. Spectral information will help to separate buildings from other elevated objects like trees. Conclusion: to extract buildings, a combination of spectral (here RGB) and LiDAR data is optimal.
Efficiency in development
It is important to choose a representative subset of your data to develop an image analysis routine. The first step is always to evaluate your complete data set and then choose an area to start with. Working with a representative subset saves development time, because processing times are lower than when testing every step on the whole data set. Nonetheless, it is recommended to test the Rule Set on a regular basis, and not too late (!), on the complete set, to avoid surprises at the end of the scheduled development time.
1.3 Develop strategy
To find the best suited features and algorithms, always ask yourself: why do I recognize something as a building, a tree, a lake? And ask: how do I have to modify the objects so that they fulfill the criteria?
General rules
- Developing a Rule Set is an iterative process.
- Always start with the class with the most significant features.
- Only process in the domain you are interested in.
eCognition works with objects: the image pixels are grouped together and, as a result, much more information is available, such as the spectral signature of the whole object, its shape and size, and context information. All these attributes can be used and combined for classification.
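The object idea can be sketched outside eCognition: once pixels are grouped into labeled objects, per-object features such as a mean layer value follow directly. A minimal NumPy illustration (an assumed analogy for explanation, not eCognition's actual implementation):

```python
import numpy as np

def object_means(labels: np.ndarray, layer: np.ndarray) -> np.ndarray:
    """Mean layer value per image object, indexed by the object label (0..max)."""
    counts = np.bincount(labels.ravel())
    sums = np.bincount(labels.ravel(), weights=layer.ravel())
    return sums / np.maximum(counts, 1)  # avoid division by zero for unused labels

# Toy 4x4 scene: two objects (labels 0 and 1) over a single image layer.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
layer = np.array([[10, 10, 90, 90],
                  [10, 10, 90, 90],
                  [10, 10, 90, 90],
                  [10, 10, 90, 90]], dtype=float)

means = object_means(labels, layer)
print(means)  # object 0 -> 10.0, object 1 -> 90.0
```

In the same way, shape (e.g. pixel count) and context (neighbor labels) become per-object attributes once a label image exists.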
Figure 9: Different visualization possibilities of the loaded data and the classification results.
The Feature View helps to evaluate the feature values for the entire scene
Figure 10: Viewer displaying feature values in grey range or in a color range.
The Image Object Information window helps to evaluate values for individual objects or evaluate classifications.
1.4 Translate the strategy into processes
To be able to write a Rule Set you have to translate your strategy into processes. This is done in the Process Tree window. There you can add, edit and sort the processes. In the Process Tree window you can define:
- how it shall be processed (the algorithm)
- what and where shall be processed (the domain)
- under which conditions (the domain condition)
Figure 12: Diagram about the Process Tree and the individual processes.
A single process represents an individual operation of an image analysis routine for an image or subset; thus, it is the main working tool for developing Rule Sets. A single process is the elementary unit of a Rule Set, providing a solution to a specific image analysis problem. Every single process has to be edited to define an algorithm (1) to be executed on an image object domain (2). Combine single processes in a sequence to build a Rule Set. You can organize a process sequence with parent and child processes which are executed in a defined order. You can also load an existing Rule Set, save your Rule Set, and execute individual processes or the complete Rule Set. Developing a Rule Set does not require you to write any code; rather, you select from a set of predefined algorithms within the graphical user interface.
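The process concept (an algorithm, applied to a domain, under a condition, organized as parent and child processes) can be modeled as a small tree. The sketch below is a hypothetical Python analogy of that structure; all names in it are invented for illustration and do not exist in the eCognition API:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Process:
    """One node of a rule set: an algorithm applied to a domain under a condition."""
    name: str
    algorithm: Optional[Callable[[dict], None]] = None   # how to process
    domain: str = "pixel level"                          # what/where to process
    condition: Optional[Callable[[dict], bool]] = None   # under which conditions
    children: List["Process"] = field(default_factory=list)

    def execute(self, scene: dict) -> None:
        # Run this process (if it carries an algorithm and its condition holds),
        # then run the child processes in their defined order.
        if self.algorithm and (self.condition is None or self.condition(scene)):
            self.algorithm(scene)
        for child in self.children:
            child.execute(scene)

# Hypothetical rule set: a wrapper parent with one child that tags the scene.
root = Process("Building Detection")
root.children.append(
    Process("segment", algorithm=lambda s: s.update(segmented=True))
)
scene = {}
root.execute(scene)
print(scene)  # {'segmented': True}
```

The parent here is a pure wrapper with no algorithm of its own, mirroring how wrapper processes in the Process Tree only group and order their children.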
1.5 Review your intermediate results
As already mentioned, developing a Rule Set is an iterative process. You start with a base strategy, implement the actual rules in the software, check the result, then step back and refine or expand the strategy, modify the Rule Set and check the result again, until you are finally satisfied with the outcome.
Figure 13: Reviewing your intermediate results is a crucial step to develop the next steps of your analysis
Figure 14: ...or to decide that the accuracy of the classification is ready for export.
Lesson 2 Get the big picture of the building extraction analysis task

To get the big picture, you have to think about which general and consistent characteristics are contained in the spectral data and the object shape, and whether there are context-based characteristics.
2.1 How are buildings represented in spectral image data?
Roofs seen from above have a wide spectral variety, from colored roof tiles to metal roofs. Spectral information about buildings is inconsistent information.
2.2 Do buildings have significant shape characteristics?
Buildings also have a wide variety in shape and in size: they can be rectangular to circular, from small family houses to bigger industrial buildings. At least buildings have, compared to other classes, a quite large size. Partially inconsistent information.
2.3 Can buildings be separated using context information?
Buildings have shadows, as they are elevated. Roofs are generally not covered by water or vegetation. Consistent context information.
2.4 Is elevation information a stable characteristic for buildings?
Buildings have a higher elevation than their surroundings. At the edges of a building there is a very steep slope; the elevation changes suddenly. Consistent information.
2.5 Conclusion: Which data should be used? Which general ideas?
The most consistent and relevant characteristic of buildings is their elevation. Therefore, elevation data (converted from LiDAR data) is chosen for the analysis task Building extraction. The elevation information is used for segmentation and classification. Elevation alone is not sufficient to classify buildings correctly; additional spectral information is needed, e.g. to separate buildings from trees. The elevation data used has a resolution of 2.5 feet; the aerial photo has 0.5 feet. The spectral image layer will help to get more detailed outlines due to its higher resolution.
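How the two resolutions fit together can be illustrated with a toy example: each 2.5 ft DSM cell covers a 5 × 5 block of 0.5 ft RGB cells. A minimal NumPy sketch of nearest-neighbor upsampling (purely illustrative; eCognition handles this resampling internally when layers of different resolutions are loaded together):

```python
import numpy as np

# Toy 2x2 DSM at 2.5 ft resolution; upsample by factor 5 to the 0.5 ft RGB grid.
dsm_coarse = np.array([[760.0, 770.0],
                       [765.0, 780.0]])
factor = 5  # 2.5 ft / 0.5 ft

# Repeat each cell 5 times along rows and columns -> one DSM value per RGB pixel.
dsm_fine = np.repeat(np.repeat(dsm_coarse, factor, axis=0), factor, axis=1)
print(dsm_fine.shape)  # (10, 10)
```

After this alignment, every fine-grid pixel carries both spectral and elevation values, which is the precondition for combining both in one segmentation.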
2.6 Overview Guided Tour 1: Simple example of building extraction
In the first Guided Tour a rather flat area will be classified. This has the advantage that the initial classification of elevated objects can take place using fixed height values.
As a result trees are classified too. To get rid of these trees, their different appearance in the DSM is used to separate them from buildings. Additionally the spectral features for vegetation and shadow will be used to clean up the classification.
Figure 18: Classification result after the second classification step.

After that, missing parts of the buildings will be classified using context and size information.
As a result, all buildings in the subset are classified with a good accuracy with respect to the building outlines.
Lesson 3 Importing data

NOTE:
If you are working with the Trial Version, you have to jump to chapter 3.3 to create and save projects individually; the workspace functionality is not available. If you have access to the fully licensed product, you will learn how to manage projects in the workspace. In that case, the data in this Guided Tour will be imported into the workspace using the Customized Import tool.
3.1 Introduction
When starting eCognition Developer, you can start it in one of two modes: Rule Set Mode or Quick Map Mode.
The Quick Map Mode is designed to let a user solve simple analysis tasks without getting involved in Rule Set development. The main steps in analyzing an image are creating objects, classifying objects and exporting results; for each of these steps, a small assortment of predefined actions is available. The Rule Set Mode provides all necessary tools to develop your own Rule Sets. In all the Guided Tours, the Rule Set Mode is necessary to accomplish the exercises.

Action!
1. Start eCognition Developer.
2. Select Rule Set Mode.
Importing data
Settings Check
Figure 20: Select Rule Set Mode to have all tools available.
3.2 Option 1: Use workspace and import template
Figure 21: View Settings toolbar with the 4 predefined view setting buttons: Load and Manage Data, Configure Analysis, Review Results, Develop Rulesets.
To create, open or modify a workspace, make sure that you are in the Load and Manage Data view.

Action!
3. Select the predefined view setting number 1, Load and Manage Data, from the View Settings toolbar.
4. To create a new workspace, do one of the following:
   - Click the Create New Workspace button on the toolbar.
   - Choose File > New Workspace from the main menu bar.
5. Enter a Name for the new workspace. The default name is New Workspace.
6. Browse for a location to save the new workspace.
7. Confirm with OK to create the workspace. It is displayed as the root folder in the left pane (the tree view) of the Workspace window.
Action!
The Customized Import dialog box opens.
2. Make sure that the Workspace (1) tab is active.
3. Click the Load (2) button and browse to the GuidedTour1_eCognition8_GettingStarted_Example_SimpleBuildingExtraction\Data folder. Select the CustoImport_GuidedTour_Level1.xml.
The settings defined in the .xml file are loaded.
4. Define the Root Folder (3) by browsing to the folder where the data is stored.
5. Define RGB_Level1_Simple_Example.img as Masterfile (4).
Note:
Select No for the upcoming Resolve conflicts error message. The information about a different number of image layers also refers to the panchromatic layer, which is loaded in addition.
6. Confirm with OK (5).
Settings Check
Figure 23: Customized Import dialog box with settings to import the data.
Note:
Alternatively you can right-click in the left workspace window and select Import Existing Project and browse to the folder where the Guided Tour1 data is stored. There you can find the Level1_Simple_Example.dpr project file.
Note:
The currently opened project is marked in the Workspace window with an asterisk.
NOTE:
As there is no undo command, it is recommended that you save a project prior to any operation that could lead to the unwanted loss of information, such as deleting an object layer or splitting objects. To retrieve the last saved state of the project, close the project without saving and reopen.
3.3 Option 2 (Trial version): Creating an individual project
The Create Project dialog box together with the Import Image Layer dialog box opens.
2. Navigate to the folder \GuidedTour1_eCognition8_GettingStarted_Example_SimpleBuildingExtraction\Data. Mark the following image files and click Open:
   RGB_Level1_Simple_Example.img
   DSM_Level1_Simple_Example.img
3. The Create Project dialog box opens.
4. In the Create Project dialog box, in the field Name, either enter a meaningful name for the project, e.g. Building extraction, or keep the default naming according to the first loaded image file, RGB_Level1_Simple_Example.
NOTE:
Geocoding is not available for the data belonging to this Guided Tour. A subset of the loaded images can be selected by clicking the Subset Selection button; the Create Subset dialog box opens.
The image layer options section (2):
- All preloaded image layers are displayed along with their properties.
- To select an image layer, click it. To select multiple image layers, press the Ctrl or Shift key and click the image layers.
- To edit a layer, double-click or right-click an image layer and choose Edit. The Layer Properties dialog box opens. Alternatively, you can click the Edit button.
- To insert an additional image layer, click the Insert button or right-click inside the image layer display window and choose Insert from the context menu.
- To remove one or more image layers, select the desired layer(s) and click Remove.
- To change the order of the layers, select an image layer and use the up and down arrows.
- To set No Data values for pixels that are not to be analyzed, click No Data. The Assign No Data Values dialog box opens.
The thematic layer options section (3):
- To insert a thematic layer, click the Insert button or right-click inside the thematic layer display window and choose Insert from the context menu.
- Editing a thematic layer works similarly to editing image layers, as described above.
The metadata options section (4): here you can load additional information as an .ini file, if available.
NOTE:
The Layer Properties dialog box opens. Important: for all two-dimensional image analysis, as is standard in Earth Sciences, the lower part of the dialog box, Multidimensional Map Parameters, can be ignored.
Assign the alias DSM to Layer4.
This indicates that an area is defined as No Data if any of the layers contains a pixel with the value 0.
6. Confirm the settings with OK.
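The No Data rule just described (an area counts as No Data if any layer holds a 0 there) corresponds to a simple union of per-layer masks. A small NumPy sketch with toy 2 × 2 layers, for illustration only:

```python
import numpy as np

# Three toy layers; a pixel is No Data if ANY layer holds the value 0 there.
red   = np.array([[0, 12], [34, 56]])
green = np.array([[7,  0], [34, 56]])
dsm   = np.array([[5,  6], [ 0, 56]])

# Union of the per-layer zero masks.
no_data = (red == 0) | (green == 0) | (dsm == 0)
print(no_data)
# [[ True  True]
#  [ True False]]
```

Only the bottom-right pixel, which is non-zero in every layer, would enter the analysis.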
Lesson 4 Developing the core strategy by evaluating the data content

4.1 The image visualization tools
Action!
1. If you are working with the workspace, open the created project by double-clicking it in the Workspace window, or right-click it and select Open from the context menu.
2. Zoom in and out of the loaded scene and use the panning function to evaluate the data.
Settings Check
4. Click the dots for the red, green and blue layers to deactivate them, and click in one field of the DSM channel to activate it.
5. At the bottom of the Edit Layer Mixing dialog box, click OK.
Action!
The image will now be displayed using the view settings you specified.
Settings Check
Figure 29: Only DSM is displayed.
A second window opens.
2. To link the two windows, go to the Windows menu and select Side by Side View.
Now two viewer windows are open (see Figure 31), showing the same area of the loaded data. If you now zoom in one window, the other will follow and show the same content.
Result Check
4.2 Evaluate the data content
Besides the buildings, the trees are elevated objects too, but displaying the DSM you can see that leaf-off trees, in contrast to buildings, give back very heterogeneous elevation information. This heterogeneity can be used as a feature to separate the trees from buildings.
Figure 32: Loaded data with the workspace, displayed in two viewers. Above: the DSM layer; Below: the RGB layer mix.
This example shows that height information from the LiDAR data provides the key information to extract buildings. However, since trees and buildings are both elevated, only their different appearance in the LiDAR and RGB layers additionally separates buildings from all other elevated objects. The rules in words:
- Buildings are higher than a certain level. Use the height information from the DSM.
- Trees, which can have the same elevation as buildings, have heterogeneous elevation information, are smaller and have spectral vegetation features. Use the heterogeneity and size information, and classify elevated objects with vegetation features into a separate class.
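These verbal rules can be condensed into a single decision function. In the sketch below, only the elevation threshold of 765 appears later in this tour; STD_MAX, RATIO_GREEN_MAX and AREA_MIN are placeholder values chosen purely for illustration, not values from the Rule Set:

```python
ELEV_MIN = 765.0        # elevation threshold used in this Guided Tour
STD_MAX = 2.0           # placeholder: trees show heterogeneous elevation
RATIO_GREEN_MAX = 0.4   # placeholder: vegetation has a high green ratio
AREA_MIN = 100          # placeholder: buildings are comparatively large (pixels)

def classify_object(mean_dsm, stddev_dsm, ratio_green, area):
    """Apply the verbal rules in order: elevation, heterogeneity/vegetation, size."""
    if mean_dsm < ELEV_MIN:
        return "unclassified"   # not elevated
    if stddev_dsm > STD_MAX or ratio_green > RATIO_GREEN_MAX:
        return "tree"           # heterogeneous elevation or vegetation signal
    if area < AREA_MIN:
        return "unclassified"   # too small to be a building
    return "building"

print(classify_object(mean_dsm=770, stddev_dsm=0.5, ratio_green=0.3, area=500))  # building
print(classify_object(mean_dsm=770, stddev_dsm=5.0, ratio_green=0.5, area=500))  # tree
```

In eCognition, each of these conditions becomes its own process in the Process Tree rather than one function, but the decision logic is the same.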
Lesson 5 Creating image objects

5.1 Strategy for creating suitable image objects
Information
The fundamental step of any eCognition image analysis is the segmentation of a scene (representing an image) into image objects. Thus, the initial segmentation is the subdivision of an image into separated regions represented by basic unclassified image objects, called image object primitives. To get the most realistic image objects, they have to represent the necessary elevation information, and they should be neither too big nor too small for the analysis task.
Figure 33: Diagram of relationship between Image Layer and Image Object Level.
5.2 Translate strategy into Rule Set: use algorithm multiresolution segmentation
Figure 34: Left: RGB layer mix; right: image object outlines. Homogeneous areas result in bigger objects, heterogeneous areas in smaller ones.
Figure 35: Process Tree with process to create image objects using the multiresolution segmentation added.
Figure 36: View Settings toolbar with the 4 predefined view setting buttons: Load and Manage Data, Configure Analysis, Review Results, Develop Rulesets.
Action!
1. Select the predefined view setting number 4, Develop Rulesets, from the View Settings toolbar.
In the Develop Rulesets view, by default one viewer window for the image data is open, as well as the Process Tree window, the Image Object Information window, the Feature View and the Class Hierarchy.
Information
Action!
Result Check
Figure 37: If you right-click in the Process Tree window, a context menu appears.
3. Select Append New from the context menu. The Edit Process dialog box opens.
Action!
4. In the Name field, enter the name Building Detection and confirm with OK.
Note:
For all parent processes which serve only as a wrapper for underlying processes, you can edit the name. For all processes containing an algorithm, keep the automatic naming!
Action!
TIP:
Type the first letters of the algorithm you want to use in the Algorithm field; a list of suitable algorithms is provided.
3. Keep pixel level (2) in the Image Object Domain.
Note:
For the first segmentation you cannot change this. If you perform a second segmentation step, you can choose whether to start from the pixel level domain again or from an already existing image object level.
4. In the Level Name (3) field, type Level 1. This will be the name of the image object level to be created.
5. In the Scale parameter (4) field, enter 25.
6. Confirm the settings with OK. The process is now added to the Process Tree.
Settings Check
Figure 38: Edit Process dialog box with settings to create Level 1 using the multiresolution segmentation algorithm.
Figure 39: Process Tree with parent process Building Detection and child process for creating Level 1.
Action!
An image object level, Level 1, is created according to the settings in the multiresolution segmentation process.
5.3 Review the created image objects
Result Check
The created objects represent the buildings even in their detailed outlines and also the roof parts.
These objects will be the basis to classify in a first step all elevated objects in the scene.
Lesson 6 Initial classification: classifying all elevated objects

6.1 Strategy to classify buildings based on elevation information
As described in the chapter Evaluate the data content, the assumption is made that buildings are always elevated. The subset used for this Guided Tour does not have much change in terrain elevation, so the absolute elevation above sea level is simply used as a threshold to classify elevated objects.
Figure 41: First step is to classify all elevated objects, buildings and trees.
6.2 Translate the strategy into Rule Set: Object mean of DSM, algorithm assign class
6.2.1 Find the feature and the threshold to represent all elevated objects (mean value of DSM)
Information
Every object contains many feature values. There are:
- Object features: layer features, shape features and position features
- Class-related features: context information about neighbors, sub-objects below and super-objects above
- Scene features: features relative to the scene, like the overall brightness, etc.
- Process-related features: features with which you can expand the domain concept to deal only with individual objects, not the whole class or level
- Metadata: external, additional information, if available
- Feature variables: used for defining individual names for features
For the initial classification of elevated objects, the mean DSM value of each object is used.
Note:
If it is not already open, there are several ways to open the Feature View tool: choose Tools > Feature View from the menu, or select the Feature View button from the Tools toolbar.
Select the feature
Action!
1. In the Feature View window, browse to Object features > Layer Values > Mean.
2. Double-click DSM.
3. Move your cursor over the objects in the viewer and the exact feature value for each object appears.
As you can see, the elevated objects appear quite bright; this means they have high values for the feature Mean DSM.
Result Check
Figure 42: All objects appear now in grey values representing the respective DSM value.
But how do you find the correct threshold to separate elevated from non-elevated objects?
Figure 43: All image objects are colored in a smooth transition from blue (low values) to green (high values).
Information
With the arrows beside the value boxes you can increase or decrease the start and end of the range. All objects which are not within this range will be displayed in grey again.
Action!
2. To isolate the high values (areas with high elevation), click the up arrow (1) to the right of the minimum value. This will increase the low end of the range; only objects within the new range are displayed in color.
3. Continue until you reach the value 765, or type it in.
Result Check
Figure 44: All image objects within a value range from 765 to 810 are colored, all objects outside this range (lower than 765) are displayed in grey values.
The rule derived from the feature and the values can be formulated like this: all objects with a mean elevation of more than 765 are possible buildings.
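Expressed outside the GUI, this rule is a plain threshold over the per-object feature. A NumPy sketch with hypothetical per-object mean DSM values (only the 765 threshold comes from this tour):

```python
import numpy as np

# Hypothetical mean DSM values for five image objects.
object_mean_dsm = np.array([758.2, 766.9, 771.4, 760.0, 801.3])

# The rule from this tour: objects with mean DSM >= 765 are possible buildings.
is_building = object_mean_dsm >= 765
print(is_building)  # [False  True  True False  True]
```

In eCognition this same comparison is entered as the threshold condition of an assign class process rather than written as code.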
NOTE:
Be sure to update the range of feature values each time you select a different feature. Otherwise, the range of the previously selected feature is used.
Action!
3. In the Process Tree, right-click the Classification of Buildings process and select Insert Child from the context menu.
4. Enter Classification based on elevation in the Name field and confirm with OK.
Figure 45: Process Tree with the two parent processes Classification of Buildings and Classification based on elevation.
The Class Description dialog box opens.
2. Enter Building in the Name field.
3. Keep the default color and confirm with OK.
Figure 46: The Class Description dialog box. The name is defined as Building. Red is chosen as color.
eCognition 8.0 Guided Tour Level 1: Simple building extraction
Result Check
Action!
Choose algorithm and object domain
2. Choose assign class from the algorithm drop-down list.
3. Check that Level 1 is set as the level domain.
4. In the Class Filter field, keep none.
Define condition
5. Click in the Threshold condition field and then click on the button next to it. The Edit threshold condition dialog box opens.
6. Browse to Object features>Layer Values>Mean>DSM and double-click on it.
7. Choose larger or equal than (>=) as the operator and enter the value 765.
8. Confirm with OK. The condition is added to the process.
Settings Check
Action!
Define the target class
9. In the Algorithm Parameter section in the right pane, select Building from the drop-down list.
10. Confirm the process settings with OK. The process is now added to the Process Tree.
Settings Check
Figure 50: Edit Process dialog box with settings to classify objects with Mean DSM higher than 765 to the class Building.
6.3
1. In the View Settings toolbar, select the View Classification button and make sure that the Show or Hide Outlines button is deselected.
2. Move the cursor over the classification; the assigned class appears as a tool tip next to it.
3. Select the Pixel View or Objects Mean View button to switch the transparency view on and off.
4. Select the Show or Hide Outlines button to switch the outlines of the classification colors on and off.
Result Check
Figure 52: Classification view not transparent, transparent and with outlines view switched on.
All elevated objects are now classified. Besides the buildings, trees are also classified. This means additional rules must be added to refine the result. You have to develop a new strategy to find additional rules that minimize the misclassifications.
7.1 Evaluate object features for DSM to find a separating feature
In a second processing step, buildings have to be separated from trees, as both are elevated objects. Here again, the DSM information will be used.
Figure 54: Image layer DSM. Trees appear more heterogeneous than buildings.
If you evaluate the DSM image layer, you can see that trees show very high elevation values directly next to very low ones. This is due to the leaf-off tree branches: where the laser hits a branch, it returns a high elevation value; between the branches the laser passes through to the ground, resulting in a low elevation value. It is therefore characteristic of trees that significant changes in elevation occur close together.
Introduction
Figure 55: The laser hits one time the branch, the other time it goes onto the ground. Due to this effect, the leaf-off trees appear heterogeneous.
Which feature represents the characteristics of leaf-off trees? The features of the category Mean will not help here. A very suitable feature to describe heterogeneity or homogeneity is the standard deviation.
7.2
Translate the strategy into a Rule Set: standard deviation of DSM, algorithm assign class
7.2.1 Find the feature and the threshold to separate buildings from trees (Stddev. of DSM)
Information
For the leaf-off trees, high differences in elevation are expected, so the standard deviation of these areas should be high. In contrast, the building surfaces should return low values for the standard deviation.
Information
The pixel values of a building object lie close to the object's mean, meaning the dispersion is low. In contrast, the pixel values of a tree object are widely dispersed around the object's mean.
Figure 56: Left: classification view; Middle: whole range of the standard deviation of DSM; Right: high standard deviation values are highlighted in color.
The trees appear in green; this indicates high values for the standard deviation of DSM. If you increase the low values, only the trees remain colored.
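The dispersion argument can be sketched with a small, invented example. The snippet below (the `stddev_dsm` helper and the toy values are assumptions for illustration, not eCognition code) computes the per-object standard deviation of DSM values; a flat roof scores low, a leaf-off tree scores high:

```python
import numpy as np

def stddev_dsm(dsm, labels):
    """Per-object standard deviation of DSM values.

    High values flag heterogeneous surfaces such as leaf-off trees;
    low values flag flat surfaces such as roofs.
    """
    return {int(obj_id): float(dsm[labels == obj_id].std())
            for obj_id in np.unique(labels)}

# Toy scene: object 1 is a flat roof (near-constant elevation); object 2
# is a leaf-off tree where the laser alternates between branch and ground.
dsm = np.array([[770.0, 770.5, 770.2],
                [775.0, 752.0, 774.0]])
labels = np.array([[1, 1, 1],
                   [2, 2, 2]])

stds = stddev_dsm(dsm, labels)
roof_like = {k for k, v in stds.items() if v < 6.0}  # keep as Building
print(roof_like)  # {1}
```

The threshold of 6 is the same value derived interactively in the Feature View above.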
Action!
3. Click in the Class Filter field and then click on the button next to it. Select Building as the domain by switching on the check-box next to it.
Tip:
If you want to change to the list view in the Edit Classification Filter dialog box, simply select the last button. Then you can select the classes by double-clicking them.
Define condition
4. Click in the Threshold condition field and then click on the button next to it.
5. Browse to Object features>Layer Values>Standard deviation>DSM and double-click on it.
6. In the Edit threshold condition dialog box, choose larger or equal than (>=) as the operator and enter the value 6.
7. Confirm with OK.
Settings Check
Figure 58: Edit Process dialog box with settings to un-classify all Building objects with a standard deviation of DSM larger than 6.
Figure 59: Process Tree with second process for classification added.
7.3
Review the classification result: trees are de-classified; some vegetation and shadow objects are still misclassified
Result Check
All Building objects with a value higher than 6 for the feature standard deviation of DSM are unclassified again. But still there are too many objects classified as Building. The classification is not ready for export!
Figure 60: Left: Classification before refinement; Right: Classification after refinement.
This means you have to go back to the Develop Strategy stage to find additional rules to minimize the misclassifications.
8.1
Strategy to refine buildings based on spectral information: the spectral layer ratio
Introduction
After refining the classification using the standard deviation of the DSM layer, some trees are still classified as Building. Some of them are coniferous trees: they are not leaf-off and therefore return a quite homogeneous elevation surface. However, these coniferous trees differ significantly from buildings spectrally. Roofs are usually not covered by vegetation; therefore a feature must be found that represents vegetation in a stable way.
Figure 61: The black arrows are pointing to some trees not de-classified with the previous step.
It is obvious that the green layer contains significant information about vegetation, but only in comparison with the other two layers.
Introduction
One way of comparing image layers is to create a ratio. The relevant ratio here is green/(red+green+blue). This knowledge must be transferred into the Rule Set now. Outlook on Process Tree:
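The ratio itself is simple arithmetic, and sketching it in code also shows why a customized feature is needed: the denominator must contain only the three spectral layers, not the DSM. The snippet below is an illustrative sketch; the `ratio_green` helper and the toy band values are invented for this example:

```python
import numpy as np

def ratio_green(green, red, blue, labels):
    """Per-object mean of green/(red+green+blue).

    The DSM layer is deliberately left out of the denominator; the
    software's built-in ratio features would include every loaded layer,
    which is exactly why a customized feature is used instead.
    """
    total = red.astype(float) + green + blue
    ratio = np.divide(green, total, out=np.zeros_like(total), where=total > 0)
    return {int(obj_id): float(ratio[labels == obj_id].mean())
            for obj_id in np.unique(labels)}

# Toy scene: object 1 is a grey roof (balanced bands),
# object 2 is vegetation (green-dominant).
red   = np.array([[100, 100], [40, 40]])
green = np.array([[100, 100], [120, 120]])
blue  = np.array([[100, 100], [40, 40]])
labels = np.array([[1, 1], [2, 2]])

ratios = ratio_green(green, red, blue, labels)
vegetated = {k for k, v in ratios.items() if v >= 0.36}
print(vegetated)  # {2}
```

The 0.36 cut-off is the threshold derived in the Feature View in the following steps.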
Figure 64: Process Tree processes added to refine classification based on spectral feature.
8.2
Translate the strategy into a Rule Set: ratio of green, algorithm assign class
8.2.1 Find the feature and the threshold for spectral refinement (ratio green)
Information
Within eCognition Developer there are standard features, always listed in the Feature View, but you can also create customized features for your specific needs. You can create, save and load such Customized Features. In this example, a feature is needed that expresses the ratio green/(red+green+blue).
Note:
The software provides a standard ratio calculation. These ratio features are calculated on the basis of all inserted layers. In our case, this means that the DSM would be included in the calculation.
4. Browse to Object Features>Customized.
The customized feature is added to the Feature View.
The mean value for the layer green of the object is compared to the overall brightness of the object.
Result Check
Figure 66: The loaded customized feature in the Edit Customized Feature dialog box.
Action!
Result Check
Figure 67: Feature View of the customized feature Cust Ratio Green
The vegetated areas appear very bright; this means the objects have high values. The amount of green compared to the other image layers is dominant here.
Action!
2. Right-click on the feature and select Update Range from the context menu.
3. Switch on the check box at the bottom of the Feature View window.
4. Increase the lower range to the value 0.36. Most of the vegetated areas have values above 0.36.
Result Check
Figure 69: Feature View range (above 0.36) of the customized feature Cust Ratio Green
Figure 70: Vegetation is represented by values higher than 0.36 for the feature CustRatioGreen.
As a rule from the feature and the range it can be formulated that: Unclassify all Building objects with a value for customized ratio green of more than 0.36.
Action!
Choose algorithm and object domain
Again only one condition for classification is used, so the algorithm assign class is appropriate.
2. Choose assign class from the algorithm list.
3. As Class Filter, select Building.
Define condition
4. Click in the Threshold condition field and then click on the button next to it.
5. Browse to Object features>Customized>Cust Ratio Green and double-click on it.
6. In the Edit threshold condition dialog box, choose larger or equal than (>=) as the operator and enter the value 0.36.
7. In the Active class field, keep unclassified.
Settings Check
Figure 71: Edit Process dialog box with settings to un-classify Building objects with a value for the customized feature higher than 0.36.
Figure 72: Process Tree with parent process Refinement based on spectral information added and with child processes for classification.
Action!
8.3
Review the classification result: the vegetated areas are de-classified; small objects are still misclassified
All Building objects with a value higher than 0.36 for the feature Cust Ratio Green are now unclassified again.
Figure 73: Left: Classification view before refinement; Middle: Feature View for Ratio Green; Right: Classification after refinement.
But there are still objects misclassified as Building, and some building areas are not yet classified. The classification is not ready for export yet! This means the strategy has to be refined; additional rules must be added to minimize the misclassifications.
9.1
Introduction
Some areas of the buildings have not been classified yet, or were de-classified again because they fulfilled one of the previous conditions. Some of these unclassified objects are largely surrounded by Building objects. If an unclassified object shares a high common border with Building objects, it should also belong to the class Building.
Figure 74: Some unclassified objects are highly surrounded by Building objects.
This knowledge must be transferred into the Rule Set now. Within eCognition Developer you can express neighborhood relationships of objects using the Class-Related (context) features. This is possible within the image object level, for super-objects above or sub-objects below.
Introduction
In the current example neighborhood within the same level will be analyzed. The rule must describe the situation that if an unclassified object has a high common border to Building objects it belongs to the class Building too.
Figure 76: All objects have neighborhood relationships, which can be used as information for classification.
This knowledge must be transferred into the Rule Set now. Outlook on Process Tree:
Figure 77: Process Tree with processes added for classification based on context information.
9.2
Figure 78: The selected object (blue outlines) has a border to Building of about 0.6 and a border to unclassified of about 0.4.
The range of the feature reaches from 0 to 1:
- Objects with no border to Building objects have the value 0.
- Objects with a low common border to Building objects have low values.
- Objects with a high common border to Building objects have high values.
- Objects which are completely surrounded by Building objects have the value 1.
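The feature behind this list can be sketched as a simple border-counting routine. The following Python snippet is an invented illustration (the `rel_border_to` helper, the 4-neighbour edge counting and the tiny label strip are all assumptions, not eCognition's actual implementation), but it reproduces the 0-to-1 behaviour described above:

```python
import numpy as np

def rel_border_to(labels, classes, obj_id, target_class):
    """Fraction of an object's border shared with a target class (0..1).

    labels  -- 2-D array of object IDs
    classes -- dict mapping object ID to class name
    Border length is approximated by counting 4-neighbour pixel edges
    between obj_id and any other object; edges at the image boundary
    are not counted, so only borders between objects contribute.
    """
    h, w = labels.shape
    total = shared = 0
    for y in range(h):
        for x in range(w):
            if labels[y, x] != obj_id:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != obj_id:
                    total += 1
                    if classes.get(labels[ny, nx]) == target_class:
                        shared += 1
    return shared / total if total else 0.0

# 1x4 strip: object 1 touches a Building object (left) and an
# unclassified object (right), so half its border is shared with Building.
labels = np.array([[2, 1, 1, 3]])
classes = {2: "Building"}  # object 3 carries no class (unclassified)
print(rel_border_to(labels, classes, 1, "Building"))  # 0.5
```

With the 0.5 threshold used below, this example object would just qualify for the class Building.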
Action!
Note:
By default, the list of class-related features is empty; you first have to create the relationship you want to evaluate. If you had many classes and all relationships to all classes were always displayed, this would be very confusing. This is why you create the individual relationships on demand.
2. Double-click on Create new Rel. border to.
The Create Rel. border to dialog box opens.
3. From the Value drop-down list, select the class Building, as the border to Building objects must be evaluated.
4. Confirm the settings with OK.
Settings Check
Figure 79: The Create Rel. border to dialog box with the class Building selected.
Result Check
Figure 80: Feature View with new feature Rel. border to Building created.
Result Check
Objects appearing in a color range from blue to green have at least half of their overall border in common with Building objects. Objects with a lower common border to Building objects now appear in a grey range. If the threshold value were too low, too many neighboring objects would be included.
Figure 81: The selected object has a Rel. border to Buildings of 0.3.
If the value is too high, too few neighboring objects would be included.
Figure 82: The selected object has a relative border to Buildings of 0.6.
Figure 83: The selected object has a Rel. border to Buildings of 0.5.
As a rule from the feature and the range, it can be formulated that: unclassified objects with a relative border to Building objects of more than 0.5 shall be classified as Building too.
Action!
Define condition
4. Click in the Threshold condition field and browse to Class-Related features>Relations to neighbor objects>Rel. border to Building, then double-click on it.
5. In the Edit threshold condition dialog box, choose larger or equal than (>=) as the operator and enter the value 0.5.
6. Confirm with OK.
Settings Check
Figure 84: Edit Process dialog box with settings to classify all unclassified objects with a high common border to Buildings as Buildings too.
Figure 85: Process Tree with parent process Refinement based on context and shape added and with child processes for classification.
Action!
9.3
Unclassified objects with a relative border to Building objects of more than 0.5 are classified as Building too.
But still there are too many misclassifications. The classification is not ready for export! This means the development strategy has to be refined to find additional rules to minimize the misclassifications.
10.1 Strategy to refine buildings based on shape information: generalize objects, separate them by size
Introduction
There are still some misclassified objects, but they are very small. The size of the objects is the separating condition here, based on the assumption that buildings have a certain minimum size.
As all objects are still as small as the initial multiresolution segmentation created them, they first have to be merged according to their classification. After that, the Area feature can be used to separate the large buildings from the small misclassified objects.
This means you have to add two rules: Rule 1) Merge all Building objects. Rule 2) Unclassify all too small Building objects. This knowledge must be transferred into the Rule Set now. Outlook on Process Tree:
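Rule 2 amounts to a simple filter over an object table. The sketch below is an invented illustration (the `refine_by_area` helper and the object dictionary are assumptions, not eCognition structures); it presumes Rule 1, the merge region step, has already run, so each building is a single object carrying its full area:

```python
def refine_by_area(objects, min_area=7000):
    """Rule 2: un-classify Building objects smaller than a minimum area.

    objects -- dict mapping object ID to {"class": ..., "area": ...}.
    Assumes Rule 1 (merging adjacent Building objects) has run first,
    so each building is one object with its full area in pixels.
    """
    for props in objects.values():
        if props["class"] == "Building" and props["area"] < min_area:
            props["class"] = "unclassified"
    return objects

objects = {
    1: {"class": "Building", "area": 12000},    # real building -> kept
    2: {"class": "Building", "area": 300},      # small artefact -> dropped
    3: {"class": "unclassified", "area": 100},  # untouched
}
refine_by_area(objects)
print([props["class"] for props in objects.values()])
# ['Building', 'unclassified', 'unclassified']
```

The 7000-pixel threshold is the value derived in the Feature View in the following steps.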
Introduction
Figure 89: Process Tree with process added to merge all Building objects and to un-classify too small Building objects.
Action!
Choose algorithm and object domain
2. Choose merge region from the algorithm list.
3. Select Building as Class filter.
4. Keep the Threshold condition empty.
5. Confirm with OK.
Settings Check
Figure 90: Edit Process dialog box with settings to merge all Building objects.
Figure 91: Process Tree with parent process clean up too small Building objects added and with child processes to merge Building objects.
Action!
Execute the process
6. Right-click on the process and select Execute from the context menu. Alternatively, press F5 on your keyboard.
10.2.2 Find the feature and the threshold for refinement based on shape
Information
Within eCognition Developer, objects can also be evaluated according to their shape characteristics, such as their area.
Action!
1. In the Feature View window, browse to Object features>Geometry>Extent.
2. Double-click on the feature Area.
Result Check
The buildings appear very bright; this means the objects have high values. The area of buildings is significantly larger than the area of non-building objects.
Action!
3. Right-click on the feature and select Update Range from the context menu.
4. Switch on the check box at the bottom of the Feature View window.
5. Increase the lower range to the value 7000. All of the non-Building objects have values below 7000.
Result Check
As a rule from the feature and the range it can be formulated that: Unclassify all Building objects with a value for the feature Area of less than 7000.
10.2.3 Translate the strategy into a rule for refinement based on the Area feature
Insert process to classify
Action!
1. In the Process Tree, right-click on the last process and select Append New from the context menu.
Choose algorithm and object domain
Again only one condition is used, so the algorithm assign class is appropriate.
2. Choose assign class from the algorithm list.
3. Select Building as Class filter.
Define condition
4. In the Threshold condition field, browse to Object Features>Geometry>Extent>Area and double-click on it.
5. In the Edit threshold condition dialog box, choose smaller than (<) as the operator and enter the value 7000.
6. Confirm with OK.
Settings Check
Figure 94: Edit Process dialog box with settings to un-classify all Building objects smaller than 7000 pixel.
Execute the classification process 7. Right-click on the process and select Execute from the context menu. Alternatively press F5 on your keyboard.
Action!
This means the development is ready for the next step, the Export.
Figure 97: Process Tree with process added to export a vector layer of the result.
Information
To clean up and reduce the number of objects in the scene first all unclassified objects must be merged.
Settings Check
Figure 98: Edit Process dialog box with settings to merge all unclassified objects.
Figure 99: Process Tree with parent process Export added and with child processes to merge unclassified objects.
This is controlled by the Distance value.
1. In the Feature View window, browse to Class-Related features>Relations to Classification.
2. Double-click on Create new Class name.
Action!
The Class name dialog box opens.
3. Keep the default settings.
4. Confirm the settings with OK.
Result Check
Figure 100: The feature Class name in the Feature View tree.
Choose algorithm and object domain
2. Choose export vector layers from the algorithm list.
3. Keep the Class filter and Threshold condition at their defaults.
Define the parameters
4. In the Export mode field, keep Use export item.
5. In the Export item name field, enter Building. This will be the name of the exported file.
6. Click in the Attribute table field.
The Select Multiple Features dialog box opens.
7. Browse to Class-Related features>Relations to Classification>Class name and double-click on it to move it to the Selected window. Do the same for the Area feature (Object Features>Geometry>Extent>Area).
8. Confirm with OK.
Result Check
9. In the algorithm parameter field Shape Type, change Points to Polygons.
10. Keep all other default settings.
11. After clicking OK, the Edit Attribute Table Columns window appears, where further settings are possible.
12. Execute the process. The shapefile, together with its attributes, is exported to the location where the image data is stored.
Settings Check
Figure 102: Edit Process dialog box with settings to export a vector layer.
Figure 103: Process Tree with process added to export a vector layer of the result.
Figure 105: Exported .dbf table containing the area and the classification of every object.
The eCognition community helps to share knowledge and information among the user, partner, academic and developer communities, so that everyone can benefit from each other's experience. Among other content, the community offers:
Wiki: a collection of eCognition-related articles (e.g. Rule Set tips and tricks, strategies, algorithm documentation).
Discussions: ask questions and get answers.
File exchange: share any type of eCognition-related code such as Rule Sets, Action Libraries and plug-ins.
Blogs: read and write insights about what's happening around our industry.
Share your knowledge and questions with other users interested in using and developing image intelligence applications for Earth Sciences at http://community.ecognition.com/.
eCognition Training
eCognition Training Services offer a carefully planned curriculum that provides hands-on, real-world training. We are dedicated to enhancing customers' image analysis skills, helping these organizations to accomplish their goals. Tailored courses are available to meet the needs of customers engaged in Life and Earth Sciences. Our courses are held in our classrooms around the world and at customers' sites. We offer regular trainings as Open Classes, where anyone can register, and as In-Company Trainings. We also offer Customized Courses to satisfy customers' unique image analysis needs, thereby maximizing the training effect. For more information, please see our website at http://www.ecognition.com/learn/trainings .
eSeminars
Join one of the live, online eSeminars: http://www.ecognition.com/learn/trainings/web-based-self-study-training
Consulting
eCognition Consulting Services experts have deep insight into products, best practices and professional project management. As a result, we are able to deliver rapid implementations that maximize the return on investment and minimize total cost of ownership. Consulting can also provide full-service, turnkey solutions, including project management, off-site rule development, QA and implementation reviews. Customers can count on our consultants' considerable expertise in the application of technology.