
Software Engineering

CS3003
Lecture 3: Software Metrics
Lecture schedule
Week  Lecture Topic                                      Lecturer   Week Commencing
1     Introducing the module and software engineering   Steve      28th Sept.
2     Software maintenance and evolution                 Steve      5th Oct.
3     Software metrics                                   Steve      12th Oct.
4     Test-driven development                            Giuseppe   19th Oct.
5     Software structure, refactoring and code smells    Steve      26th Oct.
6     Software Complexity                                Steve      2nd Nov.
      (Coursework released Tuesday 3rd November)
7     ASK week                                           N/A        9th Nov.
8     Software fault-proneness                           Steve      16th Nov.
9     Clean code                                         Steve      23rd Nov.
10    Human factors in software engineering              Giuseppe   30th Nov.
11    SE techniques applied in action                    Steve      7th Dec.
12    Guest industry lecture (tba)                       Guest      14th Dec.
      (Coursework hand-in Monday 14th December)
Lab/seminar schedule
Week  Seminar                        Lab                           Week Commencing
1     No seminar                     No lab                        28th Sept.
2     Seminar                        Lab (Introduction)            5th Oct.
3     Seminar                        Lab                           12th Oct.
4     Seminar                        Lab                           19th Oct.
5     Seminar                        Lab                           26th Oct.
6     Coursework Brief Seminar       No lab                        2nd Nov.
7     ASK week                       ASK week                      9th Nov.
8     Seminar                        Lab                           16th Nov.
9     Coursework technique seminar   Lab                           23rd Nov.
10    Seminar                        Lab                           30th Nov.
11    No seminar                     Work on coursework (no lab)   7th Dec.
12    No seminar                     Work on coursework (no lab)   14th Dec.
Norman Fenton

4
Structure of this lecture

This lecture will answer the questions:

 How can software size be measured?
 How can software structure be measured?
 How can object-oriented code be measured?
 What is the link to the previous lecture on Maintenance and Evolution?

5
Uses of measurement

 Measurement helps us to “understand”
  Makes the current activity visible
  Measures establish guidelines
 Measurement allows us to “control”
  Predict outcomes and change processes
 Measurement encourages us to “improve”
  When we hold our product up to a measuring stick, we can establish quality targets and aim to improve

6
How can software size be measured?
 Why is size important?
 Related to effort and cost
 LOC (lines of code) is a common measure of size
  …but is it very useful?
  What about comments, blank lines and lone “}” characters? (see the sketch below)
 Many companies measure functionality rather than code length
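As a rough illustration, here is a minimal Python sketch (not a tool from the lecture) of why a LOC count depends on what you choose to exclude; the //-comment and brace rules are assumptions for a C-style language:

```python
# A minimal sketch of a LOC counter. It reports raw physical lines
# alongside a count that skips blank lines, //-style comments and
# lone braces: the "size" of the same file varies with the rules chosen.
def count_loc(source: str) -> dict:
    physical = logical = 0
    for line in source.splitlines():
        physical += 1
        stripped = line.strip()
        # The exclusions below are a policy choice, not a standard
        if not stripped or stripped.startswith("//") or stripped in ("{", "}", "};"):
            continue
        logical += 1
    return {"physical": physical, "logical": logical}

example = """int add(int a, int b)
{
    // add two numbers
    return a + b;
}"""
print(count_loc(example))  # {'physical': 5, 'logical': 2}
```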

7
How can software structure be measured?
Information flow within the system
 An indicator of maintainability and coupling
 Identifies critical stress points of the system and design problems
 Based on:
  Fan-in: the number of modules calling a module
  Fan-out: the number of modules called by a module

8
Ref: Marchese, PACE University

9
Henry & Kafura’s Complexity Metric:

 A module X is 10 lines long
 It has a fan-in of 3 and a fan-out of 2
 Complexity of a module = module length × (fan-in × fan-out)²

Complexity of X = 10 × (3 × 2)² = 360
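The calculation is simple enough to express directly; a minimal sketch of the formula applied to the module above:

```python
# Henry & Kafura's information-flow complexity:
# complexity = length * (fan_in * fan_out)^2
def hk_complexity(length: int, fan_in: int, fan_out: int) -> int:
    return length * (fan_in * fan_out) ** 2

# Module X from the slide: 10 lines long, fan-in 3, fan-out 2
print(hk_complexity(10, 3, 2))  # 360
```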

10
A complexity measure
 McCabe’s Cyclomatic Complexity measure
  Commonly used in industry
  Implemented in lots of tools
  Any good?
 Based on the control-flow graph
 Very useful for identifying white-box test cases
 Attributed to Tom McCabe, who proposed it in 1976

11
Cyclomatic Complexity
Program P
 CC(P) = #edges - #nodes + 2
 #edges = 11
 #nodes = 9
 CC = 11 - 9 + 2 = 4

(Note: we exclude the start and end nodes and edges, although some sources don’t.)
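A minimal sketch of the calculation over an adjacency-list control-flow graph. The graph here is hypothetical, chosen to have the same counts as the example above (11 edges, 9 nodes); it is not necessarily the graph pictured on the slide:

```python
# Cyclomatic complexity from an adjacency-list control-flow graph:
# CC = #edges - #nodes + 2
def cyclomatic_complexity(cfg: dict) -> int:
    nodes = len(cfg)
    edges = sum(len(succs) for succs in cfg.values())
    return edges - nodes + 2

# A hypothetical program: an if/else, then a second decision feeding a loop
cfg = {
    "entry": ["if1"],
    "if1":   ["then1", "else1"],  # decision 1
    "then1": ["join1"],
    "else1": ["join1"],
    "join1": ["if2"],
    "if2":   ["then2", "while"],  # decision 2
    "then2": ["while"],
    "while": ["join1", "exit"],   # loop-back edge: decision 3
    "exit":  [],
}
print(cyclomatic_complexity(cfg))  # 11 edges, 9 nodes -> CC = 4
```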

12
Ref: Vergilio et al., 2006

13
Complexity Metrics

 Why use complexity metrics?
 They can be used to identify:
  Candidate modules for code inspections
  Areas where redesign may be appropriate
  Areas where additional documentation is required
  Areas where additional testing may be required
  Areas for refactoring
How can OO code be measured?

 We’ll look at 5 of the 6 metrics of Chidamber and Kemerer (C&K)
  Developed in 1991
 Weighted methods per class (WMC)
  A simple count of the number of methods in a class
 Depth in the inheritance tree of a class (DIT)
  The level in the inheritance tree of a class
 Number of children (NOC)
  The number of immediate sub-classes a class has
 (A sketch of these three metrics follows below)
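A minimal Python sketch (an assumed illustration, not C&K’s own tooling) of these three metrics, computed over a toy class hierarchy via introspection:

```python
import inspect

class A:
    def m1(self): pass
    def m2(self): pass

class B(A):
    def m3(self): pass

class C(A):
    pass

def wmc(cls) -> int:
    # Weighted Methods per Class, with every method weighted 1:
    # count the methods defined directly in the class body
    return sum(1 for v in cls.__dict__.values() if inspect.isfunction(v))

def dit(cls) -> int:
    # Depth of Inheritance Tree, with the root class at depth 0
    # (the MRO includes the class itself and `object`, so subtract 2)
    return len(cls.__mro__) - 2

def noc(cls) -> int:
    # Number of Children: immediate subclasses only
    return len(cls.__subclasses__())

print(wmc(A), dit(A), noc(A))  # 2 0 2
print(wmc(B), dit(B), noc(B))  # 1 1 0
```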

15
[Figure: an example inheritance tree (Ref: Marchese, PACE University). The root ‘m’ has DIT equal to 0; ‘a’, ‘b’ and ‘c’ are children of ‘m’, so ‘c’ has DIT equal to 1; ‘c’ has two children of its own, and a node ‘r’ four levels below the root has DIT equal to 4.]

16
C&K metrics (cont.)
 Coupling between objects (CBO)
  The number of other classes coupled to a class
 Lack of cohesion of methods (LCOM)
  In C&K’s definition, the number of method pairs that share no instance variables, minus the number of pairs that do (floored at zero)

17
CBO

Class coupling (“ravioli code”); here, boxes represent classes and arrows represent links (i.e., coupling) between classes

18
LCOM Example

Methods (M1–M3) and the variables those methods use (V1–V3)

http://www.tusharma.in/technical/revisiting-lcom/
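A minimal sketch of C&K’s pair-counting definition, applied to method-to-variable usage data in the shape of the linked example. The concrete sets here are assumptions for illustration, not taken from the figure:

```python
from itertools import combinations

def lcom(uses: dict) -> int:
    # C&K's LCOM: P = method pairs sharing no instance variables,
    # Q = pairs sharing at least one; LCOM = max(P - Q, 0)
    p = q = 0
    for m1, m2 in combinations(uses, 2):
        if uses[m1] & uses[m2]:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# M1 and M2 share V2; M3 touches only V3
print(lcom({"M1": {"V1", "V2"}, "M2": {"V2"}, "M3": {"V3"}}))  # P=2, Q=1 -> 1
```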

19
Thresholds

 Metric thresholds:
 What is the best number of methods in a class?
 What is the best number of attributes in a class?
 What is the best number of LOC in a method?
 What is the best number of LOC in a class?
 What would you consider the optimal level in
each case?
 Answer?

20
What is important about measures?
 Direct measures
  Measures that can have numbers directly attributed to them
  Examples include:
   Length of source code (measured in LOC)
   Effort of programmers on a project
 Indirect measures
  Measures that cannot have numbers attributed directly; they must be calculated from other measures to make sense
  Examples include:
   Fault rate per day = number of faults in a week / 5 (assuming a 5-day working week)
   Area of a room = length × width (two direct measures combined; see the sketch below)
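For instance, a minimal sketch of an indirect measure being derived from a direct one (the numbers are made up):

```python
# An indirect measure cannot be read off the system directly;
# it is calculated from direct measures.
faults_this_week = 15              # direct measure: counted in the fault tracker
working_days_per_week = 5          # assumption from the slide: a 5-day week
fault_rate_per_day = faults_this_week / working_days_per_week
print(fault_rate_per_day)          # 3.0 faults per working day
```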

21
Problems with metrics in the real world
 There is a tendency for professionals to display over-optimism and over-confidence in metrics
 Metrics may cause more harm than good
 Data are often shown because they are easy to gather and display
 Metrics may have a negative effect on developer productivity and well-being
 What are other practical problems with collecting metrics?

22
Software Metric Usage
 Use common sense and organizational sensitivity when
interpreting metrics data
 Provide regular feedback to the individuals and teams
who have worked to collect measures and metrics.
 Don’t use metrics to appraise individuals
 Never use metrics to threaten individuals or teams.
 Metrics data that indicate a problem area should not be
considered “negative”.
 These data are merely an indicator for process improvement

23
NASA’s use of metrics

 What does NASA measure on their code?
  Cyclomatic Complexity
  Lines of Code
  Number of comments
  Number of blank lines
  Branch count
 NASA has the best project estimation knowledge anywhere… why?

24
Cone of uncertainty (McConnell)

Estimating software size

25
What does the cone show?
 At the beginning of a software project, estimates are subject to large uncertainty
 As we progress, we learn more about the system and uncertainty decreases (estimates become more accurate)
 When we start writing code, that uncertainty decreases even more
 The more features we complete, the more that uncertainty decreases
 Eventually, the uncertainty reaches 0% (at the project end)

26
How metrics can help us make decisions (an “audit grid”)

[Figure: an audit grid plotting ten systems (numbered 1-10) on two axes, system quality (horizontal) and business value (vertical), dividing them into four quadrants: high business value / low quality, high business value / high quality, low business value / low quality, and low business value / high quality.]

27
Test-based metrics
 How many test cases have been designed
per requirement?
 How many test cases have still to be
designed?
 How many test cases have been executed?
 How many test cases passed or failed?
 How many bugs were identified and what
were their severities?

28
Reading

 Sommerville, Ch. 24 in both editions 9 and 10
 Chidamber, S. and Kemerer, C., “A metrics suite for object oriented design”, IEEE Transactions on Software Engineering, June 1994
 Fenton, N. and Bieman, J., Software Metrics: A Rigorous and Practical Approach, Third Edition, CRC Press, 2014
 More reading in the seminar sheet

29
