
Educational Note

MACHINE LEARNING
BASED NETWORK
OPTIMIZATION
Products:
1. R&S®SmartAnalytics

Gorana Bosic | 8NT10 | Version 1e | 02.2024


www.rohde-schwarz.com/appnote/8nt10
Contents
1 Overview
2 Machine Learning approach in Data Analytics
3 Machine Learning use cases
3.1 Network Utilization Rating (NUR)
3.1.1 How the NUR model works
3.2 Call Stability Score (CSS)
3.2.1 How the CSS model works
4 Test solution implementing machine learning
4.1 Network Utilization Rating implementation in SmartAnalytics
4.1.1 Output values and results
4.1.2 Statistical Analysis in L1 Statistics
4.1.3 Real-field Examples
4.2 Call Stability Score implementation in SmartAnalytics
4.2.1 Output values and results
4.2.2 Statistical Analysis in L1 Statistics
4.2.3 Drill Down Analysis in L2 Analysis
4.2.4 Real-field Examples
5 Ordering information

1 Overview
Mobile network operators worldwide face cost pressures and escalating network complexity. The advent of
5G-NR has ushered in new use cases and flexibility, but it also comes with more stringent performance and
availability requirements. Concurrently, a decline in expertise on the MNO side has intensified the long-
standing issue of legacy, labor-intensive data exploitation.
Machine Learning can serve as a catalyst for the market to unearth deep insights that would otherwise
remain concealed. It can also significantly streamline everyday tasks by fostering a smarter system that
guides users through their routine work processes, rather than obliging them to repeat each (manual) step.
Rohde & Schwarz Mobile Network Testing has focused its efforts on delivering Machine Learning use cases
that distill relevant insights from drive testing data, thereby offering substantial benefits to users.
This educational note explains why this kind of approach is needed, outlines the benefits of the machine learning offering, and takes a deeper look at how it can be used at various levels of statistical and technical analysis. Finally, the document provides real measurement results and analysis findings based on the Rohde & Schwarz post-processing software suite SmartAnalytics and its machine learning based features, Call Stability Score and Network Utilization Rating.

2 Machine Learning approach in Data Analytics
The complexity of analyzing data from mobile network measurements and deriving actionable insights has
been increasing. This complexity is partially due to the growing size of the data, making manual deep-dive
technical analysis unscalable.
Technical analysis often encompasses multiple technologies. For instance, in 5G Non Standalone (NSA),
both LTE and 5G factors need to be included. Several factors such as spectrum occupancy, bandwidth, and
different signal values (related to coverage and interference) must be considered for any technology. Hence,
technical analysis is a multidimensional problem that can pose challenges even for statistical analysis, as
correlations and interdependencies might not be directly visible. Moreover, almost all relevant factors in the
analysis are dynamic rather than static in time domain, adding another layer of complexity to the analysis
process.
The conclusions and actions derived are based on the theoretical knowledge and engineering experience of
the individuals or teams performing the technical analysis. However, mobile operators and their network
deployments can vary greatly in size, complexity, and implementation. This diversity makes it impossible for
engineering teams to possess all-encompassing knowledge. Consequently, some insights from certain
markets may never be accessible to others.
Machine Learning (ML) approaches to data processing and insight extraction can address these challenges.
ML algorithms can be applied to large and representative worldwide datasets. In general terms, ML is a type
of Artificial Intelligence (AI) that enhances how software systems process and categorize data. The term
‘Machine Learning’ describes the process where ML algorithms mimic human learning and progressively
improve as they process larger datasets.
In practice, this means that ML models are derived through a training process based on a large dataset, the
type of ML (such as supervised, unsupervised, self-supervised, reinforcement learning), and the selected ML
architecture (such as recurrent, convolutional, LSTM, encoder/decoder neural networks). The resulting model
can then be applied to new data to generate smart KPIs and insights.
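To make the train-then-apply workflow concrete, the following minimal sketch uses a generic scikit-learn regressor on synthetic data. It is illustrative only and does not reflect the actual models, features or architectures used in SmartAnalytics.

```python
# Illustrative only: a generic supervised train-then-predict workflow.
# The real models, features and architectures are not shown here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: rows of radio measurements, target is a KPI.
X_train = rng.normal(size=(10_000, 5))            # e.g. RSRP, RSRQ, SINR, ...
y_train = X_train @ np.array([2.0, 1.0, 3.0, 0.5, 0.1]) + rng.normal(size=10_000)

model = RandomForestRegressor(n_estimators=50)    # "training" step
model.fit(X_train, y_train)

# The resulting model is applied to new measurement data.
X_new = rng.normal(size=(100, 5))
smart_kpi = model.predict(X_new)                  # one "smart KPI" per sample
```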

Figure 1 Machine learning

3 Machine Learning use cases
In this educational note two machine learning use cases are described to provide more insights into the
quality of mobile networks:
1. Network Utilization Rating (NUR) – rating the efficiency of the resource usage in a mobile network
2. Call Stability Score (CSS) – rating the drop probability/risk of a successfully established voice call in a mobile network

3.1 Network Utilization Rating (NUR)


In data services, the download data rate is a key performance indicator, and operators strive to maximize
data transfer speed given the radio conditions and technology. However, there can be significant gaps
between the potential and actual services provided to users due to various factors such as network settings,
congestion, or connectivity.
Traditional downlink data rate measurements show the achieved data rate but do not indicate whether
network resources are being used optimally. Under certain radio conditions, due to local or systematic
issues, the achieved transport performance can be sub-optimal, implying that resources could be better
utilized.
The Network Utilization Rating (NUR) addresses this issue. It complements traditional data rate
measurements with a score that indicates whether resources were used optimally or if there is room for
improvement. NUR analyzes a set of radio parameters, considers spectrum bandwidth and allocation as well
as the technology, and predicts an optimal data rate that could realistically be achieved under these
conditions. The difference between this predicted and the actual measured data rate forms the NUR score.
This method is based on machine learning techniques and a large pool of data rate measurements from
representative regions worldwide with state-of-the-art LTE and 5G deployments. NUR applies to downlink
data rates in 4G and 5G technology, both standalone and non-standalone, and with or without carrier
aggregation.
NUR is a value in the range of 0 to 150. A value of 100 reflects a performance reached in about the top 20%
of tests worldwide. This means that under given radio conditions, bandwidth, spectrum allocation, and
technology, 20% of the analyzed measurements have reached this data rate or better. A NUR value of 100 or
above indicates optimal resource usage under current radio conditions. Conversely, a value significantly
lower than 100 indicates sub-optimal resource usage. Improved NUR performance could be achieved without
changing radio conditions by addressing other causes of underperformance such as non-optimal network
settings or connectivity issues.
NUR guides users in identifying areas of low performance regarding resource utilization and their potential for
improvement, especially for troubleshooting and optimizing mobile networks. The overall rating can be used
in benchmarking to compare network maturity and how efficiently available spectrum is utilized by network
providers.

3.1.1 How the NUR model works

3.1.1.1 Model design

Machine learning methods make it possible to discover hidden patterns in large amounts of complex, multidimensional data where manual or rule-based methods cannot meet the needs. PDSCH throughput information and the input parameters defined in 3.1.1.2 represent a multidimensional problem in the time domain. The time dimension is important: various radio signals and throughput fluctuate over time, and what came before and what follows matters for properly determining the impact on user experience.
The training data, PDSCH throughput and the input parameters, are based on a large, representative worldwide data set. They are split per technology and carrier component into 500 ms time periods and used as input to a neural network in a process called training. Based on this training, the so-called model is created; it can then be fed with new tests and input parameters to predict downlink PDSCH throughput. This prediction is called the expected value and is used further to obtain the NUR value, as explained in 4.1.1.
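As a sketch of the data preparation described above, the following hypothetical code splits a per-carrier time series into 500 ms periods and applies an already trained model to obtain the expected throughput. Function and variable names are illustrative, not the SmartAnalytics implementation.

```python
# Hypothetical sketch: windowing time-series data into 500 ms periods and
# predicting the expected PDSCH throughput per window with a trained model.
from typing import List
import numpy as np

PERIOD_MS = 500

def split_into_periods(timestamps_ms: np.ndarray, features: np.ndarray) -> List[np.ndarray]:
    """Group feature rows into consecutive 500 ms windows (per carrier/technology)."""
    windows = []
    start = timestamps_ms[0]
    while start <= timestamps_ms[-1]:
        mask = (timestamps_ms >= start) & (timestamps_ms < start + PERIOD_MS)
        if mask.any():
            # e.g. average the samples that fall into the window
            windows.append(features[mask].mean(axis=0))
        start += PERIOD_MS
    return windows

def expected_throughput(model, timestamps_ms, features) -> np.ndarray:
    """Apply the trained model to each window; the result is the 'expected value'."""
    windows = np.vstack(split_into_periods(timestamps_ms, features))
    return model.predict(windows)
```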
It is worth mentioning that the model is trained on clean data: a validation process was performed to remove HTTP/HTTPS DL capacity tests with problems, e.g., a malfunction of the mobile device or the content server.

3.1.1.2 Input features

The supported technologies are LTE and 5G, and due to technology differences the input parameters differ as well. Both technologies support carrier aggregation, so the values of the input parameters are taken from each carrier component (a minimal feature-record sketch in code follows the parameter lists below).
The input parameters used to train the LTE NUR model and to predict the NUR value are the following:
• Numerical parameters
o RSRP
o RSRQ
o SINR
o CQI0: Average CQI_0 from PUCCH
o NetPDSCHThroughput: Scheduled throughput – discarded
• Categorical parameters
o Downlink bandwidth: DL Bandwidth in MHz from carrier aggregation cell information
o Data Technology: creating a distinction between LTE, LTE with carrier aggregation, 5G NR
in non-standalone mode or 5G NR in standalone mode.
o Duplex mode: TDD or FDD
o Carrier index
o Rank index from PUCCH
The input parameters used to train the 5G NUR model, and to predict the NUR value are the following:
• Numerical parameters
o CQI: Average Channel Quality Indicator from 5G NR CSI Report
o NetPDSCHThroughput: Scheduled throughput – discarded
• Categorical parameters
o Downlink bandwidth: DL Bandwidth in MHz from carrier aggregation cell information
o Data Technology: creating a distinction between 5G NR in non-standalone mode or 5G NR
in standalone mode
o Duplex mode: TDD, FDD, SUL or SDL
o Carrier index

o Rank index from 5G NR CSI Report
o Frequency range: FR1 or FR2
o Dynamic Spectrum Sharing
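As a purely hypothetical illustration of how one per-carrier input record could be organized, the sketch below mirrors the parameter lists above; the field names and types are not the actual internal data model.

```python
# Hypothetical data layout for one 500 ms NUR sample of a single carrier.
# Field names follow the parameter lists above; not the actual data model.
from dataclasses import dataclass

@dataclass
class LteNurSample:
    # numerical parameters
    rsrp: float
    rsrq: float
    sinr: float
    cqi0: float                     # average CQI_0 from PUCCH
    # categorical parameters
    dl_bandwidth_mhz: int           # from carrier aggregation cell information
    data_technology: str            # "LTE", "LTE CA", "NR NSA" or "NR SA"
    duplex_mode: str                # "TDD" or "FDD"
    carrier_index: int
    rank_index: int                 # from PUCCH

@dataclass
class NrNurSample:
    cqi: float                      # average CQI from 5G NR CSI report
    dl_bandwidth_mhz: int
    data_technology: str            # "NR NSA" or "NR SA"
    duplex_mode: str                # "TDD", "FDD", "SUL" or "SDL"
    carrier_index: int
    rank_index: int                 # from 5G NR CSI report
    frequency_range: str            # "FR1" or "FR2"
    dynamic_spectrum_sharing: bool
```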

3.2 Call Stability Score (CSS)


In telephony services, successful setup and completion of a call is a basic user expectation, and call
completion ratio is often considered one of the most important KPIs in telephony. A failed call setup or a call
that begins but then drops is perceived by users as a significant negative event. Over the past decades,
operators have exerted considerable effort to minimize these failed and dropped calls in mobile networks. As
a result, unsuccessful mobile calls are now quite rare in developed networks.
However, failed and dropped calls do still occur. Identifying these requires extensive drive tests to detect a
sufficient number of failed and dropped calls for statistically reliable performance evaluation or to recognize
systematic issues in call progress. Moreover, if a call fails or drops during these measurements, the problems
have already negatively impacted the users.
To address the limitations of traditional call success statistics, the Call Stability Score (CSS) offers new
insights that were previously unavailable in a single, aggregated score. CSS analyzes a set of radio
parameters and assesses the risk of an unsuccessful call based on the current combination of these
parameters. This provides an estimate of how stable an established call is and the risk of losing it.
CSS is a continuous value ranging from 0 to 1. Its distribution across the entire range of radio performance
allows for a rapidly converging performance statistic, unlike the traditional call drop ratio. Therefore, the score
can be used for an overall stability rating as well as for smaller subsets of calls, such as per radio band or
geographical region, to identify critical areas or situations where the risk of unsuccessful calls is increased.
The CSS model is trained using machine learning techniques on a large pool of calls from representative
regions worldwide. It applies to 3G WCDMA circuit-switched and 4G VoLTE mobile-to-mobile calls, thus
covering well over 90% of today’s call technologies worldwide. VoNR (Voice over 5G NR) is on the rise and a
Machine Learning model might be trained once a sufficiently large data set is available.
CSS estimates the stability of an established call based on radio performance. A measured score with value
close to 1 indicates that a call under these conditions is highly unlikely to fail, while a score with value close
to 0 suggests a high probability of a dropped call. CSS provides a risk analysis and enables network
operators to proactively improve network conditions and focus on problematic areas without having to track
individual failed and dropped calls, which may occur very rarely and somewhat arbitrarily.

3.2.1 How the CSS model works

3.2.1.1 Model design

The call status of a successfully established call can be either 'success' or 'dropped'. This provides the opportunity to label the calls and to build a semi-supervised machine learning model that can learn the properties of calls within these two groups. The model also learns from an optimal set of input parameters, which for CSS are time series of the relevant radio signals. The time dimension of the input parameters is important: various radio signals fluctuate over time, and what came before and what follows matters for properly determining the impact on user experience.

It is worth mentioning that the model is trained on clean data: a validation process was performed to remove calls with problems, e.g., a malfunction of the mobile device. As CSS strives to determine radio call stability, call drops related to core network problems are removed as well.
Based on this training, the so-called model is created; it can then be fed with new calls to predict how likely they are to drop. This prediction forms the Call Stability Score.
Each call consists of several CSS samples, where each CSS sample is 7.5 seconds long. The number of CSS samples in a call depends on the call duration, on when the drop occurred in the case of a dropped call, and on whether a handover to a non-supported technology has occurred, e.g., a handover to GSM.
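As a hypothetical illustration of this sampling, the following sketch splits an established call's radio time series into 7.5 s CSS samples and scores each with a trained model; the model interface and names shown here are invented for illustration only.

```python
# Hypothetical sketch: splitting an established call into 7.5 s CSS samples
# and scoring each sample with a trained model (names are illustrative).
import numpy as np

CSS_SAMPLE_S = 7.5

def css_per_sample(model, t_s, radio_features, drop_time_s=None):
    """Return one CSS value (0..1) per 7.5 s sample of an established call.

    If the call dropped, only samples up to the drop are scored; samples after
    a handover to an unsupported technology (e.g. GSM) would also be excluded.
    """
    end_s = drop_time_s if drop_time_s is not None else t_s[-1]
    scores = []
    start = t_s[0]
    while start + CSS_SAMPLE_S <= end_s:
        mask = (t_s >= start) & (t_s < start + CSS_SAMPLE_S)
        window = radio_features[mask]    # time series of RSRP, RSRQ, SINR, speed
        # hypothetical model call returning a stability score in [0, 1]
        scores.append(float(model.predict(window[np.newaxis, ...])[0]))
        start += CSS_SAMPLE_S
    return scores
```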

3.2.1.2 Input features

Supported technologies are UMTS for CS/CSFB calls, and LTE for VoLTE calls. The Call stability score
requires different input parameters for each technology, and therefore there are two different models built –
one for UMTS and one for LTE.
The input parameters used to train the LTE CSS model, and to predict the CSS value are the following:
• Call Status: successful or dropped
• RSRP_Rx0 and RSRP_Rx1: RSRP of first and second antenna
• RSRQ_Rx0 and RSRQ_Rx1: RSRQ of first and second antenna
• SINR0 and SINR1
• Speed (speed of the car or any other vehicle, depending on how the measurements are done)
The input parameters used to train the UMTS CSS model, and to predict the CSS value are the following:
• Call Status: successful or dropped
• Aggregate RSCP for all PSC
• UMTS Aggregate Ec/Io for all PSC
• Speed (speed of the car or any other vehicle, depending on how the measurements are done)
Due to the low deployment of VoNR worldwide at the time of writing, there is not enough representative data to train a model for VoNR.

4 Test solution implementing machine learning
4.1 Network Utilization Rating implementation in SmartAnalytics

4.1.1 Output values and results

From a mathematical point of view, the Network Utilization Rating represents the relation between the expected and the measured value of the Net PDSCH Throughput. The NUR value is calculated for every NUR sample using the following formulas (a short code sketch follows them):
1. The raw NUR value of a sample is the ratio of the difference between the measured and expected values to their sum. Positive raw NUR values indicate that the measured performance is above expectation (underprediction), while negative values indicate that the measured performance is below expectation (overprediction).
$$NUR_{raw} = \frac{throughput_{measured} - throughput_{expected}}{throughput_{measured} + throughput_{expected}}$$

2. NUR value of a sample is retrieved by applying the following piecewise function:

$$NUR = \begin{cases} (NUR_{raw} + 1) \cdot 100, & NUR_{raw} \le 0 \\ \left(1 + \frac{1}{2}\left(1 - (1 - NUR_{raw})^{2}\right)\right) \cdot 100, & NUR_{raw} > 0 \end{cases}$$

Values of NUR in the range of 100 to 150 → measured performance is above expectation.
Values of NUR in the range of 0 to 100 → measured performance is below expectation.
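A minimal sketch of this per-sample computation, assuming measured and expected throughput are given in the same unit (e.g. Mbps):

```python
def nur_from_throughput(measured: float, expected: float) -> float:
    """Per-sample NUR (0..150) from measured and expected Net PDSCH throughput."""
    nur_raw = (measured - expected) / (measured + expected)
    if nur_raw <= 0:
        return (nur_raw + 1.0) * 100.0                          # 0 .. 100
    return (1.0 + 0.5 * (1.0 - (1.0 - nur_raw) ** 2)) * 100.0   # 100 .. 150

# measured == expected -> 100; measured far above expected -> up to 150
assert round(nur_from_throughput(100.0, 100.0)) == 100
assert nur_from_throughput(50.0, 100.0) < 100
```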

For every downlink capacity test, the observed NUR sample values are further aggregated in the following manner (Figure 2); a short code sketch of this aggregation follows the list:
1. NUR samples are averaged for each technology and carrier component, resulting in as many values
as there are technologies and carrier components e.g., three values for 5G NR non-standalone test
with two LTE carrier components (two LTE values and one 5G). This is the “NUR” value in the
“Carrier” section of the value tree in Smart Analytics.
2. NUR samples are averaged for each technology across all carrier components, resulting in two
values, one for LTE and one for 5G. These are the “NUR 4G” and “NUR 5G” values in the
“Aggregated” section of the value tree in Smart Analytics.
3. NUR samples are averaged across all technologies and carrier components, resulting in one
value per test. This is the "NUR" value in the "Aggregated" section of the value tree in Smart
Analytics.
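The three aggregation levels can be sketched as plain averages over the per-sample NUR values, where the per-sample counts act as implicit weights. This is a hypothetical illustration, not the SmartAnalytics implementation.

```python
# Hypothetical aggregation of per-sample NUR values, following steps 1-3 above.
from collections import defaultdict
from statistics import mean

# samples: list of (technology, carrier_index, nur_value) tuples
def aggregate_nur(samples):
    per_carrier = defaultdict(list)           # step 1: per technology and carrier
    per_tech = defaultdict(list)              # step 2: per technology
    for tech, carrier, nur in samples:
        per_carrier[(tech, carrier)].append(nur)
        per_tech[tech].append(nur)
    carrier_nur = {k: mean(v) for k, v in per_carrier.items()}
    tech_nur = {k: mean(v) for k, v in per_tech.items()}
    overall_nur = mean(nur for _, _, nur in samples)   # step 3: one value per test
    return carrier_nur, tech_nur, overall_nur

# Because steps 2 and 3 average over raw samples, the sample count of each
# technology/carrier acts as a natural weight, as described below.
```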

Figure 2 SmartAnalytics value tree - NUR

With the averaging in steps 2 and 3, the number of samples for each technology and carrier component acts as a natural weight. Therefore, a technology or carrier component that was activated late in the test and has a significantly high or low NUR will have only a moderate impact. The aggregation types are important to keep in mind, as they have a strong impact on which values from the value tree can be combined in charts or as filters.
The user is provided with aggregated values of measured and expected Net PDSCH throughput, with the
same aggregation principles per technology and carrier.

4.1.2 Statistical Analysis in L1 Statistics

The overview of the Network Utilization Rating in Figure 3 and Figure 4 provides high-level statistics of the average NUR and the distribution of NUR for individual operators, overall and broken down per technology. As in many other L1 views, the user can view the geographical distribution of NUR on a map, with the available filtering features.

Figure 3 Network Utilization Rating – Overview 1/2

Figure 4 Network Utilization Rating - Overview 2/2

NUR performance can be further broken down per technology and carrier, which helps to narrow low performance down to a specific technology and carrier, the samples contributing to it and their actual impact (Figure 5).

Figure 5 Network Utilization Rating - Overview per Technology and Carriers

The analysis of NUR can be supplemented with the more detailed view of underperforming tests (Figure 6).

Figure 6 Network Utilization Rating - Analysis per Technology

4.1.3 Real-field Examples

When performing a statistical analysis of throughput across different operators, bands or geographical areas, the user usually needs to go through several steps of filtering the data. Analyzing throughput alone is not sufficient; the user needs to either break down or filter the data at least by technology and spectrum bandwidth. As the Network Utilization Rating is based on several radio and spectrum parameters, these are already embedded in the results and can provide faster insights into general differences among operators as well as problem areas.
The following examples provide guidelines on how NUR can be used for a more general analysis and how it can guide the user toward deeper technical insights.

4.1.3.1 Statistical analysis of general performance differences

Achieving the highest throughput does not by itself identify one operator as the best one. What if the best one could be even better? What if the gap towards the competitors is small? What can be done to understand whether there are differences among operators that can drive change and even better performance? The Network Utilization Rating can be a starting point for identifying such differences. The following example shows how initial NUR differences can point to different radio resource management strategies and their impact on throughput.
In the results of a country-wide mobile network benchmark, one operator stands out. As can be seen in Figure 7, the differences in 5G NR Net PDSCH Throughput between the three operators are not large. However, the NUR results portray a different picture: Operator3, with the best average throughput, has the lowest NUR, while Operator2, ranked second in throughput, is ranked first in NUR. Are there general differences that can explain these results? Is there something the operator with the highest throughput and the lowest NUR can do to increase its advantage?

Figure 7 Mobile network benchmark - NUR results

All three operators mostly use a 98 MHz bandwidth, and Operator1 and Operator3 use it almost exclusively. Therefore, as a second step, let us apply a filter to the single bandwidth of 98 MHz and see whether the data rate differences persist. In Figure 8 the differences among operators are still present, however the ranking has changed: Operator2 shows the best data rates. Operator1 and Operator3, which mostly use 98 MHz, show a different ranking of NUR versus throughput.

Figure 8 5G NR Net PDSCH Throughput - 98MHz

What can further explain these data rate differences? Can radio conditions be the root cause? As can be seen in Figure 9, radio conditions differ slightly among the closest competitors. A lower CQI can explain the lower throughput of Operator1; however, its NUR values are higher. Therefore, further analysis of Operator3 is needed.

Figure 9 5G NR Radio conditions - 98 MHz

Some of the reasons can be different contributions of higher-order modulation, larger TBS, the number of resource blocks, etc. Operator3 shows the highest share of samples with a maximum of 2 layers and, within that, the highest share of samples with 64QAM (Figure 10).

Figure 10 5G NR PDSCH statistics (Layers) - 98MHz -> Modulation distribution – 98MHz / Max layers = 2

The analysis of the statistics in Figure 11 leads the user to the conclusion that the best performance in the 98 MHz bandwidth is due to more usage of higher-order modulation and higher TBS.

Figure 11 5G NR PDSCH Statistics - 98 MHz / Max layers = 2 / Modulation = 64QAM

These analysis steps can be repeated for all available technologies, spectrum bandwidths and other dimensions for which a drill-down is meaningful. NUR results can help focus the analysis, whether the goal is to find reasons for differences among operators or to find out whether a single operator shows points for improvement in a specific band and bandwidth.

4.1.3.2 Analysis of an area with low NUR

The Network Utilization Rating makes it easier to spot geographical areas with potential for improvement of data rates and user experience. Figure 12 shows one area of benchmarking measurements in which two very close zones show different NUR performance.

Figure 12 5G NR NUR - Focus Area

Looking at the statistics in Figure 13 and Figure 14, the two zones should not differ much, as the expected throughputs are in similar ranges. However, the measured performance in one zone is poor.

Figure 13 NUR Statistics - Focus area

Figure 14 5G NR - Measured vs. Expected Net PDSCH Throughput

A breakdown per band shows that the two zones of interest use two different bands (Figure 15).

Figure 15 5G NR PDSCH - Band usage

The contributing channels show a vastly different usage of modulation, as visible in Figure 16.

Figure 16 5G NR PDSCH - Modulation distribution

The reason for such low usage of higher-order modulation and low throughput in the low-performing zone can be high BLER and low MCS (as a response to the high BLER), as seen in Figure 17.

Figure 17 5G NR PDSCH – BLER and MCS Statistics

If needed, the user can continue the analysis on a per-test basis. Level 1 workspaces provide lists of the worst and best performing tests, per technology and carrier. Such an analysis of a low-performing test is explained in the section Example of a test with low NUR, where a more detailed example of the impact of high BLER is shown.
Worst performing tests are tests with NUR values below 40, as in Figure 18. Best performing tests are tests with NUR values above 100, as in Figure 19.

Figure 18 5G NR NUR - Worst performing tests

Figure 19 5G NR NUR - Best performing tests

4.1.3.3 Example of a test with low NUR

In the list of tests with bad NUR in Figure 18, test "73-198" shows extremely low performance. The NUR value is 6.47, with a measured throughput of only ~11 Mbps against a very high expected value of 295 Mbps.
Figure 20 shows that the radio conditions are not too bad and that the UE goes through a 5G handover during this test.

Figure 20 Bad 5G NR NUR - Test “73-198” - Radio conditions

Analyzing the 5G NR PDSCH statistics in Figure 21 leads to the conclusion that high BLER is the main reason for the low throughput. The test starts well, with higher MCS and a larger contribution of 256QAM, but the UE reports high BLER and decreases CQI as a reaction to it. The lower CQI leads to lower MCS and lower BLER. Throughput remains low; however, a lower throughput is expected for such a CQI value, which leads to a NUR improvement. Upon handover to another cell, the throughput recovers slightly, but the UE again suffers from BLER problems at the end of the test.

Figure 21 Bad 5G NR NUR - Test “73-198” - PDSCH statistics

4.2 Call Stability Score implementation in SmartAnalytics

4.2.1 Output values and results

As mentioned in the introduction, a Call Stability Score value is provided for every 7.5-second-long CSS sample (Figure 22). An aggregated value per call is not provided. The CSS value is in the range of 0 to 1 for both UMTS and LTE technologies. Values closer to 0 mean there is a high probability that the call will drop, and values closer to 1 mean the call is highly unlikely to drop.

Figure 22 Call Stability Score in time domain

The Call stability score consists of two models for different technologies and the results should be analyzed
separately.

4.2.2 Statistical Analysis in L1 Statistics

The overview of the Call Stability Score in Figure 23 provides high-level statistics of the average CSS and the distribution of CSS for individual operators, overall and broken down per technology. The user is further guided with CSS per call status category and the daily trend. As in many other L1 views, the user can view the geographical distribution of CSS on a map, with the available filtering features.

Figure 23 Call Stability Score – Overview

CSS performance can be further broken down per technology band, which helps to narrow low performance down to a specific band, the samples contributing to it and their actual impact (Figure 24).

Figure 24 Call Stability Score – Analysis

The user can interact with the list and the map showing the best and the worst CSS samples (Figure 25).

Figure 25 Call Stability Score - Worst CSS

The analysis of dropped calls can be supplemented with the combination of call drop statistics and CSS
statistics for calls with that status (Figure 26).

Figure 26 Call Stability Score - Dropped calls analysis

4.2.3 Drill Down Analysis in L2 Analysis

After narrowing down the statistics to the desired field of view, the user can "jump" into L2 UE Drill Down views (Figure 27) to analyze problems in the time domain for all selected calls. Apart from the standard set of interactive charts and controls, the CSS can be observed in the time domain as a line chart within the "Voice/Call" tab. Synchronized in time, the user can analyze how the relevant input parameters change over time and reach conclusions about the underlying reason for a low CSS.

Figure 27 Call and CSS drill down analysis

4.2.4 Real-field Examples

4.2.4.1 Analysis of an area with low CSS

Call Stability Score can be used to spot geographical areas with potential problems, even when traditional
KPIs like speech quality do not identify them as problematic.
In the example of the same area observed in section ‘Analysis of an area with low NUR’, speech quality
shows an even distribution (Figure 28).

Figure 28 Map overview of Speech Quality

However, when looking at the same area in Figure 29, CSS shows that there are location dependencies and problematic spots.

Figure 29 Map overview of Call Stability Score

A further breakdown per band shows that there are differences in CSS performance per band (Figure 30). The two bands with the highest share of CSS samples show different performance: band B shows lower CSS than band C.

Figure 30 CSS overview per band

In the map view (Figure 31), band B shows worse performance in the southern region of the selected area, which results in a wider CDF/PDF distribution of the CSS. Band C, with a better and narrower distribution of CSS, also shows some weak spots on the map.

Figure 31 Map overview of CSS per band

Looking into the radio conditions broken down per band in Figure 32, band differences are also visible in the
statistics and on the maps (Figure 33). The worst SINR performance is observed for band B, which also
shows the lowest CSS.

Figure 32 Radio statistics overview per band

Figure 33 Map overview of SINR per band

4.2.4.2 Example of a call with low CSS

In the list of calls with low CSS in Figure 34, the call "76-58" shows low performance. The CSS value drops to 0.3 and remains below 0.5 throughout the call.

Radio conditions show fluctuations of SINR correlated with CSS fluctuations.

Figure 34 L2 overview - Call “76-58” with low CSS

Looking at the speech quality results for the same call, no significant degradation is observed (Figure 35).

Figure 35 Speech quality - Call “76-58”

However, the radio conditions and the deployment in this area result not only in a lowered CSS, but also in frequent handovers, some resembling a "ping-pong" between the same pair of cells (Figure 36).

Figure 36 Cell handovers - Call “76-58”

5 Ordering information
Designation Type Order No.
Cloud Installation
Package Pro 1 Year R&S®SA-CLPR1YE 1900.6290.84
Package Mid 1 Year R&S®SA-CLMD1YE 1900.6290.83
Package Basic 1 Year R&S®SA-CLB1YEA 1900.6290.73
Package Mini 1 Year R&S®SA-CLMN1YE 1900.6290.85
5G license 1 Year (included in Mid / Pro version) R&S®SA-CL5G1YE 1900.6290.90
NPS license 1 Year (included in Mid / Pro version) R&S®SA-CLNP1YE 1900.6290.92
Cloud Setup Fee R&S®SA-CLSETUP 1900.6290.72
Cloud installations are also available as 3-month versions
On-Premise Installation
Server license for up to 200 UE and 20 scanners per file R&S®SA-BASEPRO 1900.6290.55
Server license for up to 60 UE and 5 scanners per file R&S®SA-BASEMID 1900.6290.54
Server license for up to 20 UE and 1 scanner per file R&S®SA-BASEBAS 1900.6290.53
Standalone notebook installation R&S®SA-BASEOPT 1900.6290.52
for up to 5 UE and 1 scanner per file
User license to operate the SA-BASE licenses R&S®SA-ADDUSR 1900.6290.56
Technology Licenses On-Premise
2G 3G 4G UE technology R&S®SA-2G3G4G 1900.6290.57

2G 3G 4G scanner technology R&S®SA-S2G3G4G 1900.6290.58
5G UE technology R&S®SA-5G 1900.6290.59
5G scanner technology R&S®SA-S5G 1900.6290.60
NBIoT UE R&S®SA-NBIOTUE 1900.6478.15
Analysis Licenses On-Premise
Smart Analytics Scene NPS license R&S®SA-NPS 1900.6290.61
Super Cubes license R&S®SA-SUPCUB 1900.6478.26
Test Based Aggregations R&S®SA-TETAGG 1900.6478.30
Machine Learning Licenses On-Premise
Smart Analytics Scene Vision Call Stability Score R&S®SA-VISCSS 1900.6290.76
Smart Analytics Scene Vision Anomaly Detection R&S®SA-VISAD 1900.6290.77
Third party file format support
TEMS Converter License (includes 5G) R&S®SA-IMTEMSV 1900.6478.20
Nemo file importer R&S®SA-IMNEMO 1900.6478.12
QXDM file importer R&S®SA-IMQXDM 1900.6478.12

Rohde & Schwarz
The Rohde & Schwarz electronics group offers
innovative solutions in the following business fields: test
and measurement, broadcast and media, secure
communications, cybersecurity, monitoring and network
testing. Founded more than 80 years ago, the
independent company which is headquartered in
Munich, Germany, has an extensive sales and service
network with locations in more than 70 countries.
www.rohde-schwarz.com

Certified Quality Management

ISO 9001

Rohde & Schwarz training


www.rohde-schwarz.com/training

Rohde & Schwarz customer support


www.rohde-schwarz.com/support

R&S® is a registered trademark of Rohde & Schwarz GmbH & Co. KG


Trade names are trademarks of the owners.
8NT10 | Version 1e | 02.2024
Educational Note | Machine Learning based Network Optimization
Data without tolerance limits is not binding | Subject to change
© 2023 Rohde & Schwarz GmbH & Co. KG | 81671 Munich, Germany
www.rohde-schwarz.com
