
BLEST: Blocking Estimation-based MPTCP Scheduler for Heterogeneous Networks

Simone Ferlin,∗† Özgü Alay,∗ Olivier Mehani,† Roksana Boreli†
∗ Simula Research Laboratory, Norway   † National ICT Australia (NICTA), Sydney, Australia
{ferlin,ozgu}@simula.no   {first.last}@nicta.com.au

Abstract—With the widespread availability of multi-homed devices, multipath transport protocols such as MPTCP are becoming increasingly relevant to support better use of multiple connectivity through capacity aggregation and seamless failover. However, capacity aggregation over heterogeneous paths, such as offered by cellular and Wi-Fi networks, is problematic. It causes packet reordering leading to head-of-line (HoL) blocking at the receiver, increased end-to-end delays and lower application goodput. MPTCP tackles this issue by penalising the use of longer paths and increasing buffer sizes. This, however, results in suboptimal resource usage. In this paper, we first evaluate and compare the performance of default MPTCP and alternative state-of-the-art schedulers, all implemented in the Linux kernel, for a range of traffic patterns and network environments. This allows us to identify shortcomings of various approaches. We then propose a send-window BLocking ESTimation scheduler, BLEST, which aims to minimise HoL-blocking in heterogeneous networks, thereby increasing the potential for capacity aggregation by reducing the number of spurious retransmissions. The resulting scheduler allows an increase of 12% in application goodput with bulk traffic while reducing unnecessary retransmissions by 80% as compared to default MPTCP and other schedulers.

Index Terms—MPTCP, multipath, transport protocol, packet scheduling, head-of-line blocking, receive window limitation, heterogeneous networks

I. INTRODUCTION

Multipath transport protocols, and particularly Multipath TCP, allow better use of the network resources available to multi-homed devices such as mobile phones. Two main advantages are envisioned: capacity aggregation across multiple links, and the ability to maintain the connection if one of the paths fails. Capacity aggregation is however challenging with heterogeneous paths, such as offered by cellular and Wi-Fi, in particular because of delay heterogeneity [1]. This heterogeneity results in packet reordering, leading to head-of-line (HoL) blocking, increased out-of-order (OFO) buffer use at the receiver and, ultimately, reduced goodput.

MPTCP's default scheduler, minRTT, is based on Round-Trip Time (RTT). minRTT starts by filling the congestion window (CWND) of the subflow with the lowest RTT before advancing to other subflows with higher RTTs. When one of these subflows blocks the connection, e.g., due to head-of-line blocking, MPTCP's default scheduler retransmits the segments blocking the connection on the lowest-delay path and penalises longer (i.e., higher-delay) paths that caused the issue [2]. This has a long-term impact on the CWND of these subflows, which are limited in their growth [3], leading to sub-optimal capacity aggregation, as higher-delay paths are underused [4]. As a rule of thumb, it is also recommended to increase the receive buffer size to further limit HoL-blocking situations [5].

The need for multipath transport protocol schedulers is known, and a number of proposals have been made and evaluated in the past [6]. However, in the specific case of heterogeneous paths, more care is required to avoid the issues discussed above. Such schedulers have been proposed in [7]–[9], based on the concept of sending packets out of order so they reach the receiver in order. There exists, however, no comparison of these schedulers to the MPTCP default scheduler in a consistent environment.

In this paper, we first offer a comparative study of the proposed MPTCP schedulers [7]–[9], by experimentally evaluating our Linux implementation of these algorithms. We evaluate their behaviour for different traffic types (Web, Bulk, CBR). The performance of these schedulers is compared to MPTCP's default scheduler as well as plain single-path TCP, in terms of application goodput (for bulk traffic), end-to-end delays (CBR) and completion time (Web). Based on observations in these experiments, we identify how the studied mechanisms offer the best performance, and what they fail to properly account for. We also take insight from the observations of [10] that not all subflows should be used at all times and that, while scheduling is needed to complement pure congestion control, path selection and send buffer management are also primordial. We then propose a novel BLocking ESTimation-based scheduler, BLEST, which takes a proactive stand towards minimising HoL-blocking. Rather than penalising the slow subflows, BLEST estimates whether a path will cause HoL-blocking and dynamically adapts scheduling to prevent blocking. Although BLEST is designed for heterogeneous paths, we show in our experiments that it works as well as MPTCP's minRTT scheduler in homogeneous scenarios.1

The remainder of this paper is organised as follows. We present the background to this work, and show motivating examples, in the next section. We describe our evaluation setup in Section III. In Section IV, we discuss our implementation of different schedulers [7]–[9] and compare their performance side-by-side with MPTCP's default scheduler.

ISBN 978-3-901882-83-8 © 2016 IFIP
1 BLEST's code is available at http://nicta.info/mptcp-blest.

Based on observations in these experiments, we propose a proactive minimum-delay scheduler that can predict the send-window blocking risk and schedule accordingly, in Section V, and evaluate its performance in Section VI, both in emulated and real multipath environments. We finally offer some concluding remarks in Section VII.

II. BACKGROUND AND MOTIVATION

A. Multipath Transfer over Heterogeneous Paths

Multipath transport has been shown to provide benefits from bandwidth aggregation to increased robustness [2], [11]–[13]. Whenever the underlying network paths are homogeneous, MPTCP accomplishes its goals [14]. However, path heterogeneity can hinder achievement of MPTCP's goals, mostly due to HoL-blocking, which causes higher end-host memory usage and path bandwidth underutilisation [1], [3]. In MPTCP, the scheduler is the component responsible for the distribution of packets among the available paths. A well-designed scheduler that can dynamically adapt packet distribution to the channel conditions, providing better performance both in terms of goodput and delay, is crucial.

MPTCP's default minRTT scheduler2 first sends data on the subflow with the lowest RTT estimation, until it has filled its congestion window [2]. Data is then sent on the subflow with the next higher RTT. In order to address the heterogeneity of the paths, a mechanism of opportunistic retransmission and penalisation (PR) has also been proposed in [2]. In order to quickly overcome HoL-blocking, opportunistic retransmission immediately reinjects segments causing HoL-blocking onto a subflow with an RTT lower than that of the blocking subflow and with space available in its congestion window. The penalisation mechanism also halves the congestion window of the blocking subflow to limit its use. [3] showed that MPTCP's PR does not behave well in some scenarios where path characteristics (e.g., capacity, delay and loss rates) are significantly different. Penalisation of a long subflow (higher RTT) has a long-term detrimental impact on performance: it will take longer for the subflow to increase its CWND, leading to underutilisation of the path and, ultimately, lower capacity aggregation.

In order to illustrate the challenges in heterogeneous scenarios, we ran experiments with constant bitrate (CBR) and web transfers, and contrast the results with homogeneous scenarios. In Figure 1, we observe that the amount of data and the path heterogeneity are the main factors determining the performance of MPTCP. MPTCP generally provides lower completion times, especially for websites with many objects. However, when the paths are heterogeneous in terms of delay and loss, as in the 3G+WLAN case, losses in the WLAN force MPTCP to use the 3G path; therefore, MPTCP's completion time becomes higher than TCP on the WLAN path. Similarly, Figure 1(d) shows the same effect for the packet delay of a CBR flow: MPTCP's minRTT adequately leverages the aggregation of two homogeneous WLAN paths and reduces both delay and jitter; however, it does not perform as well as a single WLAN path when running over heterogeneous 3G+WLAN paths.

This goes against one of MPTCP's design goals: "[a] multipath flow should perform at least as well as a single path flow would on the best of the paths available to it" [5].

Figure 1. Download times for selected websites and application packet delay for CBR video traffic, both over MPTCP in WLAN+WLAN (left of each pair) and 3G+WLAN (right of each pair) (CORE emulation, with background traffic, see III-1): (a) download time, Wikipedia; (b) download time, Amazon; (c) download time, Huffington Post; (d) packet delay, CBR video. MPTCP with heterogeneous paths (3G+WLAN) underperforms single-path TCP on the best (WLAN) path.

B. Schedulers for heterogeneous paths

Alternative multipath scheduling algorithms have been the object of multiple studies [4], [10], [15]. In [6], the authors evaluated different scheduling strategies (pull, push and hybrid) focusing on implementation performance. They also considered how schedulers should cope with paths that have heterogeneous delays and/or capacities. They concluded that a scheduler must take both delay and capacity into consideration in order to effectively leverage multipath scenarios.

Later, [8] evaluated and extended the idea of a Delay-Aware Packet Scheduler (DAPS) [7] for MPTCP in order to overcome HoL-blocking due to path heterogeneity. In that work, the authors derived a rule-of-thumb for the buffer size for MPTCP. [9] explored a more ambitious scheduler implementation, sending packets out of order so that they arrive in order. They however included some simplifications that expose vulnerabilities of the approach. For example, no consideration is given to segment reinjection if a certain path is blocking the connection.

These alternative algorithms were so far not extensively tested against MPTCP's default scheduler. The number of scenarios in which they were evaluated was also limited, and did not cover many scenarios (homogeneous vs. heterogeneous) and traffic classes. The differences in evaluation methods also make it difficult to accurately compare their performance. In the next sections, we address this issue by re-implementing these schedulers in the Linux kernel, and systematically evaluating their performance in a range of scenarios and traffic use-cases against MPTCP's default scheduler.

2 We base our work on MPTCP v0.90 throughout this paper.
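Before moving to the measurement setup, the default behaviour described in Section II-A can be summarised in a small sketch. This is an illustrative model only, not the MPTCP Linux kernel code: the class, field and function names are ours, and details such as the check that a segment is not reinjected twice on the same subflow are omitted.

```python
# Illustrative model of MPTCP's default minRTT scheduler with the
# opportunistic retransmission and penalisation (PR) mechanism.
# All names are ours; this is not the kernel implementation.

class Subflow:
    def __init__(self, name, srtt_ms, cwnd):
        self.name = name
        self.srtt_ms = srtt_ms   # smoothed RTT estimate [ms]
        self.cwnd = cwnd         # congestion window [segments]
        self.inflight = 0        # unacknowledged segments

    def has_cwnd_space(self):
        return self.inflight < self.cwnd

def pick_subflow(subflows):
    """minRTT: among subflows with CWND space, pick the lowest-RTT one."""
    ready = [sf for sf in subflows if sf.has_cwnd_space()]
    return min(ready, key=lambda sf: sf.srtt_ms) if ready else None

def on_send_window_blocked(subflows, blocking_sf):
    """PR reaction when the connection-level send window stalls:
    reinject the blocking segment on a faster subflow with CWND space
    and halve the CWND of the slow subflow that caused the blocking."""
    faster = [sf for sf in subflows
              if sf is not blocking_sf
              and sf.srtt_ms < blocking_sf.srtt_ms
              and sf.has_cwnd_space()]
    target = min(faster, key=lambda sf: sf.srtt_ms) if faster else None
    blocking_sf.cwnd = max(1, blocking_sf.cwnd // 2)   # penalisation
    return target                                      # subflow used for the reinjection

wlan = Subflow("WLAN", srtt_ms=50, cwnd=10)
umts = Subflow("3G", srtt_ms=130, cwnd=6)
print(pick_subflow([wlan, umts]).name)            # WLAN (lowest RTT, has space)
target = on_send_window_blocked([wlan, umts], umts)
print(target.name, umts.cwnd)                     # WLAN 3: reinject on WLAN, 3G CWND halved
```

The CWND halving in on_send_window_blocked is precisely the penalisation step whose long-term effect on capacity aggregation is criticised above.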

Figure 2. Emulation experiment setup: the MPTCP client and server are connected over two bottlenecks; Client 1/Server 1 and Client 2/Server 2 generate background traffic over Bottleneck 1 and Bottleneck 2, respectively.

III. MEASUREMENT SETUP

We used CORE [16] for the initial evaluation. CORE is a network emulator able to emulate a real network stack implementation within Linux containers, making it suitable to avoid simulation model simplifications. Figure 2 shows the emulation topology. Bottleneck 1 was loaded with background traffic from Server 1 to Client 1, and Bottleneck 2 with traffic from Server 2 to Client 2. The link characteristics for the WLAN and 3G links are set as follows.
• WLAN: Capacity=25 Mbit/s, Delay=25 ms, Loss=1%
• 3G: Capacity=5 Mbit/s, Delay=65 ms, Loss=0%
Based on measurements carried out in real networks, the queue lengths at each router interface were set to 100 packets for WLAN and 3750 packets for 3G. The losses applied to the WLAN path are random.

1) Network and System Characteristics: System settings are known to impact TCP's performance. In order to emulate realistic network scenarios, we used system settings close to the standard characteristics of each technology. The TCP buffer sizes (send/receive) were set to be equivalent to widely known Android settings, configured as follows.
• Homogeneous (WLAN): 1024 KiB/2048 KiB.
• Heterogeneous (3G+WLAN): 1024 KiB/2048 KiB.
For bulk traffic experiments, we set both send and receive buffers to 16 MiB to evaluate MPTCP's aggregation capability. To ensure independence between runs, the cached TCP values were flushed after every run. We focused on congestion avoidance; therefore, we discarded the initial phase of each experiment and analysed a period of 90 s for bulk and constant bitrate (CBR) traffic. For single-path TCP flows, we used TCP Reno, therefore fairly comparing against MPTCP-OLIA.3

2) Application Traffic: We considered three different types.
a) Video Streaming: We considered constant bit-rate (CBR) video traffic with a frame size of 5 KiB at the application level and a rate of 1 Mbps. This is in line with recent measurement studies [17] showing that more than 53% of the downstream traffic in North America is video streaming, and with other reports [18] predicting further increases.
b) Web Traffic: We selected three websites of different sizes, small, medium and large (see Table I), as a good set of typical website sizes. To mimic the behaviour of a real browser, downloads were performed with 6 concurrent connections.
c) Bulk Transfer: We completed the evaluation with the most common case for MPTCP — a bulk transfer of 64 MiB.

Table I. WEB TRAFFIC GENERATION
Domain name                     Number of Objects   Size of Objects
http://www.wikipedia.org        15                  72 KiB
http://www.amazon.com           54                  1024 KiB
http://www.huffingtonpost.com   138                 3994 KiB

3) Background Traffic: A synthetic mix of TCP and UDP traffic was generated with D-ITG [19] as background traffic in order to create a realistic environment. The TCP traffic was composed of saturated-sender and rate-limited TCP flows with an exponentially distributed mean rate of 157 pps. The UDP traffic was composed of UDP on/off flows with Pareto-distributed on and exponentially distributed off times. Each flow has an exponentially distributed mean rate of 100 kbps in the heterogeneous scenario and 500 kbps in the homogeneous scenario. Packet sizes were varied with a mean of 1000 Bytes and RTTs between 20 and 100 ms. We repeated all experiment settings 50 times, in both emulation and real scenarios.

3 TCP-Linux kernel 3.14.33 is used throughout our evaluations.

IV. SCHEDULING AGAINST HOL-BLOCKING

In the following, we discuss both the Delay-Aware Packet Scheduler (DAPS) [7], [8] and the Out-of-order Transmission for In-order Arrival Scheduler (OTIAS) [9], evaluating them in common scenarios and commenting on their implementation.

A. Delay-Aware Packet Scheduler (DAPS)

The DAPS algorithm was proposed in two versions. In [7], it pursues the goal of making segments arrive in order by planning which subflows the next segments should be sent over, based on both the forward delay and the CWND of each subflow. A schedule is created to span the least common multiple (LCM) of the forward delays, lcm(Di ∈ {D1, D2, ..., Dn}). Algorithm 1 shows the main loop of the mechanism.

As an example, assume two subflows with similar capacities, but with one subflow having a forward delay ten times higher than the fast subflow. DAPS will derive the following schedule: segments 1...10 will be sent on the fast subflow, and segment 11 on the other subflow. Ideally, segment 11 will arrive right after segment 10, thereby avoiding HoL-blocking.

In [8], DAPS is formulated for a scenario with only two subflows (rs and rf). It is also a simplification of the original algorithm [7], as it does not take CWND asymmetry into account, only considering the subflows' RTT ratio (η) and the CWND of the fast subflow.

Since both algorithms are comparable, we consider only the original DAPS [7] in our evaluations. We ignore the simplifications presented in [8], as they were only introduced to ease the implementation in the ns-2 model of CMT-SCTP.
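The schedule construction can be illustrated with a short sketch that reproduces the example above. This is our own simplified rendering of the idea behind Algorithm 1 (one segment per sending occasion, abstract time units, illustrative names), not the authors' kernel implementation:

```python
from math import lcm

def daps_schedule(forward_delays, cwnds):
    """Sketch of a DAPS scheduling run: sequence numbers are assigned to
    sending occasions in order of their estimated arrival time, so that
    segments reach the receiver in order although they leave out of order.

    forward_delays[i]: forward delay of path i (abstract time units)
    cwnds[i]: segments path i can carry per sending occasion
    """
    horizon = lcm(*forward_delays)
    # every sending occasion within one run: (arrival_time, send_time, path)
    occasions = [(t + d, t, path)
                 for path, d in enumerate(forward_delays)
                 for t in range(0, horizon, d)]
    # earliest estimated arrival first; ties go to the lower-delay path
    occasions.sort(key=lambda o: (o[0], forward_delays[o[2]]))
    schedule, next_seq = [], 1
    for arrival, send_t, path in occasions:
        segments = list(range(next_seq, next_seq + cwnds[path]))
        next_seq += cwnds[path]
        schedule.append((send_t, path, segments, arrival))
    return schedule

# Two subflows with similar capacities but a 10x forward-delay ratio:
for send_t, path, segments, arrival in daps_schedule([1, 10], [1, 1]):
    print(f"send at t={send_t:2d} on path {path}: {segments} (arrives ~t={arrival})")
```

With the delays of the example and one segment per occasion, the printed schedule sends segments 1...10 on the fast path and segment 11 on the slow one, so that it arrives right after segment 10.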

Algorithm 1 DAPS [7]
 1: Smax ← 0
 2: for Pi ∈ {P1, P2, ..., Pn} do
 3:   SEQ_Pi ← InitializeVector()
 4: end for
 5: for Oi ∈ {O1, O2, ..., O_lcm(D_{i∈1,2,...,n})/Di} do
 6:   SEQ_Pi ← Append(SEQ_Pi, [Smax + 1, Smax + Ci])
 7: end for
 8: t ← 0
 9: while t < lcm(Di ∈ {D1, D2, ..., Dn}) do
10:   for Pi ∈ {P1, P2, ..., Pn} do
11:     if t ≡ 0 (mod Di) then
12:       Transmit(Pi, SEQ_Pi[t/Di])
13:       Smax ← Smax + Ci
14:     end if
15:   end for
16:   t ← t + 1
17: end while
Where:
• {P1, P2, ..., Pn}: set of paths
• {D1, D2, ..., Dn}: paths' respective forward delays
• Ci: congestion window of path Pi (in segments)
• SEQ_Pi: sequence numbers of packets to be transmitted on Pi

Figure 3. Goodput and OFO queue for bulk traffic between DAPS, OTIAS and minRTT: (a) 3G+WLAN; (b) WLAN+WLAN.

B. Out-of-order Transmission for In-order Arrival Scheduler (OTIAS)

The OTIAS algorithm [9] is based on the idea of scheduling more segments on a subflow than what it can currently send. Queues may therefore build up at each subflow of the sender, under the assumption that these segments will be sent as soon as there is space in the CWND for the subflow. When asked to schedule a new segment, the algorithm estimates its arrival time if sent over each subflow (T_i^j), and chooses the subflow with the earliest arrival time. The estimation is performed based on a subflow's RTT, its CWND, the number of in-flight packets and the number of already queued packets. If there is space in the CWND, the segment would be sent immediately, yielding an arrival time of approximately RTT/2 (assuming symmetric forward and backward delays). If the CWND is full, however, the segments will have to wait in the subflow's queue. Assuming a send rate of 1 CWND per RTT, the additional waiting time is calculated as RTT_to_wait_i^j. Algorithm 2 shows the main loop of the OTIAS mechanism.

Algorithm 2 OTIAS [9]
1: for each available subflow j do
2:   pkt_can_be_sent_j = cwnd_j − unacked_j
3:   RTT_to_wait_i^j = (not_yet_sent_j − pkt_can_be_sent_j) / cwnd_j
4:   T_i^j = (RTT_to_wait_i^j + 0.5) × srtt_j
5:   if T_i^j < minT then
6:     minT = T_i^j
7:     selected_subflow = j
8:   end if
9: end for

C. Comparative evaluation of DAPS and OTIAS

Although DAPS and OTIAS have the same goal of reducing HoL-blocking, they follow different approaches: DAPS creates a schedule for the distribution of future segments onto the available subflows for a scheduling run and follows this schedule until it is completed, after which planning for the next run is determined. On the other hand, OTIAS decides which subflow to use on a per-packet basis. It takes into account the RTTs and the queue sizes of the subflows at a given moment and is closer to MPTCP's default scheduler (minRTT) in this respect, albeit taking into account more information from the subflows.

OTIAS operates based on current data and is able to react more dynamically to network changes, whereas DAPS can only react to changes in the next scheduling run. OTIAS is however still less dynamic than MPTCP's minRTT, since it builds up queues on the subflows. If a segment that had already been sent is blocking the connection, e.g., because it was delayed or lost, the queued packets would linger at the sender longer than assumed, disturbing the created schedule. Moreover, MPTCP's default scheduler retransmission mechanism, retransmitting a packet on the fastest subflow [4], is not applicable if a send queue exists for a subflow, as that segment would have to wait in the queue before retransmission.
In the following, we present an evaluation of DAPS and OTIAS against MPTCP's minRTT with bulk, web and CBR traffic through emulations. We look at application goodput for bulk transfers, completion times for web transfers, and average application delay for CBR traffic. In all cases, we also sample the maximum value of the out-of-order (OFO) queue every 10 ms during the experiments and present the results.

1) Bulk: Figure 3 shows the goodput and OFO buffer size of DAPS, OTIAS and MPTCP's default scheduler for bulk transfer in both 3G+WLAN and WLAN+WLAN scenarios. In 3G+WLAN, OTIAS provides a goodput increase of 6% and requires 35% less OFO buffer compared to MPTCP's minRTT. On the other hand, DAPS provides a goodput decrease of 27% and requires 65% less OFO buffer compared to MPTCP's default scheduler. In WLAN+WLAN scenarios, MPTCP's default scheduler has a 3.5% lower goodput compared to OTIAS, which on the contrary takes about 87% more OFO buffer. DAPS delivers goodput values about 16% lower compared to MPTCP's default scheduler with about 97% more OFO buffer.

2) Web: Figure 4 shows the completion times and OFO buffer sizes for DAPS, OTIAS and MPTCP's default scheduler in both 3G+WLAN and WLAN+WLAN scenarios. For 3G+WLAN, in Figure 4(a), all scheduler algorithms perform similarly in terms of completion time. However, for larger object sizes, we observe a larger OFO buffer size. In WLAN+WLAN, in Figure 4(c), DAPS and OTIAS struggle when both paths have higher loss rates, because DAPS cannot react quickly enough to changes on the paths, and OTIAS builds queues that also do not allow immediate reaction. While the losses on the WLAN paths cause a higher OFO buffer size in WLAN+WLAN, the path heterogeneity is the main reason for the higher OFO size in 3G+WLAN.

Figure 4. Completion time and OFO queue for web traffic (Wikipedia, Amazon and Huffington Post) for DAPS, OTIAS and minRTT: (a) 3G+WLAN, completion time; (b) 3G+WLAN, OFO queue; (c) WLAN+WLAN, completion time; (d) WLAN+WLAN, OFO queue.

3) CBR: Figure 5 shows the average application delay and the OFO buffer size for DAPS, OTIAS and MPTCP's default scheduler in both 3G+WLAN and WLAN+WLAN scenarios. Both 3G+WLAN and WLAN+WLAN yield higher application delay with DAPS. OTIAS can reduce the usage of the 3G subflow in the 3G+WLAN scenario, leading to improved application delay compared to MPTCP's default scheduler. However, for the WLAN+WLAN scenario, OTIAS provides higher delay values compared to MPTCP due to the lack of a reinjection mechanism in its design. Moreover, MPTCP's default scheduler's PR mechanism can partially overcome path heterogeneity in 3G+WLAN, where we can observe bursts of packets on the 3G path, which lead to spikes in the OFO buffer, resulting in higher application delay.

Figure 5. Packet delay and OFO queue for CBR traffic for DAPS, OTIAS and minRTT: (a) 3G+WLAN; (b) WLAN+WLAN.

D. Successes and Failures of Existing Algorithms

Overall, we observe that, although all state-of-the-art approaches address the challenges of multipath scheduling in heterogeneous scenarios, trying to overcome receive-window limitation and, consequently, HoL-blocking, they still fail in some typical use-case settings, e.g., heterogeneous delays and/or loss rates, as well as with excessive delays due to buffering. Here, we comment on the strong and weak aspects of the state-of-the-art proposals just evaluated.

1) OTIAS: Although OTIAS can make decisions on a per-packet basis (subflow j and packet i), reacting fast and using current state from the network (the cwnd_j loop), it builds up queues on the subflows with the lowest RTTs, regardless of their CWND state, i.e., it does not restrict the scheduler if the CWND is full. In addition, the algorithm assumes symmetric forward delays (OWD = RTT/2), and scheduler reinjections (retransmissions) are not mentioned. While OTIAS can yield good results with heterogeneous RTTs, if the heterogeneity is too large and losses occur in one of the subflows, the algorithm will build up long queues in the subflows with lower RTTs, reducing their ability to overcome HoL-blocking. In homogeneous scenarios, the OTIAS scheduler delivers lower performance due to not using both subflows as fully as MPTCP's default scheduler.

2) DAPS: The DAPS implementation is more complex, requiring more memory at run-time to keep the schedule for a run. Furthermore, DAPS is not able to react to network changes in a timely manner, due to the long schedules arising from high heterogeneity in the subflow delays, i.e., a high LCM in Algorithm 1. Last but not least, DAPS will use all subflows that can send, even if a certain subflow's contribution is very low. This is the main contrast to both OTIAS and MPTCP's default scheduler, which can reduce the slow subflow's contribution if a faster subflow can sustain the required rate. This is particularly important for transfers where the sender is not saturated. Finally, similar to OTIAS, DAPS does not have a defined behaviour for scheduler reinjections.

V. BLEST: BLOCKING ESTIMATION-BASED MPTCP SCHEDULER

Based on the observations from Section IV, we introduce a new algorithm, BLEST, addressing the challenges of reducing HoL-blocking and spurious retransmissions, and hence increasing application performance in heterogeneous scenarios. The scheduling is based on a new metric estimating the amount of HoL-blocking which might result from scheduling a packet on a given subflow.

For each new segment, MPTCP's default scheduler, minRTT, chooses the subflow with the lowest RTT among all subflows ready to send, i.e., with space in the CWND. When MPTCP detects that it cannot send new data due to a full send window (the mirror of the receive window at the sender), it will resend the segment blocking the fastest subflow, but only if it hasn't been sent on that subflow before. It will also penalise the slow subflow responsible for blocking, halving its CWND.

The idea is to reduce its contribution, preventing further HoL-blocking. Such an approach reduces the chance of HoL-blocking only for a limited amount of time. In other words, after the CWND was reduced by penalisation, the congestion control will start increasing it again, until a recurrence of blocking. Furthermore, the approach is reactive, as it depends on blocking to trigger PR at the sender. The PR mechanism itself is detrimental in the long run, since it keeps the CWND of the slow subflow artificially low.

Figure 6. MPTCP example with BLEST: in step 1, segments 0...10 are in flight on subflow 1, the subflow with the lowest delay. In step 2, it is uncertain how many segments should be sent on subflow 2, which has a higher delay. While subflow 2's window could accommodate more data, only segments 11...12 are allocated, due to BLEST's blocking prediction. Here, minRTT would allocate as much data as fits into subflow 2's window given its CWND. In step 3, subflow 1 can advance with segments 13...20, because 0...10 were acknowledged. At step 4, both subflows can advance within MPTCP's send window, with subflow 1 carrying segments 24...32 and subflow 2 carrying 21...23.

To overcome the issues of the PR, we propose a proactive scheduler, where we decide at packet scheduling time whether to send packets over the slow subflow or not. The decision is based on MPTCP's send window. MPTCP maintains a send window on its control plane for each MPTCP connection, one level above the subflows. This window is necessary due to the full multiplexing among all subflows belonging to the same MPTCP connection. However, due to its scheduler design, if data is not acknowledged on one of the subflows, MPTCP's send window can be temporarily blocked, stalling the whole multipath connection.

BLEST assumes that a segment will occupy space in MPTCP's send window (MPTCP_SW) for at least RTT_S if it is sent now on subflow S, as illustrated in Figure 6. We assume that all segments in flight on S occupy space in the window for the same amount of time. This is a conservative assumption, as these segments can be acknowledged earlier. The remaining send window can be used by the faster subflow (i.e., the lower-RTT subflow), F. This means that HoL-blocking would occur if F were not able to send due to lack of space in the send window because of S. Therefore, we estimate the amount of data X that will be sent on F during RTT_S, and check whether this fits into MPTCP's send window. To estimate X, we assume that for every RTT_F, F's CWND grows by 1 (as it is done in congestion avoidance) and is always filled by the scheduler, as the model does not incorporate losses.

rtts = RTT_S / RTT_F
X = MSS_F · (CWND_F + (rtts − 1)/2) · rtts

If X · λ > |MPTCP_SW| − MSS_S · (inflight_S + 1), the next segment will not be sent on S. Instead, the scheduler waits for the faster subflow to become available. Essentially, while minRTT always opts to use an available subflow, our scheduler is able to skip a subflow, waiting for a more advantageous subflow which offers a lower risk of HoL-blocking and of the retransmissions that would consequently have been triggered.

Figure 7. 3G+WLAN and BLEST's λ parameter influence on bulk traffic with varying δλ = 0.001, 0.003, 0.005, 0.01, and 0.02, compared against minRTT.

The estimate of X, however, can be inaccurate at times. To address this, we introduce a correction factor λ to scale X. λ is adjusted as follows: HoL-blocking during one RTT_F is an event that triggers an increase of λ by δλ; the absence of HoL-blocking triggers a decrease by δλ. At the beginning of the connection we set λ=1.0, i.e., no correction of the estimation. Figure 7 shows how λ changes over time in our scenario with different δλ. With δλ = 0.001, we see that λ changes slowly towards a value that represents the reality on the (lossy WLAN) link. Note that X is over-estimated in the beginning of the transfer. Therefore, most of the traffic is sent over the WLAN link, leading to a reduced goodput. However, in time, the estimate is corrected by λ to reach a steady value where the HoL-blocking is minimised.

On the left side, Figure 7 shows the first 45 seconds of a bulk transfer and how λ corrects the estimation of the faster subflow's rate throughout the period (each dot in the plot curves shows the average and standard deviation over 1 s). On the right side, Figure 7 shows the effect on the OFO buffer size and, consequently, on the goodput for different δλ. λ is corrected to lower values than its initial setting of 1.0, because X is initially over-estimated.
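Putting the estimate of X, the decision rule and the λ adaptation together, BLEST's per-segment check can be sketched as follows. This is a simplified model with illustrative variable names, not the kernel implementation, and it assumes the send-window space and in-flight data are tracked in bytes:

```python
def blest_should_skip_slow_subflow(rtt_s, rtt_f, mss_f, cwnd_f,
                                   send_window_bytes, mss_s, inflight_s,
                                   lam=1.0):
    """Return True if sending the next segment on the slow subflow S is
    predicted to block MPTCP's send window, so that S should be skipped.

    X estimates the bytes the fast subflow F would send during one RTT
    of S, assuming F's CWND grows by one segment per RTT_F (congestion
    avoidance) and is always kept full by the scheduler."""
    rtts = rtt_s / rtt_f
    x = mss_f * (cwnd_f + (rtts - 1) / 2) * rtts
    space_left_if_sent_on_s = send_window_bytes - mss_s * (inflight_s + 1)
    return x * lam > space_left_if_sent_on_s

def blest_update_lambda(lam, hol_blocking_seen, delta=0.001):
    """lambda starts at 1.0 and is nudged by delta after every RTT_F:
    up if HoL-blocking was observed, down otherwise."""
    return lam + delta if hol_blocking_seen else lam - delta

# Example: 3G as S, WLAN as F, 16 KiB of MPTCP send window remaining.
skip = blest_should_skip_slow_subflow(rtt_s=0.130, rtt_f=0.050,
                                      mss_f=1428, cwnd_f=10,
                                      send_window_bytes=16 * 1024,
                                      mss_s=1428, inflight_s=2)
print(f"skip 3G for this segment: {skip}")  # True: X (~39 KiB) exceeds the ~12 KiB left
```

When the check is not triggered (the estimate fits into the remaining send window), the segment is sent on S just as minRTT would do; skipping S simply leaves the data for F's next transmission opportunity.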

VI. EVALUATION

One of MPTCP's goals is to perform at least as well as TCP on the best path. For this reason, we compare MPTCP's default scheduler, minRTT, and BLEST against single-path TCP on the 3G and WLAN paths. We include both 3G+WLAN and WLAN+WLAN scenarios in our evaluation to illustrate the improvements in heterogeneous settings, while not impacting MPTCP in homogeneous scenarios. In the following, we show emulation and real-network experiment results.

Figure 8. 3G+WLAN and WLAN+WLAN scenarios for bulk traffic with minRTT, BLEST and TCP on 3G and WLAN: (a) 3G+WLAN; (b) WLAN+WLAN.

A. Emulation Experiments

1) Bulk: Increasing application goodput for bulk transfer has been one of the most common ways to evaluate MPTCP's performance. Figures 8(a) and 8(b) compare the performance in terms of goodput and OFO queue size for minRTT and BLEST with bulk traffic in the 3G+WLAN and WLAN+WLAN scenarios, respectively. In 3G+WLAN, we observe that BLEST reduces the OFO buffer size by 19%, while it increases application goodput by 12%. Note that the MPTCP default scheduler's penalisation and retransmission (PR) mechanism has a particularly negative impact in 3G+WLAN. As illustrated in Table II, MPTCP's PR mechanism can send up to 0.53 MiB of retransmissions to overcome blocking of the WLAN path. BLEST achieves better aggregation with less OFO buffer, saving up to 80% of retransmissions. In WLAN+WLAN, BLEST achieves similar application goodput with a negligible OFO buffer size of 2.5 kiB compared to minRTT.

Table II. PENALISATION AND RETRANSMISSION MECHANISM TRIGGER IN 3G+WLAN WITH BULK TRAFFIC
Scenario    Traffic   Scheduler   Retrans.   Packets
3G+WLAN     Bulk      minRTT      366.37     0.53 MiB
3G+WLAN     Bulk      BLEST       70.3       0.1 MiB

2) Web: The total download time is not a perfect metric, as most browsers start rendering the page before the transmission is complete. However, we are focusing on the transport-level performance, and discard any browser-related optimisations. Figures 9(a) and 10(a) show the completion times for minRTT and BLEST for web traffic with different object sizes, see Table I. We also compare the OFO buffer sizes, shown in Figures 9(b) and 10(b), and, to quantify the contribution of the additional subflow with smaller web objects, the amount of bytes transferred on each subflow relative to the transfer size, see Figures 9(c) and 10(c). In 3G+WLAN, for smaller web objects such as Wikipedia, the contribution of the additional subflow (3G) can be considered negligible, with only up to 2% of the total transfer. However, the small contribution of the 3G path for Amazon can cause an impact of up to 7% reduction in the completion time for BLEST compared to minRTT, see Table III and Figure 9(b). For Huffington Post, although the contribution of the additional subflow is still comparably low (about 2%), the completion time for BLEST is 6% lower than minRTT. In WLAN+WLAN, BLEST provides an improvement of 3% for Huffington Post and 2% for Amazon in completion times compared to minRTT. Overall, Table III illustrates the benefits of BLEST: the lowest completion time is achieved by the proposed BLEST algorithm for both heterogeneous and homogeneous scenarios for all the websites evaluated.

Table III. AVERAGE WEB COMPLETION TIME, SEE FIGURES 4, 9, AND 10
                                          Completion Time [s]
Scenario     Traffic  Website             minRTT   OTIAS    DAPS    BLEST
3G+WLAN      Web      Wikipedia           0.421    0.392    0.435   0.337
3G+WLAN      Web      Amazon              1.60     1.724    1.789   1.503
3G+WLAN      Web      Huffington Post     4.87     4.858    4.932   4.62
WLAN+WLAN    Web      Wikipedia           0.398    0.4107   0.333   0.324
WLAN+WLAN    Web      Amazon              1.461    1.621    1.598   1.456
WLAN+WLAN    Web      Huffington Post     4.218    4.509    4.393   4.114

3) CBR: Live video has higher requirements on low latency compared to other forms of video streaming, e.g., video on demand. Moreover, live video is more sensitive to network delay variations and, therefore, impacts the user experience the most. As we want to assess whether MPTCP could be used for applications other than bulk traffic, we evaluate live video performance, which is more sensitive to latency. Figures 11(a) and 11(b) show the average application delay for minRTT and BLEST for CBR traffic at 1 Mbps. In the 3G+WLAN scenario, BLEST improved the application delay over minRTT by 8% for CBR (1 Mbps), and a slight improvement in OFO buffer size of 8% is also achieved, see also Table IV. In the same scenario and with the same application traffic, comparing BLEST to the results shown in Figure 5, BLEST performed worse than OTIAS with CBR, because OTIAS completely discarded the 3G path. In contrast to that, DAPS keeps utilising the 3G path. In WLAN+WLAN, shown in Figure 11(b), BLEST performed similarly to MPTCP's default scheduler, as expected.

Table IV. AVERAGE CBR APPLICATION DELAY, SEE FIGURES 5 AND 11
                                     Application Delay [ms]
Scenario     Traffic [Mbps]          minRTT   OTIAS   DAPS    BLEST
3G+WLAN      CBR, 1                  68       53.2    843.7   62.8
WLAN+WLAN    CBR, 1                  52.18    53.49   54.02   52.24

B. Real Experiments

Finally, we validate the performance of the different schedulers with real-network experiments within the same topology as shown in Figure 2 for the emulation experiments, but now constructed over NorNet [20]. To generate background traffic, we use Virtual Machines (VMs) from five commercial cloud service providers (2x in Europe, 1x in North America and 2x in Asia) connected via 100 Mbps links, as described in Section III, towards the server machine in Figure 12. We also use consumer hardware, with a RaspberryPi connected to a home DSL provider via WLAN and, via another interface, over 3G/3.5G to a mobile broadband operator. On the RaspberryPi side, background traffic from other connected devices congested both WLAN and 3G. The experimental setup is shown in Figure 12.

Figure 9. 3G+WLAN for web traffic with Wikipedia, Amazon and Huffington Post with minRTT, BLEST and TCP on 3G and WLAN: (a) completion time; (b) MPTCP OFO queue; (c) bytes on path: 3G and WLAN.

Figure 10. WLAN+WLAN for web traffic with Wikipedia, Amazon and Huffington Post with minRTT, BLEST and TCP on WLAN (WLAN0 and WLAN1): (a) completion time; (b) MPTCP OFO queue; (c) bytes on path: WLAN0 and WLAN1.

In our experiments, we used the same parameters and settings from Section III as well as the same traffic from Section VI-A. We evaluate the performance of the different schedulers under realistic network conditions, with real-network experiments, in a constructed non-shared bottleneck scenario as used in the emulation experiments shown in Figure 2.

1) Bulk: Figure 13 shows the application goodput and OFO buffer size for bulk traffic with minRTT and BLEST compared to TCP on the 3G and WLAN paths. BLEST achieves on average 18% higher application goodput aggregation, while reducing the amount of retransmissions by more than 37%, see Table V, with a slight improvement in OFO buffer size of 3%.

2) Web: Figures 14(a) and 14(b) show the completion times and OFO buffer sizes for the web transfers. With larger object sizes, BLEST reduces the completion time by up to 10%, while reducing the OFO size by up to 25%. Thus, MPTCP's performance with BLEST is closer to the WLAN path, only 3% worse than TCP on the best path (WLAN).

3) CBR: Figure 15(a) shows the average application delay and OFO buffer size for the 1 Mbps CBR traffic with both minRTT and BLEST. BLEST improves the application delay by 11% while reducing the OFO size by more than 20%. We noticed, looking at single experiments, that MPTCP's default scheduler had small packet bursts sent over 3G, causing some spikes in the OFO buffer and, consequently, increasing the application delay. BLEST, however, used the 3G path in the majority of cases to send only a few single packets.

Figure 11. 3G+WLAN and WLAN+WLAN scenarios for 1 Mbps CBR traffic with minRTT, BLEST and TCP on 3G and WLAN: (a) 3G+WLAN; (b) WLAN+WLAN.

Figure 12. Real network experiment setup: a multi-homed client (WLAN and 3G subflows) downloads from a server in the lab network, with background traffic generated from cloud VMs in Germany, the U.K., the U.S.A. and India.

Figure 13. 3G+WLAN for bulk traffic in real experiments (goodput and MPTCP OFO queue for minRTT, BLEST and TCP on 3G and WLAN), see Figure 12.

VII. CONCLUSION

Path heterogeneity is rather the rule than the exception with MPTCP.

Figure 14. 3G+WLAN for web traffic with Wikipedia, Amazon and Huffington Post in real experiments, see Figure 12: (a) completion time; (b) MPTCP OFO queue.

Figure 15. 3G+WLAN for CBR traffic in real experiments, see Figure 12: (a) delay and MPTCP OFO queue.

Even subflows from a single machine can follow different paths to the destination with distinct delay, capacity and losses. Such path heterogeneity results in HoL-blocking at the receiver, undermining MPTCP's overall performance. To overcome path heterogeneity, MPTCP follows a reactive approach and penalises the subflows that cause HoL-blocking, through the penalisation and retransmission mechanism.

In this paper, we highlighted the limitations of such an approach for different application types in heterogeneous scenarios, through emulations and real-world experiments. Moreover, we have implemented and systematically evaluated scheduling algorithms aiming at mitigating this issue. We found, however, that neither was able to perform well in all multi-homing scenarios and traffic use-cases. We therefore proposed BLEST, a new scheduler based on a BLocking time ESTimation. Compared to previous proposals, BLEST directly considers the prospective HoL-blocking as a metric to minimise and, based on this metric, dynamically selects whether it is worthwhile to schedule a packet on a specific subflow, or to ignore it. This allowed us to eliminate the penalisation and retransmission mechanism by implementing a more robust, proactive scheduling metric. We evaluated our algorithm in emulated and real experiments with different application traffic (CBR, web and bulk). By comparing BLEST with minRTT, as well as the alternative DAPS and OTIAS, we showed that our approach outperforms all algorithms across the presented scenarios, achieving its goal of reducing HoL-blocking and, consequently, unnecessary retransmissions. This results in increased application goodput, lower packet delay and completion time, and reduced receiver buffer size.

Table V. PENALISATION AND RETRANSMISSION ALGORITHM RETRANSMISSIONS' OVERHEAD IN 3G+WLAN WITH BULK TRAFFIC SHOWN IN FIGURE 12
Scenario    Traffic   Scheduler   Retrans. Packets
3G+WLAN     Bulk      minRTT      33.42
3G+WLAN     Bulk      BLEST       21.3

For future work, we believe that both BLEST and OTIAS follow the right approach towards robust and effective scheduling for heterogeneous scenarios. We want to expand our evaluation with the method proposed in [5], add other elements of heterogeneity, e.g., other network access technologies, evaluate different application performance metrics, e.g., throughput aggregation versus delay constraints, increase the number of subflows, and test the approach in mobility scenarios.

REFERENCES

[1] G. Sarwar, R. Boreli, E. Lochin, and A. Mifdaoui, "Performance evaluation of multipath transport protocol in asymmetric heterogeneous network environment," in ISCIT 2012.
[2] C. Raiciu, C. Paasch, S. Barré, A. Ford, M. Honda, F. Duchêne, O. Bonaventure, and M. Handley, "How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP," in NSDI 2012.
[3] S. Ferlin, T. Dreibholz, and O. Alay, "Multi-path transport over heterogeneous wireless networks: Does it really pay off?" in GLOBECOM 2014.
[4] C. Paasch, S. Ferlin, O. Alay, and O. Bonaventure, "Experimental evaluation of multipath TCP schedulers," in ACM SIGCOMM Capacity Sharing Workshop (CSWS), 2014.
[5] C. Paasch, "Improving multipath TCP," Ph.D. dissertation, UCLouvain / ICTEAM / EPL, Nov. 2014.
[6] A. Singh, C. Goerg, A. Timm-Giel, M. Scharf, and T.-R. Banniza, "Performance comparison of scheduling algorithms for multipath transfer," in GLOBECOM 2012.
[7] G. Sarwar, R. Boreli, E. Lochin, A. Mifdaoui, and G. Smith, "Mitigating receiver's buffer blocking by delay aware packet scheduling in multipath data transfer," in WAINA 2013.
[8] N. Kuhn, E. Lochin, A. Mifdaoui, G. Sarwar, O. Mehani, and R. Boreli, "DAPS: Intelligent delay-aware packet scheduling for multipath transport," in ICC 2014.
[9] F. Yang, Q. Wang, and P. Amer, "Out-of-order transmission for in-order arrival scheduling policy for multipath TCP," in WAINA 2014.
[10] B. Arzani, A. Gurney, S. Cheng, R. Guerin, and B. T. Loo, "Impact of path characteristics and scheduling policies on MPTCP performance," in WAINA 2014.
[11] S. Barré, C. Paasch, and O. Bonaventure, "Multipath TCP: From theory to practice," in IFIP Networking 2011.
[12] C. Paasch, G. Detal, F. Duchêne, C. Raiciu, and O. Bonaventure, "Exploring mobile/WiFi handover with multipath TCP," in CellNet 2012.
[13] C. Paasch, R. Khalili, and O. Bonaventure, "On the benefits of applying experimental design to improve multipath TCP," in CoNEXT 2013.
[14] C. Raiciu, S. Barré, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley, "Improving Datacenter Performance and Robustness with Multipath TCP," in SIGCOMM 2011, Toronto, ON, Canada.
[15] I. A. Halepoto, F. Lau, and Z. Niu, "Management of buffer space for the concurrent multipath transfer over dissimilar paths," in DINWC 2015.
[16] J. Ahrenholz, "Comparison of CORE network emulation platforms," in MILCOM 2010.
[17] Sandvine Intelligent Broadband Networks, "Global Internet Phenomena Report," Jul. 2013. [Online]. Available: https://web.archive.org/web/20141216103806/https://www.sandvine.com/downloads/general/global-internet-phenomena/2013/sandvine-global-internet-phenomena-report-1h-2013.pdf
[18] Cisco Visual Networking Index, "Forecast and Methodology, 2014–2019," 2014. [Online]. Available: http://www.cisco.com/c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generation-network/white_paper_c11-481360.pdf
[19] A. Botta, A. Dainotti, and A. Pescapé, "A tool for the generation of realistic network workload for emerging networking scenarios," Computer Networks, vol. 56, no. 15, pp. 3531–3547, 2012.
[20] E. G. Gran, T. Dreibholz, and A. Kvalbein, "NorNet core — a multi-homed research testbed," Computer Networks, Special Issue on Future Internet Testbeds, vol. 61, pp. 75–87, Mar. 2014.
