BLEST: Blocking Estimation-based MPTCP Scheduler for Heterogeneous Networks
Abstract—With the widespread availability of multi-homed devices, multipath transport protocols such as MPTCP are becoming increasingly relevant to support better use of multiple connectivity through capacity aggregation and seamless failover. However, capacity aggregation over heterogeneous paths, such as those offered by cellular and Wi-Fi networks, is problematic. It causes packet reordering, leading to head-of-line (HoL) blocking at the receiver, increased end-to-end delays and lower application goodput. MPTCP tackles this issue by penalising the use of longer paths and by increasing buffer sizes; this, however, results in suboptimal resource usage. In this paper, we first evaluate and compare the performance of default MPTCP and alternative state-of-the-art schedulers, all implemented in the Linux kernel, for a range of traffic patterns and network environments. This allows us to identify shortcomings of the various approaches. We then propose a send-window BLocking ESTimation scheduler, BLEST, which aims to minimise HoL-blocking in heterogeneous networks, thereby increasing the potential for capacity aggregation by reducing the number of spurious retransmissions. The resulting scheduler increases application goodput by 12% for bulk traffic while reducing unnecessary retransmissions by 80%, compared to default MPTCP and other schedulers.

Index Terms—MPTCP, multipath, transport protocol, packet scheduling, head-of-line blocking, receive window limitation, heterogeneous networks

I. INTRODUCTION

Multipath transport protocols, and particularly Multipath TCP (MPTCP), allow multi-homed devices such as mobile phones to make better use of the available network resources. Two main advantages are envisioned: capacity aggregation across multiple links, and the ability to maintain the connection if one of the paths fails. Capacity aggregation is, however, challenging with heterogeneous paths, such as those offered by cellular and Wi-Fi, in particular because of delay heterogeneity [1]. This heterogeneity results in packet reordering, leading to head-of-line (HoL) blocking, increased out-of-order (OFO) buffer use at the receiver and, ultimately, reduced goodput.
MPTCP's default scheduler, minRTT, is based on Round-Trip Time (RTT). minRTT starts by filling the congestion window (CWND) of the subflow with the lowest RTT before advancing to other subflows with higher RTTs. When one of these subflows blocks the connection, e.g., due to head-of-line blocking, MPTCP's default scheduler retransmits the blocking segments on the lowest-delay path and penalises the longer (i.e., higher-delay) paths that caused the issue [2]. This has a long-term impact on the CWND of these subflows, which are limited in their growth [3], leading to sub-optimal capacity aggregation, as higher-delay paths are underused [4]. As a rule of thumb, it is also recommended to increase the receive buffer size to further limit HoL-blocking situations [5].
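To make the default policy concrete, the following is a minimal sketch of minRTT-style subflow selection; the Subflow fields and the has_cwnd_space() helper are illustrative simplifications, not the data structures of the Linux MPTCP implementation.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Subflow:
        name: str
        srtt_ms: float   # smoothed RTT estimate
        cwnd: int        # congestion window, in segments
        in_flight: int   # unacknowledged segments

        def has_cwnd_space(self) -> bool:
            return self.in_flight < self.cwnd

    def minrtt_select(subflows: List[Subflow]) -> Optional[Subflow]:
        """Return the lowest-RTT subflow that still has CWND space, if any."""
        eligible = [sf for sf in subflows if sf.has_cwnd_space()]
        return min(eligible, key=lambda sf: sf.srtt_ms) if eligible else None

    if __name__ == "__main__":
        wlan = Subflow("WLAN", srtt_ms=50, cwnd=10, in_flight=10)  # window full
        cell = Subflow("3G", srtt_ms=130, cwnd=8, in_flight=3)
        chosen = minrtt_select([wlan, cell])
        print(chosen.name if chosen else "none")  # WLAN is full, so 3G is used

Once the lowest-RTT subflow's window is full, traffic spills over to the higher-RTT subflow, which is exactly the situation in which the delay-heterogeneity issues described above arise.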
The need for multipath transport protocol schedulers is known, and a number of proposals have been made and evaluated in the past [6]. However, in the specific case of heterogeneous paths, more care is required to avoid the issues discussed above. Such schedulers have been proposed in [7]–[9], based on the concept of sending packets out of order so that they reach the receiver in order. There exists, however, no comparison of these schedulers with the MPTCP default scheduler in a consistent environment.

In this paper, we first offer a comparative study of the proposed MPTCP schedulers [7]–[9], by experimentally evaluating our Linux implementation of these algorithms. We evaluate their behaviour for different traffic types (Web, Bulk, CBR). The performance of these schedulers is compared to MPTCP's default scheduler as well as to plain single-path TCP, in terms of application goodput (for bulk traffic), end-to-end delays (CBR) and completion time (Web). Based on observations in these experiments, we identify where the studied mechanisms offer the best performance and what they fail to properly account for. We also take insight from the observations of [10] that not all subflows should be used at all times and that, while scheduling is needed to complement pure congestion control, path selection and send buffer management are also primordial. We then propose a novel BLocking ESTimation-based scheduler, BLEST, which takes a proactive stand towards minimising HoL-blocking. Rather than penalising the slow subflows, BLEST estimates whether a path will cause HoL-blocking and dynamically adapts scheduling to prevent blocking. Although BLEST is designed for heterogeneous paths, we show in our experiments that it works as well as MPTCP's minRTT scheduler in homogeneous scenarios.¹

¹ BLEST's code is available at http://nicta.info/mptcp-blest.

The remainder of this paper is organised as follows. We present the background to this work and show motivating examples in the next section. We describe our evaluation setup in Section III. In Section IV, we discuss our implementation of the different schedulers [7]–[9] and compare their performance side-by-side with MPTCP's default scheduler.
Based on observations in these experiments, we propose, in Section V, a proactive minimum-delay scheduler that can predict send-window blocking. We evaluate its performance in Section VI, both in emulated and real multipath environments. We finally offer some concluding remarks.

II. BACKGROUND AND MOTIVATION

Figure 1. Download time for (a) Wikipedia and (b) Amazon with 3G, WLAN and MPTCP.
Figure 2. Emulation experiment setup.

III. MEASUREMENT SETUP

We used CORE [16] for the initial evaluation. CORE is a network emulator able to run a real network stack implementation within Linux containers, making it suitable for avoiding simulation model simplifications. Figure 2 shows the emulation topology. Bottleneck 1 was loaded with background traffic from Server 1 to Client 1, and bottleneck 2 with traffic from Server 2 to Client 2. The link characteristics for the WLAN and 3G links are set as follows:
• WLAN: Capacity = 25 Mbit/s, Delay = 25 ms, Loss = 1%
• 3G: Capacity = 5 Mbit/s, Delay = 65 ms, Loss = 0%
Based on measurements carried out in real networks, the queue lengths at each router interface were set to 100 packets for WLAN and 3750 packets for 3G. The losses applied to the WLAN path are random.
1) Network and System Characteristics: System settings are known to impact TCP's performance. In order to emulate realistic network scenarios, we used system settings close to the standard characteristics of each technology. The TCP buffer sizes (send/receive) were set to be equivalent to widely known Android settings, which are configured as follows:
• Homogeneous (WLAN): 1024 KiB / 2048 KiB
• Heterogeneous (3G+WLAN): 1024 KiB / 2048 KiB
For bulk traffic experiments, we set both send and receive buffers to 16 MiB to evaluate MPTCP's aggregation capability. To ensure independence between runs, the cached TCP values were flushed after every run. We focused on congestion avoidance; therefore, we discarded the initial phase of each experiment and analysed a period of 90 s for bulk and constant bitrate (CBR) traffic. For single-path TCP flows, we used TCP Reno, thereby comparing fairly against MPTCP-OLIA.³

³ TCP-Linux kernel 3.14.33 is used throughout our evaluations.
2) Application Traffic: We considered three different types.

a) Video Streaming: We considered constant bit-rate (CBR) video traffic with a frame size of 5 KiB at the application level and a rate of 1 Mbps. This is in line with recent measurement studies [17] showing that more than 53% of the downstream traffic in North America is video streaming, and with other reports [18] predicting a further increase.

b) Web Traffic: We selected three websites of different sizes (small, medium and large, see Table I) as a good set of typical website sizes. To mimic the behaviour of a real browser, downloads were performed with 6 concurrent connections.

Table I
WEB TRAFFIC GENERATION
Domain name                      Number of Objects   Size of Objects
http://www.wikipedia.org         15                  72 KiB
http://www.amazon.com            54                  1024 KiB
http://www.huffingtonpost.com    138                 3994 KiB

c) Bulk Transfer: We completed the evaluation with the most common use case for MPTCP: a bulk transfer of 64 MiB.
3) Background Traffic: A synthetic mix of TCP and UDP traffic was generated with D-ITG [19] as background traffic in order to create a realistic environment. The TCP traffic was composed of saturated-sender and rate-limited TCP flows with an exponentially distributed mean rate of 157 pps. The UDP traffic was composed of UDP on/off flows with Pareto-distributed on times and exponentially distributed off times. Each flow has an exponentially distributed mean rate of 100 kbps in the heterogeneous scenario and 500 kbps in the homogeneous scenario. Packet sizes were varied with a mean of 1000 bytes and RTTs between 20 and 100 ms. We repeated all experiment settings 50 times, in both emulation and real scenarios.
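As an illustration of this traffic mix, the sketch below draws the parameters of one UDP on/off background flow. The on/off mean durations and the Pareto shape are illustrative assumptions (the description above fixes only the distribution families, the mean rates and the mean packet size), and D-ITG itself is used to generate the actual flows.

    import random

    def udp_onoff_flow(mean_rate_kbps=100.0, mean_pkt_size_B=1000,
                       pareto_shape=1.5, mean_on_s=1.0, mean_off_s=1.0):
        """Draw the parameters of one UDP on/off background flow:
        exponentially distributed mean rate, Pareto-distributed on time,
        exponentially distributed off time (durations are assumptions)."""
        rate_kbps = random.expovariate(1.0 / mean_rate_kbps)
        # random.paretovariate(a) has minimum 1 and mean a/(a-1);
        # rescale so the on time has the requested mean.
        on_s = random.paretovariate(pareto_shape) * mean_on_s * (pareto_shape - 1) / pareto_shape
        off_s = random.expovariate(1.0 / mean_off_s)
        return {"rate_kbps": rate_kbps, "on_s": on_s,
                "off_s": off_s, "pkt_size_B": mean_pkt_size_B}

    if __name__ == "__main__":
        random.seed(1)
        for _ in range(3):
            print(udp_onoff_flow())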
IV. SCHEDULING AGAINST HOL-BLOCKING

In the following, we discuss both the Delay-Aware Packet Scheduler (DAPS) [7], [8] and the Out-of-order Transmission for In-order Arrival Scheduler (OTIAS) [9], evaluating them in common scenarios and commenting on their implementation.

A. Delay-Aware Packet Scheduler (DAPS)

The DAPS algorithm was proposed in two versions. In [7], it pursues the goal of making segments arrive in order by planning which subflows the next segments should be sent over, based on both the forward delay and the CWND of each subflow. A schedule is created to span the least common multiple (LCM) of the forward delays, lcm(Di ∈ {D1, D2, ..., Dn}). Algorithm 1 shows the main loop of the mechanism.

Algorithm 1 DAPS [7]
 3:   SEQ_Pi ← InitializeVector()
 4: end for
 5: for Pi ∈ {O1, O2, ..., O_lcm(Di)/Di} do
 7: end for
 8: t ← 0
 9: while t < lcm(Di ∈ {D1, D2, ..., Dn}) do
10:   for Pi ∈ {P1, P2, ..., Pn} do
11:     if t ≡ 0 (mod Di) then
12:       Transmit(Pi, SEQ_Pi[t/Di])
13:       Smax ← Smax + Ci
14:     end if
15:   end for
16:   t ← t + 1
17: end while
Where:
• {P1, P2, ..., Pn}: the set of paths
• {D1, D2, ..., Dn}: the paths' respective forward delays
• SEQ_Pi: sequence numbers of packets to be transmitted on Pi

As an example, assume two subflows with similar capacities, but with one subflow having a forward delay ten times higher than the fast subflow. DAPS will derive the following schedule: segments 1...10 will be sent on the fast subflow, and segment 11 on the other subflow. Ideally, segment 11 will arrive right after segment 10, thereby avoiding HoL-blocking.
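The schedule construction described above can be sketched as follows. This is an illustration based on the description of [7] (one segment per transmission opportunity, i.e., the per-path CWND share is simplified away), not the authors' implementation.

    from math import lcm

    def daps_schedule(forward_delays):
        """forward_delays: dict path -> forward delay Di (integer time units).
        Returns (send_time, path, seqno) entries covering one lcm(Di) window."""
        horizon = lcm(*forward_delays.values())
        # One transmission opportunity per path every Di units; a segment sent
        # at time t on path Pi is estimated to arrive at t + Di.
        opportunities = [
            (t + d, d, t, path)   # (arrival, delay for tie-break, send time, path)
            for path, d in forward_delays.items()
            for t in range(0, horizon, d)
        ]
        # The earliest estimated arrival gets the lowest sequence number, so
        # segments should reach the receiver in order; ties favour the
        # lower-delay path.
        opportunities.sort()
        return sorted(
            (send_t, path, seqno)
            for seqno, (_, _, send_t, path) in enumerate(opportunities, start=1)
        )

    if __name__ == "__main__":
        # Similar capacities, forward delays 1 and 10: segments 1..10 go to
        # the fast subflow and segment 11 to the slow one, as in the example.
        for send_t, path, seqno in daps_schedule({"fast": 1, "slow": 10}):
            print(f"t={send_t:2d}  {path:<4}  seq {seqno}")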
In [8], DAPS is formulated for a scenario with only two subflows (rs and rf). It is also a simplification of the original algorithm [7], as it does not take CWND asymmetry into account, only considering the subflows' RTT ratio (η) and the CWND of the fast subflow. Since both algorithms are comparable, we consider only the original DAPS [7] in our evaluations. We ignore the simplifications presented in [8], as they were only introduced to ease the implementation in the ns-2 model of CMT-SCTP.

B. Out-of-order Transmission for In-order Arrival Scheduler (OTIAS)

The OTIAS algorithm [9] is based on the idea of scheduling more segments on a subflow than it can currently send. Queues may therefore build up at each subflow of the sender, under the assumption that these segments will be sent as soon as there is space in the subflow's CWND. When asked to schedule a new segment, the algorithm estimates its arrival time if sent over each subflow (T_i^j) and chooses the subflow with the earliest arrival time. The estimation is based on a subflow's RTT, its CWND, the number of in-flight packets and the number of already queued packets. If there is space in the CWND, the segment would be sent immediately, yielding an arrival time of approximately RTT/2 (assuming symmetric forward and backward delays). If the CWND is full, however, the segments have to wait in the subflow's queue. Assuming a send rate of one CWND per RTT, the additional waiting time is calculated as RTT_to_wait_i^j. Algorithm 2 shows the main loop of the OTIAS mechanism.

Algorithm 2 OTIAS [9]
1: for each available subflow j do
2:   pkt_can_be_sent_j = cwnd_j − unacked_j
3:   RTT_to_wait_i^j = (not_yet_sent_j − pkt_can_be_sent_j) / cwnd_j
4:   T_i^j = (RTT_to_wait_i^j + 0.5) × srtt_j
5:   if T_i^j < minT then
6:     minT = T_i^j
7:     selected_subflow = j
8:   end if
9: end for
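Algorithm 2 translates almost directly into the following sketch, assuming one segment per scheduling decision and illustrative field names; this is not the kernel implementation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Subflow:
        name: str
        srtt_ms: float     # smoothed RTT
        cwnd: int          # congestion window, in segments
        unacked: int       # in-flight segments
        not_yet_sent: int  # segments already queued on this subflow

    def otias_select(subflows: List[Subflow]) -> Subflow:
        """Pick the subflow with the earliest estimated arrival time T."""
        def arrival_time_ms(sf: Subflow) -> float:
            can_send_now = max(sf.cwnd - sf.unacked, 0)
            backlog = max(sf.not_yet_sent - can_send_now, 0)
            # One CWND is drained per RTT, so the backlog waits backlog/cwnd
            # RTTs; the segment itself then needs about RTT/2 to reach the
            # receiver (symmetric forward and backward delays assumed).
            rtts_to_wait = backlog / sf.cwnd
            return (rtts_to_wait + 0.5) * sf.srtt_ms
        return min(subflows, key=arrival_time_ms)

    if __name__ == "__main__":
        wlan = Subflow("WLAN", srtt_ms=50, cwnd=10, unacked=10, not_yet_sent=15)
        cell = Subflow("3G", srtt_ms=130, cwnd=8, unacked=4, not_yet_sent=0)
        # WLAN's backlog pushes its estimate to (1.5 + 0.5) * 50 = 100 ms,
        # while the idle 3G subflow yields 0.5 * 130 = 65 ms, so OTIAS
        # queues this segment on 3G despite its higher RTT.
        print(otias_select([wlan, cell]).name)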
C. Comparative Evaluation of DAPS and OTIAS

Although DAPS and OTIAS share the goal of reducing HoL-blocking, they follow different approaches: DAPS creates a schedule for the distribution of future segments onto the available subflows for a scheduling run and follows this schedule until it is completed, after which planning for the next run is determined. OTIAS, on the other hand, decides which subflow to use on a per-packet basis. It takes into account the RTTs and the queue sizes of the subflows at a given moment, and it is closer to MPTCP's default scheduler (minRTT) in this respect, albeit taking more information from the subflows into account.

OTIAS operates on current data and is able to react more dynamically to network changes, whereas DAPS can only react to changes in the next scheduling run. OTIAS is, however, still less dynamic than MPTCP's minRTT, since it builds up queues on the subflows. If a segment that has already been sent is blocking the connection, e.g., because it is delayed or lost, the queued packets linger at the sender longer than assumed, disturbing the created schedule. Moreover, MPTCP's default scheduler retransmission mechanism, which retransmits a blocking packet on the fastest subflow [4], is not applicable if a send queue exists for a subflow, as that segment would have to wait in the queue before retransmission.

In the following, we present an evaluation of DAPS and OTIAS against MPTCP's minRTT with bulk, web and CBR traffic through emulations. We look at application goodput for bulk transfers, completion times for web transfers, and average application delay for CBR traffic. In all cases, we also sample the maximum value of the out-of-order (OFO) queue every 10 ms during the experiments and present the results.

1) Bulk: Figure 3 shows the goodput and OFO buffer size of DAPS, OTIAS and MPTCP's default scheduler for bulk transfer in both the 3G+WLAN and WLAN+WLAN scenarios. OTIAS provides a goodput increase of 6% and requires 35% less OFO buffer compared to MPTCP's minRTT. On the other hand, DAPS shows a goodput decrease of 27% and requires 65% less OFO buffer compared to MPTCP's default scheduler. In the WLAN+WLAN scenario, MPTCP's default scheduler has a 3.5% lower goodput compared to OTIAS, which, on the contrary, takes about 87% more OFO buffer. DAPS delivers about 16% less goodput compared to MPTCP's default scheduler with about 97% more OFO buffer.

Figure 3. Goodput and OFO queue for bulk traffic for DAPS, OTIAS and minRTT: (a) 3G+WLAN, (b) WLAN+WLAN.

2) Web: Figure 4 shows the completion times and OFO buffer sizes for DAPS, OTIAS and MPTCP's default scheduler in both the 3G+WLAN and WLAN+WLAN scenarios. For 3G+WLAN, in Figure 4(a), all scheduling algorithms perform similarly in terms of completion time. However, for larger object sizes, we observe a larger OFO buffer size. In WLAN+WLAN, in Figure 4(c), DAPS and OTIAS struggle
Figure 4. Completion time and OFO queue for web traffic with DAPS, OTIAS and minRTT: (a) 3G+WLAN, completion time; (b) 3G+WLAN, OFO queue.

Figure 5. Packet delay and OFO queue for CBR traffic for DAPS, OTIAS and minRTT.
Figure 6. MPTCP example with BLEST: In (1), segments 0...10 are in flight on subflow 1, the subflow with the lowest delay. In (2), it is uncertain how many segments should be sent on subflow 2, which has a higher delay. While subflow 2's window could accommodate more data, only segments 11...12 are allocated, due to BLEST's blocking prediction. Here, minRTT would allocate as much data as fits into subflow 2's window given its CWND. In (3), subflow 1 can advance with segments 13...20, because 0...10 were acknowledged. At (4), both subflows can advance within MPTCP's send window, with subflow 1 carrying segments 24...32 and subflow 2 carrying 21...23.
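The allocation decision in Figure 6 can be illustrated with a simplified blocking estimate: before committing a segment to the slower subflow, check whether the data the faster subflow could deliver during one RTT of the slower subflow still fits into the remaining shared send window. The sketch below captures this idea with hypothetical field names; it is a simplified reading of the approach, not the exact estimator used in the BLEST implementation.

    from dataclasses import dataclass

    @dataclass
    class Subflow:
        srtt_ms: float   # smoothed RTT
        cwnd: int        # congestion window, in segments
        inflight: int    # unacknowledged segments

    def would_block(fast: Subflow, slow: Subflow, send_window: int) -> bool:
        """Predict whether sending one more segment on `slow` would HoL-block
        MPTCP's shared send window (all quantities in segments)."""
        rtt_ratio = slow.srtt_ms / fast.srtt_ms
        # Segments the fast subflow could transmit while the slow segment is
        # still in flight (roughly rtt_ratio congestion windows' worth).
        fast_capability = fast.cwnd * rtt_ratio
        # Send-window space left once the slow subflow's in-flight data plus
        # the candidate segment are accounted for.
        remaining_window = send_window - (slow.inflight + 1)
        return fast_capability > remaining_window

    if __name__ == "__main__":
        fast = Subflow(srtt_ms=30, cwnd=10, inflight=10)   # WLAN-like, CWND full
        slow = Subflow(srtt_ms=150, cwnd=8, inflight=2)    # 3G-like
        # With a small shared send window the slow path is predicted to block
        # the connection; with a larger window the segment is allowed through.
        print(would_block(fast, slow, send_window=40))   # True  -> skip 3G for now
        print(would_block(fast, slow, send_window=80))   # False -> send on 3G

When the estimate predicts blocking, the segment is simply not scheduled on the slower subflow for now, which avoids the reordering that would otherwise trigger minRTT's penalisation and retransmission behaviour.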
on blocking to trigger PR at the sender. The PR mechanism itself is detrimental in the long run, since it keeps the CWND

Figure 7. Goodput [kiBps] and the BLEST parameters λ and δ (δλ = 0.02).
A. Emulation Experiments

1) Bulk: Increasing application goodput for bulk transfers has been one of the most common ways to evaluate MPTCP's performance. Figures 8(a) and 8(b) compare the performance in terms of goodput and OFO queue size for minRTT and BLEST with bulk traffic in the 3G+WLAN and WLAN+WLAN scenarios, respectively. In 3G+WLAN, we observe that BLEST reduces the OFO buffer size by 19%, while it increases application goodput by 12%. Note that the MPTCP default scheduler's penalisation and retransmission (PR) mechanism has a particularly negative impact in 3G+WLAN. As illustrated in Table II, MPTCP's PR mechanism can send up to 0.53 MiB of retransmissions to overcome blocking of the WLAN path. BLEST achieves better aggregation with less OFO buffer, saving up to 80% of the retransmissions. In WLAN+WLAN, BLEST achieves similar application goodput with a negligible OFO buffer size of 2.5 kiB compared to minRTT.

Figure 8. 3G+WLAN and WLAN+WLAN scenarios for bulk traffic with minRTT, BLEST and TCP on 3G and WLAN: (a) 3G+WLAN, (b) WLAN+WLAN.

Table II
PENALISATION AND RETRANSMISSION MECHANISM TRIGGER IN 3G+WLAN WITH BULK TRAFFIC SHOWN IN FIGURE 8
Scenario    Traffic   Scheduler   Packets   Retrans.
3G+WLAN     Bulk      minRTT      366.37    0.53 MiB
3G+WLAN     Bulk      BLEST       70.3      0.1 MiB

2) Web: The total download time is not a perfect metric, as most browsers start rendering a page before the transmission is complete. However, we focus on transport-level performance and discard any browser-related optimisations. Figures 9(a) and 10(a) show the completion times for minRTT and BLEST for web traffic with different object sizes (see Table I). We also compare the OFO buffer sizes, shown in Figures 9(b) and 10(b), and, to quantify the contribution of the additional subflow for smaller web objects, the amount of bytes transferred on each subflow relative to the transfer size, see Figures 9(c) and 10(c). In 3G+WLAN, for smaller web objects such as Wikipedia, the contribution of the additional subflow (3G) can be considered negligible, with only up to 2% of the total transfer. However, the small contribution of the 3G path for Amazon can cause an impact of up to 7% reduction in the completion time for BLEST compared to minRTT, see Table III and Figure 9(b). For Huffington Post, although the contribution of the additional subflow is still comparably low (about 2%), the completion time for BLEST is 6% lower than with minRTT. In WLAN+WLAN, BLEST provides an improvement of 3% for Huffington Post and 2% for Amazon. The best completion time is achieved by the proposed BLEST algorithm in both the heterogeneous and the homogeneous scenario for all the websites evaluated.

Figure 9. 3G+WLAN for web traffic with Wikipedia, Amazon and Huffington Post with minRTT, BLEST and TCP on 3G and WLAN: (a) Completion time, (b) MPTCP OFO queue, (c) Bytes on path (3G and WLAN).

Figure 10. WLAN+WLAN for web traffic with Wikipedia, Amazon and Huffington Post with minRTT, BLEST and TCP on WLAN (WLAN0 and WLAN1): (a) Completion time, (b) MPTCP OFO queue, (c) Bytes on path (WLAN0 and WLAN1).

Table III
AVERAGE WEB COMPLETION TIME, SEE FIGURES 4, 9, AND 10
                                          Completion Time [s]
Scenario     Traffic   Website            minRTT   OTIAS    DAPS     BLEST
3G+WLAN      Web       Wikipedia          0.421    0.392    0.435    0.337
3G+WLAN      Web       Amazon             1.60     1.724    1.789    1.503
3G+WLAN      Web       Huffington Post    4.87     4.858    4.932    4.62
WLAN+WLAN    Web       Wikipedia          0.398    0.4107   0.333    0.324
WLAN+WLAN    Web       Amazon             1.461    1.621    1.598    1.456
WLAN+WLAN    Web       Huffington Post    4.218    4.509    4.393    4.114

3) CBR: Live video has stricter low-latency requirements than other forms of video streaming, e.g., video on demand. Moreover, live video is more sensitive to network delay variations and, therefore, impacts the user experience the most. As we want to assess whether MPTCP could be used for applications other than bulk traffic, we evaluate live video performance, which is more sensitive to latency. Figures 11(a) and 11(b) show the average application delay for minRTT and BLEST for CBR traffic at 1 Mbps. In the 3G+WLAN scenario, BLEST improved the application delay over minRTT by 8% for CBR (1 Mbps), and a slight improvement of 8% in OFO buffer size is also achieved, see also Table IV. In the same scenario and with the same application traffic, comparing BLEST to the results shown in Figure 5, BLEST performed worse than OTIAS with CBR, because OTIAS completely discarded the 3G path. In contrast, DAPS keeps utilising the 3G path. In WLAN+WLAN, shown in Figure 11(b), BLEST performed similarly to MPTCP's default scheduler, as expected.

Figure 11. 3G+WLAN and WLAN+WLAN scenarios for 1 Mbps CBR traffic with minRTT, BLEST and TCP on 3G and WLAN: (a) 3G+WLAN, (b) WLAN+WLAN.

Table IV
AVERAGE CBR APPLICATION DELAY, SEE FIGURES 5 AND 11
                                 Application Delay [ms]
Scenario     Traffic [Mbps]   minRTT   OTIAS   DAPS    BLEST
3G+WLAN      CBR 1            68       53.2    843.7   62.8
WLAN+WLAN    CBR 1            52.18    53.49   54.02   52.24

B. Real Experiments

Finally, we validate the performance of the different schedulers with real-network experiments within the same topology as used in Figure 2 for the emulation experiments, but now constructed over NorNet [20]. To generate background traffic, we use Virtual Machines (VMs) from five commercial cloud service providers (two in Europe, one in North America and two in Asia) connected via 100 Mbps links, as described in Section III, towards the server machine in Figure 12. We also use consumer hardware: a Raspberry Pi connected to a home DSL provider via WLAN and, via another interface, to a mobile broadband operator over 3G/3.5G. On the Raspberry Pi side, background traffic from other connected devices congested both WLAN and 3G. The experimental setup is shown in Figure 12.
This allows us to evaluate the schedulers under realistic network conditions, with real-network experiments, in a constructed non-shared bottleneck scenario as used in the emulation experiments shown in Figure 2.

Figure 12. Real network experiment setup.

1) Bulk: Figure 13 shows the application goodput and OFO buffer size for bulk traffic with minRTT and BLEST compared to TCP on the 3G and WLAN paths. BLEST achieves on average 18% higher application goodput aggregation, while reducing the amount of retransmissions by more than 37%, see Table V, with a slight improvement of 3% in OFO buffer size.

Figure 13. 3G+WLAN for bulk traffic in real experiments, see Figure 12.

Table V
RETRANSMISSIONS FOR BULK TRAFFIC IN THE REAL 3G+WLAN EXPERIMENTS
Scenario    Traffic   Scheduler   Retrans.
3G+WLAN     Bulk      minRTT      33.42
3G+WLAN     Bulk      BLEST       21.3

2) Web: Figures 14(a) and 14(b) show the completion times and OFO buffer sizes for the web transfers. With larger object sizes, BLEST reduces the completion time by up to 10%, while reducing the OFO size by up to 25%. Thus, MPTCP's performance with BLEST is closer to that of the WLAN path, only 3% worse than TCP on the best path (WLAN).

Figure 14. 3G+WLAN for web traffic with Wikipedia, Amazon and Huffington Post in real experiments, see Figure 12: (a) Completion time, (b) MPTCP OFO queue.

3) CBR: Figure 15(a) shows the average application delay and OFO buffer size for the 1 Mbps CBR traffic with both minRTT and BLEST. BLEST improves the application delay by 11% while reducing the OFO size by more than 20%. We noticed, looking at single experiments, that MPTCP's default scheduler sent small packet bursts over 3G, causing some spikes in the OFO buffer and, consequently, increasing

Figure 15. 3G+WLAN for 1 Mbps CBR traffic in real experiments: average application delay and maximum OFO queue.

Path heterogeneity is rather the rule than the exception with MPTCP. Even subflows from a single machine can follow different paths to the destination, with distinct delay, capacity,
completion time, and reduced receiver buffer size.

ing for heterogeneous scenarios. We want to expand our evaluation with the method proposed in [5], add other elements of heterogeneity, e.g., other network access technologies, evaluate different application performance metrics, e.g., throughput aggregation versus delay constraints, increase the number of subflows, and test the approach in mobility scenarios.

REFERENCES

[1] G. Sarwar, R. Boreli, E. Lochin, and A. Mifdaoui, "Performance