Qualitative Communication for Emerging Network
Applications with New IP
Richard Li, Lijun Dong, Cedric Westphal, Kiran Makhijani
Futurewei Technologies Inc.
2220 Central Expressway, Santa Clara, CA, USA
email: {richard.li, lijun.dong, cedric.westphal, kiranm}@futurewei.com
Abstract—Not all data units carried over a packet stream are equal in value; some portions of data are often more significant than others. Qualitative Communication is a paradigm that leverages this 'quality of data' attribute to improve both the end-user experience and the overall performance of the network, specifically when adverse network conditions occur. Qualitative Communication allows the received content to differ from the transmitted content while retaining enough information to remain useful. This paper describes Qualitative Communication, its packetization methods, and the corresponding mechanisms to process packets at a finer granularity with New IP. The paper also discusses its benefits to emerging and future network applications through several use cases in video streaming, multi-camera assisted remote driving, AR/VR and holographic streaming, and high-precision networking. Preliminary performance results illustrate these benefits and suggest that Qualitative Communication will find wide application across many use cases.
Index Terms—New IP, Qualitative Communication (QC),
packet wash, future Internet, holographic type communication,
video streaming, remote driving, AR/VR, in-time guarantee, high
precision networking, end-to-end latency guarantee, Random
Linear Network Coding.
I. INTRODUCTION
In most packet-based communications architectures, the
network is blind to the semantics associated with the packet
payload. Every packet is treated (e.g., classified, forwarded
or dropped) in its entirety as a minimal, independent, and
self-sufficient unit, according to the local configuration and
congestion conditions. The network protocols always ensure
that the data sent matches the data received exactly bit-by-bit
at the receiving end. When congestion happens and the network starts dropping packets, the full data carried in the payload of those packets is lost to the receiver. If reliability is enforced through a transport protocol such as TCP (Transmission Control Protocol), re-transmission is needed. Re-transmission of packets wastes network resources, reduces the overall throughput of the connection, and results in longer end-to-end latency for packet delivery, which often makes it unsuitable for very large volumetric applications such as multi-camera assisted remote driving, AR/VR (Augmented Reality/Virtual Reality), and holographic type applications [1], [2].
TCP introduces a sawtooth pattern in the latency of a flow. As
congestion occurs, the buffer starts filling up, introducing an
increasing buffering delay. Once the buffer overflows, the
congestion window is re-adjusted and the delay becomes short
again. This introduces jitter and unpredictability in the packet
delay. Furthermore, re-transmissions are triggered only after
the packet loss is identified by the sender through timeout or
duplicate acknowledgements. This way of handling network congestion by discarding the packet in its entirety is not well suited to latency-sensitive network applications. DiffServ
improves QoS (Quality of Service) and user experience, but the
buffers associated with any class of service can also experience
congestion and thus the problem above remains.
Despite the work done by the sender and the network, the QoE (Quality of Experience) is not guaranteed. A late-arriving or a lost packet would stall the entire stream; that is, data integrity does not always imply a satisfactory user experience. Specifically, in multimedia applications, when resource-constrained situations arise, an end user would prefer to receive the essential content immediately and forgo peripheral information; i.e., rather than waiting for everything, it is often sufficient to receive lower-quality content that is contextually relevant.
What if the payload itself could be perceived at a finer granularity? In other words, what if some portion of the payload were an independent unit? The data carried in the packet payload could then be broken down into multiple parts (called chunks in the rest of the paper), with each chunk having its own semantics, such as (i) how the data is grouped, (ii) its relative significance compared to other parts, and (iii) its relationship with other parts in the same or other packets. If the network were made aware of such payload semantics, it could perform contextual actions and operations at a finer granularity than the packet level. The data received is permitted to differ from the data sent, yet the supplied data will still be meaningful to, and consumed by, the receiver. This novel concept is referred to as Qualitative Communication [3] and the corresponding payload as the qualitative payload.
Qualitative Communication allows the network to deliver the more important information in a packet to its destination by discarding the less significant information in the face of congestion. The unit of action taken by the network then no longer needs to be the entire packet; it can instead be based on the payload semantics. This allows the payload to be processed not as a single raw stream of bits but as chunks of differentiated relevance and semantics within the payload. Qualitative Communication intends to largely reduce the number of packet re-transmissions by trading off users' satisfaction with experienced latency against some tolerable quality degradation. Embedding semantics at the chunk level also optimizes the transmission and processing of the packet headers.
Qualitative Communication (written as QC from here on) facilitates well-informed in-network decisions on discarding or processing parts of packets. In this paper we describe all the important building blocks. In the following sections, we focus on the major characteristics and some promising use cases of QC. QC leverages the New IP packet structure described in Section III. It requires support from the application to encapsulate qualitative information, referred to as a packetization scheme; a number of approaches are discussed in Section IV. Moreover, since the end-to-end transport can now be enhanced, one such flow control mechanism is covered in Section V. More importantly, Section VI goes into the details of applications that benefit from this concept. Finally, a performance evaluation is provided in Section VII, followed by future directions and the conclusion in Section VIII.
II. RELATED WORK
QC was first introduced by Li et al. in [3], facilitated by Packet Wash. Packet Wash established the foundation that the data received may not be the same as the data sent; hence the term wash. It made three key contributions: (i) the payload in a packet may itself be split into smaller units called chunks, such that the chunks are treated independently; (ii) a packet may describe what kind of in-network per-sub-payload processing is permitted such that the packet remains usable even when it is modified (washed or trimmed in transit) by the time it is delivered to the receiver; and (iii) it postulated that by associating context with the payload, key information contained in the packet payload may be preserved without inspecting the contents of the payload. It also introduced a taxonomy of the building blocks of QC, such as the sub-payload portions called chunks and the means to measure the quality of a received packet vis-à-vis the sent packet.
On the transport layer, some recent efforts have been made using packet wash, which selectively drops parts of the payload in reaction to congestion [4], [5]. Transport protocols such as LEDBAT [6] or BBR [7] attempt to provide predictable latency and minimize packet drops. We find that QC achieves the same objectives by providing better information about congestion in the network to the end points.
On the packet coding side, in-network qualitative linear re-coding helps avoid re-transmissions and is explored in [8]. It proposes adding the minimal necessary redundancy to chunks that are encoded with RLNC (Random Linear Network Coding). In the network, chunks may be re-coded to add new coded chunks to the packet if the network condition is good. The re-coding scheme relies on caching capabilities in the network, the possibility of cached chunks belonging to the same network coding group, and the packet payload having space after trimming at previous hops.
QC-based SDN (Software Defined Networking) for layered video transmission is discussed in [9]. In this test, during the transmission of the packets, the SDN controller removes chunks when the network is congested. Hence, the packets are delivered to the client, albeit with a possible quality degradation, but without any re-transmission, delay, or loss. Since SVC (Scalable Video Coding) video is encoded by exploiting the similarities between consecutive frames, as well as the dependency between the layers of the same frame, there is a dependency among the same layers of different frames. QC utilizes this dependency, and the video layers are partitioned into packets in such a way that each packet carries one type of layer. That paper provides comparative performance measurements against UDP (User Datagram Protocol) under different network conditions using video QoE metrics: the outage duration in milliseconds and PSNR (Peak Signal-to-Noise Ratio) in dB. It shows that QC techniques significantly outperform UDP in terms of PSNR and the duration of pauses.
In the next section, we describe New IP, the core technology to support and implement QC.
III. NEW IP OVERVIEW
Data transport in the Internet resembles the delivery logistics of traditional postal mail services, which is very well understood. Relating the "packet header" to the "envelope" carrying sender and receiver information, and the "user data or packet payload" to the "letter" inserted in the envelope, the user data (payload) is encapsulated into a packet with the source and destination addresses contained in the packet header. The postal mail is then dispatched by the postal service, while the sender and receiver do not have any knowledge of the route or time taken by the delivery. This is how the best-effort service in the current Internet works.
Over the years, courier services have been upgraded and many value-added services regarding how packages and letters are delivered are offered to customers. Courier services have become customizable, trackable, assurable, and billable. Customers are able to customize the guaranteed delivery time, specify the transportation method and route, track where the package is, achieve anonymity by sending to a rented P.O. box, provide delivery instructions, require the receiver's acknowledgment by signature, and so on. The merit of these services is that they allow customers to customize, monitor, and control their packages, and the courier service providers earn revenue by providing advanced services that traditional postal mail services did not.
The current IP packet format, similar to traditional postal mail, is elementary and cannot evolve in terms of extensibility and flexibility of addressing, capturing an end-user's KPI (Key Performance Indicator) expectations (e.g., latency, throughput, packet loss) for a specific delivery, or providing contextual information about the payload to the network.
By relating to courier services, we observe that providing any type of service - be it old or new - puts much responsibility on network operations. There is always an expectation that networks insert new services seamlessly. However, the lack of flexibility in the data plane often leads to an overlay-based approach or middle-box insertion that must be maintained along with several provisioning touch points in the underlay networks, which not only increases the overall complexity but also limits the scale. To provide these functionalities in existing IP networks, additions to IPv4 options or IPv6 extension headers would be necessary, both of which are very difficult to implement using the current standards.
New IP [10], [11] defines a new network datagram format
as shown in Fig. 1. It is an extension, optimization and
evolution of IP with new functions (capabilities, features),
and is being designed to be inter-operable with IPv4/v6
and many others. As an existing IPv4/v6 packet resembles
a traditional postal mail, a New IP packet resembles the
modern courier service, such as a FedEx package. It has four components: manifest, shipping specification, contract, and payload. The manifest component keeps logistics and book-keeping information about the packet; the shipping specification contains the sender's and receiver's addressing information; the contract component specifies the application's KPI and other sender intent in the form of metadata and conditional actions; and the payload carries the user data.
New IP is motivated by problems commonly found in (1) Industrial machine-type communications (Industry 4.0
and 5.0, industrial Internet, industrial IoT, industrial automa-
tion); (2) Emerging applications such as holographic type
communications, holographic teleport, remote vehicle driving
with multi-camera sensory feedback; (3) IP mobile back-haul
transport for 5G/B5G/6G uRLLC (Ultra-Reliable Low Latency
Communications), mMTC (Massive Machine Type Communi-
cations), especially when Connecting Industrial Networks; (4)
Emerging industry verticals (driver-less vehicles, smart city
and smart agriculture); and (5) ITU-T Network 2030 [1], [2].
New IP can support, for example, (1) many different ad-
dressing systems, especially the flexible addressing system
that allows for variable-length addresses, through the shipping
specification; (2) application’s KPI and user’s intent through
the contract component; (3) QC through the joint use of
contract and qualitative payload; (4) intrinsic security through
the joint use of shipping specification and contract.
New IP components and functions enable advanced service capabilities in the network nodes to support emerging and future network applications. With New IP, network services become customizable, trackable, assurable, and billable at the packet level. The applications or end users are able to customize the guaranteed delivery time of a packet or a group of packets, specify the transmission path, achieve anonymity by sending to a proxy node, etc. The source and destination addresses in the IP header evolve from a fixed format to flexible and heterogeneous addressing in the Shipping Spec. The user payload evolves from a pure bit stream that is meaningless to the network nodes to a qualitative payload with certain metadata exposed to the network nodes for differentiated treatment.
Fig. 1. New IP packet
The New IP payload can be a traditional payload as a
sequence of bits/bytes, or a qualitative payload that is subject
to QC service processing when corresponding events happen.
Qualitative payload is generated from the original payload by
dividing it into multiple chunks, which allows the network to
partially remove some less important portions of the payload
when dealing with congestion, such that the receiver is able to
consume the residual payload instead of getting nothing at all.
New IP Contract Metadata can be utilized to carry the context
of a qualitative payload, e.g., how significant a particular piece
of data within the payload is. In the contract clause for the
packets with qualitative payload (called qualitative packets),
the following actions may be defined:
• Packet Wash: a generic operation to arbitrarily or
selectively remove the chunks inside the packet payload
instead of completely dropping the packet when encoun-
tering network congestion.
• Enrich: The New IP node may insert locally cached chunks that match the packet into the payload when the network condition permits this and the outgoing queue is able to forward the larger packet.
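To make the contract clause concrete, the following is a minimal sketch of how a qualitative packet's contract information could be represented in software; the class and field names (ContractClause, Action, Event, wash_threshold_bytes) are illustrative assumptions, not the New IP wire format.

```python
# Hypothetical, minimal representation of a New IP contract clause for a
# qualitative packet; field names are illustrative, not a wire format.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PACKET_WASH = "PacketWash"   # selectively remove chunks under congestion
    ENRICH = "Enrich"            # insert locally cached matching chunks

class Event(Enum):
    CONGESTION_DROP = "congestion_drop"   # packet would otherwise be dropped entirely
    IN_TIME_AT_RISK = "in_time_at_risk"   # an urgent packet risks missing its deadline

@dataclass
class ContractClause:
    action: Action
    event: Event
    wash_threshold_bytes: int    # below this size the residual packet is useless

# Example clause: allow washing down to 400 bytes when congestion would
# otherwise drop the packet entirely.
clause = ContractClause(Action.PACKET_WASH, Event.CONGESTION_DROP, wash_threshold_bytes=400)
```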
In the following sections, we describe the major characteristics and promising use cases of QC.
IV. QUALITATIVE COMMUNICATION (QC)
Qualitative payload as one of the major components in
New IP eliminates the packet payload’s opaqueness to the
network. The semantics of data contained in the qualitative
payload can be exposed to, and understood by, the network
nodes. Currently, this is only available to applications or
end users. QC allows senders to divide the packet payload
into chunks, such that the network is permitted to selectively
discard portions of the payload instead of dropping it entirely.
In QC, packet wash is regarded as a scrubbing operation that
reduces the size of a packet while preserving as much of the
prioritized information in the payload as possible. As shown
in Fig. 2, packet wash selectively drops chunks of the packet
such that the remainder of the packet may be able to reach
the receiver.
A. Significance-Based Packetization
The packetization of qualitative payload could be based on
the relative significance levels of different chunks in the pay-
load. This significance level should be input by the application
at the source. The network nodes then can understand the
significance of the chunks and accordingly make decisions
regarding how to truncate the packets based on the current
situation, such as congestion level.
Fig. 2. Packet Wash Operation in Qualitative Communication
The New IP Contract is leveraged to facilitate significance-based QC and carry the qualitative payload context, as shown in Fig. 3. The New IP Contract contains the following parameters: (1) an action, 'PacketWash'; (2) the event and
condition when Packet Wash action is carried out, e.g., when
by default entire packet dropping is executed due to network
congestion, or when in-time guarantee of some urgent packet
behind the queue is in danger of being late (will be discussed
in Section VI-D); (3) a threshold value beyond which chunks cannot be further dropped; if the packet were truncated beyond this threshold, it would be considered useless to the receiver and should be dropped completely; and
(4) some additional information about each individual chunk i:
(a) Sigi, a relative significance level associated with the chunk
compared to other chunks; (b) Offi, an offset to describe the
boundary of adjacent chunks in the payload to enable partial
dropping of the packet payload. If the chunks have the same
size, this field can be replaced by the chunk size, which is
universal for all chunks, as shown in Fig. 4; (c) CRCi, a
CRC (Cyclic Redundancy Check) to verify the integrity of the
chunk. CRC is no longer applied to the entire packet/packet
payload, instead each individual chunk is associated with a
CRC; and (d) Flagi, a flag to determine if the chunk was
dropped. This helps receivers know which chunks have been
dropped in the network.
However, it might not be convenient for the network node to
find the least significant chunk(s) and remove them from the
packet payload. Instead of maintaining the original positions of
the chunks, the chunks could be shifted around such that they
are in the decreasing order of significance level from the front
to the tail of the packet payload, as shown in Fig. 4. As a result,
when packet wash operation is needed, the network node
could conveniently truncate the chunk(s) from the tail until the
specified threshold or as necessary. In the New IP Metadata
field, the Sigi is no longer needed for such packetization.
Instead, the original position of the corresponding chunk is
included for the receiver to recover the chunks to the right
positions.
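The following is a minimal sketch of tail-truncation packet wash on a significance-ordered payload in the spirit of Fig. 4; the names (Chunk, packet_wash) and the byte-level layout are illustrative assumptions, not the actual New IP encoding.

```python
# Minimal sketch: chunks are already sorted by decreasing significance, so a
# congested node can truncate from the tail while respecting the contract threshold.
import zlib
from dataclasses import dataclass

@dataclass
class Chunk:
    original_position: int      # where the receiver should place this chunk
    data: bytes
    crc: int                    # per-chunk CRC instead of a packet-level one

def make_chunk(pos: int, data: bytes) -> Chunk:
    return Chunk(pos, data, zlib.crc32(data))

def packet_wash(chunks: list[Chunk], threshold_bytes: int, needed_reduction: int) -> list[Chunk]:
    """Drop chunks from the tail until `needed_reduction` bytes are removed,
    but never shrink the payload below `threshold_bytes` (the contract threshold)."""
    kept = list(chunks)
    removed = 0
    while kept and removed < needed_reduction:
        payload_size = sum(len(c.data) for c in kept)
        if payload_size - len(kept[-1].data) < threshold_bytes:
            break                       # washing further would make the packet useless
        removed += len(kept[-1].data)
        kept.pop()
    return kept

# Example: three 16-byte chunks, most significant first; the router needs to free 16 bytes.
chunks = [make_chunk(2, b"base-layer......"),
          make_chunk(0, b"enh-layer-1....."),
          make_chunk(1, b"enh-layer-2.....")]
washed = packet_wash(chunks, threshold_bytes=16, needed_reduction=16)
```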
B. Random Linear Network Coding Based Packetization
In another variation [12], RLNC [13] is applied on the
chunks within the packet flow as shown in Fig. 5, if the chunks
share the same size [8], [12]. When packet wash is applied to
the packet, the intermediate forwarding node does not need to
spend any time on deciding which chunk should be dropped.
Fig. 3. Significance-based packetization for QC (maintaining original chunk
order)
Fig. 4. Significance-based packetization for QC (chunks in decreasing order
of significance level)
No matter how many chunks arrive at the receiver, they are
useful to recover the original packet data. If the receiver does
not have enough degrees of freedom to decode the packet data, more coded chunks belonging to the same packet need to be re-transmitted by the sender. However, the size of the
re-transmitted chunks is reduced. This packetization scheme
is most suitable for the packets that do not have differentiated
significance among different chunks.
The network nodes may try to retain as many chunks as
possible in the packets that were intended to be dropped
entirely. The New IP Metadata is designed as shown in Fig. 5
to describe the context information of the packet payload:
• Network Coding Group: it is used to identify the
original data chunks that the RLNC was applied on. The
Network Coding Group only relates to the data content
and the sender.
• Coding Type: it indicates the payload coding type to
be RLNC.
• Coded Chunk Size: it gives the information on how
large the chunk size is. When packet wash happens, the
network nodes are able to find the boundary of each coded chunk in the payload.
Fig. 5. Random Linear Network Coding based packetization for Qualitative Communication
• Coefficients: they are the coefficients with which
the original data chunks are linearly combined to form
coded chunks contained in the current payload.
• Full DoF (Degree of Freedom): it indicates the
full rank of the coded chunks in the payload when they
are inserted by the sender.
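For reference, the metadata fields listed above could be grouped, for example, as in the following illustrative container; the field names mirror the description and are not an actual wire format.

```python
# Illustrative container for the RLNC-related New IP Metadata fields above.
from dataclasses import dataclass
from typing import List

@dataclass
class RlncMetadata:
    network_coding_group: int       # identifies the original data chunks coded together
    coding_type: str                # "RLNC" for this packetization scheme
    coded_chunk_size: int           # bytes per coded chunk, so nodes can find boundaries
    coefficients: List[List[int]]   # one coefficient vector per coded chunk in the payload
    full_dof: int                   # rank of the coded chunks when inserted by the sender
```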
When network congestion happens, the intermediate network node does not need to decide which chunk to drop; it can drop as many chunks as needed from the tail until the outgoing buffer can accommodate the packet. There is no priority in this context. Since the system avoids dropping a whole packet, there is no transport layer timeout, nor any need to interrupt the transmission session to re-transmit a packet. When the packet eventually reaches the receiver, any chunks retained in the packet can be cached by the receiver and are useful for future decoding of the original payload once enough degrees of freedom are received.
The receiver can request the sender to send more coded chunks by replying with an acknowledgement of the recently received packet. Such an acknowledgement only needs to include the number of received chunks. The sender can deduce the number of missing degrees of freedom and does not need to know which chunks were lost during the previous transmission. It only needs to send a newly coded packet with more linearly independent chunks, in a number equal to the missing degrees of freedom. The sender could also add some redundancy to the packet payload based on the loss rate deduced from the acknowledgement, which potentially helps mitigate network congestion and decreases the number of possible re-transmissions.
In-network caching [14] can be performed by the intermediate network nodes. As long as a cached coded chunk belongs to the same network coding group and is linearly independent of the already received chunks, the cached coded chunk can be sent immediately to the receiver to help reduce the number of missing degrees of freedom.
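As an illustration of the coding idea, the sketch below uses binary (GF(2)) RLNC over equal-size chunks; production schemes typically use larger Galois fields, and the function names here are assumptions for illustration only.

```python
import random
from typing import List, Tuple

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: List[bytes], n_coded: int) -> List[Tuple[List[int], bytes]]:
    """Produce n_coded coded chunks, each a random GF(2) combination of the originals."""
    k = len(chunks)
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[random.randrange(k)] = 1     # avoid the useless all-zero combination
        acc = bytes(len(chunks[0]))
        for c, data in zip(coeffs, chunks):
            if c:
                acc = xor_bytes(acc, data)
        coded.append((coeffs, acc))
    return coded

def rank_gf2(rows: List[List[int]]) -> int:
    """Received degrees of freedom = rank of the coefficient matrix over GF(2)."""
    rows = [list(r) for r in rows]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# Sender: 4 equal-size original chunks, packet carries 4 coded chunks (Full DoF = 4).
originals = [bytes([i]) * 8 for i in range(4)]
packet = encode(originals, n_coded=4)

# In-network packet wash: a congested router truncates one coded chunk from the tail.
packet = packet[:-1]

# Receiver: the acknowledgement only needs the received-chunk count; the sender
# then re-sends `missing` newly coded, linearly independent chunks.
received = rank_gf2([coeffs for coeffs, _ in packet])
missing = len(originals) - received
```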
V. QUALITATIVE FLOW CONTROL PARADIGM
Fig. 6. Flow control compatible with QC: a 3-chunk example. The sender, network, and receiver react based on the level of congestion.
QC reacts to congestion by dropping parts of the payload.
The transport layer also reacts to congestion by adjusting
its sending rate (congestion window). Therefore the transport
layer needs to take into consideration the behavior of QC,
which is referred to as qualitative flow control.
In [5], a transport layer is presented on top of QC. Similar to
TCP, it is a window-based protocol, which adjusts the sending
rate through feedback generated by packet wash and sent
by the receiver. The feedback is sent in the form of chunk
NACK (Negative Acknowledgment) to indicate congestion to the sender when a chunk has been dropped, as shown in Fig. 6.
The key insight is that dropping chunks significantly reduces
the congestion, so the sending rate does not have to be
modified aggressively upon encountering a congestion event.
Further, the number of received chunks allows the sender to measure the achieved rate, towards which the congestion window should converge.
Obviously the rate should be adjusted towards something
that is achievable without dropping chunks. To this end, the
network and end-points cooperate to react in proportion to
the extent of congestion. Setting aggressive congestion thresholds in the forwarding nodes and a mild reaction at the end-points avoids intermittent packet losses (and re-transmissions).
Another benefit of such a transport protocol is that the per
packet delay is more predictable. The intuition behind this is as
follows: as buffers fill up, they reach the threshold at which
chunks start being dropped. When chunks are dropped, packets are smaller and can be processed/forwarded faster. This then reduces the buffer occupancy below the threshold. If the arrival rate does not vary, the buffer will exceed the threshold again, chunks are dropped again, and the buffer will drain below the threshold, which creates a virtuous cycle when dealing with congestion. The threshold therefore acts as
a sticky point for the buffer occupancy. Because the buffer
occupancy sticks around the threshold, the buffering delay
becomes predictable.
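The sketch below loosely illustrates the flow-control idea of this section: on chunk NACKs, nudge the congestion window toward the measured achieved rate instead of halving it. It is not the algorithm of [5]; the parameter beta and the additive growth step are assumptions.

```python
# A loose sketch of qualitative flow control: react mildly to chunk NACKs by
# moving the congestion window toward the rate the network actually sustained,
# since washing chunks has already relieved much of the congestion.
def update_cwnd(cwnd: float, chunks_sent: int, chunks_acked: int, beta: float = 0.9) -> float:
    """chunks_acked < chunks_sent means some chunks were washed (chunk NACKs)."""
    if chunks_acked < chunks_sent:
        achieved = cwnd * chunks_acked / chunks_sent   # rate actually achieved last round
        return beta * cwnd + (1.0 - beta) * achieved   # mild move toward the achieved rate
    return cwnd + 1.0                                  # additive increase when nothing was washed

# Example: 100-packet window, 5% of chunks washed in the last round trip.
cwnd = 100.0
cwnd = update_cwnd(cwnd, chunks_sent=300, chunks_acked=285)
```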
VI. USE CASES OF QUALITATIVE COMMUNICATION
After describing QC, we now turn our attention to some of
its use cases.
A. Significance Difference of Packets in Video Streaming
Recent studies [15] show that globally IP video traffic will
be 82 percent of all IP traffic (both business and consumer) by
2022, up from 75 percent in 2017. Internet video traffic will grow fourfold from 2017 to 2022, a CAGR of 33
percent. With the rapid growth of video streaming traffic, it is
foreseen that multiple video streaming flows are more likely to
share a bottleneck link, which would inevitably cause network
congestion.
A visual scene in videos is represented in digital form by
sampling the real scene spatially on a rectangular grid in the
video image plane, and sampling temporally at regular time in-
tervals as a sequence of still frames. Correspondingly, modern media codecs [16] incorporate three types of scalability, i.e., temporal scalability, spatial scalability, and quality scalability, which adapt the video bit stream by inserting or removing
some portions to/from it in order to accommodate the different
preferences and connection types of end users as well as the
network conditions.
The levels of scalability included in the video stream affect the quality of the media presented on end users' devices. For example, the scalability could be expressed as three different levels of video quality (low, medium, high), and the video server could adaptively regulate the sending rate according to the changing network conditions [17]. By leveraging the flexibility and variety of video qualities enabled by these types of scalability, video streaming can often minimize the possibility of network congestion through rate control and video adaptation methods [18], [19]. However, the quality adaptation might not be prompt enough to cope with the dropping of packets on the wire due to network congestion and resource competition among concurrent video streams.
Although DiffServ [20] [21] is used to manage resources such
as bandwidth and queuing buffers on a per-hop basis between
different classes of traffic, packet dropping does not differ
within the same class. As the majority of the Internet traffic
becomes video streaming, such differentiation and prioritiza-
tion at the traffic class level would not be effective enough
to eliminate the packet dropping from the competing video
streams and the possibility of degraded service levels.
With the various scalability mechanisms implemented in video codecs, it is not difficult to understand that some bits of an encoded video stream can be more important than others [22]. Bits belonging to the base layer are usually more significant to the decoder than bits belonging to enhancement layers. In the
following, we take H.264/MPEG-4 as the example to further
analyze the characteristics of video packets.
MPEG-4 exploits the spatial and temporal redundancy in-
herent in video images and sequences. The temporal sequence
of MPEG frames consists of three types:
• I-Frame: I-frames are key frames that provide check-
points for re-synchronization or re-entry to support trick
modes and error recovery. These frames consist only
of macroblocks that use intra-prediction, in other words
spatially encoded within themselves and are reconstructed
without any reference to other frames.
• P-Frame: P-frame stands for Predicted Frame and
allows macroblocks to be compressed using temporal
prediction in addition to spatial prediction. P-frames are
forward predicted or extrapolated and the prediction is
unidirectional. A motion vector is calculated to determine
the value and direction of the prediction for each mac-
roblock. A P-frame refers to a picture in the past, and might be referenced by a P-frame after it or by a B-frame before or after it.
• B-Frame: B-frame stands for bidirectionally predicted
frame, which can refer to frames that occur both before
and after it. B-frames typically do not serve as a reference
for other frames. Thus, they can be dropped without
significant impact on the video quality.
Losing the I-frame of a GOP (Group of Pictures) can cause the video picture to be missing for a few seconds, because the P- and B-frames referencing that I-frame can be neither decoded nor displayed. Video scenes with a low level of movement are less sensitive to both B-frame and P-frame packet loss, whereas video scenes with a high level of movement are more sensitive to both. A lost P-frame can impact the remaining part of the GOP. A lost B-frame has only local effects in slowly moving content or content with a large static background. In a scene with dynamically moving content, losing a B-frame has a more dramatic impact, whose scale can be as far-reaching as a P-frame loss.
Macroblocks identified as representing objects in the RoI (Region of Interest) are likely more important than the macroblocks of non-RoI regions. Packets carrying RoI macroblocks in the video stream therefore need a higher priority to be retained than packets carrying non-RoI macroblocks.
According to the characteristics of the frames contained in the video packet payload, namely the frame type, whether the frames are referenced by other frames, the movement level of the pictures, whether the picture contained in the packet belongs to the RoI, etc., a significance difference can exist among packets with respect to video decoding at the receiver side and the QoE improvement of end users. A dropping priority can thus be implemented at the packet level in the network. If a packet payload contains multiple macroblocks with differing priorities, then packet wash or partial packet dropping can be applied based on the priority indicated for the different portions of the packet. The network is able to treat the packets of video streams in a differentiated manner and at a finer granularity than DiffServ. Re-transmissions can be largely eliminated, and the receiving end user can consume as many of the delivered packets as possible in time, with acceptable quality.
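As a hypothetical illustration of such a significance assignment, the following sketch maps frame type, RoI membership, and motion level to a chunk significance value; the weights and scale are assumptions, not values from any codec or from this paper.

```python
# Illustrative mapping from H.264/MPEG-4 frame characteristics to a chunk
# significance level for qualitative packetization; weights are assumptions.
def frame_significance(frame_type: str, in_roi: bool, high_motion: bool) -> int:
    base = {"I": 3, "P": 2, "B": 1}[frame_type]   # I > P > B, per the discussion above
    if in_roi:
        base += 2          # RoI macroblocks should be retained preferentially
    if high_motion and frame_type in ("P", "B"):
        base += 1          # losses in high-motion scenes are more visible
    return base

# Packets carrying B-frame, non-RoI macroblocks end up washed first.
assert frame_significance("I", in_roi=True, high_motion=False) > \
       frame_significance("B", in_roi=False, high_motion=False)
```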
B. Jumbo Packet in Multi-Camera Assisted Remote Driving
Advances in autonomous driving have been impressive, owing to technologies that receive and act on the feeds from a massive number of sensors within the vehicle's own computation unit. In addition, a vehicle can be equipped with a number of cameras to further assist the autopilot. For example, the eight surround cameras mounted on Tesla cars [23] provide 360 degrees of visibility around the car at up to 250 meters of range.
At the same time, a remote driving service residing in the central cloud or in the nearest edge cloud further improves the autonomous experience and provides a higher safety guarantee to the vehicles. It requires the data collected by the vehicle's computation unit, including the images captured by the in-car cameras, to be sent to the edge cloud or the central cloud, in order to apprehend more information than the vehicle itself can see and to notify the vehicles in proximity to the location where an incident happens. Remote driving complements autonomous driving and enhances safety when autonomous driving falls short. Unexpected objects or events can occur at any point in time, for example a deer running out of a forest to cross the road, as shown in Fig. 7.
Fig. 7. Multiple cameras mounted on car (Some icons made by Freepik from www.flaticon.com)
Any latency incurred by the network adds to the latency already incurred by human reaction time. In addition to supporting significant data volumes in the network, the end-to-end latency must be extremely low. Therefore, complete loss and re-transmission of packets due to network congestion cannot be afforded.
The images captured by the in-car cameras (cameras 1, 2, 3 and 4 in Fig. 7) are commonly concatenated and contained in a jumbo packet as shown in Fig. 8, in order to save signaling and packet header overhead. In
the current Internet, when there is network congestion, the jumbo packet is likely to be dropped entirely
by the congested router. Since in this example the incident
of deer running out of the forest to cross the road is an
event that requires prompt notification to the nearby vehicles,
such complete dropping of the jumbo packet and future re-
transmissions are not acceptable, which could be mitigated by
QC and the associated packet wash operation. Obviously, the
image from camera 3 captures the deer, which is regarded
as a dangerous object to the on-road vehicles. Therefore, the
image data from camera 3 requires the highest priority to be
retained in the jumbo packet and delivered to the server. If
dropping from the tail of the jumbo packet is adopted for
packet wash operation in the network, the re-ordering of the
image data from camera 1, 2, 3 and 4 according to descending
significance level is shown in Fig. 9, with the tail of the
jumbo packet as the least important image data and the head
of the jumbo packet as the most important image data (i.e.,
the image data from camera 3) in this scenario. With the re-packetization of the jumbo packet, and QC/packet wash enabled in the network, even when network congestion happens at multiple intermediate routers (as shown in Fig. 9, the image data captured by cameras 4, 1 and 2 is truncated from the tail of the jumbo packet sequentially by multiple congested routers), the most important image data is successfully preserved during transmission and reaches the server in time.
Fig. 8. Payload of jumbo packet
Fig. 9. Payload of jumbo packet (re-ordered by descending significance)
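A minimal sketch of this re-packetization step is shown below, assuming illustrative names and a simple camera-id-to-significance mapping; tail-drop packet wash then removes the least significant images first.

```python
# Sketch of re-packetizing camera images into a jumbo qualitative payload in
# decreasing significance order (Fig. 9); names and the mapping are illustrative.
from typing import Dict, List, Tuple

def build_jumbo_payload(images: Dict[int, bytes],
                        significance: Dict[int, int]) -> Tuple[List[int], List[Tuple[int, bytes]]]:
    """images: camera id -> image bytes; significance: camera id -> priority."""
    order = sorted(images, key=lambda cam: significance[cam], reverse=True)
    chunks = [(cam, images[cam]) for cam in order]   # head = most important image
    metadata = [cam for cam, _ in chunks]            # lets the receiver restore positions
    return metadata, chunks

# Camera 3 sees the deer, so it gets the highest significance in this scenario.
images = {1: b"img1", 2: b"img2", 3: b"img3", 4: b"img4"}
metadata, chunks = build_jumbo_payload(images, significance={3: 4, 2: 3, 1: 2, 4: 1})
# Chunk order: camera 3, 2, 1, 4 -> congested routers truncate 4, then 1, then 2.
```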
C. Field of View in AR/VR
In 360 video streams or AR/VR applications, the video
stream is usually decomposed into tiles. Each tile maps to
a view angle on the sphere. Put together, the tiles recreate the
whole 360 degree domain. However, since the user will only
look at a fraction of this domain, it is not necessary to transmit
all the tiles at all times.
Many mechanisms have attempted to prioritize parts of the stream. The saliency in images can be considered to prioritize different parts of the stream [24], [25]. Selective segmentation [26] is also used in this context. Prediction algorithms have been designed to anticipate the content that the user will look at. This helps prioritize the views according to the likelihood that the user will watch them [27]–[30]. [31] offers a discussion on how
networks can better support 360 video streaming.
We have observed in the previous section that the packet
trimming can be used to respond to congestion and varying
network conditions. [4] used this to design a practical protocol
for immersive streams such as AR, VR or 360 degree video
streaming. It proposes to use QC for 360 video streams in the
following manner: first, it composes a payload with chunks,
where each chunk corresponds to a specific tile for the next
time segment being requested. The tiles most central to the
FoV (Field of View) are assigned with a higher priority. The
payload is therefore composed of k chunks c1, . . . , ck, each corresponding to the transmission of tiles st,fi for the next time interval t, where the fi are ordered by distance from the center of the FoV. In this basic algorithm, the packets are
transmitted and if congestion occurs, some chunks are dropped
that relate to areas of the FoV that are less central and less
important.
[4] also proposes a second algorithm that uses QC to pre-
fetch tiles. That is, the chunks are divided into chunks for
the next segment at time t and pre-fetching chunks for the
segment at time t + 1.
In this scenario, the payload is composed of chunks
c1, . . . , ck, ck+1, . . . , c2k with c1, . . . , ck as defined above, and
ck+1, . . . , c2k corresponding to the same FoV tiles but for
time segment t + 1. The priority is again decreasing for
the chunks, so that the pre-fetched chunks ck+1, . . . , c2k are
dropped preferentially.
If chunks from the pre-fetching sequence ck+1, . . . , c2k are
successfully transmitted, then there are two possibilities:
• if the FoV stays the same during the next transmission
slot, then the pre-fetched chunks are valid, and are
not transmitted again. More chunks can be pre-fetched
instead.
• if the FoV varies enough that the pre-fetched chunks are
useless, then another transmission of regular and pre-
fetched chunks is initiated for the new FoV.
There is little penalty in increasing the number of chunks (as long as the packet fits within the maximum transmission unit of the network), as they may be dropped during congestion.
As seen from this example, there is a symbiosis between
AR/VR applications and QC. The prioritization of chunks
maps directly onto the FoV.
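The composition of such a payload can be sketched as follows, under the assumption of a simplified tile-distance metric and illustrative function names; it mirrors the ordering c1, . . . , ck followed by the pre-fetch chunks ck+1, . . . , c2k.

```python
# Sketch of composing a qualitative payload for 360-degree video in the style
# of [4]: current-segment tiles ordered by distance from the FoV center, then
# lower-priority pre-fetch tiles for the next segment.
from typing import Dict, List, Tuple

def compose_payload(tiles_t: Dict[int, bytes], tiles_next: Dict[int, bytes],
                    fov_center: int, k: int) -> List[Tuple[str, int, bytes]]:
    def dist(tile_id: int) -> int:
        # Placeholder metric; a real player would use angular distance on the sphere.
        return abs(tile_id - fov_center)

    current = sorted(tiles_t, key=dist)[:k]        # c1..ck, most central first
    prefetch = sorted(tiles_next, key=dist)[:k]    # ck+1..c2k, dropped preferentially
    return ([("t", tid, tiles_t[tid]) for tid in current] +
            [("t+1", tid, tiles_next[tid]) for tid in prefetch])

# Tail-drop packet wash removes pre-fetch chunks first, then peripheral tiles of
# the current FoV, matching the priority order described in the text.
payload = compose_payload({0: b"a", 1: b"b", 2: b"c"},
                          {0: b"a2", 1: b"b2", 2: b"c2"}, fov_center=1, k=3)
```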
D. Last Resort for High-Precision Networking
Today’s Internet technology based on the best effort (BE)
principle is gradually running into its limits. It only does its
utmost to transport packets to their predetermined destinations
and lacks any assurance of QoS in terms such as end-to-end latency and packet loss ratio, which are increasingly demanded by future Internet applications [1]. Violations of service level objectives such as end-to-end latency in mission-critical applications would result not merely in an ungracefully degraded experience but also in a disastrous breakdown.
High-precision networking [2], which guarantees service levels with sufficient accuracy and is a prerequisite to unlocking the economic potential of future networking applications, is therefore missing from today's Internet technology.
Congestion in the Internet is unpredictable, which naturally
becomes the major obstacle to high-precision networking.
Packets must be queued in a certain order in the routers until
they can be transmitted. The depth of the queue (i.e., the total size of the packets placed ahead in the queue) has a dominating effect on a packet's dwell time and, consequently, on its end-to-end latency. The dwell time of a packet
in a router is defined as the amount of time that it spends
in the router after its initial arrival until its full departure.
The dwell time is composed of three major components: (1)
Processing delay: the time it takes the node to process the
packet header. (2) Queueing delay: the time that the packet spends in the outgoing queue before its first bit is transmitted. (3) Transmission delay: the time that it takes
to serialize the packet and transmit it over the wire, generally
proportional to the packet size and egress bandwidth.
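For illustration, a rough estimate of the dwell time from these three components might be computed as in the following sketch; the inputs and the helper name are assumptions.

```python
# Rough sketch of estimating a packet's dwell time from the three components
# listed above; all numbers here are illustrative assumptions.
def estimate_dwell_time_s(processing_s: float, queued_bytes_ahead: int,
                          packet_bytes: int, egress_bytes_per_s: float) -> float:
    queueing_s = queued_bytes_ahead / egress_bytes_per_s   # wait for the packets ahead
    transmission_s = packet_bytes / egress_bytes_per_s     # serialize this packet
    return processing_s + queueing_s + transmission_s

# Example: 20 us processing, 30 KB ahead in the queue, 1500-byte packet, 1 Gb/s port.
t = estimate_dwell_time_s(20e-6, 30_000, 1500, 125_000_000)   # about 272 microseconds
```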
Various Internet technologies have been proposed in the
literature, with the intention to improve and optimize the QoS
built on the BE principle, navigating the trade-offs such as
utilization and fairness. They generally involve scheduling
algorithms that determine which packet gets to enter a queue
first, the location where the packet gets placed in the queue,
as well as prioritization schemes among multiple queues
transmitting over the same interface. The path towards high-
precision networking lies in advances of such QoS mech-
anisms. QC and its packet wash operation in the network
could serve as the last resort for high-precision networking
when other QoS approaches fail [32]. Therefore, the following
approach leveraging QC and packet wash does not act as a
competing alternative but as a complementary approach that
can be combined with other QoS approaches.
In the following, we use the end-to-end latency guarantee
(i.e., in-time guarantee) as the example for high-precision
networking. Packets that require an in-time guarantee might already be on track to exceed their end-to-end latency budget; in other words, they could be in danger of "being late" even after all existing QoS approaches in the Internet have exhausted their capabilities.
QC with packet wash can potentially accelerate a target packet
if packet wash is applied to the packet, and possibly also to
other packets ahead of it in the queue, resulting in reduced
dwell time of the packet in the congested router.
In order to reduce dwell time, packet wash can be applied
to the packet itself and to the front packets in the queue that are allowed to be trimmed. When a packet is en-queued, it is assessed for whether it is in danger of being late. If the local latency
budget (packet per-hop deadline) is less than the sum of the
expected processing delay (likely to be fixed), the estimated
queueing delay and transmission delay (determined by the
current queue depth, the front packets’ total size and the egress
bandwidth), the packet is deemed to be in danger of missing its
deadline. If the local latency budget for a packet i is denoted
as LLBi, while the estimated dwell time is denoted as Ti, the
amount of time that needs to be shaved off, or the amount
of payload that needs to be truncated from the front packets
for the packet i (named as Needed Truncation Amount, i.e.,
NTAi) is calculated as: NTAi = (Ti − LLBi) ∗ B, where B
is the egress bandwidth.
Applying packet wash and truncating chunks from the front packets in the queue would be the last resort, and such a packet is considered a WTP (Wash Trigger Packet). These operations
should be only applied when truly necessary. The proposed
method is complementary to existing QoS approaches and
allows for their seamless integration.
Of those front packets, some may be washable and some may not. Fig. 10 shows an example of an outgoing queue in which packets 3 and 9 are assessed to be WTPs, while packets 4 and 7 are not allowed to be washed. In this case, packets 1 and 2 can be washed to prevent packet 3 from being late, and packets 5, 6 and 8 can be washed to prevent packet 9 from being late.
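The WTP assessment and front-packet washing described above can be sketched as follows; the helper names and the example numbers are assumptions used only to illustrate the NTA calculation.

```python
# Sketch of the in-time-guarantee logic: assess a newly en-queued packet against
# its local latency budget and, if it is a WTP, wash the washable packets ahead
# of it until NTA bytes have been removed. Illustration only, not router code.
def needed_truncation_bytes(dwell_estimate_s: float, local_budget_s: float,
                            egress_bytes_per_s: float) -> int:
    """NTAi = (Ti - LLBi) * B, here expressed in bytes (B given in bytes per second)."""
    return max(0, int((dwell_estimate_s - local_budget_s) * egress_bytes_per_s))

def wash_front_packets(front_packets, nta_bytes):
    """front_packets: list of (size_bytes, washable_bytes) for packets ahead of the WTP."""
    removed = 0
    for i, (size, washable) in enumerate(front_packets):
        if removed >= nta_bytes:
            break
        take = min(washable, nta_bytes - removed)
        removed += take
        front_packets[i] = (size - take, washable - take)
    return removed               # if removed < nta_bytes, the deadline is still at risk

# Example: 2.00 ms estimated dwell time against a 1.99 ms budget on a 1 Gb/s port.
nta = needed_truncation_bytes(0.002, 0.00199, egress_bytes_per_s=125_000_000)  # 1250 bytes
removed = wash_front_packets([(1500, 900), (1500, 0), (1200, 800)], nta)
```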
VII. PERFORMANCE ILLUSTRATION
A. AR/VR
In [4], the benefit of QC for immersive video streaming was evaluated by computing the gain on a set of video traces. The evaluation environment consisted of forming packets with chunks that would be in the next requested field of view at time t, with high priority. The remaining space was given to chunks for the video stream at time t+1 that were pre-fetched, with low priority.
Fig. 10. Example of WTP and washable packets
Fig. 11. Results on a set of video traces: received chunks for QC with and without pre-fetching
In case of congestion, the pre-fetched chunks for time t+1
would be dropped first, then the chunks for time t that were
far away from the center of the FoV, and then the chunks at time t that were near the center of the FoV.
Fig. 11 presents the results of the algorithms from [4] for a
range of video streams and for a time varying communication
channel. In this illustration, the packets are composed of three
chunks and the y-axis shows the number of received chunks.
The channel varies so that chunks may be dropped and the
average capacity of the channel is 2.5 chunks. Without QC,
no packet would be transmitted as the channel capacity is
too low. With QC, 2.25 chunks get transmitted on average.
When QC is combined with pre-fetching, useful chunks are transmitted ahead of time when there is enough capacity, yielding a throughput of 2.5 chunks.
It can be seen that pre-fetching chunks with QC improves the QoE for the end users. Namely, there is a significant gain in the number of chunks that are eventually received, since the pre-fetching mechanism allows some chunks to be transmitted when the conditions are good; when the conditions degrade, only the less important information is dropped rather than the packet as a whole.
B. In-Time Guarantee
[32] evaluated how the proposed QC and packet wash mechanisms could serve as the last resort for high-precision in-time guarantees. A simplified topology is considered with n senders sending packets to m receivers. There is one intermediate router between the senders and receivers, and all packets towards the receivers require an in-time guarantee and are buffered in the same outgoing queue. Every packet is subject to the packet wash operation and is associated with a wash allowance ratio, defined as the ratio of the amount allowed to be truncated during transmission to the original packet payload size.
Fig. 12. Packet delivery success ratio comparison under SDFS
The network nodes adopt Smallest Deadline First Scheduling (SDFS) [33] as the packet scheduling scheme in the outgoing queue.
Fig. 12 from [32] shows the performance of packet delivery
success ratio with and without QC and packet wash operation
in the network. Packet delivery success ratio is defined as the
ratio of the packets that meet their corresponding deadlines.
The unsuccessful packets are dropped and cannot reach the
receivers in time.
A packet's dwell time is reduced by applying packet wash to all packets in front of it in the queue as well as to the packet itself. Since in the simulation the packets' deadlines are intentionally configured such that all packets under SDFS without packet wash would fail, the packet delivery success ratio is 0 in that case. However, by applying packet wash to the packets in the outgoing queue, even with a small wash allowance ratio (e.g., 10%), the packet delivery success ratio under SDFS with packet wash increases dramatically to more than 80%, as shown in Fig. 12. As the wash allowance ratio increases, a higher number of packets under the SDFS scheme can be delivered successfully to the receivers with their deadlines met.
We conclude that QC and packet wash mechanisms can further support high-precision in-time guarantees.
VIII. CONCLUSION AND FUTURE DIRECTIONS
The ITU-T Focus Group on Network 2030 has identified
some emerging applications, analyzed technical gaps, and
defined some new services that are expected to support new
and future applications, but has not provided any solutions for supporting such new services. This paper describes Qualitative
Communication (QC) using New IP. QC allows for partial
data delivery, which is sometimes good enough for video-like
applications. QC can be taken as a new service or solution to
some problems studied by ITU-T Focus Group on Network
2030.
For QC we have made the following assertions: (1) Not
every byte in the same packet has the same significance, and
partially delivered data is sometimes reasonably usable for
some use cases; (2) Dropping a whole packet in its entirety
in the case of congestion is too wasteful and sometimes
unnecessary; (3) Drops and re-transmissions such as those performed by TCP may cause received packets to arrive too late and thus be unusable because they have missed their deadlines, especially for mission-critical applications; (4) By utilizing significance information, operations such as packet wash can be performed at the sub-packet level so that in-time deadlines can be guaranteed and the received packets, even though partially delivered, remain usable for some applications.
Our future directions include: (1) continuing to explore use cases of QC for remote driving with multi-camera sensory feedback, multimedia streaming, etc.; and (2) an in-depth study of other network applications that might need support from different functions of New IP, including the Shipping Spec, Contract, and Qualitative Payload.
REFERENCES
[1] ITU-T Focus Group on Network 2030 White Paper, “Network 2030 -
A blueprint of technology, applications and market drivers towards the
year 2030 and beyond,” May 2019.
[2] ITU-T Focus Group on Network 2030 Deliverable, “New services and
capabilities for Network 2030: description, technical gap and perfor-
mance target analysis,” May 2019.
[3] R. Li, K. Makhijani, H. Yousefi, C. Westphal, L. Dong, T. Wauters, and
F. De Turck, “A framework for Qualitative Communications using Big
Packet Protocol,” in Proceedings Of The 2019 ACM Sigcomm Workshop
On Networking For Emerging Applications And Technologies, 2019.
[4] C. Westphal, D. He, K. Makhijani, and R. Li, “Qualitative communi-
cations for augmented reality and virtual reality,” in 2021 IEEE 22nd
International Conference on High Performance Switching and Routing
(HPSR), 2021.
[5] A. Albalawi, H. Yousefi, C. Westphal, K. Makhijani, and J. Garcia-
Luna-Aceves, “Enhancing end-to-end transport with packet trimming,”
in GLOBECOM 2020 - 2020 IEEE Global Communications Conference,
2020.
[6] S. Shalunov et al, “Low extra delay background transport (LEDBAT),”
RFC, vol. 6817, pp. 1–25, 2012.
[7] N. Cardwell et al, “BBR: Congestion-based congestion control,” ACM
Queue, 2016.
[8] L. Dong, K. Makhijani, and R. Li, “Qualitative communication via
network coding and New IP : invited paper,” in International Conference
on High Performance Switching and Routing (HPSR), 2020.
[9] S. Clayman, M. Tuker, H. Arasan, and M. Sayıt, “The Future of Media
Streaming Systems: Transferring Video over New IP,” in 2021 IEEE
22nd International Conference on High Performance Switching and
Routing (HPSR), 2021, pp. 1–6.
[10] R. Li, K. Makhijani, and L. Dong, “New IP: A data packet framework
to evolve the Internet : Invited Paper,” in 2020 IEEE 21st International
Conference on High Performance Switching and Routing (HPSR), 2020.
[11] R. Li, A. Clemm, U. Chunduri, L. Dong, and K. Makhijani, “A
new framework and protocol for future networking applications,” ACM
Sigcomm Workshop on Networking for Emerging Applications and
Technologies (NEAT 2018), May 2018.
[12] L. Dong and R. Li, “In-packet network coding for effective packet wash
and packet enrichment,” in IEEE Globecom Workshops, 2019.
[13] C. Fragouli, J. L. Boudec, and J. Widmer, “Network coding: an instant
primer,” ACM Sigcomm Computer Communication Review, vol. 36,
no. 1, pp. 63–68, January 2006.
[14] L. Dong and R. Li, “Optimal chunk caching in
network coding-based qualitative communication,” Digital
Communications and Networks, 2021. [Online]. Available:
https://www.sciencedirect.com/science/article/pii/S2352864821000304
[15] Cisco, “Cisco visual networking index: forecast and trends, 2017–2022,”
White Paper, 2018.
[16] G. Conklin, G. Greenbaum, K. Lillevold, A. Lippman, and Y. Reznik,
“Video coding for streaming media delivery on the internet,” IEEE
Transactions on Circuits and Systems for Video Technology, vol. 11,
no. 3, pp. 269–281, 2001.
[17] ISO/IEC, “23009-1:2019, Dynamic Adaptive Streaming over HTTP
(DASH) - Part 1: Media Presentation Description and Segment Formats,”
2019. [Online]. Available: https://www.iso.org/standard/79329.html
[18] A. Bentaleb, B. Taani, A. C. Begen, C. Timmerer, and R. Zimmermann,
“A survey on bitrate adaptation schemes for streaming media over http,”
IEEE Communications Surveys Tutorials, vol. 21, no. 1, pp. 562–585,
2019.
[19] D. Wu, Y. Hou, W. Zhu, Y.-Q. Zhang, and J. Peha, “Streaming video over
the internet: approaches and directions,” IEEE Transactions on Circuits
and Systems for Video Technology, vol. 11, no. 3, pp. 282–300, 2001.
[20] D. L. Black, Z. Wang, M. A. Carlson, W. Weiss, E. B. Davies, and
S. L. Blake, “An Architecture for Differentiated Services,” RFC 2475,
1998. [Online]. Available: https://rfc-editor.org/rfc/rfc2475.txt
[21] D. L. Black and P. Jones, “Differentiated services (Diffserv) and
real-time communication,” RFC 7657, 2015. [Online]. Available:
https://rfc-editor.org/rfc/rfc7657.txt
[22] L. Dong, K. Makhijani, and R. Li, “A use case of
packets’ significance difference with media scalability,” Internet
Engineering Task Force, Internet-Draft, 2021, work in Progress.
[Online]. Available: https://datatracker.ietf.org/doc/html/draft-dong-
usecase-packet-significance-diff-00
[23] Tesla, “Advanced sensor coverage.” [Online]. Available:
https://www.tesla.com/autopilot
[24] T. Nguyen et al, “Static saliency vs. dynamic saliency: a comparative
study,” in ACM MM, 2013.
[25] C. Fan et al, “Fixation prediction for 360 video streaming in head-
mounted virtual reality,” in ACM Workshop on Network and OS Support
for Digital Audio and Video, 2017.
[26] A. Kirillov, K. He, R. B. Girshick, C. Rother, and P. Dollár, “Panoptic
segmentation,” CoRR, vol. abs/1801.00868, 2018. [Online]. Available:
http://arxiv.org/abs/1801.00868
[27] T. Alshawi et al, “Understanding spatial correlation in eye-fixation maps
for visual attention in videos,” in IEEE ICME, 2016.
[28] S. Chaabouni et al, “Transfer learning with deep networks for saliency
prediction in natural video,” in IEEE ICIP, 2016.
[29] L. Xie et al, “CLS: A cross-user learning based system for improving
QoE in 360-degree video adaptive streaming,” in 2018 ACM Multimedia
Conference, 2018.
[30] D. He, C. Westphal, and J. Garcia-Luna-Aceves, “Joint rate and FoV
adaptation in immersive video streaming,” in ACM SIGCOMM workshop
on AR/VR Networks, Aug. 2018.
[31] D. He et al, “Network support for AR/VR and immersive video
application: A survey,” in ICETE SIGMAP, 2018.
[32] L. Dong and A. Clemm, “High-precision end-to-end latency guarantees
using packet wash,” in 2021 IFIP/IEEE International Symposium on
Integrated Network Management (IM), 2021, pp. 259–267.
[33] C. Wilson, H. Ballani, T. Karagiannis, and A. I. T. Rowstron, “Better
never than late: meeting deadlines in datacenter networks,” in Proceed-
ings of the ACM SIGCOMM, 2011.

Despite the work done by the sender and the network, the QoE (Quality of Experience) is not guaranteed. A late-arriving or lost packet can stall the entire stream; that is, data integrity does not always imply a satisfactory user experience. Specifically, in multimedia applications under resource-constrained conditions, an end user would prefer to receive the essential content immediately and forgo peripheral information; rather than waiting for everything, it is often sufficient to receive lower-quality content that is contextually relevant.

What if the payload itself could be perceived at a finer granularity? In other words, what if some portion of the payload were an independent unit? The data carried in the packet payload could then be broken down into multiple parts (called chunks in the rest of the paper), with each chunk carrying its own semantics, such as (i) how the data is grouped, (ii) its relative significance compared to other parts, and (iii) its relationship with other parts in the same or other packets. If the network were made aware of such payload semantics, it could perform contextual actions and operations at a finer granularity than the packet level. The data received is permitted to differ from the data sent, yet the delivered data remains meaningful to, and consumable by, the receiver. This novel concept is referred to as Qualitative Communication [3] and the corresponding payload as the qualitative payload.

Qualitative Communication allows the network to deliver the more important information in a packet to its destination by discarding the less significant information in the face of congestion. The unit of action taken by the network then no longer needs to be the entire packet, but can be based on the payload semantics. This allows the payload to be processed not as a single raw stream of bits, but as chunks of differentiated relevance and semantics within the payload.
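As a concrete, simplified illustration of this chunk abstraction, the sketch below models a qualitative payload as a list of chunks annotated with the three kinds of semantics just listed. The field names (significance, group_id, depends_on) are illustrative assumptions for exposition, not part of any New IP specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chunk:
    """One independently treatable portion of a packet payload."""
    data: bytes
    significance: int        # relative importance vs. other chunks (higher = keep longer)
    group_id: int            # how the data is grouped (e.g., a frame, tile, or camera image)
    depends_on: List[int] = field(default_factory=list)  # indices of chunks this one needs

@dataclass
class QualitativePayload:
    chunks: List[Chunk]

    def size(self) -> int:
        return sum(len(c.data) for c in self.chunks)

# Example: a three-chunk payload where the enhancement chunks depend on the base chunk.
payload = QualitativePayload(chunks=[
    Chunk(b"base layer", significance=3, group_id=1),
    Chunk(b"enhancement 1", significance=2, group_id=1, depends_on=[0]),
    Chunk(b"enhancement 2", significance=1, group_id=1, depends_on=[0, 1]),
])
print(payload.size())
```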
Qualitative Communication aims to largely reduce the number of packet re-transmissions, trading off the users' satisfaction with the experienced latency against a tolerable quality degradation. Embedding semantics at the chunk level also optimizes the transmission and processing of the packet headers. Qualitative Communication (written as QC from here on) facilitates well-informed in-network decisions on discarding or processing parts of a packet.

In this paper we describe all the important building blocks. In the following sections, we focus on the major characteristics and some promising use cases of QC. QC leverages the New IP packet structure described in Section III. It requires support from the application to encapsulate qualitative information, referred to as a packetization scheme; a number of approaches are discussed in Section IV. Moreover, since the end-to-end transport can now be enhanced, one such flow control mechanism is covered in Section V. Section VI goes into the details of applications that benefit from this concept. Finally, a performance evaluation is provided in Section VII, followed by future directions and the conclusion in Section VIII.

II. RELATED WORK

QC was first introduced by Li et al. in [3], facilitated by Packet Wash. Packet Wash provided the foundation that the data received may not be the same as the data sent; hence the term wash. It made three key contributions: (i) the payload of a packet may be split into smaller units called chunks such that the chunks are treated independently; (ii) a packet may describe what kind of in-network per-sub-payload processing is permitted while the packet remains usable; for example, a packet may be modified (washed or trimmed in transit) by the time it is delivered to the receiver; and (iii) it postulated that, by associating context with the payload, key information contained in the packet payload may be preserved without inspecting the contents of the payload. It introduced a taxonomy of the building blocks of QC, such as the sub-payload portions called chunks, and means to measure the quality of the received packet vis-à-vis the sent packet.

On the transport layer, some recent efforts have been made using packet wash, which selectively drops parts of the payload in reaction to congestion [4], [5]. Transport protocols such as LEDBAT [6] or BBR [7] attempt to provide predictable latency and minimize packet drops. We find that QC achieves the same objectives by providing the end points with better information about congestion in the network.

On the packet coding side, qualitative in-network linear re-coding helps to avoid re-transmissions and is explored in [8]. It proposes to add the minimal necessary redundancy to chunks encoded with RLNC (Random Linear Network Coding). In the network, chunks may be re-coded to add new coded chunks to the packet if the network condition is good. The re-coding scheme relies on caching capabilities in the network, the availability of cached chunks belonging to the same network coding group, and the packet payload having space left after trimming at previous hops.

QC-based SDN (Software Defined Networking) for layered video transmission is discussed in [9]. In this test, during the transmission of the packets, the SDN controller removes chunks when the network is congested.
Hence, the packets are transmitted to the client, albeit with a possible quality degradation, but without any re-transmission, delay, or loss. Since SVC (Scalable Video Coding) video is encoded by exploiting the similarities between consecutive frames, as well as the dependency between the layers of the same frame, there is a dependency among the same layers of different frames. QC utilizes this dependency, and the video layers are partitioned into packets in such a way that each packet carries each type of layer. That paper provides comparative performance measurements against UDP (User Datagram Protocol) under different network conditions, using two video QoE metrics: the outage duration in milliseconds and the PSNR (Peak Signal-to-Noise Ratio) in dB. It showed that QC techniques significantly outperform UDP in terms of PSNR and duration of pauses.

In the next section, we describe New IP, the core technology used to support and implement QC.

III. NEW IP OVERVIEW

Data transport in the Internet resembles the delivery logistics of traditional postal mail services, which are very well understood. The "packet header" corresponds to the "envelope" carrying sender and receiver information, and the "user data or packet payload" corresponds to the "letter" inserted in the envelope: the user data (payload) is encapsulated into a packet whose header contains the source and destination addresses. The postal mail is then dispatched by the postal service, while the sender and receiver have no knowledge of the route or the time taken by the delivery. This is how the best-effort service in the current Internet works.

Over the years, courier services have been upgraded, and many value-added services regarding how packages and letters are delivered have been offered to customers. Courier services have become customizable, trackable, assurable, and billable. Customers are able to set a guaranteed delivery time, specify the transportation method and route, track where the package is, achieve anonymity by sending to a rented P.O. box, provide delivery instructions, require the receiver's acknowledgement by signature, and so on. The merit of these services is that they allow customers to customize, monitor, and control their packages, and the courier service providers earn revenue by providing advanced services that traditional postal mail did not.

The current IP packet format, similar to traditional postal mail, is elementary and cannot evolve in terms of extensibility and flexibility of addressing, capturing the end-user's KPI (Key Performance Indicator) expectations (e.g., latency, throughput, packet loss) for a specific delivery, or providing contextual information about the payload to the network.
By analogy with courier services, we observe that providing any type of service - be it old or new - places much responsibility on network operations. There is always an expectation that networks insert new services seamlessly. However, the lack of flexibility in the data plane often leads to an overlay-based approach or middle-box insertion that must be maintained along with several provisioning touch points in the underlay networks, not only increasing the overall complexity but also limiting the scale. To provide these functionalities in existing IP networks, additions to IPv4 options or IPv6 extension headers are necessary, both of which are very difficult to implement using the current standards.

New IP [10], [11] defines a new network datagram format, as shown in Fig. 1. It is an extension, optimization, and evolution of IP with new functions (capabilities, features), and is designed to be inter-operable with IPv4/v6 and many other protocols. Where an existing IPv4/v6 packet resembles traditional postal mail, a New IP packet resembles a modern courier service such as a FedEx package. It has four components: manifest, shipping specification, contract, and payload. The manifest component keeps logistics and book-keeping information about the packet, the shipping specification contains the sender's and receiver's addressing information, the contract component specifies the application's KPIs and other sender intent in the form of metadata and conditional actions, and the payload carries the user data.

Fig. 1. New IP packet

New IP is motivated by problems commonly found in (1) industrial machine-type communications (Industry 4.0 and 5.0, industrial Internet, industrial IoT, industrial automation); (2) emerging applications such as holographic type communications, holographic teleport, and remote vehicle driving with multi-camera sensory feedback; (3) IP mobile back-haul transport for 5G/B5G/6G uRLLC (Ultra-Reliable Low Latency Communications) and mMTC (Massive Machine Type Communications), especially when connecting industrial networks; (4) emerging industry verticals (driver-less vehicles, smart city, and smart agriculture); and (5) ITU-T Network 2030 [1], [2]. New IP can support, for example, (1) many different addressing systems, especially a flexible addressing system that allows for variable-length addresses, through the shipping specification; (2) the application's KPIs and the user's intent through the contract component; (3) QC through the joint use of the contract and the qualitative payload; and (4) intrinsic security through the joint use of the shipping specification and the contract.

New IP components and functions bring advanced service capabilities to the network nodes to support emerging and future network applications. With New IP, network services become customizable, trackable, assurable, and billable at the packet level. Applications or end users are able to customize the guaranteed delivery time of a packet or a group of packets, specify the transmission path, achieve anonymity by sending through a proxy node, etc. The source and destination addresses in the IP header evolve from a fixed format to flexible and heterogeneous addressing in a Shipping Spec. The user payload evolves from a pure bit stream that is meaningless to the network nodes to a qualitative payload with certain metadata exposed to the network nodes for differentiated treatment.
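To make the packet structure tangible, the sketch below models the four components as plain data structures. The field names and types are assumptions chosen for exposition; they are not the New IP wire format defined in [10], [11].

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Manifest:
    """Logistics and book-keeping information about the packet."""
    packet_id: int
    hop_count: int = 0

@dataclass
class ShippingSpec:
    """Sender and receiver addressing; addresses may be variable length."""
    src: bytes
    dst: bytes

@dataclass
class Contract:
    """Application KPIs and sender intent as metadata plus conditional actions."""
    actions: List[str] = field(default_factory=list)            # e.g. ["PacketWash"]
    metadata: Dict[str, object] = field(default_factory=dict)   # e.g. per-chunk context

@dataclass
class NewIPPacket:
    manifest: Manifest
    shipping_spec: ShippingSpec
    contract: Contract
    payload: bytes  # a traditional bit stream or a qualitative payload

pkt = NewIPPacket(Manifest(packet_id=7),
                  ShippingSpec(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02"),
                  Contract(actions=["PacketWash"]),
                  payload=b"example payload")
print(pkt.contract.actions)
```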
The New IP payload can be a traditional payload, i.e., a sequence of bits/bytes, or a qualitative payload that is subject to QC service processing when the corresponding events happen. A qualitative payload is generated from the original payload by dividing it into multiple chunks, which allows the network to partially remove some less important portions of the payload when dealing with congestion, so that the receiver is able to consume the residual payload instead of getting nothing at all. The New IP Contract Metadata can be used to carry the context of a qualitative payload, e.g., how significant a particular piece of data within the payload is. In the contract clause for packets with a qualitative payload (called qualitative packets), the following actions may be defined:

• Packet Wash: a generic operation to arbitrarily or selectively remove chunks from the packet payload, instead of completely dropping the packet, when encountering network congestion.

• Enrich: a New IP node may insert into the packet payload locally cached chunks that match the packet, when the network condition permits this and the outgoing queue is able to forward the larger packet.

In the following sections, we focus on the major characteristics and promising use cases of QC.

IV. QUALITATIVE COMMUNICATION (QC)

The qualitative payload, as one of the major components of New IP, eliminates the packet payload's opaqueness to the network. The semantics of the data contained in the qualitative payload can be exposed to, and understood by, the network nodes; currently, such semantics are available only to applications or end users. QC allows senders to divide the packet payload into chunks, such that the network is permitted to selectively discard portions of the payload instead of dropping it entirely. In QC, packet wash is regarded as a scrubbing operation that reduces the size of a packet while preserving as much of the prioritized information in the payload as possible. As shown in Fig. 2, packet wash selectively drops chunks of the packet such that the remainder of the packet may still reach the receiver.

Fig. 2. Packet Wash Operation in Qualitative Communication

A. Significance-Based Packetization

The packetization of a qualitative payload can be based on the relative significance levels of the different chunks in the payload. This significance level should be provided by the application at the source. The network nodes can then understand the significance of the chunks and accordingly decide how to truncate the packets based on the current situation, such as the congestion level. The New IP Contract is leveraged to facilitate significance-based QC and to carry the qualitative payload context, as shown in Fig. 3.
The New IP Contract contains the following parameters: (1) an action, e.g., 'PacketWash'; (2) the event and condition under which the Packet Wash action is carried out, e.g., when, by default, dropping of the entire packet would be executed due to network congestion, or when the in-time guarantee of some urgent packet behind it in the queue is in danger of being missed (discussed in Section VI-D); (3) a threshold value beyond which chunks cannot be further dropped: if the packet were truncated beyond this threshold, it would be considered useless to the receiver and should be dropped completely; and (4) some additional information about each individual chunk i: (a) Sig_i, the relative significance level of the chunk compared to the other chunks; (b) Off_i, an offset describing the boundary between adjacent chunks in the payload to enable partial dropping of the packet payload (if the chunks have the same size, this field can be replaced by the chunk size, which is universal for all chunks, as shown in Fig. 4); (c) CRC_i, a CRC (Cyclic Redundancy Check) to verify the integrity of the chunk; the CRC is no longer applied to the entire packet or packet payload, and instead each individual chunk is associated with its own CRC; and (d) Flag_i, a flag indicating whether the chunk was dropped, which lets receivers know which chunks have been dropped in the network.

Fig. 3. Significance-based packetization for QC (maintaining original chunk order)

However, it might not be convenient for the network node to find the least significant chunk(s) and remove them from the packet payload. Instead of maintaining the original positions of the chunks, the chunks can be shifted around such that they appear in decreasing order of significance level from the front to the tail of the packet payload, as shown in Fig. 4. As a result, when a packet wash operation is needed, the network node can conveniently truncate chunk(s) from the tail, down to the specified threshold or as necessary. In the New IP Metadata field, Sig_i is no longer needed for such packetization; instead, the original position of the corresponding chunk is included so that the receiver can restore the chunks to their correct positions.

Fig. 4. Significance-based packetization for QC (chunks in decreasing order of significance level)
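The sketch below illustrates how a congested node might apply packet wash under this packetization, assuming the chunks already arrive ordered by decreasing significance (as in Fig. 4) and modeling the contract threshold as a minimum number of chunks that must survive. The names ChunkMeta, min_chunks, and packet_wash are invented for this example.

```python
import zlib
from dataclasses import dataclass
from typing import List

@dataclass
class ChunkMeta:
    original_pos: int   # position to restore at the receiver
    crc: int            # CRC over the chunk bytes
    dropped: bool = False

@dataclass
class QualitativePacket:
    chunks: List[bytes]        # ordered most -> least significant
    meta: List[ChunkMeta]
    min_chunks: int            # threshold: fewer than this makes the packet useless

def packet_wash(pkt: QualitativePacket, available_bytes: int) -> bool:
    """Trim least-significant chunks from the tail until the packet fits.
    Returns False if the packet cannot fit without violating the threshold."""
    while sum(len(c) for c in pkt.chunks) > available_bytes:
        # locate the last chunk that has not yet been dropped
        idx = max(i for i, m in enumerate(pkt.meta) if not m.dropped)
        if idx + 1 <= pkt.min_chunks:
            return False        # washing further would breach the threshold: drop the packet
        pkt.meta[idx].dropped = True
        pkt.chunks[idx] = b""
    return True

chunks = [b"I-frame slice", b"P-frame slice", b"B-frame slice"]
meta = [ChunkMeta(i, zlib.crc32(c)) for i, c in enumerate(chunks)]
pkt = QualitativePacket(chunks, meta, min_chunks=1)
print(packet_wash(pkt, available_bytes=20), [m.dropped for m in pkt.meta])
```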
The Network Coding Group only relates to the data content and the sender. • Coding Type: it indicates the payload coding type to be RLNC. • Coded Chunk Size: it gives the information on how large the chunk size is. When packet wash happens, the network nodes are able to find the boundary of each coded Fig. 5. Random Linear Network Coding based packetization for Qualitative Communication
When network congestion happens, the intermediate network node does not need to decide which chunk to drop; it can drop as many chunks as needed from the tail until the outgoing buffer can hold the packet. There is no priority in this context. Since the system avoids dropping a whole packet, there is no transport-layer timeout, nor any need to interrupt the transmission session to re-transmit a packet. When the packet eventually reaches the receiver, any chunks retained in the packet can be cached by the receiver and remain useful for decoding the original payload once enough degrees of freedom have been received.

The receiver can request more coded chunks from the sender by acknowledging the recently received packet. Such an acknowledgement only needs to include the number of received chunks. The sender can then deduce the number of missing degrees of freedom and does not need to know which chunks were lost during the previous transmission; it only needs to send a newly coded packet with additional linearly independent chunks, in a number equal to the missing degrees of freedom. The sender could also add some redundancy to the packet payload based on the loss rate deduced from the acknowledgement, which potentially helps to fight network congestion and to decrease the number of possible re-transmissions.

In-network caching [14] can be facilitated by the intermediate network nodes. As long as a cached coded chunk belongs to the same network coding group and is independent of the already received chunks, the cached coded chunk can be immediately sent to the receiver to help reduce the number of missing degrees of freedom.

V. QUALITATIVE FLOW CONTROL PARADIGM

Fig. 6. Flow control compatible with QC: a 3-chunk example. The sender, network, and receiver react based on the level of congestion.

QC reacts to congestion by dropping parts of the payload. The transport layer also reacts to congestion by adjusting its sending rate (congestion window). Therefore, the transport layer needs to take the behavior of QC into consideration, which is referred to as qualitative flow control. In [5], a transport layer is presented on top of QC. Similar to TCP, it is a window-based protocol that adjusts the sending rate through feedback generated by packet wash and sent by the receiver. The feedback is sent in the form of a chunk NACK (Negative Acknowledgment) to indicate congestion to the sender when a chunk has been dropped, as shown in Fig. 6. The key insight is that dropping chunks significantly reduces the congestion, so the sending rate does not have to be modified aggressively upon encountering a congestion event. Further, the number of received chunks allows the sender to actually measure the achieved rate, towards which the congestion window should converge. Obviously, the rate should be adjusted towards something that is achievable without dropping chunks. To this end, the network and the end-points cooperate to react in proportion to the extent of congestion. By setting aggressive congestion thresholds in the forwarding nodes and a mild reaction at the end-points, the protocol avoids intermittent packet losses (and re-transmissions).
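One possible, deliberately simplified window update consistent with this behavior is sketched below: on chunk NACKs the sender converges toward the measured goodput instead of halving its window, and it probes additively when no chunks were washed. The constants and the blending factor are assumptions for illustration, not the protocol of [5].

```python
def update_cwnd(cwnd: float, chunks_sent: int, chunks_acked: int,
                chunk_bytes: int, blend: float = 0.5) -> float:
    """Return the new congestion window (in bytes per RTT)."""
    if chunks_sent == 0:
        return cwnd
    if chunks_acked < chunks_sent:                  # chunk NACKs received: mild reaction
        achieved = chunks_acked * chunk_bytes       # bytes actually delivered this RTT
        return max(chunk_bytes, (1 - blend) * cwnd + blend * achieved)
    return cwnd + chunk_bytes                       # no washing observed: probe upwards

cwnd = 20 * 1024.0
for sent, acked in [(20, 20), (21, 18), (19, 19)]:
    cwnd = update_cwnd(cwnd, sent, acked, chunk_bytes=1024)
    print(round(cwnd))
```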
Another benefit of such a transport protocol is that the per-packet delay is more predictable. The intuition is as follows: as buffers fill up, they reach the threshold at which chunks start being dropped. When chunks are dropped, packets are smaller and can be processed and forwarded faster, which reduces the buffer occupancy below the threshold. If the arrival rate does not vary, the buffer will exceed the threshold again, chunks are dropped again, and the buffer empties below the threshold, creating a virtuous cycle when dealing with congestion. The threshold therefore acts as a sticky point for the buffer occupancy, and because the occupancy stays around the threshold, the buffering delay becomes predictable.

VI. USE CASES OF QUALITATIVE COMMUNICATION

After describing QC, we now turn our attention to some of its use cases.

A. Significance Difference of Packets in Video Streaming

Recent studies [15] show that, globally, IP video traffic will be 82 percent of all IP traffic (both business and consumer) by 2022, up from 75 percent in 2017, and Internet video traffic will continue to grow fourfold from 2017 to 2022, a CAGR of 33 percent. With the rapid growth of video streaming traffic, it is foreseen that multiple video streaming flows are more likely to share a bottleneck link, which would inevitably cause network congestion.

A visual scene in a video is represented in digital form by sampling the real scene spatially on a rectangular grid in the video image plane, and sampling temporally at regular time intervals as a sequence of still frames. Correspondingly, modern media codecs [16] incorporate three types of scalability, i.e., temporal scalability, spatial scalability, and quality scalability, which adapt the video bit stream by inserting or removing portions of it in order to accommodate the different preferences and connection types of end users as well as the network conditions.
The levels of scalability included in the video stream affect the quality of the media presented on the end users' devices. For example, the scalability could be expressed as three different levels of video quality, i.e., low, medium, and high, and the video server could adaptively regulate the sending rate according to the changing network conditions [17]. By leveraging the flexibility and the variety of video qualities enabled by those types of scalability, minimizing the possibility of network congestion for video streaming can often be achieved by rate control and video adaptation methods [18], [19]. However, the quality adaptation might not be prompt enough to cope with the dropping of packets on the wire due to network congestion and resource competition among concurrent video streams. Although DiffServ [20], [21] is used to manage resources such as bandwidth and queuing buffers on a per-hop basis between different classes of traffic, packet dropping does not differ within the same class. As the majority of Internet traffic becomes video streaming, such differentiation and prioritization at the traffic-class level is not effective enough to eliminate packet dropping among the competing video streams and the possibility of degraded service levels.

With the various kinds of scalability implemented in video codecs, it is not difficult to see that some bits of an encoded video stream can be more important than others [22]. Bits belonging to the base layer are usually more significant to the decoder than bits belonging to enhancement layers. In the following, we take H.264/MPEG-4 as the example to further analyze the characteristics of video packets. MPEG-4 exploits the spatial and temporal redundancy inherent in video images and sequences. The temporal sequence of MPEG frames consists of three types:

• I-Frame: I-frames are key frames that provide checkpoints for re-synchronization or re-entry to support trick modes and error recovery. These frames consist only of macroblocks that use intra-prediction; in other words, they are spatially encoded within themselves and are reconstructed without any reference to other frames.

• P-Frame: P-frame stands for Predicted Frame and allows macroblocks to be compressed using temporal prediction in addition to spatial prediction. P-frames are forward predicted or extrapolated, and the prediction is unidirectional. A motion vector is calculated to determine the value and direction of the prediction for each macroblock. A P-frame refers to a picture in the past and might be referenced by a P-frame after it, or by a B-frame before or after it.

• B-Frame: B-frame stands for bidirectionally predicted frame, which can refer to frames that occur both before and after it. B-frames typically do not serve as a reference for other frames; thus, they can be dropped without significant impact on the video quality.

Losing the first I-frame in a GOP (Group of Pictures) can leave the video picture missing for a few seconds, because the P- and B-frames referencing that I-frame can be neither decoded nor displayed. Video scenes with a low level of movement are less sensitive to both B-frame and P-frame packet loss, whereas video scenes with a high level of movement are more sensitive to both. A lost P-frame can impact the remaining part of the GOP. A lost B-frame has only local effects in slowly moving content or content with a large static background. In a scene with dynamically moving content, losing a B-frame has a more dramatic impact, and its scale can be as far-reaching as a P-frame loss.
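As an illustration, the sketch below maps the frame characteristics above to a chunk significance level that could feed the significance-based packetization of Section IV-A; the RoI weighting anticipates the discussion that follows. The numeric levels and the RoI bonus are illustrative choices, not values prescribed in this paper.

```python
def chunk_significance(frame_type: str, referenced: bool, roi: bool,
                       high_motion: bool) -> int:
    """Assign a relative significance level to a chunk carrying one video frame."""
    base = {"I": 4, "P": 3, "B": 1}[frame_type]
    if frame_type == "B" and (referenced or high_motion):
        base = 2          # B-frames matter more in high-motion scenes or when referenced
    return base + (1 if roi else 0)   # macroblocks in the Region of Interest get a bonus

gop = [("I", True, True, False), ("P", True, False, False),
       ("B", False, False, True), ("B", False, False, False)]
for frame in gop:
    print(frame[0], chunk_significance(*frame))
```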
Macroblocks identified as representing objects in the RoI (Region of Interest) are likely more important than macroblocks of non-RoI regions. Packets carrying RoI macroblocks in the video stream therefore need a higher priority to be retained than packets carrying non-RoI macroblocks.

According to the characteristics of the frames contained in the video packet payload, namely the frame type, whether the frames are referenced by other frames, the movement level of the pictures, and whether the picture contained in the packet belongs to an RoI, a significance difference can exist among packets with respect to video decoding at the receiver side and the QoE improvement of end users. The dropping priority can be implemented at the packet level in the network. If a packet payload contains multiple macroblocks with different priorities, then packet wash, or partial packet dropping, can be applied based on the priority indicated for the different portions of the packet. The network is thus able to treat the packets of video streams in a differentiated manner and at a finer granularity than DiffServ. Re-transmissions can be largely eliminated, and the receiving end user can consume as many of the delivered packets as possible in time, with acceptable quality.

B. Jumbo Packet in Multi-Camera Assisted Remote Driving

Advances in autonomous driving have been impressive, owing to technologies that receive and act on the feeds from a massive number of sensors within the vehicle's computation unit itself. In addition, a vehicle can be mounted with a number of cameras to further assist the autopilot; for example, the eight surround cameras mounted on Tesla cars [23] provide 360 degrees of visibility around the car at up to 250 meters of range. At the same time, a remote driving service that resides in the central cloud or in the nearest edge cloud further improves the autonomous experience and provides a higher safety guarantee to the vehicles. It requires the data collected by the vehicle's computation unit, including the images captured by the in-car cameras, to be sent to the edge cloud or the central cloud, in order to apprehend more information than the vehicle itself can see and to notify the vehicles in the proximity of the location where an incident happens. Remote driving complements autonomous driving and enhances safety where it falls short. Unexpected objects or events can occur at any point in time, for example a deer running out of the forest to cross the road, as shown in Fig. 7. Any latency incurred by the network adds to the latency already incurred by human reaction time.
Fig. 7. Multiple cameras mounted on car (Some icons made by Freepik from www.flaticon.com)

In addition to supporting significant data volumes in the network, the end-to-end latency must be extremely low. Therefore, complete loss and re-transmission of packets due to network congestion cannot be afforded.

The images captured by the in-car cameras (cameras 1, 2, 3, and 4 in Fig. 7) are commonly concatenated and carried in a jumbo packet, as shown in Fig. 8, in order to save signaling and packet header overhead. In the current Internet, under network congestion, the jumbo packet is likely to be dropped entirely by the congested router. Since in this example the incident of the deer running out of the forest to cross the road requires prompt notification of the nearby vehicles, such complete dropping of the jumbo packet and subsequent re-transmissions are not acceptable; this can be mitigated by QC and the associated packet wash operation. The image from camera 3 captures the deer, which is regarded as a dangerous object to the on-road vehicles. Therefore, the image data from camera 3 has the highest priority to be retained in the jumbo packet and delivered to the server. If dropping from the tail of the jumbo packet is adopted for the packet wash operation in the network, the image data from cameras 1, 2, 3, and 4 is re-ordered by descending significance level as shown in Fig. 9, with the tail of the jumbo packet holding the least important image data and the head holding the most important image data (i.e., the image data from camera 3) in this scenario. With this re-packetization of the jumbo packet and QC/packet wash enabled in the network, even when network congestion happens at multiple intermediate routers (as shown in Fig. 9, the image data captured by cameras 4, 1, and 2 is truncated from the tail of the jumbo packet sequentially by multiple congested routers), the most important image data is preserved during transmission and reaches the server in time.

Fig. 8. Payload of jumbo packet

Fig. 9. Payload of jumbo packet
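The sketch below shows the re-ordering step of Fig. 9: camera images are concatenated in decreasing order of significance so that tail-drop packet wash at congested routers removes the least important views first. The significance scores (e.g., camera 3 ranks highest because it sees the deer) would in practice come from on-board detection; they are hard-coded here purely for illustration.

```python
def build_jumbo_payload(images: dict, significance: dict):
    """Order camera images by descending significance for a tail-droppable jumbo packet."""
    order = sorted(images, key=lambda cam: significance[cam], reverse=True)
    chunks = [images[cam] for cam in order]
    return order, chunks

images = {1: b"img-cam1", 2: b"img-cam2", 3: b"img-cam3-deer", 4: b"img-cam4"}
significance = {1: 2, 2: 1, 3: 9, 4: 1}
order, chunks = build_jumbo_payload(images, significance)
print(order)   # [3, 1, 2, 4]: camera 3 leads the payload, so it survives tail truncation
```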
C. Field of View in AR/VR

In 360 video streams or AR/VR applications, the video stream is usually decomposed into tiles. Each tile maps to a view angle on the sphere; put together, the tiles recreate the whole 360-degree domain. However, since the user only looks at a fraction of this domain, it is not necessary to transmit all the tiles at all times. Many mechanisms have attempted to prioritize parts of the stream. The saliency in images can be used to prioritize different parts of the stream [24], [25]; selective segmentation [26] is also used in this context. Prediction algorithms have been designed to anticipate the content that the user will look at, which helps prioritize the views according to the likelihood that the user will watch them [27]–[30]. [31] offers a discussion on how networks can better support 360 video streaming.

We have observed in the previous section that packet trimming can be used to respond to congestion and varying network conditions. [4] used this to design a practical protocol for immersive streams such as AR, VR, or 360-degree video streaming. It proposes to use QC for 360 video streams in the following manner: first, it composes a payload with chunks, where each chunk corresponds to a specific tile for the next time segment being requested. The tiles most central to the FoV (Field of View) are assigned a higher priority. The payload is therefore composed of k chunks c_1, ..., c_k, each corresponding to the transmission of a tile s_{t,f_i} for the next time interval t, where the f_i are ordered by distance from the center of the FoV. In this basic algorithm, the packets are transmitted and, if congestion occurs, some chunks are dropped that relate to areas of the FoV that are less central and less important.

[4] also proposes a second algorithm that uses QC to pre-fetch tiles. That is, the chunks are divided into chunks for the next segment at time t and pre-fetched chunks for the segment at time t+1. In this scenario, the payload is composed of chunks c_1, ..., c_k, c_{k+1}, ..., c_{2k}, with c_1, ..., c_k as defined above and c_{k+1}, ..., c_{2k} corresponding to the same FoV tiles but for time segment t+1.
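The sketch below shows one way such a payload could be composed: the k tiles closest to the FoV centre for segment t, followed by the same tiles pre-fetched for segment t+1 at lower priority, so that tail-drop washing removes the pre-fetched chunks first. The tile coordinates and the distance metric are simplified assumptions.

```python
import math

def compose_fov_payload(tiles, fov_center, k, segment_t):
    """Return (segment, tile, priority_rank) tuples; lower rank = higher priority."""
    nearest = sorted(tiles, key=lambda tile: math.dist(tile, fov_center))[:k]
    current = [(segment_t, tile, rank) for rank, tile in enumerate(nearest)]
    prefetch = [(segment_t + 1, tile, k + rank) for rank, tile in enumerate(nearest)]
    # pre-fetched chunks sit at the tail, so packet wash discards them first
    return current + prefetch

tiles = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
for seg, tile, rank in compose_fov_payload(tiles, fov_center=(0, 0), k=2, segment_t=7):
    print(seg, tile, rank)
```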
The priority is again decreasing across the chunks, so that the pre-fetched chunks c_{k+1}, ..., c_{2k} are dropped preferentially. If chunks from the pre-fetching sequence c_{k+1}, ..., c_{2k} are successfully transmitted, then there are two possibilities:

• if the FoV stays the same during the next transmission slot, the pre-fetched chunks are valid and are not transmitted again; more chunks can be pre-fetched instead.

• if the FoV varies enough that the pre-fetched chunks are useless, another transmission of regular and pre-fetched chunks is initiated for the new FoV.

There is little penalty in increasing the number of chunks (as long as the packet fits within the maximum transmission unit of the network), as they may be dropped during congestion. As seen from this example, there is a symbiosis between AR/VR applications and QC: the prioritization of chunks maps directly onto the FoV.

D. Last Resort for High-Precision Networking

Today's Internet technology, based on the best-effort (BE) principle, is gradually running into its limits. It only does its utmost to transport packets to their predetermined destinations and lacks any assurance of QoS in terms such as end-to-end latency and packet loss ratio, which are increasingly demanded by future Internet applications [1]. Violations of service level objectives such as end-to-end latency in mission-critical applications would result not merely in an ungracefully degraded experience but in a disastrous breakdown. High-precision networking [2], which guarantees service levels with sufficient accuracy, is thus missing from today's Internet technology, yet it is the prerequisite to unlocking the economic potential of future networking applications.

Congestion in the Internet is unpredictable, which naturally becomes the major obstacle to high-precision networking. Packets must be queued in a certain order in the routers until they can be transmitted. The depth of the queue (i.e., the size of the packets placed in front in the queue) has a dominating effect on a packet's dwell time and, consequently, on the packet's end-to-end latency. The dwell time of a packet in a router is defined as the amount of time it spends in the router from its initial arrival until its full departure. The dwell time is composed of three major components: (1) processing delay, the time it takes the node to process the packet header; (2) queueing delay, the time the packet spends in the outgoing queue before the last bit of the packet is transmitted; and (3) transmission delay, the time it takes to serialize the packet and transmit it over the wire, generally proportional to the packet size and the egress bandwidth.

Various Internet technologies have been proposed in the literature with the intention of improving and optimizing the QoS built on the BE principle, navigating trade-offs such as utilization and fairness. They generally involve scheduling algorithms that determine which packet gets to enter a queue first and where the packet is placed in the queue, as well as prioritization schemes among multiple queues transmitting over the same interface. The path towards high-precision networking lies in advances of such QoS mechanisms. QC and its packet wash operation in the network can serve as the last resort for high-precision networking when other QoS approaches fail [32].
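A minimal sketch of the dwell-time estimate implied by these three components, assuming a fixed per-packet processing delay and a FIFO outgoing queue; all constants are illustrative.

```python
def estimated_dwell_time(bytes_ahead: int, packet_size: int,
                         egress_bps: float, processing_s: float = 20e-6) -> float:
    """Processing + queueing + transmission delay, in seconds."""
    queueing = bytes_ahead * 8 / egress_bps          # waiting for the packets in front
    transmission = packet_size * 8 / egress_bps      # serializing this packet
    return processing_s + queueing + transmission

# 60 kB queued ahead of a 1500-byte packet on a 1 Gb/s link:
print(round(estimated_dwell_time(60_000, 1500, 1e9) * 1e6, 1), "microseconds")
```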
This use of QC and packet wash is not a competing alternative but a complementary mechanism that can be combined with other QoS approaches. In the following, we use the end-to-end latency guarantee (i.e., the in-time guarantee) as the example of high-precision networking. Packets that require an in-time guarantee might already exceed their end-to-end latency budget, in other words, be in danger of “being late”, even after all existing QoS approaches in the Internet have exhausted their capabilities. QC can potentially accelerate such a target packet by applying packet wash to the packet itself and to the packets ahead of it in the queue that are allowed to be trimmed, thereby reducing the packet's dwell time in the congested router.
Whether a packet is in danger of being late is assessed when it is en-queued. If the local latency budget (the packet's per-hop deadline) is less than the sum of the expected processing delay (likely to be fixed) and the estimated queueing and transmission delays (determined by the current queue depth, i.e., the front packets' total size, and the egress bandwidth), the packet is deemed to be in danger of missing its deadline. If the local latency budget for a packet i is denoted LLB_i and its estimated dwell time is denoted T_i, the amount of time that needs to be shaved off, expressed as the amount of payload that needs to be truncated from the front packets for packet i (the Needed Truncation Amount, NTA_i), is calculated as NTA_i = (T_i − LLB_i) × B, where B is the egress bandwidth. Applying packet wash and truncating chunks from the front packets in the queue is the last resort, and such a packet is referred to as a Wash Trigger Packet (WTP). These operations should only be applied when truly necessary; the proposed method is complementary to existing QoS approaches and integrates seamlessly with them.
Among the front packets, some may be washable and others non-washable. Fig. 10 shows an example of an outgoing queue in which packets 3 and 9 are assessed to be WTPs, while packets 4 and 7 are not allowed to be washed. In this case, packets 1 and 2 can be washed to keep packet 3 from being late, and packets 5, 6 and 8 can be washed to keep packet 9 from being late.
Fig. 10. Example of WTP and washable packets

VII. PERFORMANCE ILLUSTRATION
A. AR/VR
In [4], the benefit of QC for immersive video streaming was evaluated by computing the gain on a set of video traces. The evaluation environment consisted of forming packets with chunks that would be in the next requested field of view at time t, with high priority.
The remaining space was given to chunks for the video stream at time t+1 that were pre-fetched, with low priority. In case of congestion, the pre-fetched chunks for time t+1 would be dropped first, then the chunks for time t that were far away from the center of the FoV, then the chunks for time t that were near the center of the FoV.
Fig. 11. Results on a set of video traces: received chunks for QC with and without pre-fetching
Fig. 11 presents the results of the algorithms from [4] for a range of video streams and a time-varying communication channel. In this illustration, the packets are composed of three chunks and the y-axis shows the number of received chunks. The channel varies so that chunks may be dropped, and the average capacity of the channel is 2.5 chunks. Without QC, no packet would get through, as the channel capacity is too low for a full packet. With QC, 2.25 chunks are transmitted on average. When QC is combined with pre-fetching, useful chunks are transmitted ahead of time when there is enough capacity, yielding a throughput of 2.5 chunks. Pre-fetching chunks with QC thus improves the QoE for end users: there is a significant gain in the number of chunks that are eventually received, since the pre-fetching mechanism allows some chunks to be transmitted while conditions are good, and when conditions degrade the packet is not dropped as a whole, only its less important information is.

B. In-Time Guarantee
Fig. 12. Packet delivery success ratio comparison under SDFS
[32] evaluated how the proposed QC and packet wash mechanisms could serve as the last resort for high-precision in-time guarantees. A simplified topology is considered, with n senders sending packets to m receivers. There is one intermediate router between the senders and receivers, and all packets towards the receivers require an in-time guarantee and are buffered in the same outgoing queue. Every packet is subject to the packet wash operation and is associated with a wash allowance ratio, defined as the ratio of the maximum amount allowed to be truncated during transmission to the original packet payload size. The network nodes adopt the Smallest Deadline First Schedule (SDFS) [33] as the packet scheduling scheme for the outgoing queue.
Fig. 12, from [32], shows the packet delivery success ratio with and without the QC packet wash operation in the network. The packet delivery success ratio is defined as the fraction of packets that meet their corresponding deadlines; unsuccessful packets are dropped and cannot reach the receivers in time. A packet's dwell time is reduced by applying packet wash to all packets in front of the packet and to the packet itself. In the simulation, the packets' deadlines are intentionally configured such that all packets under SDFS without packet wash would miss them, so the packet delivery success ratio is 0. However, by applying packet wash to the packets in the outgoing queue, even with a small wash allowance ratio (e.g., 10%), the packet delivery success ratio under SDFS is dramatically increased to more than 80%, as shown in Fig. 12. As the wash allowance ratio increases, more packets under the SDFS scheme are delivered successfully to the receivers with their deadlines met.
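As a simplified, illustrative sketch only, and not the simulator used in [32], the following Python code captures the wash-to-meet-deadline logic evaluated above: when a Wash Trigger Packet would miss its per-hop deadline, washable bits are truncated from the packets ahead of it, and from the packet itself, up to each packet's wash allowance ratio. All class and function names here are our own assumptions.

```python
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Packet:
    payload_bits: int      # current payload size (may already have been washed)
    original_bits: int     # payload size as sent by the source
    wash_allowance: float  # fraction of the original payload that may be truncated
    deadline_s: float      # local latency budget (per-hop deadline), LLB_i

    def washable_bits(self) -> int:
        # Bits that can still be truncated without exceeding the allowance;
        # a non-washable packet simply has wash_allowance = 0.
        floor_bits = int(self.original_bits * (1.0 - self.wash_allowance))
        return max(0, self.payload_bits - floor_bits)


def wash_for_deadline(queue: List[Packet], wtp_index: int,
                      processing_delay_s: float, egress_bw_bps: float) -> bool:
    """Try to keep the Wash Trigger Packet at wtp_index on time by washing
    the packets ahead of it and, if needed, the packet itself."""
    wtp = queue[wtp_index]
    front = queue[:wtp_index]
    # Estimated dwell time T_i: processing + queueing + transmission delay.
    dwell = processing_delay_s + (sum(p.payload_bits for p in front)
                                  + wtp.payload_bits) / egress_bw_bps
    if dwell <= wtp.deadline_s:
        return True  # not in danger of being late, nothing to wash
    # Needed Truncation Amount: NTA_i = (T_i - LLB_i) * B.
    nta_bits = math.ceil((dwell - wtp.deadline_s) * egress_bw_bps)
    for p in front + [wtp]:
        cut = min(p.washable_bits(), nta_bits)
        p.payload_bits -= cut
        nta_bits -= cut
        if nta_bits <= 0:
            return True  # enough payload trimmed to meet the deadline
    return False  # deadline cannot be met even after full washing
```

The order in which the front packets are washed does not change the total queueing delay removed, so the sketch simply walks the queue front-to-back; other orderings are equally possible.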
We conclude that QC and the packet wash mechanism can further support the high-precision in-time guarantee.

VIII. CONCLUSION AND FUTURE DIRECTIONS
The ITU-T Focus Group on Network 2030 has identified some emerging applications, analyzed technical gaps, and defined some new services that are expected to support new
and future applications, but has not provided any solutions to support such new services. This paper describes Qualitative Communication (QC) using New IP. QC allows for partial data delivery, which is sometimes good enough for video-like applications. QC can be regarded as a new service or a solution to some of the problems studied by the ITU-T Focus Group on Network 2030. For QC we have made the following assertions:
(1) Not every byte in the same packet has the same significance, and partially delivered data is sometimes reasonably usable for some use cases;
(2) Dropping a whole packet in the case of congestion is too wasteful and sometimes unnecessary;
(3) Drops and re-transmissions, such as those performed by TCP, may cause packets to arrive too late to be usable because they have missed their deadlines, especially for mission-critical applications;
(4) Utilizing the significance information, operations such as packet wash can be performed at the sub-packet level so that in-time deadlines can be guaranteed and the received packets, even though partially delivered, remain usable for some applications.
Our future directions include: (1) continuing to explore the use cases of QC for remote driving with multi-camera sensory feedback, multimedia streaming, etc.; (2) an in-depth study of other network applications that might need support from different functions of New IP, including the Shipping Spec, Contract, and Qualitative Payload.

REFERENCES
[1] ITU-T Focus Group on Network 2030 White Paper, “Network 2030 - A blueprint of technology, applications and market drivers towards the year 2030 and beyond,” May 2019.
[2] ITU-T Focus Group on Network 2030 Deliverable, “New services and capabilities for Network 2030: description, technical gap and performance target analysis,” May 2019.
[3] R. Li, K. Makhijani, H. Yousefi, C. Westphal, L. Dong, T. Wauters, and F. De Turck, “A framework for Qualitative Communications using Big Packet Protocol,” in Proceedings of the 2019 ACM SIGCOMM Workshop on Networking for Emerging Applications and Technologies, 2019.
[4] C. Westphal, D. He, K. Makhijani, and R. Li, “Qualitative communications for augmented reality and virtual reality,” in 2021 IEEE 22nd International Conference on High Performance Switching and Routing (HPSR), 2021.
[5] A. Albalawi, H. Yousefi, C. Westphal, K. Makhijani, and J. Garcia-Luna-Aceves, “Enhancing end-to-end transport with packet trimming,” in GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020.
[6] S. Shalunov et al., “Low extra delay background transport (LEDBAT),” RFC 6817, pp. 1–25, 2012.
[7] N. Cardwell et al., “BBR: Congestion-based congestion control,” ACM Queue, 2016.
[8] L. Dong, K. Makhijani, and R. Li, “Qualitative communication via network coding and New IP: invited paper,” in International Conference on High Performance Switching and Routing (HPSR), 2020.
[9] S. Clayman, M. Tuker, H. Arasan, and M. Sayıt, “The Future of Media Streaming Systems: Transferring Video over New IP,” in 2021 IEEE 22nd International Conference on High Performance Switching and Routing (HPSR), 2021, pp. 1–6.
[10] R. Li, K. Makhijani, and L. Dong, “New IP: A data packet framework to evolve the Internet: Invited Paper,” in 2020 IEEE 21st International Conference on High Performance Switching and Routing (HPSR), 2020.
[11] R. Li, A. Clemm, U. Chunduri, L. Dong, and K. Makhijani, “A new framework and protocol for future networking applications,” in ACM SIGCOMM Workshop on Networking for Emerging Applications and Technologies (NEAT 2018), May 2018.
[12] L. Dong and R. Li, “In-packet network coding for effective packet wash and packet enrichment,” in IEEE Globecom Workshops, 2019.
[13] C. Fragouli, J. L. Boudec, and J. Widmer, “Network coding: an instant primer,” ACM SIGCOMM Computer Communication Review, vol. 36, no. 1, pp. 63–68, January 2006.
[14] L. Dong and R. Li, “Optimal chunk caching in network coding-based qualitative communication,” Digital Communications and Networks, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2352864821000304
[15] Cisco, “Cisco visual networking index: forecast and trends, 2017–2022,” White Paper, 2018.
[16] G. Conklin, G. Greenbaum, K. Lillevold, A. Lippman, and Y. Reznik, “Video coding for streaming media delivery on the internet,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 3, pp. 269–281, 2001.
[17] ISO/IEC, “23009-1:2019, Dynamic Adaptive Streaming over HTTP (DASH) - Part 1: Media Presentation Description and Segment Formats,” 2019. [Online]. Available: https://www.iso.org/standard/79329.html
[18] A. Bentaleb, B. Taani, A. C. Begen, C. Timmerer, and R. Zimmermann, “A survey on bitrate adaptation schemes for streaming media over HTTP,” IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 562–585, 2019.
[19] D. Wu, Y. Hou, W. Zhu, Y.-Q. Zhang, and J. Peha, “Streaming video over the internet: approaches and directions,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 3, pp. 282–300, 2001.
[20] D. L. Black, Z. Wang, M. A. Carlson, W. Weiss, E. B. Davies, and S. L. Blake, “An Architecture for Differentiated Services,” RFC 2475, 1998. [Online]. Available: https://rfc-editor.org/rfc/rfc2475.txt
[21] D. L. Black and P. Jones, “Differentiated services (Diffserv) and real-time communication,” RFC 7657, 2015. [Online]. Available: https://rfc-editor.org/rfc/rfc7657.txt
[22] L. Dong, K. Makhijani, and R. Li, “A use case of packets’ significance difference with media scalability,” Internet Engineering Task Force, Internet-Draft, 2021, work in progress. [Online]. Available: https://datatracker.ietf.org/doc/html/draft-dong-usecase-packet-significance-diff-00
[23] Tesla, “Advanced sensor coverage.” [Online]. Available: https://www.tesla.com/autopilot
[24] T. Nguyen et al., “Static saliency vs. dynamic saliency: a comparative study,” in ACM MM, 2013.
[25] C. Fan et al., “Fixation prediction for 360 video streaming in head-mounted virtual reality,” in ACM Workshop on Network and OS Support for Digital Audio and Video, 2017.
[26] A. Kirillov, K. He, R. B. Girshick, C. Rother, and P. Dollár, “Panoptic segmentation,” CoRR, vol. abs/1801.00868, 2018. [Online]. Available: http://arxiv.org/abs/1801.00868
[27] T. Alshawi et al., “Understanding spatial correlation in eye-fixation maps for visual attention in videos,” in IEEE ICME, 2016.
[28] S. Chaabouni et al., “Transfer learning with deep networks for saliency prediction in natural video,” in IEEE ICIP, 2016.
[29] L. Xie et al., “CLS: A cross-user learning based system for improving QoE in 360-degree video adaptive streaming,” in 2018 ACM Multimedia Conference, 2018.
[30] D. He, C. Westphal, and J. Garcia-Luna-Aceves, “Joint rate and FoV adaptation in immersive video streaming,” in ACM SIGCOMM Workshop on AR/VR Networks, Aug. 2018.
[31] D. He et al., “Network support for AR/VR and immersive video application: A survey,” in ICETE SIGMAP, 2018.
[32] L. Dong and A. Clemm, “High-precision end-to-end latency guarantees using packet wash,” in 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM), 2021, pp. 259–267.
[33] C. Wilson, H. Ballani, T. Karagiannis, and A. I. T. Rowstron, “Better never than late: meeting deadlines in datacenter networks,” in Proceedings of the ACM SIGCOMM, 2011.