
WO2021101640A1 - Method and apparatus of packet wash for in-time packet delivery - Google Patents

Method and apparatus of packet wash for in-time packet delivery

Info

Publication number
WO2021101640A1
WO2021101640A1 (PCT/US2020/055428)
Authority
WO
WIPO (PCT)
Prior art keywords
data packets
queue
packet
data
packets
Prior art date
Application number
PCT/US2020/055428
Other languages
French (fr)
Inventor
Lijun Dong
Alexander Clemm
Original Assignee
Futurewei Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurewei Technologies, Inc.
Publication of WO2021101640A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/28 Flow control; Congestion control in relation to timing considerations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425 Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA

Definitions

  • This disclosure generally relates to data transmission in a network.
  • Network transport control methods are responsible for reliable and in-order delivery of data from a sender to a receiver through various network nodes. Using current control methods, any error due to link congestion or intermittent packet loss in the network can trigger re-transmission of data packets. This results in unpredictable delays as well as an increase in the network load, wasting network resources and capacity, and often failing to meet end-to-end latency objectives.
  • A packet is the fundamental unit upon which different actions such as classification, forwarding, or discarding are performed by the network nodes.
  • Different schemes have been proposed to improve the efficiency of data transmissions and increase predictability, some of which are based on mechanisms for efficient and faster re-transmissions, while others utilize redundant transmissions.
  • a method for controlling data flows in a network comprising receiving a first set of data packets at a queue of a node, each of the first set of data packets includes a payload and a delivery time; storing each of the first set of data packets in the queue of the node; classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
  • the second set of data packets comprises a data packet ahead of the trigger packet in the queue.
  • the one or more chunks of data in the payload are defined in a qualitative service header of the first set of data packets, each chunk being a sub-set of the payload.
  • when the dropping fails to reduce the size of the payload, such that a dwell time of the trigger packet is larger than a per-hop deadline designated for a current hop of the trigger packet but a total delivery time for the trigger packet is not exceeded, the method further comprises forwarding the trigger packet to the next hop node and recalculating the per-hop deadline at subsequent hops.
  • dropping one or more chunks of data comprises dropping one or more chunks of data of the trigger packet to reduce a dwell time of the trigger packet such that the trigger packet will satisfy the delivery time.
  • the method further comprising setting a state of the queue to active when one or more of the first set of data packets is classified as the trigger packet, wherein the state of the queue indicates whether the dropping should be performed.
  • the method further comprising classifying multiple data packets of the first set of data packets as trigger packets, and counting the trigger packets by incrementing a number when one of the trigger packets is queued and decrementing the number when a trigger packet is dequeued, wherein the state is set to active when the number is greater than zero and set to inactive when the number is zero, and the dropping is performed on the second set of data packets when the state is active.
  • the method further comprising determining a queue depth for the first set of data packets, where the queue depth is a number of packets ahead of the trigger packet; setting a state of the queue to active when the queue depth is greater than zero, wherein the state of the queue indicates whether the dropping should be performed; and decrementing the queue depth when each of the first set of data packets are dequeued from the queue while the state is active.
  • dropping the one or more chunks of data in the payload is based on a coding scheme and a size of each of the one or more chunks of data.
  • the coding scheme is one of a significance level coding scheme or network coding scheme.
  • the method further comprising calculating a needed truncation amount for the trigger packet; calculating a total allowable wash amount for the second set of data packets; and calculating a truncation size for each of the second set of data packets based on the needed truncation amount and the total allowable wash amount, wherein the needed truncation amount is distributed among a plurality of the second set of data packets.
  • the second set of data packets are washable data packets.
  • modifying the first set of data packets in the queue is relative to a scheduling strategy in which the first set of data packets are transmitted without dropping chunks of data.
  • a node controlling data flow in a network comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a first set of data packets at a queue of a node, each of the first set of data packets includes a payload and a delivery time; store each of the first set of data packets in the queue of the node; classify a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; drop one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forward the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
  • a non-transitory computer-readable medium storing computer instructions for controlling data in a network, that when executed by one or more processors, cause the one or more processors to perform the steps of receiving a first set of data packets at a queue of a node, each of the first set of data packets includes a payload and a delivery time; storing each of the first set of data packets in the queue of the node; classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
  • FIG. 1 illustrates an embodiment of a network environment suitable for use with the present technology.
  • FIG. 2A illustrates a general data format for the qualitative services data transmission framework described herein.
  • FIG. 2B illustrates the general concept of linear network coding as applied to application chunks in a payload.
  • FIG. 3 illustrates one example of IP packet which may be used to implement the present technology.
  • FIG. 4 illustrates an example network in which routers have a queue storing data packets.
  • FIG. 5A illustrates an example network in which a router includes a latency guarantee queue with dwell time reduction and packet washing services.
  • FIG. 5B illustrates an example flow diagram of the SL assessor in FIG. 5A.
  • FIG. 5C illustrates an example flow diagram of the SL washer in FIG. 5A.
  • FIG. 6 illustrates an example state machine for maintaining a state of the queue.
  • FIGS. 7A and 7B illustrate examples of a packet wash operation on packets in the front of the queue.
  • FIG. 8 illustrates an example of packet washing on a packet being dequeued from the queue.
  • FIG. 9 illustrates an example of packet washing on packets in the front of the queue.
  • FIG. 10 illustrates an embodiment of a network node comprising a router.
  • FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
  • the present disclosure will now be described with reference to the figures, which in general relate to managing network traffic to improve network throughput and reliability and reduce latency.
  • the technology includes a data transmission framework which comprises application, transport, and network components, which improves the data flow through the network when adverse network conditions are encountered.
  • the present disclosure describes a triggering mechanism by which packets arriving at a router or network node are marked or otherwise identified such that the size of the data packet may be reduced by performing a packet wash operation on the packet.
  • a packet wash operation (or “packet wash” or “scrubbing”) is defined herein as a mechanism by which to reduce the size of one or more packets (which may include “washing” the entire packet and dropping it). Washing one or more packets may include selective or random removal of chunks of data, such as chunks of the packet’s payload. In one embodiment, the removal is based on a relationship among the chunks of data or the individual significance level of each chunk.
  • Washing packets allows a partial delivery of packets by removing chunks, such as less significant chunks, from the packet while retaining as much of the packet’s information as possible.
  • lower priority chunks of data may be washed or dropped according to information (or meta-data) in the header of the packet.
  • the washing operation occurs at nodes of the network, for example an intermediate network node.
  • As packets are received at the node, they are classified and stored in the queue.
  • the packet is analyzed to determine whether it includes a delivery time by which the packet should reach its destination. In one embodiment, the delivery time (or some derivative of the delivery time) is specified in a contract of the packet. If the queue determines that the packet is unable to reach its destination by the delivery time, then the packet is classified (and marked) as a trigger packet.
  • the node maintains a state of the queue. Depending on the current state of the queue, a packet wash operation may be performed on one or more of the packets.
  • the packet wash operation may reduce the size of packets as they are dequeued (or a packet may be dropped in its entirety) in order to minimize dwell time of the packet in the queue of the node. Reducing the dwell time of one or more packets stored in the queue helps to ensure that the packets may reach their respective destination within the delivery time without over-washing of the packets.
  • FIG. 1 illustrates an embodiment of a network system 50 suitable for use with the present technology.
  • FIG. 1 illustrates a plurality of network enabled devices including, by way of example only, a mobile device 110, a computer 112 and a server 108.
  • the devices are coupled via network data paths 100 and through several network nodes 106-1, 106-2 and 106-3.
  • Each node may be a router, switch or other network component which is operable to route network traffic in any number of transport control methods.
  • Each network enabled device 108, 110, 112 may include one or more applications 114, which generate network data which may be communicated to other network enabled devices.
  • the applications 114 may be executed by a processor utilizing volatile and non-volatile memory to generate and forward the data, as well as communicate with the network interface 122 and qualitative services (QS) engine 125.
  • Each network enabled device 108, 110, 112 may include one or more network interfaces 122 allowing the device to communicate via the network data paths 100.
  • Each network interface 122 may include a QS engine 125 which may include code adapted to receive data described herein from the applications 114 on each device and perform certain aspects of the technology described herein.
  • Each network enabled device is configured to operate and/or communicate in the system 50 as a data sender or a data receiver.
  • network enabled device 108 is configured to transmit and/or receive data to/from any of the other network devices 110, 112.
  • the data paths 100 may be wireless signals or wired signals and thus the network system 50 of FIG. 1 may comprise a wired or a wireless network.
  • the environment may include additional or alternative networks including private and public data-packet networks, and corporate intranets.
  • Each network enabled device 108, 110, 112 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit, mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, tablet, wireless sensor, wearable device, consumer electronics device, a target device, device-to-device (D2D) machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, and USB dongles.
  • Each network node 106-1 through 106-3 may likewise include a network interface 124 (or multiple network interfaces) and a QS routing engine 155 allowing the node to perform certain aspects of the technology.
  • the nodes 106-1 to 106-3 may comprise access points which may use technology such as defined by IEEE 802.11n, 802.11ax, or other 802.11 standards to provide wireless network access to one or more devices.
  • an AP may be implemented by a base station of a cellular network, and the AP may implement the functions of any network node described herein.
  • the nodes can similarly provide wired or wireless access network access through other networking technologies other than 802.11 such as 4G or 5G.
  • FIG. 1 illustrates one example of a communication system; the communication system 50 could include any number of user equipment, access points, nodes, networks, or other components in any suitable configuration.
  • FIG. 2A illustrates a general data format for the qualitative services data transmission framework described herein.
  • FIG. 2A illustrates a generic IP packet 302, relative to a qualitative services IP packet 300. While the technology will be described with respect to its use based on an Internet protocol (IP) packet structure, it should be understood that other transport formats may be modified in accordance with the teachings herein.
  • a qualitative services IP packet 300 includes a Qualitative Service (QS) header and a data payload.
  • the QS header includes an indication that the packet supports qualitative services.
  • the QS header is used to identify the payload structure (the “data” in FIG. 2A) as comprising logical chunks. In one embodiment, this comprises using a QS bit in the QS header.
  • Each chunk of data (Data Chunk 1, Data Chunk 2, Data Chunk 3... Data Chunk N) is identified in the QS header.
  • positions of the data chunks are identified by offsets (in the priority level (PL) offsets field) specified in the QS header.
  • the QS header may include a checksum or cyclic redundancy check (CRC) for different chunks so that the integrity of the packet can still be verified even after a QS packet wash operation.
  • packet-level checks may be disabled and replaced by chunk-level checks.
  • each chunk has a significance-factor associated with it. As illustrated in FIG. 2A, three priority levels are shown: High, Medium, and Low. While three priority levels are illustrated, it should be recognized that any number of priority levels may be utilized in accordance with the technology.
  • the significance factor is assigned by the transmitting application on a sender network enabled device.
  • the QS header indicates the relative data priority or significance assigned by the data transmitting application in the PL offsets field. For example, the QS header can indicate different significance or priority levels of each chunk in the payload, and how to identify the payload chunks associated with these priority levels (i.e. by using an offset). In one embodiment, to identify the chunks of data, the QS header specifies a specific offset for each chunk.
  • any means of communicating the location and size of data chunks in the data of packet 300 are suitable for use in accordance with the present technology.
  • the significance information may be associated with each chunk and may be used by the QS routing engines 155 in each of the network nodes 106-1 through 106-3 to manage traffic.
  • lower significance chunks may be dropped by a network node (while keeping the rest of the packet) in the case of a network condition, such as network congestion or dwell time of a packet, being detected.
  • each QS routing engine 155 may drop lower priority chunks in low priority packets first (until low priority packets are entirely dropped), then lower priority chunks in medium priority packets, etc., depending on the priority of the chunk and that of the packet, as described below. While three priority levels are shown, the number of priority levels may be based on the network and application.
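To make the offset-based chunk dropping concrete, below is a minimal sketch, in Python, of how a node might locate and remove the least-significant chunk using a QS-style header; the QSPacket structure and the pl_offsets field are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass

@dataclass
class QSPacket:
    # (priority, offset, length) per chunk; a larger priority value means less
    # significant. Illustrative stand-in for the QS header's PL offsets field.
    pl_offsets: list
    payload: bytes

def wash_lowest_priority(pkt: QSPacket) -> QSPacket:
    """Drop the least-significant chunk and rebuild the offset table."""
    if not pkt.pl_offsets:
        return pkt
    victim = max(pkt.pl_offsets, key=lambda c: c[0])    # least-significant chunk
    _, voff, vlen = victim
    new_payload = pkt.payload[:voff] + pkt.payload[voff + vlen:]
    # Shift the offsets of every chunk that sat behind the removed one.
    new_offsets = [(p, o - vlen if o > voff else o, l)
                   for (p, o, l) in pkt.pl_offsets if (p, o, l) != victim]
    return QSPacket(new_offsets, new_payload)

# Example: a high-priority 6-byte chunk followed by a low-priority 4-byte chunk.
pkt = QSPacket([(0, 0, 6), (2, 6, 4)], b"headerdata")
print(wash_lowest_priority(pkt))    # keeps only the high-priority chunk b"header"
```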
  • FIG. 2B illustrates the general concept of linear network coding as applied to application chunks in a payload.
  • the payload in a packet originated from the sender is divided into ‘h’ chunks of the same size.
  • That is, the size of each chunk in the payload is the same.
  • One or more of the chunks in the payload may be padded with one or more zeroes such that the size of each chunk in the payload is the same.
  • the sender will include as many as ‘h’ coded chunks in each packet payload. Where there are more than ‘h’ coded chunks, multiple packets may be used, with the one or multiple packets defining one or more flows. Where there are fewer than ‘h’ coded chunks, one packet may be used. However, it is appreciated that more chunks than the original data chunks can be included in a payload (of one or more packets) to provide an additional degree of freedom in decoding the original data. Nevertheless, the use of more coded chunks than original data chunks is not required.
  • a number of degrees of freedom for original data may be defined as the number of coded chunks that are needed to decode the original data which was used to code the coded chunks.
  • a receiver would need ‘k’ chunks (or degrees of freedom) to decode the chunks.
  • the degree of freedom is ‘k’ if and only if it needs ‘k’ independent coded packets to decode the original data.
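As a toy illustration of coded chunks and degrees of freedom, the sketch below codes equal-size chunks over GF(2) (pure XOR) and computes the rank of the received coefficient vectors by Gaussian elimination; a practical coder would typically use a larger finite field, and all names here are illustrative.

```python
import random

def xor_chunks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks, n_coded):
    """Emit n_coded random GF(2) combinations of the original chunks,
    each tagged with its coefficient vector (a bitmask over the originals)."""
    coded = []
    for _ in range(n_coded):
        coeff = random.randrange(1, 1 << len(chunks))   # nonzero coefficient vector
        acc = bytes(len(chunks[0]))
        for i, ch in enumerate(chunks):
            if coeff >> i & 1:
                acc = xor_chunks(acc, ch)
        coded.append((coeff, acc))
    return coded

def degrees_of_freedom(coeff_vectors):
    """Rank of the coefficient vectors: the k original chunks are decodable
    exactly when the rank reaches k."""
    basis = {}                              # highest set bit -> reduced vector
    for v in coeff_vectors:
        while v:
            hb = v.bit_length() - 1
            if hb not in basis:
                basis[hb] = v
                break
            v ^= basis[hb]                  # Gaussian elimination over GF(2)
    return len(basis)

chunks = [b"aaaa", b"bbbb", b"cccc"]        # k = 3 equal-size original chunks
coded = encode(chunks, 4)                   # one extra coded chunk beyond k
print(degrees_of_freedom([c for c, _ in coded]))  # typically 3; independence is probabilistic
```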
  • FIG. 3 illustrates one example of an IP packet which may be used to implement the present technology.
  • In New Internet Protocol (New IP), the packet is part of an Internet framework that brings several capabilities to the present technology.
  • New IP creates a block of information about the IP packet.
  • A New IP packet 300, as illustrated, carries a contract block between a header and a data payload. Information is carried and processed along each network node hop from the sender to the receiver. The network and network nodes, if enabled, fulfill the contract specified in the packet, assuming the contract has been agreed between the packet sender/receiver and the network.
  • the contract in the New IP packet carries the contract clause of a qualitative service to be performed.
  • a qualitative service to be performed may comprise a packet wash operation, which alters a packet to reduce dwell time at a network node by, for example, dropping or removing one or more chunks of data from a packet until the dwell time has been sufficiently minimized. This is distinct from the case in which a packet wash operation is performed to reduce network congestion, as explained more below.
  • the contract includes a contract clause(s) 310 including associated metadata.
  • a contract clause 310 describes how the routers treat the packet as it traverses the network based on the predefined triggering defined in the event and condition fields, and an action specifies what actions can be taken if the event and condition are satisfied.
  • an event may comprise dwell time identified by a condition such as a lengthy delay in the queue in a network node to which an action (a packet wash operation, for example) may be applied.
  • the “Metadata” field contains data about the packet, e.g. accounting information, customized statistics about the flow on intermediate hops, the contextual information about the user, the application, etc.
  • the Metadata field may include the following to describe the information about the packet payload:
  • a “Network Coding Group” field is used to identify the original data chunks to which the linear network coding operation was applied.
  • the Network Coding Group relates to the original data content and the sender.
  • the sender may partition the original data content into multiple network coding groups, each group containing multiple chunks with equal size. When forwarding additional coded chunks, one or more coded chunks (but less than the number of chunks in the original data content) from the same network coding group may be forwarded.
  • the “Indication of linear-network-coded chunks” field indicates whether the payload contains linear-network-coded chunks or not (e.g., indicates significance level instead).
  • the “coded chunk size” field gives the information on how large the chunk size is.
  • the network nodes can find the boundary of each coded chunk in the payload using the chunk size.
  • a “Coefficients” field provides the coefficients with which the original data chunks are linearly combined to form coded chunks contained in the current payload.
  • the “Current rank” field indicates the current rank of the remaining coded chunks in the payload. Each time a linearly independent chunk is dropped from the packet, the current rank is reduced by 1.
  • the “Full rank” field indicates the full rank of the coded chunks in the payload when they are inserted by the sender.
  • It is appreciated that the format of the IP packet is an example and that the packet encoding may include any number of different fields.
  • the event, condition and/or action fields of the contract clause may not exist in the packet and other or different metadata may be included.
  • the field of the network coding group may include different elements or not be part of the metadata at all.
  • any number of different coding schemes may be employed to support qualitative communication and that coding is not limited to the disclosed embodiments.
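A rough sketch of the metadata fields listed above, together with the “Current rank” update applied when a coded chunk is washed, might look as follows; the field names mirror the description, but the actual New IP encoding may differ.

```python
from dataclasses import dataclass

@dataclass
class CodingMetadata:
    network_coding_group: int   # identifies the group of original data chunks
    linear_coded: bool          # payload carries linear-network-coded chunks
    coded_chunk_size: int       # bytes per coded chunk; defines chunk boundaries
    coefficients: list          # one coefficient vector per coded chunk in the payload
    full_rank: int              # rank of the coded chunks as inserted by the sender
    current_rank: int           # reduced by 1 whenever an independent chunk is dropped

def drop_coded_chunk(meta: CodingMetadata, payload: bytes, index: int) -> bytes:
    """Remove coded chunk `index` and update the metadata, assuming (as the
    'Current rank' rule above does) that the dropped chunk was independent."""
    start = index * meta.coded_chunk_size
    del meta.coefficients[index]
    meta.current_rank -= 1
    return payload[:start] + payload[start + meta.coded_chunk_size:]
```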
  • FIG. 4 illustrates an example network in which routers have a queue storing packets.
  • the network may also establish a guarantee of service delivery (e.g., a guarantee that a packet will arrive at its destination no later than a specified delivery time).
  • the network 400 employs one or more queues that store data packets.
  • the data packets stored in the queue(s) have a delivery time requirement (or on-time delivery), such that the data packet will be delivered on a per-hop basis or end-to-end basis within a specified delivery time.
  • Data packets that do not include any guarantee for an on-time delivery may also be stored in the one or more queues.
  • the queue is a latency guarantee queue (LGQ), such as LGQs 425-1 and 425-3. It is appreciated that while the term LGQ may be used in the various embodiments that follow, data packets that do not include any in-time guarantee may also be stored in the LGQ.
  • the delivery time of a data packet may be measured according to different embodiments.
  • the delivery time may be measured as a remaining time for delivery.
  • the remaining delivery time may be calculated by assessing the remaining latency budget for the packet to reach its destination from the current node and using information about the remaining path (e.g., the number of hops downstream and the amount of fixed latency (latency that cannot be altered)), details of which may be found in International Application PCT/US2020/023289 and U.S. Appl. No. 16/822,458, the entire contents of which are hereby incorporated by reference. For example, assume for purposes of discussion that a data packet arrives at an intermediary node with a delivery time of 10 μs.
  • the delivery time may be measured as an absolute amount of time. For example, if a data packet arrives at an intermediary node at 11:37 am and needs to arrive at its destination by 11:47 am, the intermediary node can determine the amount of time remaining to meet its deadline (11:47 am) regardless of the amount of time it takes to hop from one node to the next.
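Both measurement styles can be normalized to a single “time remaining from this node” quantity; the sketch below does so, with the ("budget", ...) / ("absolute", ...) tagging being an assumption for illustration only.

```python
def remaining_time_us(now_us: int, deadline) -> int:
    """Normalize the two deadline encodings described above.

    deadline is either ("budget", remaining_us), a latency budget carried and
    updated hop by hop, or ("absolute", wallclock_us), compared to the node clock.
    """
    kind, value = deadline
    if kind == "budget":
        return value            # e.g., 10 us left to reach the destination
    return value - now_us       # absolute deadline minus the current time
```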
  • the network 400 includes sending nodes 110, routers 106-1, 106-3 (with dedicated LGQs 425-1, 425-3) and receiver nodes 108, although the network 400 is not limited to the depicted components, but may be a much larger network with additional components, such as network 50 in FIG. 1.
  • the LGQs 425 receive data packets for latency guarantee according to a contract specified in the received data packets.
  • the LGQs 425 typically include a queuing algorithm, which is usually implemented at the router level. Specifically, output queues process the data packets that have been enqueued to await their turn to be forwarded or routed to the appropriate output port or interface.
  • a primary purpose of these queueing algorithms is to give preferential treatment to data packets having higher priority (which are enqueued at the router output port) over other enqueued data packets having lower priority.
  • the LGQs 425 are latency guarantee queues in which the data packets have a per-hop (e.g., router hop) deadline that is guaranteed, as well as a minimal dwell time at each hop.
  • the dwell time as defined herein, is the time a packet spends at a router before being forwarded to the next hop and includes: processing delay, queueing delay (affected by packet scheduling in the outgoing queue) and transmission delay (which is affected by the size of the packet and is proportional to the packet size, and is at least a part of the minimal dwell time).
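Written out, the dwell time defined above is the sum of the three delay components, and only the transmission term scales with packet size, which is exactly the term a packet wash shrinks; a small sketch:

```python
def dwell_time_us(processing_us: float, queueing_us: float,
                  pkt_bits: int, bandwidth_bps: int) -> float:
    """Dwell time at a router = processing + queueing + transmission delay.
    The transmission term is proportional to packet size, so washing chunks
    out of the payload directly reduces it."""
    transmission_us = pkt_bits / bandwidth_bps * 1_000_000
    return processing_us + queueing_us + transmission_us
```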
  • the contract (FIG. 3) of a data packet is configured to achieve a service delivery guarantee (also referred to herein as an InTimeGuaranteed contract), and the metadata (FIG. 3) of the packet may be used to carry the following information in order to instruct the routers 106 in the network to comply with the InTimeGuaranteed contract of the packet:
  • a packet scheduling scheme may use the following parameters:
  • a deadline usually refers to the remaining time until the deadline is met, but could be adjusted to be an absolute time (described below).
  • a central controller or sender may set a specific deadline at each hop, or it may be dynamically set at each hop.
  • FIG. 5A illustrates an example network in which a router includes a queue with dwell time reduction and packet washing services.
  • Network 500 is the same or similar to network 400 of FIG. 4, with the addition of the service level (SL) assessor 502, SL washer 504 and packet disposer 506, which serve to reduce the dwell time of packets stored in the queue, such as LGQ 425-1, when needed and reduce latency.
  • Each of the sending nodes 110, router 106-1, LGQ 425-1 and receiver nodes 108 are described above.
  • the SL assessor 502, SL washer 504 and packet disposer 506 are part of the LGQ 425-1 in router 106-1, as indicated by the dashed box surrounding these components.
  • Network 500 is described with reference to FIGS. 5B and 5C, which illustrate flow diagrams of the SL assessor 502 and SL washer 504, respectively.
  • the processes may be performed at any of network nodes 106-1, 106-2, 106-3, although other components capable of processing may also implement such processes.
  • a packet including, for example, a QS header and data chunks is received at a network node 106-1.
  • the packet is received from sending nodes 110.
  • the input buffer may be a dedicated hardware buffer or a dedicated portion of memory.
  • the packet is received and buffered in LGQ 425-1 of router 106-1.
  • the packets are assessed by SL assessor 502 to determine whether they are a washable packet at step 522.
  • a data packet is washable when it contains a chunk of data that is of lower priority, e.g., the packet contains a chunk of data that could be dropped. Washability of a packet can be independent of information specified in the contract, such as delivery time. Rather, it indicates the packet is eligible for a washing operation (described above). If a packet received at the LGQ 425-1 is determined to be a washable packet, the washable packet is marked as “washable” and the “wash potential” is updated in memory at step 524.
  • a data packet is marked as a washable packet when it contains a chunk of data that is of lower priority compared to other chunks of data (i.e., a lower priority that indicates it may be dropped).
  • the determination of whether a packet is a washable packet is assessed at the SL washer 504 instead of the SL assessor 502. After marking the packet as washable at step 524, or if the incoming packet is determined to not be washable (e.g., the packet is not qualitative communication compatible) at step 522, the process proceeds to step 526.
  • Packets processed at step 526 are next assessed by the SL assessor 502 to determine whether the packet should be classified as a trigger packet (or washable trigger packet).
  • Status as a trigger packet is distinct from status as a washable packet.
  • a trigger packet is a packet that is at risk of missing its delivery time (e.g., the packet is on the verge of missing the delivery time specified in the contract at the current node or for delivery to the final destination address).
  • a trigger packet may be a washable packet or non-washable packet.
  • An incoming packet is classified by the SL assessor 502 as a trigger packet when the packet is at risk of failing to satisfy (meet) its delivery time (e.g., specified in the contract).
  • a packet that is not at risk of failing to satisfy the delivery time is not classified as a trigger packet and the process proceeds to step 530 where the packet is queued in the LGQ 425-1.
  • the SL assessor 502 classifies the packet as a trigger packet and can update, when necessary, a state (also referred to as a “wash now state”) of the LGQ 425-1 at step 528.
  • a state also referred to as a “wash now state”
  • the process of classifying a received packet includes determining whether the current time (i.e., the time the packet arrives) plus the amount of time required to transmit a packet (i.e., the time remaining to reach its destination) exceeds a latest time to send a packet that will still permit the packet to reach its destination within the delivery time.
  • an amount of dwell time in the queue needs to be reduced by truncating or removing data chunks from the payload of one or more packets in the queue, including potentially the packet itself or another packet in the queue.
  • the size of chunks of data to be truncated or removed from the one or more packets may be calculated as the amount of time required to be saved to meet the delivery time multiplied by a bandwidth in the network ((T − L) * B), where L is the delivery time, T is the dwell time of the packet in the router according to a depth of the queue (explained below) and B is an available bandwidth in the network.
  • the reduction in dwell time may be determined by dividing a size of the washed chunks of data by the bandwidth.
  • the transmission time for the 1000 bits in a chunk would be 1K/100M, i.e., 10 μs.
  • the amount of packet washing that is needed may be determined, in bits or other units of data size, by taking the time required to satisfy the delivery time (e.g., the amount of time that must be reduced to reach the destination by the delivery time) and multiplying this amount with the bandwidth. For example, a 100 Mbps interface transmits 100 bits per μs. To save 50 μs, 50 * 100 bits (i.e., 5000 bits in total) must be dropped to reach the packet destination in time to satisfy the delivery time.
  • the node may determine how many chunks of data need to be dropped. In the example with 1000 bits in each chunk, 5 chunks would need to be dropped to save the 50 μs.
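The arithmetic in the preceding bullets fits in a few lines; this sketch reproduces the worked example (a 100 Mbps link, 1000-bit chunks, 50 μs to save, hence 5 chunks to drop).

```python
import math

def chunks_to_drop(time_to_save_us: float, bandwidth_bps: int, chunk_bits: int) -> int:
    """Wash amount = time that must be recovered multiplied by the link
    bandwidth, rounded up to whole chunks."""
    bits_needed = time_to_save_us * bandwidth_bps / 1_000_000
    return math.ceil(bits_needed / chunk_bits)

# Worked example from the text: 50 us * 100 bits/us = 5000 bits -> 5 chunks.
assert chunks_to_drop(50, 100_000_000, 1000) == 5
```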
  • the assessed packet at step 526 is at risk of failing to meet a total delivery time.
  • a total delivery time may be the absolute time by which the packet must arrive at its destination or a sum of time the packet can take to reach its destination along a network path without exceeding the delivery time (e.g., specified in the contract).
  • the SL assessor 502 may determine that the packet is at risk of arriving late at its destination because the packet or packets ahead in the LGQ 425-1 have a transmission time that, when totaled, results in the current packet failing to reach its destination by the delivery time (without packet washing).
  • the packet will be at risk of failing to meet its deadline when the time spent transmitting the current data packet (at risk of being late) to the current node plus the time remaining to transmit the current data packet to its destination, including any dwell time, exceeds the total delivery time.
  • the assessed packet is at risk of not meeting its delivery time based on the per-hop dwell time. That is, the packet dwell time in the queue places the packet at risk of failing to reach its destination by a specified delivery time that can be called a per-hop deadline, based on an expected amount of delay at each subsequent hop.
  • the per-hop deadline may be determined, in one embodiment, using a remaining amount of time before the packet deadline expires and the remaining number of hops toward the packet’s destination.
  • the dwell time (Ti) of a packet and its per-hop deadline (Li) are further defined above.
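One plausible reading of the per-hop deadline computation described above is an even split of the remaining variable budget across the remaining hops; the even split and the fixed-latency adjustment below are assumptions, not a formula mandated by the text.

```python
def per_hop_deadline_us(remaining_budget_us: float, remaining_hops: int,
                        fixed_latency_us: float = 0.0) -> float:
    """Divide the budget that can still be influenced evenly across the
    remaining hops (assumes remaining_hops >= 1)."""
    return (remaining_budget_us - fixed_latency_us) / remaining_hops
```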
  • packets that are at risk of being late can be potentially accelerated to reach their destination on time by application of a packet wash operation (e.g., the delivery time specified in the contract may be satisfied with sufficient packet washing).
  • the packet wash operation may sufficiently minimize the amount of dwell time for one or more packets in the LGQ 425-1, or another packet may be dropped in its entirety, to sufficiently reduce latency.
  • the packets at risk of being late will not have their delivery time satisfied unless delivery of one or more packets in the queue is modified. For example, a payload (or portion of a payload) of one or more of the packets is dropped.
  • packets and their payloads may be queued, dequeued, or dropped according to any number of different techniques, such as a first-in-first-out (FIFO) technique, a last-in-first-out (LIFO) technique, etc.
  • FIFO first-in-first-out
  • LIFO last-in-first-out
  • the wash now state may be used to identify when packet washing is required in order to accelerate delivery of a packet at risk of being late to its destination. For example, assume we wish to reduce the dwell time in the queue of an incoming packet classified as a trigger packet. Each data chunk in the trigger packet may only contribute to a few microseconds of delay. Truncating data chunks from the trigger packet may therefore only reduce the dwell time in the queue (and accelerate delivery) by a small amount. However, if the trigger packet is stored in a queue with dozens or hundreds (or more) of other packets (i.e., a queue with a large queue depth), data chunks from each of the other packets in the queue may be truncated (assuming the packet is a washable packet).
  • truncating data chunks of the individual trigger packet only contributes to a few microsecond reduction in dwell time
  • truncating data chunks from each of the other data packets in the queue contributes to dozens or hundreds (or more) of microseconds in dwell time reduction.
  • the dwell time of the trigger packet itself is also reduced. Accelerating a data packet in the queue by reducing its overall dwell time provides the packet sufficient time to reach its destination within a specified delivery time (i.e., an in-time guarantee is met for the data packet).
  • this is distinct from providing a packet wash on packets in a queue to reduce network congestion. Washing data packets in a queue to reduce congestion, while alleviating outgoing buffer space in the queue, does not account for queueing delays or any in-time guarantee that a packet will be delivered to its destination by a specified delivery time.
  • the packet will not meet the delivery time (e.g., specified in the contract) even when a washing operation is performed on the packet itself or other packets in the LGQ 425-1.
  • the packet as a whole may be dequeued and dropped or otherwise disposed of (e.g., the packet may be handled with traditional IP protocols).
  • At step 526, when a packet is determined to be a trigger packet, the process proceeds to step 528 where a state (wash now state) of the LGQ 425-1 is updated.
  • the state of the LGQ 425-1 is maintained by a count and/or wash queue depth (WQD) (also termed herein, queue depth) of packets stored therein. This count or queue depth enables the LGQ 425-1 to determine whether any trigger packets are currently stored in the queue.
  • WQD wash queue depth
  • While one or more trigger packets are stored in the queue, the LGQ 425-1 remains in an “active” state (or “wash trigger state”).
  • While in the active state, the LGQ 425-1 may perform wash operations on washable packets (including trigger packets) stored in the queue. Washing is performed until all trigger packets stored in the LGQ 425-1 reach the front and are dequeued, at which time the count and/or the queue depth reaches zero. In one embodiment, as additional trigger packets enter the queue, the count and/or the queue depth (and the state of the queue) may be updated. When no trigger packets are stored in the LGQ 425-1, the LGQ 425-1 remains in an “inactive” state (or “non-wash trigger state”). When the LGQ 425-1 is in an “inactive” state, the LGQ 425-1 will not perform any wash operations on packets stored in the queue.
  • a wash packet counter is used to maintain a count of the number of trigger packets stored in the LGQ 425-1. As packets classified as a trigger packet arrive for storage in the LGQ 425-1, the packets are marked (for example, using a flag) and counted. The count is incremented as trigger packets are queued in the LGQ 425-1 and decremented as packets are dequeued from the LGQ 425-1. In one embodiment, incrementing the counter is performed by the SL assessor 502. In a further embodiment, decrementing the count is performed by the SL washer 504.
  • a queue depth (or queue depth count) is used to maintain the status of the LGQ 425-1.
  • the queue depth count maintains a number of packets queued between the packet in the front of the queue and a newest arriving packet in the queue (or, alternatively, a packet nearest the back of the queue) classified as a trigger packet.
  • When the queue depth is greater than zero, the LGQ 425-1 state is set to ‘active.’
  • As packets are dequeued, the queue depth is decremented (and the state remains ‘active’ as long as the queue depth is greater than zero).
  • When the queue depth reaches zero, the state of the LGQ 425-1 is set to ‘inactive.’
  • the SL washer 504 sets the queue depth to the current queue depth when a trigger packet arrives at the LGQ 425-1 and decrements the queue depth for each washable packet that is dequeued while the state of the LGQ 425-1 is active.
  • a finite state machine maintains the count and queue depth of the queue, as shown in FIG. 6.
  • the finite state machine shows two states — an inactive (or “False”) state 602 and an active (or “True”) state 604.
  • the count or queue depth are adjusted (incremented or decremented) to maintain the state of the LGQ 425-1 .
  • After incrementing or decrementing the count or queue depth, the process proceeds to step 530 where the packet is queued in the LGQ 425-1.
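A compact sketch of the queue-state maintenance just described, i.e., the two-state machine of FIG. 6 driven by the trigger-packet count (WTP) and/or the wash queue depth (WQD); the description presents the two counters as alternative embodiments, and this sketch simply tracks both (method names are illustrative).

```python
class WashNowState:
    """'True'/active: wash washable packets on dequeue; 'False'/inactive: do not."""

    def __init__(self):
        self.wtp_count = 0      # trigger packets currently in the queue
        self.wqd = 0            # packets ahead of the newest trigger packet

    @property
    def active(self) -> bool:
        return self.wtp_count > 0 or self.wqd > 0

    def on_enqueue(self, is_trigger: bool, packets_ahead: int) -> None:
        if is_trigger:
            self.wtp_count += 1
            self.wqd = packets_ahead    # set to the current queue depth, per the text

    def on_dequeue(self, was_trigger: bool) -> None:
        if was_trigger and self.wtp_count > 0:
            self.wtp_count -= 1
        if self.wqd > 0:
            self.wqd -= 1
```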
  • the SL washer 504 adds a processing stage that performs a wash operation on packets based on the state of the LGQ 425-1.
  • packets that are classified and marked as a washable packet are processed by the SL washer 504 to reduce the dwell time of the packet in the LGQ 425-1.
  • As packets are washed and disposed (dropped from or transmitted out of the queue) by packet disposer 506, the SL washer 504 adjusts the state of the LGQ 425-1.
  • the queue depth is decremented (until reaching zero) and the state of the queue may be adjusted from ‘active’ to ‘inactive.’
  • a packet has been dequeued from the LGQ 425-1 for processing.
  • the SL washer 504 determines whether the queue is in an active or inactive state (or in a “wash state”) at step 534.
  • a queue is determined to be in an “active” state when the count or queue depth is greater than zero. Queues in an active state will perform a packet wash operation on washable packets being dequeued.
  • When the LGQ 425-1 is in an active state, the process proceeds to step 536 where the SL washer 504 determines whether the packet is a washable packet. In some embodiments, only washable trigger packets are marked as washable.
  • When the count or queue depth is zero, the queue is determined to be in an “inactive” state, and a packet wash operation is not performed during dequeuing of packets. Accordingly, the process proceeds to step 542 for continued processing.
  • Packets that are marked as a washable packet (in step 524), and determined to be washable at step 536, can be washed at step 538 to accelerate the disposal of packets to thereby reduce latency.
  • the SL washer 504 may apply the packet wash operation to either the packet being dequeued or to other packets in the queue that are ahead of a trigger packet arriving at the queue.
  • the SL washer 504 applies a packet wash operation to the packet being dequeued.
  • the payload in the packet is truncated according to a qualitative communications coding scheme (e.g., significance level or random network coding as described above) and a size of the chunks of data.
  • the size of chunks of data to be truncated or removed from the packet may be calculated as the amount of time required to meet the delivery time multiplied by a bandwidth in the network ((T − L) * B), such that T ≤ L, where L is the delivery time, T is the dwell time of the packet in the router according to the queue depth and B is the bandwidth.
  • An example of packet washing on packets being dequeued may be found below with reference to FIG. 8.
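A sketch of the dequeue-time wash just described, assuming the chunks are ordered most- to least-significant so the tail is truncated first (that ordering is an assumption consistent with the significance-level scheme):

```python
def wash_on_dequeue(chunks, dwell_time_us, delivery_time_us, bandwidth_bps):
    """Truncate roughly (T - L) * B worth of payload so that T <= L.
    `chunks` is a list of bytes objects, most significant first."""
    if dwell_time_us <= delivery_time_us:
        return chunks                        # already on time; nothing to wash
    bits_needed = (dwell_time_us - delivery_time_us) * bandwidth_bps / 1_000_000
    while chunks and bits_needed > 0:
        bits_needed -= len(chunks[-1]) * 8   # drop the least-significant chunk
        chunks = chunks[:-1]
    return chunks
```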
  • One example of a packet wash operation on other packets ahead in the queue is shown in FIGS. 7A and 7B.
  • the LGQ 425-1 stores a number of packets (e.g., packets 1 - 4 in FIG. 7A and packets 1 - 9 in FIG. 7B).
  • the packets marked with an “X” (e.g., packets 4 and 9) are classified as trigger packets, packets that are “darkened” (e.g., packets 1, 2, 4, 6, 7 and 8) are washable packets, and packets that are “white” (e.g., packets 3, 5 and 9) are non-washable packets.
  • In the example of FIG. 7A, packet 4 arrives at the back of the LGQ 425-1 and is marked as a trigger packet.
  • the number of washable packets in front (ahead) of the trigger packet 4 is 2 (e.g., packets 1 and 2), with a total of 3 washable packets in the LGQ 425-1 (packets 1, 2 and 4).
  • In FIG. 7B, which is a continuation of the example in FIG. 7A, packet 1 is dequeued from the LGQ 425-1 and packets 5-9 arrive at the LGQ 425-1, with packet 9 arriving last (at the back of the queue) and marked as a trigger packet.
  • the number of washable packets in the queue is 5 (packets 2, 4, 6, 7 and 8).
  • the washable packets in front of the trigger packet may be washed (e.g., washable packet 2 is washed for trigger packet 4 and washable packets 2, 6, 7, 8 are washed for trigger packet 9).
  • Other examples of packet washing may be found below with reference to FIGS. 8 and 9.
  • a “fair” strategy based on a weighted wash ratio determines the “wash potential” or “available wash amount” (AWA) of a packet.
  • a fair strategy spreads (optionally evenly) the washing of packets in the queue over multiple packets.
  • the current AWA of the queue (and each washable packet) may be tracked in order to avoid over-washing of a single packet.
  • a size of each of the washable chunks of data in the payload of the packet is added to the AWA when the packet is queued and the size of each of the washable chunks of data of the payload is subtracted from the AWA when the packet is dequeued.
  • the updated AWA is compared against a size of the chunks that need to be truncated from packets in the queue in order for a trigger packet to reach its destination (or reach its next hop) within the specified delivery time (which may be specified in the contract).
  • the AWA may be in units of chunks, bytes, or some other measure.
  • the comparison may be against a size of chunks, bytes or some other measure of data that needs to be truncated. Based on the comparison, a wash operation may be performed on the packet and/or other washable packets in the queue.
  • a needed truncation amount (NTA) is calculated as (T − L) * B, where T is the dwell time, L is the (per-hop) delivery time and B is the link bandwidth.
  • Each washable packet has an allowable wash amount (AWA), which is defined as a maximum number of bytes that may be truncated from the packet in a wash operation.
  • AWA_total = Σ AWA_i, where i ∈ {1...N} indexes the washable packets in the queue ahead of (in front of) the new trigger packet arriving at the queue.
  • As each washable packet is dequeued, AWA_total is reduced by the AWA of that packet.
  • the data that must be dropped is distributed among multiple washable packets as they are dequeued.
  • the truncation size for a washable packet i being dequeued is (Σ_{k=1}^{K} NTA_k) × AWA_i / AWA_total, where K is a variable defining the number of trigger packets.
  • This process may continue iteratively until the washable packets are dequeued. In this way, dropped chunks needed for one trigger packet to meet its delivery time can be distributed relatively evenly across multiple packets.
  • the NTA may be determined for multiple trigger packets before one or more washable packets are dequeued.
  • the NTA for a trigger packet may be based on the NTA for one or more other trigger packets in the queue. For example, assume for purposes of discussion that a first trigger packet arrives at the queue with an NTA of 5 bytes and, prior to washing packets ahead in the queue, a second trigger packet arrives at the queue with an NTA of 5 bytes.
  • the node (such as the SL assessor 502 or SL washer 504 in the node) is aware of the NTA (5 bytes) for the first trigger packet and assumes that washing packets ahead in the queue will result in at least the removal of 5 bytes. If removal of the 5 bytes for the first trigger packet satisfies the per-hop deadline of the second trigger packet, then no further washing is necessary for the second trigger packet. Otherwise, if the per-hop deadline is not satisfied, additional washing will be required (although the additional washing may be less than the NTA of the second trigger packet as a result of washing packets for the first trigger packet).
  • the packet wash operation does not exhaust the wash allowance of the packets in the front of the queue. Rather, it is “fairly” distributed among the washable packets according to their wash allowance sizes.
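The “fair” strategy above can be sketched as a proportional allocation of the needed truncation amount across the washable packets' allowances; the proportional rule is an illustrative reading of the text, not its verbatim formula.

```python
def fair_truncation_sizes(nta_total: float, awa_per_packet: list) -> list:
    """Distribute the needed truncation amount (NTA) across washable packets in
    proportion to each packet's allowable wash amount (AWA), so that no single
    packet's allowance is exhausted."""
    awa_total = sum(awa_per_packet)
    if awa_total == 0:
        return [0.0] * len(awa_per_packet)
    target = min(nta_total, awa_total)       # cannot wash more than the queue allows
    return [target * awa / awa_total for awa in awa_per_packet]

# e.g., a trigger packet needing 10 chunks washed, two packets ahead with AWA 5 each:
print(fair_truncation_sizes(10, [5, 5]))     # [5.0, 5.0], in the spirit of FIG. 9
```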
  • The process then proceeds to step 540 to update the wash state of the queue by updating the count and/or the queue depth (WQD).
  • the count is increased as trigger packets are queued in the LGQ 425-1 and decreased as trigger packets are dequeued from the LGQ 425-1.
  • the queue depth is set (or updated) when a trigger packet arrives based on the number of packets stored in the LGQ 425-1 and decreases as packets are dequeued (until no packets remain in the queue (the depth is zero)).
  • the washed packet is then checked to determine whether it remains at risk for failing to satisfy the delivery time for reaching its destination at step 542.
  • the queue may determine whether the delivery time (e.g., specified in the contract) will be met or exceeded based on an amount of time lapsed for transmission of the packet (now washed) to the current node and a remaining time required to transmit the washed data packet to its destination.
  • the washed packet is entirely dropped when the dwell time of the packet is determined to exceed the delivery time (e.g., specified in the contract) at step 546.
  • when the packet wash operation fails to reduce the size of the payload such that the dwell time of the washed packet is less than or equal to its per-hop deadline, but the total delivery time is not exceeded, the washed packet is transmitted (forwarded) to the next hop node, at step 544, and the remaining time required to transmit the washed packet to its destination is recalculated.
  • FIG. 8 illustrates an example of packet washing on a packet being dequeued from the latency guarantee queue.
  • the depicted embodiment shows a table 800 with packets arriving 802 in the queue (e.g., LGQ 425-1), packets being dequeued 804 from the queue, packets currently in the queue 806, a state (‘T’ or ‘F’) of the queue 808 (“wns” or “wash now state”) and a count of the number of washable trigger packets 810 (“WTP”) in the queue. Packets arrive in the queue from top to bottom on the table 800, as shown by the “time” arrow along the vertical axis.
  • Bolded and italicized packets represent washable trigger packets (WTPs), underlined packets represent washable packets and all other packets are non-trigger packets and non-washable packets.
  • the state of the queue (wns) in the depicted embodiment is maintained using a count of the number of trigger packets (WTP count) stored in the queue.
  • the state is ‘active’ (‘T’) when the WTP count is greater than zero, and the state is ‘inactive’ (‘F’) when the WTP count is zero.
  • In an initial state (row 1), the queue has two packets ‘b’ and ‘a’ stored in the queue. Packet ‘b’ is a washable packet (but not a trigger packet) and packet ‘a’ is neither washable nor a trigger packet. Accordingly, the state of the queue remains ‘inactive’ (‘F’) and the WTP count remains ‘0.’ As packet ‘c’ (classified as a trigger packet) arrives and is stored in the queue (row 2), the state of the queue becomes ‘active’ (‘T’) and the WTP count is incremented to ‘1.’ After the arrival of packet ‘c,’ packets ‘a’ and ‘b’ are dequeued from the queue (rows 3 and 4).
  • FIG. 9 illustrates an example of packet washing on packets in the front of the queue.
  • columns 802 - 810 are the same as in table 800.
  • the state of the queue (wns) is maintained using a queue depth 912 (WQD) of the number of washable packets stored in the queue.
  • WQD queue depth 912
  • a trigger packet requires 10 chunks of data from other packets ahead in the queue to be dropped in order to satisfy its delivery time (e.g., per-hop delivery time) and a washable packet ahead in the queue can have 5 chunks of data dropped from its payload and still be transmitted.
  • the queue initially stores packets ‘b’ and ‘a’ (row 1), the state of the queue 808 is ‘inactive’ (‘F’), the WTP count 810 is ‘0,’ the queue depth 912 (WQD) is ‘0’ and no truncation of data is initially required.
  • When packet ‘c’ arrives in the queue (row 2), there are three packets in the queue (packets ‘c’, ‘b’ and ‘a’).
  • Since packet ‘c’ is a trigger packet, the state 808 (wns) changes from ‘inactive’ (‘F’) to ‘active’ (‘T’), the WTP count 810 is incremented to ‘1’ and the WQD 912 is set to ‘3.’ Because packet ‘c’ is a trigger packet, it needs 10 chunks of data to be washed in the queue. Accordingly, the needed truncation amount 914 is updated to 10.
  • the needed truncation amount 914 identifies the amount of data (chunks of data) that must be removed or dropped to reduce the dwell time of the trigger packets in the queue (in this case, just packet ‘c’) such that the packets may satisfy their delivery times.
  • As packet ‘a’ is dequeued (row 3), the queue depth 912 is reduced to ‘2.’ However, since packet ‘a’ is not a washable packet, the needed truncation amount 914 does not change. As packet ‘b’ is dequeued from the queue (row 4), the queue depth 912 is decremented to ‘1.’ Since packet ‘b’ is a washable packet, after a wash operation is performed on the packet (and the packet is dequeued), the needed truncation amount 914 is reduced to 5. When packet ‘c’ is dequeued (row 5), 5 chunks of data are removed since it is a washable packet.
  • After packet ‘c’ is dequeued, the state 808 becomes ‘inactive’ (‘F’), the WTP count 810 drops to ‘0’ and the WQD 912 is decremented to ‘0’ along with the needed truncation amount 914.
  • FIG. 10 illustrates an embodiment of a network node which may implement a router.
  • the node 1000 may be, for example, a node 106 or any other node or router as described above in communication system 50.
  • the node 1000 may comprise a plurality of input/output ports 1010/1030 and/or receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes, a processor 1020 to process data and determine which node to send the data to, and a memory 1022.
  • the node 1000 may also generate and distribute data in the form of data packets in the communication system.
  • although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors.
  • the processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 1020 may be implemented using hardware, software, or both.
  • the memory 1022 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single memory, memory 1022 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data). In one embodiment, memory 1022 stores code that enables the processor 1020 to implement the QS routing engine 155, and encoder/decoder 185. Memory 1022 may include reserved space to implement the coded chunk cache 195.
  • the QS routing engine 155 is in the form of code which is operable to instruct the processor 1020 to perform those tasks attributed to the QS routing engine 155.
  • the technology described above may also be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
  • FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
  • Computer system 1100 includes a processor 1104 and a memory 1108 that communicate with each other, and with other components, via a bus 1112.
  • Bus 1112 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
  • Memory 1108 may include various components (e.g., machine-readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof.
  • a basic input/output system 1116 (BIOS), including basic routines that help to transfer information between elements within computer system 1100, such as during start-up, may be stored in memory 1108.
  • Memory 1108 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1120 embodying any one or more of the aspects and/or methodologies of the present disclosure.
  • memory 1108 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
  • Computer system 1100 may also include a storage device 1124.
  • Examples of a storage device include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof.
  • Storage device 1124 may be connected to bus 1112 by an appropriate interface (not shown).
  • Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof.
  • storage device 1124 (or one or more components thereof) may be removably interfaced with computer system 1100 (e.g., via an external port connector (not shown)).
  • storage device 1124 and an associated machine-readable medium 1128 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1100.
  • software 1120 may reside, completely or partially, within machine-readable medium 1128.
  • software 1120 may reside, completely or partially, within processor 1104.
  • Computer system 1100 may also include an input device 1132.
  • a user of computer system 1100 may enter commands and/or other information into computer system 1100 via input device 1132.
  • Examples of an input device 1132 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof.
  • Input device 1132 may be interfaced to bus 1112 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1112, and any combinations thereof.
  • Input device 1132 may include a touch screen interface that may be a part of or separate from display 1136, discussed further below.
  • Input device 1132 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
  • a user may also input commands and/or other information to computer system 1100 via storage device 1124 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 1140.
  • a network interface device such as network interface device 1140, may be utilized for connecting computer system 1100 to one or more of a variety of networks, such as network 1144, and one or more remote devices 1148 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof.
  • Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof.
  • a network such as network 1144, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
  • Information (e.g., data, software 1120, etc.) may be communicated to and/or from computer system 1100 via network interface device 1140.
  • Computer system 1100 may further include a video display adapter 1152 for communicating a displayable image to a display device, such as display device 1136.
  • Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof.
  • Display adapter 1152 and display device 1136 may be utilized in combination with processor 1104 to provide graphical representations of aspects of the present disclosure.
  • computer system 1100 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof.
  • peripheral output devices may be connected to bus 1112 via a peripheral interface 1156. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
  • the computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid-state storage media and specifically excludes signals.
  • the software can be installed in and sold with the device. Alternatively, the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator.
  • the software can be stored on a server for distribution over the Internet, for example.
  • Computer-readable storage media exclude propagated signals per se, can be accessed by a computer and/or processor(s), and include volatile and non-volatile internal and/or external media that is removable and/or non-removable.
  • the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
  • the terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure.
  • each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
  • Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Controlling data flows in a network includes receiving data packets at a node, where each of the data packets includes a payload and a delivery time, and storing the data packets in a queue of the node. A data packet is classified as a trigger packet when the node determines that its delivery time will not be satisfied unless delivery of the data packets in the queue is modified. To accelerate delivery and satisfy the delivery time, chunks of data may be dropped from the payloads of the data packets stored in the queue. The data packets stored in the queue may then be forwarded to a next hop node such that the delivery time is satisfied.

Description

METHOD AND APPARATUS OF PACKET WASH FOR IN-TIME
PACKET DELIVERY
CLAIM OF PRIORITY
[0001] This application claims the benefit of priority to U.S. Provisional Patent Application Number 63/032,582, filed May 30, 2020, the entire contents of which are hereby incorporated by reference.
FIELD
[0002] This disclosure generally relates to data transmission in a network.
BACKGROUND
[0003] Network transport control methods are responsible for reliable and in-order delivery of data from a sender to a receiver through various network nodes. Using current control methods, any error due to link congestion or intermittent packet loss in the network can trigger re-transmission of data packets. This results in unpredictable delays as well as an increase in the network load, wasting network resources and capacity, and often failing to meet end-to-end latency objectives. For packet-based network architectures, a packet is the fundamental unit upon which different actions such as classification, forwarding, or discarding are performed by the network nodes. Different schemes have been proposed to improve the efficiency of data transmissions and increase predictability, some of which are based on mechanisms for efficient and faster re-transmissions, while others utilize redundant transmissions.
SUMMARY
[0004] According to one aspect of the present disclosure, there is a method for controlling data flows in a network, comprising receiving a first set of data packets at a queue of a node, each of the first set of data packets include a payload and a delivery time; storing each of the first set of data packets in the queue of the node; classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
[0005] Optionally, in any of the preceding aspects, wherein the second set of data packets comprises a data packet ahead of the trigger packet in the queue.
[0006] Optionally, in any of the preceding aspects, wherein one or more chunks of data in the second set of data packets have a low priority.
[0007] Optionally, in any of the preceding aspects, wherein the one or more chunks of data in the payload are defined in a qualitative service header of the first set of data packets, each chunk being a sub-set of the payload.
[0008] Optionally, in any of the preceding aspects, wherein when the dropping fails to reduce the size of the payload, such that a dwell time of the trigger packet is larger than a per-hop deadline designated for a current hop of the trigger packet, and a total delivery time for the trigger packet is not exceeded, further comprising forwarding the trigger packet to the next hop node and recalculating the per-hop deadline at subsequent hops.
[0009] Optionally, in any of the preceding aspects, wherein the dropping one or more chunks of data comprises dropping one or more chunks of data of the trigger packet to reduce a dwell time of the trigger packet such that the trigger packet will satisfy the delivery time.
[0010] Optionally, in any of the preceding aspects, the method further comprising setting a state of the queue to active when one or more of the first set of data packets is classified as the trigger packet, wherein the state of the queue indicates whether the dropping should be performed.
[0011] Optionally, in any of the preceding aspects, the method further comprising classifying multiple data packets of the first set of data packets as trigger packets, and counting the trigger packets by incrementing a number when one of the trigger packets is queued and decrementing the number when a trigger packet is dequeued, wherein the state is set to active when the number is greater than zero and set to inactive when the number is zero, and the dropping is performed on the second set of data packets when the state is active.
[0012] Optionally, in any of the preceding aspects, the method further comprising determining a queue depth for the first set of data packets, where the queue depth is a number of packets ahead of the trigger packet; setting a state of the queue to active when the queue depth is greater than zero, wherein the state of the queue indicates whether the dropping should be performed; and decrementing the queue depth when each of the first set of data packets are dequeued from the queue while the state is active.
[0013] Optionally, in any of the preceding aspects, wherein the dropping is performed on all of the second set of data packets ahead in the queue as long as the queue depth is greater than zero while the second set of data packets are dequeued.
[0014] Optionally, in any of the preceding aspects, wherein the dropping the one or more chunks of data in the payload is based on a coding scheme and a size of each of the one or more chunks of data.
[0015] Optionally, in any of the preceding aspects, wherein the coding scheme is one of a significance level coding scheme or network coding scheme.
[0016] Optionally, in any of the preceding aspects, the method further comprising calculating a needed truncation amount for the trigger packet; calculating a total allowable wash amount for the second set of data packets; and calculating a truncation size for each of the second set of data packets based on the needed truncation amount and the total allowable wash amount, wherein the needed truncation amount is distributed among a plurality of the second set of data packets.
[0017] Optionally, in any of the preceding aspects, wherein the second set of data packets are washable data packets.
[0018] Optionally, in any of the preceding aspects, wherein modifying the first set of data packets in the queue is relative to a scheduling strategy in which the first set of data packets are transmitted without dropping chunks of data.
[0019] According to one aspect of the present disclosure, there is a node controlling data flow in a network, comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a first set of data packets at a queue of a node, each of the first set of data packets include a payload and a delivery time; store each of the first set of data packets in the queue of the node; classify a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; drop one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forward the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
[0020] According to one aspect of the present disclosure, there is a non-transitory computer-readable medium storing computer instructions for controlling data in a network, that when executed by one or more processors, cause the one or more processors to perform the steps of receiving a first set of data packets at a queue of a node, each of the first set of data packets include a payload and a delivery time; storing each of the first set of data packets in the queue of the node; classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified; dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
[0021] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0023] FIG. 1 illustrates an embodiment of a network environment suitable for use with the present technology.
[0024] FIG. 2A illustrates a general data format for the qualitative services data transmission framework described herein.
[0025] FIG. 2B illustrates the general concept of linear network coding as applied to application chunks in a payload.
[0026] FIG. 3 illustrates one example of an IP packet which may be used to implement the present technology.
[0027] FIG. 4 illustrates an example network in which routers have a queue storing data packets.
[0028] FIG. 5A illustrates an example network in which a router includes a latency guarantee queue with dwell time reduction and packet washing services.
[0029] FIG. 5B illustrates an example flow diagram of the SL assessor in FIG. 5A.
[0030] FIG. 5C illustrates an example flow diagram of the SL washer in FIG. 5A.
[0031] FIG. 6 illustrates an example state machine for maintaining a state of the queue.
[0032] FIGS. 7A and 7B illustrate examples of a packet wash operation on packets in the front of the queue.
[0033] FIG. 8 illustrates an example of packet washing on a packet being dequeued from the queue.
[0034] FIG. 9 illustrates an example of packet washing on packets in the front of the queue.
[0035] FIG. 10 illustrates an embodiment of a network node comprising a router.
[0036] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.
DETAILED DESCRIPTION
[0037] The present disclosure will now be described with reference to the figures, which in general relate to managing network traffic to improve network throughput and reliability and reduce latency. The technology includes a data transmission framework which comprises application, transport, and network components, which improves the data flow through the network when adverse network conditions are encountered.
[0038] In particular, the present disclosure describes a triggering mechanism by which packets arriving at a router or network node are marked or otherwise identified such that the size of the data packet may be reduced by performing a packet wash operation on the packet. A packet wash operation (or “packet wash” or “scrubbing”) is defined herein as a mechanism by which to reduce the size of one or more packets (which may include “washing” the entire packet and dropping it). Washing one or more packets may include selective or random removal of chunks of data, such as chunks of the packet’s payload. In one embodiment, the removal is based on a relationship among the chunks of data or the individual significance level of each chunk. Washing packets allows a partial delivery of packets by removing chunks, such as less significant chunks, from the packet while retaining as much of the packet’s information as possible. In one embodiment, lower priority chunks of data may be washed or dropped according to information (or meta-data) in the header of the packet. In one embodiment, the washing operation occurs at nodes of the network, for example an intermediate network node.
[0039] As packets are received at the node, they are classified and stored in the queue. To classify packets arriving at the node, the packet is analyzed to determine whether it includes a delivery time by which the packet should reach its destination. In one embodiment, the delivery time (or some derivative of the delivery time) is specified in a contract of the packet. If the queue determines that the packet is unable to reach its destination by the delivery time, then the packet is classified (and marked) as a trigger packet. As trigger packets are queued and dequeued, the node maintains a state of the queue. Depending on the current state of the queue, a packet wash operation may be performed on one or more of the packets. For example, the packet wash operation may reduce the size of the packet as they are dequeued (or the packet may be dropped in its entirety) in order to minimize dwell time of the packet in the queue of the node. Reducing the dwell time of one or more packets stored in the queue helps to ensure that the packets may reach their respective destination within the delivery time without over-washing of the packets.
[0040] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that the scope of the claims should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0041] FIG. 1 illustrates an embodiment of a network system 50 suitable for use with the present technology. FIG. 1 illustrates a plurality of network enabled devices including, by way of example only, a mobile device 110, a computer 112 and a server 108. The devices are coupled via network data paths 100 and through several network nodes 106-1, 106-2 and 106-3. Each node may be a router, switch or other network component which is operable to route network traffic in any number of transport control methods.
[0042] Each network enabled device 108, 110, 112 may include one or more applications 114, which generate network data which may be communicated to other network enabled devices. The applications 114 may be executed by a processor utilizing volatile and non-volatile memory to generate and forward the data, as well as communicate with the network interface 122 and qualitative services (QS) engine 125. Each network enabled device 108, 110, 112 may include one or more network interfaces 122 allowing the device to communicate via the network data paths 100. Each network interface 122 may include a QS engine 125 which may include code adapted to receive data described herein from the applications 114 on each device and perform certain aspects of the technology described herein. Each network enabled device is configured to operate and/or communicate in the system 50 as a data sender or a data receiver. For example, network enabled device 108 is configured to transmit and/or receive data to/from any of the other network devices 110, 112. The data paths 100 may be wireless signals or wired signals and thus the network system 50 of FIG. 1 may comprise a wired or a wireless network. The environment may include additional or alternative networks including private and public data-packet networks, and corporate intranets.
[0043] Each network enabled device 108, 110, 112 represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device, wireless transmit/receive unit, mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, tablet, wireless sensor, wearable device, consumer electronics device, a target device, device-to-device (D2D) machine type user equipment or user equipment capable of machine-to-machine (M2M) communication, and USB dongles.
[0044] Each network node 106-1 through 106-3 may likewise include a network interface 124 (or multiple network interfaces) and a QS routing engine 155 allowing the node to perform certain aspects of the technology.
[0045] In some embodiments, for example, the nodes 106-1 to 106-3 may comprise access points which may use technology such as defined by IEEE 802.11n, 802.11ax, or other 802.11 standards to provide wireless network access to one or more devices. The term “access point” or “AP” is generally used in this document to refer to an apparatus that provides wireless communication to user equipment through a suitable wireless network, which may include a cellular network, and it will be understood that an AP may be implemented by a base station of a cellular network, and the AP may implement the functions of any network node described herein. The nodes can similarly provide wired or wireless network access through networking technologies other than 802.11, such as 4G or 5G.
[0046] Although FIG. 1 illustrates one example of a communication system, various changes may be made to FIG. 1. For example, the communication system 50 could include any number of user equipment, access points, nodes, networks, or other components in any suitable configuration.
[0047] FIG. 2A illustrates a general data format for the qualitative services data transmission framework described herein. FIG. 2A illustrates a generic IP packet 302, relative to a qualitative services IP packet 300. While the technology will be described with respect to its use based on an Internet protocol (IP) packet structure, it should be understood that other transport formats may be modified in accordance with the teachings herein.
[0048] A qualitative services IP packet 300 includes a Qualitative Service (QS) header and a data payload. The QS header includes an indication that the packet supports qualitative services. The QS header is used to identify the payload structure (the “data” in FIG. 2A) as comprising logical chunks. In one embodiment, this comprises using a QS bit in the QS header. Each chunk of data (Data Chunk 1 , Data Chunk 2, Data Chunk 3... Data Chunk N) is identified in the QS header. In one embodiment, positions of the data chunks are identified by offsets (in the priority level (PL) offsets field) specified in the QS header.
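As a concrete illustration of locating chunks through the PL offsets field, consider the short sketch below. The header layout, field names, and the (offset, length) encoding are assumptions made for this example; the disclosure does not fix a particular in-memory representation.

```python
from dataclasses import dataclass, field

@dataclass
class QSHeader:
    qs_bit: bool = True                   # payload is structured into chunks
    # priority level -> list of (offset, length) pairs into the payload
    pl_offsets: dict = field(default_factory=dict)

def chunks_by_priority(payload: bytes, hdr: QSHeader, level: str) -> list:
    """Slice out the payload chunks recorded for one priority level."""
    return [payload[o:o + n] for o, n in hdr.pl_offsets.get(level, [])]

# Example: a 12-byte payload carrying three 4-byte chunks.
hdr = QSHeader(pl_offsets={"high": [(0, 4)], "medium": [(4, 4)], "low": [(8, 4)]})
droppable = chunks_by_priority(b"AAAABBBBCCCC", hdr, "low")   # [b'CCCC']
```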
[0049] The QS header may include a checksum or cyclic redundancy check (CRC) for different chunks so that the integrity of the packet can still be verified even after a QS packet wash operation. In this technology, packet-level checks may be disabled and replaced by chunk-level checks.
[0050] In one embodiment, each chunk has a significance-factor associated with it. As illustrated in FIG. 2A, three priority levels are shown: High, Medium, and Low. While three priority levels are illustrated, it should be recognized that any number of priority levels may be utilized in accordance with the technology. The significance factor is assigned by the transmitting application on a sender network enabled device. The QS header indicates the relative data priority or significance assigned by the data transmitting application in the PL offsets field. For example, the QS header can indicate different significance or priority levels of each chunk in the payload, and how to identify the payload chunks associated with these priority levels (i.e. by using an offset). In one embodiment, to identify the chunks of data, the QS header specifies a specific offset for each chunk. Alternatively, it may refer to a known vector of offsets. While using offsets is one manner in which the chunks in the data payload may be identified, any suitable means of communicating the location and size of data chunks in the data of packet 300 are suitable for use in accordance with the present technology.
[0051] The significance information may be associated with each chunk and may be used by the QS routing engines 155 in each of the network nodes 106-1 through 106-3 to manage traffic. In one embodiment of a packet wash operation, lower significance chunks may be dropped by a network node (while keeping the rest of the packet) in the case of a network condition, such as network congestion or dwell time of a packet, being detected. In one embodiment, each QS routing engine 155 may drop lower priority chunks in low priority packets first (until low priority packets are entirely dropped), then lower priority chunks in medium priority packets, etc., depending on the priority of the chunk and that of the packet, as described below. While three priority levels are shown, the number of priority levels may be based on the network and application. For instance, in a data center, it is sometimes beneficial to cut the payload and only forward the header. This is due to the use of shallow buffers in order to speed up communications. In networks where buffers would fill up more slowly, more priority levels can be supported.
[0052] FIG. 2B illustrates the general concept of linear network coding as applied to application chunks in a payload. The payload in a packet, which is originated from the sender, is divided into ‘h’ number of chunks with the same size. The constraint on selecting ‘h’ is that h × chunk size == payload size. In one embodiment, the size of each chunk in the payload is the same. In an additional embodiment, one or more of the chunks in the payload may be padded with one or more zeroes such that the size of each chunk in the payload is the same.
[0053] After random linear network coding is applied to the original data chunks of the packet payload as shown, the sender will include as many as ‘h’ coded chunks in each packet payload. Where there are more than ‘h’ coded chunks, multiple packets may be used, with the one or multiple packets defining one or more flows. Where there are fewer than ‘h’ coded chunks, one packet may be used. However, it is appreciated that more chunks than the original data chunks can be included in a payload (of one or more packets) to provide an additional degree of freedom in decoding the original data. Nevertheless, the use of more coded chunks than original data chunks is not required. (A number of degrees of freedom for original data may be defined as the number of coded chunks that are needed to decode the original data which was used to code the coded chunks. In this technology, if data to be transmitted is segmented into groups of ‘k’ chunks in a network coding group and then network-coded together into ‘k’ coded chunks, a receiver would need ‘k’ chunks (or degrees of freedom) to decode the chunks. Hence, the degree of freedom is ‘k’ if and only if it needs ‘k’ independent coded packets to decode the original data.)
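To make the coding concrete, the sketch below applies random linear network coding over a small prime field. The choice of field (integers mod 257) and the function name are assumptions made to keep the illustration simple; a practical implementation would more likely operate over GF(2^8).

```python
import random

P = 257  # small prime field, chosen only for simplicity of illustration

def rlnc_encode(chunks, num_coded):
    """Emit coded chunks as random linear combinations of the originals.

    A receiver needs as many linearly independent coded chunks as there
    were originals ('full rank') to decode; dropping a linearly
    independent coded chunk reduces the current rank by 1.
    """
    k = len(chunks)                       # 'h' equal-size original chunks
    size = len(chunks[0])
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randrange(P) for _ in range(k)]
        payload = [sum(c * ch[j] for c, ch in zip(coeffs, chunks)) % P
                   for j in range(size)]
        coded.append((coeffs, payload))   # coefficients ride in the metadata
    return coded

# Two 4-symbol chunks coded into three chunks (one extra degree of freedom).
coded = rlnc_encode([[1, 2, 3, 4], [5, 6, 7, 8]], num_coded=3)
```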
[0054] FIG. 3 illustrates one example of an IP packet which may be used to implement the present technology. Termed “New Internet Protocol” (New IP), the packet is part of an Internet framework that brings several capabilities in the present technology. New IP creates a block of information about the IP packet. A New IP packet 300, as illustrated, carries a contract block between a header and a data payload. Information is carried and processed along each network node hop from the sender to the receiver. The network and network nodes, if enabled, fulfill the contract specified in the packet, assuming the contract has been agreed between the packet sender/receiver and the network. The contract in the New IP packet carries the contract clause of a qualitative service to be performed. In one embodiment, a qualitative service to be performed may comprise a packet wash operation, which alters a packet to reduce dwell time at a network node by, for example, dropping or removing one or more chunks of data from a packet until the dwell time has been sufficiently minimized. This is distinct from the case in which a packet wash operation is performed to reduce network congestion, as explained more below.
[0055] The contract includes a contract clause(s) 310 including associated metadata. A contract clause 310 describes how the routers treat the packet as it traverses the network based on the predefined triggering defined in the event and condition fields, and an action specifies what actions can be taken if the event and condition are satisfied. In one embodiment, an event may comprise dwell time identified by a condition such as a lengthy delay in the queue in a network node to which an action (a packet wash operation, for example) may be applied. The “Metadata” field contains data about the packet, e.g. accounting information, customized statistics about the flow on intermediate hops, the contextual information about the user, the application, etc.
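A contract clause and its event/condition/action triple might be modeled as below. The field values shown (a dwell-time event triggering a packet wash) follow the embodiment just described, but the structure itself is a hypothetical rendering for illustration, not the New IP wire format.

```python
from dataclasses import dataclass, field

@dataclass
class ContractClause:
    event: str                     # predefined trigger, e.g. dwell time
    condition: str                 # when the event warrants the action
    action: str                    # what the router may do in response
    metadata: dict = field(default_factory=dict)   # accounting, context, etc.

clause = ContractClause(
    event="dwell_time",
    condition="queue_delay > per_hop_deadline",
    action="packet_wash",
    metadata={"budget_us": 100, "hops_remaining": 4},
)
```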
[0056] The Metadata field may include the following to describe the information about the packet payload:
[0057] A “Network Coding Group” field is used to identify the original data chunks to which the linear network coding operation was applied. The Network Coding Group relates to the original data content and the sender. The sender may partition the original data content into multiple network coding groups, each group containing multiple chunks with equal size. When forwarding additional coded chunks, one or more coded chunks (but less than the number of chunks in the original data content) from the same network coding group may be forwarded.
[0058] The “Indication of linear-network-coded chunks” field indicates whether the payload contains linear-network-coded chunks or not (e.g., indicates significance level instead).
[0059] The “coded chunk size” field gives the information on how large the chunk size is. When partial packet dropping happens, the network nodes can find the boundary of each coded chunk in the payload using the chunk size.
[0060] A “Coefficients” field provides the coefficients with which the original data chunks are linearly combined to form coded chunks contained in the current payload.
[0061] The “Current rank” field indicates the current rank of the remaining coded chunks in the payload. Each time a linearly independent chunk is dropped from the packet, the current rank is reduced by 1.
[0062] The “Full rank” field indicates the full rank of the coded chunks in the payload when they are inserted by the sender.
[0063] It is appreciated that the format of the IP packet is an example and that the packet encoding may include any number of different fields. For example, the event, condition and/or action fields of the contract clause may not exist in the packet and other or different metadata may be included. Likewise, the field of the network coding group may include different elements or not be part of the metadata at all. Moreover, it is appreciated that any number of different coding schemes may be employed to support qualitative communication and that coding is not limited to the disclosed embodiments.
[0064] FIG. 4 illustrates an example network in which routers have a queue storing packets. In addition to the qualitative service capabilities discussed above, the network may also establish a guarantee of service delivery (e.g., a guarantee that a packet will arrive at its destination no later than a specified delivery time). To establish such a service guarantee (or in-time guarantee), the network 400 employs one or more queues that store data packets. In one embodiment, the data packets stored in the queue(s) have a delivery time requirement (or on-time delivery), such that the data packet will be delivered on a per-hop basis or end-to-end basis within a specified delivery time. Data packets that do not include any guarantee for an on-time delivery may also be stored in the one or more queues. In one embodiment, the queue is a latency guarantee queue (LGQ), such as LGQs 425-1 and 425-3. It is appreciated that while the term LGQ may be used in the various embodiments that follow, data packets that do not include any in-time guarantee may also be stored in the LGQ.
[0065] The delivery time of a data packet may be measured according to different embodiments. In one embodiment, the delivery time may be measured as a remaining time for delivery. The remaining delivery time may be calculated by assessing the remaining latency budget for the packet to reach its destination from the current node and using information about the remaining path (e.g., the number of hops downstream and the amount of fixed latency (latency that cannot be altered)), details of which may be found in International Application PCT/US2020/023289 and U.S. Appl. No. 16/822,458, the entire contents of which are hereby incorporated by reference. For example, assume for purposes of discussion that a data packet arrives at an intermediary node with a delivery time of 10 μs. If the data packet dwells at the intermediary node for 1 μs, and the transmission time to the next node is 1 μs, then the remaining time for delivery is 8 μs. In another embodiment, the delivery time may be measured as an absolute amount of time. For example, if a data packet arrives at an intermediary node at 11:37 am and needs to arrive at its destination by 11:47 am, the intermediary node can determine the amount of time remaining to meet its deadline (11:47 am) regardless of the amount of time it takes to hop from one node to the next.
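The remaining-time accounting in the first example reduces to a single subtraction; the function name below is illustrative.

```python
def remaining_budget_us(budget_us: float, dwell_us: float, tx_us: float) -> float:
    """Remaining time for delivery after one hop (all values in microseconds)."""
    return budget_us - dwell_us - tx_us

# The worked example above: arrive with 10 us, dwell 1 us, transmit 1 us.
assert remaining_budget_us(10, 1, 1) == 8
```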
[0066] The network 400 includes sending nodes 110, routers 106-1, 106-3 (with dedicated LGQs 425-1, 425-3) and receiver nodes 108, although the network 400 is not limited to the depicted components, but may be a much larger network with additional components, such as network 50 in FIG. 1.
[0067] At routers 106, the LGQs 425 receive data packets for latency guarantee according to a contract specified in the received data packets. The LGQs 425 typically include a queuing algorithm, which is usually implemented at the router level. Specifically, output queues process the data packets that have been enqueued to await their turn to be forwarded or routed to the appropriate output port or interface. A primary purpose of these queueing algorithms is to give preferential treatment to data packets having higher priority (which are enqueued at the router output port) over other enqueued data packets having lower priority.
[0068] In one embodiment, the LGQs 425 are a latency guaranteed queue in which the data packets have a per-hop (e.g., router hop) deadline that is guaranteed, as well as a minimal dwell time at each hop. The dwell time, as defined herein, is the time a packet spends at a router before being forwarded to the next hop and includes: processing delay, queueing delay (affected by packet scheduling in the outgoing queue) and transmission delay (which is affected by the size of the packet and is proportional to the packet size, and is at least a part of the minimal dwell time).
[0069] In one embodiment, the contract (FIG. 3) of a data packet is configured to achieve a service delivery guarantee (also referred to herein as an InTimeGuaranteed contract), and the metadata (FIG. 3) of the packet is used to instruct the routers 106 in the network to comply with the InTimeGuaranteed contract of the packet. For example, the meta-data in a packet may carry the following information to comply with the InTimeGuaranteed contract:
• Budget: the residual time budget before the packet runs out of time to be delivered to its final destination and is considered unsuccessful, and
• Hop: the residual number of hops towards the destination, given that the routing path is configured.
[0070] As noted above, queueing delays may be affected by packet scheduling in the outgoing queue. To ensure packets comply with the InTimeGuaranteed contract, a packet scheduling scheme may use the following parameters:
• S_i is the size of the ith packet to be scheduled,
• D_i is the transmission time of the ith packet, which is proportional to the packet size: D_i = S_i / S. Over the sampling time, an average link rate may be assumed; the transmission time may be calculated as the packet size divided by the average link transmission rate (S),
• T_i is the dwell time of the ith packet, which includes the transmission time and waiting time, i.e., the sum of the transmission latency of all packets in front of packet i as well as packet i itself: T_i = (D_1 + D_2 + … + D_i), and
• L_i is the deadline for each packet at the current hop along a respective path. A deadline usually refers to the remaining time until the deadline is met, but could be adjusted to be an absolute time (described below). In one embodiment, a central controller or sender may set a specific deadline at each hop, or it may be dynamically set at each hop. In one further embodiment, a per-hop deadline of the packets may be calculated as L_i = budget_i / hop_i. In order to make sure a time-sensitive packet meets its deadline at the current hop, the dwell time of the ith packet usually needs to satisfy T_i ≤ L_i, as explained further below and illustrated in the sketch following this list.
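The sketch below works through these parameters; the helper names are illustrative, and the numbers are an assumed example rather than values from the disclosure.

```python
def dwell_times(sizes_bits, link_rate_bps):
    """T_i = D_1 + ... + D_i, where D_i = S_i / S is packet i's transmission time."""
    total, result = 0.0, []
    for s in sizes_bits:
        total += s / link_rate_bps    # D_i
        result.append(total)          # T_i counts every packet ahead of i, plus i
    return result

def meets_per_hop_deadline(budget_s: float, hops: int, dwell_s: float) -> bool:
    """L_i = budget_i / hop_i; the packet is on time while T_i <= L_i."""
    return dwell_s <= budget_s / hops

# Three queued 12,000-bit (1500-byte) packets on a 100 Mbps link.
t = dwell_times([12000, 12000, 12000], 100e6)   # [120 us, 240 us, 360 us]
ok = meets_per_hop_deadline(0.001, 4, t[-1])    # 360 us > 250 us -> False
```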
[0071] FIG. 5A illustrates an example network in which a router includes a queue with dwell time reduction and packet washing services. Network 500 is the same or similar to network 400 of FIG. 4, with the addition of service level (SL) assessor 502,
SL washer 504 and packet disposer 506, which serve to reduce the dwell time of packets stored in the queue, such as LGQ 425-1, when needed, and thereby reduce latency. Each of the sending nodes 110, router 106-1, LGQ 425-1 and receiver nodes 108 are described above. In one embodiment, the SL assessor 502, SL washer 504 and packet disposer 506 are part of the LGQ 425-1 in router 106-1, as indicated by the dashed box surrounding these components. Network 500 is described with reference to FIGS. 5B and 5C, which illustrate flow diagrams of the SL assessor 502 and SL washer 504, respectively. For purposes of discussion, the processes may be performed at any of network nodes 106-1, 106-2, 106-3, although other components capable of processing may also implement such processes.
[0072] At step 520 of FIG. 5B, a packet including, for example, a QS header and data chunks is received at a network node 106-1 . In one embodiment, the packet is received from sending nodes 110. Typically, more than one packet will be received at each node and buffered in an input buffer. The input buffer may be a dedicated hardware buffer or a dedicated portion of memory. In one embodiment, the packet is received and buffered in LGQ 425-1 of router 106-1.
[0073] As packets arrive and are buffered in LGQ 425-1, the packets are assessed by SL assessor 502 to determine whether they are a washable packet at step 522. A data packet is washable when it contains a chunk of data that is of lower priority, e.g., the packet contains a chunk of data that could be dropped. Washability of a packet can be independent of information specified in the contract, such as delivery time. Rather, it indicates the packet is eligible for a washing operation (described above). If a packet received at the LGQ 425-1 is determined to be a washable packet, the washable packet is marked as “washable” and the “wash potential” is updated in memory at step 524. For example, as noted above, a data packet is marked as a washable packet when it contains a chunk of data that is of lower priority compared to other chunks of data (i.e., a lower priority that indicates it may be dropped). In one embodiment, the determination of whether a packet is a washable packet is assessed at the SL washer 504 instead of the SL assessor 502. After marking the packet as washable at step 524, or if the incoming packet is determined to not be washable (e.g., the packet is not qualitative communication compatible) at step 522, the process proceeds to step 526.
[0074] Packets processed at step 526 are next assessed by the SL assessor 502 to determine whether the packet should be classified as a trigger packet (or washable trigger packet). Status as a trigger packet is distinct from status as a washable packet. A trigger packet is a packet that is at risk of missing its delivery time (e.g., the packet is on the verge of missing the delivery time specified in the contract at the current node or for delivery to the final destination address). A trigger packet may be a washable packet or non-washable packet. An incoming packet is classified by the SL assessor 502 as a trigger packet when the packet is at risk of failing to satisfy (meet) its delivery time (e.g., specified in the contract). A packet that is not at risk of failing to satisfy the delivery time is not classified as a trigger packet and the process proceeds to step 530 where the packet is queued in the LGQ 425-1.
[0075] At step 526, the SL assessor 502 classifies the packet as a trigger packet and can update, when necessary, a state (also referred to as a “wash now state”) of the LGQ 425-1 at step 528. In general, the process of classifying a received packet includes determining whether the current time (i.e., the time the packet arrives) plus the amount of time required to transmit a packet (i.e., the time remaining to reach its destination) exceeds a latest time to send a packet that will still permit the packet to reach its destination within the delivery time. If an insufficient amount of time is available for the packet to reach its destination by the delivery time, then an amount of dwell time in the queue needs to be reduced by truncating or removing data chunks from the payload of one or more packets in the queue, including potentially the packet itself or another packet in the queue.
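Steps 522 through 530 can be summarized in the sketch below. The predicate and helper names (has_droppable_chunk, at_risk_of_missing_deadline, and so on) are hypothetical stand-ins for the checks described above, not APIs defined by the disclosure.

```python
def sl_assess_and_enqueue(packet, queue):
    """Sketch of the SL assessor flow of FIG. 5B (steps 522-530)."""
    if packet.has_droppable_chunk():                # step 522: washable?
        packet.mark_washable()                      # step 524: mark it and
        queue.update_wash_potential(packet)         #   update the wash potential
    if queue.at_risk_of_missing_deadline(packet):   # step 526: trigger packet?
        packet.mark_trigger()
        queue.update_wash_now_state(packet)         # step 528: wns update
    queue.enqueue(packet)                           # step 530
```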
[0076] In one embodiment, the size of chunks of data to be truncated or removed from the one or more packets (including the packet and/or another packet ahead of it in the queue) may be calculated as the amount of time required to be saved to meet the delivery time multiplied by a bandwidth in the network ((T - L) * B), where L is the delivery time, T is the dwell time of the packet in the router according to a depth of the queue (explained below) and B is an available bandwidth in the network. For example, the reduction in dwell time may be determined by dividing a size of the washed chunks of data by the bandwidth. Consider, for purposes of discussion, there are 1000 bits in each chunk of data and a 100 Mbps interface to transmit the chunks of data, so the transmission time for the 1000 bits in a chunk would be 1K/100M, i.e. 10 μsec. The amount of packet washing that is needed may be determined, in bits or other units of data size, by taking the time required to satisfy the delivery time (e.g., the amount of time that must be reduced to reach the destination by the delivery time) and multiplying this amount with the bandwidth. For example, a 100 Mbps interface transmits 100 bits per μsec. To save 50 μsec, 50*100 bits (i.e., 5000 bits in total) must be dropped to reach the packet destination in time to satisfy the delivery time. Depending on the size of the chunks of data, the node may determine how many chunks of data need to be dropped. In the example with 1000 bits in each chunk, 5 chunks would need to be dropped to save the 50 μsec.
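The worked numbers above reduce to a one-line computation. The function below assumes the (T - L) * B formula and whole-chunk washing; the name is illustrative.

```python
import math

def chunks_to_drop(time_to_save_us: float, bandwidth_bps: float,
                   chunk_bits: int) -> int:
    """Wash amount: (T - L) * B bits, rounded up to whole chunks."""
    bits_to_drop = time_to_save_us * 1e-6 * bandwidth_bps
    return math.ceil(bits_to_drop / chunk_bits)

# The example above: save 50 us on a 100 Mbps link with 1000-bit chunks.
assert chunks_to_drop(50, 100e6, 1000) == 5
```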
[0077] In one embodiment, the assessed packet at step 526 is at risk of failing to meet a total delivery time. A total delivery time may be the absolute time by which the packet must arrive at its destination or a sum of time the packet can take to reach its destination along a network path without exceeding the delivery time (e.g., specified in the contract). For example, the SL assessor 502 may determine that the packet is at risk of arriving late at its destination because the packet or packets ahead in the LGQ 425-1 have a transmission time that, when totaled, results in the current packet failing to reach its destination by the delivery time (without packet washing). That is, the packet will be at risk of failing to meet its deadline when the time spent transmitting the current data packet (at risk of being late) to the current node plus the time remaining to transmit the current data packet to its destination, including any dwell time, exceeds the total delivery time.
[0078] In another embodiment, the assessed packet is at risk of not meeting its delivery time based on the per-hop dwell time. That is, the packet dwell time in the queue places the packet at risk of failing to reach its destination by a specified delivery time that can be called a per-hop deadline, based on an expected amount of delay at each subsequent hop. The per-hop deadline may be determined, in one embodiment, using a remaining amount of time before the packet deadline expires and the remaining number of hops toward the packet’s destination. The dwell time (T_i) of a packet and its per-hop deadline (L_i) are further defined above. In either case, packets that are at risk of being late can be potentially accelerated to reach their destination on time by application of a packet wash operation (e.g., the delivery time specified in the contract may be satisfied with sufficient packet washing). In one embodiment, the packet wash operation may sufficiently minimize the amount of dwell time for one or more packets in the LGQ 425-1, or another packet may be dropped in its entirety, to sufficiently reduce latency. In a further embodiment, the packets at risk of being late will not have their delivery time satisfied unless delivery of one or more packets in the queue is modified. For example, a payload (or portion of a payload) of one or more of the packets is dropped. It is appreciated that packets and their payloads (or a portion of their payload) may be queued, dequeued, or dropped according to any number of different techniques, such as a first-in-first-out (FIFO) technique, a last-in-first-out (LIFO) technique, etc.
[0079] The wash now state may be used to identify when packet washing is required in order to accelerate delivery of a packet at risk of being late to its destination. For example, assume we wish to reduce the dwell time in the queue of an incoming packet classified as a trigger packet. Each data chunk in the trigger packet may only contribute a few microseconds of delay. Truncating data chunks from the trigger packet may therefore only reduce the dwell time in the queue (and accelerate delivery) by a small amount. However, if the trigger packet is stored in a queue with dozens or hundreds (or more) of other packets (i.e., a queue with a large queue depth), data chunks from each of the other packets in the queue may be truncated (assuming the packet is a washable packet). Whereas truncating data chunks of the individual trigger packet only contributes a few-microsecond reduction in dwell time, truncating data chunks from each of the other data packets in the queue contributes dozens or hundreds (or more) of microseconds in dwell time reduction. Thus, if the other packets ahead in the queue of the incoming trigger packet are washed, the dwell time of the trigger packet itself is also reduced. Accelerating a data packet in the queue by reducing its overall dwell time provides the packet sufficient time to reach its destination within a specified delivery time (i.e., an in-time guarantee is met for the data packet). As alluded to above, this is distinct from providing a packet wash on packets in a queue to reduce network congestion. Washing data packets in a queue to reduce congestion, while alleviating outgoing buffer space in the queue, does not account for queueing delays or any in-time guarantee that a packet will be delivered to its destination by a specified delivery time.
[0080] In one embodiment, the packet will not meet the delivery time (e.g., specified in the contract) even when a washing operation is performed on the packet itself or other packets in the LGQ 425-1. In this case, the packet as a whole may be dequeued and dropped or otherwise disposed of (e.g., the packet may be handled with traditional IP protocols).
[0081] At step 526, when a packet is determined to be a trigger packet, the process proceeds to step 528 where a state (wash now state) of the LGQ 425-1 is updated. The state of the LGQ 425-1 is maintained by a count and/or wash queue depth (WQD) (also termed herein, queue depth) of packets stored therein. This count or queue depth enables the LGQ 425-1 to determine whether any trigger packets are currently stored in the queue. When one or more trigger packets are stored in the LGQ 425-1 (the queue depth is greater than zero), the LGQ 425-1 remains in an “active” state (or “wash trigger state”). While in the “active” state, the LGQ 425-1 may perform wash operations on washable packets (including trigger packets) stored in the queue. Washing is performed until all trigger packets stored in the LGQ 425-1 reach the front and are dequeued, at which time the count and/or the queue depth reaches zero. In one embodiment, as additional trigger packets enter the queue, the count and/or the queue depth (and the state of the queue) may be updated. When no trigger packets are stored in the LGQ 425-1 , the LGQ 425-1 remains in an “inactive” state (or “non wash trigger state”). When the LGQ 425-1 is in an “inactive” state, the LGQ 425-1 will not perform any wash operations on packets stored in queue.
[0082] In one embodiment, a wash packet counter is used to maintain a count of the number of trigger packets stored in the LGQ 425-1 . As packets classified as a trigger packet arrive for storage in the LGQ 425-1 , the packets are marked (for example, using a flag) and counted. The count is incremented as trigger packets are queued in the LGQ 425-1 and decremented as packets are dequeued from the LGQ 425-1. In one embodiment, incrementing the counter is performed by the SL assessor 502. In a further embodiment, decrementing the count is performed by the SL washer 504.
[0083] In another embodiment, a queue depth (or queue depth count) is used to maintain the status of the LGQ 425-1. In particular, the queue depth count maintains the number of packets queued between the packet at the front of the queue and the newest arriving packet in the queue (usually, the packet nearest the back of the queue) classified as a trigger packet. When the queue depth is greater than zero, the LGQ 425-1 state is set to ‘active.’ As packets are dequeued from the LGQ 425-1, the queue depth is decremented (and the state remains ‘active’ as long as the queue depth is greater than zero). Once the queue depth reaches zero, the state of the LGQ 425-1 is set to ‘inactive.’ In one embodiment, the SL washer 504 sets the queue depth to the current queue depth when a trigger packet arrives at the LGQ 425-1 and decrements the queue depth for each washable packet that is dequeued while the state of the LGQ 425-1 is active.
[0084] In one embodiment, a finite state machine maintains the count and queue depth of the queue, as shown in FIG. 6. For example, the finite state machine shows two states: an inactive (or “False”) state 602 and an active (or “True”) state 604. As trigger packets enter and leave the queue, the count or queue depth is adjusted (incremented or decremented) to maintain the state of the LGQ 425-1.
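By way of illustration only, the two-state machine of FIG. 6 can be sketched in a few lines of Python. The class and method names below are editorial shorthand rather than part of the disclosure, and the sketch assumes the count-based variant of paragraph [0082]:

```python
class WashNowState:
    """A minimal sketch (not part of the disclosure) of the two-state
    machine of FIG. 6, using the trigger-packet count of paragraph [0082].
    """

    def __init__(self):
        self.wtp_count = 0  # washable trigger packets currently queued

    @property
    def active(self) -> bool:
        # "True" state 604 while any trigger packet is queued,
        # "False" state 602 otherwise.
        return self.wtp_count > 0

    def on_enqueue(self, is_trigger: bool) -> None:
        if is_trigger:
            self.wtp_count += 1  # e.g., performed by the SL assessor 502

    def on_dequeue(self, is_trigger: bool) -> None:
        if is_trigger and self.wtp_count > 0:
            self.wtp_count -= 1  # e.g., performed by the SL washer 504
```

A queue-depth-based variant would instead record the current depth when a trigger packet arrives and decrement it on every dequeue, as described in paragraph [0083].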
[0085] After incrementing or decrementing the count or queue depth, the process proceeds to step 530 where the packet is queued in the LGQ 425-1.
[0086] Turning to FIG. 5C, the SL washer 504 adds a processing stage that performs a wash operation on packets based on the state of the LGQ 425-1. As discussed above, packets that are classified and marked as a washable packet (including trigger packets) are processed by the SL washer 504 to reduce the dwell time of the packet in the LGQ 425-1. As packets are washed and disposed (dropped from or transmitted out of the queue) by packet disposer 506, the SL washer 504 adjusts the state of the LGQ 425-1. For example, when the queue is in an ‘active’ state, as a trigger packet is transmitted or dropped from the LGQ 425-1 after processing, the queue depth is decremented (until reaching zero) and the state of the queue may be adjusted from ‘active’ to ‘inactive.’
[0087] At step 532, a packet has been dequeued from the LGQ 425-1 for processing. As the packets are dequeued for processing, the SL washer 504 determines whether the queue is in an active or inactive state (or in a “wash state”) at step 534. A queue is determined to be in an “active” state when the count or queue depth is greater than zero. Queues in an active state will perform a packet wash operation on washable packets being dequeued. Thus, if the LGQ 425-1 is in an active state, the process proceeds to step 536 where the SL washer 504 determines whether the packet is a washable packet. In some embodiments, only washable trigger packets are marked as washable. Otherwise, the queue is determined to be in an “inactive” state. When in an inactive state, a packet wash operation is not performed during dequeuing of packets. Accordingly, the process proceeds to step 542 for continued processing.

[0088] Packets that are marked as a washable packet (in step 524), and determined to be washable at step 536, can be washed at step 538 to accelerate the disposal of packets to thereby reduce latency. The SL washer 504 may apply the packet wash operation to either the packet being dequeued or to other packets in the queue that are ahead of a trigger packet arriving at the queue.
[0089] In a first embodiment, the SL washer 504 applies a packet wash operation to the packet being dequeued. In this embodiment, the payload in the packet is truncated according to a qualitative communications coding scheme (e.g., significance level or random network coding as described above) and a size of the chunks of data. The size of the chunks of data to be truncated or removed from the packet may be calculated as the excess dwell time multiplied by the bandwidth in the network, i.e., (T - L) * B, such that after truncation T ≤ L, where L is the delivery time, T is the dwell time of the packet in the router according to the queue depth, and B is the bandwidth. An example of packet washing on packets being dequeued may be found below with reference to FIG. 8.
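As an illustrative sketch only, the calculation above may be expressed as follows; the function and variable names are assumptions, with times taken in seconds and bandwidth in bytes per second:

```python
def truncation_bytes(dwell_time_s: float, delivery_time_s: float,
                     bandwidth_bytes_per_s: float) -> int:
    """Sketch of the (T - L) * B rule: the excess queueing delay, multiplied
    by the link bandwidth, gives the payload size that must be shed so that
    the remaining dwell time T satisfies T <= L.
    """
    excess_s = dwell_time_s - delivery_time_s
    if excess_s <= 0:
        return 0  # the packet already meets its per-hop delivery time
    return int(excess_s * bandwidth_bytes_per_s)

# Example: 50 microseconds of excess dwell time on a 10 Gb/s link
# (1.25e9 bytes/s) requires shedding 62,500 bytes of washable payload.
print(truncation_bytes(150e-6, 100e-6, 1.25e9))
```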
[0090] In a second embodiment, other packets in front of a trigger packet arriving in the queue are washed. The number of chunks of data to be truncated from the payloads ahead is recorded as trigger packets are received at the LGQ 425-1. As packets are washed and dequeued from the LGQ 425-1, the SL washer 504 subtracts the truncated chunks of data from the recorded amount.
[0091] One example of a packet wash operation on other packets ahead in the queue is shown in FIGS. 7A and 7B. In the depicted embodiments, the LGQ 425-1 stores a number of packets (e.g., packets 1 - 4 in FIG. 7A and packets 1 - 9 in FIG. 7B). The packets marked with an “X” (e.g., packets 4 and 9) are classified as trigger packets, packets that are “darkened” (e.g., packets 1, 2, 4, 6, 7 and 8) are washable packets, and packets that are “white” (e.g., packets 3, 5 and 9) are non-washable packets. In the example of FIG. 7A, packet 4 arrives at the back of the LGQ 425-1 and is marked as a trigger packet. The number of washable packets in front of (ahead of) the trigger packet 4 is 2 (e.g., packets 1 and 2), with a total of 3 washable packets in the LGQ 425-1 (packets 1, 2 and 4). In the example of FIG. 7B, which is a continuation of the example in FIG. 7A, packet 1 is dequeued from the LGQ 425-1 and packets 5 - 9 arrive at the LGQ 425-1, with packet 9 arriving last (at the back of the queue) and marked as a trigger packet. The number of washable packets in the queue is 5 (packets 2, 4, 6, 7 and 8). In this example, there are multiple trigger packets (e.g., packets 4 and 9). Accordingly, the washable packets in front of each trigger packet may be washed (e.g., washable packet 2 is washed for trigger packet 4, and washable packets 2, 6, 7 and 8 are washed for trigger packet 9). Other examples of packet washing may be found below with reference to FIGS. 8 and 9.
[0092] In one embodiment, a “fair” strategy based on a weighted wash ratio determines the “wash potential” or “available wash amount” (AWA) of a packet. A fair strategy spreads (optionally evenly) the washing of packets in the queue over multiple packets. In this case, and for multiple packets stored in the queue (e.g., LGQ 425-1), the current AWA of the queue (and of each washable packet) may be tracked in order to avoid over-washing of a single packet. For example, as packets are queued and dequeued from the LGQ 425-1, the size of each of the washable chunks of data in the payload of a packet is added to the AWA when the packet is queued, and the size of each of the washable chunks of data of the payload is subtracted from the AWA when the packet is dequeued. The updated AWA is compared against the size of the chunks that need to be truncated from packets in the queue in order for a trigger packet to reach its destination (or reach its next hop) within the specified delivery time (which may be specified in the contract). In one embodiment, the AWA may be in units of chunks, bytes, or some other measure. Likewise, the comparison may be against a size of chunks, bytes, or some other measure of data that needs to be truncated. Based on the comparison, a wash operation may be performed on the packet and/or other washable packets in the queue.
[0093] For implementing a fair strategy on a packet classified as a trigger packet, a needed truncation amount (NTA) is calculated as (T - L) * B, where T is the dwell time, L is the delivery time (per-hop) and B is the link bandwidth. Each washable packet has an allowable wash amount (AWA), which is defined as the maximum number of bytes that may be truncated from the packet in a wash operation. A total allowable wash amount (AWA_total) for the queue and for a new trigger packet arriving at the queue is calculated as AWA_total = AWA_1 + ... + AWA_I, where {1, ..., I} is the list of washable packets in the queue ahead of (in front of) the new trigger packet arriving at the queue. When a washable packet dequeues, AWA_total is reduced by the AWA of that packet. The data that must be dropped is distributed among multiple washable packets as they are dequeued. The truncation size for a washable packet being dequeued (TS_dequeued) is

TS_dequeued = (NTA_total / AWA_total) * AWA_dequeued,

where NTA_total = NTA_1 + ... + NTA_K is the sum of the needed truncation amounts of the K trigger packets in the queue and AWA_dequeued is the allowable wash amount of the packet being dequeued. For example, assume for purposes of discussion that there are four washable packets in the queue stored ahead of two trigger packets (such that K = 2), where the four washable packets have an AWA of 6 bytes, 7 bytes, 9 bytes and 8 bytes, respectively, and the NTA for the first trigger packet is 4 bytes and the NTA for the second trigger packet is 6 bytes (in addition to the 4 bytes required by the first trigger packet, which can be ahead of the second trigger packet in the queue). In this case, the sum of the needed truncation amounts is NTA_total = 4 + 6 = 10 bytes and AWA_total = 6 + 7 + 9 + 8 = 30 bytes. When the first of the washable packets dequeues (with allowable wash amount AWA_dequeued = 6), the truncated amount of the dequeued packet is

TS_dequeued = (10 / 30) * 6 = 2 bytes.

When the next washable packet is dequeued (AWA_dequeued = 7), the truncated amount of the dequeued packet is

TS_dequeued = (8 / 24) * 7 = 2.33 bytes

(rounded up to 3 bytes, but optionally rounded down or to the nearest integer chunk size), where the remaining NTA_total = 10 - 2 = 8 bytes (the prior truncation amount having been subtracted) and AWA_total = 7 + 9 + 8 = 24 bytes. This process may continue iteratively until the washable packets are dequeued. In this way, dropped chunks needed for one trigger packet to meet its delivery time can be distributed relatively evenly across multiple packets.
[0094] In one embodiment, the NTA may be determined for multiple trigger packets before one or more washable packets are dequeued. In this case, the NTA for a trigger packet may be based on the NTA for one or more other trigger packets in the queue. For example, assume for purposes of discussion that a first trigger packet arrives at the queue with an NTA of 5 bytes and, prior to washing packets ahead in the queue, a second trigger packet arrives at the queue with an NTA of 5 bytes. When the second trigger packet arrives at the queue, the node (such as the SL assessor 502 or SL washer 504 in the node) is aware of the NTA (5 bytes) for the first trigger packet and assumes that washing packets ahead in the queue will result in at least the removal of 5 bytes. If removal of the 5 bytes for the first trigger packet satisfies the per-hop deadline of the second trigger packet, then no further washing is necessary for the second trigger packet. Otherwise, if the per-hop deadline is not satisfied, additional washing will be required (although the additional washing may be less than the NTA of the second trigger packet as a result of washing packets for the first trigger packet).
[0095] Accordingly, by application of the TS_dequeued formula, the packet wash operation does not exhaust the wash allowance of the packets at the front of the queue. Rather, the needed truncation is “fairly” distributed among the washable packets according to their wash allowance sizes.
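For illustration only, the iterative distribution described above can be sketched as follows; the function name and the rounding policy (rounding up, capped at each packet's AWA) are assumptions consistent with, but not mandated by, the example in paragraph [0093]:

```python
import math

def fair_wash(awa_list, nta_list):
    """Sketch of the fair wash distribution: each dequeued washable packet
    is truncated by TS = (remaining NTA_total / AWA_total) * AWA_dequeued,
    with NTA_total and AWA_total updated after every dequeue.
    """
    nta_total = sum(nta_list)            # e.g., 4 + 6 = 10 bytes
    awa_total = sum(awa_list)            # e.g., 6 + 7 + 9 + 8 = 30 bytes
    truncations = []
    for awa_dequeued in awa_list:        # packets dequeue front to back
        ts = nta_total / awa_total * awa_dequeued
        ts = min(math.ceil(ts), awa_dequeued)  # round up, never exceed AWA
        truncations.append(ts)
        nta_total = max(nta_total - ts, 0)
        awa_total -= awa_dequeued
    return truncations

# Worked example from the text: AWAs of 6, 7, 9 and 8 bytes, two trigger
# packets needing 4 and 6 bytes. Yields 2 bytes, then ceil(2.33) = 3 bytes,
# and so on, totaling the 10 bytes required.
print(fair_wash([6, 7, 9, 8], [4, 6]))  # -> [2, 3, 3, 2]
```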
[0096] After packet washing is completed at step 538 (or the packet is determined not to be washable at step 536), the process proceeds to step 540 to update the wash state of the queue by updating the count and/or the queue depth (WQD). The count is increased as trigger packets are queued in the LGQ 425-1 and decreased as trigger packets are dequeued from the LGQ 425-1. The queue depth is set (or updated) when a trigger packet arrives, based on the number of packets stored in the LGQ 425-1, and decreases as packets are dequeued until no packets remain in the queue (the depth is zero). At step 542, the washed packet is then checked to determine whether it remains at risk of failing to satisfy the delivery time for reaching its destination. For example, the queue may determine whether the delivery time (e.g., specified in the contract) will be met or exceeded based on an amount of time lapsed for transmission of the packet (now washed) to the current node and a remaining time required to transmit the washed data packet to its destination. In one embodiment, at step 546, the washed packet is entirely dropped when the dwell time of the packet is determined to exceed the delivery time (e.g., specified in the contract). In another embodiment, when the packet wash operation fails to reduce the size of the payload such that the dwell time of the washed packet is less than or equal to the delivery time, the washed packet is nevertheless transmitted (forwarded) to the next hop node at step 544, and the remaining time required to transmit the washed packet to its destination is recalculated.
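A minimal sketch of this forward-or-drop decision (steps 542 - 546) follows; the function name, the returned labels and the policy flag are hypothetical and only illustrate the two embodiments just described:

```python
def dispose_washed_packet(dwell_time_s, delivery_time_s,
                          forward_anyway=False):
    """Sketch of steps 542-546: decide the fate of a washed packet.

    forward_anyway selects the alternative embodiment in which a packet
    that still misses its per-hop delivery time is forwarded anyway and
    its remaining time budget is recalculated at the next hop.
    """
    if dwell_time_s <= delivery_time_s:
        return "forward"              # step 544: delivery time satisfied
    if forward_anyway:
        return "forward_recalculate"  # step 544: forward, recalc budget
    return "drop"                     # step 546: delivery time cannot be met

print(dispose_washed_packet(90e-6, 100e-6))         # forward
print(dispose_washed_packet(120e-6, 100e-6))        # drop
print(dispose_washed_packet(120e-6, 100e-6, True))  # forward_recalculate
```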
[0097] FIG. 8 illustrates an example of packet washing on a packet being dequeued from the latency guarantee queue. The depicted embodiment shows a table 800 with packets arriving 802 in the queue (e.g., LGQ 425-1), packets being dequeued 804 from the queue, packets currently in the queue 806, a state (‘T’ or ‘F’) of the queue 808 (“wns” or “wash now state”) and a count of the number of washable trigger packets 810 (“WTP”) in the queue. Packets arrive in the queue from top to bottom of the table 800, as shown by the “time” arrow along the vertical axis. Bolded and italicized packets represent washable trigger packets (WTPs), underlined packets represent washable packets, and all other packets are non-trigger packets and non-washable packets. The state of the queue (wns) in the depicted embodiment is maintained using a count of the number of trigger packets (WTP count) stored in the queue. The state is ‘active’ (‘T’) when the WTP count is greater than zero, and the state is ‘inactive’ (‘F’) when the WTP count is zero.
[0098] In an initial state (row 1), the queue has two packets ‘b’ and ‘a’ stored in the queue. Packet ‘b’ is a washable packet (but not a trigger packet) and packet ‘a’ is neither washable nor a trigger packet. Accordingly, the state of the queue remains ‘inactive’ (‘F’) and the WTP count remains ‘0.’ As packet ‘c’ (classified as a trigger packet) arrives and is stored in the queue (row 2), the state of the queue becomes ‘active’ (‘T’) and the WTP count is incremented to ‘1.’ After the arrival of packet ‘c,’ packets ‘a’ and ‘b’ are dequeued from the queue (rows 3 and 4). Since neither packet ‘a’ nor packet ‘b’ is a trigger packet, the state of the queue and the WTP count remain unchanged. In row 5, packet ‘c’ is being dequeued from the queue. Since packet ‘c’ is a trigger packet, the state of the queue is set to ‘inactive’ (no other trigger packets remain in the queue) and the WTP count is decremented to ‘0’ (no other packets remain in the queue). As the packet is dequeued, it is washed by the queue and either transmitted to the next hop node or dropped. As packets are queued and dequeued in rows 6 - 11, the state of the queue and the trigger packet count are maintained.
[0099] FIG. 9 illustrates an example of packet washing on packets at the front of the queue. In the illustrated embodiment of table 900, columns 802 - 810 are the same as in table 800. In this embodiment, the state of the queue (wns) is maintained using a queue depth 912 (WQD) of the number of washable packets stored in the queue. For purposes of this example, a trigger packet requires 10 chunks of data from other packets ahead in the queue to be dropped in order to satisfy its delivery time (e.g., per-hop delivery time), and a washable packet ahead in the queue can have 5 chunks of data dropped from its payload and still be transmitted.
[00100] As with the example in FIG. 8, the queue initially stores packets ‘b’ and ‘a’ (row 1), the state of the queue 808 is ‘inactive’ (‘F’), the WTP count 810 is ‘0,’ the queue depth 912 (WQD) is ‘0’ and no truncation of data is initially required. As packet ‘c’ arrives in the queue (row 2), there are three packets in the queue (packets ‘c’, ‘b’ and ‘a’). Since packet ‘c’ is a trigger packet, the state 808 (wns) changes from ‘inactive’ (‘F’) to ‘active’ (‘T’), the WTP count 810 is incremented to ‘1,’ and the WQD 912 is set to ‘3.’ In this embodiment, since packet ‘c’ is a trigger packet, it needs 10 chunks of data to be washed in the queue. Accordingly, the needed truncation amount 914 is updated to 10. The needed truncation amount 914 identifies the amount of data (chunks of data) that must be removed or dropped to reduce the dwell time of the trigger packets in the queue (in this case, just packet ‘c’) such that the packets may satisfy their delivery times. As packet ‘a’ is dequeued from the queue (row 3), the queue depth 912 is decremented to ‘2.’ However, since packet ‘a’ is not a washable packet, the needed truncation amount 914 does not change. As packet ‘b’ is dequeued from the queue (row 4), the queue depth 912 is decremented to ‘1.’ Since packet ‘b’ is a washable packet, after a wash operation is performed on the packet (and the packet is dequeued), the needed truncation amount 914 is reduced to 5. When packet ‘c’ is dequeued (row 5), 5 chunks of data are removed since it is a washable packet. At this time, no packets remain in the queue, the state 808 becomes ‘inactive’ (‘F’), the WTP count 810 drops to ‘0,’ and the WQD 912 is decremented to ‘0,’ along with the needed truncation amount 914. As packets are queued and dequeued in rows 6 - 11, the state of the queue, the trigger packet count, the queue depth and the needed truncation amount are maintained.
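The bookkeeping walked through above can be illustrated with a short sketch; the class name is hypothetical, and the constants (10 chunks needed per trigger packet, up to 5 washable chunks per packet) mirror this example only:

```python
class WashAheadQueue:
    """Sketch of the FIG. 9 bookkeeping: wash queue depth (WQD) and needed
    truncation amount (NTA) for washing packets ahead of a trigger packet.
    """
    NTA_PER_TRIGGER = 10   # chunks a trigger packet needs washed ahead of it
    AWA_PER_WASHABLE = 5   # chunks a washable packet can shed

    def __init__(self):
        self.packets = []  # (name, washable) tuples, front of queue first
        self.wqd = 0       # wash queue depth
        self.nta = 0       # needed truncation amount, in chunks

    def enqueue(self, name, washable=False, trigger=False):
        self.packets.append((name, washable))
        if trigger:
            self.wqd = len(self.packets)      # depth up to the new trigger
            self.nta += self.NTA_PER_TRIGGER  # e.g., row 2: WQD=3, NTA=10

    def dequeue(self):
        name, washable = self.packets.pop(0)
        if self.wqd > 0:
            self.wqd -= 1                     # e.g., rows 3 - 5 of table 900
            if washable:
                self.nta -= min(self.AWA_PER_WASHABLE, self.nta)
        return name

q = WashAheadQueue()
q.enqueue('a'); q.enqueue('b', washable=True)
q.enqueue('c', washable=True, trigger=True)  # WQD=3, NTA=10
q.dequeue(); q.dequeue(); q.dequeue()        # NTA falls 10 -> 5 -> 0
print(q.wqd, q.nta)                          # -> 0 0
```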
[00101] FIG. 10 illustrates an embodiment of a network node which may implement a router. The node (e.g., a router) 1000 may be, for example, a node 106 or any other node or router as described above in communication system 50. The node 1000 may comprise a plurality of input/output ports 1010/1030 and/or receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes, a processor 1020 to process data and determine which node to send the data to, and a memory 1022. The node 1000 may also generate and distribute data in the form of data packets in the communication system. Although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors. The processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 1020 may be implemented using hardware, software, or both. The memory 1022 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single memory, memory 1022 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data). In one embodiment, memory 1022 stores code that enables the processor 1020 to implement the QS routing engine 155 and encoder/decoder 185. Memory 1022 may include reserved space to implement the coded chunk cache 195.
[00102] In one embodiment, the QS routing engine 155 is in the form of code which is operable to instruct the processor 1020 to perform those tasks attributed to the QS routing engine 155.
[00103] The technology described above may also be implemented on any general-purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.
[00104] FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure. Computer system 1100 includes a processor 1104 and a memory 1108 that communicate with each other, and with other components, via a bus 1112. Bus 1112 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.

[00105] Memory 1108 may include various components (e.g., machine-readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system 1116 (BIOS), including basic routines that help to transfer information between elements within computer system 1100, such as during start-up, may be stored in memory 1108. Memory 1108 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1120 embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory 1108 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
[00106] Computer system 1100 may also include a storage device 1124. Examples of a storage device (e.g., storage device 1124) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device 1124 may be connected to bus 1112 by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device 1124 (or one or more components thereof) may be removably interfaced with computer system 1100 (e.g., via an external port connector (not shown)). Particularly, storage device 1124 and an associated machine-readable medium 1128 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1100. In one example, software 1120 may reside, completely or partially, within machine-readable medium 1128. In another example, software 1120 may reside, completely or partially, within processor 1104.
[00107] Computer system 1100 may also include an input device 1132. In one example, a user of computer system 1100 may enter commands and/or other information into computer system 1100 via input device 1132. Examples of an input device 1132 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. Input device 1132 may be interfaced to bus 1112 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1112, and any combinations thereof. Input device 1132 may include a touch screen interface that may be a part of or separate from display 1136, discussed further below. Input device 1132 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
[00108] A user may also input commands and/or other information to computer system 1100 via storage device 1124 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 1140. A network interface device, such as network interface device 1140, may be utilized for connecting computer system 1100 to one or more of a variety of networks, such as network 1144, and one or more remote devices 1148 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network, such as network 1144, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software 1120, etc.) may be communicated to and/or from computer system 1100 via network interface device 1140.
[00109] Computer system 1100 may further include a video display adapter 1152 for communicating a displayable image to a display device, such as display device 1136. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter 1152 and display device 1136 may be utilized in combination with processor 1104 to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system 1100 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus 1112 via a peripheral interface 1156. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
[00110] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[00111] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[00112] The computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid-state storage media, and specifically excludes signals. It should be understood that the software can be installed in and sold with the device. Alternatively, the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example.
[00113] Computer-readable storage media (medium) exclude (excludes) propagated signals per se, can be accessed by a computer and/or processor(s), and include volatile and non-volatile internal and/or external media that is removable and/or non-removable. For the computer, the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.

[00114] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[00115] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[00116] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

[00117] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

What is claimed is:
1. A method for controlling data flows in a network, comprising:
receiving a first set of data packets at a queue of a node, each of the first set of data packets including a payload and a delivery time;
storing each of the first set of data packets in the queue of the node;
classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified;
dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and
forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
2. The method of claim 1, wherein the second set of data packets comprises a data packet ahead of the trigger packet in the queue.
3. The method of claims 1-2, wherein one or more chunks of data in the second set of data packets have a low priority.
4. The method of claims 1-3, wherein the one or more chunks of data in the payload are defined in a qualitative service header of the first set of data packets, each chunk being a sub-set of the payload.
5. The method of claims 1-4, wherein, when the dropping fails to reduce a size of the payload such that a dwell time of the trigger packet is larger than a per-hop deadline designated for a current hop of the trigger packet, and a total delivery time for the trigger packet is not exceeded, the method further comprises forwarding the trigger packet to the next hop node and recalculating the per-hop deadline at subsequent hops.
6. The method of claims 1-4, wherein the dropping one or more chunks of data comprises dropping one or more chunks of data of the trigger packet to reduce a dwell time of the trigger packet such that the trigger packet will satisfy the delivery time.
7. The method of claims 1-6, further comprising setting a state of the queue to active when one or more of the first set of data packets is classified as the trigger packet, wherein the state of the queue indicates whether the dropping should be performed.
8. The method of claim 7, further comprising: classifying multiple data packets of the first set of data packets as trigger packets, and counting the trigger packets by incrementing a number when one of the trigger packets is queued and decrementing the number when a trigger packet is dequeued, wherein the state is set to active when the number is greater than zero and set to inactive when the number is zero, and the dropping is performed on the second set of data packets when the state is active.
9. The method of claims 1-8, further comprising: determining a queue depth for the first set of data packets, where the queue depth is a number of packets ahead of the trigger packet; setting a state of the queue to active when the queue depth is greater than zero, wherein the state of the queue indicates whether the dropping should be performed; and decrementing the queue depth when each of the first set of data packets is dequeued from the queue while the state is active.
10. The method of claim 9, wherein the dropping is performed on all of the second set of data packets ahead in the queue as long as the queue depth is greater than zero while the second set of data packets are dequeued.
11. The method of claims 1-10, wherein the dropping the one or more chunks of data in the payload is based on a coding scheme and a size of each of the one or more chunks of data.
12. The method of claim 11, wherein the coding scheme is one of a significance level coding scheme or a network coding scheme.
13. The method of claims 1-12, further comprising calculating a needed truncation amount for the trigger packet; calculating a total allowable wash amount for the second set of data packets; and calculating a truncation size for each of the second set of data packets based on the needed truncation amount and the total allowable wash amount, wherein the needed truncation amount is distributed among a plurality of the second set of data packets.
14. The method of claims 1-13, wherein the second set of data packets are washable data packets.
15. The method of claims 1-14, wherein modifying the first set of data packets in the queue is relative to a scheduling strategy in which the first set of data packets are transmitted without dropping chunks of data.
16. A node controlling data flow in a network, comprising:
a non-transitory memory storage comprising instructions; and
one or more processors in communication with the memory, wherein the one or more processors execute the instructions to:
receive a first set of data packets at a queue of the node, each of the first set of data packets including a payload and a delivery time;
store each of the first set of data packets in the queue of the node;
classify a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified;
drop one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and
forward the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
17. The node of claim 16, wherein the second set of data packets comprises a data packet ahead of the trigger packet in the queue.
18. The node of claims 16-17, wherein one or more chunks of data in the second set of data packets have a low priority.
19. The node of claims 16-18, wherein the one or more chunks of data in the payload are defined in a qualitative service header of the first set of data packets, each chunk being a sub-set of the payload.
20. The node of claims 16-19, wherein, when the dropping fails to reduce a size of the payload such that a dwell time of the trigger packet is larger than a per-hop deadline designated for a current hop of the trigger packet, and a total delivery time for the trigger packet is not exceeded, the one or more processors further execute the instructions to forward the trigger packet to the next hop node and recalculate the per-hop deadline at subsequent hops.
21. The node of claims 16-20, wherein the dropping one or more chunks of data comprises the one or more processors further executing the instructions to drop one or more chunks of data of the trigger packet to reduce a dwell time of the trigger packet such that the trigger packet will satisfy the delivery time.
22. The node of claims 16-17, wherein the one or more processors further execute the instructions to set a state of the queue to active when one or more of the first set of data packets is classified as the trigger packet, wherein the state of the queue indicates whether the dropping should be performed.
23. The node of claim 22, wherein the one or more processors further execute the instructions to: classify multiple data packets of the first set of data packets as trigger packets, and count the trigger packets by incrementing a number when one of the trigger packets is queued and decrementing the number when a trigger packet is dequeued, wherein the state is set to active when the number is greater than zero and set to inactive when the number is zero, and the dropping is performed on the second set of data packets when the state is active.
24. The node of claims 16-23, wherein the one or more processors further execute the instructions to: determine a queue depth for the first set of data packets, where the queue depth is a number of packets ahead of the trigger packet; set a state of the queue to active when the queue depth is greater than zero, wherein the state of the queue indicates whether the dropping should be performed; and decrement the queue depth when each of the first set of data packets is dequeued from the queue while the state is active.
25. The node of claim 24, wherein the dropping is performed on all of the second set of data packets ahead in the queue as long as the queue depth is greater than zero while the second set of data packets are dequeued.
26. The node of claims 16-25, wherein the dropping the one or more chunks of data in the payload is based on a coding scheme and a size of each of the one or more chunks of data.
27. The node of claim 26, wherein the coding scheme is one of a significance level coding scheme or network coding scheme.
28. The node of claims 16-27, wherein the one or more processors further execute the instructions to: calculate a needed truncation amount for the trigger packet; calculate a total allowable wash amount for the second set of data packets; and calculate a truncation size for each of the second set of data packets based on the needed truncation amount and the total allowable wash amount, wherein the needed truncation amount is distributed among a plurality of the second set of data packets.
29. The node of claims 16-28, wherein the second set of data packets are washable data packets.
30. The node of claims 16-29, wherein modifying the first set of data packets in the queue is relative to a scheduling strategy in which the first set of data packets are transmitted without dropping chunks of data.
31. A non-transitory computer-readable medium storing computer instructions for controlling data in a network, that when executed by one or more processors, cause the one or more processors to perform the steps of:
receiving a first set of data packets at a queue of a node, each of the first set of data packets including a payload and a delivery time;
storing each of the first set of data packets in the queue of the node;
classifying a data packet of the first set of data packets as a trigger packet when the node determines that the delivery time of the trigger packet will not be satisfied unless delivery of the first set of data packets in the queue is modified;
dropping one or more chunks of data in the payload of a second set of data packets, among the first set of data packets, stored in the queue to modify the delivery of the first set of data packets in the queue and accelerate the delivery of the trigger packet in order to satisfy the delivery time of the trigger packet; and
forwarding the first set of data packets stored in the queue to a next hop node such that the delivery time is satisfied.
32. The non-transitory computer-readable medium of claim 31, wherein the second set of data packets comprises a data packet ahead of the trigger packet in the queue.
33. The non-transitory computer-readable medium of claims 31-32, wherein one or more chunks of data in the second set of data packets have a low priority.
34. The non-transitory computer-readable medium of claims 31-33, wherein the one or more chunks of data in the payload are defined in a qualitative service header of the first set of data packets, each chunk being a sub-set of the payload.
35. The non-transitory computer-readable medium of claims 31-34, wherein, when the dropping fails to reduce a size of the payload such that a dwell time of the trigger packet is larger than a per-hop deadline designated for a current hop of the trigger packet, and a total delivery time for the trigger packet is not exceeded, the instructions cause the one or more processors to further perform the steps of forwarding the trigger packet to the next hop node and recalculating the per-hop deadline at subsequent hops.
36. The non-transitory computer-readable medium of claims 31-34, wherein the dropping one or more chunks of data comprises dropping one or more chunks of data of the trigger packet to reduce a dwell time of the trigger packet such that the trigger packet will satisfy the delivery time.
37. The non-transitory computer-readable medium of claims 31-36, wherein the instructions cause the one or more processors to further perform the step of setting a state of the queue to active when one or more of the first set of data packets is classified as the trigger packet, wherein the state of the queue indicates whether the dropping should be performed.
38. The non-transitory computer-readable medium of claim 37, wherein the instructions cause the one or more processors to further perform the steps of: classifying multiple data packets of the first set of data packets as trigger packets, and counting the trigger packets by incrementing a number when one of the trigger packets is queued and decrementing the number when a trigger packet is dequeued, wherein the state is set to active when the number is greater than zero and set to inactive when the number is zero, and the dropping is performed on the second set of data packets when the state is active.
39. The non-transitory computer-readable medium of claims 31-38, wherein the instructions cause the one or more processors to further perform the steps of: determining a queue depth for the first set of data packets, where the queue depth is a number of packets ahead of the trigger packet; setting a state of the queue to active when the queue depth is greater than zero, wherein the state of the queue indicates whether the dropping should be performed; and decrementing the queue depth when each of the first set of data packets is dequeued from the queue while the state is active.
40. The non-transitory computer-readable medium of claim 39, wherein the dropping is performed on all of the second set of data packets ahead in the queue as long as the queue depth is greater than zero while the second set of data packets are dequeued.
41. The non-transitory computer-readable medium of claims 31-40, wherein the dropping the one or more chunks of data in the payload is based on a coding scheme and a size of each of the one or more chunks of data.
42. The non-transitory computer-readable medium of claim 41, wherein the coding scheme is one of a significance level coding scheme or a network coding scheme.
43. The non-transitory computer-readable medium of claims 41-42, wherein the instructions cause the one or more processors to further perform the steps of: calculating a needed truncation amount for the trigger packet; calculating a total allowable wash amount for the second set of data packets; and calculating a truncation size for each of the second set of data packets based on the needed truncation amount and the total allowable wash amount, wherein the needed truncation amount is distributed among a plurality of the second set of data packets.
44. The non-transitory computer-readable medium of claims 41-43, wherein the second set of data packets are washable data packets.
45. The non-transitory computer-readable medium of claims 41-44, wherein modifying the first set of data packets in the queue is relative to a scheduling strategy in which the first set of data packets are transmitted without dropping chunks of data.
PCT/US2020/055428 2020-05-30 2020-10-13 Method and apparatus of packet wash for in-time packet delivery WO2021101640A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063032582P 2020-05-30 2020-05-30
US63/032,582 2020-05-30

Publications (1)

Publication Number Publication Date
WO2021101640A1 true WO2021101640A1 (en) 2021-05-27

Family

ID=73040362

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/055428 WO2021101640A1 (en) 2020-05-30 2020-10-13 Method and apparatus of packet wash for in-time packet delivery

Country Status (1)

Country Link
WO (1) WO2021101640A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780527A (en) * 2022-04-21 2022-07-22 中国农业银行股份有限公司 Data cleaning method and device
WO2024049442A1 (en) * 2022-09-02 2024-03-07 Futurewei Technologies, Inc. An efficient mechanism to process qualitative packets in a router

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CLEMM ALEXANDER ET AL: "High-Precision Latency Forwarding over Packet-Programmable Networks", NOMS 2020 - 2020 IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM, IEEE, 20 April 2020 (2020-04-20), pages 1 - 8, XP033777739, DOI: 10.1109/NOMS47738.2020.9110431 *
RICHARD LI ET AL: "A Framework for Qualitative Communications Using Big Packet Protocol", NETWORKING FOR EMERGING APPLICATIONS AND TECHNOLOGIES, ACM, 2 PENN PLAZA, SUITE 701NEW YORKNY10121-0701USA, 14 August 2019 (2019-08-14), pages 22 - 28, XP058440139, ISBN: 978-1-4503-6876-6, DOI: 10.1145/3341558.3342201 *


Similar Documents

Publication Publication Date Title
US7724750B2 (en) Expedited data transmission in packet based network
CN109120544B (en) A transmission control method based on host-side traffic scheduling in a data center network
US10432556B1 (en) Enhanced audio video bridging (AVB) methods and apparatus
US6765905B2 (en) Method for reducing packet data delay variation in an internet protocol network
US20060203730A1 (en) Method and system for reducing end station latency in response to network congestion
US20190044878A1 (en) Technologies for managing tcp/ip packet delivery
EP3588880B1 (en) Method, device, and computer program for predicting packet lifetime in a computing device
US11290380B2 (en) Method for transferring information across a data center network
CN106027416A (en) Data centre network flow dispatching method and system based on space and time combination
WO2018121535A1 (en) Load balance processing method and apparatus
CN111224888A (en) Method for sending message and message forwarding device
US20140281034A1 (en) System and Method for Compressing Data Associated with a Buffer
WO2020210780A1 (en) Chunk based network qualitative services
US20220086680A1 (en) Data packet prioritization for downlink transmission at network level
US7916743B2 (en) System and method for improved multicast performance
CN112995048B (en) Data center network congestion control and scheduling fusion method and terminal equipment
US20220086681A1 (en) Data packet prioritization for downlink transmission at sender level
WO2021101640A1 (en) Method and apparatus of packet wash for in-time packet delivery
CN113225196A (en) Service level configuration method and device
WO2019165855A1 (en) Message transmission method and device
CN113765812A (en) Method and device for marking message
WO2021101610A1 (en) Latency guarantee for data packets in a network
CN115514709B (en) Congestion control event queue scheduling method, device, equipment and storage medium
EP1730903B1 (en) Expedited data transmission in packet based network
JP4838739B2 (en) Router buffer management method and router using the management method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20800513

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20800513

Country of ref document: EP

Kind code of ref document: A1