WO2021101610A1 - Latency guarantee for data packets in a network - Google Patents
Latency guarantee for data packets in a network
- Publication number
- WO2021101610A1 WO2021101610A1 PCT/US2020/048244 US2020048244W WO2021101610A1 WO 2021101610 A1 WO2021101610 A1 WO 2021101610A1 US 2020048244 W US2020048244 W US 2020048244W WO 2021101610 A1 WO2021101610 A1 WO 2021101610A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data packets
- node
- data packet
- time
- data
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/28—Flow control; Congestion control in relation to timing considerations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2425—Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2466—Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
Definitions
- the disclosure generally relates to latency guarantee for packet delivery in a network.
- Computer networks, such as the Internet, are in widespread use today. These networks provide network devices, namely devices connected to the network such as personal computers, servers, or the like, with the ability to communicate with each other. Network devices communicate with each other by converting the data to be communicated into data packets and transmitting the packets across the network. In a typical network, however, a direct physical connection between two devices is often not possible due to the large number of devices using the network. As such, the packets may pass through several intermediate network devices, such as routers, switches, etc., which direct and help deliver the packets to their intended destination network device.
- a method for regulating delivery time of a data packet between communication devices comprising receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
- the method further comprising sorting the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
- generating the ordered list further comprises calculating a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determining a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
- the generating the ordered list further comprises adding the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent in the sending step; and removing the determined data packet from the one or more queues.
- the method further comprising dropping a current data packet when there is insufficient time to send the current data packet by a time required by the delivery time.
- the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
- the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
- the plurality of data packets are received using a new internet protocol
- each of the plurality of data packets comprise a header specification field, a shipping specification field, a contract specification field and a payload specification field
- the contract specification field includes a contract specifying the delivery time
- each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and a remaining number of hops.
- the method further comprising calculating the deadline at the current hop using a remainder of the budget and the remaining number of hops.
- the node includes multiple output ports and each output port is associated with the one or more queues.
- the one or more queues are a latency guarantee queue.
- a node for regulating delivery time of a data packet between communication devices comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a plurality of data packets at a node, the data packets each specifying a delivery time; store, in one or more queues of the node, the data packets; generate, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
- a node for regulating delivery time of a data packet between communication devices comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receiving a plurality of data packets at a node, the data packets each including a contract specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time included in the contract for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
- non-transitory computer-readable medium storing computer instructions for regulating delivery time of a data packet between communication devices that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
- a method for scheduling a plurality of data packets, each indicating a delivery time, between communication devices comprising determining whether a schedule satisfying the delivery time deadline for each of the plurality of data packets is feasible; ordering the plurality of data packets in a queue to reduce an average dwell time when the schedule is feasible and transmitting the plurality of packets in the order to satisfy the delivery time for each of the plurality of data packets; and dropping a data packet from the plurality of data packets when the schedule is not feasible.
- the schedule is determined to be feasible.
- the dropped data packet has a per-hop deadline less than a total transmission time of the plurality of data packets.
- the reducing comprises minimizing the average dwell time for each of the ordered plurality of data packets as long as the delivery time for each of the plurality of data packets is satisfied.
- FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented.
- FIG. 2 illustrates an example New IP packet which may be used to implement the present technology.
- FIG. 3 illustrates an example contract format for use in the New IP packet of FIG. 2.
- FIG. 4 illustrates an example of actions that may be specified in a contract clause.
- FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ).
- FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments.
- FIGS. 7A - 7F illustrate an example embodiment of implementing the TDMS scheduling algorithm of FIG. 6.
- FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments.
- FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
- FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
- the present technology relates to quality of service (QoS) guarantees, and in particular, to latency guarantee for packet delivery in a network.
- a “New Internet Protocol” (New IP) packet is part of an Internet framework that brings several capabilities to the present technology.
- the New IP includes a contract that may be fulfilled by routers in a network. Through use of the contracts, service assurance requirements at a packet level, such as end-to-end latency guarantees, may be provided.
- the technology provides a scheduling algorithm that regulates the transmission of the data packets stored at the router to minimize dwell time.
- such services can also be requested in non-contract segments of a packet, and it will be understood that contracts are just one option for specifying the services.
- FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented.
- the system 100 includes routers 120, 122 and 124.
- routers 122 are intermediate routers.
- Computing devices 110 connect to network 130 via router 120 in order to access resources provided by network 130.
- Computing devices 110 may be virtually any type of general- or specific- purpose computing device.
- these computing devices 110 may be user devices such as desktop computers, laptop computers, tablet computers, display devices, cameras, printers, Internet of Things (IoT) devices, wearable computing devices, mobile devices or smartphones.
- these computing devices may be server devices such as application server computers, virtual computing host computers, or file server computers.
- computing devices 110 may be individually configured to provide computing, storage, and/or other suitable computing services.
- Network 130 may include a plurality of network devices that facilitate the access of content by devices 110.
- Each of the plurality of network devices may comprise one of a router, a switch, a server, a database server, a hub, a firewall, an intrusion detection/prevention (IDP) device and/or any other type of networking equipment or device that facilitates the transfer of data to and from devices 110.
- Network 130 includes routers 120, 122 and 124, which communicate using various protocols, such as the Border Gateway Protocol and the Internet Control Message Protocol, in order to exchange routing, network configuration information, and other information. In one embodiment, the routers communicate using a new internet protocol (IP).
- the network 130 may be a local area network (“LAN”), such as a token ring or Ethernet network, a virtual local area network (“VLAN”), or another type of network.
- the network 130 may comprise one or more wired or wireless links.
- network 130 may be an Ethernet network that comprises one or more Ethernet cables.
- the network 130 may be a Wireless Fidelity (“Wi-Fi”) network that uses wireless radio transmissions to communicate information.
- network 130 may be a mobile network. Although shown as a single network 130, the network 130 may comprise any number of interconnected networks, either public or private, in which the various networks interconnect to form one or more virtual networks.
- Network 130 provides a variety of resources that may be accessed by devices 110.
- network 130 includes content server 126 that stores or otherwise sources content, which refers to any data commonly transmitted and/or stored within a network, such as web-based applications, images, documents, web pages, video data, audio data such as voice, web-based games, scripts, or any other type of network-based content.
- Network 130 may support multicast techniques to improve the delivery efficiency of data transmitted with the network 130.
- the network 130 supports the new IP to improve latency and guarantee data packet delivery, as discussed herein.
- Network 130 may also connect to a variety of other types of devices (e.g., file servers, printers, telephones, and e-mail and other application servers).
- Network 130 is also shown coupled to external network 140 (e.g., the Internet) via router 124.
- Public network 140 may include, for example, one or more client computing devices.
- Public network 140 may provide access to web servers, application servers, public databases, media servers, end-user devices, and many other types of network resource devices and content.
- Network 130 may transmit content to devices 110 through router 120 using one or more packet-based protocols, such as an Internet Protocol (IP)/Transmission Control Protocol (TCP) or the new IP.
- network 130 may support the transmission of data via discrete data units, often referred to as “packets.”
- packets may be referred to as a “packet-based” or “packet switched” network.
- Network traffic delivered by network 140 may be classified according to a number of categories. For instance, a content server (not shown) may stream live video to one of devices 110 through router 120. Packets that transmit such video may be classified as streaming multimedia packets and may require a guaranteed bandwidth to provide an acceptable user experience.
- the content server may also send web pages to one of devices 110 using HTTP packets.
- information exchanged by routers 120, 122 and 124 may be categorized as high precision communication (HPC) services.
- Information categorized as HPC requires, for example, a particular quality of service (QoS), such as latency or throughput, which may be defined at the granularity of a packet, a group of packets or a flow.
- a latency guarantee in HPC services refers to the network needing to ensure the delivery of a packet, a group of packets or all packets in a flow within a bounded time frame. When messages are specified with a same deadline, then the latency guarantee is ensured at the flow level. If each message is specified with an independent deadline, the latency guarantee is ensured at the data packet level.
- various categories of network traffic may require varying levels of network performance.
- Routers 120, 122 and 124 receive, analyze, and classify data packets to assign the data packets to a suitable priority level. In addition to classifying data packets, routers 120, 122 and 124 process the received and classified data packets according to their priority level. In this manner, routers 120, 122 and 124 implement aspects of the QoS guarantees provided by network 130. In addition, based on information received from other devices in system 100, routers 120, 122 and 124 determine the appropriate route through the system for each received data packet and forward the data packet accordingly. In one embodiment, as will be described below with reference to FIG. 2, the priority level of the data packets is determined using the new IP in which a contract specification details the guarantees, such as QoS guarantees, of the data packets. For example, the contract specification may detail an end-to-end latency requirement for a data packet that traverses the network 130.
- Routers 120, 122 and 124 may each include one or more queues 125 configured to store data packets as they arrive at intermediate nodes, such as routers 122.
- the queues 125 typically include a queuing algorithm, which is usually implemented at the router level.
- output queues process the data packets that have been enqueued to await their turn to be forwarded or routed to the appropriate output port or interface.
- a primary purpose of these queuing algorithms is to give preferential treatment to data packets having higher priority (which are enqueued at the router output port) over other enqueued data packets having lower priority.
- the queue 125 is a latency guarantee queue (LGQ) in which the data packets have a per-hop (e.g., router hop) deadline that is guaranteed, as well as a minimal dwell time at each hop.
- the dwell time is defined as the time a packet spends at a router before being forwarded to the next hop, and includes: processing delay, queueing delay (affected by packet scheduling in the outgoing queue) and transmission delay (which is affected by, and proportional to, the packet size).
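As a rough illustration of the queue arrangement just described, the following Python sketch (class and field names are assumptions, not taken from the patent) shows per-output-port queues in which deadline-constrained packets enter a latency guarantee queue that is always served before a best-effort queue. The LGQ itself is reordered by the scheduling algorithm described with FIG. 6 rather than served strictly first-in-first-out; the sketch only illustrates the priority between the two queue classes.

```python
from collections import deque

class OutputPort:
    """Per-port queues: a latency guarantee queue (LGQ) for packets that carry a
    delivery deadline, and a best-effort queue for all other packets.  LGQ
    packets receive the highest scheduling priority (hypothetical sketch)."""

    def __init__(self):
        self.lgq = deque()          # deadline-constrained packets
        self.best_effort = deque()  # packets without a specified delivery time

    def enqueue(self, packet, has_deadline: bool):
        (self.lgq if has_deadline else self.best_effort).append(packet)

    def dequeue(self):
        # Serve the LGQ first; fall back to best effort when the LGQ is empty.
        if self.lgq:
            return self.lgq.popleft()
        return self.best_effort.popleft() if self.best_effort else None
```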
- FIG. 2 illustrates an example New IP packet which may be used to implement the present technology.
- the New Internet Protocol (New IP) packet 200 is part of an Internet framework that brings several capabilities to the present technology.
- New IP is a data plane technology that defines a new IP packet format, its specification, and corresponding capabilities in the network nodes.
- the new IP packet format is shown in FIG. 2, which includes three types of characteristics, each solving a specific class of problem in the network 130, namely, a) addressing evolution, b) the contract inclusion, and c) the payload extension.
- a new header specification is the start of the packet 200 that describes the offsets to the specifications for shipping, contract and payload.
- the new IP address (shipping spec) evolution aims to replace the current fixed type of addressing in order to provide flexibility to include all types of addresses and fit different reachability scenarios.
- the New IP shipping specification is backward compatible with the existing address schemes (e.g., IPv4 and IPv6).
- the new IP contract (contract spec) inclusion enables a large selection of network capabilities, their functioning and regulative control at the finest packet-level granularity, as shown in FIG. 3.
- the network 130 and routers 120, 122 and 124 fulfill the contract 300, assuming the contract has been agreed between the packet sender (e.g., sending computing device 110) and the packet receiver (e.g., receiving computing device 110) and the network 130.
- the contract 300 describes a formal service-specific arrangement between two or more parties, which includes one or more contract clauses 210 to describe the type of network service capability, actions and accounting information. Through use of the contracts 300, service assurance requirements at a packet level are provided.
- a contract 300 carries specific attributes associated with time-engineered services, high-throughput media services and mission-critical ultra-reliable services.
- the structure of the contract 300 is defined in a Chomsky style. Compared to traditional QoS, contracts operate at a much lower level (per packet) and instruct in high-level abstract commands.
- Each of the contract clauses 210 is a combination of an Event, Condition, and Action (ECA), and can optionally include Metadata 212.
- an atomic contract ECA exists in which the event and condition are empty.
- a contract clause 210 describes how the routers 120, 122 and 124 treat the packet as it traverses the network 130 based on the predefined triggering event and condition. Given a predefined triggering event and condition has occurred, various actions are processed by the intermediate routers 122. For example, in order to support ultra-reliable low latency (uRLLC) in 5G, two contracts C1 and C2 may be used, where the C1 contract clause indicates a BoundedLatency action and the C2 contract clause has a NoPktLoss action (i.e., the low latency and reliability are to be met).
- the Metadata contains data about the packet, e.g. accounting information, customized statistics about the flow on intermediate hops, the contextual information about the user and application, etc.
- the in-network node intelligence is naturally embedded and supported by the New IP framework.
- the new IP Payload (payload spec) associates semantics to the data payload.
- New IP payload provides options to the receiving computing device 110 to consume any residual information in the payload while allowing the network to drop portions of the payload when congestion occurs. This type of communication is referred to as a qualitative communication, which helps to mitigate re-transmission overheads and delays when faced with slow or congested conditions.
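The packet layout and contract structure described above can be pictured with the short Python sketch below. All class and field names are illustrative assumptions; the patent defines the specification areas and the Event-Condition-Action clause structure but not a concrete encoding.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContractClause:
    event: Optional[str]      # empty for an "atomic" clause
    condition: Optional[str]  # empty for an "atomic" clause
    action: str               # e.g. "InTimeGuarantee", "BoundedLatency", "NoPktLoss"
    metadata: dict = field(default_factory=dict)  # e.g. {"budget_ms": 10.0, "hops": 4}

@dataclass
class Contract:
    clauses: List[ContractClause]

@dataclass
class NewIPPacket:
    # The header spec (offsets to the other specifications) is omitted for brevity.
    shipping_spec: dict      # addressing; backward compatible with IPv4/IPv6 schemes
    contract_spec: Contract  # per-packet service requirements
    payload: bytes           # payload spec allows partial drops under congestion

# Example: an atomic clause (empty event/condition) requesting in-time delivery
# with a 10 ms budget and 4 remaining hops (values are illustrative).
pkt = NewIPPacket(
    shipping_spec={"src": "10.0.0.1", "dst": "10.0.0.2"},
    contract_spec=Contract([ContractClause(None, None, "InTimeGuarantee",
                                           {"budget_ms": 10.0, "hops": 4})]),
    payload=b"...",
)
```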
- FIG. 4 illustrates an example of actions that may be specified in a contract clause.
- An action set includes one or more actions 400 defined in the contract specification that are known and processed by all new IP nodes. For example, applications will insert operator-defined and/or application-defined actions. These actions 400 may be generally categorized into operations, monitoring, telemetry or signaling. New contracts as defined in the specification may be implemented across different hardware platforms using different methods or algorithms. However, the result of implementation leads to packet delivery guarantees between the sender and the receiver. Several actions are described below (some of which are shown in action 400):
- InTimeGuarantee instructs the router to deliver a packet any time before time t (with a prescribed unit of time). It may use corresponding metadata to describe an end-to-end network latency or available latency since transmission starts from the sender.
- An algorithm called latency-based forwarding (LBF) implements this action. Instead of a class of services, contract clauses embed exact parameters and objectives in the packet.
- the LBF algorithm is disclosed in A. Clemm, T. Eckert, “High-Precision Latency Forwarding over Packet-Programmable Networks,” IEEE/IFIP Network Operations and Management Symposium, 2020, which is incorporated by reference herein in its entirety.
- Action OnTimeDelivery(t, t’) with metadata t and t’ as a total end-to-end time and elapsed time, respectively, delivers the packet at a specific time in order to accommodate very low values of time-jitter.
- Action PreferredPath may be a set of node addresses or other forms of path identifiers embedded in the packet to provide path guarantees.
- PktTrace tracks the packet flow behavior across the network and may be used to understand the end-to-end service assurance and performance degradations in particular. Such actions add noise and, therefore, are subjected to specific events or conditions of interest. For example, in order to understand hop-by-hop latency, PktTrace action may capture a path in the network along with the time spent in each node.
- Action PktMonitor helps gain visibility into the current state of the system. This action captures events in a network relating to queue thresholds, packet drops, etc. Such actions identify situations such as congestion before they occur by monitoring the thresholds. For example, in order to identify real-time congestion, if a queue is built up to 70%, then this action sets the corresponding metric value in the metadata.
- Action ReportInsuringParty is an operator-driven action to be executed when a service objective violation occurs; the node in error is then required to report such violations to the insuring party. Operators use this for the assessment of damages due to service level objective violations, which may help build trust between different network systems.
- This disclosure focuses on a new Action called InTimeGuarantee, which instructs the router to deliver a data packet with a latency guarantee.
- the Action InTimeGuarantee uses deadline-aware scheduling with an algorithm (referred to herein as the Total Dwell Minimizing Schedule or “TDMS,” described below) such that the packets specified with InTimeGuarantee can meet corresponding end-to-end deadlines. It may use corresponding metadata to describe an end-to-end network latency, in which the metadata is defined by (1) a budget, which denotes the residual or remaining time before the packet deadline runs out and is considered unsuccessful, and (2) a hop, which denotes the residual or remaining number of hops towards the destination.
- the remaining number of hops may be determined when the routing path is configured, and the total number of hops between the source and destination is fixed.
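One plausible way to turn this budget/hop metadata into a per-hop deadline, consistent with the "remainder of the budget and the remaining number of hops" calculation mentioned earlier, is an even split of the remaining budget over the remaining hops. The helpers below are a sketch under that assumption, not a formula mandated by the patent.

```python
from typing import Tuple

def per_hop_deadline(budget_ms: float, remaining_hops: int) -> float:
    """Allow each remaining hop an equal share of the remaining budget (assumed policy)."""
    if remaining_hops <= 0:
        raise ValueError("packet has no remaining hops")
    return budget_ms / remaining_hops

def update_metadata_on_forward(budget_ms: float, remaining_hops: int,
                               dwell_ms: float) -> Tuple[float, int]:
    """Decrement the budget by the time spent at this hop and count down the hops."""
    return budget_ms - dwell_ms, remaining_hops - 1

# Example: 10 ms of budget left and 4 hops remaining -> 2.5 ms allowed at this hop.
deadline_here = per_hop_deadline(10.0, 4)
```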
- the aforementioned technology may be applied to many different networks, such as an Internet-of-Things (IoT) network, flexible addressing for smart homes, service-aware routing and more.
- similar requests for a particular delivery time can be associated with packets in other forms, and neither the InTimeGuarantee action nor the contracts framework shall be limiting to this disclosure.
- FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ).
- the network 500 shows sending nodes 110, router 122 with a dedicated LGQ 125 and receiver nodes 110.
- the LGQ is a queue that includes a scheduling mechanism to regulate transmission of data packets stored in a node of the network so as to minimize dwell time. Such a scheduling mechanism may provide a level or quality of service for delivery of the data packets. It is appreciated that network 500 is not limited to the depicted components, but may be a much larger network with additional components, such as network 100 in FIG. 1.
- the LGQ 125 queues received data packets for latency guarantee according to the contract specified in the received data packets.
- data packets received by the router 122 that include an InTimeGuarantee contract clause are stored in the LGQ 125. These data packets are assigned a highest scheduling priority compared to other received data packets that do not include the deadline constraints (i.e., do not include the Action InTimeGuarantee in the contract clause). Data packets that do not include the deadline constraints may be stored in other, non-LGQ queues (not shown).
- the routers 122 may include multiple input interfaces, with each input interface having one or more queued packets waiting to be processed and routed by the router 122. Additionally, each queued packet may have a different associated priority level which specifies the particular Quality of Service (QoS) level to be used when handling that packet.
- the packet is first dequeued from the input interface, and is then decapsulated from its data link frame. After decapsulation, the packet is classified with an associated priority level. The packet is then encapsulated and routed to its appropriate output interface queue within the output queuing structure.
- a packet scheduling algorithm is implemented to achieve a minimal average dwell time at each router during packet forwarding.
- the packet schedule guarantees the delivery of a data packet stored in the LGQ 125 within its designated time frame (i.e., satisfy the InTimeGuarantee).
- a packet schedule is generally defined as the order in which the data packets stored in the queue are transmitted.
- the average dwell time may be determined using the following parameters and equations:
- D_1, D_2, ..., D_i, ..., D_m: the transmission time of each packet, where the transmission time may be calculated as the packet size divided by the average link rate;
- T_1, T_2, ..., T_i, ..., T_m: the dwell time of each packet, which includes the transmission time and the waiting time;
- L_1, L_2, ..., L_i, ..., L_m: the deadline of each packet at the current hop along its respective path.
- a deadline refers to the remaining time, but could be adjusted to accommodate an absolute time (described below).
- a central controller may set a specific deadline or it may be dynamically set at each hop.
- a per-hop deadline of the packets may be calculated as L_i (e.g., from the remaining budget and the remaining number of hops, as described above);
- the dwell time of the i-th packet to be transmitted is T_i = D_1 + D_2 + ... + D_i;
- the total dwell time of the packets in the LGQ is: T_total = T_1 + T_2 + ... + T_m;
- the average dwell time of the packets is: T_avg = T_total / m.
- the average dwell time determines how long an average packet stays in a router before its last bit gets transmitted.
- the smaller the average dwell time, the less end-to-end latency will be experienced by end users. That is, the smaller the average dwell time for a packet in the LGQ 125, the less latency exists between sending nodes 110 and receiver nodes 110.
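If packets are transmitted back to back in a given order, the dwell time of the i-th packet is the sum of the transmission times of the first i packets, and the average dwell time is the total divided by m. The small helpers below (not part of the patent) simply illustrate this arithmetic.

```python
def dwell_times(transmission_times_ms):
    """Dwell time of each packet in send order: its waiting time (transmission
    times of the packets ahead of it) plus its own transmission time."""
    dwell, elapsed = [], 0.0
    for d in transmission_times_ms:
        elapsed += d
        dwell.append(elapsed)
    return dwell

def average_dwell_time(transmission_times_ms):
    """Average dwell time T_avg = T_total / m for a given send order."""
    t = dwell_times(transmission_times_ms)
    return sum(t) / len(t)

# Example: sending packets with transmission times 1, 3, 5 and 2 ms in that order
# gives dwell times of 1, 4, 9 and 11 ms, for an average of 6.25 ms.
print(average_dwell_time([1, 3, 5, 2]))
```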
- FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments.
- the scheduling algorithm 600 referred to as the Total Dwell Minimizing Schedule (TDMS) algorithm, determines whether a minimal average dwell time of packets stored in a queue can be met in order to meet delivery time guarantees, as set forth in the contract clauses of the packets. Minimizing the average dwell time is equivalent to minimizing the total dwell time. The minimal average dwell time results in minimal average packet delay at each router, thus minimizing average end-to-end delay for packet deliveries in the network. Moreover, if a feasible schedule can be determined, then a schedule is generated and the data packets are scheduled for delivery according to the schedule. If a feasible schedule cannot be determined, the data packet is discarded.
- in step 1 of the TDMS algorithm 600 the variables are set: J is the packet set, i is the i-th data packet in the set J, m is the total number of packets in the LGQ 125, D is the transmission time of the packet and L is the deadline of the packet.
- the TDMS schedule is determined from the last to the first packet through steps 3 - 14 (referred to herein as the “outer loop” of the algorithm), where steps 6 - 9 of the algorithm (referred to herein as the “inner loop” of the algorithm) identify the last packet to be transmitted of the remaining packets to be placed in the TDMS schedule.
- j* is a bookkeeping variable that is initially set to zero.
- the total transmission time is calculated as a sum of the transmission time of each data packet stored in the queue.
- the total transmission time is calculated as a sum of the transmission time of each data packet in the ordered (sorted) data packet set. It will be understood that the total transmission time can also be adjusted if the router will experience other delays (for example, for transmission of higher-priority packets or other operations).
- Steps 6 - 9 then identify the last packet to be transmitted, of the remaining packets to be placed in the schedule. This may be accomplished by determining the data packet in the ordered data packet set with the largest transmission time whose deadline is greater than or equal to the total transmission time of packets in the current data packet set.
- j* is set to the index of that packet, having the largest transmission time and an acceptable deadline.
- the “deadline” is an amount of time remaining until the packet must be transmitted. Thus, if the deadline is greater than the total transmission time, the packet can be transmitted last and still meet its deadline.
- the “deadline” can also be expressed in other ways, such as an absolute time when the packet must be sent (for example 12:34pm).
- data packets may be dropped when not placed in the schedule.
- data packets may be sent late (using another scheduling policy) when not placed in the schedule. Other actions can also be taken to make a schedule feasible, such as reducing a size of some packets.
- the determined data packet is prepended to the schedule and removed from the ordered data packet set.
- the process is repeated with the new ordered data packet set until no more data packets remain in the ordered data packet set. That is, at step 13 the new ordered data packet set J_{l+1} is generated by removing j* from the previous set J_l, and j* is added to the front of the TDMS set at step 14. When no more data packets remain, the TDMS schedule is returned at step 15.
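The following Python sketch mirrors this back-to-front construction: the outer loop places one packet per iteration, and the inner selection picks, from the packets whose per-hop deadline is at least the total transmission time of the still-unscheduled packets, the one with the largest transmission time. The drop policy used when no candidate is feasible (evicting the packet with the tightest deadline) is an assumption; the patent only states that a packet that cannot meet its deadline is dropped or handled by another policy.

```python
def tdms_schedule(packets):
    """packets: list of (packet_id, transmission_time, per_hop_deadline) tuples.

    Returns (schedule, dropped), where schedule lists the packets in the order
    they should be transmitted (first to last).  Sketch of the TDMS idea, not
    the patent's exact pseudocode."""
    # Optional pre-sort in decreasing order of transmission time (step 2).
    remaining = sorted(packets, key=lambda p: p[1], reverse=True)
    schedule, dropped = [], []
    while remaining:
        total = sum(d for _, d, _ in remaining)       # total transmission time
        # Candidates that could be sent last and still meet their deadline.
        feasible = [p for p in remaining if p[2] >= total]
        if not feasible:
            # No feasible schedule: drop (or demote) the packet with the
            # tightest deadline and try again (assumed policy).
            victim = min(remaining, key=lambda p: p[2])
            remaining.remove(victim)
            dropped.append(victim)
            continue
        last = max(feasible, key=lambda p: p[1])      # largest transmission time
        schedule.insert(0, last)                      # prepend: it is transmitted last
        remaining.remove(last)
    return schedule, dropped
```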
- FIGS. 7A - 7F illustrate an example embodiment of implementing the TDMS scheduling algorithm of FIG. 6.
- the queues (such as LGQ 125) of the intermediate routers 122 contain a packet set 700 that includes two properties: deadline and transmission time.
- the deadline refers to the deadline at a current hop for an associated packet in the packet set
- the transmission time refers to the amount of time it takes to transmit the packet from the queue to the next hop node.
- packet P1 has a deadline of 10 milliseconds (ms) and a transmission time of 5 ms.
- the TDMS scheduling algorithm 600 first orders the packets in the packet set 700 into a decreasing (descending) order of transmission time, as shown in 702.
- the packets in the packet set 700 are rearranged into the following order: P1, P4, P2, P3, as shown in FIG. 7B.
- the packets arranged in decreasing order of transmission time will be used as the ordered packet set operated upon by the TDMS scheduling algorithm.
- steps 3 - 13 of the TDMS scheduling algorithm 600 are executed four times (equal to the number of packets to be scheduled or placed in an ordered list), as shown in FIGS. 7C - 7F, with steps 6 - 9 responsible for identifying the last packet.
- a total transmission time is calculated as a sum of the transmission time of each data packet in the ordered data packet set J_1.
- the data packet in the ordered data packet set J_1 with the largest transmission time, among those with a deadline that is greater than or equal to the total transmission time, will be selected.
- the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time and that meets its deadline (e.g., the deadline is equal to or greater than the total transmission time), which is data packet P2.
- the data packet may be transmitted last and still meet its deadline. In this case, only data packets P2 and P3 can meet their deadline (i.e., both data packets have a deadline that is equal to or greater than the total transmission time of 11 ms).
- data packet P2 has the largest transmission time. Data packet P2 is therefore selected as the last entry into the TDMS scheduling algorithm, as shown in FIG. 7C. Once placed in the schedule, the data packet P2 is removed from the ordered data packet set to form a second data packet set J_2 with the remaining data packets P1, P3 and P4.
- the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P1.
- data packets P1 and P3 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 9 ms). Since data packet P1 has a larger transmission time, it is therefore selected as the last open entry into the TDMS scheduling algorithm, as shown in FIG. 7D.
- the data packet P1 is removed from the ordered data packet set J_2 to form a third data packet set J_3 with the remaining data packets P3 and P4.
- the data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P4.
- data packets P3 and P4 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 4 ms). Since data packet P4 has a larger transmission time, it is therefore selected as the last open entry into the TDMS scheduling algorithm, as shown in FIG. 7E.
- the data packet P4 is removed from the ordered data packet set J_3 to form a fourth data packet set J_4 with the remaining data packet P3.
- the total transmission time of set J_4 is 1 ms since the only data packet remaining is P3.
- the data packet P3 is placed in the front of the queue.
- the schedule has been generated in which data packet P3 is first in the queue and data packet P2 is last in the queue.
- the dwell time for each of the data packets may also be calculated as the transmission time of the data packet plus any waiting time. For example, since data packet P3 is in the front of the queue, there is no waiting time. The dwell time is therefore equal to the transmission time of 1 ms.
- Data packet P4 has a transmission time of 3 ms and a waiting time of 1 ms (the dwell time of packets prior to P4).
- the TDMS scheduling algorithm results in a minimal average dwell time.
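Running the tdms_schedule sketch given with FIG. 6 on this example reproduces the order P3, P4, P1, P2. Only P1's deadline (10 ms) and the transmission times implied by the running totals (5, 2, 1 and 3 ms for P1 through P4) appear in the text; the other deadlines below are illustrative values chosen to match the feasibility checks described for FIGS. 7C - 7F.

```python
# (id, transmission time in ms, per-hop deadline in ms); the P2/P3/P4 deadlines
# are hypothetical but consistent with the selections described above.
packets = [("P1", 5, 10), ("P2", 2, 12), ("P3", 1, 11), ("P4", 3, 5)]

schedule, dropped = tdms_schedule(packets)
print([p[0] for p in schedule])   # ['P3', 'P4', 'P1', 'P2']; nothing is dropped

# Dwell times in send order: 1, 1+3=4, 4+5=9, 9+2=11 ms -> average 6.25 ms,
# and every packet finishes no later than its per-hop deadline.
elapsed = 0
for pid, d, deadline in schedule:
    elapsed += d
    print(pid, elapsed, deadline, elapsed <= deadline)
```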
- FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments.
- FIG. 8 illustrates a general overview of the process for regulating delivery time of data packets
- FIG. 9 illustrates the process flow of the TDMS scheduling algorithm.
- the intermediate routers or nodes perform the procedures. However, it is appreciated that any other functional unit or processing unit may implement the processes described herein, and the disclosure is not limited to implementation by the routers.
- the delivery time of a data packet may be measured according to different embodiments.
- the delivery time may be measured as a remaining time for delivery. For example, if a data packet arrives at an intermediary node with a delivery time of 10 μs, the data packet dwells at the intermediary node for 1 μs and the transmission time to the next node is 1 μs, then the remaining time for delivery is 8 μs.
- the delivery time may be measured as an absolute amount of time.
- the intermediary node can determine the amount of time remaining (10 minutes) to meet its deadline (11:47 am) regardless of the amount of time it takes to hop from one node to the next.
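A trivial illustration of the absolute-time variant (example values only) converts the deadline into a remaining budget when the packet arrives, so the rest of the scheduling logic can work with remaining time in either case.

```python
from datetime import datetime

def remaining_budget_seconds(deadline: datetime, now: datetime) -> float:
    """Remaining time until an absolute delivery deadline, independent of how
    long the previous hops took."""
    return (deadline - now).total_seconds()

# e.g. current time 11:37 am, deadline 11:47 am -> 600 seconds (10 minutes) left
left = remaining_budget_seconds(datetime(2021, 1, 1, 11, 47),
                                datetime(2021, 1, 1, 11, 37))
```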
- a plurality of data packets are received at a node, where each of the data packets is associated with a delivery time deadline.
- the delivery time deadline may be specified in a contract clause of the data packet using the New IP.
- the data packets are stored in one or more queues of the node at step 804.
- a schedule is generated at step 806, and the data packets are sent to a next hop node according to the schedule at step 808.
- FIG. 9 the TDMS schedule process of FIG. 6 is described.
- data packets received at the intermediate nodes are optionally sorted in the LGQs 125 in decreasing order of transmission time.
- the data packets are new IP packets 200 that include a contract clause 210 having an action and metadata.
- the data packets form an ordered data packet set. For each of the data packets in the ordered data packet set (sorted) or stored in the queue (not sorted), a total transmission time is calculated as a sum of the transmission time of each data packet at step 812.
- Step 814 determines the data packet in the ordered data packet set with the largest transmission time whose deadline at the current hop is greater than or equal to the total transmission time. If no data packet is found (i.e., no data packet can be sent last and still meet its deadline), a data packet is dropped (or sent to a lower priority queue, or some other action is taken) at step 818. Otherwise, if a data packet is found to satisfy the conditions, then the data packet is stored in LGQ 125 at step 820.
- the data packets are stored in an order beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent, such that each new packet is placed at the beginning. The determined data packet is then removed from the data packet set at step 822. At step 824, if no more data packets exist in the data set, then the schedule is returned at step 826. Otherwise, the process returns to step 812 and is repeated.
- FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
- the node 1000 (e.g., a router) may transmit and receive data (e.g., a new IP packet) to and from at least one electronic device and/or a server 110, etc., through a network (e.g., a global network), such as network 130.
- the node 1000 may transmit the new IP packet, which is received through the network, to another electronic device 110 through a local network.
- the node 1000 may transmit a new IP packet, which is received from the other electronic device, to the electronic device or the server 110 through the network. It is appreciated that the node may also use other IP packets, such as IPv4 and IPv6 packets.
- the node 1000 may comprise a plurality of input/output ports 1010/1030 and/or receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes, a processor 1020 including an address translation circuit to process data and determine which node to send the data to, and storage 1022 including cache 1024 and long-term storage 1026.
- the processor 1020 is not so limited and may comprise multiple processors.
- the processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs.
- the processor 1020 may be configured to implement any of the schemes described herein using any one or combination of steps described in the embodiments.
- the processor 1020 may be implemented using hardware, software, or both.
- the storage 1022 may include cache 1024 and long-term storage 1026, and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage 1022 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data).
- the dwell time of data packets received by the node may be minimized by the total dwell minimizing schedule (TDMS) module 1028 in accordance with embodiments of the disclosure.
- the data packets may be stored in the storage 1022.
- the processor 1020 may calculate a schedule of data packets to minimize dwell time.
- Other electronic devices or servers 110 in the network may process the new IP packet.
- the processor 1020 may generate a schedule based on delivery time deadlines embedded in the new IP packet.
- FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
- Computer system 1100 includes a processor 1104 and a memory 1108 that communicate with each other, and with other components, via a bus 1112.
- Bus 1112 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
- Memory 1108 may include various components (e.g., machine- readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof.
- a basic input/output system 1116 (BIOS), including basic routines that help to transfer information between elements within computer system 1100, such as during start-up, may be stored in memory 1108.
- Memory 1108 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1120 embodying any one or more of the aspects and/or methodologies of the present disclosure.
- memory 1108 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
- Computer system 1100 may also include a storage device 1124.
- Examples of a storage device include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof.
- Storage device 1124 may be connected to bus 1112 by an appropriate interface (not shown).
- Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof.
- storage device 1124 (or one or more components thereof) may be removably interfaced with computer system 1100 (e.g., via an external port connector (not shown)).
- storage device 1124 and an associated machine-readable medium 1128 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1100.
- software 1120 may reside, completely or partially, within machine-readable medium 1128.
- software 1120 may reside, completely or partially, within processor 1104.
- Computer system 1100 may also include an input device 1132.
- a user of computer system 1100 may enter commands and/or other information into computer system 1100 via input device 1132.
- Examples of an input device 1132 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof.
- Input device 1132 may be interfaced to bus 1112 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1112, and any combinations thereof.
- Input device 1132 may include a touch screen interface that may be a part of or separate from display 1136, discussed further below.
- Input device 1132 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
- a user may also input commands and/or other information to computer system 1100 via storage device 1124 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 1140.
- a network interface device, such as network interface device 1140, may be utilized for connecting computer system 1100 to one or more of a variety of networks, such as network 1144, and one or more remote devices 1148 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof.
- Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof.
- a network such as network 1144, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
- Information (e.g., data, software 1120, etc.) may be communicated to and/or from computer system 1100 via network interface device 1140.
- Computer system 1100 may further include a video display adapter 1152 for communicating a displayable image to a display device, such as display device 1136.
- Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof.
- Display adapter 1152 and display device 1136 may be utilized in combination with processor 1104 to provide graphical representations of aspects of the present disclosure.
- computer system 1100 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof.
- peripheral output devices may be connected to bus 1112 via a peripheral interface 1156. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
- the computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid state storage media and specifically excludes signals.
- the software can be installed in and sold with the device. Alternatively the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator.
- the software can be stored on a server for distribution over the Internet, for example.
- Computer-readable storage media exclude propagated signals per se, can be accessed by a computer and/or processor(s), and include volatile and non-volatile internal and/or external media that is removable and/or non-removable.
- the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
- each process associated with the disclosed technology may be performed continuously and by one or more computing devices.
- Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present technology discloses an apparatus and method for regulating delivery time of a data packet between communication devices. A plurality of data packets are received at a node, where the data packets each specify a delivery time. The data packets are stored in one or more queues of the node, and an ordered list of the data packets is generated based on a transmission time for each of the data packets and the delivery time for each of the data packets. The data packets may then be sent to a next hop node according to the ordered list.
Description
LATENCY GUARANTEE FOR DATA PACKETS IN A NETWORK
CLAIM FOR PRIORITY
[0001] This application claims the benefit of priority to U.S. Provisional
Application No. 63/020,970, filed May 06, 2020, the entire contents of which are hereby incorporated by reference.
FIELD
[0002] The disclosure generally relates to latency guarantee for packet delivery in a network.
BACKGROUND
[0003] Computer networks, such as the Internet, are in wide-spread use today. These networks provide network devices, namely devices connected to the network such as personal computers, servers, or the like, with the ability to communicate with each other. Network devices communicate with each other by converting the data to be communicated into data packets and transmitting the packets across the network. In a typical network, however, a direct physical connection between two devices is often not possible due to the large number of devices using the network. As such, the packets may pass through several intermediate network devices, such as routers, switches etc., which direct and help deliver the packets to their intended destination network device.
[0004] When a large number of network devices are present in a network, a massive number of data packets may be in transit across the network at any given point in time. As such, the network may become congested at one or more points along the path of the data packets, most often at the switching or routing stations
tasked with redirecting the packets (e.g., the intermediate nodes). A delay at any given point can result in an overall delay, or latency, in the transmission time of the data packets. This problem becomes particularly acute in the case of time-sensitive transmissions of data, such as in the case of remote surgery or autonomous vehicle systems. It is therefore highly desirable to reduce the latency at each intermediate node in order to effectively ensure that the data packets reach their destination within the network in a specified period of time.
BRIEF SUMMARY
[0005] According to one aspect of the present disclosure, there is a method for regulating delivery time of a data packet between communication devices, comprising receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
[0006] Optionally, in any of the preceding aspects, the method further comprising sorting the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
[0007] Optionally, in any of the preceding aspects, wherein generating the ordered list further comprises calculating a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determining a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
[0008] Optionally, in any of the preceding aspects, the generating the ordered list further comprises adding the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to
be sent and ending with a last of the data packets to be sent in the sending step; and removing the determined data packet from the one or more queues.
[0009] Optionally, in any of the preceding aspects, the method further comprising iteratively repeating the calculating, the determining, the adding and the removing steps until none of the data packets remain in the ordered data packet set.
[0010] Optionally, in any of the preceding aspects, the method further comprising dropping a current data packet when there is insufficient time to send the current data packet by a time required by the delivery time.
[0011] Optionally, in any of the preceding aspects, the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
[0012] Optionally, in any of the preceding aspects, the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
[0013] Optionally, in any of the preceding aspects, the plurality of data packets are received using a new internet protocol, each of the plurality of data packets comprise a header specification field, a shipping specification field, a contract specification field and a payload specification field, and the contract specification field includes a contract specifying the delivery time.
[0014] Optionally, in any of the preceding aspects, each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and a remaining number of hops.
[0015] Optionally, in any of the preceding aspects, the method further comprising calculating the deadline at the current hop using a remainder of the budget and the remaining number of hops.
[0016] Optionally, in any of the preceding aspects, the node includes multiple output ports and each output port is associated with the one or more queues.
[0017] Optionally, in any of the preceding aspects, the one or more queues are a latency guarantee queue.
[0018] According to one other aspect of the present disclosure, there is provided a node for regulating delivery time of a data packet between communication devices, comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receive a plurality of data packets at a node, the data packets each specifying a delivery time; store, in one or more queues of the node, the data packets; generate, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
[0019] According to one other aspect of the present disclosure, there is provided a node for regulating delivery time of a data packet between communication devices, comprising a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to receiving a plurality of data packets at a node, the data packets each including a contract specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time included in the contract for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
[0020] According to still one other aspect of the present disclosure, there is a non-transitory computer-readable medium storing computer instructions for regulating delivery time of a data packet between communication devices that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for
each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
[0021] According to yet one other aspect of the present disclosure there is a method for scheduling a plurality of data packets, each indicating a delivery time, between communication devices, comprising determining whether a schedule satisfying the delivery time deadline for each of the plurality of data packets is feasible; ordering the plurality of data packets in a queue to reduce an average dwell time when the schedule is feasible and transmitting the plurality of packets in the order to satisfy the delivery time for each of the plurality of data packets; and dropping a data packet from the plurality of data packets when the schedule is not feasible.
[0022] Optionally, in any of the preceding aspects, the schedule is determined to be feasible.
[0023] Optionally, in any of the preceding aspects, the dropped data packet has a per-hop deadline less than a total transmission time of the plurality of data packets.
[0024] Optionally, in any of the preceding aspects, the reducing comprises minimizing the average dwell time for each of the ordered plurality of data packets as long as the delivery time for each of the plurality of data packets is satisfied.
[0025] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures for which like references indicate like elements.
[0027] FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented.
[0028] FIG. 2 illustrates an example New IP packet which may be used to implement the present technology.
[0029] FIG. 3 illustrates an example contract format for use in the New IP packet of FIG. 2.
[0030] FIG. 4 illustrates an example of actions that may be specified in a contract clause.
[0031] FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ).
[0032] FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments.
[0033] FIGS. 7A - 7F illustrate an example embodiment of implementing the
TDMS scheduling algorithm of FIG. 6.
[0034] FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments.
[0035] FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure.
[0036] FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure.
DETAILED DESCRIPTION
[0037] The present disclosure will now be described with reference to the figures, which in general relate to technology for providing a latency guarantee for packet delivery in a network.
[0038] The present technology relates to quality of service (QoS) guarantees, and in particular, to latency guarantee for packet delivery in a network. A “New Internet Protocol” (New IP) packet is part of an Internet framework that brings several
capabilities to the present technology. The New IP, among other features, includes a contract that may be fulfilled by routers in a network. Through use of the contracts, service assurance requirements at a packet level, such as end-to-end latency guarantees, may be provided. As part of its guarantee, the technology provides a scheduling algorithm that regulates the transmission of the data packets stored at the router to minimize dwell time. However, such services can also be requested in non-contract segments of a packet, and it will be understood that contracts are just one option for specifying the services.
[0039] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0040] FIG. 1 illustrates an example system in which embodiments of the disclosure may be implemented. The system 100 includes routers 120, 122 and 124. In one embodiment, routers 122 are intermediate routers. Computing devices 110 connect to network 130 via router 120 in order to access resources provided by network 130. Computing devices 110 may be virtually any type of general- or specific-purpose computing device. For example, these computing devices 110 may be user devices such as desktop computers, laptop computers, tablet computers, display devices, cameras, printers, Internet of Things (IoT) devices, wearable computing devices, mobile devices or smartphones. However, in a data center environment, these computing devices may be server devices such as application server computers, virtual computing host computers, or file server computers. Moreover, computing
devices 110 may be individually configured to provide computing, storage, and/or other suitable computing services.
[0041] Network 130 may include a plurality of network devices that facilitate the access of content by devices 110. Each of the plurality of network devices may comprise one of a router, a switch, a server, a database server, a hub, a firewall, an intrusion detection/prevention (IDP) device and/or any other type of networking equipment or device that facilitates the transfer of data to and from devices 110. Network 130 includes routers 120, 122 and 124, which communicate using various protocols, such as the Border Gateway Protocol and the Internet Control Message Protocol, in order to exchange routing, network configuration information, and other information. In one embodiment, the routers communicate using a new internet protocol (IP). The network 130 may be a local area network ("LAN"), such as a token ring or Ethernet network, a virtual local area network ("VLAN"), or another type of network. The network 130 may comprise one or more wired or wireless links. For example, network 130 may be an Ethernet network that comprises one or more Ethernet cables. In another example, the network 130 may be a Wireless Fidelity ("Wi-Fi") network that uses wireless radio transmissions to communicate information. In another example, network 130 may be a mobile network. Although shown as a single network 130, the network 130 may comprise any number of interconnected networks, either public or private, in which the various networks interconnect to form one or more virtual networks.
[0042] Network 130 provides a variety of resources that may be accessed by devices 110. In the example of FIG. 1 , network 130 includes content server 126 that stores or otherwise sources content, which refers to any data commonly transmitted and/or stored within a network, such as web-based applications, images, documents, web pages, video data, audio data such as voice, web-based games, scripts, or any other type of network-based content. Network 130 may support multicast techniques to improve the delivery efficiency of data transmitted with the network 130. In one embodiment, the network 130 supports the new IP to improve latency and guarantee data packet delivery, as discussed herein. Network 130 may also connect to a variety of other types of devices (e.g., file servers, printers, telephones, and e-mail and other application servers). Network 130 is also shown coupled to external network 140 (e.g.,
the Internet) via router 124. Public network 140 may include, for example, one or more client computing devices. Public network 140 may provide access to web servers, application servers, public databases, media servers, end-user devices, and many other types of network resource devices and content.
[0043] Network 130 may transmit content to devices 110 through router 120 using one or more packet-based protocols, such as an Internet Protocol (IP)/Transmission Control Protocol (TCP) or the new IP. In this respect, network 130 may support the transmission of data via discrete data units, often referred to as "packets." As a result, network 130 may be referred to as a "packet-based" or "packet switched" network. Network traffic delivered by network 140 may be classified according to a number of categories. For instance, a content server (not shown) may stream live video to one of devices 110 through router 120. Packets that transmit such video may be classified as streaming multimedia packets and may require a guaranteed bandwidth to provide an acceptable user experience. The content server may also send web pages to one of devices 110 using HTTP packets. As another example, information exchanged by routers 120, 122 and 124 may be categorized as high precision communication (HPC) services. Information categorized as HPC requires, for example, a particular quality of service (QoS), such as latency or throughput, which may be defined at the granularity of a packet, a group of packets or a flow. Taking latency as an example, a latency guarantee in HPC services refers to the network needing to ensure the delivery of a packet, a group of packets or all packets in a flow within a bounded time frame. When messages are specified with a same deadline, then the latency guarantee is ensured at the flow level. If each message is specified with an independent deadline, the latency guarantee is ensured at the data packet level. Thus, it is appreciated that various categories of network traffic may require varying levels of network performance.
[0044] Routers 120, 122 and 124 receive, analyze, and classify data packets to assign the data packets to a suitable priority level. In addition to classifying data packets, routers 120, 122 and 124 process the received and classified data packets according to their priority level. In this manner, routers 120, 122 and 124 implement aspects of the QoS guarantees provided by network 130. In addition, based on information received from other devices in system 100, routers 120, 122 and 124
determine the appropriate route through the system for each received data packet and forward the data packet accordingly. In one embodiment, as will be described below with reference to FIG. 2, the priority level of the data packets is determined using the new IP in which a contract specification details the guarantees, such as QoS guarantees, of the data packets. For example, the contract specification may detail an end-to-end latency requirement for a data packet that traverses the network 130.
[0045] Routers 120, 122 and 124 may each include one or more queues 125 configured to store data packets as they arrive at intermediate nodes, such as routers 122. The queues 125 typically include a queuing algorithm, which is usually implemented at the router level. In particular, output queues process the data packets that have been enqueued to await their turn to be forwarded or routed to the appropriate output port or interface. A primary purpose of these queuing algorithms is to give preferential treatment to data packets having higher priority (which are enqueued at the router output port) over other enqueued data packets having lower priority. In one embodiment, the queue 125 is a latency guarantee queue (LGQ) in which the data packets have a per-hop (e.g., router hop) deadline that is guaranteed, as well as a minimal dwell time at each hop. The dwell time is defined as the time a packet spends at a router before being forwarded to the next hop and includes: processing delay, queueing delay (affected by packet scheduling in the outgoing queue) and transmission delay (which is affected by the size of the packet and is proportional to the packet size).
[0046] FIG. 2 illustrates an example IP packet which may be used to implement the present technology. Termed "New Internet Protocol" (New IP), the packet 200 is part of an Internet framework that brings several capabilities to the present technology. New IP is a data plane technology that defines a new IP packet format, its specification, and corresponding capabilities in the network nodes. The new IP packet format is shown in FIG. 2, which includes three types of characteristics, each solving a specific class of problem in the network 130, namely, a) addressing evolution, b) the contract inclusion, and c) the payload extension. A new header specification is the start of the packet 200 that describes the offsets to the specifications for shipping, contract and payload.
[0047] The new IP address (shipping spec) evolution aims to replace the current fixed type of addressing in order to provide flexibility to include all types of addresses and fit different reachability scenarios. The New IP shipping specification is backward compatible with the existing address schemes (e.g., IPv4 and IPv6).
[0048] The new IP contract (contract spec) inclusion enables a large selection of network capabilities, their functioning and regulative control at the finest packet-level granularity, as shown in FIG. 3. The network 130 and routers 120, 122 and 124 fulfill the contract 300, assuming the contract has been agreed between the packet sender (e.g., sending computing device 110), the packet receiver (e.g., receiving computing device 110) and the network 130. The contract 300 describes a formal service-specific arrangement between two or more parties, which includes one or more contract clauses 210 to describe the type of network service capability, actions and accounting information. Through use of the contracts 300, service assurance requirements at a packet level are provided. In particular, a contract 300 carries specific attributes associated with time-engineered services, high-throughput media services and mission-critical ultra-reliable services. In one embodiment, the structure of the contract 300 is defined in a Chomsky style. Compared to traditional QoS, contracts operate at a much lower level (per packet) and instruct routers using high-level abstract commands.
[0049] Each of the contract clauses 210 is a combination of an Event,
Condition, and Action (ECA) and can optionally include Metadata 212. In one embodiment, an atomic contract ECA exists in which the event and condition are empty. A contract clause 210 describes how the routers 120, 122 and 124 treat the packet as it traverses the network 130 based on the predefined triggering event and condition. When a predefined triggering event and condition have occurred, various actions are processed by the intermediate routers 122. For example, in order to support ultra-reliable low latency communication (uRLLC) in 5G, two contract clauses C1 and C2 may be used, where the C1 contract clause indicates a BoundedLatency action and the C2 contract clause has a NoPktLoss action (i.e., the low latency and reliability are to be met). Actions are described with reference to FIG. 4 below.
[0050] The Metadata contains data about the packet, e.g. accounting information, customized statistics about the flow on intermediate hops, the contextual information about the user and application, etc. The in-network node intelligence is naturally embedded and supported by the New IP framework.
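By way of a non-limiting illustration, a contract clause of this kind could be modeled in software roughly as follows. This is a Python sketch; the class and field names, and the budget_ms value, are illustrative assumptions made here and are not part of the New IP specification.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class ContractClause:
        # Event and condition may be empty, which yields an "atomic" clause.
        event: Optional[str] = None
        condition: Optional[str] = None
        action: str = "BoundedLatency"
        metadata: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class Contract:
        clauses: List[ContractClause]

    # The uRLLC example above: one clause bounds latency, the other forbids loss.
    c1 = ContractClause(action="BoundedLatency", metadata={"budget_ms": 10.0})
    c2 = ContractClause(action="NoPktLoss")
    urllc_contract = Contract(clauses=[c1, c2])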
[0051] The new IP Payload (payload spec) associates semantics to the data payload. New IP payload provides options to the receiving computing device 110 to consume any residual information in the payload while allowing the network to drop portions of the payload when congestion occurs. This type of communication is referred to as a qualitative communication, which helps to mitigate re-transmission overheads and delays when faced with slow or congested conditions.
[0052] FIG. 4 illustrates an example of actions that may be specified in a contract clause. An action set includes one or more actions 400 defined in the contract specification that is known and processed by all new IP nodes. For example, applications will insert operator-defined and/or application-defined actions. These actions 400 may be generally categorized into operations, monitoring, telemetry or signaling. New contracts as defined in the specification may be implemented across different hardware platforms using different methods or algorithms. However, the result of implementation leads to packet delivery guarantees between the sender and the receiver. Several actions are described below (some of which are shown in action 400):
• Action BoundedLatency(t) (also referred to herein below as "InTimeGuarantee") instructs the router to deliver a packet any time before time t (with a prescribed unit of time). It may use corresponding metadata to describe an end-to-end network latency or available latency since transmission starts from the sender. An algorithm called latency-based forwarding (LBF) implements this action. Instead of a class of services, contract clauses embed exact parameters and objectives in the packet. The LBF algorithm is disclosed in A. Clemm, T. Eckert, "High-Precision Latency Forwarding over Packet-Programmable Networks," IEEE/IFIP Network Operations and Management Symposium, 2020, which is incorporated by reference herein in its entirety.
• Action OnTimeDelivery(t, t'), with metadata t and t' as a total end-to-end time and elapsed time respectively, delivers a packet at a specific time in order to accommodate very low values of time-jitter.
• Action Coordinate enables multi-user applications to adjust packet delivery timings in the network. The action needs to identify co-flows, which is actually done from address casting part of the Shipping Spec along with timing dependency parameters as specified in metadata. For an explanation, see K. Makhijani, H. Yousefi, K. K. Ramakrishnan and R. Li, "Extended Abstract: Coordinated Communications for Next-Generation Networks," 2019 IEEE 27th International Conference on Network Protocols (ICNP), 2019, which is incorporated by reference herein in its entirety.
• Action NoPacketLoss instructs the network to deliver the packet by every possible means.
• Action PreferredPath may be a set of node addresses or other forms of path identifiers embedded in the packet to provide path guarantees.
• Action PktTrace tracks the packet flow behavior across the network and may be used to understand the end-to-end service assurance and performance degradations in particular. Such actions add noise and, therefore, are subjected to specific events or conditions of interest. For example, in order to understand hop-by-hop latency, PktTrace action may capture a path in the network along with the time spent in each node.
• Action PktMonitor helps gain visibility into the current state of the system. This action captures events in a network relating to queue thresholds, packet drops, etc. Such actions identify situations such as congestion before they occur by monitoring the thresholds. For example, in order to identify real-time congestion, if a queue is built up to 70%, then this action sets the corresponding metric value in the metadata.
• Action ReportInsuringParty is an operator driven action to be executed when a service objective violation occurs; the node in error is then required to report such violations to the insuring party. Operators use this for the assessment of
damages due to service level objectives violations, which may help build trust between different network systems.
[0053] This disclosure focuses on a new Action called InTimeGuarantee, which instructs the router to deliver a data packet with a latency guarantee. The Action InTimeGuarantee uses deadline-aware scheduling with an algorithm (referred to herein as the Total Dwell Minimizing Schedule or "TDMS," described below) such that the packets specified with InTimeGuarantee can meet corresponding end-to-end deadlines. It may use corresponding metadata to describe an end-to-end network latency, in which the metadata is defined by (1) a budget, which denotes the residual or remaining time before the packet deadline runs out and the packet is considered unsuccessful, and (2) a hop, which denotes the residual or remaining number of hops towards the destination. For purposes of discussion, the remaining number of hops may be determined when the routing path is configured, and the total number of hops between the source and destination is fixed. The afore-mentioned technology may be applied to many different networks, such as an Internet-of-Things (IoT) network, flexible addressing for smart homes, service-aware routing and more. Further, similar requests for a particular delivery time can be associated with packets in other forms, and neither the InTimeGuarantee action nor the contracts framework shall be limiting to this disclosure.
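As a hedged illustration of how a per-hop deadline might be derived from the InTimeGuarantee metadata, the sketch below splits the residual budget evenly across the residual hops. The even-split policy and the function name are assumptions of this sketch; the disclosure only requires that the per-hop deadline be derived from the budget and hop metadata.

    def per_hop_deadline(budget_remaining: float, hops_remaining: int) -> float:
        """Derive a deadline for the current hop from the remaining budget and
        the remaining number of hops by splitting the budget evenly."""
        if hops_remaining <= 0:
            raise ValueError("no hops remaining; the packet should already be delivered")
        return budget_remaining / hops_remaining

    # e.g. 9 ms of budget left and 3 hops (including this one) -> 3 ms at this hop
    print(per_hop_deadline(9.0, 3))  # 3.0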
[0054] FIG. 5 illustrates an example network in which routers have a latency guarantee queue (LGQ). The network 500 shows sending nodes 110, router 122 with a dedicated LGQ 125 and receiver nodes 110. The LGQ is a queue that includes a scheduling mechanism to regulate transmission of data packets stored in a node of the network in order to minimize dwell time. Such a scheduling mechanism may provide a level or quality of service for delivery of the data packets. It is appreciated that network 500 is not limited to the depicted components, but may be a much larger network with additional components, such as network 100 in FIG. 1.
[0055] At router 122 the LGQ 125 queues received data packets for latency guarantee according to the contract specified in the received data packets. In one embodiment, data packets received by the router 122 that include an InTimeGuarantee contract clause are stored in the LGQ 125. These data packets are
assigned a highest scheduling priority compared to other received data packets that do not include the deadline constraints (i.e., do not include the Action InTimeGuarantee in the contract clause). Data packets that do not include the deadline constraints may be stored in other, non-LGQ queues (not shown).
[0056] The routers 122 may include multiple input interfaces, with each input interface having one or more queued packets waiting to be processed and routed by the router 122. Additionally, each queued packet may have a different associated priority level which specifies the particular Quality of Service (QoS) level to be used when handling that packet. When a packet at the input interface is processed by router 122, the packet is first dequeued from the input interface, and is then decapsulated from its data link frame. After decapsulation, the packet is classified with an associated priority level. The packet is then encapsulated and routed to its appropriate output interface queue within the output queuing structure. Since multiple latency-sensitive packets may need to be scheduled on a same output interface of a LGQ, a packet scheduling algorithm is implemented to achieve a minimal average dwell time at each router during packet forwarding. The packet schedule guarantees the delivery of a data packet stored in the LGQ 125 within its designated time frame (i.e., satisfies the InTimeGuarantee).
[0057] A packet schedule is generally defined as the order of the data packets
(also referred to herein as “ordered list of the data packets”) that get transmitted from LGQ 125 through an output interface. The term “feasible schedule” is defined herein as a schedule under which all packets being scheduled in the queue have a deadline that can be satisfied. The average dwell time may be determined using the following parameters and equations:
• m: the number of packets in the LGQ;
• S1, S2, ..., Si, ..., Sm : the size of the packets in the LGQ;
• D1, D2, ..., Di, ..., Dm : the transmission time of the packets, which is proportional to the packet size (Di = Si / B). Over the sampling time, an average link rate B may be assumed. The transmission time may be calculated as the packet size divided by the average link rate;
• T1, T2, ..., Ti, ..., Tm : the dwell time of the packets, which includes the transmission time and the waiting time;
• L1, L2, ..., Li, ..., Lm : the deadline for each packet at the current hop along a respective path. A deadline refers to the remaining time, but could be adjusted to accommodate an absolute time (described below). In one embodiment, a central controller may set a specific deadline or it may be dynamically set at each hop. In one further embodiment, a per-hop deadline of the packets may be calculated from the remaining budget and the remaining number of hops of the packet (e.g., Li = budgeti / hopi).
[0058] For a packet schedule, if packets are transmitted in the order of 1, 2, 3, ..., m, then the dwell time for packet i is:
Ti = D1 + D2 + ... + Di
The total dwell time of the packets in the LGQ is:
Ttotal = T1 + T2 + ... + Tm
The average dwell time of the packets is:
Tavg = Ttotal / m
[0059] Thus, the average dwell time determines how long an average packet stays in a router before its last bit gets transmitted. The smaller the average dwell time, the less end-to-end latency will be experienced by end users. That is, the smaller the average dwell time for a packet in the LGQ 125, the less latency exists between sending nodes 110 and receiver nodes 110.
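For concreteness, the dwell-time bookkeeping above can be sketched as follows (Python; the function and variable names are illustrative only). It also shows that the same packets transmitted in a different order can yield a very different average dwell time.

    def average_dwell_time(tx_times):
        """Average dwell time when packets are sent in the given order: the
        dwell time of packet i is D1 + D2 + ... + Di (waiting plus its own
        transmission time)."""
        elapsed = 0
        dwell_times = []
        for d in tx_times:
            elapsed += d
            dwell_times.append(elapsed)
        return sum(dwell_times) / len(dwell_times)

    # The same four packets in two different orders (times in ms):
    print(average_dwell_time([5, 3, 2, 1]))  # 8.5
    print(average_dwell_time([1, 2, 3, 5]))  # 5.25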
[0060] FIG. 6 illustrates an example scheduling algorithm in accordance with the present embodiments. The scheduling algorithm 600, referred to as the Total Dwell Minimizing Schedule (TDMS) algorithm, determines whether a minimal average dwell time of packets stored in a queue can be met in order to meet delivery time
guarantees, as set forth in the contract clauses of the packets. Minimizing the average dwell time is equivalent to minimizing the total dwell time. The minimal average dwell time results in minimal average packet delay at each router, thus minimizing average end-to-end delay for packet deliveries in the network. Moreover, if a feasible schedule can be determined, then a schedule is generated and the data packets are scheduled for delivery according to the schedule. If a feasible schedule cannot be determined, the data packet is discarded.
[0061] In step 1 of the TDMS algorithm 600, the variables are set as follows: J1 is the packet set, i denotes the ith data packet in the set J1, m is the total number of packets in the LGQ 125, D is the transmission time of the packet and L is the deadline of the packet.
[0062] The packets stored in the LGQ 125 are optionally sorted by the decreasing order of transmission time at step 2. That is, J1 is sorted such that D[i] > D[i+1]. In one embodiment, if D[i] = D[i+1], sorting is unnecessary as the order does not matter. In another embodiment, the packets stored in the LGQ 125 are optionally sorted by increasing order of transmission time at step 2.
[0063] The TDMS schedule is determined from the last to the first packet through steps 3 - 14 (referred to herein as the “outer loop” of the algorithm), where steps 6 - 9 of the algorithm (referred to herein as the “inner loop” of the algorithm) identify the last packet to be transmitted of the remaining packets to be placed in the TDMS schedule.
[0064] At step 4, j* is a bookkeeping variable that is initially set to zero.
[0065] At step 5, the total transmission time is calculated as a sum of the transmission time of each data packet stored in the queue. In one embodiment, when the data packets are sorted, the total transmission time is calculated as a sum of the transmission time of each data packet in the ordered (sorted) data packet set. It will be understood that the total transmission time can also be adjusted if the router will experience other delays (for example, for transmission of higher-priority packets or other operations).
[0066] Steps 6 - 9 then identify the last packet to be transmitted, of the remaining packets to be placed in the schedule. This may be accomplished by determining the data packet in the ordered data packet set with the largest transmission time whose deadline is greater than or equal to the total transmission time of packets in the current data packet set. More specifically, proceeding through the packets 1 to m in order, if the deadline of a packet is greater than or equal to the total transmission time, then j* is set to the index of that packet, having the largest transmission time and an acceptable deadline. In this context, the “deadline” is an amount of time remaining until the packet must be transmitted. Thus, if the deadline is greater than the total transmission time, the packet can be transmitted last and still meet its deadline. However, the “deadline” can also be expressed in other ways, such as an absolute time when the packet must be sent (for example 12:34pm).
[0067] At step 10, if j* remains zero (j* == 0), meaning j* has not been updated and set to the index of the packet with the largest transmission time, then the schedule is determined to be not feasible (steps 11 - 12). That is, no data packet exists in the current data packet set that has a deadline that is greater than or equal to the total transmission time of the packets. In one embodiment, data packets may be dropped when not placed in the schedule. In another embodiment, data packets may be sent late (using another scheduling policy) when not placed in the schedule. Other actions can also be taken to make a schedule feasible, such as reducing a size of some packets.
[0068] If j* has been updated at step 10 to the index of the packet with the largest transmission time, the determined data packet is prepended to the schedule and removed from the ordered data packet set. The process is repeated with the new ordered data packet set until no more data packets remain in the ordered data packet set. That is, at step 13 the new ordered data packet set is generated by removing packet j* from the previous set, and j* is added to the front of the TDMS set at step 14. When no more data packets remain, the TDMS schedule is returned at step 15.
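A minimal Python sketch of the selection loop described in paragraphs [0061] - [0068] is shown below for illustration. The Packet fields and the tdms_schedule name are assumptions of this sketch, and a router implementation would operate on its own queue data structures rather than Python objects.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Packet:
        name: str
        tx_time: float   # transmission time D of the packet at this hop
        deadline: float  # per-hop deadline L (time remaining at this hop)

    def tdms_schedule(packets: List[Packet]) -> Optional[List[Packet]]:
        """Return a transmission order following the TDMS steps described above,
        or None when no feasible schedule exists for the given packets."""
        # Step 2: sort by decreasing transmission time.
        remaining = sorted(packets, key=lambda p: p.tx_time, reverse=True)
        schedule: List[Packet] = []
        # Steps 3 - 14: repeatedly pick the packet to be sent last.
        while remaining:
            total_tx = sum(p.tx_time for p in remaining)      # step 5
            last = None
            for p in remaining:                               # steps 6 - 9
                # First hit has the largest transmission time among the packets
                # whose deadline tolerates being sent after all the others.
                if p.deadline >= total_tx:
                    last = p
                    break
            if last is None:                                  # steps 10 - 12
                return None  # infeasible; a node might drop or demote a packet
            remaining.remove(last)                            # step 13
            schedule.insert(0, last)                          # step 14: prepend
        return schedule                                       # step 15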
[0069] FIGS. 7A - 7F illustrate an example embodiment of implementing the TDMS scheduling algorithm of FIG. 6. As shown in FIG. 7A, the queues (such as LGQ 125) of the intermediate routers 122 contain a packet set 700 that includes two properties: deadline and transmission time. The deadline refers to the deadline at a current hop for an associated packet in the packet set, and the transmission time refers to the amount of time it takes to transmit the packet from the queue to the next hop node. For example, packet P1 has a deadline of 10 and a transmission time of 5. For purposes of discussion, we can assume that all times are in milliseconds (ms).
[0070] As discussed above, the TDMS scheduling algorithm 600 first orders the packets in the packet set 700 into a decreasing (descending) order of transmission time, as shown in 702. In this case, the packets in the packet set 700 are rearranged into the following order: P1 , P4, P2, P3, as shown in FIG. 7B. The packets arranged in decreasing order of transmission time will be used as the ordered packet set operated upon by the TDMS scheduling algorithm.
[0071] In this case, steps 3 - 13 of the TDMS scheduling algorithm 600 are executed four times (equal to the number of packets to be scheduled or placed in an ordered list), as shown in FIGS. 7C - 7F, with steps 6 - 9 responsible for identifying the last packet. At each iteration (loop) of the TDMS scheduling algorithm 600, a total transmission time is calculated as a sum of the transmission time of each data packet in the current ordered data packet set. The data packet in the ordered data packet set with a deadline that is greater than or equal to the total transmission time, and with a largest transmission time, will be selected.
[0072] For example, in the first iteration, the total transmission time of data packet set J1 is the sum of the transmission time for packets P1, P2, P3 and P4 (sum of the transmission time for J1 = 5+3+2+1 = 11 ms). The data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time and that meets its deadline (e.g., the deadline is equal to or greater than the total transmission time), which is data packet P2. Stated differently, the data packet may be transmitted last and still meet its deadline. In this case, only data packets P2 and P3 can meet their deadline (i.e., both data packets have a deadline that is equal to or greater than the total transmission time of 11 ms). Of the data packets P2 and P3 that meet their deadline, data packet P2 has the largest transmission time. Data packet P2 is therefore selected as the last entry in the TDMS schedule, as shown in FIG. 7C. Once placed in the schedule, the data packet P2 is removed from the ordered data packet set to form a second data packet set J2 with the remaining data packets P1, P3 and P4.
[0073] In the second iteration, the total transmission time of data packet set J2 is the sum of the transmission time for packets P1, P3 and P4 (sum of the transmission time for J2 = 5+3+1 = 9 ms). The data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P1. At this stage of the example, data packets P1 and P3 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 9 ms). Since data packet P1 has a larger transmission time, it is therefore selected as the last open entry in the TDMS schedule, as shown in FIG. 7D. Once placed in the schedule, the data packet P1 is removed from the ordered data packet set J2 to form a third data packet set J3 with the remaining data packets P3 and P4.
[0074] Repeating the loop for iteration three, the total transmission time of data packet set J3 is the sum of the transmission time for packets P3 and P4 (sum of the transmission time for J3 = 3+1 = 4 ms). The data packet selected for placement as the last data packet in the TDMS schedule is the data packet with the largest transmission time that meets its deadline, which is data packet P4. At this stage of the example, data packets P3 and P4 can meet their deadline (i.e., a deadline that is equal to or greater than the total transmission time of 4 ms). Since data packet P4 has a larger transmission time, it is therefore selected as the last open entry in the TDMS schedule, as shown in FIG. 7E. Once placed in the schedule, the data packet P4 is removed from the ordered data packet set J3 to form a fourth data packet set J4 with the remaining data packet P3.
[0075] In the fourth iteration, the total transmission time of set J4 is 1 ms since the only data packet remaining is P3. The data packet P3 is placed in the front of the queue. As shown in FIG. 7F, the schedule has been generated in which data packet P3 is first in the queue and data packet P2 is last in the queue. The dwell time for each of the data packets may also be calculated as the transmission time of the data packet plus any waiting time. For example, since data packet P3 is in the front of the queue, there is no waiting time. The dwell time is therefore equal to the transmission time of 1 ms. Data packet P4 has a transmission time of 3 ms and a waiting time of 1 ms (the dwell time of the packets prior to P4). Thus, the dwell time of data packet P4 is the transmission time (3 ms) + waiting time (1 ms) = 4 ms. Similarly, the dwell time for data packet P1 is the transmission time (5 ms) + waiting time (4 ms) = 9 ms, and the dwell time of data packet P2 is the transmission time (2 ms) + waiting time (9 ms) = 11 ms. This results in a minimal average dwell time of (1 + 4 + 9 + 11 = 25 ms) divided by 4 data packets, which equals a minimal average dwell time of 25/4 = 6.25 ms. Compared to other known dwell time techniques, the TDMS scheduling algorithm results in a lower minimal average dwell time.
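Reusing the hypothetical Packet and tdms_schedule sketch given after paragraph [0068], the example of FIGS. 7A - 7F can be reproduced as follows. Only P1's deadline (10 ms) is stated explicitly above; the deadlines assumed for P2, P3 and P4 are merely values consistent with the narrative of the figures.

    packets = [
        Packet("P1", tx_time=5, deadline=10),
        Packet("P2", tx_time=2, deadline=11),  # assumed deadline (>= 11 ms)
        Packet("P3", tx_time=1, deadline=12),  # assumed deadline (>= 11 ms)
        Packet("P4", tx_time=3, deadline=6),   # assumed deadline (between 4 and 9 ms)
    ]
    order = tdms_schedule(packets)
    print([p.name for p in order])              # ['P3', 'P4', 'P1', 'P2']

    elapsed, dwell_times = 0, []
    for p in order:
        elapsed += p.tx_time                    # dwell = waiting + own transmission
        dwell_times.append(elapsed)
    print(dwell_times)                          # [1, 4, 9, 11] (ms)
    print(sum(dwell_times) / len(dwell_times))  # 6.25 ms average dwell time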
[0076] In one embodiment, during implementation of the TDMS scheduling algorithm, if no data packet has a deadline equal to or greater than the total transmission time, then the schedule cannot be determined and the data packet is dropped.
[0077] FIGS. 8 and 9 illustrate flow diagrams of regulating delivery time of data packets in a network in accordance with the disclosed embodiments. In particular, FIG. 8 illustrates a general overview of the process for regulating delivery time of data packets, and FIG. 9 illustrates the process flow of the TDMS scheduling algorithm. In the discussion that follows, the intermediate routers or nodes perform the procedures. However, it is appreciated that any other functional unit or processing unit may implement the processes described herein, and the disclosure is not limited to implementation by the routers.
[0078] As defined herein, the delivery time of a data packet may be measured according to different embodiments. In one embodiment, the delivery time may be measured as a remaining time for delivery. For example, if a data packet arrives at an intermediary node with a delivery time of 10 μs, the data packet dwells at the intermediary node for 1 μs and the transmission time to the next node is 1 μs, then the remaining time for delivery is 8 μs. In another embodiment, the delivery time may be measured as an absolute amount of time. For example, if a data packet arrives at an intermediary node at 11:37 am and needs to arrive at its destination by 11:47 am, the intermediary node can determine the amount of time remaining (10 minutes) to meet its deadline (11:47 am) regardless of the amount of time it takes to hop from one node to the next.
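A trivial sketch of the two measurement conventions described above (the function names and the unit conventions are illustrative assumptions):

    def remaining_after_hop(budget: float, time_at_hop: float) -> float:
        """Remaining-time form: subtract the time spent at this hop, including
        transmission to the next node, from the carried budget."""
        return budget - time_at_hop

    def remaining_from_absolute(deadline_abs: float, now: float) -> float:
        """Absolute-time form: the deadline travels unchanged; each node derives
        the remaining time from its local clock."""
        return deadline_abs - now

    print(remaining_after_hop(10.0, 2.0))                     # 8.0 (e.g., microseconds)
    print(remaining_from_absolute(11*60 + 47, 11*60 + 37))    # 10 (minutes)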
[0079] With reference to FIG. 8, at step 802, a plurality of data packets are received at a node, where each of the data packets is associated with a delivery time deadline. In one embodiment, the delivery time deadline may be specified in a contract clause of the data packet using the New IP. For the data packets that include a contract clause specifying a delivery time deadline, the data packets are stored in one or more queues of the node at step 804. Based at least in part on the delivery time deadlines in the data packets, a schedule is generated at step 806, and the data packets are sent to a next hop node according to the schedule at step 808.
[0080] Turning to FIG. 9, the TDMS schedule process of FIG. 6 is described.
At step 810, data packets received at the intermediate nodes are optionally sorted in the LGQs 125 in decreasing order of transmission time. In one embodiment, the data packets are new IP packets 200 that include a contract clause 210 having an action and metadata. When sorted, the data packets form an ordered data packet set. For each of the data packets in the ordered data packet set (sorted) or stored in the queue (not sorted), a total transmission time is calculated as a sum of the transmission time of each data packet at step 812. Step 814 determines the data packet in the ordered data packet set with the largest transmission time whose deadline at the current hop is greater than or equal to the total transmission time. If no data packet is found (i.e., no data packet satisfies the conditions), then a data packet is dropped (or sent to a lower priority queue, or some other action is taken) at step 818. Otherwise, if a data packet is found to satisfy the conditions, then the data packet is stored in LGQ 125 at step 820. In one embodiment, the data packets are stored in an order beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent, such that each new packet is placed at the beginning. The determined data packet is then removed from the data packet set at step 822. At step 824, if no more data packets exist in the data set, then the schedule is returned at step 826. Otherwise, the process returns to step 812 and is repeated.
[0081] FIG. 10 illustrates an embodiment of a node in accordance with embodiments of the disclosure. The node (e.g., router) may transmit and receive data (e.g., a new IP packet) to and from at least one electronic device and/or a server 110,
etc., through a network (e.g., global network), such as network 130. The node 1000 may transmit the new IP packet, which is received through the network, to another electronic device 110 through a local network. Additionally, the node 1000 may transmit a new IP packet, which is received from the other electronic device, to the electronic device or the server 110 through the network. It is appreciated that the node may also use other IP packets, such as IPv4 and IPv6 packets.
[0082] In one embodiment, the node 1000 may comprise a plurality of input/output ports 1010/1030 and/or receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes, a processor 1020 including an address translation circuit to process data and determine which node to send the data, storage 1022 including cache 1024 and long-term storage 1026. Although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors. The processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. The processor 1020 may be configured to implement any of the schemes described herein using any one or combination of steps described in the embodiments. Moreover, the processor 1020 may be implemented using hardware, software, or both.
[0083] The storage 1022 (or memory) may include cache 1024 and long-term storage 1026, and may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein. Although illustrated as a single storage, storage 1022 may be implemented as a combination of read only memory (ROM), random access memory (RAM), or secondary storage (e.g., one or more disk drives or tape drives used for non-volatile storage of data).
[0084] In one embodiment, a schedule of data packets received by the node may be generated by the total dwell minimizing schedule (TDMS) module 1028 in accordance with embodiments of the disclosure. The data packets may be stored in the storage 1022. The processor 1020 may calculate a schedule of data packets to minimize dwell time. Other electronic devices or servers 110 in the network
may process the new IP packet. In another embodiment, the processor 1020 may generate a schedule based on delivery time deadlines embedded in the new IP packet.
[0085] FIG. 11 shows an example embodiment of a computing system for implementing embodiments of the disclosure. Computer system 1100 includes a processor 1104 and a memory 1108 that communicate with each other, and with other components, via a bus 1112. Bus 1112 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
[0086] Memory 1108 may include various components (e.g., machine- readable media) including, but not limited to, a random access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system 1116 (BIOS), including basic routines that help to transfer information between elements within computer system 1100, such as during start-up, may be stored in memory 1108. Memory 1108 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 1120 embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory 1108 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
[0087] Computer system 1100 may also include a storage device 1124.
Examples of a storage device (e.g., storage device 1124) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device 1124 may be connected to bus 1112 by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device 1124 (or one or more components thereof) may be removably interfaced with computer system 1100 (e.g., via an external port connector (not shown)). Particularly, storage device 1124 and an associated machine-readable medium 1128 may provide nonvolatile and/or volatile
storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 1100. In one example, software 1120 may reside, completely or partially, within machine-readable medium 1128. In another example, software 1120 may reside, completely or partially, within processor 1104.
[0088] Computer system 1100 may also include an input device 1132. In one example, a user of computer system 1100 may enter commands and/or other information into computer system 1100 via input device 1132. Examples of an input device 1132 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. Input device 1132 may be interfaced to bus 1112 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 1112, and any combinations thereof. Input device 1132 may include a touch screen interface that may be a part of or separate from display 1136, discussed further below. Input device 1132 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
[0089] A user may also input commands and/or other information to computer system 1100 via storage device 1124 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 1140. A network interface device, such as network interface device 1140, may be utilized for connecting computer system 1100 to one or more of a variety of networks, such as network 1144, and one or more remote devices 1148 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any
combinations thereof. A network, such as network 1144, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software 1120, etc.) may be communicated to and/or from computer system 1100 via network interface device 1140.
[0090] Computer system 1100 may further include a video display adapter
1152 for communicating a displayable image to a display device, such as display device 1136. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter 1152 and display device 1136 may be utilized in combination with processor 1104 to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system 1100 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus 1112 via a peripheral interface 1156. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
[0091] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details.
[0092] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be
understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0093] The computer-readable non-transitory media includes all types of computer readable media, including magnetic storage media, optical storage media, and solid state storage media and specifically excludes signals. It should be understood that the software can be installed in and sold with the device. Alternatively the software can be obtained and loaded into the device, including obtaining the software via a disc medium or from any manner of network or distribution system, including, for example, from a server owned by the software creator or from a server not owned but used by the software creator. The software can be stored on a server for distribution over the Internet, for example.
[0094] Computer-readable storage media (medium) exclude (excludes) propagated signals per se, can be accessed by a computer and/or processor(s), and include volatile and non-volatile internal and/or external media that is removable and/or non-removable. For the computer, the various types of storage media accommodate the storage of data in any suitable digital format. It should be appreciated by those skilled in the art that other types of computer readable medium can be employed such as zip drives, solid state drives, magnetic tape, flash memory cards, flash drives, cartridges, and the like, for storing computer executable instructions for performing the novel methods (acts) of the disclosed architecture.
[0095] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the
presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0096] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0097] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.
[0098] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims
1. A method for regulating delivery time of a data packet between communication devices, comprising: receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
2. The method of claim 1, further comprising sorting the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
3. The method of claim 1 or 2, wherein generating the ordered list further comprises: calculating a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determining a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
4. The method of claim 3, wherein generating the ordered list further comprises: adding the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent in the sending step; and removing the determined data packet from the one or more queues.
5. The method of claim 4, further comprising iteratively repeating the calculating, the determining, the adding and the removing steps until none of the data packets remain in the ordered data packet set.
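To make the ordering of claims 3-5 concrete, the following is a minimal Python sketch of the backward construction they describe: sum the transmission times of the still-unplaced packets, pick the packet with the largest transmission time whose per-hop deadline is at least that sum, prepend it to the ordered list, remove it, and repeat. The class and field names (QueuedPacket, tx_time, hop_deadline) are illustrative assumptions, not terminology from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueuedPacket:
    """Illustrative stand-in for a queued data packet (names are assumptions)."""
    packet_id: str
    tx_time: float        # time needed to transmit this packet on the output link
    hop_deadline: float   # per-hop deadline derived from the packet's delivery time

def build_ordered_list(queue: List[QueuedPacket]) -> Optional[List[QueuedPacket]]:
    """Backward construction of the send order sketched from claims 3-5.

    Each round: sum the transmission times of the packets still unplaced,
    pick the packet with the largest transmission time whose per-hop deadline
    is at least that sum, prepend it to the ordered list, and remove it.
    Returns None when no eligible packet exists (the schedule is infeasible).
    """
    remaining = list(queue)
    ordered: List[QueuedPacket] = []
    while remaining:
        total_tx = sum(p.tx_time for p in remaining)
        # Packets that could be transmitted last and still meet their deadline.
        eligible = [p for p in remaining if p.hop_deadline >= total_tx]
        if not eligible:
            return None  # infeasible: some packet would miss its deadline
        last = max(eligible, key=lambda p: p.tx_time)
        ordered.insert(0, last)   # newly determined packet is sent earlier
        remaining.remove(last)    # than the ones determined before it
    return ordered
```

Prepending works because the packet selected in each round is the one sent last among the packets that remain, so when the loop ends the list runs from first-sent to last-sent, matching the ordering recited in claim 4.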
6. The method of claim 1, further comprising dropping a current data packet from the plurality of data packets when there is insufficient time to send the current data packet by a time required by the delivery time.
7. The method of any one of claims 1-6, wherein the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
8. The method of any one of claims 1-7, wherein the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
9. The method of any one of claims 1-7, wherein the plurality of data packets are received using a new internet protocol, each of the plurality of data packets comprises a header specification field, a shipping specification field, a contract specification field and a payload specification field, and the contract specification field includes a contract specifying the delivery time.
10. The method of any one of claims 1-9, wherein each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and remaining number of hops.
11. The method of claim 10, further comprising calculating the deadline at the current hop using a remainder of the budget and the remaining number of hops.
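As a rough illustration of claims 9-11, the sketch below models a contract clause carrying a latency budget and a remaining hop count, and derives a per-hop deadline from them. Splitting the remaining budget evenly across the remaining hops is only one plausible reading; claim 11 states no more than that the deadline is calculated from the remainder of the budget and the remaining number of hops, and all field names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ContractClause:
    """Illustrative contract clause (claims 9-10); field names are assumptions."""
    action: str                 # e.g. an action marking the packet as delivery-time bound
    remaining_budget_ms: float  # unspent portion of the end-to-end latency budget
    remaining_hops: int         # hops still ahead of the packet

def per_hop_deadline_ms(clause: ContractClause) -> float:
    """Derive a deadline at the current hop (claim 11). The even split across
    the remaining hops is an assumed policy, not one stated in the claims."""
    if clause.remaining_hops <= 0:
        raise ValueError("packet has no remaining hops")
    return clause.remaining_budget_ms / clause.remaining_hops
```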
12. The method of any one of claims 1-11, wherein the node includes multiple output ports and each output port is associated with the one or more queues.
13. The method of any one of claims 1-12, wherein the one or more queues are a latency guarantee queue.
14. A node for regulating delivery time of a data packet between communication devices, comprising: a non-transitory memory storage comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to: receive a plurality of data packets at a node, the data packets each specifying a delivery time; store, in one or more queues of the node, the data packets; generate, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and send, by the node, the data packets to a next hop node according to the ordered list.
15. The node of claim 14, wherein the one or more processors further execute the instructions to sort the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
16. The node of claim 14 or 15, wherein generating the ordered list causes the one or more processors to further execute the instructions to: calculate a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determine a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
17. The node of claim 16, wherein generating the ordered list further causes the one or more processors to execute the instructions to: add the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent in the sending step; and remove the determined data packet from the one or more queues.
18. The node of claim 17, wherein the one or more processors further execute the instructions to iteratively repeat the calculate, the determine, the add and the remove steps until none of the data packets remain in the ordered data packet set.
19. The node of claim 15, wherein the one or more processors further execute the instructions to drop a current data packet when there is insufficient time to send the current data packet by a time required by the delivery time.
20. The node of any one of claims 14-15, wherein the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
21. The node of any one of claims 14-20, wherein the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
22. The node of any one of claims 14-21, wherein the plurality of data packets are received using a new internet protocol, each of the plurality of data packets comprises a header specification field, a shipping specification field, a contract specification field and a payload specification field, and the contract specification field includes a contract specifying the delivery time.
23. The node of any one of claims 14-22, wherein each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and remaining number of hops.
24. The node of claim 23, wherein the one or more processors further execute the instructions to calculate the deadline at the current hop using a remainder of the budget and the remaining number of hops.
25. The node of any one of claims 14-24, wherein the node includes multiple output ports and each output port is associated with the one or more queues.
26. The node of any one of claims 15-25, wherein the one or more queues are a latency guarantee queue.
27. A non-transitory computer-readable medium storing computer instructions for regulating delivery time of a data packet between communication devices that, when executed by one or more processors, cause the one or more processors to perform the steps of: receiving a plurality of data packets at a node, the data packets each specifying a delivery time; storing, in one or more queues of the node, the data packets; generating, by the node, an ordered list of the data packets stored in the one or more queues based on a transmission time for each of the data packets and the delivery time for each of the data packets; and sending, by the node, the data packets to a next hop node according to the ordered list.
28. The non-transitory computer-readable medium of claim 27, wherein the instructions further cause the one or more processors to perform the step of sorting the data packets in the one or more queues in order of transmission time to form an ordered data packet set.
29. The non-transitory computer-readable medium of claim 27 or 28, wherein generating the ordered list further causes the one or more processors to perform the steps of: calculating a total transmission time as a sum of the transmission time of each data packet in the plurality of data packets stored in the one or more queues; and determining a data packet with a largest transmission time, from one or more of the plurality of data packets having a delivery time indicating a deadline at a current hop that is greater than or equal to the total transmission time.
30. The non-transitory computer-readable medium of claim 29, wherein generating the ordered list further causes the one or more processors to perform the steps of: adding the determined data packet to a beginning of the ordered list such that the data packets are ordered beginning with a first of the data packets to be sent and ending with a last of the data packets to be sent in the sending step; and removing the determined data packet from the one or more queues.
31. The non-transitory computer-readable medium of claim 30, wherein the instructions further cause the one or more processors to perform the step of iteratively repeating the calculating, the determining, the adding and the removing steps until none of the data packets remain in the ordered data packet set.
32. The non-transitory computer-readable medium of claim 28, wherein the instructions further cause the one or more processors to perform the step of dropping a current data packet when there is insufficient time to send the current data packet by a time required by the delivery time.
33. The non-transitory computer-readable medium of any one of claims 27-29, wherein the data packets stored in the one or more queues have a highest scheduling priority compared to other received data packets without a specified delivery time.
34. The non-transitory computer-readable medium of any one of claims 27-33, wherein the ordered list achieves a minimum average dwell time of the data packets in the queue and satisfies the delivery time for each of the data packets.
35. The non-transitory computer-readable medium of any one of claims 27-34, wherein the plurality of data packets are received using a new internet protocol, each of the plurality of data packets comprises a header specification field, a shipping specification field, a contract specification field and a payload specification field, and the contract specification field includes a contract specifying the delivery time.
36. The non-transitory computer-readable medium of any one of claims 27-35, wherein each contract includes one or more contract clauses, and each contract clause includes an action indicating the data packet has the delivery time and metadata identifying a budget of the data packet and remaining number of hops.
37. The non-transitory computer-readable medium of claim 36, wherein the instructions further cause the one or more processors to perform the step of calculating the deadline at the current hop using a remainder of the budget and the remaining number of hops.
38. The non-transitory computer-readable medium of any one of claims 27-37, wherein the node includes multiple output ports and each output port is associated with the one or more queues.
39. The non-transitory computer-readable medium of any one of claims 27-38, wherein the one or more queues are a latency guarantee queue.
40. A method for scheduling a plurality of data packets, each indicating a delivery time, between communication devices, comprising: determining whether a schedule satisfying the delivery time for each of the plurality of data packets is feasible; ordering the plurality of data packets in a queue to reduce an average dwell time when the schedule is feasible and transmitting the plurality of packets in the order to satisfy the delivery time for each of the plurality of data packets; and dropping a data packet from the plurality of data packets when the schedule is not feasible.
41. The method of claim 40, wherein the schedule is determined to be feasible.
42. The method of claim 40, wherein the schedule is determined to be not feasible.
43. The method of claim 42, wherein the dropped data packet has a per-hop deadline less than a total transmission time of the plurality of data packets.
44. The method of claim 41, wherein the reducing comprises minimizing the average dwell time for each of the ordered plurality of data packets as long as the delivery time for each of the plurality of data packets is satisfied.
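Claims 40-44 can be read as a feasibility test wrapped around the ordering sketched earlier: if no send order meets every per-hop deadline, a packet whose deadline is smaller than the total transmission time is dropped and the test is retried. The sketch below reuses QueuedPacket and build_ordered_list from the first example; choosing the tightest-deadline packet as the one to drop is an assumption, since claim 43 only requires that the dropped packet's per-hop deadline be less than the total transmission time.

```python
from typing import List, Tuple

def schedule_or_drop(queue: List[QueuedPacket]) -> Tuple[List[QueuedPacket], List[QueuedPacket]]:
    """Return (send_order, dropped). Drops packets only when no feasible
    schedule exists, mirroring claims 40 and 42-43."""
    working = list(queue)
    dropped: List[QueuedPacket] = []
    while working:
        order = build_ordered_list(working)
        if order is not None:
            return order, dropped  # feasible: transmit in this order (claims 41, 44)
        total_tx = sum(p.tx_time for p in working)
        # Any packet with hop_deadline < total_tx satisfies claim 43; picking the
        # tightest deadline is just one reasonable policy, not one from the claims.
        victim = min((p for p in working if p.hop_deadline < total_tx),
                     key=lambda p: p.hop_deadline)
        working.remove(victim)
        dropped.append(victim)
    return [], dropped
```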
Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
US202063020970P | 2020-05-06 | 2020-05-06 |
US63/020,970 | 2020-05-06 | |

Publications (1)

Publication Number | Publication Date
---|---
WO2021101610A1 (en) | 2021-05-27

Family

ID=72433046

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/US2020/048244 (WO2021101610A1) | Latency guarantee for data packets in a network | 2020-05-06 | 2020-08-27

Country Status (1)

Country | Link
---|---
WO (1) | WO2021101610A1 (en)
Patent Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2018086558A1 * | 2016-11-10 | 2018-05-17 | Huawei Technologies Co., Ltd. | Network latency scheduling

Non-Patent Citations (3)

Title
---
A. CLEMM, T. ECKERT: "High-Precision Latency Forwarding over Packet-Programmable Networks", IEEE/IFIP Network Operations and Management Symposium, 2020
K. MAKHIJANI, R. LI, H. ELBAKOURY: "Using Big Packet Protocol Framework to Support Low Latency based Large Scale Networks", 6 February 2019 (2019-02-06), XP009524226, ISBN: 978-1-61208-711-5, Retrieved from the Internet <URL:http://ns2.thinkmind.org/index.php?view=article&articleid=icns_2019_1_20_10030> [retrieved on 2020-11-23] *
K. MAKHIJANI, H. YOUSEFI, K. K. RAMAKRISHNAN, R. LI: "Extended Abstract: Coordinated Communications for Next-Generation Networks", 2019 IEEE 27th International Conference on Network Protocols (ICNP), 2019

Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2023130744A1 * | 2022-01-04 | 2023-07-13 | 中兴通讯股份有限公司 | Message scheduling method, network device, storage medium, and computer program product
CN118035683A * | 2024-02-21 | 2024-05-14 | 广州龙数科技有限公司 | Time sequence data analysis method, system and equipment
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20768791; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 20768791; Country of ref document: EP; Kind code of ref document: A1