US20070230352A1 - Multipath Routing Architecture for Large Data Transfers - Google Patents
- Publication number
- US20070230352A1 (application Ser. No. 11/690,942)
- Authority
- US
- United States
- Prior art keywords
- path
- node
- exit
- packet
- entry
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/24—Multipath
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/11—Identifying congestion
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/122—Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
- H04L47/28—Flow control; Congestion control in relation to timing considerations
- H04L47/283—Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
Detailed Description
- FIG. 1 is a schematic of an exemplary network architecture 100 in accordance with an aspect of the invention.
- the overlay network 100 comprises two types of transit nodes, identified as relay nodes 102 1 , 102 2 and 102 3 and gateway nodes 104 1 , 104 2 . . . 104 4 .
- Such a deployment scenario can be implemented where an edge service provider supports such relays distributed across the Internet to enable multipath routing, and organizations install gateways at their network exits.
- End hosts 106 1 , 106 2 . . . 106 4 communicate with the Internet 108 via their respective gateway node 104 1 , 104 2 . . . 104 4 .
- the gateway nodes and relay nodes allow the overlay network 100 to aggressively exploit the path diversity and load imbalance in the Internet 108 to seek and utilize paths with abundant capacity. In accordance with an aspect of the present invention, a multipath congestion control algorithm is employed to provide improved utilization and a fair allocation of network-wide resources. This is described in further detail below.
- FIG. 2 is a schematic of a multipath routing network 200 for providing support for large data transfers in accordance with an aspect of the invention.
- to avoid having to modify the end hosts 202 and 204 , an edge service is provided that utilizes a set of transit nodes.
- for each end-to-end connection, a transit node can logically operate as an entry gateway or node 206 , as a relay node 208 1 or 208 2 , or as an exit gateway or node 210 .
- a transmitting end host 202 communicates with the entry node 206 . Packets P 1 and P 2 from end host 202 enter the entry node 206 , and are communicated to the exit node 210 either directly or via one of the relay nodes 208 1 or 208 2 .
- the exit node 210 reorders the packets in a reorder queue 211 as P′ and delivers P′ to the receiving end host 204 . Acknowledgments generated by the receiving node 204 for all packets are sent back directly from the exit node 210 to the entry node 206 .
- Each entry node contains a congestion control module 214 , path selection module 216 and window control module 218 .
- Exit node 210 includes a congestion detection module 220 for detecting congestion on incoming paths as a result of packet delays or packet losses. When congestion is detected at the exit node 210 , congestion signals are communicated back to the entry node 206 .
- the functions of the end host 202 and entry node 206 may be implemented on a single network access device 228 .
- the functions of the exit node 210 and the end host 204 may take place on a single network access device 230 .
- FIG. 2 depicts a logical separation of these components.
- the end host 202 may be an application running on network access device 228 and the entry node 206 is an agent running on network access device 228 .
- the end host 204 is an application running on network access device 230 and the exit node 210 is an agent running on network access device 230 .
- the entry and exit node architectures can be implemented on any router in the network 200 .
- in order to maximize bandwidth utilization on multiple paths while ensuring fair allocation of network resources, the multipath routing network 200 implements a congestion control algorithm based on a modified Kelly-Voice (KV) multipath routing and congestion controller, TCP Nice, and TCP-LP. See F. Kelly and T. Voice, Stability of End-to-end Algorithms for Joint Routing and Rate Control, SIGCOMM Comput. Commun. Rev., 35(2):5-12, 2005; A. Venkataramani, R. Kokku, and M. Dahlin, TCP Nice: A Mechanism for Background Transfers, In Proc. of OSDI, 2002; and A. Kuzmanovic and E. W. Knightly, TCP-LP: A Distributed Algorithm for Low Priority Data Transfer, In Proc. of INFOCOM, 2003, the contents of which are hereby incorporated by reference herein.
- a congestion control algorithm is executed by module 214 on the entry node 206 .
- given a set of paths on which packets may be sent, the congestion control module 214 estimates the size of the congestion window on each path, which indicates a stable sending rate on that path.
- the congestion control module uses a multiplicative increase multiplicative decrease (MIMD) scheme.
- for multiplicative increase (MI): on each positive acknowledgment on a path, increment the congestion window of the path by an MI parameter α, i.e., w i ← w i + α.
- for multiplicative decrease (MD): on a congestion signal on a path, decrement the window of the path by an MD parameter β times the weighted sum of the congestion window on the current path (w i ) and the total congestion window on all paths (W). On a delay signal, w i ← max(1, w i − β × (w i × ξ + W × (1 − ξ))); on a loss signal, w i ← max(1, w i − W/2).
- the congestion window is decremented only once per-round trip time on a path after observing a threshold number of congestion indications.
- the window is not incremented within one round-trip time of the previous decrement. Setting the weight ξ to different values permits several variants of multipath control, as described in the following sections. There are three types of control variants, classified as independent, joint, and adaptive.
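By way of illustration, the MI/MD rules above can be sketched as follows; the parameter values here are illustrative assumptions, and note that the weight ξ = 1 makes the decrement depend only on the path's own window (independent-like behavior), while ξ = 0 makes it depend on the total window (joint-like behavior):

```python
def on_ack(w, i, alpha=0.01):
    """MI: on each positive acknowledgment on path i, grow its window by alpha."""
    w[i] += alpha

def on_delay_signal(w, i, beta=0.125, xi=0.5):
    """MD on a delay-based congestion signal: back off by beta times the
    weighted sum of this path's window and the total window W."""
    W = sum(w)
    w[i] = max(1, w[i] - beta * (w[i] * xi + W * (1 - xi)))

def on_loss_signal(w, i):
    """MD on a loss signal: back off by half the total window W."""
    W = sum(w)
    w[i] = max(1, w[i] - W / 2)
```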
- FIG. 3 is a schematic illustrating the above multipath congestion control variants for a pair of senders 302 1 (S 1 ) and 302 2 (S 2 ) that communicate with receivers 304 1 (R 1 ) and 304 2 (R 2 ).
- the sender/receiver pair 302 1 - 304 1 has access to two paths 306 and 308 .
- the sender/receiver pair 302 2 - 304 2 has access to one path 310 that is shared with path 308 .
- with the independent control variant, sender/receiver pair 302 1 - 304 1 gets an equal share of bandwidth on the path 308 that is shared with path 310 of sender/receiver pair 302 2 - 304 2 . Because pair 302 1 - 304 1 additionally uses path 306 , this leads to a higher bandwidth allocation to sender/receiver pair 302 1 - 304 1 as compared to sender/receiver pair 302 2 - 304 2 .
- the joint control variant moves each transfer to its best set of paths, where sender/receiver pair 302 1 - 304 1 occupies only path 306 (10 Mbps) and sender/receiver pair 302 2 - 304 2 occupies path 310 (8 Mbps), thus providing a fairer allocation of resources.
- when a transfer is the only one on a path, however, the joint control variant can lead to under-utilization of capacity on that path. This occurs because even if sender/receiver pair 302 1 - 304 1 is the only transfer on the path, the transfer backs off proportionally to the cumulative congestion window on all paths. This results in lower throughput than the independent variant would achieve.
- the features of the independent and joint variants may be combined in an adaptive control in accordance with an aspect of the invention.
- in the adaptive control variant, w i is the congestion window on path i at the time of a decrement, and M i is the maximum congestion window size observed by the transfer on path i.
- the adaptive control variant has the following properties. When a multipath connection is the only one that is active on one or more of its paths, the multiplicative decrement on such paths behaves more like the independent control variant because w i is close to M i . As the number of transfers sharing the path increases, the characteristics of the adaptive control variant become more like those of the joint control variant. To ensure best performance, each transfer should observe the maximum congestion window M i .
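One plausible realization of the adaptive decrement, offered only as a sketch (the exact weighting rule is an assumption, not stated verbatim above), is to set the per-path MD weight to w i / M i :

```python
def adaptive_decrement(w, M, i, beta=0.125):
    # Hypothetical adaptive variant: weight xi = w_i / M_i (an assumption).
    # A lone transfer has w_i near M_i, so it backs off like the independent
    # variant; on a heavily shared path w_i shrinks relative to M_i and the
    # decrement shifts toward the joint variant's W-proportional backoff.
    W = sum(w)
    xi = w[i] / M[i]
    w[i] = max(1, w[i] - beta * (w[i] * xi + W * (1 - xi)))
    return w[i]
```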
- FIG. 4 illustrates a path selection process at the entry node 206 (see FIG. 2 ).
- a packet arrives from sender 202 (see FIG. 2 ).
- the entry node 206 attempts to balance load across available paths by choosing for each packet (e.g., P 1 , P 2 . . . ) a path with a minimum
- the entry node 206 selects a path using the corresponding relay node (i.e., 208 1 or 208 2 ) to send the packet to the exit node 210 .
- the entry node 206 encapsulates the packets and at step 408 routes each packet to the appropriate relay node.
- Each encapsulated packet carries a multipath header that contains a packet type (representing data, probe, lost signal, and the like), a timestamp (representing the time the packet left the entry node 206 ), and a path identifier (i.e., the IP address of the relay node).
- This header permits the exit node 210 to identify the path which the packet traversed through the network, and to detect and associate congestion on the path based on the delay observed for this packet.
- packet P 1 traverses path 222 via relay node 208 1
- packet P 2 traverses path 224 via relay node 208 2 from entry node 206 to exit node 210 .
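Since the exact per-packet selection metric is truncated in the text above, the following stand-in (an illustrative assumption, not the claimed metric) balances load by sending each packet on the path whose unacknowledged bytes are smallest relative to its congestion window:

```python
def pick_path(windows, in_flight):
    # Choose the path with the smallest ratio of unacknowledged bytes to
    # congestion window, spreading traffic in proportion to each path's
    # sending rate. This metric is an illustrative assumption.
    return min(range(len(windows)), key=lambda i: in_flight[i] / windows[i])
```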
- FIG. 5 is a flow diagram of a process for receiving packets at the exit node. At step 500 , the packets are received at the exit node 210 from the different paths selected by the entry node 206 .
- if the packets are in sequence at step 502 , they are immediately sent to the receiver 204 at step 504 . If not, the packets are kept in a reorder queue (block 211 ) at step 506 until the sequence is complete at step 508 or a timer expires at step 510 . Since the reorder delay required at the exit node 210 is governed by the path with the longest delay, the timer is set to a configurable multiple of the minimum delay on the longest path. The one-way delay for each path can be estimated at the entry node 206 (in cooperation with the exit node 210 ), and this value is then sent to the exit node 210 .
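The reordering logic above can be sketched as follows; the timeout value is a placeholder for the multiple of the minimum delay on the longest path described above:

```python
import heapq

class ReorderQueue:
    """Minimal sketch of the exit-node reorder queue: packets are held until
    the sequence is complete or a per-packet timer expires."""

    def __init__(self, next_seq=0, timeout=0.2):
        self.next_seq = next_seq
        self.timeout = timeout
        self.heap = []  # entries: (seq, arrival_time, payload)

    def push(self, seq, payload, now):
        heapq.heappush(self.heap, (seq, now, payload))
        return self._release(now)

    def _release(self, now):
        out = []
        while self.heap:
            seq, arrived, payload = self.heap[0]
            # Deliver when in sequence, or when the packet has waited too long.
            if seq == self.next_seq or now - arrived >= self.timeout:
                heapq.heappop(self.heap)
                self.next_seq = max(self.next_seq, seq + 1)
                out.append(payload)
            else:
                break
        return out
```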
- FIG. 6 is a flow diagram of a process for estimating packet delays at the exit node 210 .
- Each packet is timestamped (in the multipath header) at the entry node 206 prior to being communicated on one of the available paths 222 , 224 .
- the exit node 210 receives packets from the entry node 206 .
- the exit node 210 calculates the one-way delay using the timestamp and the current time.
- the exit node 210 keeps track of minimum (dmin i ) and maximum (dmax i ) delays observed by packets on each path for a connection.
- when congestion is detected on a path, the exit node 210 sends a congestion indication, represented by 226 , to the entry node 206 .
- the congestion indication 226 can be indicated by either sending an explicit message from exit node 210 to entry node 206 , or by piggybacking the congestion indication on a returning acknowledgment. It will be appreciated by those skilled in the art that the latter is preferred for efficiency. If there is congestion, then at step 608 the entry node implements the congestion control protocol described above to reduce the congestion window.
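A sketch of the delay-based detection follows; the text tracks dmin i and dmax i per path but does not spell out the exact trigger threshold here, so the fraction tau and the trigger rule below are assumptions:

```python
class DelayMonitor:
    """Exit-node congestion detection from one-way packet delays (a sketch).
    A congestion signal fires when the current delay exceeds the minimum
    observed delay by a fraction tau of the observed delay range; tau and
    the exact trigger rule are illustrative assumptions."""

    def __init__(self, tau=0.2):
        self.tau = tau
        self.dmin = {}  # per-path minimum observed one-way delay
        self.dmax = {}  # per-path maximum observed one-way delay

    def on_packet(self, path, sent_ts, recv_ts):
        d = recv_ts - sent_ts  # one-way delay from the multipath header timestamp
        self.dmin[path] = min(self.dmin.get(path, d), d)
        self.dmax[path] = max(self.dmax.get(path, d), d)
        threshold = self.dmin[path] + self.tau * (self.dmax[path] - self.dmin[path])
        return d > threshold  # True -> send a congestion indication (226)
```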
- FIG. 7 is a flow diagram of a process for detecting packet loss at the exit node 210 and determining at the entry node 206 on which path the packet loss occurred.
- the exit node 210 maintains a reorder queue 211 and can thus detect packet losses earlier than the receiver 204 .
- the exit node 210 maintains a variable last_byte_rcvd i for each path i that indicates the highest byte received on the path.
- the exit node 210 further maintains a variable rcvnxt that indicates the next byte expected in the sequence.
- when a gap appears in the received byte sequence, the exit node 210 detects a possible packet loss.
- the exit node 210 determines the range of missing bytes from rcvnxt and the sequence number of the packet at the head of the reorder queue.
- the exit node 210 sends a loss indication message containing the range of missing bytes. While the exit node 210 cannot exactly determine on which path the loss occurred, the range can be used at the entry node 206 to determine the path(s) on which the packets containing the missing bytes were sent. For each path on which any of the missing bytes were sent, at step 710 the congestion window is reduced at the entry node 206 as described above. As will be appreciated by those skilled in the art, this technique of detecting packet losses is simpler and faster than one that waits for and interprets duplicate acknowledgements from the receiver 204 .
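The loss-detection bookkeeping above can be sketched as two small helpers; sent_ranges is a hypothetical entry-node structure mapping each path to the byte ranges sent on it:

```python
def missing_range(rcvnxt, head_seq):
    # Exit node: infer a loss from the gap between the next expected byte
    # (rcvnxt) and the sequence number at the head of the reorder queue.
    return (rcvnxt, head_seq - 1) if head_seq > rcvnxt else None

def paths_for_range(sent_ranges, lo, hi):
    # Entry node: any path that sent bytes overlapping [lo, hi] is charged
    # with the loss, and its congestion window is reduced.
    return [p for p, ranges in sent_ranges.items()
            if any(s <= hi and e >= lo for s, e in ranges)]
```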
- a probe (unchoke request) with a timestamp is periodically sent on each choked path. If the probe does not perceive a delay greater than the congestion threshold described above, the exit node 210 returns an unchoke indication to the entry node 206 . Otherwise, the exit node 210 drops the probe. Implementing choking and unchoking automatically handles path failures.
- the techniques described above are independent of the congestion control algorithm that is implemented by the sender 202 . This can lead to a mismatch between the congestion windows of the sender 202 and the entry node 206 . Such a mismatch can result in packet losses that reduce the congestion window at the sender 202 , and thereby reduce the throughput achieved by the data transfer.
- to address this mismatch, the entry node 206 can rewrite the TCP header in acknowledgments being returned to the sender 202 with a receiver window equal to the minimum of the window advertised by the receiver and the window allowed by the entry node 206 .
- SYN packets can be monitored to check if the end-hosts exchange the scaling option, and the receiver window can be scaled accordingly and rewritten in the acknowledgements.
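As a sketch (the function and parameter names are illustrative), the rewrite reduces to clamping the advertised window and re-expressing it in the units of the negotiated window-scale option:

```python
def rewrite_receiver_window(advertised_wnd, entry_wnd, wscale=0):
    # Clamp the receiver window echoed to the sender to the minimum of the
    # receiver's advertised window and the window the entry node allows,
    # expressed in units of the window-scale shift learned from the SYN
    # exchange (wscale = 0 when no scaling option was negotiated).
    return min(advertised_wnd, entry_wnd) >> wscale
```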
Description
- This non-provisional application claims the benefit of U.S. Provisional Application Ser. No. 60/804,674, filed on Jun. 14, 2006, and U.S. Provisional Application Ser. No. 60/743,846, filed on Mar. 28, 2006, both entitled “A Multipath Routing Architecture for Background Data Transfers,” the contents of which are hereby incorporated by reference herein.
- The present invention relates generally to communication networks, and more particularly, to a multipath routing architecture and congestion control protocol for harnessing network capacity across multiple Internet paths for point-to-point large data transfers.
- Large (or bulk) data transfers dominate Internet traffic today. Examples of such data transfers include peer-to-peer file sharing, content distribution, remote backups, and software updates. A recent study suggests that up to 90% of bytes traversing the Internet may be bulk data in nature. The bulk of this data is transferred between a sender and a receiver using point-to-point transport-level protocols such as TCP.
- In today's Internet, packets of a point-to-point transport-level connection from one end-host to another often traverse a single network path (comprised of a set of routers and links). This can cause high load on some paths, while underutilization on others, thereby leading to lower throughput on several connections.
- In general, the goal of a network architecture is to achieve high utilization, fairness of network resource allocation, and stability. A unipath network like the Internet cleanly separates routing and congestion control. Thus, fairness of network resource allocation simply reduces to a fair sending rate on a path independent of other paths. For example, a TCP-fair resource allocation simply means a sending rate inversely proportional to the round-trip time (RTT) and square root of the loss rate on the path. Other notions of fairness include max-min, proportional fairness, and the like. For example, a max-min fair allocation is an allocation that maximizes the minimum sending rate while satisfying link capacity constraints.
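Expressed as a formula (a standard result assumed here, with k a constant depending on the packet size and TCP variant), the TCP-fair sending rate x on a path with round-trip time RTT and loss rate p is:

```latex
x \;=\; \frac{k}{RTT \cdot \sqrt{p}}
```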
- In a multipath network, fairness of resource allocation takes on an analogous network-wide meaning, and is defined over the aggregate sending rates of users in the system. Each user is a source-destination pair and has potentially multiple paths available. For example, informally, a max-min fair allocation is one that maximizes the minimum aggregate rate of a user in the network while satisfying link capacity constraints.
- A utility-theoretic framework permits the generalization of unipath congestion controllers (e.g., TCP) and associated notions of fairness in a multipath network. See F. Kelly, A. Maulloo, and D. Tan, Rate Control in Communication Networks: Shadow Prices, Proportional Fairness and Stability, In Journal of the Operational Research Society, volume 49, 1998. This framework allows one to view different congestion controllers as distributed algorithms to optimize a global objective function defined in terms of individual utilities U(x) for each user as a function of his sending rate x. Different definitions of U(x) yield different kinds of fairness properties. The multipath scenario cleanly extends this framework by retaining well-understood utility functions (corresponding to different fairness schemes) with the unipath sending rate simply replaced by the aggregate multipath sending rate. A multipath congestion controller specifies how to control the rates on a set of paths to achieve the corresponding level of fairness of resource allocation.
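In this utility-theoretic formulation, writing P(s) for the set of paths available to user s, x_r for the sending rate on path r, and c_l for the capacity of link l, the multipath generalization optimizes each user's utility of its aggregate rate subject to the link capacity constraints:

```latex
\max_{x \ge 0} \;\sum_{s} U_s\!\Big(\sum_{r \in P(s)} x_r\Big)
\quad \text{subject to} \quad \sum_{r :\, l \in r} x_r \;\le\; c_l \quad \text{for all links } l.
```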
- Multipath routing and congestion control is a powerful architectural building block to improve utilization and fairness of resource allocation in a network, and end-to-end reliability. It would therefore be desirable to provide a multipath network architecture to support large data transfers as an edge service.
- A recently proposed transport-level multipath solution, mTCP, modifies the network protocol stack at the end-hosts to utilize multiple paths. See M. Zhang, J. Lai, A. Krishnamurthy, L. Peterson, R. Wang, A Transport Layer Approach for Improving End-to-end Performance and Robustness Using Redundant Paths, In Proc. of Usenix Annual Technical Conference, June 2004. This approach has two problems: First, modifying the network stack (that is often implemented in the operating system) is a significant barrier to widespread use because of the reluctance of users to upgrade operating systems unless they are stable and reliable releases. Second, mTCP uses independent congestion control on each path that may not ensure fair allocation of network resources. The present invention involves defining a multipath network architecture that addresses the above two problems.
- The present invention involves defining a multipath network architecture that harnesses network capacity across several paths while ensuring fair allocation across competing data transfers.
- In accordance with an aspect of the invention, a method is provided for facilitating large data transfers between a sender and a receiver (point-to-point) through a network comprising an entry node communicating with the sender, an exit node communicating with the receiver, and a plurality of paths between the entry node and the exit node, at least one of the plurality of paths being via at least one relay node between the entry node and the exit node to provide multipath routing of packets between the entry node and the exit node. The method comprises the steps of: receiving packets from the sender at the entry node; at the entry node, selecting at least one path among the plurality of paths over which to send the packets to the exit node, the selection of the at least one path being a function of path congestion; sending the packets from the entry node to the exit node via the at least one path among the plurality of paths between the entry node and exit node; reordering the packets received at the exit node; and sending the reordered packets from the exit node to the receiver.
- The method further comprises executing a multi-path congestion control protocol at the exit node to detect congestion on each path between the entry node and the exit node based on packet delays and packet losses, and executing a multipath congestion control protocol at the entry node to estimate the size of a congestion window for each path that indicates a stable sending rate for each path. The congestion control protocol uses a multiplicative increase multiplicative decrease (MIMD) protocol to control the size of the congestion window for each path.
- These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
FIG. 1 is a schematic of an exemplary network architecture in accordance with an aspect of the invention;
FIG. 2 is a schematic of the overlay network architecture, showing details of how packets from a sender to a receiver are routed from an entry node to an exit node over multiple paths in the network and delivered to the receiver by the exit node in accordance with an aspect of the invention;
FIG. 3 is a schematic of multiple senders sharing a common path vs. a single path for illustrating an aspect of the invention;
FIG. 4 is a flow diagram of a path selection process at an entry node in the network of FIG. 2 ;
FIG. 5 is a flow diagram of a process for receiving packets at the exit node in the network of FIG. 2 ;
FIG. 6 is a flow diagram of a process for estimating packet delays at the exit node; and
FIG. 7 is a flow diagram of a process for detecting packet loss at the exit node and determining at the entry node on which path the packet loss occurred.
- Embodiments of the invention will be described with reference to the accompanying drawing figures wherein like numbers represent like elements throughout. Before embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of the examples set forth in the following description or illustrated in the figures. The invention is capable of other embodiments and of being practiced or carried out in a variety of applications and in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. While individual functional blocks and components are shown in the drawings, those skilled in the art will appreciate that these functions can be performed by software modules or programs executed using a suitably programmed digital microprocessor or general-purpose computer, by individual hardware circuits, by an application-specific integrated circuit, and/or by one or more digital signal processors.
-
FIG. 1 is a schematic of anexemplary network architecture 100 in accordance with an aspect of the invention. Theoverlay network 100 comprises two types of transit nodes, identified asrelay nodes gateway nodes respective gateway node gateway nodes relay nodes overlay network 100 to aggressively exploit the path diversity and load imbalance in the Internet 108 to seek and utilize paths with abundant capacity. In accordance with an aspect of the present invention, a multipath congestion control algorithm is employed to provide improved utilization and a fair allocation of network-wide resources. This is described in further detail below. -
FIG. 2 is a schematic of amultipath routing network 200 for providing support for large data transfers in accordance with an aspect of the invention. To avoid having to modify the end hosts 202 and 204, an edge service is provided that utilizes a set of transit nodes. For each end-to-end connection, a transit node can logically operate as an entry gateway ornode 206, asrelay nodes node 210. A transmittingend host 202 communicates with theentry node 206. Packets P1 and P2 fromend host 202 enter theentry node 206, and are communicated to theexit node 210 either directly or via one of therelay nodes exit node 210 reorders the packets in areorder queue 211 as P′ and delivers P′ to the receivingend host 204. Acknowledgments generated by the receivingnode 204 for all packets are sent back directly from theexit node 210 to theentry node 206. Each entry node contains acongestion control module 214,path selection module 216 andwindow control module 218.Exit node 210 includes acongestion detection module 220 for detecting congestion on incoming paths as a result of packet delays or packet losses. When congestion is detected at theexit node 210, congestion signals are communicated back to theentry node 206. In an alternative embodiment, it will be appreciated by those skilled in the art that the functions of theend host 202 and entry node 206 (host gateway) may be implemented on a singlenetwork access device 228. Similarly, the functions of theexit node 210 and theend host 204 may take place on a singlenetwork access device 230.FIG. 2 depicts a logical separation of these components. In this regard, theentry node 206 may be an application running onnetwork access device 228 and theentry node 206 is an agent running onnetwork access device 228. Likewise, theend host 204 is an application running onnetwork access device 230 and theexit node 210 is an agent running onnetwork access device 230. 
Furthermore, the entry and exit node architectures can be implemented on any router in the network 200. - In order to maximize bandwidth utilization on multiple paths while ensuring fair allocation of network resources, the
multipath routing network 200 implements a congestion control algorithm based on a modified Kelly-Voice (KV) multipath routing and congestion controller, TCP Nice, and TCP-LP; see F. Kelly and T. Voice, Stability of End-to-End Algorithms for Joint Routing and Rate Control, SIGCOMM Comput. Commun. Rev., 35(2):5-12, 2005; A. Venkataramani, R. Kokku, and M. Dahlin, TCP Nice: A Mechanism for Background Transfers, In Proc. of OSDI, 2002; and A. Kuzmanovic and E. W. Knightly, TCP-LP: A Distributed Algorithm for Low Priority Data Transfer, In Proc. of INFOCOM, 2003, the contents of which are hereby incorporated by reference herein. - Multipath Congestion Control
- A congestion control algorithm is executed by
module 214 on the entry node 206. Given a set of paths on which packets may be sent, the congestion control module 214 estimates the size of the congestion window on each path, which indicates a stable sending rate on the path. In accordance with an aspect of the invention, the congestion control module uses a multiplicative increase, multiplicative decrease (MIMD) scheme. For multiplicative increase (MI): on each positive acknowledgment on a path, increment the congestion window of the path by an MI parameter α. For multiplicative decrease (MD): on a congestion signal on a path, decrement the window of the path by an MD parameter β times the weighted sum of the congestion window on the current path (wi) and the total congestion window on all paths (W). The following is an exemplary congestion control algorithm executed by the congestion control module: -
1: Definitions: i: path (numbered from 1 to n)
2: wi: congestion control window (in bytes) on path i
3: W: total congestion window (in bytes) on all paths
4:
5: On ack for path i
6:     wi ← wi + α
7:
8: On delay signal for path i
9:     wi ← max(1, wi − β × (wi × ξ + W × (1 − ξ)))
10:
11: On loss signal for path i
12:     wi ← max(1, wi − W/2)
The listing above gives pseudo code for the congestion control methodology described above. MI is represented by wi ← wi + α. MD is represented by wi ← max(1, wi − β × (wi × ξ + W × (1 − ξ))) and wi ← max(1, wi − W/2). To prevent over-reacting to congestion indications and to ensure stability, the congestion window is decremented only once per round-trip time on a path, after observing a threshold number of congestion indications. In addition, the window is not incremented within one round-trip time of the previous decrement. Setting ξ to different values permits several variants of multipath control, as described in the following sections. There are three types of control variants, classified as independent, joint and adaptive. - Independent: ξ=1 makes the multiplicative decrease on a path proportional to the sending rate on that path. This is the same as independent congestion control on each path, i.e., each path operates as an individual TCP flow. See M. Zhang et al., A Transport Layer Approach for Improving End-to-End Performance and Robustness Using Redundant Paths, In Proc. of USENIX 2004, which is hereby incorporated by reference herein. Thus, a multipath connection using the independent control variant with n paths operates as n TCP flows.
- Joint: ξ=0 makes the multiplicative decrease similar to that of a joint routing and congestion control as disclosed in H. Han et al., Multi-path TCP: A Joint Congestion Control and Routing Scheme to Exploit Path Diversity in the Internet, In IMA Workshop on Measurements and Modeling of the Internet, 2004, which is hereby incorporated by reference herein, and in F. Kelly and T. Voice, supra.
-
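For illustration only (this sketch is not part of the patent disclosure; the class, method, and parameter defaults are invented), the MIMD update rules above can be expressed in Python, with ξ=1 giving the independent variant and ξ=0 the joint variant:

```python
# Illustrative sketch of the MIMD controller described above; alpha,
# beta, and xi defaults are placeholders, not values from the patent.

class MultipathWindow:
    def __init__(self, n_paths, alpha=0.01, beta=0.5, xi=1.0, min_wnd=1.0):
        self.w = [min_wnd] * n_paths   # w_i: per-path windows (bytes)
        self.alpha = alpha             # MI parameter
        self.beta = beta               # MD parameter
        self.xi = xi                   # 1 = independent, 0 = joint
        self.min_wnd = min_wnd

    def total(self):
        return sum(self.w)             # W: total window over all paths

    def on_ack(self, i):
        # MI: w_i <- w_i + alpha on each positive acknowledgment
        self.w[i] += self.alpha

    def on_delay_signal(self, i):
        # MD: back off by beta times the weighted sum of w_i and W
        dec = self.beta * (self.w[i] * self.xi + self.total() * (1.0 - self.xi))
        self.w[i] = max(self.min_wnd, self.w[i] - dec)

    def on_loss_signal(self, i):
        # On loss: w_i <- max(1, w_i - W/2)
        self.w[i] = max(self.min_wnd, self.w[i] - self.total() / 2.0)
```

A full implementation would additionally rate-limit the decrement to once per round-trip time, as described above.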
FIG. 3 is a schematic illustrating the above multipath congestion control variants for a pair of senders 302 1 (S1) and 302 2 (S2) that communicate with receivers 304 1 (R1) and 304 2 (R2). The sender/receiver pair 302 1-304 1 has access to two paths, 306 and 308, while the sender/receiver pair 302 2-304 2 has access to a single path 310 that is shared with path 308. Using the independent variant, sender/receiver pair 302 1-304 1 gets an equal share of bandwidth on the path 308 shared with path 310 between sender/receiver pair 302 2-304 2. This leads to a higher bandwidth allocation to sender/receiver pair 302 1-304 1 as compared to sender/receiver pair 302 2-304 2. Using the joint variant moves each transfer to the best set of paths, where sender/receiver pair 302 1-304 1 occupies only path 306 (10 Mbps) and sender/receiver pair 302 2-304 2 occupies path 310 (8 Mbps), thus providing a fairer allocation of resources. - If sender/receiver pair 302 1-304 1 is the only transfer, however, this can lead to under-utilization of capacity on a path. This occurs because even if sender/receiver pair 302 1-304 1 is the only transfer on the path, the transfer backs off proportionally to the cumulative congestion window on all paths. This results in lower throughput than the independent variant would achieve. Thus, the features of the independent and joint variants may be combined in an adaptive control in accordance with an aspect of the invention.
- Adaptive: the value of ξ may be set to the ratio ξ = wi/Mi,
- where wi is the congestion control window during decrement, and Mi is the maximum congestion window size observed by the transfer on path i. The adaptive control variant has the following properties. When a multipath connection is the only one that is active on one or more of its paths, the multiplicative decrement on such paths behaves more like the independent control variant because wi is close to Mi. As the number of transfers sharing the path increases, the characteristics of the adaptive control variant become more like those of the joint control variant. To ensure best performance, each transfer should observe the maximum congestion window Mi.
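Assuming the adaptive rule sets ξ to the ratio wi/Mi (a reading consistent with the properties just described, in which ξ approaches 1 when wi is close to Mi and approaches 0 as sharing increases; the claimed expression may differ), the adaptive decrement can be sketched as:

```python
# Sketch of the adaptive multiplicative decrease; xi = w_i / M_i is an
# assumption consistent with the text, and beta is a placeholder value.

def adaptive_decrement(w_i, M_i, W, beta=0.5, min_wnd=1.0):
    """Return the new window for path i after a delay signal."""
    xi = w_i / M_i if M_i > 0 else 1.0   # near 1 when alone on the path
    dec = beta * (w_i * xi + W * (1.0 - xi))
    return max(min_wnd, w_i - dec)
```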
- Path Selection
- Referring now to
FIG. 4 , there is depicted a path selection process at the entry node 206 (see FIG. 2 ). At step 400, a packet arrives from the sender 202 (see FIG. 2 ). At step 402, the entry node 206 attempts to balance load across available paths by choosing for each packet (e.g., P1, P2 . . . ) a path with a minimum
- bytes_in_nwi/wi,
step 404, theentry node 206 then selects a path using the corresponding relay node (i.e., 208 1 or 208 2) to send the packet to theexit node 210. Atstep 406, theentry node 206 encapsulates the packets and atstep 408 routes the packet the appropriate relay node. Each encapsulated packet carries a multipath header that contains a packet type (representing data, probe, lost signal, and the like), a timestamp (representing the time the packet left the entry node 206), and a path identifier (i.e., the IP address of the relay node). This header permits theexit node 210 to identify the path which the packet traversed through the network, and to detect and associate congestion on the path based on the delay observed for this packet. In the example shown inFIG. 2 , packet P1 traverses path 222 viarelay node 208 1 and packet P2 traverses path 224 viarelay node 208 2 fromentry node 206 to exitnode 210. - Reordering
- Packets that are sent on multiple paths with different latencies can arrive out of order and cause the receiver to send duplicate acknowledgements to the sender. Such duplicate acknowledgements falsely indicate packet loss and can lead to a substantial reduction of the congestion window, thereby reducing throughput. In accordance with an aspect of the invention, packets received from multiple paths are reordered at the
exit node 210 prior to being communicated to the receiver 204. In the example shown in FIG. 2 , packets P1 and P2 are received from different paths 222 and 224. FIG. 5 is a flow diagram of a process for receiving packets at the exit node. At step 500, the packets are received at the exit node 210 from the different paths selected by the entry node 206. If the packets are in sequence at step 502, they are immediately sent to the receiver 204 at step 504. If not, these packets are kept in a reorder queue (block 211) at step 506 until a sequence is complete at step 508, or a timer expires at step 510. Since the reorder delay required at the exit node 210 is governed by the path with the longest delay, a timer is set to a value that is a factor ρ of the minimum delay on the longest path. The one-way delay for each path can be estimated at the entry node 206 (in cooperation with the exit node 210), and this value is then sent to the exit node 210. - Congestion Indication
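The exit-node reorder queue of FIG. 5 can be sketched as follows; the timer is reduced to an explicit flush() call, and the class and method names are illustrative:

```python
import heapq

# Sketch of the reorder queue 211: in-sequence packets are released
# immediately; out-of-sequence packets are held until the gap fills
# or the timer (here, an explicit flush) expires.

class ReorderQueue:
    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.heap = []                    # min-heap of (seq, payload)

    def push(self, seq, payload):
        """Return payloads now deliverable in order after this arrival."""
        heapq.heappush(self.heap, (seq, payload))
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            _, p = heapq.heappop(self.heap)
            out.append(p)
            self.next_seq += 1
        return out

    def flush(self):
        """On timer expiry, release everything held, accepting the gap."""
        out = [p for _, p in sorted(self.heap)]
        if self.heap:
            self.next_seq = max(s for s, _ in self.heap) + 1
        self.heap = []
        return out
```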
- To ensure fair allocation of resources, flows react to congestion signals and “back-off” to reduce their sending rate. This back-off is achieved by reacting to increased packet delays and packet losses.
-
FIG. 6 is a flow diagram of a process for estimating packet delays at the exit node 210. Each packet is timestamped (in the multipath header) at the entry node 206 prior to being communicated on one of the available paths 222, 224. At step 600, the exit node 210 receives packets from the entry node 206. At step 602, the exit node 210 calculates the one-way delay using the timestamp and the current time. At step 604, the exit node 210 keeps track of the minimum (dmini) and maximum (dmaxi) delays observed by packets on each path for a connection. If a packet's delay is greater than Δi=dmini+(dmaxi−dmini)×δ, where δ is a threshold parameter such that Δi is set to a small value, at step 606 the exit node 210 sends a congestion indication, represented by 226, to the entry node 206. The congestion indication 226 can be conveyed either by sending an explicit message from exit node 210 to entry node 206, or by piggybacking the congestion indication on a returning acknowledgment. It will be appreciated by those skilled in the art that the latter is preferred for efficiency. If there is congestion, then at step 608 the entry node implements the congestion control protocol described above to reduce the congestion window. -
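The delay check of FIG. 6 can be sketched as follows (the class name and the δ value are illustrative):

```python
# Sketch of the exit-node delay-based congestion check: congestion is
# flagged when a packet's one-way delay exceeds
# dmin_i + (dmax_i - dmin_i) * delta on its path.

class DelayDetector:
    def __init__(self, delta=0.1):
        self.delta = delta
        self.dmin = {}   # per-path minimum observed one-way delay
        self.dmax = {}   # per-path maximum observed one-way delay

    def observe(self, path, delay):
        """Record a packet's one-way delay; return True on congestion."""
        self.dmin[path] = min(self.dmin.get(path, delay), delay)
        self.dmax[path] = max(self.dmax.get(path, delay), delay)
        threshold = self.dmin[path] + (self.dmax[path] - self.dmin[path]) * self.delta
        return delay > threshold
```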
FIG. 7 is a flow diagram of a process for detecting packet loss at the exit node 210 and determining at the entry node 206 on which path the packet loss occurred. The exit node 210 maintains a reorder queue 211 and can thus detect packet losses earlier than the receiver 204. In this regard, at step 700 the exit node 210 maintains a variable last_byte_rcvdi for each path i that indicates the highest byte received on the path. The exit node further maintains a variable rcvnxt that indicates the next byte expected in the sequence. When the last_byte_rcvdi on each path exceeds rcvnxt at step 702, the exit node 210 detects a possible packet loss. At step 704, the exit node 210 determines the range of missing bytes from rcvnxt and the sequence number of the packet at the head of the reorder queue. At step 706, the exit node 210 sends a loss indication message containing the range of missing bytes. While the exit node 210 cannot exactly determine on which path the loss occurred, the range can be used at the entry node 206 to determine the path(s) on which the packets containing the missing bytes were sent. For each path on which any of the missing bytes were sent, at step 710 the congestion window is reduced at the entry node 206 as described above. As will be appreciated by those skilled in the art, this technique of detecting packet losses is simpler and faster than one that waits for and interprets duplicate acknowledgements from the receiver 204. - Congested-Path Suppression
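The loss-range detection of FIG. 7 can be sketched as follows (a simplification using byte counters; the names are illustrative):

```python
# Sketch of the exit-node loss check: once every path has delivered
# bytes beyond rcvnxt (the next byte expected in sequence), the bytes
# between rcvnxt and the head of the reorder queue are presumed lost.

def detect_loss(last_byte_rcvd, rcvnxt, head_seq):
    """
    last_byte_rcvd: per-path highest byte received; head_seq: sequence
    number at the head of the reorder queue. Returns the missing byte
    range (start, end) inclusive, or None if no loss is indicated.
    """
    if all(b > rcvnxt for b in last_byte_rcvd.values()):
        return (rcvnxt, head_seq - 1)
    return None
```

At the entry node, the returned range is mapped back to the paths on which those bytes were sent, and each such path's window is reduced.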
- To reduce the impact of congested paths on network throughput, whenever the congestion window for a path wi reaches a threshold MIN_CWND, the path is temporarily marked as “choked.” No subsequent packets are sent on this path until it is “unchoked.” From then on, a probe (unchoke request) with a timestamp is periodically sent on each choked path. If the probe does not perceive a delay greater than Δi described above, the
exit node 210 returns an unchoke indication to the entry node 206. Otherwise, the exit node 210 drops the probe. Implementing choking and unchoking automatically handles path failures. No packets are sent on the failed path, and if an unchoke request does not reach the exit node 210, then no unchoke indication is sent back and the path remains choked from the perspective of the entry node 206. This feature is analogous to permitting the congestion window to drop below one. - Sender Rate Control
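The choke/unchoke mechanism for congested paths can be sketched as follows (the names and the window floor value are illustrative):

```python
# Sketch of congested-path suppression: a path whose window falls to
# MIN_CWND is "choked" and carries only periodic probes; the exit node
# answers a probe only if its delay stays under the path's threshold.

MIN_CWND = 1.0

class ChokeState:
    def __init__(self):
        self.choked = False

    def after_window_update(self, w):
        # entry node: stop sending data once the window hits the floor
        if w <= MIN_CWND:
            self.choked = True

    def on_unchoke_indication(self):
        # entry node: resume sending when the exit node answers a probe
        self.choked = False

def answer_probe(probe_delay, delay_threshold):
    # exit node: answer uncongested probes; drop congested ones, so a
    # failed path simply stays choked at the entry node
    return "unchoke" if probe_delay <= delay_threshold else None
```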
- The techniques described above are independent of the congestion control algorithm that is implemented by the
sender 202. This can lead to a mismatch between the congestion windows at the sender 202 and the entry node 206. Such a mismatch can result in packet losses that reduce the congestion window at the sender 202, and thereby reduce the throughput achieved by the data transfer. - One way to overcome this mismatch is to ensure that the
sender 202 does not send more bytes than the entry node's congestion window permits across all paths. In this regard, the entry node 206 can rewrite the TCP header in acknowledgments being returned to the sender 202 with a receiver window equal to the minimum of the advertised receiver window and the total window allowed by the entry node 206. To handle receiver window scaling employed by most bulk transfer applications, SYN packets can be monitored to check if the end-hosts exchange the scaling option, and the receiver window can be scaled accordingly and rewritten in the acknowledgements. - The foregoing detailed description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the description of the invention, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/690,942 US7643427B2 (en) | 2006-03-28 | 2007-03-26 | Multipath routing architecture for large data transfers |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US74384606P | 2006-03-28 | 2006-03-28 | |
US80467406P | 2006-06-14 | 2006-06-14 | |
US11/690,942 US7643427B2 (en) | 2006-03-28 | 2007-03-26 | Multipath routing architecture for large data transfers |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070230352A1 true US20070230352A1 (en) | 2007-10-04 |
US7643427B2 US7643427B2 (en) | 2010-01-05 |
Family
ID=38558734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/690,942 Active 2028-05-07 US7643427B2 (en) | 2006-03-28 | 2007-03-26 | Multipath routing architecture for large data transfers |
Country Status (1)
Country | Link |
---|---|
US (1) | US7643427B2 (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009138745A1 (en) * | 2008-05-15 | 2009-11-19 | Gnodal Limited | A method of data delivery across a network |
US20100008369A1 (en) * | 2008-07-14 | 2010-01-14 | Lemko, Corporation | System, Method, and Device for Routing Calls Using a Distributed Mobile Architecture |
US20100014528A1 (en) * | 2008-07-21 | 2010-01-21 | LiveTimeNet, Inc. | Scalable flow transport and delivery network and associated methods and systems |
US20100103861A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Cell relay packet routing |
US20110016209A1 (en) * | 2008-01-14 | 2011-01-20 | Tobias Moncaster | Network characterisation |
WO2011101425A1 (en) * | 2010-02-19 | 2011-08-25 | Thomson Licensing | Control of packet transfer through a multipath session comprising a single congestion window |
US8224322B2 (en) | 2006-06-12 | 2012-07-17 | Lemko Corporation | Roaming mobile subscriber registration in a distributed mobile architecture |
US8326286B2 (en) | 2008-09-25 | 2012-12-04 | Lemko Corporation | Multiple IMSI numbers |
US8340667B2 (en) | 2008-06-26 | 2012-12-25 | Lemko Corporation | System and method to control wireless communications |
US8359029B2 (en) | 2006-03-30 | 2013-01-22 | Lemko Corporation | System, method, and device for providing communications using a distributed mobile architecture |
US20130318239A1 (en) * | 2011-03-02 | 2013-11-28 | Alcatel-Lucent | Concept for providing information on a data packet association and for forwarding a data packet |
US8599851B2 (en) | 2009-04-03 | 2013-12-03 | Ltn Global Communications, Inc. | System and method that routes flows via multicast flow transport for groups |
US8676197B2 (en) | 2006-12-13 | 2014-03-18 | Lemko Corporation | System, method, and device to control wireless communications |
US8706105B2 (en) | 2008-06-27 | 2014-04-22 | Lemko Corporation | Fault tolerant distributed mobile architecture |
US20140192645A1 (en) * | 2013-01-04 | 2014-07-10 | Futurewei Technologies, Inc. | Method for Internet Traffic Management Using a Central Traffic Controller |
US8780804B2 (en) | 2004-11-08 | 2014-07-15 | Lemko Corporation | Providing communications using a distributed mobile architecture |
US9014264B1 (en) * | 2011-11-10 | 2015-04-21 | Google Inc. | Dynamic media transmission rate control using congestion window size |
US20150215345A1 (en) * | 2014-01-27 | 2015-07-30 | International Business Machines Corporation | Path selection using tcp handshake in a multipath environment |
US9106569B2 (en) | 2009-03-29 | 2015-08-11 | Ltn Global Communications, Inc. | System and method that routes flows via multicast flow transport for groups |
US9191980B2 (en) | 2008-04-23 | 2015-11-17 | Lemko Corporation | System and method to control wireless communications |
US9198020B2 (en) | 2008-07-11 | 2015-11-24 | Lemko Corporation | OAMP for distributed mobile architecture |
US20160182369A1 (en) * | 2014-12-23 | 2016-06-23 | Anil Vasudevan | Reorder resilient transport |
WO2017115907A1 (en) * | 2015-12-28 | 2017-07-06 | 전자부품연구원 | Transmission device and method for measuring dynamic path state in various network environments |
US9942131B2 (en) * | 2015-07-29 | 2018-04-10 | International Business Machines Corporation | Multipathing using flow tunneling through bound overlay virtual machines |
US10069726B1 (en) * | 2018-03-16 | 2018-09-04 | Tempered Networks, Inc. | Overlay network identity-based relay |
US10116539B1 (en) | 2018-05-23 | 2018-10-30 | Tempered Networks, Inc. | Multi-link network gateway with monitoring and dynamic failover |
US10158545B1 (en) | 2018-05-31 | 2018-12-18 | Tempered Networks, Inc. | Monitoring overlay networks |
US10178133B2 (en) | 2014-07-30 | 2019-01-08 | Tempered Networks, Inc. | Performing actions via devices that establish a secure, private network |
US10326799B2 (en) | 2016-07-01 | 2019-06-18 | Tempered Networks, Inc. | Horizontal switch scalability via load balancing |
US10911418B1 (en) | 2020-06-26 | 2021-02-02 | Tempered Networks, Inc. | Port level policy isolation in overlay networks |
US10999154B1 (en) | 2020-10-23 | 2021-05-04 | Tempered Networks, Inc. | Relay node management for overlay networks |
US11057319B2 (en) | 2008-12-22 | 2021-07-06 | LTN Global Inc. | System and method for recovery of packets in overlay networks |
US11070594B1 (en) | 2020-10-16 | 2021-07-20 | Tempered Networks, Inc. | Applying overlay network policy based on users |
US11563539B2 (en) * | 2019-04-16 | 2023-01-24 | At&T Intellectual Property I, L.P. | Agile transport for background traffic in cellular networks |
US11876715B2 (en) * | 2018-04-13 | 2024-01-16 | Huawei Technologies Co., Ltd. | Load balancing method, device, and system |
US20240045874A1 (en) * | 2021-06-22 | 2024-02-08 | International Business Machines Corporation | Processing large query results in a database accelerator environment |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9049015B2 (en) * | 2007-09-12 | 2015-06-02 | Cisco Technology, Inc. | Allowing TCP ACK to pass a gateway while queuing data for parsing |
US8873580B2 (en) * | 2008-10-10 | 2014-10-28 | The Trustees Of The Stevens Institute Of Technology | Method and apparatus for dynamic spectrum access |
US9455897B2 (en) | 2010-04-06 | 2016-09-27 | Qualcomm Incorporated | Cooperative bandwidth aggregation using multipath transport |
JP5672779B2 (en) * | 2010-06-08 | 2015-02-18 | ソニー株式会社 | Transmission control apparatus and transmission control method |
US8694618B2 (en) | 2011-04-13 | 2014-04-08 | Microsoft Corporation | Maximizing data transfer through multiple network devices |
US8627412B2 (en) | 2011-04-14 | 2014-01-07 | Microsoft Corporation | Transparent database connection reconnect |
US8995338B2 (en) | 2011-05-26 | 2015-03-31 | Qualcomm Incorporated | Multipath overlay network and its multipath management protocol |
US9444887B2 (en) | 2011-05-26 | 2016-09-13 | Qualcomm Incorporated | Multipath overlay network and its multipath management protocol |
US9451415B2 (en) | 2011-06-17 | 2016-09-20 | Qualcomm Incorporated | Cooperative data transport |
US8885502B2 (en) | 2011-09-09 | 2014-11-11 | Qualcomm Incorporated | Feedback protocol for end-to-end multiple path network systems |
US9264353B2 (en) * | 2011-09-22 | 2016-02-16 | Qualcomm Incorporated | Dynamic subflow control for a multipath transport connection in a wireless communication network |
US10355880B2 (en) * | 2014-08-06 | 2019-07-16 | Watchy Technology Private Limited | System for communicating data |
US10904150B1 (en) * | 2016-02-02 | 2021-01-26 | Marvell Israel (M.I.S.L) Ltd. | Distributed dynamic load balancing in network systems |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6738352B1 (en) * | 1999-02-26 | 2004-05-18 | Nec Corporation | Transfer destination determining process apparatus |
US20050086363A1 (en) * | 2003-10-17 | 2005-04-21 | Minwen Ji | Traffic flow management through a multipath network |
US20060133282A1 (en) * | 2004-12-21 | 2006-06-22 | Nortel Networks Limited | Systems and methods for multipath routing |
US20060198305A1 (en) * | 2005-03-03 | 2006-09-07 | Stmicroelectronics, Inc. | Wireless LAN data rate adaptation |
US20070053300A1 (en) * | 2003-10-01 | 2007-03-08 | Santera Systems, Inc. | Methods, systems, and computer program products for multi-path shortest-path-first computations and distance-based interface selection for VoIP traffic |
US20080186863A1 (en) * | 2003-08-14 | 2008-08-07 | Baratakke Kavitha Vittal Murth | Method, system and article for improved tcp performance during packet reordering |
Cited By (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8780804B2 (en) | 2004-11-08 | 2014-07-15 | Lemko Corporation | Providing communications using a distributed mobile architecture |
US8359029B2 (en) | 2006-03-30 | 2013-01-22 | Lemko Corporation | System, method, and device for providing communications using a distributed mobile architecture |
US8688111B2 (en) | 2006-03-30 | 2014-04-01 | Lemko Corporation | System, method, and device for providing communications using a distributed mobile architecture |
US9253622B2 (en) | 2006-06-12 | 2016-02-02 | Lemko Corporation | Roaming mobile subscriber registration in a distributed mobile architecture |
US8224322B2 (en) | 2006-06-12 | 2012-07-17 | Lemko Corporation | Roaming mobile subscriber registration in a distributed mobile architecture |
US9515770B2 (en) | 2006-12-13 | 2016-12-06 | Lemko Corporation | System, method, and device to control wireless communications |
US8676197B2 (en) | 2006-12-13 | 2014-03-18 | Lemko Corporation | System, method, and device to control wireless communications |
US8880681B2 (en) * | 2008-01-14 | 2014-11-04 | British Telecommunications Public Limited Company | Network characterisation |
US20110016209A1 (en) * | 2008-01-14 | 2011-01-20 | Tobias Moncaster | Network characterisation |
US9191980B2 (en) | 2008-04-23 | 2015-11-17 | Lemko Corporation | System and method to control wireless communications |
WO2009138745A1 (en) * | 2008-05-15 | 2009-11-19 | Gnodal Limited | A method of data delivery across a network |
US20110075592A1 (en) * | 2008-05-15 | 2011-03-31 | Gnodal Limited | Method of Data Delivery Across a Network |
US8774063B2 (en) | 2008-05-15 | 2014-07-08 | Cray Uk Limited | Method of data delivery across a network |
US9749204B2 (en) | 2008-05-15 | 2017-08-29 | Cray Uk Limited | Method of data delivery across a network |
US9215098B2 (en) | 2008-06-26 | 2015-12-15 | Lemko Corporation | System and method to control wireless communications |
US8340667B2 (en) | 2008-06-26 | 2012-12-25 | Lemko Corporation | System and method to control wireless communications |
US9755931B2 (en) | 2008-06-27 | 2017-09-05 | Lemko Corporation | Fault tolerant distributed mobile architecture |
US8706105B2 (en) | 2008-06-27 | 2014-04-22 | Lemko Corporation | Fault tolerant distributed mobile architecture |
US10547530B2 (en) | 2008-06-27 | 2020-01-28 | Lemko Corporation | Fault tolerant distributed mobile architecture |
US9198020B2 (en) | 2008-07-11 | 2015-11-24 | Lemko Corporation | OAMP for distributed mobile architecture |
US7855988B2 (en) | 2008-07-14 | 2010-12-21 | Lemko Corporation | System, method, and device for routing calls using a distributed mobile architecture |
US8310990B2 (en) | 2008-07-14 | 2012-11-13 | Lemko Corporation | System, method, and device for routing calls using a distributed mobile architecture |
US9332478B2 (en) | 2008-07-14 | 2016-05-03 | Lemko Corporation | System, method, and device for routing calls using a distributed mobile architecture |
WO2010008695A2 (en) * | 2008-07-14 | 2010-01-21 | Lemko Corporation | System, method, and device for routing calls using a distributed mobile architecture |
WO2010008695A3 (en) * | 2008-07-14 | 2010-04-29 | Lemko Corporation | System, method, and device for routing calls using a distributed mobile architecture |
US20100008369A1 (en) * | 2008-07-14 | 2010-01-14 | Lemko, Corporation | System, Method, and Device for Routing Calls Using a Distributed Mobile Architecture |
US8619775B2 (en) * | 2008-07-21 | 2013-12-31 | Ltn Global Communications, Inc. | Scalable flow transport and delivery network and associated methods and systems |
US20100014528A1 (en) * | 2008-07-21 | 2010-01-21 | LiveTimeNet, Inc. | Scalable flow transport and delivery network and associated methods and systems |
US8326286B2 (en) | 2008-09-25 | 2012-12-04 | Lemko Corporation | Multiple IMSI numbers |
US8744435B2 (en) | 2008-09-25 | 2014-06-03 | Lemko Corporation | Multiple IMSI numbers |
US20100103864A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Cell relay protocol |
US20100103863A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | BEARER QoS MAPPING FOR CELL RELAYS |
US20100103865A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Header compression for cell relay communications |
US8902805B2 (en) | 2008-10-24 | 2014-12-02 | Qualcomm Incorporated | Cell relay packet routing |
US20100103845A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Cell relay mobility procedures |
US9088939B2 (en) | 2008-10-24 | 2015-07-21 | Qualcomm Incorporated | Bearer QoS mapping for cell relays |
US20100103862A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Device attachment and bearer activation using cell relays |
US20100103857A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Cell relay network attachment procedures |
US20100103861A1 (en) * | 2008-10-24 | 2010-04-29 | Qualcomm Incorporated | Cell relay packet routing |
US8401068B2 (en) | 2008-10-24 | 2013-03-19 | Qualcomm Incorporated | Device attachment and bearer activation using cell relays |
US11057319B2 (en) | 2008-12-22 | 2021-07-06 | LTN Global Inc. | System and method for recovery of packets in overlay networks |
US9106569B2 (en) | 2009-03-29 | 2015-08-11 | Ltn Global Communications, Inc. | System and method that routes flows via multicast flow transport for groups |
US8599851B2 (en) | 2009-04-03 | 2013-12-03 | Ltn Global Communications, Inc. | System and method that routes flows via multicast flow transport for groups |
WO2011101425A1 (en) * | 2010-02-19 | 2011-08-25 | Thomson Licensing | Control of packet transfer through a multipath session comprising a single congestion window |
US9660912B2 (en) | 2010-02-19 | 2017-05-23 | Thomson Licensing | Control of packet transfer through a multipath session comprising a single congestion window |
US9674054B2 (en) * | 2011-03-02 | 2017-06-06 | Alcatel Lucent | Concept for providing information on a data packet association and for forwarding a data packet |
US20130318239A1 (en) * | 2011-03-02 | 2013-11-28 | Alcatel-Lucent | Concept for providing information on a data packet association and for forwarding a data packet |
US9014264B1 (en) * | 2011-11-10 | 2015-04-21 | Google Inc. | Dynamic media transmission rate control using congestion window size |
US9450874B2 (en) * | 2013-01-04 | 2016-09-20 | Futurewei Technologies, Inc. | Method for internet traffic management using a central traffic controller |
US20140192645A1 (en) * | 2013-01-04 | 2014-07-10 | Futurewei Technologies, Inc. | Method for Internet Traffic Management Using a Central Traffic Controller |
US20150215345A1 (en) * | 2014-01-27 | 2015-07-30 | International Business Machines Corporation | Path selection using tcp handshake in a multipath environment |
US10749993B2 (en) | 2014-01-27 | 2020-08-18 | International Business Machines Corporation | Path selection using TCP handshake in a multipath environment |
US10362148B2 (en) * | 2014-01-27 | 2019-07-23 | International Business Machines Corporation | Path selection using TCP handshake in a multipath environment |
US20190273811A1 (en) * | 2014-01-27 | 2019-09-05 | International Business Machines Corporation | Path selection using tcp handshake in a multipath environment |
US10178133B2 (en) | 2014-07-30 | 2019-01-08 | Tempered Networks, Inc. | Performing actions via devices that establish a secure, private network |
US9979640B2 (en) * | 2014-12-23 | 2018-05-22 | Intel Corporation | Reorder resilient transport |
US20160182369A1 (en) * | 2014-12-23 | 2016-06-23 | Anil Vasudevan | Reorder resilient transport |
US11502952B2 (en) | 2014-12-23 | 2022-11-15 | Intel Corporation | Reorder resilient transport |
US9942131B2 (en) * | 2015-07-29 | 2018-04-10 | International Business Machines Corporation | Multipathing using flow tunneling through bound overlay virtual machines |
WO2017115907A1 (en) * | 2015-12-28 | 2017-07-06 | 전자부품연구원 | Transmission device and method for measuring dynamic path state in various network environments |
US10326799B2 (en) | 2016-07-01 | 2019-06-18 | Tempered Networks, Inc. | Horizontal switch scalability via load balancing |
US10069726B1 (en) * | 2018-03-16 | 2018-09-04 | Tempered Networks, Inc. | Overlay network identity-based relay |
US10200281B1 (en) | 2018-03-16 | 2019-02-05 | Tempered Networks, Inc. | Overlay network identity-based relay |
US10797993B2 (en) | 2018-03-16 | 2020-10-06 | Tempered Networks, Inc. | Overlay network identity-based relay |
US11876715B2 (en) * | 2018-04-13 | 2024-01-16 | Huawei Technologies Co., Ltd. | Load balancing method, device, and system |
US10116539B1 (en) | 2018-05-23 | 2018-10-30 | Tempered Networks, Inc. | Multi-link network gateway with monitoring and dynamic failover |
US10797979B2 (en) | 2018-05-23 | 2020-10-06 | Tempered Networks, Inc. | Multi-link network gateway with monitoring and dynamic failover |
US11582129B2 (en) | 2018-05-31 | 2023-02-14 | Tempered Networks, Inc. | Monitoring overlay networks |
US10158545B1 (en) | 2018-05-31 | 2018-12-18 | Tempered Networks, Inc. | Monitoring overlay networks |
US11509559B2 (en) | 2018-05-31 | 2022-11-22 | Tempered Networks, Inc. | Monitoring overlay networks |
US11563539B2 (en) * | 2019-04-16 | 2023-01-24 | At&T Intellectual Property I, L.P. | Agile transport for background traffic in cellular networks |
US10911418B1 (en) | 2020-06-26 | 2021-02-02 | Tempered Networks, Inc. | Port level policy isolation in overlay networks |
US12095743B2 (en) | 2020-06-26 | 2024-09-17 | Tyco Fire & Security Gmbh | Port level policy isolation in overlay networks |
US11729152B2 (en) | 2020-06-26 | 2023-08-15 | Tempered Networks, Inc. | Port level policy isolation in overlay networks |
US11824901B2 (en) | 2020-10-16 | 2023-11-21 | Tempered Networks, Inc. | Applying overlay network policy based on users |
US11070594B1 (en) | 2020-10-16 | 2021-07-20 | Tempered Networks, Inc. | Applying overlay network policy based on users |
US11831514B2 (en) | 2020-10-23 | 2023-11-28 | Tempered Networks, Inc. | Relay node management for overlay networks |
US10999154B1 (en) | 2020-10-23 | 2021-05-04 | Tempered Networks, Inc. | Relay node management for overlay networks |
US12224912B2 (en) | 2020-10-23 | 2025-02-11 | Tyco Fire & Security Gmbh | Relay node management for overlay networks |
US20240045874A1 (en) * | 2021-06-22 | 2024-02-08 | International Business Machines Corporation | Processing large query results in a database accelerator environment |
US12259892B2 (en) * | 2021-06-22 | 2025-03-25 | International Business Machines Corporation | Processing large query results in a database accelerator environment |
Also Published As
Publication number | Publication date |
---|---|
US7643427B2 (en) | 2010-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7643427B2 (en) | Multipath routing architecture for large data transfers | |
JP4632874B2 (en) | Communication terminal | |
US9621384B2 (en) | Systems and methods for communicating data over parallel data paths | |
US7369498B1 (en) | Congestion control method for a packet-switched network | |
Floyd | A report on recent developments in TCP congestion control | |
US9660912B2 (en) | Control of packet transfer through a multipath session comprising a single congestion window | |
Mankin et al. | Gateway congestion control survey | |
CN103490972B (en) | Multilink tunnel message transmission method and system | |
EP1344359B1 (en) | Method of enhancing the efficiency of data flow in communication systems | |
US20060209838A1 (en) | Method and system for estimating average bandwidth in a communication network based on transmission control protocol | |
Rojviboonchai et al. | RM/TCP: Protocol for reliable multi-path transport over the internet | |
CA2372023A1 (en) | Overload control method for a packet-switched network | |
JP3862003B2 (en) | Band control method, congestion control method, and network configuration apparatus | |
Ayar et al. | A transparent reordering robust TCP proxy to allow per-packet load balancing in core networks | |
JP4505575B2 (en) | COMMUNICATION SYSTEM, GATEWAY TRANSMISSION DEVICE, GATEWAY RECEPTION DEVICE, TRANSMISSION METHOD, RECEPTION METHOD, AND INFORMATION RECORDING MEDIUM | |
Jungmaier et al. | On SCTP multi-homing performance | |
Shailendra et al. | MPSCTP: A multipath variant of SCTP and its performance comparison with other multipath protocols | |
Wu et al. | Dynamic congestion control to improve performance of TCP split-connections over satellite links | |
Liu et al. | Delivering faster congestion feedback with the mark-front strategy | |
Bisoy et al. | Throughput and Compatibility Analysis of TCP Variants in Heterogeneous Environment | |
Magalhaes et al. | Improving Performance of Rate-Based Transport Protocols in Wireless Environments | |
五十嵐和美 | Studies on congestion control mechanisms realizing various end-to-end communication qualities | |
Santhi et al. | MNEWQUE: A New Approach to TCP/AQM with ECN | |
Djonova-Popova et al. | Congestion Control Strategies | |
Zheng | Adaptive Explicit Congestion Notification (AECN) for Heterogeneous Flows |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NEC LABORATORIES AMERICA, INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KOKKU, RAVINDRANATH;BOHRA, ANIRUDDHA;GANGULY, SAMRAT;AND OTHERS;REEL/FRAME:019062/0211 Effective date: 20070326 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: NEC CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEC LABORATORIES AMERICA, INC.;REEL/FRAME:025599/0212 Effective date: 20110106 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |