
HK1180494A - Computer system and communication method in computer system - Google Patents


Info

Publication number
HK1180494A
HK1180494A (application HK13107728.9A)
Authority
HK
Hong Kong
Prior art keywords
node
flow
switches
computer system
entry
Prior art date
Application number
HK13107728.9A
Other languages
Chinese (zh)
Inventor
高岛正德
加濑知博
上野洋史
增田刚久
尹秀薰
Original Assignee
日本电气株式会社 (NEC Corporation)
Application filed by 日本电气株式会社 (NEC Corporation)
Publication of HK1180494A

Description

Computer system and communication method in computer system
Technical Field
The present invention relates to a computer system and a communication method in the computer system, and more particularly to a computer system using an open flow (OpenFlow) technique.
Background
In communications using ethernet (registered trademark), the Spanning Tree Protocol (STP) blocks redundant physical links available in the network, so that multipath communication cannot be performed.
In order to solve this problem, routing control based on the open flow technique has been proposed (see non-patent document 1). For example, a computer system using the open flow technique is disclosed in JP2003-229913A (patent document 1). A network switch corresponding to this technique (hereinafter referred to as a programmable flow switch (PFS)) holds detailed information such as a protocol type and a port number in a flow table, and can control traffic in units of flows. It should be noted that a PFS is also referred to as an OpenFlow switch.
Fig. 1 is a diagram showing a configuration example of a computer system using the open flow technique. Referring to fig. 1, a programmable flow controller (PFC, also called an open flow controller) 100 sets flow entries of PFSs 200 and 300 in a single subnet (P-flow network) to perform flow control in the subnet.
Each of PFSs 200 and 300 refers to its flow table to perform actions (e.g., relaying and dropping of data packets) defined in the flow table entry and corresponding to header information of a received packet. Specifically, when a packet transferred between hosts 400 is received, each of PFSs 200 and 300 performs an action defined in a flow entry if header information of the received packet conforms to (matches with) the (rule of the) flow entry set in its own flow table. On the other hand, when the header information of the received packet does not coincide with (does not match with) the (rule of the) flow entry set in the flow table, each of the PFSs 200 and 300 recognizes the received packet as a first packet, notifies the PFC100 of the reception of the first packet, and transmits the header information of the packet to the PFC 100. The PFC100 sets a flow entry corresponding to the notified header information to the PFS that is the notification source of the first packet (flow + action).
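The match-then-act behavior described above can be sketched as follows. This is an illustrative model only; the class and method names (FlowEntry, Switch, packet_in) are assumptions, not identifiers from this patent or the OpenFlow specification.

```python
class FlowEntry:
    """One row of a flow table: a matching rule plus an action."""
    def __init__(self, rule, action):
        self.rule = rule      # dict of header fields a packet must carry
        self.action = action  # e.g. ("relay", port) or ("drop",)

class Switch:
    def __init__(self, controller):
        self.flow_table = []
        self.controller = controller

    def receive(self, packet):
        for entry in self.flow_table:
            # A packet conforms to (matches) a rule when every field of
            # the rule equals the corresponding header field of the packet.
            if all(packet.get(k) == v for k, v in entry.rule.items()):
                return entry.action
        # No matching entry: treat the packet as a "first packet" and
        # notify the controller with its header information.
        return self.controller.packet_in(self, packet)

class StubController:
    """Stands in for the PFC in this sketch."""
    def packet_in(self, switch, packet):
        return ("to_controller", packet)

sw = Switch(StubController())
sw.flow_table.append(FlowEntry({"dst_mac": "00:01"}, ("relay", 1)))
matched = sw.receive({"dst_mac": "00:01", "src_mac": "00:05"})
missed = sw.receive({"dst_mac": "00:09", "src_mac": "00:05"})
```

In the conventional scheme of fig. 1, the controller responds to the `packet_in` notification by installing a new entry on the notifying switch; the invention described below instead pre-installs entries so this fallback path is rarely taken.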
As described above, in the conventional open flow technique, after any one of PFSs 200 and 300 receives a packet transferred between hosts 400, PFC100 performs transfer control on the packet transmitted and received between hosts 400.
List of cited documents
Patent document 1: JP2003-229913A
Non-patent document 1: OpenFlow Switch Specification, Version 1.0.0 (Wire Protocol 0x01), December 31, 2009
Disclosure of Invention
The PFC in the conventional open flow technique sets a route of a packet transferred between a source terminal and a destination terminal and sets a flow entry for each switch on the route. Further, even if the destination is the same, a new route and new flow entries must be set each time a packet is generated from a different source terminal. Therefore, when the open flow technique is used, the resources (the number of flow entries) of the entire system may be consumed excessively.
The computer system of the present invention comprises: a controller; a plurality of switches each of which performs a relay operation defined in a flow entry set by the controller with respect to a packet conforming to the flow entry; and a plurality of nodes communicating through any one of the plurality of switches. The controller sets a destination address as a rule of the flow entry and sets a delivery process to a destination node as an action of the flow entry. Each of the plurality of switches transfers a packet including the destination address to the destination node based on the flow entry set to the switch, regardless of the source address of the received packet.
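The source-independence stated above can be illustrated with a minimal sketch; the MAC addresses and port names below are made up for the example.

```python
# Flow table keyed only on the destination MAC address (the rule).
flow_table = {
    "00:00:00:00:00:01": ("output", "port-to-VM1"),
}

def forward(packet):
    # The lookup ignores src_mac entirely; a miss (None) would mean
    # "no flow entry set for this destination".
    return flow_table.get(packet["dst_mac"])

# Packets from two different sources to the same destination
# take the same action:
p1 = {"src_mac": "00:00:00:00:00:05", "dst_mac": "00:00:00:00:00:01"}
p2 = {"src_mac": "00:00:00:00:00:07", "dst_mac": "00:00:00:00:00:01"}
assert forward(p1) == forward(p2) == ("output", "port-to-VM1")
```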
Further, it is desirable that the controller sets a flow entry to each of the plurality of switches before passing the packet between the plurality of nodes.
Further, it is desirable that the controller acquires a first MAC (media access control) address of a first node of the plurality of nodes in response to a first ARP (address resolution protocol) request from the first node, and sets the first MAC address to each of the plurality of switches as a rule of the flow entry.
Further, it is desirable for the controller to send an ARP reply with a MAC address of another node of the plurality of nodes as a transmission source to the first node as a reply to a first ARP request from the first node to the another node.
Further, the controller acquires a first MAC (media access control) address of a first node (VM1) of the plurality of nodes based on a first ARP (address resolution protocol) request from the first node, and sets the first MAC address to each of the plurality of switches as a rule of the flow entry. Further, it is desirable that the controller issues a second ARP request, and sets a second MAC address of a second node, acquired based on a response to the second ARP request, to each of the plurality of switches as a rule of the flow entry.
Further, the controller transmits an ARP reply that has the MAC address of the other node as a source address to the first node, as a reply to the first ARP request from the first node that is destined to the other node. Further, it is desirable that the controller sends an ARP reply to the other node for a third ARP request that is sent from the other node and destined to the first node.
Further, it is desirable that the plurality of switches include a plurality of first switches directly connected to the plurality of nodes. In this case, it is desirable that the controller sets the flow entry to any selected switch among the plurality of first switches without setting the flow entry to the remaining switches.
Further, it is desirable for the controller to set the flow table entry to each of the plurality of switches to perform ECMP (equal cost multi path) routing for the received packet.
The communication method of the present invention includes the steps of: setting, by a controller, a flow entry for each of a plurality of switches; performing, by each of the plurality of switches, a relay operation defined in the flow entry for a received packet conforming to the flow entry set by the controller; and communicating, by each of a plurality of nodes, through any of the plurality of switches. The setting of the flow entry includes: setting, by the controller, a destination address as a rule of the flow entry; and setting a delivery process to a destination node as an action of the flow entry. The communicating includes: transferring, by each of the plurality of switches, a received packet including the destination address to the destination node regardless of a transmission source address of the received packet.
Further, it is desirable to perform the setting of the flow table entry before passing packets between the plurality of nodes.
According to the present invention, resource consumption of the entire computer system using the open flow technique can be reduced.
Drawings
Other objects, effects and features of the above invention will be further clarified according to the description of exemplary embodiments with reference to the accompanying drawings. In the drawings:
fig. 1 is a diagram showing a configuration example of a computer system using the open flow technique;
fig. 2 is a diagram showing a configuration example of a computer system according to the present invention;
fig. 3A is a diagram showing an example of a flow setting method and a communication method in a computer system according to the present invention;
fig. 3B is a diagram showing an example of a flow setting method and a communication method in a computer system according to the present invention;
fig. 3C is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3D is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3E is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3F is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3G is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3H is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention;
fig. 3I is a diagram showing an example of a flow setting method and a communication method in a computer system according to the present invention;
fig. 3J is a diagram showing an example of a flow setting method and a communication method in the computer system according to the present invention; and
fig. 4 is a diagram showing a configuration of a logical network divided into a plurality of networks due to flow control according to the present invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same or similar components are assigned the same or similar reference numerals.
(configuration of computer System)
With reference to fig. 2, a configuration of the computer system according to the present invention will be described. Fig. 2 is a diagram showing a configuration example of the computer system according to the present invention. The computer system according to the present invention includes a programmable flow controller 10 (hereinafter referred to as PFC10), a plurality of programmable flow switches 20-1 to 20-3 and 30-1 to 30-3 (hereinafter referred to as PFSs 20-1 to 20-3 and 30-1 to 30-3), physical servers 40-1 to 40-5 (hereinafter referred to as SVs 40-1 to 40-5), and a memory 50, which are connected through a communication network. Meanwhile, when the PFSs 20-1 to 20-3 and 30-1 to 30-3 are described without being distinguished from each other, they are referred to as the PFS20 and the PFS30, respectively. Similarly, when the SVs 40-1 to 40-5 are described without being distinguished from each other, each of them is referred to as the SV40.
SV40 and memory 50 are computer units each having a CPU, a main storage unit, and an external storage device (which are not shown in the figure), and communicate with other SVs 40 by executing programs stored in the external storage device. Communication with SV40 is performed through PFSs 20 and 30. SV40 implements functions exemplified by a web server, file server, application server, client terminal, etc., in accordance with executed programs. For example, when SV40 functions as a web server, SV40 transfers an HTML document and image data in a storage unit (not shown) to another SV40 (client terminal) according to a request of the client terminal (not shown).
SV40 includes a virtual machine VM implemented by logically and physically dividing the storage area of a CPU (not shown) and a storage unit (not shown). In the example shown in FIG. 2, virtual machine VM1 and VM2 are implemented in SV40-1, virtual machine VM3 and VM4 are implemented in SV40-2, virtual machine VM5 and VM6 are implemented in SV40-3, and virtual machine VM7 and VM8 are implemented in SV 40-4. Virtual machines VM1 through VM8 may be implemented by a Guest Operating System (GOS) emulated on a Host Operating System (HOS) of each server or software operating on a GOS.
The virtual machine VM transmits and receives data to and from other devices (for example, a computer unit on an external network and a virtual machine VM in another physical server 40) through a virtual switch (not shown) managed by a virtual machine monitor or a physical NIC (not shown). In the present exemplary embodiment, for example, packet communication is performed in accordance with TCP/IP (transmission control protocol/internet protocol).
Further, a virtual switch (not shown) according to the present invention can be controlled based on the open flow technique to be described later, and can perform a conventional switching operation (layer 2). Further, each of the virtual machines VM1 to VM8 and the outside of the physical server are connected to each other by a bridge connection. That is, direct communication can be performed from the outside based on the MAC addresses and IP addresses of the virtual machines VM1 to VM 8.
The PFC10 controls communication in the system based on the open flow technique. In the open flow technique, a controller (here, the PFC10) sets multi-layer routing data to the PFSs 20 and 30 in units of flows according to a routing policy (flow entry: rule and action) to perform route control and node control. Thus, the route control function is separated from the routers and switches, and optimal routing and traffic management can be achieved through centralized control by the controller. Unlike conventional routers and switches, the PFSs 20 and 30 to which the open flow technique is applied do not handle communication in units of hops, but handle communication end-to-end (END2END) in units of flows.
The PFC10 is implemented by a computer having a CPU and a memory unit (not shown). The flow control process in the PFC10 is realized by executing a program stored in a storage unit (not shown), and controls the operations of the PFSs 20 and 30 (for example, a relay operation on data packets) by setting a flow entry (flow and action) to each of the PFSs 20 and 30.
Further, prior to packet transfer between terminals (e.g., between virtual machines VM), MAC addresses of the host terminal (SV40 and memory 50) and the virtual machines VM are set to the PFC10 according to the present invention. For example, the PFC10 acquires MAC addresses of the host terminal and the virtual machine VM in response to ARP (address resolution protocol) in advance.
The PFC10 generates a flow entry using the acquired MAC address for a rule and sets the flow entry to all PFSs 20 and 30 in the network. For example, the PFC10 generates, for each PFS, a flow entry for specifying a transfer destination unit of a packet destined to the MAC address of the virtual machine VM1 and transferring the packet; and the PFC10 sets the flow table entry for all switches PFS20 and 30 in the network. In the present invention, since the flow is controlled based on only the destination MAC address, the delivery destination of the packet corresponding to the rule (destination MAC address) set to the flow entry is determined irrespective of the transmission source. For this reason, flow control can be performed without knowing the transmission source of the packet. That is, according to the present invention, since a multipath for packet transfer is formed by setting an optimal route for a destination terminal, an optimal multipath operation can be achieved. Further, since the PFS can be set with the flow table entry without waiting for the reception of the first packet unlike the conventional technique, the network throughput can be improved. Further, in the present invention, since the flow table entry is generated and set before the packet is transferred between the terminals (i.e., before the system operation is started), the processing load for the flow control during the operation is reduced compared to the conventional art.
Further, the PFC10 generates a flow entry using the acquired MAC address for the rule, and sets the flow entry to a PFS arbitrarily selected from among the PFSs 20 and 30 in the network, and does not set the flow entry to the remaining PFSs. For example, a flow entry using the MAC address of the virtual machine VM1 as a rule is set to a selected part of the PFS30 directly connected to the host terminal (SV40 and memory 50). In this case, when PFS30, for which no flow table entry is set, receives a packet destined for virtual machine VM1, the packet is discarded and not passed anywhere. In this way, since the delivery destinations of the packet can be logically separated, one physical network can be divided into a plurality of logical networks and operated. It should be noted that a similar effect can also be achieved when a flow entry defined to discard a packet destined to a specific MAC address is set to a specific PFS.
Each of the PFSs 20 and 30 includes a flow table (not shown) to which flow entries are set, and performs processing (e.g., relay processing and discarding) on a received packet according to the set flow entries. The PFS30 is a first-stage switch directly connected to the host terminals (SV40 and memory 50); for example, a top-of-rack (TOR) switch is preferably used for the PFS30. The PFS20 is an L2 or L3 switch at the second or a subsequent stage from the host terminals; for example, a core switch (CORE) is preferably used for the PFS20.
Each of PFSs 20 and 30 refers to its own flow table (not shown) and performs actions (e.g., relaying and dropping of data packets) defined in the flow table entry and corresponding to header data (in particular, destination MAC address) of the received packet. Specifically, when header data of a received packet matches (corresponds to) a flow defined by a flow entry set in its own flow table, each of PFSs 20 and 30 performs an action defined in the flow entry. Further, when header data of a received packet does not match (does not correspond to) a flow defined by a flow entry set in the flow table, each of PFSs 20 and 30 does not perform any processing for the packet. In this case, the PFSs 20 and 30 may notify the PFC10 of the reception of the packet and may discard the packet.
In the flow table entry, an arbitrary combination of addresses and identifiers of layers 1 to 4 in the OSI (open systems interconnection) reference model is defined as data (hereinafter referred to as a rule) for specifying a flow (data packet), and the addresses and identifiers are included in header data of the data packet of TCP/IP, for example. For example, any combination of a physical port of layer 1, a MAC address of layer 2, an IP address of layer 3, a physical port of layer 4, and a VLAN tag is set to the flow entry as a rule. However, in the present invention, the MAC address and the IP address of the transmission source are not set to the flow table entry, and the destination MAC address is always set to the flow table entry. Here, a predetermined range of identifiers (e.g., port numbers, addresses, etc.) may be set to the flow entry. For example, the MAC addresses of the virtual machines VM1 and VM2 may be set as the rule of the flow table entry as the destination MAC address.
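The constraint stated in this paragraph — any combination of layer 1 to 4 identifiers may appear in a rule, except that source addresses never do and the destination MAC address always does — can be expressed as a small check. The field names below are assumptions for illustration.

```python
def validate_rule(rule):
    """Reject rules that violate the constraints described above."""
    if "src_mac" in rule or "src_ip" in rule:
        raise ValueError("source MAC/IP must not be set in a rule")
    if "dst_mac" not in rule:
        raise ValueError("the destination MAC address is always required")
    return True

# A destination MAC combined with, e.g., a VLAN tag is acceptable:
assert validate_rule({"dst_mac": "00:00:00:00:00:01", "vlan_tag": 10})
```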
For example, the actions of the flow table entry define a method for processing data packets of TCP/IP. For example, information showing whether or not the received data packet is relayed is set, and a destination of the data packet is also set in the case of relaying the data packet. Further, in this action, data indicating duplication or discarding of the data packet may be set.
(flow setting method and communication method in computer System)
Next, details of a flow setting method and a communication method in the computer system according to the present invention will be described with reference to fig. 3A to 3J. The flow setting for the virtual machine VM1 and the flow setting for the virtual machine VM5 will be described below as examples. Further, when virtual machines VM1 through VM8, physical servers 40-1 through 40-5, and memory 50 are not distinguished with respect to each other, they are collectively referred to as nodes.
Upon completion of (or a change of) the system configuration, the PFC10 learns the system topology in a similar way to a conventional flow controller. The topology data learned at this time includes data on the connection states of the PFSs 20 and 30, the nodes (virtual machines VM1 to VM8, physical servers 40-1 to 40-5, and memory 50), an external network (e.g., the Internet) not shown, and the like. Specifically, as the topology data, the number of device ports and port destination data are associated with device identifiers for specifying the PFSs 20 and 30 and the nodes, and are recorded in the storage unit of the PFC10. The port destination data includes a connection type (switch/node/external network) for specifying the opposite side of the connection, and data for specifying the connection destination (a switch ID in the case of a switch, a MAC address in the case of a node, and an external network ID in the case of an external network).
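The topology data described above might be recorded roughly as follows; the structure and field names are assumptions for illustration, not the patent's data format.

```python
# Per-device record: number of ports, and for each port the connection
# type of the opposite side plus data identifying the destination.
topology = {
    "PFS30-1": {
        "num_ports": 3,
        "ports": {
            1: {"type": "node",     "dest": "00:00:00:00:00:01"},  # node MAC
            2: {"type": "switch",   "dest": "PFS20-1"},            # switch ID
            3: {"type": "external", "dest": "ext-net-0"},          # network ID
        },
    },
}

def neighbors(device, kind):
    """List connection destinations of a given type for one device."""
    return [p["dest"] for p in topology[device]["ports"].values()
            if p["type"] == kind]

assert neighbors("PFS30-1", "switch") == ["PFS20-1"]
```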
Referring to fig. 3A, PFC10 traps ARP requests (ARP. req) from a node to obtain (learn) the location (MAC address) of the requesting node. For example, an ARP request addressed to virtual machine VM5 is transmitted from virtual machine VM1 to PFC 10. The PFC10 extracts the MAC address of the virtual machine VM1 as the source node from the received ARP request. PFC10 defines rules for setting the MAC address as a destination to generate a flow table entry. In this case, flow entries for all PFSs 20 and 30 in the system are generated. Note that the flow entry of the MAC address may be set in advance into the storage unit of the PFC 10.
Referring to fig. 3B, the PFC10, which has learned the location (MAC address) of a node, registers a route to the node. For example, the PFC10 sets flow entries defining the transfer of a packet destined to the MAC address of the virtual machine VM1 and the transfer destination device to all PFSs 20 and 30. In this case, it is preferable to set a flow entry to the PFS30-1 to define the physical port connected to the virtual machine VM1 as the output destination, and to set flow entries to the PFSs 30 of the first stage other than the PFS30-1 to achieve load balancing over the PFSs 20 of the second and subsequent stages. For example, it is preferable to set flow entries to the PFS30 to perform ECMP (equal cost multi path) routing.
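One way a first-stage switch could pick among equal-cost second-stage uplinks is a per-flow hash, sketched below. The choice of hash fields is an assumption; real ECMP implementations vary in what they hash on.

```python
import zlib

def ecmp_uplink(packet, uplinks):
    """Deterministically map a flow to one of several equal-cost uplinks.
    Hashing per flow keeps all packets of one flow on one path,
    which avoids packet reordering."""
    key = (packet["src_mac"] + packet["dst_mac"]).encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

uplinks = ["to-PFS20-1", "to-PFS20-2"]
pkt = {"src_mac": "00:00:00:00:00:05", "dst_mac": "00:00:00:00:00:01"}
# The same flow always hashes to the same uplink:
assert ecmp_uplink(pkt, uplinks) == ecmp_uplink(pkt, uplinks)
assert ecmp_uplink(pkt, uplinks) in uplinks
```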
During normal learning of layer 2 (L2 learning), there are the following cases: a case where a LOOP (LOOP) is generated due to FLOODING (FLOODING), and a case where desired learning cannot be performed due to load balancing. However, in the present invention, the open flow technique is employed, and thus these problems do not occur.
Referring to fig. 3C, in acquisition (learning) of a MAC address, PFC10, which has a flow table entry set, transmits an ARP request for a requested destination from one node to all nodes except for the node. For example, the PFC10 transmits an ARP request destined to the virtual machine VM5 that is the destination of the ARP request shown in fig. 3A to all nodes (virtual machine VM2 to VM8, SV40-5, and memory 50) except for the requesting virtual machine VM 1.
Referring to fig. 3D, the PFC10 acquires (learns) the location (MAC address) of the destination node based on the reply to the ARP request (ARP reply) shown in fig. 3C. In this example, an ARP reply is sent from the virtual machine VM5, and the PFC10 obtains the location (MAC address) of the virtual machine VM5 by trapping the ARP reply.
Referring to fig. 3E, the PFC10, which has acquired (learned) the location (MAC address) of the node, registers a route to the node. Here, the PFC10 sets flow entries defining the transfer of a packet destined to the MAC address of the virtual machine VM5 and the transfer destination device to all PFSs 20 and 30. In this case, in the same manner as above, it is preferable to set the flow entries to the PFSs 30 of the first stage from the host terminals so as to achieve load balancing over the PFSs 20 of the second and subsequent stages.
Referring to fig. 3F, PFC10 responds to ARP requests from the nodes shown in fig. 3A through a proxy. Here, the PFC10 uses the MAC address of the virtual machine VM5 as a transmission source and issues an ARP reply whose destination is the virtual machine VM 1. Virtual machine VM1 receives the ARP reply to the ARP request sent by itself and obtains the MAC address of the requested virtual machine VM 5.
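The proxy reply of fig. 3F can be sketched as follows; the dictionary-based packet representation, field names, and addresses are illustrative assumptions only.

```python
def proxy_arp_reply(request, learned_macs):
    """Answer an ARP request on behalf of the requested node.
    Note the reply's source MAC is the *requested* node's MAC
    (e.g. VM5's), not the controller's, so the requester learns
    the correct address."""
    target_ip = request["target_ip"]
    if target_ip not in learned_macs:
        return None  # location not learned yet; figs. 3C/3D run first
    return {
        "op": "reply",
        "src_mac": learned_macs[target_ip],  # requested node's MAC
        "dst_mac": request["src_mac"],       # back to the requester
        "src_ip": target_ip,
        "dst_ip": request["src_ip"],
    }

learned = {"192.168.0.5": "00:00:00:00:00:05"}   # VM5, already learned
req = {"op": "request", "src_mac": "00:00:00:00:00:01",
       "src_ip": "192.168.0.1", "target_ip": "192.168.0.5"}
reply = proxy_arp_reply(req, learned)
```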
In the above operation, the processing contents (flow entry) for the packets destined to the destination node and the request source node of the ARP request, respectively, are set to all PFSs 20 and 30 in the system. In the example shown in fig. 3G, by the above-described operation, flow entries for packets destined to the virtual machines VM1 and VM5, respectively, are set to all PFSs 20 and 30. In this way, communication destined to the virtual machine VM1 and communication destined to the virtual machine VM5 are normally performed. In this case, a packet destined to each destination is sent through a route conforming to a flow entry defined by a destination MAC address regardless of a transmission source.
Further, in order to configure a single tree structure in accordance with the conventional ethernet (registered trademark) in the spanning tree protocol, a physical link which is not used is generated. For this reason, a plurality of routes cannot be set between specific nodes in the ethernet (registered trademark). However, in the present invention, a packet transfer destination is set to each PFS according to the destination, so that multipath is formed to achieve load distribution. For example, in the case of the above-described example, multipath is formed and load distribution is realized according to the flow table entry of each communication among the communication to the virtual machine VM1 and the communication to the virtual machine VM 5.
In the above example, load balancing by ECMP defined in the flow table entry is employed. However, the present invention is not limited thereto, and link aggregation or load distribution per flow entry may also be employed.
On the other hand, in order for bidirectional communication between the request source node and the destination node to be performed, the node that was the destination must also acquire (learn) the location (MAC address) of the request source node from the PFC10. Specifically, referring to fig. 3H, an ARP request from the virtual machine VM5 destined for the virtual machine VM1 is sent to the PFC10. Referring to fig. 3I, the PFC10, which holds the location (MAC address) of the virtual machine VM1, sends an ARP reply that has the MAC address of the virtual machine VM1 as a transmission source to the virtual machine VM5. The virtual machine VM5 traps it to obtain the location (MAC address) of the virtual machine VM1. Thus, as shown in fig. 3J, the virtual machine VM5 can send data packets destined for the virtual machine VM1. It should be noted that since the flow entry destined to the virtual machine VM1 and the flow entry destined to the virtual machine VM5 are independently set, the communication route from the virtual machine VM1 to the virtual machine VM5 and the communication route from the virtual machine VM5 to the virtual machine VM1 are not always the same.
Through the above operation, both the virtual machines VM1 and VM5 acquire (learn) the mutual positions (MAC addresses), and the transfer destination of the packet destined to each of the virtual machines VM1 and VM5 is set to all PFSs 20 and 30. In this way, bidirectional communication between virtual machine VM1 and virtual machine VM5 may be achieved.
In the present invention, since the flow entry is set based on the destination MAC address, the position of the transmission source node is not always required in the setting of the flow entry. To this end, the flow entry may be set before communication between the nodes is started. Further, it is not necessary to set a flow entry for a communication route between nodes as in the conventional technique, but it is sufficient to set a flow entry of a destination MAC address to each PFS. Therefore, resource consumption in the entire computer system can be reduced.
Next, an application example of the computer system according to the present invention will be described with reference to fig. 4. In the above example, all PFSs 20 and 30 are set with flow entries for packets destined to a certain node. However, the present invention is not limited in this regard and the node that sets the flow table entry may be limited to a portion of PFS30 that is directly connected to the node.
The computer system shown in fig. 4 includes upper layer switches (PFSs 20-1 and 20-2) connected to network 500, PFSs 30-1, 30-2, and 30-3 directly connected to a host terminal (not shown) such as SV40, and nodes S and a. Here, node A is connected to the system through PFS30-2 and node S is connected to the system through PFS 30-3.
In this example, the PFC10 (not shown) sets flow entries to the PFSs 20-1, 20-2, and 30-3 to control flows destined for node S, and to the PFSs 20-1, 20-2, 30-1, and 30-2 to control flows destined for node A. In this case, a packet destined for node S reaches node S through a communication route passing through any of the PFSs 20-1, 20-2, and 30-3, and a packet destined for node A reaches node A through a communication route passing through any of the PFSs 20-1, 20-2, 30-1, and 30-2. That is, node S is accommodated in a logical network configured by the PFSs 20-1, 20-2, and 30-3, and node A is accommodated in a logical network configured by the PFSs 20-1, 20-2, 30-1, and 30-2.
As mentioned above, the computer system shown in FIG. 4 is configured as a physical network. However, when the flow table entry is selectively set, the computer system is divided into two logical networks. Thus, one physical topology can be treated as multiple VLANs.
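The partitioning effect can be made concrete with a toy check: a node is reachable through a switch only when that switch holds a flow entry for the node's MAC address. All identifiers below are illustrative.

```python
# Per-switch flow tables, keyed by destination MAC (toy data).
flow_tables = {
    "PFS30-3": {"MAC-S": ("output", 1)},   # node S's logical network
    "PFS30-1": {"MAC-A": ("output", 2)},   # node A's logical network
    "PFS30-2": {"MAC-A": ("output", 1)},
}

def reachable(node_mac, switch):
    """Without a matching entry, the switch discards packets for node_mac."""
    return node_mac in flow_tables.get(switch, {})

assert reachable("MAC-S", "PFS30-3")
assert not reachable("MAC-S", "PFS30-1")   # logically separated
assert reachable("MAC-A", "PFS30-1") and reachable("MAC-A", "PFS30-2")
```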
As described above, the exemplary embodiments of the present invention are described in detail. However, the specific configuration is not limited to the above-described exemplary embodiments. The present invention includes various modifications within the scope of the present invention. In fig. 2, a system having a PFS group in a two-stage configuration is shown as an example. However, the present invention is not limited thereto, and the system may have PFS groups in a configuration of a plurality of levels. Further, as in the conventional art, an external network may be connected with the PFS20 through a layer 3(L3) switch.
The present application is based on Japanese Patent Application No. JP2010-202468, the disclosure of which is incorporated herein by reference.

Claims (10)

1. A computer system, comprising:
a controller;
a plurality of switches each of which performs a relay operation defined in a flow entry set by the controller with respect to a packet conforming to the flow entry; and
a plurality of nodes communicating through any one of the plurality of switches,
wherein the controller sets a destination address as a rule of the flow entry and sets a delivery process to a destination node as an action of the flow entry, and
wherein each of the plurality of switches transfers a packet containing the destination address to the destination node, regardless of a transmission source address of the packet, based on the flow entry set to the switch.
2. The computer system according to claim 1, wherein the controller sets the flow entry for each of the plurality of switches before packets are transferred between the plurality of nodes.
3. The computer system according to claim 1 or 2, wherein the controller obtains a first MAC (media access control) address of a first node of the plurality of nodes in response to a first ARP (address resolution protocol) request from the first node, and sets the first MAC address to each of the plurality of switches as a rule of a flow entry.
4. The computer system according to claim 3, wherein, as a reply to the first ARP request sent from the first node and destined for another node of the plurality of nodes, the controller sends an ARP reply to the first node with a MAC address of the other node as a transmission source.
5. The computer system according to claim 2 or 3, wherein the controller issues a second ARP request, and sets a second MAC address of a second node, acquired based on a response to the second ARP request, to each of the plurality of switches as a rule of a flow entry.
6. The computer system according to claim 4, wherein the controller sends an ARP reply to the other node in response to a third ARP request sent from the other node and destined for the first node.
7. The computer system according to any one of claims 1 to 6, wherein the plurality of switches comprise a plurality of first switches directly connected to the plurality of nodes, and
wherein the controller sets the flow entry to a switch arbitrarily selected from the plurality of first switches without setting the flow entry to the remaining first switches.
8. The computer system according to any one of claims 1 to 7, wherein the controller sets the flow entry for each of the plurality of switches to perform ECMP (Equal Cost Multi-Path) routing on packets.
9. A method of communication, comprising:
setting, by a controller, a flow entry for each of a plurality of switches;
performing, by each of the plurality of switches, a relay operation defined in the flow entry for a packet conforming to the flow entry; and
communicating between a source node and a destination node of a plurality of nodes through the plurality of switches,
wherein the setting the flow entry comprises:
setting, by the controller, a destination address as a rule of the flow entry; and
setting a delivery process to the destination node as an action of the flow entry, and
wherein the communicating comprises:
transferring, by each of the plurality of switches, a packet containing the destination address to the destination node regardless of a transmission source address of the packet.
10. The communication method according to claim 9, wherein the setting of the flow entry is performed before packets are transferred between the plurality of nodes.
HK13107728.9A 2010-09-09 2011-09-05 Computer system and communication method in computer system HK1180494A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010-202468 2010-09-09

Publications (1)

Publication Number Publication Date
HK1180494A true HK1180494A (en) 2013-10-18
